Splinter
Splinter is an open source tool for testing web applications using Python. It lets you automate browser actions such as visiting URLs and interacting with their elements.
Sample code
from splinter import Browser

with Browser() as browser:
    # Visit URL.
    browser.visit("")
    # Find and fill out the search form.
    browser.find_by_name('q').fill('splinter - python acceptance testing for web applications')
    # Find and click the 'search' button.
    browser.find_by_name('btnK').click()
    # Check for result on the page.
    if browser.is_text_present('splinter.readthedocs.io'):
        print("Yes, the official website was found!")
    else:
        print("No, it wasn't found... We need to improve our SEO techniques")
Note: if you don't provide any driver to the Browser function, Firefox will be used.
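To pick a different driver explicitly, pass its name to Browser. This is a minimal sketch (it assumes the matching WebDriver executable, e.g. chromedriver, is installed on your system):

from splinter import Browser

# Use Chrome instead of the default Firefox driver.
with Browser("chrome") as browser:
    browser.visit("https://splinter.readthedocs.io")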
Outlook 12 (posts by Will Kennedy, blogs.msdn.com/willkennedy)

Outlook Performance Update

The Short Story

Last Friday, we released an update to Microsoft Office Outlook 2007 that will help to address some performance issues that are discussed in Knowledge Base Article 932086. You can find a description of the update and a link to the appropriate download page for your version of the update in Knowledge Base Article 933493.

(Users of the Business Contact Manager features of Outlook should see the note at the end of this post about a separate update that should be installed first.)

The Long Story

We've been investigating some reported performance problems with Microsoft Office Outlook 2007 for some time. Our investigations revealed that these specific performance problems affected users who had large mailbox files (.pst and .ost files) stored on their computers.

As part of our continued effort to improve our customers' product experiences, on Friday we released an update to Microsoft Office Outlook 2007 that addresses these specific performance problems. You can find a link to the update below (or above, for that matter).

The update makes two changes:

- We improved our handling of large table data structures inside of the Outlook storage subsystem. "Large tables" are created whenever a user has tens of thousands of items in a single folder, and they are also created by our search index infrastructure, which is new for Outlook 2007. It's important to note that this is NOT a problem with the search indexing mechanisms for Windows XP or Windows Vista. Rather, it's a change to the way Outlook stores data that lets Outlook use the index efficiently.
- We made some changes to the way we handle data replication information in PST and OST files.

It's important to note that these changes do NOT change the format of either .pst or .ost files. There is no need to rebuild these files, and files created with this updated version of Outlook are backwards-compatible with previous versions of Outlook.

Customers who install this update may notice generally improved performance and responsiveness when Outlook is reading or writing data. Examples of these improvements include:

- An improvement in speed when opening and reading messages.
- A decrease in the amount of time it takes to copy or move messages from one folder to another.
- A decrease in the amount of time it takes to delete messages.
- An increase in the download speed when downloading messages from the Exchange Server.

As I mentioned above, we appreciate hearing about the experiences of our customers and we certainly hope that this update improves your experience with Outlook 2007.

Here are some links to the important information:

- A Knowledge Base article that describes the performance problems (KB 932086).
- A Knowledge Base article that describes the April 13th update (KB 933493). You can find a link to the download for your particular version in this article. It's important to note that users of Business Contact Manager should download an additional update first, described in a separate Knowledge Base article.
- The download page for the update.

I do hope that this update improves your experience with Outlook 2007!

Outlook Support

Hey, I'm back. I'm sure everyone thought I had fallen off the face of the earth. I guess I'm not so good at blogging. It's one of those things that require regular attention, and that's tough given the demands of my day job.

In any case, we shipped! We signed off on the "Release to Manufacturing" version of Office 2007 in November, and it became available broadly in the United States and many other markets on January 30th, 2007. By now it's available around the world. I hope everyone has run out to his or her local software store and bought a copy.

If you want to learn everything there is to know about Office 2007, just click over to the Office web site. From there you can learn about new features, download a trial version, and even find ways to buy it online from a variety of sources for your country.

Getting Support

Over the past few months a number of people used this blog to send me private questions asking for support for their specific scenarios. We've been able to help out a small number of these folks directly, but this really isn't the best way to get support for the product, since it doesn't scale well for our customers around the world.

If you're having trouble with Outlook 2007 or with any previous version of Outlook, you should go to the Microsoft support site and pick the support options which apply to your situation. That site highlights the top issues and it also provides a path to support resources for lots of different Microsoft products.

You'll hear from me again soon. Very soon.

Search in Outlook 12 - "It changes the way you work"

In Outlook 12, we've made this much easier. It's part of the "Manage your time & information" theme. We've overhauled our Search feature to make it fast, flexible, and completely integrated.

First, a digression.

There are really two different scenarios that people call "Search."

Scenario #1.

Scenario #2.

A careful reader will notice that there can indeed be some overlap between these two scenarios. But the important point here is that the scenarios pursue different ends, and it's reasonable to think that you should build different tools to solve each scenario.

In Outlook 12, we've built new tools that can be used for both the Search and the Find scenarios, and we've updated the old tools we have to make them much faster and more useful.

The Search improvements in Outlook 12 span all of Outlook's "modules" (Calendar, Contacts, Mail, etc.). My discussion here will focus on the Mail module, since that's the one that most people live in most, but the Search tools work everywhere.

There are two qualities I want to discuss briefly about the Outlook 12 Search experience. It's fast, and it's integrated.

Fast

Integrated

Enough already. What does it look like?

Here's a single screen shot that shows a lot of the Search experience in Outlook 12. Notice the filtered list shown in the Outlook list view. Notice the "hit highlighting" that shows where the Search terms were found. Notice that it just looks like Outlook!

If you need to do a more precise Search, you can drop down the expanded "Search Pane" to search in particular fields.

Typing in this expanded Search Pane will also train you to Search more efficiently. For example, if you type "Michael" in the "From" field, then Outlook automatically puts "from: Michael" in the Search box. Soon, customers figure out that they can type things like "from: Michael subject: XML" to see all items from anyone named Michael with the string "XML" in the subject. This is super powerful and it feels really natural.

There is also a blog that talks about Search, among other things. He'll discuss some of the finer points of Outlook 12 Search.

Let's Start Again: Account Configuration

This seems like a good place to start, since this is how many of our customers first become acquainted with Outlook.

One of the themes for Outlook 12 is "Connect across boundaries." For many customers, the first boundary they encounter is between them and their email account.

Configuring Outlook (and other email clients) is just too complicated for many people. (Keep in mind, you digerati who are reading this blog all grok how email account setup works.) (You probably even know where the word "grok" came from.) Account setup is a huge support challenge for Microsoft. It also costs ISPs a ton of money and time to help their customers get connected.

Let's consider two examples.

Example #1. I checked out the online instructions for my broadband provider at home. They have instructions for how to set up Outlook Express (but not Outlook) on their web site. One wacky thing is that you have to log in to their web site to find the instructions. Hmmm. That's a blocker right there.

The instructions have 7 different steps and 6 screenshots. 7! 6! Not too simple.

Example #2. Here's the POP account setup dialog from Outlook 2003 (screenshot shown smaller than actual size).

It's bad form to ridicule the current version of my product in this blog, but let's see how this works. Put yourself in the place of someone who doesn't do computers for a living:

  Your Name: OK, I know what that is.
  E-mail address: My ISP gave me that somehow.
  User Name: Wait, I thought I just typed in my name.
  Password: My ISP gave me this, too.
  Incoming mail server: Uhhhh…
  Outgoing mail server: Uhhhh…
  SPA: What?
  More Settings: I hope I don't have to go in there.

Notice the "Test Account Settings…" button. We put that in Outlook because it's so hard for regular users to tell if they got it right.

Our support folks see a variety of problems when people try to set up their accounts. They swap the POP3 and SMTP servers. They don't know the difference between e-mail address and user name. They don't know they need to turn on SPA. The list goes on and on.

So, what have we done to make this simpler?

In Outlook 12, we've added a feature that will make this easy for the majority of our customers. We're calling it Auto Account Setup. It comes in a couple of parts:

Part 1: Exchange Accounts. For customers who use Microsoft Exchange "12," Outlook will automatically detect your account information from the Active Directory and an Exchange 12 web service during account setup. This information tells Outlook your Exchange server name, your email address, any configuration options your admin has set for you, including RPC over HTTP settings, etc. Outlook can also detect basic account information from older versions of Exchange, as long as the customer's computer is in the same domain as the Exchange server.

This makes Exchange configuration really simple for most customers. No more server names. No more arcane settings.

Part 2: POP & IMAP Accounts. While the account configuration details are often confusing for regular users, they're pretty predictable for experts. For example, if your email address (given to you by your ISP) is john_doe@isp.com, then there's a good chance that your POP3 server is something like mail.isp.com and your SMTP server is probably something like smtp.isp.com.

With Outlook 12, the customer needs to just type in his or her name, email address, and password. Outlook then tries several predictable combinations of server addresses, SPA, and server ports (995, 993, and 587 for secure connections; 110, 143, and 25 for unsecure connections) until it finds a configuration that works. It tries the combinations in a prioritized order, so for most ISPs the correct combination is often detected very quickly.

But what if an ISP uses a non-standard (or hard to predict) configuration? For example, some ISPs use different ports, even for POP3 or IMAP accounts, for largely historical reasons.

These ISPs can deploy an XML file in a predictable location on their server (e.g. isp.com/autodiscover/autodiscover.xml, among other places) that includes information about the correct configuration settings for POP3 and IMAP accounts on that domain. Outlook downloads the XML file and uses it to automatically configure the customer's account. We figure out where to look based on the email address.

Before Outlook 12 ships, the format for the XML configuration file will be available for ISPs everywhere.

Hopefully ISPs will be able to save tons in support costs by deploying these XML files or by using predictable configuration settings. Our goal is to make sure that we can autoconfigure for the largest ISPs around the world by the time Outlook 12 is released.

Hopefully, our customers will be able to get up and running a lot sooner than before. They're just getting started with Outlook 12, and they've already connected across a boundary.

More later.

Let's Get Started

It's time to blog about Outlook 12! This space will be dedicated to sharing news of the upcoming release of Outlook, part of the Office 12 suite. I'll talk about what we did, what we didn't do, how it fits with the rest of Office, and some of the process we're using to build Outlook.

First, let me introduce myself. I started at Microsoft in 1991 as a programmer on Microsoft Word. I was the development manager for Outlook 2002 and Outlook 2003, and I'm currently the General Manager of the Outlook team. By some odd quirk of fate, I'm also the acting lead of the Outlook design team, which handles the feature design for the product. (These feature designers are called "program managers" inside of Microsoft.)

But enough about me. Outlook is the star here.

I hear from customers (and my friends, relatives, co-workers…) all the time about how they spend tons of time in Outlook. Clearly it's a widely-used app. It's a huge thrill to work on it every day, but it comes with a bunch of responsibility, too. Our job is to be super careful to deliver features that will really solve customer problems.

In Outlook 12, I hope we've done just that. There are three major themes to the work we've done in Outlook 12:

- Manage your time & information
- Connect across boundaries
- Remain safe & in control

Over the next few weeks I'll discuss each of these areas and the features we've done to support each one. I hope that by the time Outlook 12 ships I'll get a chance to discuss work we've done with the Outlook Calendar, RSS subscriptions, Tasks, Search, SharePoint synchronization, the new user interface, attachments, Exchange Server, account configuration, the offline experience, IMAP improvements, sharing, electronic business cards, etc. It's a long list.

By the way, I'm going to try really hard NOT to use the word "user" when I refer to people who use Outlook. (I remember hearing 20 years ago that only two industries have "users:" drugs & computers.) I'll try to use the term "customer" since these are indeed the people who buy our products. Please forgive me if I slip up and use the u-word.

With my next post, I'll start talking about some of the specific problems we're trying to solve with Outlook 12.
How to diagnose when a URDF problem stops the gazebo_ros_control from receiving
I have a URDF ackermann vehicle that works just fine, until I put the LiDAR on top of it. The LiDAR works. I can visualize the point cloud in RVIZ, but Gazebo stops responding to the messages I publish to the joint controllers. I echo the joint topic and see the messages going in, but the wheels don't move.
UPDATE:
That is the problem. [WARN] [1571345774.639031, 0.000000]: Controller Spawner couldn't find the expected controller_manager ROS interface.
How can I diagnose this odd issue? Here is my xacro file. The trouble maker is the last bit, where I define the sensor link for the Velodyne laser. This same bit works in other robots... Any ideas? Thanks.
<?xml version="1.0"?>
<robot name="gem" xmlns:

  <!-- Vehicle Dimensions -->
  <xacro:property
  <xacro:property
  <xacro:property<!--.292 -->
  <xacro:property

  <!-- Macros -->
  <xacro:macro
    <link name="${name}">
      <collision>
        <origin xyz="0 0 0" rpy="0 1.5708 1.5708" />
        <geometry>
          <cylinder length="${width}" radius="${radius}"/>
        </geometry>
      </collision>
      <visual>
        <origin xyz="0 0 0" rpy="0 1.5708 1.5708" />
        <geometry>
          <cylinder length="${width}" radius="${radius}"/>
        </geometry>
        <material name="black"/>
      </visual>
      <inertial>
        <origin xyz="0 0 0" rpy="0 1.5708 1.5708" />
        <mass value="0.2"/>
        <!--<cylinder_inertia m="0.2" r="0.3" h="0.1"/> -->
        <cylinder_inirtia m="${mass}" r="${radius}" h="${width}" />
        <!--<inertia ixx="0.23" ixy="0" ixz="0" iyy="0.23" iyz="0" izz="0.4"/>-->
      </inertial>
    </link>
    <gazebo reference="${name}">
      <mu1 value="2.0"/>
      <mu2 value="2.0"/>
      <kp value="10000000.0" />
      <kd value="1.0" />
      <fdir1 value="0 1 0"/>
      <material>${material}</material>
    </gazebo>
  </xacro:macro>

  <xacro:macro
    <xacro:wheel
    <xacro:wheel
    <joint name="${lr}_hinge" type="revolute">
      <parent link="chassis"/>
      <child link="${lr}_assembly"/>
      <origin xyz="${wheel_base_length/2} ${lr_reflect*(wheel_base_width/2)} ${wheel_diameter/2}" rpy="0 0 0" />
      <axis xyz="0 0 1" rpy="0 0 0" />
      <limit effort="100" velocity="1" lower="-1" upper="1"/>
      <dynamics damping="0.0" friction="0.0"/>
    </joint>
    <joint name="${lr}_rotate" type="continuous">
      <parent link="${lr}_assembly"/>
      <child link="${lr}"/>
      <origin xyz="0.0 ${lr_reflect*(wheel_thickness/2)} 0.0" rpy="0 0 0" />
      <axis xyz="0 1 0" rpy="0 0 0" />
      <limit effort="100" velocity="50"/>
      <dynamics damping ...
I don't know for sure what is happening, but the velodyne_description files also load Gazebo plugins in their .xacros. It could be that what you have in your model is conflicting somehow with what the VLP xacro is doing.
You may have already guessed that, but to test, you could disable the Gazebo plugin in the VLP .xacro and see whether that changes anything.
I do not know if this matters but the definition of the transmission since ROS Indigo is:
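A sketch of that post-Indigo form (the joint and actuator names below are placeholders; adapt them to your own joints). The key point is that, since Indigo, a hardwareInterface element is expected inside both the joint and the actuator tags:

<transmission name="left_rear_wheel_trans">
  <type>transmission_interface/SimpleTransmission</type>
  <joint name="left_rear_rotate">
    <hardwareInterface>hardware_interface/VelocityJointInterface</hardwareInterface>
  </joint>
  <actuator name="left_rear_wheel_motor">
    <hardwareInterface>hardware_interface/VelocityJointInterface</hardwareInterface>
    <mechanicalReduction>1</mechanicalReduction>
  </actuator>
</transmission>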
It seems to me that you are not able to use the controllers because they are not loaded in RobotHW.
That is the problem. [WARN] [1571345774.639031, 0.000000]: Controller Spawner couldn't find the expected controller_manager ROS interface.
This warning came up after I improved some other things about the URDF that were causing warnings. I tried using your example here:
<gazebo>
  <plugin filename="libgazebo_ros_control.so" name="ros_control">
    <robotNamespace>/ack</robotNamespace>
    <robotParam>/ack/gem</robotParam>
    <controlPeriod>0.001</controlPeriod>
    <robotSimType>gazebo_ros_control/DefaultRobotHWSim</robotSimType>
    <legacyModeNS>true</legacyModeNS>
  </plugin>
</gazebo>
My namespace is /ack and my robot name is gem. I'm running out of ideas...
For one experiment I generated the URDF from my xacro; it's the temp2.txt file here:...
The xacro is gem_1kg_LiDAR.xacro
I added the output from the launch in my original post. Any advice is appreciated. Thanks.
Please update your question with this sort of information and make sure to format it properly. Right now it's very hard to read, and comments are too limited to contain this sort of update.
Edit your original question. Use the edit button/link for that.
Hi,
The repo you posted is not accessible. It would be good if you give read access.
On the other hand, I think the problem is not in the VLP-16 instantiation, and the plugin seems to launch correctly. That warning can be caused by several things, like a lack of computing power that leaves the controller manager without enough time to load, or conflicts when loading plugins (like the VLP-16)...
I am using the files you provided and I think you may have some collisions problems between links. My Gazebo is unable to start both the model and the controllers.
Also ensure you have installed gazebo_ros_pkgs, ros_control, and ros_controllers.
Furthermore, you are running robot_state_publisher and joint_state_publisher in a different namespace; this is not advisable unless you do the proper remaps.
Hi Weasfas, thanks. I will update and try using one namespace for both robot_state_publisher and joint_state_publisher. Also, I set my repo to Public. (I thought I already had...)
The computer resources I think are fine. I have another robot using joint publishers along with this same LiDAR setup. Maybe it is a conflict. I will have to do more experiments.
Hi @horseatinweeds
After doing some tests with your package I found a several problems that, once fixed, let me display and control the base on Gazebo.
First of all, you need to take care of the <selfCollide> tag in the gazebo reference, because it is causing self-collision problems in your model.
Next, you will need to change the VLP-16 description to fit your environment: you are loading a samples:=1875 lidar, which is huge depending on the machine and the resources allocated to Gazebo. Try to lower that number and launch the simulation. With those fixes you will be able to launch it and Gazebo will have enough time to load the controllers.
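For reference, a sketch of how the sample count can be lowered when instantiating the sensor. The parameter names follow the velodyne_description / velodyne_simulator packages, and the mounting origin here is purely illustrative:

<xacro:include filename="$(find velodyne_description)/urdf/VLP-16.urdf.xacro"/>
<!-- Fewer samples per revolution means far less work per simulation step. -->
<xacro:VLP-16 parent="base_link" name="velodyne" topic="/velodyne_points" hz="10" samples="440" gpu="false">
  <origin xyz="0 0 0.4" rpy="0 0 0"/>
</xacro:VLP-16>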
Send mail the .NET way
Adding e-mail functionality to apps doesn't get much simpler than this.
Nearly every Web application needs to send e-mail sooner or later. Whether your site is sending a confirmation e-mail or the dreaded spam (not that you would do that, of course), sending e-mail is a vital function of most Web sites. Although sending e-mail programmatically was never terribly difficult, tackling the task is amazingly simple using the .NET Framework. In this article, you'll learn about the objects the System.Web.Mail namespace provides. You'll also see how to send mail, how to add attachments and how to take advantage of all the members of the various objects. To demonstrate the various techniques, you can try out the sample project, WebMail.sln, which accompanies this article. (See the link at the end of this article for details on downloading this project.)
Two things to note: Sending e-mail requires an SMTP server. That is, unless you can send e-mail from the computer hosting your Web site, your site won't be able to send e-mail. You might need to set up and configure the SMTP server that's installed by default on your computer in order to get this to work. In general, if your machine can send e-mail by any means, you should be able to use the code shown in this article. (Installing and configuring SMTP is beyond the scope of this article. I'll assume you're able to send e-mail already and that you simply want to automate the process for your site by using the code in the article.) In addition, the SmtpMail class uses Collaborative Data Objects (CDO) and COM under the covers. This means CDO must be installed on the Web server where you're hosting the page. (The easiest way I know to ensure CDO is installed is to install Outlook 98 or later, but that might not be practical on your server.)
To start sending mail, try out the sample page from the sample project shown in Figure 1. You can load this page directly or by clicking on the Work with SmtpMail.Send link on Default.aspx. If you fill in fields as shown in Figure 1 and click on the Send button, you'll end up running this code:
' At the top of the file:
Imports System.Web.Mail

' In the event procedure:
SmtpMail.Send( _
    txtFrom.Text, txtTo.Text, txtSubject.Text, txtBody.Text)
Figure 1. The SmtpMail.Send method is really simple. You supply four details and you're on your way. If you need more control, use the second overloaded version of the method, which allows you to pass a MailMessage object.
As you can see, the simple, static (shared) Send method requires four parameters. That's pretty much all there is to it. Of course, you have other options. The Send method allows you to specify a MailMessage object (also provided by the System.Web.Mail namespace). Using the MailMessage object provides much more flexibility, such as the option to include attachments. (The SmtpMail object also provides a static (shared) property named SmtpServer, which allows you to specify an SMTP server to use for sending mail. If you need to use an SMTP server other than the current machine, you can set this property before sending mail.)
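For example (a hypothetical server name, not taken from the article's sample project):

' Route mail through a specific SMTP server instead of the local machine.
SmtpMail.SmtpServer = "mail.example.com"
SmtpMail.Send("webmaster@example.com", "someone@example.com", _
    "Test message", "Sent through an explicitly specified SMTP server.")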
Get the MailMessage
If you want to control your messages in greater detail, you should investigate the MailMessage class. Besides sending e-mail using four string parameters, the SmtpMail.Send method can accept a single MailMessage object as its parameter. The MailMessage class provides a number of useful properties (but no specific methods). Try the page shown in Figure 2 to test out most of the members of the MailMessage class. (Select the Mail Using MailMessage link on Default.aspx.)
Figure 2. If you need complete control over your mail message, use the MailMessage class. You can specify attachments, text format, Cc and Bcc recipients, priority levels, and more.
The MailMessage class provides many simple properties. See the online documentation for a full listing, but the items you're most likely to use (and the ones the sample page exercises) are From, To, Cc, Bcc, Subject, Body, Attachments, BodyFormat, and Priority.
At its simplest, you're likely to use the MailMessage class as it is in this code from the sample page:
Dim msg As New MailMessage() msg.From = txtFrom.Text msg.To = txtTo.Text msg.Cc = txtCC.Text msg.Bcc = txtBCC.Text msg.Subject = txtSubject.Text msg.Body = txtBody.Text SmtpMail.Send(msg)
Other properties aren't quite so simple. If you need to add attachments, you can use the Attachments property -- an IList containing a list of attachment names. Note that the Attachments property is read-only. You won't be able to assign an IList object to the property. You only can add items individually to the property's list of items.
In the example page, you can add file names to the list of attachments by typing the names into the text box to the right of the Add button.
Although it's not demonstrated in the sample page, you also can set the Encoding property of the MailMessage you're sending. This property can be one of the encoding types the System.Text namespace provides, including ASCIIEncoding, UnicodeEncoding, UTF7Encoding, and UTF8Encoding.
In order to add an attachment to the Attachments property, you'll generally want to create a new MailAttachment object, passing in the filename and optionally the file encoding of the file you're attaching. (The file encoding is one of the values from the Mail.MailEncoding enumeration -- Base64 or the default, UUEncode.)
The sample page uses this code to copy items from the list box containing the selected attachments into a MailMessage's Attachments property:
Dim msg As New MailMessage() Dim li As ListItem For Each li In lstAttachments.Items msg.Attachments.Add(New MailAttachment(li.Text)) Next
In addition, the MailMessage class provides two properties that accept enumerated values. The BodyFormat property can be one of the MailFormat values (HTML or Text). The Priority property can be one of the MailPriority values (High, Low, or Normal). For example:
msg.BodyFormat = MailFormat.Text msg.Priority = MailPriority.High
This article's sample application retrieves its BodyFormat and Priority values from drop-down-list controls, filled with the values of the enumerations. As I described in Enumerating the Possibilities, you can write code like this to fill a list-box control or drop-down-list control with values from an enumeration:
If Not Page.IsPostBack Then FillListWithEnum( _ ddlPriority, GetType(Mail.MailPriority)) FillListWithEnum( _ ddlFormat, GetType(Mail.MailFormat)) End If
The sample code that sends the mail message retrieves the selected values by using this code:
msg.BodyFormat = _ CType(ddlFormat.SelectedItem.Value, Mail.MailFormat) msg.Priority = _ CType(ddlPriority.SelectedItem.Value, Mail.MailPriority)
Try the sample page and see how it works. Send an e-mail to yourself or a friend to test the various properties of the MailMessage class. For fun, try sending a message without specifying a sender or recipient. You'll receive an exception, but it's not a .NET exception. You'll trigger a COMException exception because the Send method uses COM and CDO to do its work. You'll want to add some exception handling to the code for this situation, of course.
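A minimal sketch of that handling (the status label is hypothetical and not part of the article's sample page):

Try
    SmtpMail.Send(msg)
Catch ex As System.Runtime.InteropServices.COMException
    ' CDO surfaces problems such as a missing sender or recipient as COM exceptions.
    lblStatus.Text = "The message could not be sent: " & ex.Message
End Try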
The files referenced in this article are available for download.
About the author: Ken Getz is a senior consultant with MCW Technologies and splits his time between programming, writing and training. He specializes in tools and applications written in Visual Studio .NET and Visual Basic, and he is co-author of "Access 2002 Desktop Developer's Handbook" as well as the training materials for AppDev's ASP.NET, Access 97 and 2000, Visual Basic 5.0, and Visual Basic 6.0 classes. He frequently speaks at technical conferences and is a contributing editor for asp.netPRO. E-mail Ken at keng@mcwtech.com.
This article is provided by asp.netPRO Magazine an online information resource for the ASP.NET developer community, which helps professional software developers build, deploy and run the next generation of dynamic, distributed Web applications quickly and easily. Click here for subscription information. | http://searchwindevelopment.techtarget.com/tip/Send-mail-the-NET-way | CC-MAIN-2015-18 | refinedweb | 1,373 | 57.16 |
I've been working on a project involving Facebook authentication and
embedding "likes" and "posts", as well as mapping locations of crowdfunding
projects onto Google Maps.
Since we're mostly Windows developers on Code Project, let's stay in the
Windows platform environment. To do this, you'll need to:
You'll need to create an app from your Facebook account. To do this
(note that the UI might change over time), log in to Facebook and, from the
"gear" pulldown, select "Create App":
Create an application, providing:
Once you've completed the process, you will be provided with an App ID and
App Secret. You'll need these for obtaining a user access token later on.
A user access token is needed to obtain the friend location and hometown,
which we use to map where our friends are. For testing purposes, we can
acquire this token manually, but beware that it expires every two hours, so you
will have to repeat this step if you get a "token expired" error.
Go to
Then click on Get Access Token. A popup will appear, from which you
should select "Friends Data Permissions" and then check "friends_hometown" and "friends_location":
Click on Get Access Token, which closes the dialog and now your access token
will be displayed in the Graph API Explorer. You can copy this access
token into your code for temporary access when testing the application.
While we're here, we might as well test the query we'll be using. Click on "FQL Query"
and enter:
SELECT uid, name, pic_square, current_address, current_location,
hometown_location FROM user WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 =
me())
Then click the Submit button. You should see an array returned, and if
you're friends entered information as to where they are living and/or their
hometown and make this information public, then you should see something along
the lines of:
We'll be parsing these records later in Ruby.
Windows is lacking SSL certificate information, which will result in an SSL authentication failure. To correct this problem, download a trusted CA certificate bundle (cacert.pem) and create an SSL_CERT_FILE environment variable that points to it.
For some reason, it originally took me three tries to get this right.
This part is optional. The source code for this project is available on
GitHub at
WhereAreMyFriends. If you want to set up your own GitHub project, this
is what I did:
This is optional for two reasons: you can use the Git command line or you can
use RubyMine's Git integration for working with the repository. Personal,
I prefer to use a separate visual tool such as SmartGit/Hg, which I've found
to be the best of the various visual tools for Git.
We've got a few housekeeping things to take care of before we get started
with actual coding.
If you've cloned my repository, ignore this step, as you can simply open the
directory in RubyMine.
If you're starting from a blank slate because you want to walk through how I
wrote this app, then you'll need to create a Rails app. Again, from the
command line, go to the parent directory into which you create the directory and
contents for the project. If you've cloned a blank repository from GitHub,
don't worry, just do this as well.
From the command line, type in "rails new WhereAreMyFriends" (or, if you gave
your project a different name, use that name.) This will create all the
pieces for a Ruby on Rails application.
In the RubyMine IDE, you should now see something like this when you open the
directory:
If you've cloned a Git repository, RubyMine should already be configured to
use Git as the VCS. Personally, I much prefer using SmartGit/Hg, but you
should know that RubyMine has built-in Git support.
We don't want all the RubyMine IDE files to be part of the repository, so
open the ".gitignore" file (in the application's root folder) and add:
/.idea
which excludes the entire .idea folder that RubyMine creates.
We need to pull in a few components, so edit the Gemfile (in root of your
application folder), adding:
gem 'gmaps4rails'
gem 'fql'
gem 'slim'
gem 'thin'
Once the Gemfile is updated, click on the Tools menu is RubyMine, then select
"Bundler..." then "Install", then click on the Install button (leaving the
optional arguments blank.) This installs the gems and any dependencies
that they have.
What are all these gems?
This is the
gem for interfacing to Google Maps (as well as others, such as OpenLayers,
Bing, and Mapquest).
This gem supports using the
Facebook Query Language in Ruby, which I use to query the locations of my
friends. There are other options as well and other techniques for querying
Facebook, but this is the approach I've chosen.
The Facebook FQL reference documentation can be found
here.
This gem, from the
website: "is a template language whose goal is [to] reduce the syntax of
essential parts without becoming cryptic." I find it makes HTML a lot more
readable, de-cluttering the angle brackets, closing tags, etc. There's a
great online utility for converting HTML to slim for
here.
This gem is a much faster web server than the default, which is WEBrick.
The gmaps4rails gem includes an installer that adds all the JavaScript and
CSS that you need for actually displaying a map. To do this, open a
command line prompt and cd to your application folder. Then type:
rails generate gmaps4rails:install
While we're on the command line, let's create the controller and view.
Type:
rails generate controller map_my_friends index
This creates the controller (map_my_friends_controller.rb), the view (index.html.erb), the associated helper and asset files, and a route for the index action.
Delete the "index.html.erb" file and create a new file called "index.html.slim",
so that we're using the slim HTML syntax rather than straight HTML.
You should see something like this now in your project tree:
Finally, before we do anything else, let's set the root route to this page,
so we can get to it simply from "localhost:3000". Edit the routes.rb file
(in the config folder), adding:
root to: "map_my_friends#index"
Note that when we created the page controller, the route:
get "map_my_friends/index"
was automatically added for us.
Now we're ready to do some coding. First, we're going to create a basic
model, "Friend", to hold the information about our friends. We could do
this with a generator similar to how we created the controller, but because it's
not a vanilla solution, I prefer to simply create the file manually.
In RubyMine, under the app\models folder, create the file "friend.rb":
class Friend < ActiveRecord::Base
acts_as_gmappable
# Fields we get from FB
attr_accessible :uid, :name, :pic, :address
# Fields required by gmaps4rails (lat and long also come from FB)
attr_accessible :gmaps, :latitude, :longitude
# gmaps4rails methods
def gmaps4rails_address
address
end
def gmaps4rails_infowindow
"#{name}"
end
end
The line "acts_as_gmappable" is a hook that generates latitude and longitude
data when an address is persisted. While it wasn't my intention to even
have a persisting Friend model, the gmaps4rails gem is somewhat coupled with the
Rails ActiveRecord and the five minutes I spent googling and playing around
trying to decouple it, without success, was five minutes more than I wanted to
spend on the issue, so as a result, we have a persistable Friend model.
No model is usually complete without its associated table, so we will use a
database migration to create the table. Since we're using sqlite3 as the
database, there's no need to futz around with database authentication issues,
database servers, etc.
In RubyMine, in the "db" folder, create a sub-folder called "migrate", and in
that folder, create a file called "001_create_friends_table.rb":
class CreateFriendsTable < ActiveRecord::Migration
def change
create_table :friends do |t|
t.string :uid
t.string :name
t.string :pic
t.string :address
t.float :latitude
t.float :longitude
t.boolean :gmaps
t.timestamps
end
end
end
Your project tree should now reflect these two new files:
Now, run the migration by pressing Ctrl+F9, or right-clicking on the
migration and from the popup menu selecting "run db:migrate".
A "standard" practice in Ruby on Rails code is to put directly into the
controller all the code that's needed to render a page. So, typically, you
would see the code that queries Facebook and populates the model either in the
controller or in the model. Personally, I prefer to put this kind of code
into the lib folder and providing helper methods to interface to whatever model
supports the necessary fields. I've read some articles that disagree with
my on this point, saying that all business logic should go in the model.
The problem as I see it is that there is application-model-independent business
logic (as in, agnostic business logic), such as how we interface with Facebook,
that shouldn't go into the application's model because it is agnostic.
However, because Rails does not auto-load the code in the lib folder, we have
to coerce it. Also note that files in the lib folder are don't
automatically cause the server to reload the Ruby script, so you'll have to
restart the server if you make changes in the lib folder's files.
First, edit the application.rb file found in the app\config folder, adding
the line:
config.autoload_paths += %W(#{config.root}/lib/facebook_wrapper)
which tells Rails we specifically want to include files found in this folder.
Second, in the lib folder, create a sub-folder called "facebook_wrapper".
Third, create a file in that sub-folder called "facebook_wrapper.rb".
Your project structure should now look like this:
Now we're going to wrap our class, FacebookWrapper, in a module called
FacebookWrapperModule:
module FacebookWrapperModule
class FacebookWrapper
def ... my functions ...
end
end
and implement the following functions.
This function returns an array of friends in Facebook structure:
def self.get_fb_friends
options = {access_token: "[your access token goes here]"}
friends = Fql.execute("SELECT uid, name, pic_square, current_address,
current_location, hometown_location FROM user WHERE uid IN (
SELECT uid2 FROM friend WHERE uid1 = me())", options)
friends
end
We will fix the hardcoded access token later - for the moment, we just
want to get something up and running.
This function converts the Facebook friends array into an array of our model
instances, which we callback to the application to create each model instance.
Because Ruby is a duck-typing language, all the application needs to do is
implement attributes (properties) or methods for the attributes we expect to
initialize - we don't need to know the "type" or implement this as an interface,
as we would in C#. Furthermore, by utilizing the callback capability of
Ruby, we can request that the application instantiates its model instance
itself:
def self.from_fb_friends(fb_friends)
friends = []
fb_friends.each do |fb_friend|
location = get_location_or_hometown_address(fb_friend)
if !location.nil? # or: unless location.nil?
friend = yield(fb_friend, location)
friends << friend
end
end
friends
end
Another Ruby-ism is to use the "unless" keyword rather than "if !" (if not),
which I personally find reduces the readability of the code. I have no
problem with negative logic, and saying "unless location.nil?" requires me to do
mental gyrations back to "if locations does not equal nil."
Lastly, we have a private helper method for getting the address information
from either the friend's location (preferable) or their hometown (a fallback):
private
def self.get_location_or_hometown_address(fb_friend)
location = fb_friend["current_location"]
if location.nil?
location = fb_friend["hometown_location"]
end
location
end
Note that we never explicitly use the "return" keyword. There's a
reason for that, which I'll illustrate next.
Next, we'll update the map_my_friends_controller.rb file, the controller for
our index. The first thing we need to do is
reference our facebook_wrapper library helper. This reveals some of the
intricacies of Ruby's module and file handling.
First, we need to tell Ruby that we "require" the facebook_wrapper.rb
file (which it knows how to get because we added lib\facebook_wrapper to the
auto_load config paths):
require 'facebook_wrapper'
Then, we need to tell Ruby that we want to use the objects defined in the
FacebookWrapperModule:
include FacebookWrapperModule
If we don't do this, we have to qualify the objects with "FacebookWrapperModule::".
The include keyword is similar to the using keyword in C#, and the
module
keyword is similar to the namespace keyword. The only thing new here is
the dynamic loading of a dependent file facebook_wrapper.rb.
The implementation for the index method gathers the arrays and
provides the callback method for populating a model (Friend) instance for each
instance of a Facebook structure, and finally that array is formatted as JSON
and passed back to the client:
class MapMyFriendsController < ApplicationController
def index
fb_friends = FacebookWrapper.get_fb_friends
@friends = FacebookWrapper.from_fb_friends(fb_friends) { |fb_friend, location|
friend = Friend.new
friend.uid = fb_friend["uid"]
friend.name = fb_friend["name"]
friend.pic = fb_friend["pic_square"]
friend.address = location["name"]
friend.latitude = location["latitude"]
friend.longitude = location["longitude"]
friend.gmaps = true
friend
}
@json = @friends.to_gmaps4rails
respond_to do |format|
format.html # index.html.erb
format.json { render json: @friends }
end
end
end
Note that we do not call return friend in the callback code - if we
do this, it's treated as a return from the calling code and the @friends
property is never initialized!
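A tiny standalone illustration of that Ruby behavior (not part of the project code): the block's last expression is what yield receives, while an explicit return inside the block returns from the method that defined the block.

def collect_values
  results = []
  [1, 2, 3].each { |n| results << yield(n) }
  results
end

# Implicit block return: the last expression becomes the yielded value.
collect_values { |n| n * 2 }   # => [2, 4, 6]

def caller_method
  # The explicit return exits caller_method itself on the first element,
  # so collect_values never finishes building its array.
  collect_values { |n| return n * 2 }
end

caller_method                  # => 2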
Edit the index.html.slim file (in the app\views\map_my_friends folder),
adding this one line as the entire contents of the file:
= gmaps4rails(@json)
If you run the application (with a current user access token), you should see
your friends mapped onto a Google map:
However, what we'd to do is make the map bigger, so that it takes up most of
a full-screen browser window (I do everything in full screen which is why I like
this). To do this, replace the line that we created above with:
= gmaps( :map_options => { :container_class => "my_map_container" },
"markers" => {"data" => @json,
"options" => {"auto_zoom" => false} })
and edit the map_my_friends.css.scss (found in the app\assets\stylesheets
folder), adding:
div.my_map_container {
margin-top: 30px;
padding: 6px;
border-width: 1px;
border-style: solid;
border-color: #ccc #ccc #999 #ccc;
-webkit-box-shadow: rgba(64, 64, 64, 0.5) 0 2px 5px;
-moz-box-shadow: rgba(64, 64, 64, 0.5) 0 2px 5px;
box-shadow: rgba(64, 64, 64, 0.1) 0 2px 5px;
width: 80%;
height: 80%;
margin-left:auto;
margin-right:auto;
}
div.my_map_container #map {
width: 100%;
height: 100%;
}
Refresh the browser and you will get bigger map which sizes based on the
browser window.
First, we'll add profile_url to the FQL query that we're using, and adjust
our model and controller accordingly. We also need to add a migration to
add this field to the Friend table:
class AddProfileUrlField < ActiveRecord::Migration
def change
add_column :friends, :profile_url, :string
end
end
Next, we provide some HTML to render in the Google Maps info window:
def gmaps4rails_infowindow
"<p><a href = '#{profile_url}' target='_blank'>#{name}</a><br>#{address}<br><img src = '#{pic}'/></p>"
end
and the result is:
showing us:
Now that we have a basic application running, let's deal with a different set
of complexity which will also resolve the pesky user access token expiration
problem. The issue is this - rather than gathering our friends, we need
authorization to gather the friends of anyone that visits our site, which means
that we'll need the ability for users to log in using their Facebook login and
authorize us to query their data.
First, add:
gem 'omniauth-facebook'
to the Gemfile found in your root folder. The gems we've now added to
the Gemfile for this project are:
gem 'gmaps4rails'
gem 'fql'
gem 'slim'
gem 'thin'
gem 'omniauth-facebook'
This time, let's create the user model with the model generator. From
RubyMine's Tool menu, select "Run Rails Generator" then double-click on
"model". Enter the options for the rails generator:
User provider:string uid:string name:string email:string oauth_token:string
This creates a new migration file, so find it under db\migrate, right click
on it and select "Run 'db:migrate' "
In the newly create User model, add the following code:
def self.create_with_omniauth(auth)
create! do |user|
user.provider = auth.provider
user.uid = auth.uid
user.oauth_token = auth.credentials.token
if auth.info
user.name = auth.info.name || ""
user.email = auth.info.email || ""
end
end
end
This code creates a user in the database with the provided authentication
information.
Notice that we have access here to the user access token, which is saved in
the field oauth_token.
In the config\initializers folder, create the file omniauth.rb with the
contents:
Rails.application.config.middleware.use OmniAuth::Builder do
provider :facebook, ENV['FACEBOOK_KEY'], ENV['FACEBOOK_SECRET'],
:scope => 'friends_location, friends_hometown, user_friends, email',
:display => 'popup'
end
This informs omniauth that we're authenticating with Facebook. Notice the
"scope" key, whose values are friends_location and friends_hometown,
which specifies that we're interested in the location and hometown of our
friends, and we need user_friends so that we can get the friends of the Facebook user.
Personally, I don't like environment variables - I would rather use a file
that isn't stored in the Git repository.
Previously, I've used a local_env.yml file to programmatically add
items to the ENV collection:
Edit the application.rb file (located in the config) folder, and add:
config.before_configuration do
env_file = File.join(Rails.root, 'config', 'local_env.yml')
YAML.load(File.open(env_file)).each do |key, value|
ENV[key.to_s] = value
end if File.exists?(env_file)
end
This code adds additional items to the ENV collection. Now we need to
create the file. In the config folder, create the file local_env.yml,
whose contents are:
FACEBOOK_KEY: '[your key]'
FACEBOOK_SECRET: '[your secret id]'
Make sure that when you put in your key and secret ID, that you preserve the
single quotes.
Also, add config/local_env.yml to your .gitignore file -- this prevents the
file from being added to your repository.
In the app\controllers folder, create the file sessions_controller.rb,
whose contents are:
class SessionsController < ApplicationController
def new
redirect_to '/auth/facebook'
end
def create
auth = request.env["omniauth.auth"]
user = User.where(:provider => auth['provider'],
:uid => auth['uid']).first || User.create_with_omniauth(auth)
session[:user_id] = user.id
redirect_to root_url, :notice => "Signed in!"
end
def destroy
session[:user_id] = nil
redirect_to root_url, notice: 'Signed out!'
end
end
This handles three routes: new simply redirects the browser to Facebook's OAuth endpoint (/auth/facebook); create is the OAuth callback, which finds or creates the User record and stores its id in the session; and destroy clears the session to sign the user out.
Add the following routes to the routes.rb file (in the config folder):
match '/auth/:provider/callback' => 'sessions#create'
match '/signout' => 'sessions#destroy'
match '/signin' => 'sessions#new'
In our map_my_friends_controller, we're going to pass in the access token,
which we acquire from the database with the user's id stored when the session
was created:
def index
user_id = session[:user_id]
@friends = []
if !user_id.nil?
oauth_token = User.find(user_id).oauth_token
@friends = get_friends(oauth_token)
end
@json = @friends.to_gmaps4rails
respond_to do |format|
format.html # index.html.erb
format.json { render json: @friends }
end
end
Notice that I separated out the get_friends code into a separate function.
Another thing you'll often see in a lot of Ruby on Rails code is very long
functions with code blocks that really should be broken out. It's easy to
write the code the "wrong" way because you're dealing with specific, isolated
route handler functions, but it makes things a lot less readable and
maintainable.
And in facebook_wrapper.rb:
def self.get_fb_friends(oauth_token)
options = {access_token: oauth_token}
friends = Fql.execute("SELECT uid, name, pic_square, current_address,
current_location, hometown_location,
profile_url FROM user WHERE uid IN (SELECT uid2
FROM friend WHERE uid1 = me())", options)
friends
end
In our application view (common to all pages) we want to provide a "Sign in with Facebook" link (or, when signed in, the user's name and a sign-out link), a place to render flash messages, and the page content itself.
We might as well convert this file to a "slim" file as well, so delete
application.html.erb (in the app\views\layouts folder), and replace it with the
slim file "application.html.slim":
Edit the application.html.erb file (in the app\views\map_my_friends folder),
inserting at the top of the file:
doctype
html
head
title Where Are My Friends
= stylesheet_link_tag "application", media: "all"
= javascript_include_tag "application"
= csrf_meta_tag
body
#container
#user_nav
- if current_user
| Signed in as
|
strong= current_user.name
|
= link_to "Sign out", signout_path
- else
= link_to "Sign in with Facebook", signin_path
- flash.each do |name, msg|
= content_tag :div, msg, id: "flash_#{name}"
= yield
= yield :scripts
To access the current_user that we used above, we'll add a helper function
to application_controller.rb (in the app\controllers folder):
current_user
class ApplicationController < ActionController::Base
protect_from_forgery
private
def current_user
@current_user ||= User.find(session[:user_id]) if session[:user_id]
end
helper_method :current_user
end
There's also a bunch of CSS that I'm not showing. The result is now a
usable website:
The application is hosted here: give it a try!
If your friends haven't set a current location or hometown or if this
information is blocked, they won't show up on the map. I also don't
distinguish between current location and hometown - that would be something nice
to do with a different marker. So there's a couple things I'll get around
to at some point and update the article.
Also, it's interesting working with a Facebook app. For example, if I
want my housemate to try out the site after I've signed in, while I can sign out
from my site, I also need to sign out from Facebook (by going to Facebook!) and
only then will I get the Facebook sign in so my housemate can sign in with her
Facebook username and password.
I'm indebted specifically to the following people who have no idea that they
helped me put all this together! This of course omits all the people that
have put in countless hours writing Ruby, Rails, and all these amazing gems.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
_Josh_ wrote:He's a long, long way from the majority of your FB friends
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/664754/Where-In-The-World-Are-My-Facebook-Friends?msg=4716052 | CC-MAIN-2016-07 | refinedweb | 3,693 | 54.02 |
In this project, you’ll be introduced to using the GPIO pins of your Raspberry Pi and how to integrate them with electronic components using Python.
The switch pins should easily fit into the patch points of the breadboard, if they don’t, the switch is not being positioned correctly.
from gpiozero import LED from time import sleep from gpiozero import Button button = Button(2) led = LED(25) while True: button.wait_for_press() led.on() led.off()
Your LED should now be cycling through on and off when you press the switch
From a quick tap to smashing that love button and show how much you enjoyed this project. | https://www.okdo.com/project/control-an-led-with-a-switch/ | CC-MAIN-2020-40 | refinedweb | 108 | 62.72 |
How to... move from AjaxPro to ASP.NET AJAX PageMethods
In one of my last posts I blogged about the future of Ajax.NET Professional (AjaxPro) and that I'm not able to do further development on that project. A lot of my readers feeling sad about this but I had to concentrate more on new technologies that will revolutionize web application development.
My recommendation is to move to ASP.NET AJAX because it is Microsoft's next generating ASP.NET web application generation and is built in Visual Studio .NET 2008 (and, of course, available as additional feature pack for Visual Studio .NET 2005. Those of you that are still using .NET framework 1.1: please stay developing with AjaxPro...
ASP.NET AJAX PageMethods in VS.NET 2008
When started Visual Studio I start a new WebApplication project:
To enable any ASP.NET AJAX feature you need always the ScriptManager which will be responsible for the ASP.NET AJAX main scripts (in AjaxPro prototype.ashx and core.ashx) as well as the JavaScript that is needed to create the client JavaScript proxies (compared with the on-the-fly generated ASHX files in Ajax.NET Professional).
You will find the ScriptManager control in the AJAX Extensions toolbox. Simply drag and drop the control in the default.aspx page.
As this control does not have any UI it will be displayed as a black box in Visual Studio Designer.
Next we need to setup the ScriptManager. By default PageMethods are not enabled. To enable PageMethods only one property has to be changed. The property name is EnablePageMethods. You can either configure this property in the property list or in the HTML code itself.
That is nearly everything we need to configure in ASP.NET AJAX to get PageMethods running.
Let's have a look at the C# source code that we want to be execute when calling the AJAX method. As a very simple example I will return an integer value. First have a look at the source code that we are currently using in AjaxPro:
namespace WebApplication1 { [AjaxPro.AjaxNamespace("Default")] public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { AjaxPro.Utility.RegisterTypeForAjax(typeof(_Default)); } [AjaxPro.AjaxMethod] public static int HelloWorld() { return 2; } } }
As I have recommended in the Google group for Ajax.NET Professional I'm using the [AjaxNamespace] attribute nearly every time. The reason is that it is very easy to move the class around or to make updates more easy. The [AjaxMethod] attribute marks the static HelloWorld method to be available in the JavaScript client-side proxy.
With AjaxPro you call this method like following line:
<script type="text/javascript"> function callme() { Default.HelloWorld(mycallback); } function mycallback(res) { alert(res.value); } </script>
What do you need to change your code for AjaxPro to ASP.NET AJAX? The code-behind C# source looks very similar. Because we have put the ScriptManager control on the page we don't need the RegisterTypeForAjax call in the Page_Load. The control itselfs has the reference to the page.
The AJAX methods (PageMethods) in ASP.NET AJAX have to be marked with the [WebMethod] attribute. To identify that the Page class includes a method that we want to expose the class has to be marked with the [ScriptService] attribute. That's everything you have to change. Don't forget to change the method to static if not already using static methods.
Wow, that was very easy, no source code change (only meta information are changed). Note that you can still leave Ajax.NET Professional attributes if you want. This makes it very easy to move from AjaxPro to ASP.NET AJAX and back if needed.
On the client-side JavaScript code you have to do more changes but it is not very different. Well, have a look at the JavaScript source code:
<script type="text/javascript"> function onSuccess(value, ctx, methodName) { alert(value + 5); } function onFailed(ex, ctx, methodName) { alert(ex.get_exceptionType()); // get_stackTrace(), get_message(), // get_statusCode(), get_timedOut() } window.onload = function() { var ctx = { CurrentValue: 123456, CurrentDate: new Date() }; // sample context data PageMethods.HelloWorld(onSuccess, onFailed, ctx); } </script>
In the window.onload event I invoke the HelloWorld method. As argument I pass the callback handler if the execution was successful. Another callback handler could be passed to be called if there occurs any problem like http errors or .NET exceptions. As last argument you can pass a context object that will be available in both callback handlers.
If the invoke was successful you get up to three objects. The first will contain the value of the AJAX method, in our example it is the integer 2 (remember in AjaxPro it was res.value). The second passed object contains the context and the last one the method name (which does not include the namespace or class name when e.g. used in MasterPage and Page side-by-side).
The onFailed callback handler will be executed on any error during invocation. As result you get a JavaScript objects with several methods that help you to identify the real error. Second and third passed objects are the same as for the onSuccess callback: the context and method name.
The generation of the JavaScript client-side proxy will be included in the html output:
I hope this very short example will help you to move to ASP.NET AJAX. Any questions? | http://weblogs.asp.net/mschwarz/how-to-move-from-ajaxpro-to-asp-net-ajax-pagemethods | CC-MAIN-2015-11 | refinedweb | 896 | 67.76 |
Hi, In my master thesis I use GRIN and C-- to writing a backend to ehc (essential haskell compiler -) It is nice to see GRIN is used by other people as well. My work has been to extend the GRIN model to be usefull for exceptions and implement a grin compiler. I am currently in the process of writing the results down. For those not familiar to GRIN some lines on its ideas: - Explicit behaviour (no buildin magic functions) - Thunks and values are represented by a node (A tag followed with zero or more fields) - Do a global analysis to find an approximation of which nodes are stored in memory and the values that variables hold. - Use this approximation to remove any unknown function call and remove unneeded case alternatives [snip] > I have recently been thinking about writing a c-- backend for jhc, printing out > and reading the various c-- papers in preparation. As I expected, the > translation would be quite simple and straightforward, but I found the section > on continuations and cut to particulary inspiring. Adding them to Grin > (graph-reduction-intermediate-form, the last form of jhc programs before code > generation) would give me exactly the features I have been missing and having > to do in an ad-hoc manner in the code generator or do without, exceptions, join > points, loops. > I formulated how to add them to Grin, verified it still followed > the monad laws and think I might have come up with something that is not only > practically useful to me now, but has implications for c-- optimizers and > implementations in general which is what this message is about. > for example, a join point: > > myfunc = \ (a,b) -> do -- ^ note, tupled, no currying > cont <- mkContinuation $ \ x -> do > return (x + 4) > case a - b of > 3 -> return -1 > 5 -> cutTo (cont 2) > z -> cutTo (cont z) > yay! Grin has gained a whole lot of power from stealing this idea from c--. yay! Grin already has a whole lot of power. ;-). [snip] Doing register allocation and instruction selection in GRIN sure looks nice. It is an intresting thought. Christof | http://www.haskell.org/pipermail/glasgow-haskell-users/2005-November/009232.html | CC-MAIN-2014-35 | refinedweb | 350 | 53.04 |
Python Numpy Array Tutorial
NumPy is, just like SciPy, Scikit-Learn, Pandas, etc..
Today’s post will exactly focus on this last. This NumPy tutorial will not only show you what NumPy arrays actually are and how you can install Python, but you’ll also learn how to make arrays (even when your data comes from files!), how broadcasting works, how you can ask for help, how to manipulate your arrays and how to visualize them.
If you want to know even more about NumPy arrays and the other data structures that you will need in your data science journey, consider taking a look at DataCamp’s Intro to Python for Data Science, which has a chapter on NumPy.
What Is A Python Numpy Array?
You already read in the introduction that”.
This already gives an idea of what you’re dealing with, right?
In other words, NumPy is a Python library that is the core library for scientific computing in Python. It contains a collection of tools and techniques that can be used to solve on a computer mathematical models of problems in Science and Engineering. One of these tools is a high-performance multidimensional array object that is a powerful data structure for efficient computation of arrays and matrices. To work with these arrays, there’s a huge amount of high-level mathematical functions operate on these matrices and arrays.
Then, what is an array?
When you look at the print of a couple arrays, you could see it as grid that contains values of the same type:
You see that, in the example above, the data are integers. The array holds and represents any regular data in a structured way.
However, you should know that, on a structural level, an array is basically nothing but pointers. It’s a combination of a memory address, a data type, a shape and strides:
- The
datapointer indicates the memory address of the first byte in the array,
- The data type or
dtypepointer describes the kind of elements that are contained within the array,
- The
shapeindicates the shape of the array, and
- The
stridesare the number of bytes that should be skipped in memory to go to the next element. If your strides are (10,1), you need to proceed one byte to get to the next column and 10 bytes to locate the next row.
Or, in other words, an array contains information about the raw data, how to locate an element and how to interpret an element.
Enough of the theory. Let’s check this out ourselves:
You can easily test this by exploring the
numpy array attributes:
You see that now, you get a lot more information: for example, the data type that is printed out is ‘int64’ or signed 32-bit integer type; This is a lot more detailed! That also means that the array is stored in memory as 64 bytes (as each integer takes up 8 bytes and you have an array of 8 integers). The strides of the array tell us that you have to skip 8 bytes (one value) to move to the next column, but 32 bytes (4 values) to get to the same position in the next row. As such, the strides for the array will be (32,8).
Note that if you set the data type to
int32, the strides tuple that you get back will be
(16, 4), as you will still need to move one value to the next column and 4 values to get the same position. The only thing that will have changed is the fact that each integer will take up 4 bytes instead of 8.
The array that you see above is, as its name already suggested, a 2-dimensional array: you have rows and columns. The rows are indicated as the “axis 0”, while the columns are the “axis 1”. The number of the axis goes up accordingly with the number of the dimensions: in 3-D arrays, of which you have also seen an example in the previous code chunk, you’ll have an additional “axis 2”. Note that these axes are only valid for arrays that have at least 2 dimensions, as there is no point in having this for 1-D arrays;
These axes will come in handy later when you’re manipulating the shape of your NumPy arrays.
How To Install Numpy
Before you can start to try out these NumPy arrays for yourself, you first have to make sure that you have it installed locally (assuming that you’re working on your pc). If you have the Python library already available, go ahead and skip this section :)
If you still need to set up your environment, you must be aware that there are two major ways of installing NumPy on your pc: with the help of Python wheels or the Anaconda Python distribution.
… With Python Wheels
Make sure firstly that you have Python installed. You can go here if you still need to do this :)
If you’re working on Windows, make sure that you have added Python to the PATH environment variable. Then, don’t forget to install a package manager, such as
pip, which will ensure that you’re able to use Python’s open-source libraries.
Note that recent versions of Python 3 come with pip, so double check if you have it and if you do, upgrade it before you install NumPy:
pip install pip --upgrade pip --version
Next, you can go here or here to get your NumPy wheel. After you have downloaded it, navigate to the folder on your pc that stores it through the terminal and install it:
install "numpy-1.9.2rc1+mkl-cp34-none-win_amd64.whl" import numpy numpy.__version__
The two last lines allow you to verify that you have installed NumPy and check the version of the package.
After these steps, you’re ready to start using NumPy!
… With The Anaconda Python Distribution
To get NumPy, you could also download the Anaconda Python distribution. This is easy and will allow you to get started quickly! If you haven’t downloaded it already, go here to get it. Follow the instructions to install and you're ready to start!
Do you wonder why this might actually be easier?
The good thing about getting this Python distribution is the fact that you don’t need to worry too much about separately installing NumPy or any of the major packages that you’ll be using for your data analyses, such as pandas, scikit-learn, etc.
Because, especially if you’re very new to Python, programming or terminals, it can really come as a relief that Anaconda already includes 100 of the most popular Python, R and Scala packages for data science. But also for more seasoned data scientists, Anaconda is the way to go if you want to get started quickly on tackling data science problems.
What’s more, Anaconda also includes several open source development environments such as Jupyter and Spyder. If you’d like to start working with Jupyter Notebook after this tutorial, go to this page.
In short, consider downloading Anaconda to get started on working with
numpy and other packages that are relevant to data science!
How To Make NumPy Arrays
So, now that you have set up your environment, it’s time for the real work. Admittedly, you have already tried out some stuff with arrays in the above DataCamp Light chunks. However, you haven’t really gotten any real hands-on practice with them, because you first needed to install NumPy on your own pc. Now that you have done this, it’s time to see what you need to do in order to run the above code chunks on your own.
Some exercises have been included below so that you can already practice how it’s done before you start on your own!
To make a
numpy array, you can just use the
np.array() function. All you need to do is pass a list to it and optionally, you can also specify the data type of the data. If you want to know more about the possible data types that you can pick, go here or consider taking a brief look at DataCamp’s NumPy cheat sheet.
There’s no need to go and memorize these NumPy data types if you’re a new user; But you do have to know and care what data you’re dealing with. The data types are there when you need more control over how your data is stored in memory and on disk. Especially in cases where you’re working with large data, it’s good that you know to control the storage type.
Don’t forget that, in order to work with the
np.array() function, you need to make sure that the
numpy library is present in your environment. The NumPy library follows an import convention: when you import this library, you have to make sure that you import it as
np. By doing this, you’ll make sure that other Pythonistas understand your code more easily.
In the following example you’ll create the
my_array array that you have already played around with above:
If you would like to know more about how to make lists, go here.
However, sometimes you don’t know what data you want to put in your array or you want to import data into a
numpy array from another source. In those cases, you’ll make use of initial placeholders or functions to load data from text into arrays, respectively.
The following sections will show you how to do this.
How To Make An “Empty” NumPy Array
What people often mean when they say that they are creating “empty” arrays is that they want to make use of initial placeholders, which you can fill up afterwards. You can initialize arrays with ones or zeros, but you can also make arrays that get filled up with evenly spaced values, constant or random values.
However, you can still make a totally empty array, too.
Luckily for us, there are quite a lot of functions to make
Try it all out below!
Tip: play around with the above functions so that you understand how they work!
- For some, such as
np.ones(),
np.random.random(),
np.empty(),
np.full()or
np.zeros()the only thing that you need to do in order to make arrays with ones or zeros is pass the shape of the array that you want to make. As an option to
np.ones()and
np.zeros(), you can also specify the data type. In case of
np.full(), you also have to specify the constant value that you want to insert into the array.
- With
np.linspace()and
np.arange()you can make arrays of evenly spaced values. The difference between these two functions is that the last value of the three that are passed in the code chunk above designates either the step value for
np.linspace()or number of samples for
np.arange(). What happens in the first is that you want, for example, an array of 9 values that lie between 0 and 2. For the latter, you specify that you want an array to start at 10 and per steps of 5, generate values for the array that you’re creating.
Remember that NumPy also allows you to create an identity array or matrix with
np.eye() and
np.identity(). An identity matrix is a square matrix of which all elements in the principal diagonal are ones and all other elements are zeros. When you multiply a matrix with an identity matrix, the given matrix is left unchanged.
In other words, if you multiply a matrix by an identity matrix, the resulting product will be the same matrix again by the standard conventions of matrix multiplication.
Even though the focus of this tutorial is not on demonstrating how identity matrices work, it suffices to say that identity matrices are useful when you’re starting to do matrix calculations: they can simplify mathematical equations, which makes your computations more efficient and robust.
How To Load NumPy Arrays From Text
Creating arrays with the help of initial placeholders or with some example data is a great way of getting started with
numpy. But when you want to get started with data analysis, you’ll need to load data from text files.
With that what you have seen up until now, you won’t really be able to do much. Make use of some specific functions to load data from your files, such as
loadtxt() or
genfromtxt().
Let’s say you have the following text files with data:
# This is your data in the text file # Value1 Value2 Value3 # 0.2536 0.1008 0.3857 # 0.4839 0.4536 0.3561 # 0.1292 0.6875 0.5929 # 0.1781 0.3049 0.8928 # 0.6253 0.3486 0.8791 # Import your data x, y, z = np.loadtxt('data.txt', skiprows=1, unpack=True)
In the code above, you use
loadtxt() to load the data in your environment. You see that the first argument that both functions take is the text file
data.txt. Next, there are some specific arguments for each: in the first statement, you skip the first row and you return the columns as separate arrays with
unpack=TRUE. This means that the values in column
Value1 will be put in
x, and so on.
Note that, in case you have comma-delimited data or if you want to specify the data type, there are also the arguments
delimiter and
dtype that you can add to the
loadtxt() arguments.
That’s easy and straightforward, right?
Let’s take a look at your second file with data:
# Your data in the text file # Value1 Value2 Value3 # 0.4839 0.4536 0.3561 # 0.1292 0.6875 MISSING # 0.1781 0.3049 0.8928 # MISSING 0.5801 0.2038 # 0.5993 0.4357 0.7410 my_array2 = np.genfromtxt('data2.txt', skip_header=1, filling_values=-999)
You see that here, you resort to
genfromtxt() to load the data. In this case, you have to handle some missing values that are indicated by the
'MISSING' strings. Since the
genfromtxt() function converts character strings in numeric columns to
nan, you can convert these values to other ones by specifying the
filling_values argument. In this case, you choose to set the value of these missing values to -999.
If, by any chance, you have values that don’t get converted to
nan by
genfromtxt(), there’s always the
missing_values argument that allows you to specify what the missing values of your data exactly are.
But this is not all.
Tip: check out this page to see what other arugments you can add to import your data successfully.
You now might wonder what the difference between these two functions really is.
The examples indicated this maybe implicitly, but, in general,
genfromtxt() gives you a little bit more flexibility; It’s more robust than
loadtxt().
Let’s make this difference a little bit more practical: the latter,
loadtxt(), only works when each row in the text file has the same number of values; So when you want to handle missing values easily, you’ll typically find it easier to use
genfromtxt().
But this is definitely not the only reason.
A brief look on the number of arguments that
genfromtxt() has to offer will teach you that there is really a lot more things that you can specify in your import, such as the maximum number of rows to read or the option to automatically strip white spaces from variables.
How To Save NumPy Arrays
Once you have done everything that you need to do with your arrays, you can also save them to a file. If you want to save the array to a text file, you can use the
savetxt() function to do this:
import numpy as np x = np.arange(0.0,5.0,1.0) np.savetxt('test.out', x, delimiter=',')
Remember that
np.arange() creates a NumPy array of evenly-spaced values. The third value that you pass to this function is the step value.
There are, of course, other ways to save your NumPy arrays to text files. Check out the functions in the table below if you want to get your data to binary files or archives:
For more information or examples of how you can use the above functions to save your data, go here or make use of one of the help functions that NumPy has to offer to get to know more instantly!
Are you not sure what these NumPy help functions are?
No worries! You’ll learn more about them in one of the next sections!
How To Inspect Your NumPy Arrays
Besides the array attributes that have been mentioned above, namely,
data,
shape,
dtype and
strides, there are some more that you can use to easily get to know more about your arrays. The ones that you might find interesting to use when you’re just starting out are the following:
These are almost all the attributes that an array can have.
Don’t worry if you don’t feel that all of them are useful for you at this point; This is fairly normal, because, just like you read in the previous section, you’ll only get to worry about memory when you’re working with large data sets.
Also note that, besides the attributes, you also have some other ways of gaining more information on and even tweaking your array slightly:
Now that you have made your array, either by making one yourself with the
np.array() or one of the intial placeholder functions, or by loading in your data through the
loadtxt() or
genfromtxt() functions, it’s time to look more closely into the second key element that really defines the NumPy library: scientific computing.
How NumPy Broadcasting Works
Before you go deeper into scientific computing, it might be a good idea to first go over what broadcasting exactly is: it’s a mechanism that allows NumPy to work with arrays of different shapes when you’re performing arithmetic operations.
To put it in a more practical context, you often have an array that’s somewhat larger and another one that’s somewhat smaller. Ideally, you want to use the smaller array multiple times to perform an operation (such as a sum, multiplication, etc.) on the larger array.
To do this, you use the broadcasting mechanism.
However, there are some rules if you want to use it. And, before you already sigh, you’ll see that these “rules” are very simple and kind of straightforward!
- First off, to make sure that the broadcasting is successful, the dimensions of your arrays need to be compatible. Two dimensions are compatible when they are equal. Consider the following example:
- Two dimensions are also compatible when one of them is 1:
Note that if the dimensions are not compatible, you will get a
ValueError.
Tip: also test what the size of the resulting array is after you have done the computations! You’ll see that the size is actually the maximum size along each dimension of the input arrays.
In other words, you see that the result of
x-y gives an array with shape
(3,4):
y had a shape of
(4,) and
x had a shape of
(3,4). The maximum size along each dimension of
x and
y is taken to make up the shape of the new, resulting array.
- Lastly, the arrays can only be broadcast together if they are compatible in all dimensions. Consider the following example:
You see that, even though
x and
y seem to have somewhat different dimensions, the two can be added together.
That is because they are compatible in all dimensions:
- Array
xhas dimensions 3 X 4,
- Array
yhas dimensions 5 X 1 X 4
Since you have seen above that dimensions are also compatible if one of them is equal to 1, you see that these two arrays are indeed a good candidate for broadcasting!
What you will notice is that in the dimension where
y has size 1 and the other array has a size greater than 1 (that is, 3), the first array behaves as if it were copied along that dimension.
Note that the shape of the resulting array will again be the maximum size along each dimension of
x and
y: the dimension of the result will be
(5,3,4)
In short, if you want to make use of broadcasting, you will rely a lot on the shape and dimensions of the arrays with which you’re working.
But what if the dimensions are not compatible?
What if they are not equal or if one of them is not equal to 1?
You’ll have to fix this by manipulating your array! You’ll see how to do this in one of the next sections.
How Do Array Mathematics Work?
You’ve seen that broadcasting is handy when you’re doing arithmetic operations. In this section, you’ll discover some of the functions that you can use to do mathematics with arrays.
As such, it probably won’t surprise you that you can just use
+,
-,
*,
/ or
% to add, subtract, multiply, divide or calculate the remainder of two (or more) arrays. However, a big part of why NumPy is so handy, is because it also has functions to do this. The equivalent functions of the operations that you have seen just now are, respectively,
np.add(),
np.subtract(),
np.multiply(),
np.divide() and
np.remainder().
You can also easily do exponentiation and taking the square root of your arrays with
np.exp() and
np.sqrt(), or calculate the sines or cosines of your array with
np.sin() and
np.cos(). Lastly, its’ also useful to mention that there’s also a way for you to calculate the natural logarithm with
np.log() or calculate the dot product by applying the
dot() to your array.
Try it all out in the DataCamp Light chunk below.
Just a tip: make sure to check out first the arrays that have been loaded for this exercise!
Remember how broadcasting works? Check out the dimensions and the shapes of both
x and
y in your IPython shell. Are the rules of broadcasting respected?
But there is more.
Besides all of these functions, you might also find it useful to know that there are mechanisms that allow you to compare array elements. For example, if you want to check whether the elements of two arrays are the same, you might use the
== operator. To check whether the array elements are smaller or bigger, you use the
< or
> operators.
This all seems quite straightforward, yes?
However, you can also compare entire arrays with each other! In this case, you use the
np.array_equal() function. Just pass in the two arrays that you want to compare with each other and you’re done.
Note that, besides comparing, you can also perform logical operations on your arrays. You can start with
np.logical_or(),
np.logical_not() and
np.logical_and(). This basically works like your typical OR, NOT and AND logical operations;
In the simplest example, you use OR to see whether your elements are the same (for example, 1), or if one of the two array elements is 1. If both of them are 0, you’ll return
FALSE. You would use AND to see whether your second element is also 1 and NOT to see if the second element differs from 1.
Test this out in the code chunk below:
How To Subset, Slice, And Index Arrays
Besides mathematical operations, you might also consider taking just a part of the original array (or the resulting array) or just some array elements to use in further analysis or other operations. In such case, you will need to subset, slice and/or index your arrays.
These operations are very similar to when you perform them on Python lists. If you want to check out the similarities for yourself, or if you want a more elaborate explanation, you might consider checking out DataCamp’s Python list tutorial.
If you have no clue at all on how these operations work, it suffices for now to know these two basic things:
- You use square brackets
[]as the index operator, and
- Generally, you pass integers to these square brackets, but you can also put a colon
:or a comination of the colon with integers in it to designate the elements/rows/columns you want to select.
Besides from these two points, the easiest way to see how this all fits together is by looking at some examples of subsetting:
Something a little bit more advanced than subsetting, if you will, is slicing. Here, you consider not just particular values of your arrays, but you go to the level of rows and columns. You’re basically working with “regions” of data instead of pure “locations”.
You can see what is meant with this analogy in these code examples:
You’ll see that, in essence, the following holds:
a[start:end] # items start through the end (but the end is not included!) a[start:] # items start through the rest of the array a[:end] # items from the beginning through the end (but the end is not included!)
Lastly, there’s also indexing. When it comes to NumPy, there are boolean indexing and advanced or “fancy” indexing.
(In case you’re wondering, this is true NumPy jargon, I didn’t make the last one up!)
First up is boolean indexing. Here, instead of selecting elements, rows or columns based on index number, you select those values from your array that fulfill a certain condition.
Putting this into code can be pretty easy:
Note that, to specify a condition, you can also make use of the logical operators
| (OR) and
& (AND). If you would want to rewrite the condition above in such a way (which would be inefficient, but I demonstrate it here for educational purposes :)), you would get
bigger_than_3 = (my_3d_array > 3) | (my_3d_array == 3).
With the arrays that have been loaded in, there aren’t too many possibilities, but with arrays that contain for example, names or capitals, the possibilities could be endless!
When it comes to fancy indexing, that what you basically do with it is the following: you pass a list or an array of integers to specify the order of the subset of rows you want to select out of the original array.
Does this sound a little bit abstract to you?
No worries, just try it out in the code chunk below:
Now, the second statement might seem to make less sense to you at first sight. This is normal. It might make more sense if you break it down:
- If you just execute
my_2d_array[[1,0,1,0]], the result is the following:
array([[5, 6, 7, 8], [1, 2, 3, 4], [5, 6, 7, 8], [1, 2, 3, 4]])
[:,[0,1,2,0]], is tell you that you want to keep all the rows of this result, but that you want to change the order of the columns around a bit. You want to display the columns 0, 1, and 2 as they are right now, but you want to repeat column 0 as the last column instead of displaying column number 3. This will give you the following result:
array([[5, 6, 7, 5], [1, 2, 3, 1], [5, 6, 7, 5], [1, 2, 3, 1]])
Advanced indexing clearly holds no secrets for you any more!
How To Ask For Help
As a short intermezzo, you should know that you can always ask for more information about the modules, functions or classes that you’re working with, especially becauseNumPy can be quite something when you first get started on working with it.
Asking for help is fairly easy.
You just make use of the specific help functions that
numpy offers to set you on your way:
- Use
lookfor()to do a keyword search on docstrings. This is specifically handy if you’re just starting out, as the ‘theory’ behind it all might fade in your memory. The one downside is that you have to go through all of the search results if your query is not that specific, as is the case in the code example below. This might make it even less overviewable for you.
- Use
info()for quick explanations and code examples of functions, classes, or modules. If you’re a person that learns by doing, this is the way to go! The only downside about using this function is probably that you need to be aware of the module in which certain attributes or functions are in. If you don’t know immediately what is meant by that, check out the code example below.
You see, both functions have their advantages and disadvantages, but you’ll see for yourself why both of them can be useful: try them out for yourself in the DataCamp Light code chunk below!
Note that you indeed need to know that
dtype is an attribute of
ndarray. Also, make sure that you don’t forget to put
np in front of the modules, classes or terms you’re asking information about, otherwise you will get an error message like this:
Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'ndarray' is not defined
You now know how to ask for help, and that’s a good thing. The next topic that this NumPy tutorial covers is array manipulation.
Not that you can not overcome this topic on your own, quite the contrary!
But some of the functions might raise questions, because, what is the difference between resizing and reshaping?
And what is the difference between stacking your arrays horizontally and vertically?
The next section is all about answering these questions, but if you ever feel in doubt, feel free to use the help functions that you have just seen to quickly get up to speed.
How To Manipulate Arrays
Performing mathematical operations on your arrays is one of the things that you’ll be doing, but probably most importantly to make this and the broadcasting work is to know how to manipulate your arrays.
Below are some of the most common manipulations that you’ll be doing.
How To Transpose Your Arrays
What transposing your arrays actually does is permuting the dimensions of it. Or, in other words, you switch around the shape of the array. Let’s take a small example to show you the effect of transposition:
Tip: if the visual comparison between the array and its transposed version is not entirely clear, inspect the shape of the two arrays to make sure that you understand why the dimensions are permuted.
Note that there are two transpose functions. Both do the same; There isn’t too much difference. You do have to take into account that
T seems more of a convenience function and that you have a lot more flexibility with
np.transpose(). That’s why it’s recommended to make use of this function if you want to more arguments.
All is well when you transpose arrays that are bigger than one dimension, but what happens when you just have a 1-D array? Will there be any effect, you think?
Try it out for yourself in the code chunk below. Your 1-D array has already been loaded in:
You’re absolutely right! There is no effect when you transpose a 1-D array!
Reshaping Versus Resizing Your Arrays
You might have read in the broadcasting section that the dimensions of your arrays need to be compatible if you want them to be good candidates for arithmetic operations. But the question of what you should do when that is not the case, was not answered yet.
Well, this is where you get the answer!
What you can do if the arrays don’t have the same dimensions, is resize your array. You will then return a new array that has the shape that you passed to the
np.resize() function. If you pass your original array together with the new dimensions, and if that new array is larger than the one that you originally had, the new array will be filled with copies of the original array that are repeated as many times as is needed.
However, if you just apply
np.resize() to the array and you pass the new shape to it, the new array will be filled with zeros.
Let’s try this out with an example:
Besides resizing, you can also reshape your array. This means that you give a new shape to an array without changing its data. The key to reshaping is to make sure that the total size of the new array is unchanged. If you take the example of array
x that was used above, which has a size of 3 X 4 or 12, you have to make sure that the new array also has a size of 12.
Psst… If you want to calculate the size of an array with code, make sure to use the
size attribute:
x.size or
x.reshape((2,6)).size:
If all else fails, you can also append an array to your original one or insert or delete array elements to make sure that your dimensions fit with the other array that you want to use for your computations.
Another operation that you might keep handy when you’re changing the shape of arrays is
ravel(). This function allows you to flatten your arrays. This means that if you ever have 2D, 3D or n-D arrays, you can just use this function to flatten it all out to a 1-D array.
Pretty handy, isn’t it?
How To Append Arrays
When you append arrays to your original array, they are “glued” to the end of that original array. If you want to make sure that what you append does not come at the end of the array, you might consider inserting it. Go to the next section if you want to know more.
Appending is a pretty easy thing to do thanks to the NumPy library; You can just make use of the
np.append().
Check how it’s done in the code chunk below. Don’t forget that you can always check which arrays are loaded in by typing, for example,
my_array in the IPython shell and pressing ENTER.
Note how, when you append an extra column to
my_2d_array, the
axis is specified. Remember that axis 1 indicates the columns, while axis 0 indicates the rows in 2-D arrays.
How To Insert And Delete Array Elements
Next to appending, you can also insert and delete array elements. As you might have guessed by now, the functions that will allow you to do these operations are
np.insert() and
np.delete():
How To Join And Split Arrays
You can also ‘merge’ or join your arrays. There are a bunch of functions that you can use for that purpose and most of them are listed below.
Try them out, but also make sure to test out what the shape of the arrays is in the IPython shell. The arrays that have been loaded are
x,
my_array,
my_resized_array and
my_2d_array.
You’ll note a few things as you go through the functions:
- The number of dimensions needs to be the same if you want to concatenate two arrays with
np.concatenate(). As such, if you want to concatenate an array with
my_array, which is 1-D, you’ll need to make sure that the second array that you have, is also 1-D.
- With
np.vstack(), you effortlessly combine
my_arraywith
my_2d_array. You just have to make sure that, as you’re stacking the arrays row-wise, that the number of columns in both arrays is the same. As such, you could also add an array with shape
(2,4)or
(3,4)to
my_2d_array, as long as the number of columns matches. Stated differently, the arrays must have the same shape along all but the first axis. The same holds also for when you want to use
np.r[].
- For
np.hstack(), you have to make sure that the number of dimensions is the same and that the number of rows in both arrays is the same. That means that you could stack arrays such as
(2,3)or
(2,4)to
my_2d_array, which itself as a shape of
(2,4). Anything is possible as long as you make sure that the number of rows matches. This function is still supported by NumPy, but you should prefer
np.concatenate()or
np.stack().
- With
np.column_stack(), you have to make sure that the arrays that you input have the same first dimension. In this case, both shapes are the same, but if
my_resized_arraywere to be
(2,1)or
(2,), the arrays still would have been stacked.
np.c_[]is another way to concatenate. Here also, the first dimension of both arrays needs to match.
When you have joined arrays, you might also want to split them at some point. Just like you can stack them horizontally, you can also do the same but then vertically. You use
np.hsplit() and
np.vsplit(), respectively:
What you need to keep in mind when you’re using both of these split functions is probably the shape of your array. Let’s take the above case as an example:
my_stacked_array has a shape of
(2,8). If you want to select the index at which you want the split to occur, you have to keep the shape in mind.
How To Visualize NumPy Arrays
Lastly, something that will definitely come in handy is to know how you can plot your arrays. This can especially be handy in data exploration, but also in later stages of the data science workflow, when you want to visualize your arrays.
With
np.histogram()
Contrary to what the function might suggest, the
np.histogram() function doesn’t draw the histogram but it does compute the occurrences of the array that fall within each bin; This will determine the area that each bar of your histogram takes up.
What you pass to the
np.histogram() function then is first the input data or the array that you’re working with. The array will be flattened when the histogram is computed.
You’ll see that as a result, the histogram will be computed: the first array lists the frequencies for all the elements of your array, while the second array lists the bins that would be used if you don’t specify any bins.
If you do specify a number of bins, the result of the computation will be different: the floats will be gone and you’ll see all integers for the bins.
There are still some other arguments that you can specify that can influence the histogram that is computed. You can find all of them here.
But what is the point of computing such a histogram if you can’t visualize it?
Visualization is a piece of cake with the help of Matplotlib, but you don’t need
np.histogram() to compute the histogram.
plt.hist() does this for itself when you pass it the (flattened) data and the bins:
# Import numpy and matplotlib import numpy as np import matplotlib.pyplot as plt # Construct the histogram with a flattened 3d array and a range of bins plt.hist(my_3d_array.ravel(), bins=range(0,13)) # Add a title to the plot plt.title('Frequency of My 3D Array Elements') # Show the plot plt.show()
The above code will then give you the following (basic) histogram:
Using
np.meshgrid()
Another way to (indirectly) visualize your array is by using
np.meshgrid(). The problem that you face with arrays is that you need 2-D arrays of x and y coordinate values. With the above function, you can create a rectangular grid out of an array of x values and an array of y values: the
np.meshgrid() function takes two 1D arrays and produces two 2D matrices corresponding to all pairs of (x, y) in the two arrays. Then, you can use these matrices to make all sorts of plots.
np.meshgrid() is particularly useful if you want to evaluate functions on a grid, as the code below demonstrates:
# Import NumPy and Matplotlib import numpy as np import matplotlib.pyplot as plt # Create an array points = np.arange(-5, 5, 0.01) # Make a meshgrid xs, ys = np.meshgrid(points, points) z = np.sqrt(xs ** 2 + ys ** 2) # Display the image on the axes plt.imshow(z, cmap=plt.cm.gray) # Draw a color bar plt.colorbar() # Show the plot plt.show()
The code above gives the following result:
Beyond Data Analysis with NumPy
Congratulations, you have reached the end of the NumPy tutorial!
You have covered a lot of ground, so now you have to make sure to retain the knowledge that you have gained. Don’t forget to get your copy of DataCamp’s NumPy cheat sheet to support you in doing this!
After all this theory, it’s also time to get some more practice with the concepts and techniques that you have learned in this tutorial. One way to do this is to go back to the scikit-learn tutorial and start experimenting with further with the data arrays that are used to build machine learning models.
If this is not your cup of tea, check again whether you have downloaded Anaconda. Then, get started with NumPy arrays in Jupyter with this Definitive Guide to Jupyter Notebook. Also make sure to check out this Jupyter Notebook, which also guides you through data analysis in Python with NumPy and some other libraries in the interactive data science environment of the Jupyter Notebook.
Lastly, consider checking out DataCamp’s courses on data manipulation and visualization. Especially our latest courses in collaboration with Continuum Analytics will definitely interest you! Take a look at the Manipulating DataFrames with Pandas or the Pandas Foundations courses. | https://www.datacamp.com/community/tutorials/python-numpy-tutorial | CC-MAIN-2018-09 | refinedweb | 7,049 | 68.6 |
horus93871 Points
Curious about why using len to count keys vs values in a dict requires a different approach.
So I'm working on the last 5 part challenge for this section, and the first thing it asks was to create a function named num_teachers that would take a single argument, a dictionary of teachers the background tester would presumably run through it as with most challenges, and wanted me to return (if i remember right) a total number of teachers (keys) in the dict.
My brain wasn't working for a bit late last night so I slept on it and found the solution rather quickly this morning, as usual, when I'm tired I tend to overthink things and instead wound up with a much shorter solution in the form of
==========
tt = {} def num_teachers(tt): return len(tt.keys())
==========
Ok, so far so good, then the second one seemed like it was asking the same question, except wanting me to return the total # of courses (values) for all the teachers. So, obviously I'm thinking (this is too easy, why would they ask me the same question twice in a row instead of just having me create both the functions in one step? But I figured, hey, maybe your overthinking it, just try it anyway.
==========
tt = {} def num_teachers(tt): return len(tt.keys()) def num_courses(tt): return len(tt.values())
==========
no such luck, wound up with the same problem I had when I tried my first solution to the first step which was that the function didn't return the right number of teachers, after a little searching online I came across a solution using 'sum' and 'map', but I can't for the life of me remember going over map at least. I'm pretty sure I remember sum coming up at one point in the earlier lessons, but from what I've read map is part and parcel for tuples which is the next section over.
Anyway, that solution didn't take long to find and it looked like this
=============
tt = {} def num_teachers(tt): return len(tt.keys()) def num_courses(tt): return sum(map(len, tt.values()))
=============
And I was wondering why it'd be necessary to approach it like that rather than simply doing a len for the dict values like I can with the keys? Would it run into a problem with keys that hold multiple values, instead just returning the count as 1 each time without using sum and map? My gut is telling me that's probably the case, but i'd like someone more learned to weigh in on the matter.
[MOD: added ```python formatting -cf]
1 Answer
andrenTreehouse Moderator 28,169 Points
The issue does indeed stem from the fact that the value contains multiple items, while the key only contains one. The key represents one teacher, so by counting the number of keys you find out the number of teachers. The values on the other hand are lists of courses, one list does not equal one course. So counting the number of values does not give you the number of courses.
It is extremely rare for a key to contain multiple elements, but it is in theory possible (using a tuple or something) and yes you would run into the same type of issue.
Also while using
map works it is not the only solution. There are a number of solutions that would not have required using unknown methods, for example you could have looped through each of the values and counted the number of items in each value. Like this:
def num_courses(tt): courses = 0 for value in tt.values(): courses += len(value) return courses | https://teamtreehouse.com/community/curious-about-why-using-len-to-count-keys-vs-values-in-a-dict-requires-a-different-approach | CC-MAIN-2018-51 | refinedweb | 617 | 63.63 |
#include <OP_PostIt.h>
Definition at line 47 of file OP_PostIt.h.
Create a network box with the specified name. If a duplicate name is passed in, it will be altered to make it unique in the network.
Reimplemented from OP_NetworkBoxItem.
Definition at line 69 of file OP_PostIt.h.
Reimplemented from OP_NetworkBoxItem.
Definition at line 67 of file OP_PostIt.h.
Reimplemented from OP_NetworkBoxItem.
Definition at line 142 of file OP_PostIt 92 of file OP_PostIt.h.
Our children should implement this and return what type of item they are.
Implements OP_NetworkBoxItem.
Get whether this box is currently minimized.
Returns the network that is our parent.
Implements OP_NetworkBoxItem.
Returns true if this box is currently picked.
Implements OP_NetworkBoxItem.
Definition at line 95 of file OP_PostIt.h.
Definition at line 85 of file OP_PostIt.h.
Load the contents of the stream into the attributes of this post-it note; if binary is nonzero, load in binary mode. Loading doesn't send the OP_POSTIT_NOTE_CREATED message to the network; the caller is responsible for doing that.
Each netbox has a unique id. This is used primarily for undos, as we also keep a list of notes in order of id, so lookup by id becomes quite quick.
Used by opscript, this outputs the sequence of hscript commands necessary to recreate this sticky note.
The colour used to display our background. Returns true if the color changed.
Reimplemented from OP_NetworkBoxItem..
Get and set the position of this item. Units are absolute, as opposed to relative units found in OPUI.
Implements OP_NetworkBoxItem.
Definition at line 204 of file OP_PostIt.h. | http://www.sidefx.com/docs/hdk11.1/class_o_p___post_it.html | CC-MAIN-2013-48 | refinedweb | 262 | 61.63 |
Ben Franksen wrote: > If I want to import a module I have to decide on /one/ module name. Since I > cannot know at which point in the hierarchy users might have exposed > modules from other packages, I must chose the default 'root' point. So, > this will not help library authors who want to e.g. import the 'same' > module from either mtl or transformers. No, that misses a big point in the proposal. Yes, every compiled module must have only one "name", but that name does not need to be the same "name" that is used in Haskell code. This is what it means to separate provenance from reference. To make this more concrete, consider the installed package: libfoo.cabal: ... Build-depends: base (>= 3.0 && < 4.0) at Base exposed-modules: Data.Foo ... Data/Foo.hs: {-# LANGUAGE NoImplicitPrelude #-} package Data.Foo where import Base.Prelude ... And consider the client package we are compiling: libbar.cabal: ... Build-depends: base (>= 3.0 && < 4.0) at Elsewhere, libfoo at Foo exposed-modules: Control.Bar ... Control/Bar.hs: {-# LANGUAGE NoImplicitPrelude #-} package Control.Bar where import Elsewhere.Prelude import Foo.Data.Foo ... Still with me? Now, when we compiled base-3.5.0 we compiled the base-3.5.0:Prelude module. Once we've compiled it we need to give it some globally unique name so that we know to refer to exactly some byte-offset into some file located on some sector of some disk. What this name actually looks like is irrelevant. We could call the compiled module "base-3.5.0:Prelude" or we could call it "0xDEADBEEF". If we wanted to avoid name-/versionspace clashing up at the package layer, then we may prefer something like the latter; but for this discussion I'll stick with the former for simplicity. So before we compile libbar, we have the following compiled modules available: base-3.5.0:Prelude libfoo-0.0.0:Data.Foo The linking/reference process can be considered like a dialogue between the source code and the compiler (or between the compiler and the package-manager, if you prefer. The dialogue for compiling libbar-42:Control.Bar will look something like this: Code: set LANGUAGE NoImplicitPrelude GHC: okay. Code: call me libbar-42:Control.Bar GHC: righto, libbar-42:Control.Bar Code: I need something called Elsewhere.Prelude GHC: okay, just a sec. GHC: hey pkg! PKG: j0, wassup dawg GHC: I need something called Elsewhere.Prelude PKG: I have that at 0xDEADBEEF GHC: what? PKG: Oh, I mean I have that at base-3.5.0:Prelude GHC: thanks. /GHC memorizes Elsewhere.Prelude = base-3.5.0:Prelude GHC: hey libbar-42:Control.Bar, you still there? Code: yeah GHC: I found Elsewhere.Prelude Code: thanks ... Code: I need to get the type of Elsewhere.Prelude.curry GHC: okay, just a sec. GHC: hey pkg, what's the type of base-3.5.0:Prelude.curry ? PKG: base-3.5.0:Prelude.curry :: ((a, b) -> c) -> a -> b -> c GHC: hey libbar-42:Control.Bar, Elsewhere.Prelude.curry :: ((a, b) -> c) -> a -> b -> c ... /Code leaves #haskell /GHC forgets module mappings /GHC waits for Code to join #haskell Naturally GHC needs to be in on the joke and needs to be aware of both "names" for the same compiled module. But this is no different than what we already have. The module names used in Haskell code do not refer to the version of the module they need, and again they shouldn't have to. When the code asks for Prelude, it's up to GHC and PKG to determine which version of the Prelude should be linked to the code. 
The only thing that changes in this proposal is that PKG can have a more sophisticated way of mapping Haskell module names into compiled module object files.

> IMO it makes much more sense to let client packages decide from where in the
> module hierarchy they want to import modules from another package, rather
> than forcing users to decide this globally per installation.

Right. For each package (or compilation unit), the user/client constructs a map from the compiled module object files to the Haskell module names. The namespace that each package sees is only a fabrication, because the Haskell module names are rewritten into compiled module object names in the Core code GHC produces. So every package can make up their own independent mapping.

> Thus, grafting should not be done when exposing packages, but rather when
> actually using them. Your examples above would become
>
> ghc -package libfoo-0.0.0 at Zot ...
>
> resp.
>
> ghc -package libfoo-0.0.0:Control.Bar at Quux
>
> This would also better play with the way cabal does things: cabal currently
> ignores hidden/expoosed status of packages; instead it hides everything and
> then explicitly 'imports' exact versions using the -package option. With a
> few tweaks to the cabal file syntax, we could easily declare package 'mount
> points' (even for subtrees) when declaring the dependent packages and this
> would be tranformed to the ghc command line syntax above.

Six of one... :) As far as the proposal goes, the only important bit is that the names that Code uses are different than the names GHC/PKG use. Whether the namespace mapping is done by ghc-pkg, ghc, ghci, or whatever doesn't really matter since they're all on the same side of the fence. At that point it's just delegation of responsibility. The reason I was singling out ghc-pkg as the PKG is because (so far as I know) that's its current purpose.

When Cabal runs, it needs to sanitize the namespace mapping. It does this by first hiding all packages, and then exposing only the ones the *.cabal file indicates are necessary (apparently via flags to ghc rather than calls to ghc-pkg). Right now, all packages are exposed at the same root in the module namespace; the extension is just to say that packages (and subtrees of packages) can be exposed wherever we want. After Cabal is done, it restores whatever mapping was in place before it started sanitizing things.

From what (little) I know of how Cabal works under the covers, it makes sense to me that the "exposure" step is the right place to do grafting. If the map from Haskell module names to the compiled modules in exposed packages is already separate from the exposure process, then of course grafting should be done wherever that mapping is kept. If the real purpose of ghc-pkg is to give a system-default module namespace for ghc/ghci when commandline flags are not set, then sure ghc/ghci will need new flags. Of course ghc-pkg will also need new flags since it too is constructing a module namespace.

--
Live well,
~wren
...for spaces!
This is another thing where I beat the code into submission.
I just want someone to tell me if this is "good programming practice" - or have I gone off on a tangent.
Is the use of the member access operator OK?
All this program does is take a text file and strip the spaces out
// this will open a file, read all the characters
// and display only letters and punctuation
#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    cout << "Enter a filename: ";
    char filename[80];
    cin >> filename;

    ifstream file(filename);
    if (!file)
    {
        cout << "File doesn't exist!" << endl;
    }
    else if (file)
    {
        char ch;
        while (file.get(ch) != 0)
        {
            if (ch == ' ')
                file.get(ch).ignore(1, ' ');
            else
                cout << ch;
        }
    }
    file.close();
    return 0;
}
#include <ColorOcTree.h>
Static member object which ensures that this OcTree's prototype ends up in the classIDMapping only once. You need this as a static member in any derived octree class in order to read .ot files through the AbstractOcTree factory. You should also call ensureLinking() once from the constructor.
Definition at line 182 of file ColorOcTree.h.
Definition at line 184 of file ColorOcTree.h.
Dummy function to ensure that MSVC does not drop the StaticMemberInitializer, which would otherwise cause this tree type to fail to register. Needs to be called from the constructor of this octree.
Definition at line 195 of file ColorOcTree.h. | https://docs.ros.org/en/lunar/api/octomap/html/classoctomap_1_1ColorOcTree_1_1StaticMemberInitializer.html | CC-MAIN-2022-27 | refinedweb | 102 | 60.51 |
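A rough sketch of how a derived octree typically embeds this pattern follows. The class name, resolution value, and constructor bodies below are illustrative only; see ColorOcTree.h in octomap for the real implementation.

#include <octomap/OcTreeNode.h>
#include <octomap/OccupancyOcTreeBase.h>

namespace octomap {

  // hypothetical derived octree type
  class MyOcTree : public OccupancyOcTreeBase<OcTreeNode> {
  public:
    MyOcTree(double resolution)
      : OccupancyOcTreeBase<OcTreeNode>(resolution) {
      // make sure the static initializer below is not dropped by MSVC
      myOcTreeMemberInit.ensureLinking();
    }

    MyOcTree* create() const { return new MyOcTree(resolution); }
    std::string getTreeType() const { return "MyOcTree"; }

  protected:
    // Registers a prototype of this tree with the AbstractOcTree factory
    // exactly once, so that .ot files containing this type can be read back.
    class StaticMemberInitializer {
    public:
      StaticMemberInitializer() {
        MyOcTree* tree = new MyOcTree(0.1);
        AbstractOcTree::registerTreeType(tree);
      }
      // dummy function; calling it from the constructor forces linking
      void ensureLinking() {}
    };

    static StaticMemberInitializer myOcTreeMemberInit;
  };

} // namespace octomap

// in the .cpp file, the static member object must be defined once:
// octomap::MyOcTree::StaticMemberInitializer octomap::MyOcTree::myOcTreeMemberInit;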
In this tutorial we will learn how to use the C# Lazy class to perform lazy loading in C# application development. The Lazy class allows us to load data on demand instead of loading it at initialization time.
Lazy<List<int>> list3Lazy = data.GetList3(); List<int> list3 = list3Lazy.Value;
Lazy loading is basically loading data when it's required, not when the object instance is created. Used correctly, the lazy loading technique can help boost application performance to some extent.
Think of a real-world business scenario: you want to create a client object along with the client's order list. In general you will probably have an order list property on the client object, so when a new client instance is created, the order list is loaded at the same time. If the order list is long, that increases execution time even though you may not need that data while accessing other client properties. This can be avoided by using the Lazy class, which lets you load the data only when it is actually required.
Now, to test whether the C# Lazy class really helps improve performance, we do the following exercise: create two methods that return the exact same amount of data, one as a simple list object and the other as a lazy list object.
public class data
{
    public static List<int> GetList1()
    {
        List<int> list1 = new List<int>();
        for (int i = 0; i <= 5000000; i++)
        {
            list1.Add(i);
        }
        return list1;
    }

    public static Lazy<List<int>> GetList3()
    {
        Lazy<List<int>> list3 = new Lazy<List<int>>();
        for (int i = 0; i <= 5000000; i++)
        {
            list3.Value.Add(i);
        }
        return list3;
    }
}
Now create two separate Stopwatch objects to understand how much time is consumed by each execution.
public static void displayData()
{
    Stopwatch st = new Stopwatch();
    st.Start();
    dobj = new data();
    List<int> list1 = data.GetList1();
    foreach (int i in list1)
    {
        //just executing the loop to check time duration
    }
    st.Stop();
    Console.WriteLine("List Time:" + st.Elapsed);
    st.Reset();

    Stopwatch st1 = new Stopwatch();
    st1.Start();
    dobj = new data();
    Lazy<List<int>> list3Lazy = data.GetList3();
    List<int> list3 = list3Lazy.Value;
    foreach (int i in list3)
    {
        //just executing the loop to check time duration
    }
    st1.Stop();
    Console.WriteLine("Lazy List Time:" + st1.Elapsed);
    st1.Reset();
}
You may be surprised by the execution times; the lazy list took about half the time of the normal list object.
Let's assume you have a Client class with a property called OrderItems, and while displaying client information we also want to keep the order details ready.

Now, instead of creating a plain list property, we can create a lazy list property of type Lazy<List<OrderItem>>, as in the example below.
public class Client
{
    public Lazy<List<OrderItem>> OrderItems { get; set; }
}
To set a value on the lazy list property, create a new Lazy<List<OrderItem>> and hand it a value factory (the Lazy<T> constructor takes a Func<T>, not the value itself).

List<OrderItem> items = dto.GetOrderList();
Client clientmodel = new Client();
clientmodel.OrderItems = new Lazy<List<OrderItem>>(() => items);
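Because the constructor argument is a delegate, the expensive call itself can also be deferred until the first read of .Value. A minimal sketch — here dto.GetOrderList() simply stands in for whatever data-access call your application uses:

Client clientmodel = new Client();
// nothing is loaded yet; the delegate runs only on the first access
clientmodel.OrderItems = new Lazy<List<OrderItem>>(() => dto.GetOrderList());
// the first read triggers the load; later reads reuse the cached list
List<OrderItem> orders = clientmodel.OrderItems.Value;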
By Alex Mungai Muchiri, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud's incentive program to encourage the sharing of technical knowledge and best practices within the cloud community..
You will need to be acquainted with the concepts and deployment of Kubernetes to follow this article. Some specific terms may require you to refer back to the earlier articles in this series.
Kubernetes Services employ TCP and UDP protocols for interaction. The other bit worth mentioning about primitives is the database configuration. Kubernetes does not expose database containers and caches to the public. Kubernetes uses a policy mechanism to expose such containers to other containers to avoid exposure of sensitive workloads to the public. However, APIs are exposed to the public to access services. This default configuration of the primitives improves security.
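One concrete form of that policy mechanism is a NetworkPolicy object, which limits which peers can reach a sensitive workload. The sketch below is only illustrative; the labels and port are made up and are not used elsewhere in this tutorial:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-backend-only
spec:
  podSelector:
    matchLabels:
      role: db            # the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: backend   # only these pods may connect
    ports:
    - protocol: TCP
      port: 5432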
On scaling workloads up and down, Kubernetes incorporates a dynamic implementation of the label primitive. Selectors can then easily discover running objects, including containers, which makes scaling very fast compared to heavier virtual machines. On the whole, this primitive configuration enables capabilities similar to PaaS.
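For example, once objects carry labels (such as the name: web label used by the manifests later in this tutorial), a single selector query can discover all of them:

kubectl get pods -l name=web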
If you have successfully followed How to Install and Deploy Kubernetes on Ubuntu 16.04, you can list the nodes and namespaces in your cluster with kubectl get nodes and kubectl get namespaces. For example:

kubectl get namespaces

Output
NAME                STATUS    AGE
default             Active    11m
kube-public         Active    11m
kube-system         Active    11m
stackpoint-system   Active    4m
Note: kubectl targets the default namespace if no other namespace is specified. Great, let us get an application launched!
We need the kubectl CLI to declare objects in YAML format and submit them to Kubernetes for processing. Let us create our first pod. Run the command below to create the sample pod definition file Sample-Pod.yaml:

nano Sample-Pod.yaml
Next, let us define our pod by adding the code below. It defines our pod as having a single container based on Nginx, using the TCP protocol over port 80. The name and env labels in the definition make it possible to identify and configure select pods.

Sample-Pod.yaml

apiVersion: "v1"
kind: Pod
metadata:
  name: web-pod
  labels:
    name: web
    env: dev
spec:
  containers:
  - name: myweb
    image: nginx
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
Create our Pod by running the following command:
kubectl create -f Sample-Pod.yaml

Output
pod "web-pod" created
Run the command below to verify our Pod was created
kubectl get pods

Output
NAME      READY     STATUS    RESTARTS   AGE
web-pod   1/1       Running   0          2m
We want to make our Pod accessible to the public. We shall see how to go about it in the next section.
You can expose Pods either internally or externally using Services. In our simple project, let us expose the Nginx web server pod publicly. Our preferred object of use is the NodePort, which uses an arbitrary port on a node. We begin by creating a Sample-Service.yaml file, which has the coded instructions to define the Nginx service.

Sample-Service.yaml

apiVersion: v1
kind: Service
metadata:
  name: web-svc
  labels:
    name: web
    env: dev
spec:
  selector:
    name: web
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 80
    protocol: TCP
We have created a Service that discovers all Pods labelled name: web within the same namespace; the association is defined fully by the selector. The Service has also been declared as type NodePort. The final step is to submit it to the cluster using kubectl:

kubectl create -f Sample-Service.yaml
You should get confirmation for the successful creation of the service in the output:
Output
service "web-svc" created
Use the command below to get the Pod's port:
kubectl get services

Output
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.3.0.1     <none>        443/TCP        28m
web-svc      NodePort    10.3.0.143   <none>        80:32096/TCP   38s
The output indicates that port 32096 carries the service. Accordingly, let's try working with one of the available nodes:
Use the Alibaba console to obtain the IP addresses of the worker nodes.
Next, make an HTTP request to one of the workers using a curl command on port 32096:

curl http://<worker-node-ip>:32096
The response should contain the home page of the Nginx web server
<!DOCTYPE html>
<html>
<head>
<title>Welcome to Nginx!</title>
...
Commercial support is available at
<a href="">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
We now have both a Pod and a Service declared; we shall look at replica sets in the next section.
Replica sets maintain the minimum required Pods running within the cluster. We are going to destroy the Pod we created and use the Replica Set to create three replacements.
Delete the Pod like so:
kubectl delete pod web-pod

Output
pod "web-pod" deleted
Declare a new Replica set to proceed to the next step. Defining one is similar to declaring a pod, with the only difference being the replica element defining the Pods it will run. It also contains metadata definition for ease of discovery as was the case with Pods.
We shall create a Sample-RS.yml file and add the code below:

Sample-RS.yml

apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: web-rs
  labels:
    name: web
    env: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      name: web
  template:
    metadata:
      labels:
        name: web
        env: dev
    spec:
      containers:
      - name: myweb
        image: nginx
        ports:
        - containerPort: 80

Save the changes and close the file.
Next, let's get the Replica Set defined:
kubectl create -f Sample-RS.yml

Output
replicaset "web-rs" created
Let us now search for our Pods:
kubectl get pods

Output
NAME           READY     STATUS    RESTARTS   AGE
web-rs-htb58   1/1       Running   0          8s
web-rs-khtld   1/1       Running   0          8s
web-rs-p5lzg   1/1       Running   0          8s
When a NodePort is used for Service access, requests are passed to one of these Pods managed by the Replica Set. Let us test our Replica Set's response by deleting one of the pods like so:
kubectl delete pod web-rs-p5lzg

Output
pod "web-rs-p5lzg" deleted
Run the command below again:
kubectl get pods

Output
NAME           READY     STATUS              RESTARTS   AGE
web-rs-htb58   1/1       Running             0          3m
web-rs-khtld   1/1       Running             0          3m
web-rs-fqh2f   0/1       ContainerCreating   0          3s
web-rs-p5lzg   1/1       Running             0          3m
web-rs-p5lzg   0/1       Terminating         0          3m
Kubernetes deletes the pod but creates a new one to maintain the required number of pods in our cluster. The next section will explore Deployments.
Deployments are easier to upgrade and patch compared to Pods and Replica Sets. That is the fundamental reason why you would want to use them to deploy containers. For instance, you can upgrade a running pod with Deployments but not with Replica Sets. The feature allows upgrades without downtime and enables PaaS-like capabilities. Let us first delete our replica set and then proceed to create a Deployment like so:
kubectl delete rs web-rs

Output
replicaset "web-rs" deleted
Create a new Sample-Deployment.yaml file and include the code below:

Sample-Deployment.yaml

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: web-dep
  labels:
    name: web
    env: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - name: myweb
        image: nginx
        ports:
        - containerPort: 80
Create the Deployment and view existing Deployments:

kubectl create -f Sample-Deployment.yaml

Output
deployment "web-dep" created
Get the deployments:
kubectl get deployments

Output
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
web-dep   3         3         3            3           2m
We specified the creation of three Pods, and there are three running Pods.
Get the pods like so:
kubectl get pods

Output
NAME                       READY     STATUS    RESTARTS   AGE
web-dep-8594f5c765-5wmrb   1/1       Running   0          3m
web-dep-8594f5c765-6cbsr   1/1       Running   0          3m
web-dep-8594f5c765-sczf8   1/1       Running   0          3m
We had configured a service before creating the Deployment. However, since they all have the same labels, it will still be able to use the newly created pods.
If you need to clean up, you can delete both the Service and Deployment like so:
kubectl delete deployment web-dep

Output
deployment "web-dep" deleted

kubectl delete service web-svc

Output
service "web-svc" deleted
The Kubernetes documentation contains further information on this subject.
This tutorial has built on what we discussed in the beginner's guide to Kubernetes. We have examined primitive configurations as well as the most important configurations that you are likely to encounter on Kubernetes. We have configured a web server using a pod, created a service, Replica set and deployment. The configurations we have studied in this tutorial are among the most fundamental when handling Kubernetes.
Interview with Mark Safronov, author of “Web App Development with Yii 2”
I’ve made it no mystery that I work with Yii and enjoy using it. I’m somewhat of an old school guy and don’t mind reading a book to get started. I like the way they normally give you a gradual introduction to something you want to gain knowledge about. It’s a more subtle way to start with something and you don’t immediately drown in the huge amount of information.
That’s the way I started on Yii 1.1 some years ago. Unfortunately, I tossed the book I used back then half way through; it went too slow for me. Going over simple things repeatedly made me lose my interest quite fast. I wanted to learn more, faster. I had a whole bunch of ideas in my head and I wanted to find out how to realize them.
We’re a few years down the road and with the introduction of Yii 2.0 I let myself be tempted yet again by a book before starting some serious work with Yii 2.0. With mixed experiences in mind I started on a copy of Web Application Development with Yii 2 and PHP and ended up pleasantly surprised.
Content
The structure of this 2.0 version is somewhat similar to the previous one (which caused Jeffrey Winesett’s name to be on the cover) but this one is entirely written by Mark Safronov.
Having had some experience with Yii, I was looking for the differences and an insight into how you build an application from nothing.
The book does this, as usual, by taking an example application and gradually extending it. It starts with a test-first approach. If you feel serious about your application and maintaining it long-term, you’ll probably agree that tests are a vital part. Luckily, this approach relaxes quite fast, not spelling out how to define a test every time something new is added.
The book covers all the key components of the framework; rendering, authentication, authorization, modules, behavior, extensions, events, grid view and route management. All the components are reviewed along with how to use them. It also dives into the working of the framework quite often, showing the logic in it and explaining in detail how they work.
It has a slight touch of humor every now and then, making it a very easy read.
I did, again, skip some pages right around modules and the behavior part, but that was more because I was very interested in seeing what was coming in the chapters after that. Ever since I finished the book and started on my first real Yii 2.0 project, I’ve already gone back to the book a few times. I came across a few new things and remembered some of the nice suggestions that were made for improvement. For me, that was proof enough that it was at least worth the read.
Interview
Being left with quite a good feeling about this book, I went to contact Mark Safronov for some Q&A on the book and Yii itself.
Can you tell us a bit about your background, who are you?
There’s not much to brag about. I am a full-time developer in Clevertech, and I am Russian, living in a country where horses ride YOU.
I am working on a website based on Yii 1.1 every day and have been doing it for more than two years already. This website is sufficiently large and complex to really test the system-building skills of anyone deeply involved in the development, with the requirements changing on daily basis. So I dare to consider myself knowing a thing or two about building and maintaining real-world Yii-based Web applications.
In the meantime, I maintained the opensource YiiBooster and rewrote from scratch the YiiBoilerplate project.
I have been poisoned by PHP for around 5 years already. I started my career in a Russian Web studio which made websites for its customers based on the dreaded 1C-Bitrix CMS. You westerners probably don’t know this Frankenstein of CMS-es but I certainly do regret knowing it.
Over the years, I switched my habits from reading sci-fi and fantasy novels to reading hardcore fundamental technology books like Working Effectively with Legacy Code, GooS, DDD and such. This has proven itself a lot more entertaining, if not enlightening.
How did you end up writing this book?
I have co-authored another title, “INSTANT Yii 1.1 Application Development Starter”. At that time, the publisher lost the original author (literally lost, they could not contact him after he sent them the drafts) and chose me, because they already had me as a reviewer for that title.
So, we already had some cooperation prior to this book and they probably decided that I write convincingly.
Why Yii 2.0, since the list of frameworks is quite long?
First of all, I got no other choice; the publisher gave me the offer specifically for a book about Yii. And Yii is the framework I am dealing with on a daily basis.
Also, version 2 is an obvious next step. I saw this as an opportunity to learn about the intricacies of its internals. On a project I am currently working on it's a common practice to delve into the source code of the framework itself, because you frequently need absolutely exact knowledge of how a method (like CController.render() or CWebUser.login()) works.
I do need to make a major disclaimer here. I am not biased towards Yii in any way. To be honest, I actually dislike it, but bear with it at the same time. I have no attachment and only a strictly professional relation to this framework. This certainly affected both the overall style of the book and the areas it covers.
What do you feel was the weak point of Yii 1.1?
The most important pain point of Yii was, and still is, the Service Locator, which is, incidentally, the central idiom of the whole framework. If you are accustomed to sticking calls to Yii::$app everywhere through your whole application, then I guarantee you, you will experience major butthurt later on, when your project grows. Even earlier if you try to auto-test it.

Yii 2 tried to overcome it by shipping the DI container inside, but it has almost no coverage in documentation, every tutorial assumes you do Yii::$app everywhere. Due to my enormous mental inertia, I have not covered yii\di\Container in the book, which I regret.
A second issue is its tight coupling to the classes. Most of the time, when you need to meddle with something built-in to Yii, you need to both subclass what you need and copy the old behavior from superclass.
Yii 2 has a lot of improvements here. But nevertheless; try to write the custom Column class which has automatic calculation of totals in the footer, without copy-pasting parts of the Column class.
The use of Yii::$app is quite promoted indeed. Why would using it hurt us in the long run exactly?
The reason is very simple: as long as you have a Yii::app()->something call in your class, you can't test it in isolation; it will be the integrated test, because you need to include the YiiBase class and, actually, your application config, autoloader and also you need to start the Yii application because you need your components attached and configured.
If you don’t see it as a pain, all right, but I personally think that even in terms of run time alone this is a totally failed approach. Gary Bernhardt says that even 100ms test runtime is too much and I usually prefer to listen to people of his caliber. And running time is not the only problem with such tight coupling.
What should we use, then? And how would using this influence automated tests?
Dependency Injection techniques, in particular, separating the creation of the application object graph from actually performing processing. As far as I know, people do this by using some kind of IoC container, but I have no actual first-hand experience working with them, as the project I am working on on a daily basis is too legacy to even try to use such things there.
I have already said above how it influences automated tests. In short, we have the Law of Demeter for years now, and sticking to it actually works (with this I do have first-hand experience) and leads to decoupled, maintainable, testable design.
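To make that testability point concrete, here is a minimal, hypothetical sketch — the class names are invented and don't come from the book. The first version reaches into the service locator; the second receives its collaborator explicitly and can be handed a fake connection in a unit test.

<?php
// Locator style: silently depends on a fully bootstrapped application.
class ReportGenerator
{
    public function generate()
    {
        // Cannot run without the whole app and its configured 'db' component.
        $rows = Yii::$app->db->createCommand('SELECT ...')->queryAll();
        // ...
    }
}

// Injected style: the dependency is explicit and replaceable.
class InjectedReportGenerator
{
    private $db;

    public function __construct($db)
    {
        $this->db = $db;
    }

    public function generate()
    {
        $rows = $this->db->createCommand('SELECT ...')->queryAll();
        // ...
    }
}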
You say that Yii 2 “tried to overcome” this. Formulating it that way implies that they didn’t succeed.
They would succeed by completely throwing the Yii singleton away and making everything build itself decoratively in the index.php entry point. This way we would actually work with Yii in a TDD manner. I dare to spell my personal opinion: I am reading professional literature every single day trying to catch up with all the community knowledge I missed while wasting time in university, and sometimes I think that the percentage of people that actually care about problems already solved is extremely low; the majority of the community is just spitting out code in code-fix and write-forget manner thinking that there's no way to do better. I haven't seen the techniques described in "DDD" by Eric Evans actually implemented ever and I find this completely unbelievable. I tried to use what I learned in the course of the "Web app development with Yii 2" but I consider myself failed at this approach, as I have too little experience working in such a way.
Since you have quite a bit of experience with Yii, what is your general feeling about what changed in version 2.0? And how does that influence the way we can work with it?
- I was very pleased with the improvements in the Yii 2 framework. It was a physical feeling of pleasure sometimes.
- They use namespaces, which is just great, as I have namespace seams now and can have classes with identical names through my project;
- There are interfaces now in critical places, like DataProviderInterface, which means that I can provide my own implementation without the need to subclass the whole huge CComponent.
- On the other hand, they subclass everything from the custom-made Object class, but this class is actually a nice invention – it's the stdClass with automatic getSomething() and setSomething() methods, configurable by the associative configuration array and some other features. At the same time, this class uses a static call to Yii::configure() right inside its constructor which, again, reverts the dependencies inside out.
- They finally extracted some entities as separate components, like Response and View. The Response concept is amazing, as we can finally implement sane CSV and JSON formatting in an idiomatic way and without crazy hacks involving sub classes again.
- They actually have a dependency injection container in the code base, so we can finally hope to decrease the usage of calls to Yii::$app through the code base.
This is just what I can recall right now. There is absolutely no need to keep sticking to Yii 1.1 if you can migrate. However, I am pretty sure nobody will be able to migrate existing Yii 1.1 application to Yii 2 without substantial rewriting of most of the application.
I noticed several times throughout the book that you refer to the ‘excellent’ documentation on the Yii website regarding some subjects. Did it help in the book research?
Honestly, I admire the amount and quality of documentation in Yii, regardless of my overall feelings about it. As a developer using the software which should be the root for my work, I usually need precise information about the tiniest details, for example, contracts needed to be followed when calling a particular function. Especially when we’re quite far from the type safety of compiled languages. When I praise its documentation I usually mean its self-documentation, its docblocks in code.
Yii 1.1 was enormously hard for me to understand back when I just started to understand OOP, but with time, all the pieces came into place finally and I figured out its structure. Yii 2 doesn’t have a lot of significant changes in architecture so it was really easy to research.
You make a firm statement about decoupling your code from the ActiveRecord ORM layer. Can you tell us why we would want to do this?
The main reason for this lies in the object-relation impedance mismatch. At one point in time I realized that my logical domain model does not fit into a single table and so, ActiveRecord suddenly became a limiting factor. More than that, the whole set of features stuffed into them is completely unnecessary in the domain layer, where the actual business-related work happens. Shortly after that I learned about the object-relation impedance mismatch, Repository pattern, CQRS, and became completely convinced that the place of ORM should be at the lowest possible layer of the architecture, even lower than the actual data access layer.
This book however is about Yii, not general development. I could not gloss over the problem of having people calling ActiveRecords right from Controllers so I have shown the simplest possible decoupling method in chapter 2.
I had a laugh when I read: “In the web-covered, PHP-based world there’s no habit to use such tools as XDebug, to peek into the individual variables, like beard-wearing C programmers do”. I started to wonder if you were serious about this. You don’t use a debugger at all?
I admit, I spelled out my personal and my team's opinion. There are probably a lot of people who use xdebug routinely and it helps them. I personally have found that when I am able to write automated tests, I simply don't need the debugger at all, even if I don't test first. Any data flow check I can manually perform with a debugger can be encoded as auto test at some level (ideally a unit one) and it will serve both as a regression test in future and the documentation example for others. A total win. And var_dump(); die() when you're hacking away is just quicker. Maybe I am just too dumb to use xdebug, though :)
Conclusion
In our conversation, Mark makes a side note which I'd like to close off with:
…if you’re still not sure about whether you need to migrate to Yii 2, be it from Yii 1.1 or from bare procedural PHP or from some CMS, then my answer is: if you are able to, then do it immediately. Yii 2 is an enormous leap in overall quality and expressibility. It builds upon virtually everything good which emerged in the PHP community over the recent years and you certainly don’t want to miss it.
I’d like to thank Mark for the time he took to answer my questions and the insights he gave into software development as a whole. | https://www.sitepoint.com/interview-mark-safronov-author-web-app-development-yii-2/ | CC-MAIN-2020-10 | refinedweb | 2,549 | 61.56 |
MooseX::MarkAsMethods - Mark overload code symbols as methods
This document describes version 0.15 of MooseX::MarkAsMethods - released May 30, 2012 as part of MooseX-MarkAsMethods.
MooseX::MarkAsMethods allows one to easily mark certain functions as Moose methods. This will allow other packages such as namespace::autoclean to operate without blowing away your overloads. After using MooseX::MarkAsMethods your overloads will be recognized by Class::MOP as being methods, and class extension as well as composition from roles with overloads will "just work".
By default we check for overloads, and mark those functions as methods.
If autoclean => 1 is passed to import on using this module, we will invoke namespace::autoclean to clear out non-methods.
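For illustration, a class combining overloads with this module might look roughly like the sketch below. The class name and methods are invented here, and this is not the module's official synopsis:

    package My::Value;

    use Moose;
    use MooseX::MarkAsMethods autoclean => 1;   # mark overloads as methods, then autoclean

    # this overload would normally be swept away by namespace::autoclean
    use overload
        '""'     => sub { shift->as_string },
        fallback => 1;

    has amount => (is => 'ro', isa => 'Num', required => 1);

    sub as_string {
        my $self = shift;
        return sprintf '%.2f', $self->amount;
    }

    __PACKAGE__->meta->make_immutable;

    1;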
See the "IMPLICATIONS FOR ROLES" section, above.
You almost certainly don't need or want to do this. CMOP/Moose are fairly good about determining what is and what isn't a method, but not perfect. Before using this method, you should pause and think about why you need to.
Please see those modules/websites for more information related to this module.
The development version is on github and may be cloned from git://github.com/RsrchBoy/moosex-markasmethods
 * Explanation of Use and Rationale For Procedures
 *
 * This class is intended to serve as a "test stand" for loading
 * images using the Apache Imaging (nee "Sanselan") package.
 * It performs a loop that loads a specified image multiple times
 * recording both memory and time required for the loading process.
 *
 * The notes given below explain some of the operations of this
 * test class and the reasons they were designed as they are.
 * This test is by no means to be considered the "last word" in how
 * to write a test application. The techniques described below have
 * demonstrated themselves useful and relevant in developing speed
 * enhancements for some of the Apache Imaging operations. But I know
 * I haven't thought of everything and am actually hoping that
 * someone will have suggestions for improvements.
 *
 * Prerequisites to Testing --------------------------------
 *
 * Whenever testing software performance, particularly timing,
 * there are a few important considerations that should be observed:
 *
 * a) Get a clean testing environment. In a modern computer
 *    system, there are dozens of processes running. To whatever
 *    degree possible, make sure you are not running competing
 *    processes that will consume computer resources and contaminate
 *    your timing results.
 *
 * b) Make sure you are testing what you think you are testing.
 *    This guideline is especially true when comparing two different
 *    approaches. Eliminate as many variables from the analysis
 *    as you possible can.
 *
 * c) When writing or modifying code, remember that no matter how
 *    obvious and self-evidentially superior a particular approach
 *    may seem, you don't really know if it's an improvement until you
 *    test it. If nothing else, the experience of computer programming
 *    teaches us to not to take anything for granted.
 *
 * d) Make sure the JVM is allowed a sufficiently large maximum
 *    memory size. Putting aside the fact that the default size
 *    for the maximum memory use of the JVM could be too small for
 *    handling test images, we also want to allocate a sufficiently
 *    large memory size to ensure that it doesn't get too close to
 *    the maximum size when performing timing tests. When the JVM
 *    detects that it is running up against the limits of its
 *    maximum memory size setting, it triggers garbage collection
 *    operations that can contaminate timing values.
 *    I usually try to set the maximum memory size to be at least
 *    twice what I think I will need. Traditionally, the memory
 *    size for the JVM is quite modest, perhaps 256 megabytes. You
 *    can alter that value by using something like the following
 *    specification (check your own JVM version for alternate values):
 *        -Xmx768M (maximum of 768 megabytes)
 *
 * What the Test Application Does and Why ----------------------
 *
 * 0. Functions ------------------
 * This class reads the path to a graphics file from the command
 * line and attempts to read it several times, measuring the time
 * required to read it. If you prefer, you may hardwire the code
 * to use a specific file. Take whatever approach is easiest... it
 * shouldn't affect the accuracy of the results.
 *
 * 1) Specific Instances of Classes to Be Tested -----------------
 * The Apache Imagine package includes a set of "parsers" for
 * reading different graphics file formats. The package also includes
 * a general-purpose class called "Imaging" that determines
 * the format an arbitrarily specified input
 * file and internally processes it using the appropriate parser.
 * However, unless you wish to test the performance of the Imaging
 * class itself, it is better to instantiate the proper subject-matter
 * parser explicitly in your code. In ordinary applications, it is often
 * more convenient to use the Imaging class and let it take care
 * of the details for you. But in that "taking care of details"
 * operation, the Imaging class loads and instantiates a large
 * number of different subject-matter parsers. These operations take
 * time, consume memory, and will color the results of any timing
 * and memory-use measurements you perform.
 *
 * 2) Repetition -----------------------------------------
 * The example output from this program included below, shows that
 * it performs multiple image-loading operations, recording both
 * the time required for each individual load time and the overall
 * average time required for most of the load operations (times are
 * in milliseconds).
 *
 *     image size: 10000 by 10000
 *     time to load image            memory
 *     time ms     avg ms       used mb    total mb
 *     15559.150    0.000    -- 384.845    397.035
 *      8544.926    0.000    -- 410.981    568.723
 *      8471.012 8471.012    -- 411.563    695.723
 *      8626.015 8548.513    -- 384.791    397.039
 *
 * Note that in the example output, the times for the first two load
 * operations are not included in the average. The reason for this is
 * that the first time a Java application performs some operation,
 * it is likely to take a little longer than for subsequent
 * operations due to the overhead for class loading and the
 * just-in-time (JIT) compiler. Unless you're specifically interested
 * in measuring the cost of start-up operations, the time they take
 * will contaminate any timing values for the functions of interest.
 * My experience under Windows is that the overhead only affects the
 * first time I load an image. In Linux, I've noticed that it can sometimes
 * carry over into the second. In either case, two loop iterations
 * has proven to be enough to isolate the start costs... but keep an eye
 * on the individual times to make sure nothing unwanted is happening.
 *
 * 3) Clean Up Memory Between Load Operations --------------------
 * This test application specifically invokes the garbage collection
 * method provided by the Java Runtime class. It then executes a one-second
 * sleep operation.
 * Recall that in Java, the JVM performs garbage collection in a
 * separate thread that runs whenever Java thinks it important to do so.
 * We want to do what we can to ensure that the garbage collection operation,
 * which consumes processor resources, doesn't do so while the application
 * is loading an image. To that end, the application invokes
 * Runtime.gc() and then allows the JVM one second to initiate and
 * complete the garbage collection. However, the .gc() method is, at best,
 * a "suggestion" that the JVM should run garbage collection.
 * It does not guarantee that the garbage collection will be executed and
 * completed immediately. Thus the relatively long one-second delay
 * between loop iterations.
 *
 * ---------------------------------------------------------------------
 * Good luck in using the class for testing.
 * Feel free to modify it to suit your own requirements... Nothing
 * I've done in this code is beyond improvement. I hope it works
 * well for you.
 * Gary Lucas -- May 2012.
 * ---------------------------------------------------------------
 *
 */
package org.apache.commons.imaging.examples;

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Formatter;
import java.util.HashMap;

import org.apache.commons.imaging.ImageReadException;
import org.apache.commons.imaging.common.bytesource.ByteSourceFile;
import org.apache.commons.imaging.formats.tiff.TiffImageParser;

/**
 * A "test stand" for evaluating the speed an memory use of different Apache
 * Imaging operations
 */
public class ApacheImagingSpeedAndMemoryTest {

    /**
     * Create an instance of the speed and memory test class and execute a test
     * loop for the specified file.
     *
     * @param args
     *            the path to the file to be processed
     */
    public static void main(final String[] args) {
        final String name = args[0];

        final ApacheImagingSpeedAndMemoryTest testStand = new ApacheImagingSpeedAndMemoryTest();

        testStand.performTest(name);
    }

    /**
     * Loads the input file multiple times, measuring the time and memory use
     * for each iteration.
     *
     * @param name
     *            the path for the input image file to be tested
     */
    private void performTest(final String name) {
        final File target = new File(name);
        final Formatter fmt = new Formatter(System.out);

        double sumTime = 0;
        int n = 1;
        for (int i = 0; i < 10; i++) {
            try {
                ByteSourceFile byteSource = new ByteSourceFile(target);
                // This test code allows you to test cases where the
                // input is processed using Apache Imaging's
                // ByteSourceInputStream rather than the ByteSourceFile.
                // You might also want to experiment with ByteSourceArray.
                //   FileInputStream fins = new FileInputStream(target);
                //   BufferedInputStream bins = new BufferedInputStream(fins);
                //   ByteSourceInputStream byteSource =
                //       new ByteSourceInputStream(bins, target.getName());

                // ready the parser (you may modify this code block
                // to use your parser of choice)
                HashMap<String, Object> params = new HashMap<>();
                TiffImageParser tiffImageParser = new TiffImageParser();

                // load the file and record time needed to do so
                final long time0 = System.nanoTime();
                BufferedImage bImage = tiffImageParser.getBufferedImage(
                        byteSource, params);
                final long time1 = System.nanoTime();

                // tabulate results
                final double testTime = (time1 - time0) / 1000000.0;
                if (i > 1) {
                    n = i - 1;
                    sumTime += testTime;
                }
                final double avgTime = sumTime / n;

                // tabulate the memory results. Note that the
                // buffered image, the byte source, and the parser
                // are all still in scope. This approach is taken
                // to get some sense of peak memory use, but Java
                // may have already started collecting garbage,
                // so there are limits to the reliability of these
                // statistics
                final Runtime r = Runtime.getRuntime();
                final long freeMemory = r.freeMemory();
                final long totalMemory = r.totalMemory();
                final long usedMemory = totalMemory - freeMemory;

                if (i == 0) {
                    // print header info
                    fmt.format("%n");
                    fmt.format("Processing file: %s%n", target.getName());
                    fmt.format(" image size: %d by %d%n%n", bImage.getWidth(),
                            bImage.getHeight());
                    fmt.format(" time to load image         --      memory%n");
                    fmt.format(" time ms      avg ms        --   used mb   total mb%n");
                }
                fmt.format("%9.3f %9.3f      -- %9.3f %9.3f %n", testTime,
                        avgTime, usedMemory / (1024.0 * 1024.0), totalMemory
                                / (1024.0 * 1024.0));
                bImage = null;
                byteSource = null;
                params = null;
                tiffImageParser = null;
            } catch (final ImageReadException ire) {
                ire.printStackTrace();
                System.exit(-1);
            } catch (final IOException ioex) {
                ioex.printStackTrace();
                System.exit(-1);
            } finally {
                fmt.close();
            }

            try {
                // sleep between loop iterations allows time
                // for the JVM to clean up memory. The Netbeans IDE
                // doesn't "get" the fact that we're doing this operation
                // deliberately and is apt offer hints
                // suggesting that the code should be modified
                Runtime.getRuntime().gc();
                Thread.sleep(1000);
            } catch (final InterruptedException iex) {
                // this isn't fatal, but shouldn't happen
                iex.printStackTrace();
            }

        }
    }
}
Björn Fahller recently wrote a blog post showing how to implement a compile-time quicksort in C++17. It’s a skillful demonstration that employs the evolving C++ feature set to write code that, while not quite concise, is more streamlined than previous iterations. He concludes with, “…the usefulness of this is very limited, but it is kind of cool, isn’t it?”
There’s quite a bit of usefulness to be found in evaluating code during compilation. The coolness (of which there is much) arises from the possibilities that come along with it. Starting from Björn’s example, this post sets out to teach a few interesting aspects of compile-time evaluation in the D programming language.
The article came to my attention from Russel Winder’s provocative query in the D forums, “Surely D can do better”, which was quickly resolved with a “No Story”-style answer by Andrei Alexandrescu. “There is nothing to do really. Just use standard library sort,” he quipped, and followed with code:
Example 1
void main() {
    import std.algorithm, std.stdio;
    enum a = [ 3, 1, 2, 4, 0 ];
    static b = sort(a);
    writeln(b);    // [0, 1, 2, 3, 4]
}
Though it probably won’t be obvious to those unfamiliar with D, the call to
sort really is happening at compile time. Let’s see why.
Compile-time code is runtime code
It’s true. There are no hurdles to jump over to get things running at compile time in D. Any compile-time function is also a runtime function and can be executed in either context. However, not all runtime functions qualify for CTFE (Compile-Time Function Evaluation).
The fundamental requirements for CTFE eligibility are that a function must be portable, free of side effects, contain no inline assembly, and the source code must be available. Beyond that, the only thing deciding whether a function is evaluated during compilation vs. at run time is the context in which it’s called.
The CTFE Documentation includes the following statement:
In order to be executed at compile time, the function must appear in a context where it must be so executed…
It then lists a few examples of where that is true. What it boils down to is this: if a function can be executed in a compile-time context where it must be, then it will be. When it can’t be excecuted (it doesn’t meet the CTFE requirements, for example), the compiler will emit an error.
Breaking down the compile-time sort
Take a look once more at Example 1.
void main() {
    import std.algorithm, std.stdio;
    enum a = [ 3, 1, 2, 4, 0 ];
    static b = sort(a);
    writeln(b);
}
The points of interest that enable the CTFE magic here are lines 3 and 4.
The enum in line 3 is a manifest constant. It differs from other constants in D (those marked immutable or const) in that it exists only at compile time. Any attempt to take its address is an error. If it's never used, then its value will never appear in the code.

When an enum is used, the compiler essentially pastes its value in place of the symbol name.

enum xinit = 10;
int x = xinit;
immutable yinit = 11;
int y = yinit;

Here, x is initialized to the literal 10. It's identical to writing int x = 10. The constant yinit is initialized with an int literal, but y is initialized with the value of yinit, which, though known at compile time, is not a literal itself. yinit will exist at run time, but xinit will not.

In Example 1, the static variable b is initialized with the manifest constant a. In the CTFE documentation, this is listed as an example scenario in which a function must be evaluated during compilation. A static variable declared in a function can only be initialized with a compile-time value. Trying to compile this:
Example 2
void main() {
    int x = 10;
    static y = x;
}
Will result in this:
Error: variable x cannot be read at compile time
Using a function call to initialize a static variable means the function must be executed at compile time and, therefore, it will be if it qualifies.
Those two pieces of the puzzle, the manifest constant and the static initializer, explain why the call to sort in Example 1 happens at compile time without any metaprogramming contortions. In fact, the example could be made one line shorter:
Example 3
void main() {
    import std.algorithm, std.stdio;
    static b = sort([ 3, 1, 2, 4, 0 ]);
    writeln(b);
}
And if there’s no need for
b to stick around at run time, it could be made an
enum instead of a static variable:
Example 4
void main() {
    import std.algorithm, std.stdio;
    enum b = sort([ 3, 1, 2, 4, 0 ]);
    writeln(b);
}
In both cases, the call to sort will happen at compile time, but they handle the result differently. Consider that, due to the nature of enums, the change will produce an equivalent of this: writeln([ 0, 1, 2, 3, 4 ]). Because the call to writeln happens at run time, the array literal might trigger a GC allocation (though it could be, and sometimes will be, optimized away). With the static initializer, there is no runtime allocation, as the result of the function call is used at compile time to initialize the variable.

It's worth noting that sort isn't directly returning a value of type int[]. Take a peek at the documentation and you'll discover that what it's giving back is a SortedRange. Specifically in our usage, it's a SortedRange!(int[], "a < b"). This type, like arrays in D, exposes all of the primitives of a random-access range, but additionally provides functions that only work on sorted ranges and can take advantage of their ordering (e.g. trisect). The array is still in there, but wrapped in an enhanced API.
To CTFE or not to CTFE
I mentioned above that all compile-time functions are also runtime functions. Sometimes, it's useful to distinguish between the two inside the function itself. D allows you to do that with the __ctfe variable. Here's an example from my book, 'Learning D'.
Example 5
string genDebugMsg(string msg) {
    if(__ctfe)
        return "CTFE_" ~ msg;
    else
        return "DBG_" ~ msg;
}

pragma(msg, genDebugMsg("Running at compile-time."));

void main() {
    writeln(genDebugMsg("Running at runtime."));
}
The msg pragma prints a message to stderr at compile time. When genDebugMsg is called as its second argument here, then inside that function the variable __ctfe will be true. When the function is then called as an argument to writeln, which happens in a runtime context, __ctfe is false.

It's important to note that __ctfe is not a compile-time value. No function knows if it's being executed at compile-time or at run time. In the former case, it's being evaluated by an interpreter that runs inside the compiler. Even then, we can make a distinction between compile-time and runtime values inside the function itself. The result of the function, however, will be a compile-time value when it's executed at compile time.
Complex compile-time validation
Now let's look at something that doesn't use an out-of-the-box function from the standard library.
A few years back, Andrei published 'The D Programming Language'. In the section describing CTFE, he implemented three functions that could be used to validate the parameters passed to a hypothetical linear congruential generator. The idea is that the parameters must meet a set of criteria, which he lays out in the book (buy it for the commentary -- it's well worth it), for generating the largest period possible. Here they are, minus the unit tests:
Example 6
// Implementation of Euclid’s algorithm
ulong gcd(ulong a, ulong b) {
    while (b) {
        auto t = b;
        b = a % b;
        a = t;
    }
    return a;
}

ulong primeFactorsOnly(ulong n) {
    ulong accum = 1;
    ulong iter = 2;
    for (; n >= iter * iter; iter += 2 - (iter == 2)) {
        if (n % iter) continue;
        accum *= iter;
        do n /= iter;
        while (n % iter == 0);
    }
    return accum * n;
}

bool properLinearCongruentialParameters(ulong m, ulong a, ulong c) {
    // Bounds checking
    if (m == 0 || a == 0 || a >= m || c == 0 || c >= m) return false;
    // c and m are relatively prime
    if (gcd(c, m) != 1) return false;
    // a - 1 is divisible by all prime factors of m
    if ((a - 1) % primeFactorsOnly(m)) return false;
    // If a - 1 is multiple of 4, then m is a multiple of 4, too.
    if ((a - 1) % 4 == 0 && m % 4) return false;
    // Passed all tests
    return true;
}
The key point this code was intended to make is the same one I made earlier in this post: properLinearCongruentialParameters is a function that can be used in both a compile-time context and a runtime context. There's no special syntax required to make it work, no need to create two distinct versions.

Want to implement a linear congruential generator as a templated struct with the RNG parameters passed as template arguments? Use properLinearCongruentialParameters to validate the parameters. Want to implement a version that accepts the arguments at run time? properLinearCongruentialParameters has got you covered. Want to implement an RNG that can be used at both compile time and run time? You get the picture.
For completeness, here's an example of validating parameters in both contexts.
Example 7
void main() {
    enum ulong m = 1UL << 32, a = 1664525, c = 1013904223;
    static ctVal = properLinearCongruentialParameters(m, a, c);
    writeln(properLinearCongruentialParameters(m, a, c));
}
If you've been paying attention, you'll know that ctVal must be initialized at compile time, so it forces CTFE on the call to the function. And the call to the same function as an argument to writeln happens at run time. You can have your cake and eat it, too.
Conclusion
Compile-Time Function Evaluation in D is both convenient and painless. It can be combined with other features such as templates (it's particularly useful with template parameters and constraints), string mixins and import expressions to simplify what might otherwise be extremely complex code, some of which wouldn't even be possible in many languages without a preprocessor. As a bonus, Stefan Koch is currently working on a more performant CTFE engine for the D frontend to make it even more convenient. Stay tuned here for more news on that front.
Thanks to the multiple members of the D community who reviewed this article.
6 thoughts on “Compile-Time Sort in D”
I have an even better implementation:
void main() {
import std.algorithm, std.stdio;
enum a = [ 0, 1, 2, 3, 4 ];
writeln(b);
}
Neat, huh?
Oops, that last line should be writeln(a)
That’s example 4 in the post 🙂 Note that, as I mention, array literals can trigger a run time allocation and
writeln(a)is identical to
writeln([0, 1, 2, 3, 4]).
Guh-reat! Now try it with a list of >10^6 elements! 😀
with newCTFE that should take 300-500 milliseconds.
I wonder how constexpr performs here…
I translated this post to Japanese. | https://dlang.org/blog/2017/06/05/compile-time-sort-in-d/ | CC-MAIN-2018-47 | refinedweb | 1,853 | 60.85 |
Search: Search took 0.02 seconds.
- 8 Jun 2008 9:19 PM
Yes, i know but i can't get it ! Can you help me ?
var bd = Ext.getBody();
var dialog = new Ext.ux.UploadDialog.Dialog({
// ******
height: 200,
width:...
- 28 May 2008 3:57 PM
Is there a way to config the extension to see the UploadDialog as a panel and not as a window ?
See you,
Fernando.
- 19 May 2008 1:24 PM
Hi, i'm trying to use your extension that looks very pretty but i have no luck with it. I'm not getting any information from the server. I read all the thread so, i know that there is implemented...
- 19 Oct 2007 2:22 PM
Yes, i realase that is a timing problem so i 'll have to analize how to use defer because i never used it.
The example starts to work after the first pass, obviously because images are already load,...
- 17 Oct 2007 5:29 AM
I try with your change, i get near what i want but for some reason the images get a strange effect, their seems to blink severeal times before change.
index.js
Ext.onReady(function() {
...
- 17 Oct 2007 5:26 AM
I tried this before but with no success.
- 16 Oct 2007 9:21 PM
Well, i code a script that get some photos in an array from a directory and then show the photos in a bucle one each time before some time. The problem is that the effect appear before the new image...
- 30 Aug 2007 11:20 PM
Hi, i get it. Now is working, i put the code and i comment it tomorrow because i'm too tired :D
Ext.onReady(function() {
var oParams = {
action: "verImagenes",
imagen: 1,
...
- 30 Aug 2007 7:21 PM
I tried your code, but i received "syntaxis error ()" in the FireBug.
- 30 Aug 2007 6:16 AM
Hi, i tried your code but it doesn't work. It still the same problem here. The params sended to the server are always the same and i want to be diferent in each request. In other words, i need to...
- 29 Aug 2007 3:17 PM
I think you don't understand what i'm trying to do. Let see what i send and what i really want to send.
Firebug (What i have)
[CODE]
1
- 29 Aug 2007 1:57 PM
Hi, i'm trying to use UpdateManager to update a div in my page. I want to send a value to the server, then the server respond with another value and i want to use this value and send it again to the...
- 27 Aug 2007 3:56 PM
Yes, i understand this. But is this the only way ? I don't want to rename my javascipts files to php. It seems awful, of course, perhaps it is the only way. I'm just asking :-)
Thanks
- 27 Aug 2007 12:18 PM
Hi folks, the truth is that from what I've been looking at, what was suggested to me can't be done:
var params = Ext.urlEncode({"imageId" : imgId, "num": <?=$_GET['number']?>});
Can anyone...
- 22 Aug 2007 5:03 PM
Is it correct to use PHP tags inside JavaScript?
var params = Ext.urlEncode({"imageId" : imgId, "num": <?=$_GET['number']?>});
Perhaps it is and the problem is that i don't like it yoo...
- 21 Aug 2007 4:00 PM
Thanks for all guys. Now is working but i still have some doubts.
example.php
<?php
$id = $_POST["imageId"];
if ($id == 1) {
$myData = array(success => false, msg => 'No se encontro la...
- 21 Aug 2007 7:53 AM
Guys, thank you a lot for your help. I was learning a lot.
Nullity: i tried your code and is working great.
BernardChhun: now i'm trying your code. despite the code of Nullity is working, i...
- 21 Aug 2007 6:04 AM
Hi, i want to do something but i feel a little bit lost. For example, i have a page with 2 divs. In one div, there are thumbnails of images, the other div is empty until i click on a thumbnail and...
- 18 Aug 2007 3:37 AM
Hi, the extension is GREAT !! Just what i'm looking for. Could you put the php files that use in the demo to understand all the process please?
Thanks.
- 18 Aug 2007 12:25 AM
Awful!! With no documentation, using namespaces is too painful. Another thing is that I can't make the profile permanent. The option to make it permanent is not appearing.
Still using SpKet despite i...
- 17 Aug 2007 7:40 PM
Well, i decide to download and install AptanaIDE standalone, because as a eclipse plugin installation was bugging me. Now the code completion of ExtJs is working, but i don
- 17 Aug 2007 12:28 AM
When i open an html file with my version of eclipse in the aptana perspective, the Code Assist Profile doesn't get the js files that are include in the html. Why ? I'm using aptana as a plugin.
- 15 Aug 2007 4:29 PM
Ok, that
- 9 Aug 2007 2:02 PM
Can anyone help me with code completion in eclipse with aptana plugin ?? I can't get it work !!
Thanks.
- 9 Aug 2007 11:34 AM
Can you put the source code to download ?
Thanks.
WebReference.com - Excerpt from Inside XSLT, Chapter 2, Part 3 (5/5)
Inside XSLT
Now I can add another template for the next level down, which includes the <PLANET> elements. In this case, I'll just replace each <PLANET> element with the literal result element <P>Planet</P>:
Listing 2.3: Using <xsl:apply-templates/>
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/">
        <HTML>
            <xsl:apply-templates/>
        </HTML>
    </xsl:template>
    <xsl:template match="PLANETS">
        <xsl:apply-templates/>
    </xsl:template>
    <xsl:template match="PLANET">
        <P>
            Planet
        </P>
    </xsl:template>
</xsl:stylesheet>
Here's the result of this stylesheet:
<HTML>
    <P>
        Planet
    </P>
    <P>
        Planet
    </P>
    <P>
        Planet
    </P>
</HTML>
As you can see, there is nothing left of the <PLANETS> element at all. All that's left is the three literal result elements <P>Planet</P> that were substituted for the three <PLANET> elements.
Omitting the select Attribute
If you omit the select attribute, then only the child nodes of the current node are processed, which does not include attribute or namespace nodes, because they are not considered children. If you want to process those kinds of nodes, you'll have to use the select attribute, as you'll see in Chapter 3.
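As a taste of what that looks like, here is a small sketch (not from the excerpt) that processes an attribute node explicitly with select — it assumes the UNITS attribute carried by the book's planets sample data:
<xsl:template match="MASS">
    <P><xsl:apply-templates select="@UNITS"/></P>
</xsl:template>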
This is all very interesting, but not too useful. It would be far better, for example, to be able to access the actual value of each element (such as the name of each planet) and make use of that data. And, of course, you can. [Continued in part 4 - Ed.]
It seems reasonable to focus on ALPN support, and generally dropping NPN from trunk. NPN is already on a decline, and won't be used going forward.
On Thu, Apr 2, 2015 at 12:44 AM, Stefan Eissing <stefan.eiss...@greenbytes.de> wrote: > Any reason to differ from trunk in 2.4? > > The people using spdy already in a 2.4 will most likely have the NPN patch > deployed, so they'll have it easy with the trunk changes. The only one using > the alpn patch, I know of, is myself in mod_h2. And that has already been > adapted. > > So, I myself see no reason to not bring the trunk change into 2.4. > >> Am 01.04.2015 um 22:33 schrieb Jim Jagielski <j...@jagunet.com>: >> >> Yeah, I agree. Right now, trunk pretty much uses >> >> #ifdef HAVE_TLS_ALPN >> blah blah >> #endif >> #ifdef HAVE_TLS_NPN >> blah2 blah2 >> #endif >> >> Instead of >> >> #if defined(HAVE_TLS_NPN) || defined(HAVE_TLS_ALPN) >> >> so that "ripping out" NPN would be easier. The question is >> which to use for 2.4... >> >>> On Apr 1, 2015, at 1:59 PM, Stefan Eissing <stefan.eiss...@greenbytes.de> >>> wrote: >>> >>> Well, I took the trunk version, diffed to 2.4.12 and made a patch for my >>> sandbox build (removed the non alpn/npn parts). That works for mod_h2 after >>> adding callbacks for the npn stuff. >>> >>> I have no real pref to keep npn and alpn separate or not. my thought when >>> merging these was that npn will go away rather soon as alpn is supposed to >>> replace it and is afaik the cryptographically more secure way (i think npn >>> is prone to mitm downgrade attacks). >>> >>> cheers, >>> Stefan >>> >>> >>> >>>> Am 01.04.2015 um 19:28 schrieb Jim Jagielski <j...@jagunet.com>: >>>> >>>> Yeah, there is some "overlap" which I'm trying to grok, >>>> since trunk had NPN but not ALPN, so I tried to have the >>>> ALPN stuff self-contained. But not sure if that's the best >>>> way since, for example, alpn_proposefns is adjusted >>>> in ssl_callback_AdvertiseNextProtos(), but that is a >>>> NPN "only" function in trunk, so it uses npn_proposefns. >>>> >>>> I'm thinking that in trunk we shouldn't think of >>>> NPN and ALPN as "distinct". >>>> >>>>> On Apr 1, 2015, at 12:47 PM, Rainer Jung <rainer.j...@kippdata.de> wrote: >>>>> >>>>> Hi Stefan, >>>>> >>>>>> Am 01.04.2015 um 18:22 schrieb Stefan Eissing: >>>>>> Jim, >>>>>> >>>>>> today I converted your commit to a path on 2.4.12 and tested it with >>>>>> mod_h2. All fine! >>>>>> >>>>>> Then I got a trouble report that alpn negotiation always selected >>>>>> "http/1.1" unless SSLAlpnPreference configured something else. This is >>>>>> due to the deterministic ordering and "http/1.1." > "h2". So, I made a >>>>>> slight modification, attached below. >>>>> >>>>> Maybe related but concerning NPN: There was a difference between the NPN >>>>> parts of your original Bugzilla attachment and what was already in >>>>> mod_ssl trunk and therefore was not applied. In your attachment, there >>>>> was some code for sorting in ssl_callback_AdvertiseNextProtos() which >>>>> IMHO does not exist in trunk. Is that part necessary? >>>>> >>>>> A second difference: your original addition to ssl_engine_io.c had the >>>>> NPN and the ALPN parts merged in the same code block. In trunk those are >>>>> now two separate pieces coming after each other. >>>>> >>>>>> --- modules/ssl/ssl_engine_kernel.c 2015-04-01 15:23:48.000000000 >>>>>> +0200 >>>>>> +++ >>>>>> ../../mod-h2/sandbox/httpd/gen/httpd-2.4.12/modules/ssl/ssl_engine_kernel.c >>>>>> 2015-04-01 17:53:03.000000000 +0200 >>>>>> @@ -2177,7 +2152,7 @@ >>>>>> } >>>>>> >>>>>> /* >>>>>> - * Compare to ALPN protocol proposal. Result is similar to strcmp(): >>>>>> + * Compare two ALPN protocol proposal. 
Result is similar to strcmp(): >>>>>> * 0 gives same precedence, >0 means proto1 is prefered. >>>>>> */ >>>>>> static int ssl_cmp_alpn_protos(modssl_ctx_t *ctx, >>>>>> @@ -2254,14 +2229,8 @@ >>>>>> i += plen; >>>>>> } >>>>>> >>>>>> - /* Regardless of installed hooks, the http/1.1 protocol is always >>>>>> - * supported by us. Add it to the proposals if the client also >>>>>> - * offers it. */ >>>>>> proposed_protos = apr_array_make(c->pool, client_protos->nelts+1, >>>>>> sizeof(char *)); >>>>>> - if (ssl_array_index(client_protos, alpn_http1) >= 0) { >>>>>> - APR_ARRAY_PUSH(proposed_protos, const char*) = alpn_http1; >>>>>> - } >>>>>> >>>>>> if (sslconn->alpn_proposefns != NULL) { >>>>>> /* Invoke our alpn_propos_proto hooks, giving other modules a >>>>>> chance to >>>>>> @@ -2280,9 +2249,16 @@ >>>>>> } >>>>>> >>>>>> if (proposed_protos->nelts <= 0) { >>>>>> - ap_log_cerror(APLOG_MARK, APLOG_ERR, 0, c, APLOGNO(02839) >>>>>> - "none of the client alpn protocols are >>>>>> supported"); >>>>>> - return SSL_TLSEXT_ERR_ALERT_FATAL; >>>>>> + /* Regardless of installed hooks, the http/1.1 protocol is >>>>>> always >>>>>> + * supported by us. Choose it if none other matches. */ >>>>>> + if (ssl_array_index(client_protos, alpn_http1) < 0) { >>>>>> + ap_log_cerror(APLOG_MARK, APLOG_ERR, 0, c, APLOGNO(02839) >>>>>> + "none of the client alpn protocols are >>>>>> supported"); >>>>>> + return SSL_TLSEXT_ERR_ALERT_FATAL; >>>>>> + } >>>>>> + *out = (const unsigned char*)alpn_http1; >>>>>> + *outlen = (unsigned char)strlen(alpn_http1); >>>>>> + return SSL_TLSEXT_ERR_OK; >>>>>> } >>>>>> >>>>>> /* Now select the most preferred protocol from the proposals. */ >>>> >> > > <green/>bytes GmbH > Hafenweg 16, 48155 Münster, Germany > Phone: +49 251 2807760. Amtsgericht Münster: HRB5782 > > > | https://www.mail-archive.com/dev@httpd.apache.org/msg61618.html | CC-MAIN-2019-26 | refinedweb | 770 | 67.15 |
attr_praxis · fit_praxis · pval_praxis · stop_praxis
Optimization
fit_praxis()
- Syntax:
min = h.fit_praxis(n, "funname", x._ref_x[0])
min = h.fit_praxis(n, "funname", Vector)
min = h.fit_praxis(..., ..., ..., "after quad statement")
min = h.fit_praxis(efun_as_python_callable, neuron_vector)
- Description:
This is the principal axis method for minimizing a function. See praxis.c in the scopmath library.
- n (1 <= n < 20)
- the number of parameters to vary (number of arguments to funname).
- funname
- the name of the function to minimize, e.g., the least-square difference between model and data. The funname must take two arguments; the first arg is the number of elements in the second arg vector.
- x
- is a double Vector of at least length n. Prior to the call, set it to a guess of the parameter values. On return it contains the values of the args that minimize funname().
funname may be either an interpreted HOC function or a compiled NMODL function. This form of calling cannot optimize Python functions directly.
If the variable stoprun is set to 1 during a call to fit_praxis, it will return immediately (when the current call to funname returns) with a return value and varx values set to the best minimum found so far. Use stop_praxis() to stop after finishing the current principal axis calculation.
The fourth argument, if present, specifies a statement to be executed at the end of each principal axis evaluation.
If the third argument is a Vector, then that style is used to specify the initial starting point and return the final value. However the function is still called with second arg as a pointer into a double array.
The Python callable form uses a Python Callable as the function to minimize, and it must take a single NEURON Vector argument specifying the values of the parameters for use in evaluating the function. On entry to fit_praxis the Vector specifies the number of parameters and the parameter starting values. On return the vector contains the values of the parameters which generated the least minimum found so far.
Example: minimize (x + y - 5)^2 + 5*((x - y) - 15)^2
from neuron import h

v = h.Vector([0, 0])

def efun(v):
    return (v[0] + v[1] - 5) ** 2 + 5 * (v[0] - v[1] - 15) ** 2

h.attr_praxis(1e-5, 0.5, 0)
e = h.fit_praxis(efun, v)
print("e=%g x=%g y=%g\n" % (e, v[0], v[1]))
Warning
Up to version 4.0.1, the arguments to funname were an explicit list of n arguments, i.e., numarg() == n.
See also
attr_praxis(),
stop_praxis(),
pval_praxis()
attr_praxis()
- Syntax:
h.attr_praxis(tolerance, maxstepsize, printmode)
previous_index = h.attr_praxis(mcell_ran4_index)
- Description:
Set the attributes of the praxis method. This must be called before the first call to fit_praxis().
- tolerance
- praxis attempts to return f(x) such that if x0 is the true local minimum then
norm(x-x0) < tolerance
- maxstepsize
- should be set to about the maximum distance from initial guess to the minimum.
- printmode=0
- no printing
- printmode=1,2,3
- more and more verbose
The single argument form causes praxis to pick its random numbers from the mcellran4 generator beginning at the specified index. This allows reproducible fitting. The return value is the previously picked index. (See mcell_ran4().)
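For example, a short sketch of combining this with the fitting example above to make runs repeatable (the index value 1 is arbitrary):
old_index = h.attr_praxis(1)    # draw praxis's random numbers from the mcellran4 stream at index 1
h.attr_praxis(1e-5, 0.5, 0)     # tolerance, maxstepsize, printmode as before
e = h.fit_praxis(efun, v)       # repeated runs now reproduce the same search path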
pval_praxis()
stop_praxis()
- Syntax:
h.stop_praxis()
h.stop_praxis(i)
- Description:
- Set a flag in the praxis function that will cause it to stop after it finishes the current (or ith subsequent) principal axis calculation. If this function is called before fit_praxis(), then praxis will do a single (or i) principal axis calculation and then exit.
I know I am in the wrong place... but I have tried several places on the web and somebody recommended that I try out this website...
My problem is to launch an exe file, say Notepad... so the workflow is something like this:
1. you open up this webpage
2. You click on a link/button .. essentially you trigger an event
3. this results in opening up (say) notepad on your computer
4. it's assumed that notepad will reside at a pre-destined place, for instance c:\winnt
I already have this Java program which does the job from the command line... I understand I need to convert it either into an applet or a plug-in. I am also attaching the code for this program.
PLEASE help me if you know, or kindly guide me to relevant resources
*******************************************************
import java.lang.*;
import java.io.*;

public class RuntimeExecTest {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        String[] callAndArgs = { "Notepad.exe" };
        try {
            Process child = rt.exec(callAndArgs);
            child.waitFor();
            System.out.println("Process exit code is:" + child.exitValue());
        }
        catch (IOException e) {
            System.err.println("IOException starting process!");
        }
        catch (InterruptedException e) {
            System.err.println("Interrupted waiting for process!");
        }
    }
}
Launch an executable from client side (1 messages)
- Posted by: Shariq Samad
- Posted on: January 09 2001 16:01 EST
Launch an executable from client side
Well, you have a couple of options here:
- Posted by: Tyler Jewell
- Posted on: January 09 2001 18:29 EST
- in response to Shariq Samad
1) You can create a signed applet (refer to java.sun.com for more information) and give that applet broad permissions to access the file system. You can then exec the notepad.exe from the applet. Creating a signed applet is not easy, however, and it's been quite awhile since I did that ;)
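For what it's worth, inside a signed applet the exec call usually has to run in a privileged block. A rough, untested sketch (classes from java.security; the path is just an example):
// import java.io.IOException; import java.security.*;
AccessController.doPrivileged(new PrivilegedAction() {
    public Object run() {
        try {
            Runtime.getRuntime().exec("c:\\winnt\\notepad.exe");
        } catch (IOException e) {
            System.err.println("Could not launch Notepad: " + e);
        }
        return null;
    }
});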
2) You need to have some sort of resident code on the machine that is triggered when your JavaScript button is clicked. BEA used to have a utility part of WLS called Zero Administration Client (ZAC) that automatically created these mini-client side installation programs. The idea was this: the logic behind your program would be stored on the server in modules (JARs). Client applications would go to the site with the program. The client application would automatically download a platform dependent EXE (that was created by ZAC) and installed on the local machine. Your web page could invoke the client side EXE. When the EXE runs, it connects back to the server to check for updated modules and downloads those if necessary. But, this is besides the point -- the EXE is installed on the client and accessible from the web page. You could just write the code to spawn off Notepad as part of what gets generated into the EXE.
I believe that BEA is now referring people to Sun's Java Web Start utility, which provides the same behavior as ZAC.
Good luck.
Tyler | http://www.theserverside.com/discussions/thread.tss?thread_id=3283 | CC-MAIN-2015-18 | refinedweb | 508 | 65.32 |
- Steps
- Upgrade the bundled PostgreSQL chart
- Upgrade steps for 3.0 release
Upgrade Guide
Before upgrading your GitLab installation, you need to check the changelog corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new GitLab chart version.
Warning:
If you are upgrading from the
2.x version of the chart to the latest 3.0 release, you need
to first update to the latest
2.6.x patch release in order for the upgrade to work.
The 3.0 release notes describe the supported upgrade path.
We also recommend that you take a backup first.
Also note that you need to provide all values using
helm upgrade --set key=value syntax or
-f values.yml instead of using
--reuse-values because some of the current values might be deprecated.
If you are upgrading to the 3.0 version of the chart, please follow the manual upgrade steps for 3.0.
The following are the steps to upgrade GitLab to a newer version:
- Check the change log for the specific version you would like to upgrade to
- Go through deployment documentation step by step
Extract your previous --set arguments with
helm get values gitlab > gitlab.yaml
- Decide on all the values you need to set
- If you would like to use the GitLab Operator go through the steps outlined in Operator installation
Perform the upgrade, with all --set arguments extracted in step 4
helm upgrade gitlab gitlab/gitlab \
  --version <new version> \
  -f gitlab.yaml \
  --set gitlab.migrations.enabled=true \
  --set ...
If you perform the upgrade with gitlab.migrations.enabled set to false (as the 3.0 steps below require), you will want to ensure that you explicitly set it back to true for future updates.
Upgrade the bundled PostgreSQL chart
As part of the
3.0.0 release of this chart, we upgraded the bundled PostgreSQL chart from 0.11.0 to 7.7.0. This is not a drop in replacement. Manual steps need to be performed to upgrade the database.
The steps have been documented in the 3.0 upgrade steps.
Upgrade steps for 3.0 release
The
3.0.0 release requires manual steps in order to perform the upgrade.
If you are using the bundled PostgreSQL, the best way to perform this upgrade is to backup your old database, and restore into a new database instance. We’ve automated some of the steps, as an alternative, you can perform the steps manually.
Prepare the existing database
If you are not using the bundled PostgreSQL chart (postgresql.install is false), you do not need to perform this step.
If the name of your Helm release is not gitlab, you must pass the release name to the database-upgrade script. Replace bash -s STAGE with bash -s -- -r RELEASE STAGE in the example commands provided later.
If the namespace of the release is not the kubectl context's default, you must pass the namespace to the database-upgrade script. Replace bash -s STAGE with bash -s -- -n NAMESPACE STAGE in the example commands provided later. This option can be used along with -r RELEASE. You can set the context's default namespace by running kubectl config set-context --current --namespace=NAMESPACE, or using kubens from kubectx.
The
pre stage will create a backup of your database using the backup-utility script in the task-runner pod, which gets saved to the configured s3 bucket (MinIO by default):
# GITLAB_RELEASE should be the version of the chart you are installing, starting with 'v': v3.0.0
curl -s https://gitlab.com/gitlab-org/charts/gitlab/raw/${GITLAB_RELEASE}/scripts/database-upgrade | bash -s pre
Prepare the cluster database secrets
NOTICE: If you are not using the bundled PostgreSQL chart (postgresql.install is false):
- If you have supplied
global.psql.password.key, you do not need to perform this step.
- If you have supplied
global.psql.password.secret, additionally set global.psql.password.key to the name of your existing key to bypass this step.
The secret key for the application database password is changing from postgres-password to postgresql-password. Use one of the two steps described below to update your database password secret:
If you’d like to use an auto-generated PostgreSQL password, delete the existing secret to allow the upgrade to generate a new password for you. RELEASE-NAME should be the name of the GitLab release from
helm list:
# Create a local copy of the old secret in case we need to restore the old database
kubectl get secret RELEASE-NAME-postgresql-password -o yaml > postgresql-password.backup.yaml
# Delete the secret so a new one can be created
kubectl delete secret RELEASE-NAME-postgresql-password
If you want to use the same password, edit the secret, and change the key from postgres-password to postgresql-password. Additionally, we need a secret for the superuser account. Add a key for that user, postgresql-postgres-password:
# Encode the superuser password into base64
echo SUPERUSER_PASSWORD | base64
kubectl edit secret RELEASE-NAME-postgresql-password
# Make the appropriate changes in your EDITOR window
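If you prefer not to use an interactive editor, the same change can be applied non-interactively; this is only a sketch, and the base64 values are placeholders you would produce as shown above:
kubectl patch secret RELEASE-NAME-postgresql-password --type merge \
  -p '{"data":{"postgresql-password":"<base64-app-password>","postgresql-postgres-password":"<base64-superuser-password>"}}'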
Delete existing services
The
3.0 release updates an immutable field in the NGINX Ingress, this requires us to first delete all the services
before upgrading. You can see more details in our troubleshooting documentation, under Immutable Field Error, spec.clusterIP.
Remove all affected services. RELEASE_NAME should be the name of the GitLab release from
helm list:
kubectl delete services -lrelease=RELEASE_NAME
Note that deleting the services will result in a new external IP being assigned to the LoadBalancer for NGINX Ingress from this chart, if in use. See the global Ingress settings documentation for more details regarding externalIP. You may be required to update DNS records!
Upgrade GitLab
Upgrade GitLab following our standard procedure, with the following additions of:
If you are using the bundled PostgreSQL, disable migrations using the following flag on your upgrade command:
--set gitlab.migrations.enabled=false
We will perform the migrations for the Database in a later step for the bundled PostgreSQL.
Restore the Database
postgresql.installis false), you do not need to perform this step.
Wait for the upgrade to complete for the task-runner pod. RELEASE_NAME should be the name of the GitLab release from
helm list
kubectl rollout status -w deployment/RELEASE_NAME-task-runner
After the task-runner pod is deployed successfully, run the post steps:
This step will do the following:
- Set replicas to 0 for the unicorn, sidekiq, and gitlab-exporter deployments. This will prevent any other application from altering the database while the backup is being restored.
- Restore the database from the backup created in the pre stage.
- Run database migrations for the new version.
- Unpause the deployments from the first step.
# GITLAB_RELEASE should be the version of the chart you are installing, starting with 'v': v3.0.0
curl -s https://gitlab.com/gitlab-org/charts/gitlab/raw/${GITLAB_RELEASE}/scripts/database-upgrade | bash -s post
Troubleshooting 3.0 release upgrade process
- Make sure that you are using Helm 2.14.3 or >= 2.16.1 due to the bug in 2.15.x.
If you see any failure during the upgrade, it may be useful to check the description of the gitlab-upgrade-check pod for details:
kubectl get pods -lrelease=RELEASE,app=gitlab
kubectl describe pod <gitlab-upgrade-check-pod-full-name>
You may face the error below when running
helm upgrade:
Error: kind ConfigMap with the name "gitlab-gitlab-shell-sshd" already exists in the cluster and wasn't defined in the previous release. Before upgrading, please either delete the resource from the cluster or remove it from the chart
Error: UPGRADE FAILED: kind ConfigMap with the name "gitlab-gitlab-shell-sshd" already exists in the cluster and wasn't defined in the previous release. Before upgrading, please either delete the resource from the cluster or remove it from the chart
The error message can also mention other configmaps like
gitlab-redis-health, gitlab-redis-headless, etc. To fix it, make sure that the services were removed as mentioned in the upgrade steps for 3.0 release. After that, also delete the configmaps shown in the error message with:
kubectl delete configmap <configmap | https://docs.gitlab.com/12.10/charts/installation/upgrade.html | CC-MAIN-2020-50 | refinedweb | 1,262 | 55.64 |
#
# Copyright (C) 1998, 1999 Ken MacLeod
# XML::Grove::Path is free software; you can redistribute it
# and/or modify it under the same terms as Perl itself.
#
# $Id: Path.pm,v 1.2 1999/08/17 15:01:28 kmacleod Exp $
#

package XML::Grove::Path;

use XML::Grove;
use XML::Grove::XPointer;
use UNIVERSAL;

sub at_path {
    my $element = shift;   # or Grove
    my $path = shift;

    $path =~ s|^/*||;
    my @path = split('/', $path);

    return (_at_path ($element, [@path]));
}

sub _at_path {
    my $element = shift;   # or Grove
    my $path = shift;

    my $segment = shift @$path;

    # segment := [ type ] [ '[' index ']' ]
    #
    # strip off the first segment, finding the type and index
    $segment =~ m|^
                  ([^\[]+)?     # - look for an optional type
                                #   by matching anything but '['
                  (?:           # - don't backreference the literals
                    \[          # - literal '['
                    ([^\]]+)    # - index, any non-']' chars
                    \]          # - literal ']'
                  )?            # - the whole index is optional
                 |x;

    my ($node_type, $instance, $match) = ($1, $2, $&);

    # issues:
    #   - should assert that no chars come after index and before next
    #     segment or the end of the query string

    $instance = 1 if !defined $instance;

    my $object = $element->xp_child ($instance, $node_type);

    if ($#$path eq -1) {
        return $object;
    } elsif (!$object->isa('XML::Grove::Element')) {
        # FIXME a location would be nice.
        die "\`$match' doesn't exist or is not an element\n";
    } else {
        return (_at_path($object, $path));
    }
}

package XML::Grove::Document;

sub at_path {
    goto &XML::Grove::Path::at_path;
}

package XML::Grove::Element;

sub at_path {
    goto &XML::Grove::Path::at_path;
}

1;

__END__

=head1 NAME

XML::Grove::Path - return the object at a path

=head1 SYNOPSIS

=head1 DESCRIPTION

C<XML::Grove::Path> returns XML objects located at paths. Paths are
strings of element names or XML object types seperated by slash ("/")
characters. Paths must always start at the grove object passed to
`C<at_path()>'.

C<XML::Grove::Path> is `C<#any>' object type matches any type of object,
it is essentially an index into the contents of the parent object. The
`C<#text>' object type treats text objects as if they are not normalized.
Two consecutive text objects are seperate text objects.

=head1 AUTHOR

Ken MacLeod, ken@bitsko.slc.ut.us

=head1 SEE ALSO

perl(1), XML::Grove(3)

Extensible Markup Language (XML) <

=cut
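A rough, untested usage sketch (the builder and parser classes named here are assumptions based on the XML::Grove distribution; the file and path are placeholders):
use XML::Grove::Builder;
use XML::Parser::PerlSAX;
use XML::Grove::Path;

my $builder = XML::Grove::Builder->new;
my $parser  = XML::Parser::PerlSAX->new( Handler => $builder );
my $grove   = $parser->parse( Source => { SystemId => 'planets.xml' } );

# at_path() is available on documents and elements, as defined above
my $node = $grove->at_path('/PLANETS/PLANET[2]');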
This post shows how to set up custom CloudWatch alarms for specific errors and events logged by a Lambda on AWS.
1. Create a Lambda
First, you’ll need a Lambda. If you have an existing Lambda or you are creating a new Lambda, make sure it has permissions to log to CloudWatch. For the examples in this post, we are going to be using the following Node.js code:
exports.handler = async (event) => {
  console.log(JSON.stringify({ event: 'test', more: 'stuff' }));
  console.log(JSON.stringify({ event: "don't catch", more: 'stuff' }));
};
Note: If this Lambda has not logged anything yet, trigger an event that will cause it to do so. The log group for Lambdas isn’t created until something is logged, and we’ll need that later on.
2. Create Metric Filter
After you have a Lambda with a log group, open the CloudWatch console and click “Logs” in the sidebar. Next, select the log group that relates to your Lambda and click “Create Metric Filter.” You should see a screen asking you to specify a filter pattern. For the example mentioned earlier, we will use the following filter:
{ $.event = test }
This filter will find and count any logged JSON that has event as a key and test as the value of that key. If you need a more personalized filter, check out Amazon's official documentation on CloudWatch's filter and pattern syntax.
After you have set your filter pattern, you can test it on one of your existing logs or confirm your filter by pressing "Assign Metric." Then you can input a name for your filter, along with a name and namespace for the given metric.
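If you would rather script this step than click through the console, the equivalent AWS CLI call looks roughly like this (the log group, filter name, and namespace are placeholders):
aws logs put-metric-filter \
  --log-group-name /aws/lambda/my-function \
  --filter-name test-event-filter \
  --filter-pattern '{ $.event = "test" }' \
  --metric-transformations metricName=TestEventCount,metricNamespace=Custom/Lambda,metricValue=1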
3. Create Alarm
After creating a metric filter, you will be taken to a screen showing all metric filters for your chosen log. Find the metric filter you just created, and click “Create Alarm.” A modal will pop up prompting you to create an alarm. Fill in the form with your desired details, click “Create Alarm,” and you will have a custom alarm based on Lambda logs!
You should now be receiving notifications whenever your specified event gets triggered. | https://spin.atomicobject.com/2018/11/16/aws-lambda-custom-alarm/ | CC-MAIN-2019-30 | refinedweb | 346 | 73.17 |
Have you ever written tests so big, so comprehensive, that every time you change
something in your code, you have to change your tests accordingly? Have you
ever spent more than 30 minutes writing functional tests for a controller,
including individually submitting nils and invalid data to each action in turn?
How about, have you ever had to use
class_eval to awkwardly construct a set of
functional tests that did similar things, because Rails says you can only do one
get/post call in a functional test? This is all nonsense. If you want to
stay sane as a tester, relax and write the tests that you need and not the
tests that your needs-100%-comprehensivity-at-all-times part of your brain
wants, you will find that you are a whole lot more rested, and in general
you’ll bleed a lot less.
This is not about eschewing testing, this is about saying No to Big Testing. Like Big Oil before it, Big Testing is gaining a lot of clout in the Ruby and Rails community. Plugins like Rcov and Heckle are emerging as ways to ensure your code is production-ready, and that in your flawed human state you have not erred and left out a crucial test. How will you compete with Google Docs & Spreadsheets if your web-based office application crashes when someone tries to take the square root of a negative number?
As long as your application has a certain critical mass of stability, stability has very little to do with whether your idea will take off or not. It’s all the design choices you’re making, and whether your visual design is appealing and professional. You know, the meat of your application. The skeleton underneath should be strong, it should work, but you don’t need to have so much monitoring equipment hooked up to it that every time you want to shift some bones around, you first have to untangle knots of electrical cords.
Here’s a code sample to break up this post and make it seem more visual. Don’t study it too hard, it doesn’t have any relevance:
def select_bird
  self.stoked? ? @bald_eagle : @eagle
end
When I was first introduced to formal unit and functional testing in Rails, I understood it as a way to refactor with ease. If you wrote basic tests, that made sure all of your features worked as expected, you could confidently go into a total rewrite of how they worked under the hood. If your tests passed, you didn’t have to be nervous. This is why I love tests. Little tests. Simple tests. Tests that touch the outside only, to make sure that the application is what the tests ordered out of the catalog. If you’re changing tests when you change code without changing design, those tests are either poorly written, or ultimately meaningless. For testing at that level, internal or public QA will take up far less of your time and money.
The three different kinds of testing (unit, functional, and integration) necessarily attach themselves to your code at three levels of granularity, with unit testing being the most code-specific, followed by functional, and then integration testing (which barely cares about your code at all). I understand that it is impossible to write unit tests and functional tests that don’t straitjacket the specific implementation of your design to at least some significant degree. My answer to this, then, is to write far less of them. Spend less time checking values in the assigns hash, and spend more time on ensuring realistic uses of the system work properly.
Again:
# chapters seven and eight of the poignant guide are coming soon!
excited = "11"
Testing is an asymptotic curve. You can spend 1 hour on a set of basic tests that cover 80% of your code, or you can spend a large bag of hours on covering 99.9% of your application. Most of the time, it’s not worth it. Write a functional test for every action that ensures it works as expected, write any additional interesting, caveat-y functional tests that come to mind without you really thinking about it (but not if it takes too long!). Don’t worry about nils or negative numbers, just move on and write a few unit tests to check other interesting model interactions. Then spend an hour writing a few key integration tests that really take your application (or application-to-be, if you’re doing test-driven development) for a spin. Then go back to working on design mocks for your next feature, or spend time talking to the friends you’ve asked to try your application out.
If your application has gotten so big and well defined that you don’t expect there to be any significant refactoring, and the app has become so mission critical that near-100% stability is a requirement, and you’re ready to lock down the codebase into a stable, predictable machine, then you should get Rcov and Heckle and spend an indefinite amount of hours writing Big Tests. But if your application is at that point, that sounds pretty boring, and I would find/found another company. | https://robots.thoughtbot.com/unpopular-developer-2-down-with-big-testing | CC-MAIN-2015-22 | refinedweb | 874 | 65.35 |
Help with writing program for class.
Sorry..:
Code:
//
Last edited by dubsjw; 11-10-2010 at 08:10 AM. Reason: adding code
So far the only thing wrong with the class is the constructor. The UML lists that the constructor has no arguments, but this one has three. I'd chain them and keep both to give you an option:
PHP Code:
public Time()
{
    this(0, 0, 0);
}
The timeApp just has a main that has a few calls to the Time class. I can't really say much else on that without doing it for you. Update what you have, and if you still have some problems then come back and post what the problem is.
OK... I caught up on what I missed in my class and I understand a lot more than I did prior to this post. One thing I'm still confused about is the whole AM and PM thing. In order to do this I will have to create a string and enforce it to change according to what the number is, but there is nowhere in the UML that lists that I need a variable for it, which makes me confused... is there another way to do this?
Here is what i have so far:
Code:
public class Time
{
    private int hour;
    private int minute;
    private int second;

    public Time()
    {
        hour = 0;
        minute = 0;
        second = 0;
    }

    public void setTime(int h, int m, int s)
    {
        if (h > 23) { h = 0; }
        if (m > 59) { m = 0; }
        if (s > 59) { s = 0; }
        hour = h;
        minute = m;
        second = s;
    }

    public void displayMilitary() { }

    public void displayStandard() { }
}
Last edited by dubsjw; 11-17-2010 at 09:25 AM.
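One way to handle it without adding a field that isn't in the UML is to derive the AM/PM text inside displayStandard() from the stored 24-hour value. A sketch (only the field and method names match your skeleton; the rest is illustrative):
public void displayStandard()
{
    String suffix = (hour < 12) ? "AM" : "PM";
    int h = hour % 12;
    if (h == 0)
    {
        h = 12; // 0:xx and 12:xx both display as 12
    }
    System.out.printf("%d:%02d:%02d %s%n", h, minute, second, suffix);
}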
Harden services with systemd
A Hard Nut to Crack
One of the most important goals in the development of systemd is securing Linux. Of course, you can only improve what can be measured, which is why Galileo Galilei advised: "Measure what is measurable, and make measurable what is not." Following this maxim, systemd now makes system security under Linux measurable and improvable.
More specifically, it is the systemd-analyze security command that allows this measurement. When executed, it returns a table like that shown in Figure 1, listing each service managed by systemd (UNIT); a numerical value for the degree of protection (EXPOSURE, where 10 is both the highest and worst value); a verbal translation of this value (PREDICATE); and another version of the rating (HAPPY) in the form of an emoji.
Additionally, systemd-analyze can reveal how it arrives at its assessment: To see this, start it with the name of a service unit. As shown in Figure 2, it lists all the factors that have been checked, along with a checkmark for passed or an X for failed.
Not a Tough Cookie
After that, the user knows systemd's opinion on the security status of the services it checked, but what can be done to improve the bad scores? To find out, you can build a minimal service, whose security you then elevate step-by-step. As an example, first create a minimal HTML page in an empty directory (e.g.,
/home/$USER/Python/sectest/, which will serve later as the document root of a small web server) (Listing 1).
Listing 1
Minimal HTML Page
<!doctype html>
<html lang=en>
<head>
    <meta charset=utf-8>
    <title>Hello World</title>
</head>
<body>
    <p><h1>HELLO WORLD!</h1></p>
</body>
</html>
The easiest approach is to borrow the web server itself from Python, which already has a simple model that can be used with virtually no configuration. Next, wrap the server start in a systemd unit file – again, keeping it as simple as possible (Listing 2). Now save the unit file as
/lib/systemd/system/helloworld.service and the HTML page as
index.html in the document root directory. After typing
systemctl start helloworld.service
Listing 2
Unit File
[Unit]
Description=Simple HTTP Server
Documentation=

[Service]
Type=simple
WorkingDirectory=/home/jcb/Python/sectest
ExecStart=/usr/bin/python3 -m http.server 8080
ExecStop=/bin/kill -9 $MAINPID

[Install]
WantedBy=multi-user.target
enter localhost:8080 in the address bar of a web browser to bring up the plain Hello World page.
In this state, without any precautions, the service is completely unprotected. In the output of
systemd-analyze security, it appears with a high score of 9.6 as UNSAFE
and a shocked emoji (Figure 3).
Fundamentals
In the first step, add a line reading
NoNewPrivileges=true
to the
Service section of the unit file to prevent the process from escalating its privileges later (e.g., with
setuid or
setgid bits). After this (as for all subsequent additions to the unit file), you need to reload all unit files and restart the service:
systemctl daemon-reload
systemctl restart helloworld.service
If you now look at the output of
systemd-analyze security, the exposure value of
helloworld.service has already dropped slightly, from 9.6 to 9.4. Admittedly, this still counts as unsafe.
On with the task: A whole class of attacks can be rendered impossible by adding
PrivateTmp=yes
to the unit file, which causes systemd to create a new, exclusive filesystem namespace for the process and to mount
/tmp and
/var/tmp/ there. Therefore, the temporary files are no longer shared publicly and are immediately deleted after the process ends. Attacks based on swapping or manipulating temporary files now come to nothing. The exposure value drops to 9.0, but the rating remains unsafe.
The next step is to add
RestrictNamespaces=uts ipc pid user cgroup
to the unit file, which prevents the process from accessing the listed namespaces. The list deliberately excludes the
net namespace and a few others that the web server has to use. After this action, the exposure value drops below 9 (to 8.8) for the first time, and the rating is no longer unsafe, only EXPOSED. The emoji's expression changes from horrified to merely unhappy.
Kernel and Control Groups
The next step is to enable additional protections in the unit file:
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
The kernel variables, which users can access via /proc/sys/, /sys, /proc/sysrq-trigger/, /proc/latency_stats/, /proc/acpi/, /proc/timer_stats/, /proc/fs/, and /proc/irq/, are now read-only and therefore no longer editable for the process. In any case, the system should only have write access to these variables during booting, so you are not losing any functionality.
Because the web server does not need any special kernel modules, you have also stopped it loading and unloading such modules for the web server process. From now on, it cannot access the control groups. Although container administration software might need this access, a web server does not. This step pushes the exposure value down to 8.1.
Finally, you can set:
ProtectSystem=strict
PrivateUsers=strict
The first line mounts /usr and the bootloader directories /boot and /efi in read-only mode for all processes that this unit starts. The second line configures a user group mapping for the process that maps root and the user that starts the unit's main process to itself – but maps all other users or groups to nobody. The system's user and group database is thus decoupled from the process running in its own sandbox. The exposure value now drops below 8 (more precisely, to 7.8).
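Taken together, the [Service] section of the hardened unit now looks roughly like this (assembled from the directives discussed above; paths are from the example listing and will differ on your system):
[Service]
Type=simple
WorkingDirectory=/home/jcb/Python/sectest
ExecStart=/usr/bin/python3 -m http.server 8080
ExecStop=/bin/kill -9 $MAINPID
NoNewPrivileges=true
PrivateTmp=yes
RestrictNamespaces=uts ipc pid user cgroup
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
ProtectSystem=strict
PrivateUsers=strict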
Svelte is a new approach to building web applications. Reception has been lukewarm since its launch: it has not become the fourth major framework after Angular, React, and Vue, but neither has it quietly faded away. An important reason for this is Svelte's core idea of reducing the amount of framework code at runtime through static compilation. You develop much as you would with React or Vue, but there is no virtual DOM, so Svelte can compile your code into small JS that does not depend on a framework runtime.
That sounds like a long list of advantages, but the flexibility makes it hard to write highly consistent business code, so these advantages are not well reflected in actual large projects.
Svelte's framework is not perfect, yet it did not die in the cruel market competition, because it has a trump card – capabilities that make it irreplaceable alongside the other frameworks.
For Svelte, that trump card is Web Components.
In a large project delivered by multiple teams, each team may use different framework versions or even different frameworks, which makes it difficult to reuse components between projects – "write once, run anywhere" becomes empty talk. In this case, Svelte becomes a bridge across the framework gap: the framework-independent Web Components developed with Svelte can be reused among various frameworks, while Svelte's development style is not as cumbersome as writing plain JS.
Taking SpreadJS integration as an example, the following describes how to develop a spread sheets web component with Svelte for reuse by other pages.
- Create the Svelte template project. Svelte officially provides the template project, just clone or download the project.
npx degit sveltejs/component-template my-new-component
cd my-new-component
npm install # or yarn
- Modify rollup.config.js and add the customElement: true configuration, so that the build output is a web component.
The added rollup.config.js is as follows.
import svelte from 'rollup-plugin-svelte';
import resolve from '@rollup/plugin-node-resolve';
import pkg from './package.json';

const name = pkg.name
    .replace(/^(@\S+\/)?(svelte-)?(\S+)/, '$3')
    .replace(/^\w/, m => m.toUpperCase())
    .replace(/-\w/g, m => m[1].toUpperCase());

export default {
    input: 'src/index.js',
    output: [
        { file: pkg.module, 'format': 'es' },
        { file: pkg.main, 'format': 'umd', name }
    ],
    plugins: [
        svelte({
            customElement: true,
        }),
        resolve()
    ],
};
- Update src/Component.svelte to create the spread sheets component.
<svelte:options <script> import { createEventDispatcher, onMount } from 'svelte'; // Event handling const dispatch = createEventDispatcher(); export let</div>
In this way, our custom components are created. Just call npm run build to compile the spread sheets component.
- Reference the component on the page. Create the index.html page and reference the compiled js file. At the same time, the related resources of spreadjs are introduced. Add spreadjs directly using the spread sheets tag.
<meta name="spreadjs culture" content="zh-cn"> <meta charset="utf-8"> <title>My Counter</title> <base href="/"> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" type="text/css" href=""> <!-- <spread-sheets-designer></spread-sheets-designer> --> <button onclick="getJSON()">GetJSON</button> <spread-sheets</spread-sheets> <script src="" type="text/javascript"></script> <script type="text/javascript" src="/dist/index.js"></script> <script type="text/javascript"> document.querySelector("spread-sheets").addEventListener("changed", function(){ console.log(arguments) }) window.onload = function(){ document.querySelector("spread-sheets").setAttribute("value", "234"); } </script>
The effect after adding is shown in the figure below.
Summary
Although Web Components appear to solve the cross-framework reuse problem perfectly, a Web Component built with Svelte still has some limitations: it can only accept string attributes, and a bound attribute is a one-way binding – if you want the value updated inside the component, you need to listen for an event to obtain it.
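A common workaround for the string-only attributes is to serialize structured data yourself and rely on events for output — a sketch (the attribute name, payload, and event detail here are illustrative, not part of SpreadJS):
const el = document.querySelector("spread-sheets");
// pass structured data in: serialize it into a string attribute
el.setAttribute("config", JSON.stringify({ sheetCount: 2, theme: "light" }));
// get updates out: listen for the component's event instead of reading a binding
el.addEventListener("changed", function (e) {
    console.log("changed", e.detail);
});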
If you are more interested in Svelte, please leave a message~
| https://programmer.help/blogs/619e68bf49d9e.html | CC-MAIN-2021-49 | refinedweb | 640 | 51.14 |
import "cuelang.org/go/cue/literal"
Package literal implements conversions to and from string representations of basic data types.
const (
    K = mulDec | mul1
    M = mulDec | mul2
    G = mulDec | mul3
    T = mulDec | mul4
    P = mulDec | mul5
    E = mulDec | mul6
    Z = mulDec | mul7
    Y = mulDec | mul8

    Ki = mulBin | mul1
    Mi = mulBin | mul2
    Gi = mulBin | mul3
    Ti = mulBin | mul4
    Pi = mulBin | mul5
    Ei = mulBin | mul6
    Zi = mulBin | mul7
    Yi = mulBin | mul8
)
ParseNum parses s and populates NumInfo with the result.
Unquote interprets s as a single- or double-quoted, single- or multi-line string, possibly with custom escape delimiters, returning the string value that s quotes.
A Multiplier indicates a multiplier indicator used in the literal.
NumInfo contains information about a parsed number.
Reusing a NumInfo across parses may avoid memory allocations.
Decimal is for internal use.
IsInt reports whether the number is an integral number.
func (p *NumInfo) Multiplier() Multiplier
Multiplier reports which multiplier was used in an integral number.
String returns a canonical string representation of the number so that it can be parsed with math.Float.Parse.
QuoteInfo describes the type of quotes used for a string.
ParseQuotes checks if the opening quotes in start match the ending quotes in end and reports its type as q, or an error if they do not match or are invalid. nStart indicates the number of bytes used for the opening quote.
IsDouble reports whether the literal uses double quotes.
Unquote unquotes the given string. It must be terminated with a quote or an interpolation start. Escape sequences are expanded and surrogates are replaced with the corresponding non-surrogate code points.
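A brief usage sketch based on the descriptions above (the exact signatures are assumed, and error handling is kept minimal):
package main

import (
    "fmt"

    "cuelang.org/go/cue/literal"
)

func main() {
    // Unquote expands escapes in a CUE string literal.
    s, err := literal.Unquote(`"hello \u2605"`)
    fmt.Println(s, err)

    // ParseNum populates a reusable NumInfo.
    var info literal.NumInfo
    if err := literal.ParseNum("5Mi", &info); err == nil {
        fmt.Println(info.String(), info.IsInt(), info.Multiplier())
    }
}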
Package literal imports 7 packages and is imported by 14 packages. Updated 2020-09-18.
I've been doing two things, recently:
Update: Looking at this again, I'd like to stress that this is not a tutorial or a howto on packaging. It's a number of issues that I consider either important or beneficial when incorporated into your packaging workflow.
Update: Both Zaxo and Corion expressed concern about my advocacy of Module::Build. They explain a problem with the faked Makefile.PL interface to it (which IMHO is silly - if you make a Makefile at least remove the dependency on M::B).
Anyway, I sort of take it back. Read their posts, see what they have to say. I bought M::B's story, but I may be an idiot. Ciao!
</Update>
From here on, unless noted, instances of ./Build or Build.PL are canonical to, and can be replaced by make or Makefile.PL if you went the ExtUtils::MakeMaker way. Keep in mind that Michael Schwern, MakeMaker's maintainer promotes replacing MakeMaker with Module::Build.
The first step of properly packaging a module has to do with what goes in, and what stays out.
A confusing aspect of that is that what goes into your src dir is not necessarily what goes into your tarball. This is where we'll start.
Given a proper MANIFEST, run perl Build.PL and then ./Build dist. If you want a SIGNATURE file generated, put the argument sign => 1, in your Build.pl args. If you want a Makefile.PL, put a create_makefile_pl => 'traditional', in there too (but make sure Makefile.PL is mentioned in your MANIFEST!).
Instead, write a MANIFEST.SKIP file. My generic one looks like this:
\.DS_Store$
^_build
^Build$
^blib
~$
\.bak$
\.sw.$
^cover_db
^MANIFEST\.SKIP$
Now that we defined what we want out of the manifest, lets get our tool to generate one for us:
perl Build.pl; ./Build manifest
And now, any file that doesn't match the patterns in MANIFEST.SKIP is mentioned in there.
#!/usr/bin/perl
use strict;
use warnings;
use Module::Build;
Module::Build->new(
module_name => 'My::Module',
license => 'perl',
requires => {
'Other::Module' => '0',
},
build_requires => {
'Test::Funky' => 0,
},
create_makefile_pl => 'traditional',
sign => 1,
)->create_build_script;
It should be noted that Test::Prereq might not apply to modules you optionally use in tests. skip_all is often used when these modules aren't available. Test::WithoutModule helps you check if that code works correctly.
cpansmoke will complain if you include no t/ dir. It will send you an email saying "would you please get up off your ass and at least write a test that does use_ok("Your::Module")".
This has become an integral part of a proper distribution, even if it doesn't really do any real testing. At least try to look like you're trying.
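For completeness, the bare-minimum test file that keeps the smoke testers happy looks something like this (a sketch, not from the node):
# t/00-load.t
use strict;
use warnings;
use Test::More tests => 1;

use_ok('Your::Module');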
From there the value of $VERSION is extracted, and used as the distribution version.
Versions are important because they imply compatibility, they define dependencies, and so forth. On the CPAN there is a general format.
Your-Module-0.01/
|-- Build.PL # your build script.
|-- LICENSE # put you're license here if you really feel like it
|-- MANIFEST # we discussed it already
|-- Makefile.PL # this too. For those who aren't using CPANPLUS yet.
|-- Meta.yml # this is generated by ./Build dist
|-- README # everyone should have one
|-- SIGNATURE # like MANIFEST, only cryptographic
|-- lib/ # this should look like something you could put in @INC
|-- t/ # this is where you keep your tests
| |-- lib/ # if your tests need some libraries, put them here, and use lib 't/lib'
src/
|-- Build.PL
|-- LICENSE
|-- README
|-- lib/
|-- t/
|-- lib/
src/
|-- MANIFEST
|-- Makefile.PL
|-- Meta.yml
src/
|-- Build # left over from perl Build.PL
|-- Makefile # or perl Makefile.PL
|-- _build # where Build.PL keeps it's data
|-- blib # where your source tree is before installation
|-- *~ # your editor backups
|-- cover_db # your data from test coverage
|-- CVS # your source control meta data
Test:: is a namespace you should know. See also Test::Distribution and it's friends.
Devel:: is a namespace to help you write code. Devel::Cover, for example, is a way to see what code is getting run when you run tests.
Module::Signature makes cryptographically signed MANIFEST like files.
The CPAN testing service is an experiment to try and measure package quality. The CPAN testers run tests for you, on many platforms and perls, when you release modules with tests. There is also a website concerned with the quality of perl and CPAN in general. It's an interesting starting point.
That doesn't mean you shouldn't see what h2xs is outputting, and understand what the mess it makes is.
P.S.
my brain is a bit zonked today. This probably has many grammar errors, ambiguities, and disinformation. Please inform me if you find any!
An attempt to install Module::Build over CPAN gives:
CPAN.pm: Going to build K/KW/KWILLIAMS/Module-Build-0.2604.tar.gz
Sorry, PREFIX is not supported. See the Module::Build
documentation for 'destdir' or 'install_base' instead.
The objections to make are mostly avoidable by keeping things simple, only using POSIX features, writing for /bin/sh. Everybody needs make, anyhow, whether they know it or not ;-)
After Compline,Zaxo
Note that this is different from how MakeMaker's PREFIX parameter works. PREFIX tries to create a mini-replica of a site-style installation under the directory you specify, which is not always possible (and the results are not always pretty in this case)..
Chris
M-x auto-bs-mode
The objections to make are mostly avoidable by keeping things simple, only using POSIX features, writing for /bin/sh. Everybody needs make, anyhow, whether they know it or not ;-)
I really, really, really hope that's a bad joke.
This might come as a shock to you but not everybody runs Unix. Even if they did, not all Unixen are the same. Even if they were, not all makes are the same. There's BSD make, GNU make, Solaris make... Even if they were the same, they have bugs and not everyone uses the latest version. Even if they did there's different compilers, different build tools, different file tools, different file systems... oh the incompatibilities just keep coming.
Trust me. I have to deal with all of them. I maintain MakeMaker. And MakeMaker has to work everywhere Perl does. Be glad it does. It would suck if you couldn't install Perl modules on your OS because some unrelated utility is quirky. If working with MakeMaker has taught me one thing its not to be such a bloody Unix snob!
Even if everyone used the same Unix variant on the same hardware with the same linkers, compilers and utilities and there were no bugs, make would still be the problem. It has always been the problem. Not just make but relying on any external build tool. I have learned this after many painful years of trying to clean up MakeMaker. It is impossible.
I'm going to let you read why because I've said all this so many times before that I've written a whole talk about it.
Its taken years and 1400+ lines of code to keep MakeMaker running on VMS (about as far from Unix as you can get). I ported Module::Build over in one night and 25 lines.
This is not really a step into packaging modules correctly, but a prerequisite. Use Module::Build. It's shiny, it's sexy and it's future proof. It's also backwards compatible, with traditional Makefile.PL generation too. If you have a reason, use ExtUtils::MakeMaker instead.
Module::Build is not backwards compatible, as it doesn't respect the PREFIX= syntax and provides no means to achieve a similar result. So please avoid Module::Build unless you have compelling reasons to use it. ExtUtils::MakeMaker is the default, standard and working solution. If you think you really need Module::Build, create a fully compatible Makefile.PL too.
As for the other arguments against Module::Build - I will edit my post.
It's what contains lib.
I tried posting twice; apparently I was too much of an airhead yesterday to go past the preview stage...
Ciao!
xoxo,
Andy
For those of you who don't know, I'm the poor sod who has to maintain ExtUtils::MakeMaker, as somebody here put it "the default, standard and working solution". Ha.
First of all, MakeMaker is DOOMED! Its rotting from the inside out, the architecture is all wrong. Has been since the beginning . It would take more effort and cause more breakage to properly fix it than to just write a whole new build system. ExtUtils::MakeMaker has to go away. Not today. Not tomorrow. But it has to happen. Module::Build is the heir apparent. I'm helping out as much as I can with it.
Second, PREFIX does not really work. Just because it works for you and your configuration of Perl does not mean it works for everyone else's. And that's the game build systems have to play in a language as ubiquitous as Perl. It has to work. Not only that but it has to work everywhere! And I'm not just talking about VMS or MacOS , I'm talking about the hundreds of Unix variants out there which all have their own way of configuring Perl. I started maintaining MakeMaker because PREFIX was broken. The system I was using was Debian.
What is needed is a simple, predictable, user configurable alternative. Module::Build's install_base is the beginnings of that. It installs into the same layout no matter what your Perl configuration. The layout itself is still being tweaked (for example, it currently goes into foo/lib. It should probably be foo/lib/perl5) but its pretty much ok. It is customizable by passing in more install* arguments to Build.PL.
The future is install_base. MakeMaker will be implementing its own INSTALL_BASE logic to match.
What's missing is an easy way to customize your layout once and then be done with it. Module::Build will likely support a .modulebuildrc file where you can write out your defaults and be done with it.
As a final note, Module::Build's compatible Makefile.PL will be getting PREFIX support. At the OSCON 2004 auction I offered to implement PREFIX for Module::Build if a few hundred dollars were donated to TPF. Folks ponied up the money (alas, I do not have their names) and it will be done.
Now if we can get Module::Build to include a pass-through Makefile.PL by default, then we'll have eliminated the major reason a ton of people have sworn off Module::Build (other than the impression of personality problems that lead to broken decisions like this).
(I'm assuming that the major Win32-related issues with Module::Build have been addressed.)
There will certainly be other issues to address, but this particular one needs to be addressed immediately.
The second issue will probably be getting Module::Build into core Perl.
- tye.
MB is slated to go into 5.10 along with CPAN. | http://www.perlmonks.org/index.pl?node_id=409857 | CC-MAIN-2016-36 | refinedweb | 1,848 | 67.76 |
Add Managed Associations with Core Data Services
04/30/2019
You will learn
- How foreign keys are handled in managed associations
- What the CDS builder generates
In this tutorial you will enhance the product catalog of the SAP Cloud Platform business application project
ITelO with a category for the products. To realize this you add a managed association to the data model and then add annotations, to show the category on the UI.
We use a slightly reduced variant from the cloud-samples-catalog repository for this tutorial, which is a reuse module of the reference application
ITelO. This reference application is a showcase for the application programming model on the Cloud Platform.
The application programming model makes use of Core Data & Services (CDS) that comes with an infrastructure that enables you to focus on your business problem you want to solve.
Over the course of three tutorials you will add some CDS code and see how the application changes on the UI and its generated files. These tutorials cover the topics association, annotation, and localization.
In this tutorial and others the general name “SAP Web IDE” is used. Specifically, the “Full-Stack” version is implied throughout.
We prepared an archive for you as the basis for the upcoming tutorials. Follow the link to GitHub to download the archive.
This archive contains the
cloud-samples-catalog project, except some code, which you will add in this tutorial and the following. When you have finished all three tutorials of this group, the coding will be the same as in the original project.
GitHub Repository: SAP/cloud-samples-catalog, Branch: tutorial
Open your SAP Web IDE and import the project to your workspace.
Open SAP Web IDE and choose File | Import | File or Project.
Select the archive you just downloaded.
Select Extract Archive and confirm.
In case you have problems with this procedure, these links might help (open in a new tab):
Import a Project from an Archive
Add the following code to your
db/model.cds file in the entity
Products between
description and
image (line 18).
category: Association to Categories;
You’ve modelled a category inside the entity
Products, which is an association to the entity
Categories.
If you encounter any validation errors inside the editor, please fix these errors to have a valid CDS file before you save.
Exception to this rule: The code might show validation errors out of the box. Please ignore these. The
.cdsfiles from the provided repository are valid.
This is how your code should look:
To enable build upon save go to Tools | Preferences | Core Data Services and select the checkbox for Performs CDS Build upon save. We assume you have enabled build upon save during the course of these tutorials.
Save your
.cds file to start the builder.
This starts a build of all
.cds file in this project. To check which files are created go to View | Console in the SAP Web IDE.
>If you wonder about the filenames you see in the console, they are derived from the namespace of the `.cds` file, the service (`srv`), and entities (`srv/db`) they are modelled in. This is the pattern: `NAMESPACE_SERVICE_ENTITY`
Let’s check what the builder generated for you.
Use the code editor to view the files. You find the entry Open Code Editor in the context menu.
First, open the
db/src/gen/CLOUD_PRODUCTS_PRODUCTS.hdbcdsfile in the code editor and have a look at line 12.
Notice, that the CDS builder has added the foreign key
CATEGORY_IDto your entity. So the CDS builder adds and resolves the foreign key for you by using a managed association. This file is the design time artifact for the
CLOUD_PRODUCTS_PRODUCTStable that you will get to know later in this tutorial.
The second
.hdbcdsfile is
db/src/gen/CLOUDS_PRODUCTS_CATALOGSERVICE_PRODUCTS.hdbcds. The code that has been generated by the CDS builder is in line 10, 21, and 39.
In the file
srv/src/main/resources/edmx/clouds.products.CatalogService.xmlwithin the entity type named
Products, you have a navigation property (line 80) and a property for the foreign key (line 98).
In the
EntityContainer, there is a new
Associationand
AssociationSet.
This is the added
AssociationSet
This is the added
Association
The association is used in both, the association set and the navigation property.
The files in the
edmxfolder describe the OData exposure of your service.
These are all code snippets the CDS builder generates out of your added association, which you would otherwise have to code manually in all these separate files.
More information about associations in CDS on the SAP Help Portal
Question: Where has the
Association and
AssociationSet been added?
Hint: It’s in the
srv/src/main/resources/edmx/clouds.products.CatalogService.xmlfile. But in which part of the file?
Add your answer (case-sensitive) in the box below.
So you can see the effect of your changes in the modeling, we have included some sample data. For the deployment, we have to add this association also to the mapping of the sample data.
The mapping of the data model and actual data is done in an
.hdbtabledata file. This is used during the deployment of tables and data to the database.
Go to
db/src/_csv/data.hdbtabledata and add this code to line 16:
"CATEGORY_ID",
Save your changes.
Open the context menu on the
dbmodule and select Build | Build
Open SAP HANA database explorer.
If you haven’t enabled that feature yet, go to Tools | Preferences | Features and enable
SAP HANA Database Explorer.
Add your newly created database to the database explorer.
Hint to identify your database: The name of your project is also contained in the name of your HDI container, that you need to select in the
Add Databasedialog.
Choose
Tablesand open the
CLOUDS_PRODUCTS_PRODUCTStable.
There you find the
CATEGORY_IDwith its
SQL Data Type[NVARCHAR(36)] and
Column Store Data Type[STRING].
Open Dataand locate the column
CATEGORY_ID.
Table
CLOUDS_PRODUCTS_PRODUCTS
The values there are defined in the table
CLOUDS_PRODUCTS_CATEGORIESin the column
ID. Open the
CLOUDS_PRODUCTS_CATEGORIEStable and select
Open Datato see the values in the first column.
Table
CLOUDS_PRODUCTS_CATEGORIES
This connection in design and deployment is what we added in this tutorial. In the next tutorial you will learn how to use this on the UI.
One last task for you, in this tutorial
- Go to the data in table
CLOUDS_PRODUCTS_PRODUCTS.
- Copy the description for
ID
HT-1007.
- Paste the description in the box below.
Some interesting blog posts
- Introducing the new Application Programming Model for SAP Cloud Platform
- Interview with
Rui Nogueiraon the new Application Programming Model for SAP Cloud Platform
ITelO– A Sample Business Application for the new Application Programming Model for SAP Cloud Platform
- Top 5 Time-Saving Benefits of the Application Programming Model for SAP Cloud Platform | https://developers.sap.com/tutorials/cp-apm-foundation-01.html | CC-MAIN-2019-30 | refinedweb | 1,129 | 56.45 |
Tutorial
How To Set Up Zero Downtime Rails Deploys Using Puma and Foreman
Introduction
Puma is an efficient Ruby web server that handles Rack apps (such as Rails) very well. Puma offers concurrency using threads and/or workers. You can use Puma’s clustered mode (using workers) to deploy apps without downtime. Puma will restart workers one by one, other workers will continue to handle during this time. This is good because your users will no longer see delayed responses or error pages when you deploy updates to your app.
In this guide, we are going to use the following:
- Puma as our web server
- Foreman to manage our app (this is not strictly necessary but it does make life a bit easier)
- Capistrano to deploy (the deployment instructions are fairly generic and can easily be re-applied to other methods)
This guide assumes you have an existing Rails app you’ll be using. See the Rails Getting Started guide if you’re not up to there yet. This guide also assumes you are using Ubuntu; read up on how to set it up here. We will be using Upstart in this guide.
Installing Puma
Puma is installable via RubyGems:
gem install puma
Once installed, you can launch an app using Puma simply by calling:
puma config.ru # or whatever your *.ru file is called
But instead, we’re going to use Foreman to manage our app. Read on.
Installing Foreman
Foreman can be installed using RubyGems:
gem install foreman
Configuring Rails with Puma and Foreman
First, add puma and foreman to your
Gemfile:
# Gemfile gem 'puma' gem 'foreman'
And install:
bundle install
Foreman requires a
Procfile which is used to specify processes to run. You can give processes names, for example
web and
worker, which you can then use to distinguish between which types of processes to run. The next thing we will do is create our
Procfile:
echo "web: bundle exec puma -e $RAILS_ENV -p 5000 -S ~/puma -C config/puma.rb" >> Procfile
You can change the port number to anything you like, we’ll use 5000. Next, create a puma config file and edit it to resemble this:
# config/puma.rb threads 1, 6 workers 2 on_worker_boot do require "active_record" cwd = File.dirname(__FILE__)+"/.." ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished ActiveRecord::Base.establish_connection(ENV["DATABASE_URL"] || YAML.load_file("#{cwd}/config/database.yml")[ENV["RAILS_ENV"]]) end
It is important to configure the number of threads and workers correctly, and the Puma README offers some advice on how best to do this. To run Puma in clustered mode, you will need at least two workers. Generally you should match the number of cores available on your VPS. On Ubuntu you can use the command:
grep -c processor /proc/cpuinfo
to check how many cores you have. Alternatively, you can tell based on the type of droplet you are using. Note, also, that each worker will use the given number of threads, so with the configuration above, there will be a minimum of 2 and and a maximum of 12 threads.
The
on_worker_boot block establishes ActiveRecord connections whenever a worker is booted (ie. during a deploy). If you are using a different ORM you will want to make the appropriate connections here.
Once you have configured your
Procfile and
puma.rb you should be able to run your application. To do this, simply run the command:
foreman start
Once loaded your app should be available at localhost:5000 (or whichever port you specified in your
Procfile).
Deployments
Foreman is able to export to other process management formats; we are going to use Upstart, which is the Ubuntu process manager. You can export Ubstart-compatible scripts using Foreman by running:
sudo foreman export upstart /etc/init -a puma-test-app -u puma-test-user -l /var/puma-test-app/log
In this example,
puma-test-app and
puma-test-user would be replaced by the appropriate app name and system user. We don’t want to do this locally, though - what we want to do is automatically create upstart scripts as part of our deployment, so that we are always launching the app correctly and letting Upstart ensure it keeps running.
We’ll use Capistrano to deploy. If you haven’t already done so, set up your app to work with Capistrano by running:
capify .
in your app’s root directory. Next, in the file created at
config/deploy.rb, add the following:
# config/deploy.rb set :app_name, "puma-test-app" set :user, "puma-test-user" namespace :foreman do desc "Export the Procfile to Ubuntu's upstart scripts" task :export, :roles => :app do run "cd #{current_path} && #{sudo} foreman export upstart /etc/init -a #{app_name} -u #{user} -l /var/#{app_name}/log" end desc "Start the application services" task :start, :roles => :app do run "#{sudo} service #{app_name} start" end desc "Stop the application services" task :stop, :roles => :app do run "#{sudo} service #{app_name} stop" end desc "Restart the application services" task :restart, :roles => :app do run "#{sudo} service #{app_name} start || #{sudo} service #{app_name} restart" end end
The
foreman:export task will update Ubuntu’s Upstart scripts, we will call this whenever we deploy. The other tasks manage the Upstart-backed service. Your
Procfile will not change regularly, but if it does, you’ll need to restart the application service to register these changes; you can do now do this by running:
cap foreman:restart
However, in the majority of cases, all we will want to do is tell Puma to do a phased restart. To do this, we are going to use the Capistrano
deploy:restart task, which is run automatically as part of a standard Capistrano deployment. Add the following to your
config/deploy.rb:
namespace :deploy do task :restart, :roles => :app do foreman.export # on OS X the equivalent pid-finding command is `ps | grep '/puma' | head -n 1 | awk {'print $1'}` run "(kill -s SIGUSR1 $(ps -C ruby -F | grep '/puma' | awk {'print $2'})) || #{sudo} service #{app_name} restart" # foreman.restart # uncomment this (and comment line above) if we need to read changes to the procfile end end
The first thing the
deploy:restart task does is call
foreman:export to update our Upstart scripts. Then, we send the
SIGUSR1 signal to all running instances of Puma. It does this by finding the process ID for the Puma master process, then sending it the appropriate signal. The commented out command finds the PID on OS X, it may be helpful to keep both on hand in case you need to test locally. When Puma receives the
SIGUSR1 signal it will, commence a phased restart. If we are unable to send the
SIGUSR1 signal for whatever reason, we fall back to restarting the service the old fashioned way.
If you need to register changes to your
Procfile then you will have to call the
foreman:restart task. To this, comment out the line that starts with
run, and uncomment the
foreman.restart line. You shouldn’t need to do this regularly as ideally your
Procfile would remain the same between deployments.
Testing
Before testing, make sure Foreman is installed on your Ubuntu VPS using the instructions above.
If you haven’t already, you’ll need to set up your VPS to work with Capistrano. First, fill in the blanks in your
config.deploy.rb for your web server, repository, etc. Then configure your VPS by running:
cap deploy:setup
You can then run:
cap deploy
to do your first deployment. During this deploy, your app will be started as an Ubuntu service on your VPS. In all subsequent deployments, Puma will get sent the
SIGUSR1 signal which will trigger a phased restart; you should be able to continue using your app throughout the restart process.
Database migrations
While Puma does a phased restart, your app will be running two different codebases. Thus, you will need to ensure that both codebases work with your existing database schema, which may be tricky if you have to make migrations. (If you have a substantial migration to make and need to take the site offline, you can do so by calling your new
foreman:stop Capistrano task, then calling the
foreman:start task to go back online later.) | https://www.digitalocean.com/community/tutorials/how-to-set-up-zero-downtime-rails-deploys-using-puma-and-foreman | CC-MAIN-2020-40 | refinedweb | 1,372 | 59.84 |
:
- Create a directory under
workers/of your SpatialOS project (for example,
workers/csharp).
Create a worker configuration file file (for example,jin the same directory. This file is controlled by you, and should provide you with enough flexibility without the need to turn off automatically generated build scripts.
We provide a seed csproj file which you should start with.
This includes
BuildTargets.targetsand
CsharpWorker.targets, which set up the expected platforms and output for you. You can choose any name for the assembly - this will then be the name of the worker and the executable.
The
csprojis set to build an executable, so you will need to provide a
static int Main(string[] args)somewhere in the source code in order for the build to succeed.
For example, you can create a
src/Test.cswithin the worker directory with the following contents:
using System; class Test { static void Main() { Console.WriteLine("Hello World!"); } }
Build your worker using
spatial worker buildor
spatial worker build <worker type>.
Add the worker to your SpatialOS application.
Assemblies produced by
spatial worker buildcontain an
exefile with the assembly name set in your
csprojfile. For example, the seed csproj produces assemblies containing
CsharpWorkerName.exe.
You should launch them using
mono CsharpWorkerName.exe. Configure managed workers to be launched this way in the launch config.
For more about the build setup, see the Building page. | https://docs.improbable.io/reference/13.1/csharpsdk/setting-up | CC-MAIN-2019-18 | refinedweb | 228 | 50.23 |
It’s no secret that Enzyme has become the de facto standard for React components testing, but there are other good options around.
For example: React Test Renderer.
I personally like Test Renderer because of the way it works: it renders React components into pure JavaScript objects that are easy to use and understand.
Another advantage of React Test Renderer is that it is maintained by a core team at Facebook and is always up to date.
React Test Renderer has a great documentation, so I won’t duplicate it. Instead, I’d like to illustrate a few of the most common use cases in an example with a test-driven development (TDD) approach.
Setup
Test Renderer has a really easy setup process — just install the lib and you’re ready to go:
npm install --save-dev react-test-renderer
Ordinarily, we’d need a component in order to start writing a test, but React Test Renderer enables us to write a test before the component is implemented.
Note: The reason for this is that TDD works like a charm when you test functions, so taking into account that most of the React components are pure functional components, TDD is applied really well here, especially with React Test Renderer.
Sometimes it’s even faster to write your component starting with tests in case of complex logic because you need fewer iterations and debugging.
Let’s consider the requirements for a simple component:
- It needs to have a class
btn-group
- It should be able to render its children
Testing
className
First, we need to test the class of an empty component (as we follow TDD):); });
The test has three steps: test instance creation, element querying, and assertion.
Let’s skip over the more in-depth explanation of that for now and focus on fixing the test. At first, it will break (as expected):
No instances found with node type: "undefined"
That means we need to add some node with some type. In our case, the type should be
<div>:
const BtnGroup = () => <div />;
Once we change the code, the file watcher runs the test again and we receive an updated message:
expect(received).toEqual(expected) // deep equality Expected: "btn-group" Received: undefined
We’re already asserting. To pass the first test, all we need to do now is add a
className prop:
const BtnGroup = () => <div className="btn-group" />;
After this change, we’ll see that rewarding green message:
As soon as the test is green, we can slow down a bit and revisit the code of the test line by line. Here’s that code again:); });
[ 1 ] Test Renderer has only one way of creating a component — the
create method — so just import and use it.
[ 2 ] When creating a component, getting a test instance is a standard boilerplate code for React Test Renderer.
[ 3 ] There are two main ways to query for an element in Test Renderer: by type and by props. I prefer querying by type when there are no other containers around, as in the current example. We’ll get to other methods a bit later.
[ 4 ] This assertion is pretty self-explanatory; just check that the
className prop value includes
btn-group and you’re good to go.
Testing children
Let’s continue adding functionality to the
BtnGroup component we already have since we know we need to meet the following requirement:
It should be able to render its
Testing the
children prop is very straightforward. We just need to make sure that the passed value matches the result rendered:
import React from "react"; import { create } from "react-test-renderer"; const BtnGroup = () => <div className="btn-group" />; test("renders BtnGroup component with children", () => { // [ 6 ] child text const text = "child"; // boilerplate code, already mentioned in [ 2 - 3 ] above const instance = create(<BtnGroup>{text}</BtnGroup>).root; // query for element const element = instance.findByType("div"); // assert child to match text passed expect(element.props.children).toEqual(text); });
[ 6 ] The value we pass to the component and the value we use to assert against it should be the same.
Since we’re using TDD, you might expect the test to break here. However, React supports passing children to components out of the box, so our test will be green.
If you’re wondering if the test is running successfully, you can print the element value with console.log.
The output is as follows:
Testing any props
Let’s continue adding requirements for our components: it should render any props passed.
Here’s a test:
import React from "react"; import { create } from "react-test-renderer"; // the component is still not updated as we use TDD const BtnGroup = () => <div className="btn-group" />; test("renders BtnGroup component with custom props", () => { // generate some custom props const props = { id: "awesome-button-id", className: "mb-3", children: "child" }; // boilerplate code const instance = create(<BtnGroup {...props} />).root; // get element by component name const element = instance.findByType("div"); // assert if an additional className was added to existing one expect(element.props.className).toEqual("btn-group mb-3"); // assert "id" prop to match passed one expect(element.props.id).toEqual(props.id); // assert "children" to match passed expect(element.props.children).toEqual(children); });
The code of the test already looks familiar; we’re just checking that the prop values passed match.
Now, the test will break and issue the following message:
Expected: "btn-group mb-3" Received: "btn-group"
What happens now is that we need to actually start passing props. Otherwise,
btn-group will always be there:
const BtnGroup = props => <div className="btn-group" {...props} />;
Here’s where having tests comes in handy. We have another message telling us that the
className case is specific:
Expected: "btn-group mb-3" Received: "mb-3"
Now, the passed props replace the props that our component already has; in our case,
btn-group is replaced with
mb-3.
We should change the code of the component to fix this so that it handles
className differently:
const BtnGroup = ({className = "", ...rest}) => <div {...rest} className={`btn-group ${className}`} />;
The trick here is to destructure props so that items needing special treatment have their name and all other props consolidated into a
rest object.
Again, there is no special approach needed for the
children prop, although they’re passed now as a regular prop instead of in the body of the component.
Now, the test should be green again. All of the previously written tests will also be green:
Note: I left a
console.loghere to show how you can check the output at any time.
As you can see, all of the assertions we’ve done — for now — are just checks that strings match. But if there’s a need to check the number of items, we can use this handy method in Test Renderer:
testInstance.findAllByType().
Let’s see how it works.
Testing the number of items
To demonstrate how to count items in React Test Renderer, we should have some component that renders an array or a list. So, the requirement would be that the component should render a list with correct items count.
To follow TDD, we’ll start with an empty functional component that renders an empty
ul tag:
const ProductList = ({ list }) => <ul />;
Here’s a test we could write:
import React from "react"; import { create } from "react-test-renderer"; test("renders a list of items with correct items count", () => { // prepare the list for testing const list = [{ id: 1, text: "first item" }, { id: 2, text: "second item" }]; // boilerplate code const root = create(<ProductList list={list} />).root; // [ 7 ] get list items const elementList = root.findAllByType("li"); // assert if the length match with original list passed as a prop expect(elementList.length).toEqual(list.length); });
The goal of this test is to check whether the number of rendered nodes equals the number of passed items.
Initially, the test will break with the following message:
To fix the test, we should render list items with
li tags inside the container:
const ProductList = ({ list }) => <ul> {list.map(li => <li key={li.id}>{li.text}</li>)} </ul>;
Now the test is green and we can talk about the code.
[ 7 ] To query specifically for nodes with type
li, I use the
testInstance.findAllByType() method that returns all elements with tag
li.
There are also some other methods to search for multiple items:
testInstance.findAll() and
testInstance.findAllByProps(). The first is useful when you need to check the overall number, while the second comes in handy when you want to count a specific prop, e.g., all nodes with a specific
className.
Testing text
In most cases, having a test for only item count is not sufficient, and you’ll also want to test the actual text a user can read.
There’s no specific functionality in React Test Renderer for that purpose, but that’s pretty easy to write if you consider that text can only be found in children.
import React from "react"; import { create } from "react-test-renderer"; test("renders all items with correct text", () => { // [ 8 ] prepare the list for testing const list = [{ id: 1, text: "first item" }, { id: 2, text: 33 }]; // boilerplate code const root = create(<ProductList list={list} />).root; // get list items const elementList = root.findAllByType("li"); // [ 10 ] Iterate over all items and search for text occurence in children elementList.forEach((el, index) => { // [ 11 ] convert text to string expect(el.children.includes(`${list[index].text}`)).toBe(true); }); });
Having a list of all items in [ 8 ], we can iterate over the nodes of the component and make sure that every text was found [ 10 ].
This test is instantly green as soon as the component doesn’t have any filtering or sorting logic inside and just renders a list as it is, so we don’t have to change any lines of code in the test.
The only nit to add here is that rendered text is always a string, regardless of the value type you pass [ 11 ].
Testing event handlers and Hooks
Some of the functional components rely on more than just props and have their own state management thanks to the Hooks API. Consider a classic example of a toggler component with the following requirements:
- Should render a button
- Should toggle children on button click
That means that children visibility should change on click.
Here’s an example of a test you could write:
import React from "react"; import { create, act } from "react-test-renderer" // let component to be a fragment for start const VisibilityToggler = () => <></>; test("should toggle children nodes on button click", () => { const root = create( <VisibilityToggler> <div>awecome content</div> </VisibilityToggler> ).root; // helper to get nodes other than "button" const getChildrenCount = () => root.findAll(node => node.type !== "button").length; // assert that button exists expect(root.findAllByType("button").length).toEqual(1); // query for a button const button = root.findAllByType("button")[0]; // remember initial nodes count (before toggle) const initialCount = getChildrenCount(); // trigger a hook by calling onClick of a button act(button.props.onClick); const countAfterFirstClick = getChildrenCount(); // assert that nodes count after a click is greater than before expect(countAfterFirstClick > initialCount).toBe(true); // trigger another click act(button.props.onClick); const countAfterSecondClick = getChildrenCount(); // check that nodes were toggled off and the count of rendered nodes match initial expect(countAfterSecondClick === initialCount).toBe(true); });
The test looks huge, so let’s not try to fix it right away. First, let’s discuss the code a bit.
[ 12 ] Here is one new thing happens: the
act() method is used to wrap event handler calls.
Why should we? And how should we remember to do so? The second answer is easy: no need to remember, because React Test Renderer checks the code and prints a warning with a reason.
When writing UI tests, tasks like rendering, user events, or data fetching can be considered as “units” of interaction with a user interface.
React provides a helper called
act() that makes sure all updates related to these “units” have been processed and applied to the DOM before you make any assertions from the docs.
In other words, an
act() method “waits” for React updates and makes otherwise async code look synchronous, very similar to
await from ES7.
At this stage, the test can’t find a button and breaks:
To resolve this issue, let’s add a button:
const VisibilityToggler = () => <><button /></>;
The button exists, but the
onClick method is not found:
Don’t forget to add a button:
const VisibilityToggler = () => <><button /></>;
This is the next message you’ll receive after adding an
onClick handler:
Finally, we’re at the point where we’re ready to add some state management with Hooks:
const VisibilityToggler = ({ children }) => { const [isVisible, setVisibility] = useState(false); const toggle = () => setVisibility(!isVisible); return ( <> <button onClick={toggle}>toggle</button> {isVisible && children} </> ); };
Clicking on a button now toggles a state variable
isVisible to the opposite value (
true or
false), which, in return, causes a render of
children in case of
true and skips rendering
children in case of
false.
All tests should be green now. You can find the complete source code for this example here:
Conclusion
Although React Test Renderer is usually associated with snapshot testing, it can still be used to make specific assertions against your components with sufficient accuracy for most common use cases.
I personally like it because it has a clean API, it’s simple, and it’s easy to use along with TDD. I hope you like it, too! “TDD with React Test Renderer”
In the first code snippet of “Testing event handlers and hooks” act is not defined.
Hello, Sam
thank you for your feedback! Yes, the code snippet should be updated and have an import statement for `act` from react-test-renderer alongside with `create` like follows:
`import { create, act } from “react-test-renderer”;`
Here is also a link to source code that I used in the article | https://blog.logrocket.com/tdd-with-react-test-renderer/ | CC-MAIN-2022-40 | refinedweb | 2,295 | 58.82 |
Tracking SDK for App Advertisers
When you advertise your mobile app with CPAlead, you have the option to bid on a Cost Per Install basis (CPI). In order to track your installs, you will be required to install our Tracking SDK into your application. This process is made very easy with our integration SDK.
Step 1: Add the CPAlead Tracking SDK to Your Project
Add the cpaleadtrackingsdk.jar to your project, this can be be accomplished by copying the file into your "libs" directory. Right click the .jar file and goto "Add As Library" you then select your main app and hit "OK".
Step 2: Edit your apps AndroidManifest.xml File
First, you need to make sure you allow internet permission in your manifest:
In order to track your app installs, insert the following code to your application tag in the manifest:
Step 3: Add Tracking Code
Add import code to import our tracking SDK function.
import com.cpaleadtrackingsdk.CPAleadTrack;
Add this tracking function to your apps MainActivity function or class on startup
CPAleadTrack.track(this);
Example:
Thats it!
Once your app changes are uploaded to Google Play production, you are now ready to promote your app! | https://www.cpalead.com/documentation/tracking-sdk/ | CC-MAIN-2019-18 | refinedweb | 197 | 63.8 |
Chat room for all things SwayDB.
@.
Bagimplementation into your code. Proper release will happen when I win the battle against JVM's garbage collector.
import cats.effect.IO import cats.effect.unsafe.IORuntime import swaydb.Bag.Async import swaydb.serializers.Default._ import swaydb.{IO => SwayIO, _} import scala.concurrent.{ExecutionContext, Future, Promise} import scala.util.Failure object Cats3Example extends App { /** * Cats-effect 3 async bag implementation */ implicit def apply(implicit runtime: IORuntime): swaydb.Bag.Async[IO] = new Async[IO] { self => override def executionContext: ExecutionContext = runtime.compute override val unit: IO[Unit] = IO.unit override def none[A]: IO[Option[A]] = IO.pure(Option.empty) override def apply[A](a: => A): IO[A] = IO(a) override def map[A, B](a: IO[A])(f: A => B): IO[B] = a.map(f) override def transform[A, B](a: IO[A])(f: A => B): IO[B] = a.map(f) override def flatMap[A, B](fa: IO[A])(f: A => IO[B]): IO[B] = fa.flatMap(f) override def success[A](value: A): IO[A] = IO.pure(value) override def failure[A](exception: Throwable): IO[A] = IO.fromTry(Failure(exception)) override def foreach[A](a: IO[A])(f: A => Unit): Unit = f(a.unsafeRunSync()) def fromPromise[A](a: Promise[A]): IO[A] = IO.fromFuture(IO(a.future)) override def complete[A](promise: Promise[A], a: IO[A]): Unit = promise tryCompleteWith a.unsafeToFuture() override def fromIO[E: SwayIO.ExceptionHandler, A](a: SwayIO[E, A]): IO[A] = IO.fromTry(a.toTry) override def fromFuture[A](a: Future[A]): IO[A] = IO.fromFuture(IO(a)) override def suspend[B](f: => IO[B]): IO[B] = IO.defer(f) override def flatten[A](fa: IO[IO[A]]): IO[A] = fa.flatMap(io => io) } implicit val runtime = IORuntime.global val test = for { map <- memory.Map[Int, String, Nothing, IO]() _ <- map.put(key = 1, value = "one") value <- map.get(key = 1) //returns "one" } yield { println(s"value: $value") } test.unsafeRunSync() }
PartitionId : TopicId : TopicOffset
I see you did a cats effect 3 release so I am guessing you won the battle with the garbage collector :-)
Oh the battle with the GC has been on for a while. SwayDB outperforms RocksDB when write workload is small-medium but on heavy compaction workloads longer GC pauses occur frequently so the battle is still on.
I would love to understand more about why you made swaydb.
There were many reasons to start but in general I felt that existing solutions always fell short one way or another. Thought a storage engine that (following general Scala philosophy) allowed simple data-structures to be composed easily to build more rich data-structures was needed. A company I used to work at got hefty monthly cloud bills for running ML training on large data so reducing these bills was also a major motivation.
My use case is http based messaging middleware with a large number of topics / mailboxes. where swaydb would be the underlying storage engine
I'm glad to hear that and would love to learn more. Is it a distributed system?
Wondering if there are any performance nobs I should turn.
There is heaps you can do here. The basic idea behind all configurations is that if something can determine Disk, CPU & RAM usage then it should be configurable. But I think this needs more documentation showing experiments with different settings and results.
the RocksDB code we create a RocksDB per day and at an atomic moment each day we drop the oldest rocks db from the end of partition list and add a new one to front of the list...
That's clever. Yep
MultiMap should make this very easy.
I'm sure you know this already but please note that SwayDB is not production ready yet. Quick status overview is that there a total of 2,897 test-cases (unit, stress, integration & performance tests) to ensure that nothing leads to incorrect data or data corruption but a solution for reducing GC pauses on heavy compaction workload is still pending.
So the general idea is the following
* mailboxes are lightweight and are created with a public and a private key * you can write messages to a mailbox using the public key * you can read messages from a mailbox using the private key * best practice transport is to use the http methods they use server sent events and / or web sockets with a few other more esoteric mechanisms * the idea is you can do all your RPC over this. So browsers can talk directly to browsers, servers to talk servers, a browser doesn't have to talk to only "it's" server but can be easily moved to talk to another server * all mailboxes are just a sequence of messages ALWAYS stored to disk * reading a mailbox is just saying what index to start at * tailing a mailbox is an in memory transaction (lowers latency) * work load is LOTS of mailboxes without any specific mailbox really getting a ton of IO though we have use cases where we push pulsar / kafka data through a mailbox and hence get high IO, those work fine you * designed to be distributed but single server for now
we have tried a LOT of things to make this work over the years (various messaging servers most recently pulsar) and they all sort of fall over at some point... for example pulsar with a large number of topics / mailboxes all tailing thrashes really hard, like we can take down large pulsar clusters kind of thrashing.
In effect you can get around the infrastructure game (load balancing, etc, etc) but using this, which is what we do. We run some large customers using it and really avoid a TON of infrastrucutre headaches (think legacy systems, multiple clouds, things moving around all the time and multiple teams)... We run these systems with very small teams because of it. We deploy a service make sure it can reach the hermes server and everything else just works.
For debugging it is really cool. All I need are the two mailboxes of a conversation and I can re-construct the entire conversation... If you built your browser app properly can even do some ELM like things for replay... | https://gitter.im/SwayDB-chat/Lobby?at=5f9fd75ec6fe0131d4cdc198 | CC-MAIN-2022-27 | refinedweb | 1,033 | 64.2 |
Introduction
Long time no write, kids. Sometimes life throws you a curve, you get whacked, and you take a base. Standing on first a little dazed and confused, it’s good to be writing about something simple and fun: the My feature.
Microsoft Visual Basic developers did something that most of you have been doing for years: They wrapped up some more-complex stuff in a wrapper, mixed in the relatively new meaning of the Shared keyword, and made a bunch of everyday stuff easier to get at. The biggest difference is that you don’t have to do the work in this instance.
Very simply, My is a feature in .NET—used mostly by VB programmers—that condenses namespaces and classes into easier-to-use shared behaviors. My includes a plethora of everyday programming capabilities (including access to the registry, clipboard, writing to the event log, file I/O, and moving files across a network) that I’ll show you how to use.
The code in this article is not intended to show you how clever I am, but rather how easy My is. I do have one gentle word of caution: Like cane sugar, too much "My sugar" can become an excuse to avoid grown-up foods for a little longer. Don’t let tools used to make life a little easier become an opportunity to become complacent about mastering OOP. | http://www.informit.com/articles/article.aspx?p=606221&seqNum=9 | CC-MAIN-2019-26 | refinedweb | 233 | 67.89 |
When I first started working with Amazon Web Services (AWS), it didn’t take long before I started to look for a way to automate the creation of cloud resources. This was years ago, when DevOps was only just starting to become a popular concept. Back then, the newly releases Boto3 Python library gave me the result I needed. Using Python scripts, I could make API calls to AWS and interact with resources. For example, creating a S3 bucket can be done with these lines of code:
import boto3 s3 = boto3.resource('s3') bucket = s3.create_bucket(Bucket='mybucket')
This had the advantage of being fast and providing an automated way to get a consistent result. In fact, I still use Boto3 to this day. However, this isn’t true DevOps because it doesn’t give you a way to track changes or destroy the resources linked to the script. Enter CloudFormation.
When looking for a proper DevOps tool, I stumbled upon CloudFormation. This is Amazon’s answer to Terraform, a way to use templates to interact with cloud resources. The advantage of CloudFormation is that the template, or set of instructions, is state aware. Meaning that you can check at any time to make sure that the real world is identical to how things should be. You can also destroy all resources created by a template.
The same code would now look like this:
AWSTemplateFormatVersion: 2010-09-09 Resources: mybucket: Type: AWS::S3::Bucket
My main gripe with CloudFormation is that it can be difficult to guess the right syntax, and if your template fails, it’s even harder to know why it failed. Also, templates can become very big very fast, far more so than code.
But thankfully, last year Amazon announced CDK, and the circle was complete. CDK is a way to write code, just like with Boto3, but the code then in turn creates CloudFormation templates. So you get all the advantages of writing Python, with the advantages of having a state aware template.
Now, creating that S3 bucket looks like this:
from aws_cdk import core, aws_s3 class MyBucketStack(core.Stack): def __init__(self, scope: core.Construct, id: str, **kwargs) -> None: super().__init__(scope, id, **kwargs) bucket = aws_s3.Bucket(self, "mybucket")
Creating the template is done with
cdk synth and deploying it using
cdk deploy. I won’t point out the irony of doing this, but out of the three methods, I think I prefer using CDK. | http://blog.dendory.ca/2020/07/coming-full-circle-with-aws-cdk.html | CC-MAIN-2021-10 | refinedweb | 410 | 72.66 |
Data::Hive::Store - a backend storage driver for Data::Hive
version 1.013
Data::Hive::Store is a generic interface to a backend store for Data::Hive.
All methods are passed at least a 'path' (arrayref of namespace pieces). Store classes exist to operate on the entities found at named paths.
print $store->get(\@path, \%opt);
Return the resource represented by the given path.
$store->set(\@path, $value, \%opt);
Analogous to
get.
print $store->name(\@path, \%opt);
Return a store-specific name for the given path. This is primarily useful for stores that may be accessed independently of the hive.
if ($store->exists(\@path, \%opt)) { ... }
Returns true if the given path exists in the store.
$store->delete(\@path, \%opt);
Delete the given path from the store. Return the previous value, if any.
Stores can also implement
delete_all to delete this path and all paths below it. If
delete_all is not provided, the generic one-by-one delete in this class will be used.
my @keys = $store->keys(\@path, \%opt);
This returns a list of next-level path elements that lead toward existing values. For more information on the expected behavior, see the KEYS method in Data::Hive.
This software is copyright (c) 2006 by Hans Dieter Pearcey.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~rjbs/Data-Hive/lib/Data/Hive/Store.pm | CC-MAIN-2017-47 | refinedweb | 231 | 68.36 |
Cannot place any QGraphicsItems on QGraphicsScene anywhere but at 0,0
Hello guys, Let me start by thanking you for your help. I am trying to create a pong game. I have a QGraphicsView on the QGraphicsScene. I m trying to add a QGraphicsEllipseItem to make a ball but no what I use for coordiates it WILL NOT move or be created anywhere but 0,0. I have tried the same with a QRectf.
//Header
<code>
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
MainWindow(QWidget *parent = 0);
~MainWindow();
protected:
void createActions( );
void createMenus( );
//OpenGL void initializeGL(); void paintGL(); void resizeGL(int width, int height);
//slots:
// void levelUp();
private:
//functions public ????
void drawBall( );
void moveBall(qreal speed);
QRadialGradient *m_gradient; QWidget *m_central_widget; QGraphicsView *m_view; QGraphicsScene *m_scene; QGraphicsEllipseItem *m_ball; QTimer *m_timer; QVBoxLayout *m_layout; QSpacerItem *m_spacer; // Menu widgets QMenuBar *m_menubar; QMenu *m_levelMenu; QMenu *m_helpMenu; QMenu *m_fullScreenMenu; QMenu *m_aboutMenu; // Actions for menu QActionGroup *m_level_actions; QAction *m_firstLevelAct; QAction *m_secondLevelAct; QAction *m_ThirdLevelAct; QAction *m_fullScreenAct; QAction *m_helpAct; QAction *m_aboutAct; qreal m_speed;
#include "mainwindow.h"
MainWindow::MainWindow(QWidget *parent)
: QMainWindow(parent)
{
// ste central widget to add others
m_central_widget = new QWidget;
setCentralWidget (m_central_widget);
// graphics necesssities m_view = new QGraphicsView(m_central_widget); m_scene = new QGraphicsScene(m_view); m_view->setScene (m_scene); // Vertical layout m_layout = new QVBoxLayout(m_central_widget); // this might need to be changed to allow for the walls later m_layout->setMargin (10); m_layout->addWidget (m_view); //set layout m_central_widget->setLayout (m_layout); // TODO: Play with this latter QString TEMP("This is just a place holer"); this->statusBar ()->showMessage(TEMP); // TODO:Play with this later m_levelMenu = this->menuBar ()->addMenu ("Level"); m_scene->setItemIndexMethod (QGraphicsScene::NoIndex); m_view->show (); drawBall ();
}
MainWindow::~MainWindow() { }
// TODO:
void MainWindow::createActions( )
{
m_fullScreenAct = new QAction("FullScreen", this);
m_fullScreenAct->setStatusTip ("Open in FullScreen");
connect(m_fullScreenAct, &QAction::triggered, this, &MainWindow::showFullScreen);
}
void MainWindow::createMenus( ) { }
void MainWindow::drawBall( )
{
// m_gradient = new QRadialGradient(30, 30, 150);
// m_gradient->setColorAt (0, Qt::white);
// m_gradient->setColorAt (.25, Qt::black);
// QBrush brush(*m_gradient);
QBrush brush(Qt::blue);
m_ball = m_scene->addEllipse (30, 30, 30, 30, QPen(Qt::white), brush); qDebug() << m_ball->x (); qDebug() << m_ball->y ();
}
//TODO: Needs to be redone
void MainWindow::moveBall(qreal speed)
{
qreal newX = m_ball->x () + 20;
qreal newY = m_ball->y () + 20;
m_ball->moveBy (newX, newY); m_ball->setPos (newX, newY); m_view->repaint (); m_scene->update ();
}
</code>
@arortell ... There seems to be some pieces missing. Like sceneRect. You should also not have to call m_view->repaint () since you are updating the scene with m_scene->update ().
The sceneRect will automatically update when items are added based on the union of all the boundingRects for the QGraphicsItems. You could try using setSceneRect and give your canvas a large area to play with. If the item goes outside the rect it will be clipped and not drawn. So, a small sceneRect will clip items fast.
An easy way to make the sceneRect the size of the client area is to set it to the viewport rectangle in resizeEvent of the QGraphicsView. In your example, you would override QMainWindow::resizeEvent as follows:
void MainWindow::resizeEvent (QResizeEvent* event)
{
QMainWindow::resizeEvent (event); // This allows the main window to pass it on and handle it before you use it
if (m_scene) m_scene->setScenRect (QRect (QPoint (0, 0), event->size ()));
}
This should give you a good size sceneRect to play with.
@Buckwheat Thank you very much. I wil give it a try.
@Buckwheat that gives me
error: no matching constructor for initialization of 'QRectF'
m_scene->setSceneRect((QRectF(0,0),event->size()));
^ ~~~
I just don't understand I cannot get this thing to show up anywhere but 0,0. And it will not move.
Oky, new development. I just created a QGraphicsRectItem with all the same properties and it works.
How weird is that?
Oky I had the QGraphicsEllipseItem pointer created in the header file. I move that to the cpp file and now it WORKS! If some could please explain to me why, I would be forever greatful. I am obviously very new to QT but I am loving it. It truly is incredible, the documentation is great, but I am having a hard time finding help. Thanks
Its not moving the pointer that is making it work. It is the QRect that I created to test. If I comment out
the rect it all goes back to not moving and only being created at 0,0 | https://forum.qt.io/topic/74543/cannot-place-any-qgraphicsitems-on-qgraphicsscene-anywhere-but-at-0-0 | CC-MAIN-2019-43 | refinedweb | 702 | 51.38 |
Date: Wed, 13 Dec 2000 20:56:08 -0700 From: "Justin T. Gibbs" <gibbs@scsiguy.com> None-the-less, it seems to me that spamming the kernel namespace with "current" in at least the way that the 2.2 kernels do (does this occur in later kernels?) should be corrected.Justin, "current" is a pointer to the current thread executing on thecurrent processor under Linux. It has existed since day one of theLinux kernel and probably will exist till the end of it's life.I'm sure the BSD kernel has some similar bogosity :-)Later,David S. Millerdavem@redhat.com-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgPlease read the FAQ at | https://lkml.org/lkml/2000/12/14/60 | CC-MAIN-2021-31 | refinedweb | 126 | 59.4 |
Imperative Programming. Imperative Programming. Heart of program is assignment statements Aware that memory contains instructions and data values Commands: variable declarations, loops, conditionals, procedural abstraction Commands make a flowchart. Imperative Languages. FORTRAN COB.
Imperative Programming
program Loops;vari: Integer;begin for i := 1 to 10 do beginWriteln('Hello');Writeln('This is loop ',i); end;end.
/*Hello, world” */
#include <stdio.h>
main()
{
printf(“Howdy, earth! \n”);
return 0;
}
char c = ‘A’;
printf(“Print char c: %c.\n”,c);
int n = 6;
printf(“Print int n: %d.\n”,n);
int x,y,z;
x=y=z=1;
y = x++;
x=1;
z = ++x;
printf("x++ gives %d\n",y);
printf("++x gives %d\n",z);
char c = x > 0? ‘T’ : ‘F’
int a = 13;
printf("a is %d\n",a);
printf("&a is %p\n",&a);
int* ptr; -> ptr is a pointer to an int (with no pointee!)
What does it do?
int *ptr;
int num = 4;
ptr = #
int *ptr;
int num = 4;
ptr = #
*ptr = 5;
program Pointers;var p: ^integer;begin new(p); p^ := 3;writeln(p^); dispose(p);end.
intarInt[5];
int arInt2[5] = {1,4,3,2,1};
intar[2][3] = {{1,2,3},{4,5,6}};
char str[] = “I like C.”;
char *ptrStr = “I love pointers.”;
Last character in String is ‘\0’
intmyStrLen(const char* s)
{
int count = 0;
while (*s != ‘\0’){
count++;s++;
}
return count;
}
#define MAX 1000
if(x > MAX)…
typedef struct{
char* word;
int frequency;
int size;
} wordInfo;
wordInfo wordOne;
wordOne.word = “apple”;
wordOne.frequency = 0;
wordOne.size = 0;
wordInfo *wiPtr;
wordInfo wi = *wiPtr;
int f = wiPtr->frequency;
voidswap(int a, int b){
int temp = a;
a = b;
b = temp;
}
voidswap(int& a, int& b){
int temp = a;
a = b;
b = temp;
}
int h, i;
void B(int* w) {
int j, k;
i = 2*(*w);
*w = *w+1;
}
void A(int* x, int* y) {
bool i, j;
B(&h);
}
int main() {
int a, b;
h = 5; a = 3; b = 2;
A(&a, &b);
}
Compute the address of the
argument at the time of the
call and assign it to the
parameter.
Example
Since h is passed by reference, its value changes during the call to B.
Call swap(i, a[i]). Method becomes:
begin integer t;
t := i;
i := a[i];
a[i]:= t
end;
i = 0, a=[3,0,0,0].
Want: i=3, a = [0,0,0,0].
What really happens?
What happens if I call swap(a[i],i) instead? | http://www.slideserve.com/skule/imperative-programming | CC-MAIN-2017-13 | refinedweb | 408 | 73.98 |
Kubernetes API.
Index
Properties
- Cluster
- CurrentField
- DecreaseCooldown
- IncreaseCooldown
- Kind
- KindTag
- Max
- Min
- Namespace
- NamespaceTag
- Replicas
- ResourceName
- ResourceNameTag
- ResourceTag
- Mean
- Median
-.
Cluster
Cluster is the name of the Kubernetes cluster to use.
node.cluster(value string)
CurrentField)
DecreaseCooldown
Only one decrease event can be triggered per resource every DecreaseCooldown interval.
node.decreaseCooldown(value time.Duration)
IncreaseCooldown
Only one increase event can be triggered per resource every IncreaseCooldown interval.
node.increaseCooldown(value time.Duration)
Kind
Kind is the type of resources to autoscale. Currently only "deployments", "replicasets" and "replicationcontrollers" are supported. Default: "deployments"
node.kind(value string)
KindTag
KindTag is the name of a tag to use when tagging emitted points with the kind. If empty the point will not be tagged with the resource. Default: kind
node.kindTag(value string)
Max
The maximum scale factor to set. If 0 then there is no upper limit. Default: 0, a.k.a no limit.
node.max(value int64)
Min
The minimum scale factor to set. Default: 1
node.min(value int64)
Namespace
Namespace is the namespace of the resource, if empty the default namespace will be used.
node.namespace(value string)
NamespaceTag
NamespaceTag is the name of a tag to use when tagging emitted points with the namespace. If empty the point will not be tagged with the resource. Default: namespace
node.namespaceTag(value string)
Replicas
Replicas is a lambda expression that should evaluate to the desired number of replicas for the resource.
node.replicas(value ast.LambdaNode)
ResourceName
ResourceName is the name of the resource to autoscale.
node.resourceName(value string)
ResourceNameTag
ResourceNameTag is the name of a tag that names the resource to autoscale.
node.resourceNameTag(value string)
ResourceTag
ResourceTag is the name of a tag to use when tagging emitted points the resource. If empty the point will not be tagged with the resource. Default: resource
node.resourceTag(value string)()
Mean
Compute the mean of the data.
node|mean(field string)
Returns: InfluxQLNode
Median
Compute the median of the data. Note, this method is not a selector,
if you want the median point use
.percentile(field, 50.0).
node|median | http://docs.influxdata.com/kapacitor/v1.3/nodes/k8s_autoscale_node/ | CC-MAIN-2017-47 | refinedweb | 350 | 52.05 |
I am sitting right now in the "Opening Party" for the 97 Things Every Programmer Should Know book. Someone had mentioned this to me when I was touring in South Africa, and I had never gotten a chance to put together an entry, so here it is. Thanks to Thomas Huberz for letting me quote from his book while people are talking in Norwegian.
Responsibility is Important When Dealing With Single Responsibility
Many people apply the Single Responsibility Principle (SRP) way past where it should be applied. SRP is defined in an article in the book by Uncle Bob Martin as a class needing only a single reason to change. He gives an example as follows:
public class Employee {
    public Money CalculatePay() …
    public String ReportHours() …
    public void save() …
}
The suggestion is to separate these methods into three separate classes.
public class Employee {
    public Money CalculatePay() …
}

public class EmployeeReporter {
    public String ReportHours() …
}

public class EmployeeRepository {
    public void save() …
}
The explanation provided is:
Some programmers might think that putting these three functions in the same class is perfectly appropriate. After all, classes are supposed to be collections of functions that operate on common variables. However the problem is that the three functions change for completely different reasons. The CalculatePay method will change whenever the business rules for calculating pay do. The ReportHours method will change whenever someone wants a different format for the report. The Save function will change whenever the DBAs change the database schema. These three reasons to change make Employee very volatile. It will change for any of those reasons.
Uncle Bob’s example is an extreme case of a SRP violation. Many tend to end up in less obvious situations such as the following.
public class Employee {
public void CalculatePay() …
public void ChangeSex() …
public void UpgradePayGrade() …
}
By the logic presented in the explanation we could very easily argue that these three methods should also be in three separate classes. The CalculatePay method will change whenever the business rules for calculating pay do. The ChangeSex method will change whenever the business rules for changing the sex of an employee change. The UpgradePayGrade method will change whenever the logic for upgrading the grade of a user changes. Should we however separate these methods off of the Employee object?
public class EmployeePaymentCalculator {
public void CalculatePay(Employee employee) …
}
public class EmployeeSexChanger {
public ChangeSex(Employee employee) …
}
public class PayGradeUpgrader {
public UpgradePayGrade(Employee employee) …
}
The basis for the suggestion of SRP is that each class should only have one reason for change. This is all fine and good so long as people temper that suggestion with what the responsibility of an object is. The responsibility of an object is to encapsulate data and provide behaviors. Without taking into account the basics of what an object’s responsibilities are one could follow SRP to the demise of their object oriented code and easily end up with procedural code and many do. I want to make it clear that Bob does specify that object responsibilities are important but many tend to miss this subtle detail.
One can find a quick smell to help identify the places where we may have incorrectly applied SRP and created procedural code by noticing the parameter arguments that are being passed. If we are passing an Employee how are we accessing it? Are we breaking state encapsulation? If we have getters and setters being called in that method we should really be reconsidering whether that code actually falls under the responsibility of our object.
In the case above we also have a tradeoff between SRP and cohesion. Cohesion is a very important concept as well as the reasons for change. If however we are responsible, and keep in mind the responsibilities of an object when applying single responsibility principle we can end up in the happy middle ground between god classes and procedural code.
Nice article. I think the SRP goes hand in hand with double dispatch for not breaking encapsulation such as:
Employee {
String employeeToken;
Money calculatePay(EmployeePaymentCalculator)
{
return EmployeePaymentCalculator.doCalculate(this.employeeToken);
}
}
In Java, .Net hybrid languages we have no choice other than using MethodLess roles, so maybe a Person may wear a UnsatisfiedWithSex hat (as an interface) and somehow isolate that use case from others. to represent the user mental model more descriptive:
class Person implements UnsatisfiedWithSex {
changeSex(SurgeryInstitute) {
if (SurgeryInstitute.canChangeSex(this.attributes) {
this.sex = this.sex.reverse();
}
}
}
Why dont you do something like this
class Employee {
public decimal CalculatePay(decima amount) {
return new PayCalculator.CalculatePay(amount);
}
}
@Jarrett
There are other ways around that.
You can use a ViewModel as well. CQRS makes this even simpler.
Good post. I agree with you 100%. The key is where the data lives. If you create another class and have to give it all of the data of the original class to do it’s work, then you might reconsider creating that new class, or consider splitting the data.
In my experience, there’s a conflict between SRP and MVC. MVC wants to avoid anemic domain models. The business goes on the model. But it can easily be remedied.
class Employee {
/* snip */
public decimal CalculatePay(IPayCalculator payCalulator) {
return payCalculator.CalculatePay(this);
}
}
This works just fine. It’s a simple method-level injection refactor.
Interesting post and something I’ve been thinking of for a long time.
I’m one of those who have a hard time to describe what SRP _really_ means, just because the reasons you mention (“eg person object may be changed both by changing her/his name and her/his sex).
<
>
Hmm… Aren’t we back to the original problem? To me it’s just obvious from a gut feeling perspective that db methods shouldn’t be part of the entities but… In your repository’s Save() method – aren’t you grabbing the data out of the person object (whether it’s done by an orm, reflection, props or whatever is probably another story)?
hjhshsj it sounds like you are trying to make an argument that procedural code is more maintainable? Having many single method objects with low cohesion is not maintainable, there are trade offs involved.
EmployeeSexChanger. Hope I don’t ever need one of those!
It’s a matter of scope, how “deep” you want to define what constitutes having a single reason for change.
I’d argue that those methods do belong in the Employee (or base class) based on the arguement that any changes to them reflect a change to the employee.
It’s perspective and to be efficient you do need to generalize. For example, take Newtonian physics which says drop a feather and a bowling ball on earth the same distance in a vaccuum and they’ll fall at the same rate. Generally, true. Actually, false. Accelleration is determined by the combination of the masses involved. The bowling ball *does* fall faster, it’s just that the difference in mass between Bowling Ball + Earth vs. Feather + Earth is relatively insignificant.
Generalization has to apply to some extent to anything we do otherwise we miss the forest for the tree.
Without taking into account the basics of what an object’s responsibilities are one could follow SRP to the demise of their object oriented code and easily end up with procedural code and many do
Code maintainability is far more important than fuzzywarm feelings of pseudo-OO.
I was watching the DCI talks from Oredev. Actually, I think they said your data objects should be dumb. If I understand it correctly, the behaviors of the objects should be simply modifying the data inside. The other interesting behavior (business transactions) should be bound to context and provided or injected by ‘roles’. Alright, I have to admit I don’t fully understand DCI architecture. But, I believe this is somewhat related to what we are talking about here.
zvolkov
2.
a fundamental, primary, or general law or truth from which others are derived: the principles of modern physics.
3.
a fundamental doctrine or tenet; a distinctive ruling opinion: the principles of the Stoics.
Is it actually a principle?
OMG. Finally somebody noticed this. Any principle can be abused to the absurd.
Excellent post. As someone who is constantly defending himself from others as to the exact “boundary” to the scope of the “R” in SRP, this article hits me head on. In short, I answer with “it depends”, which doesn’t satisfy their scientific requirements for adhering to such an idea… even if it’s just a principle. I wish I had a better way of describing the exact stages of an object between zero cohesion and god classes and *proving* it’s near that sweet spot of maintainability and understandability (without even taking into account other important aspects like method naming and cyclomatic complexity).
good post… i think i haven’t really grasped yet what uncle bob means exactly with his definition of SRP. Taken litterally it can be quite extreme, as you describe.
But taken only as a rule of thumb i think it helps me watch out and not let classes grow to big. And it’s a great feeling when i see an ‘Extract Class’ refactoring taking shape. | http://codebetter.com/gregyoung/2010/03/24/the-98th-thing-every-programmer-should-know/ | CC-MAIN-2021-43 | refinedweb | 1,522 | 53.92 |
Search:
Forum
Lounge
Teaching programming
Teaching programming
Jul 4, 2012 at 3:28am UTC
ne555
(4991)
In about a month I'm going to apply to a TA job for an introductory course and for OOP.
The test is to give a public class, the subject will be decided a few days before.
Just wanted to know if you've got any suggestion, or care to share your experience as a teacher or an student.
Can't recall my own, only that really learn when joined a group to participate in the ICPC.
Regards
Jul 5, 2012 at 12:58pm UTC
Gaminic
(1551)
I'm a TA (actually, I'm not; I'm a researcher, but since my boss doesn't have a TA, I get to do it once in a while) at a University. Programming is most definitely not the main focus of the courses I assist, but it's considered a "necessary skill" (read: one semester of Intro to Java & OOP, 90% fails, scores are raised artificially and after deliberations about 80% "passed"). When an assignment involves a bit of programming, all hell breaks loose, and it's my job to guide the students.
I few things I've learned:
a) Assume your class knows nothing. Not because that's the case, but because any other assumption on their background knowledge is wrong. Things "everyone knows" are, in fact, not known by everyone.
b) State the obvious, then state it again. Remember back in school when you got annoyed because your teacher spent so much time on obvious things and skimmed over harder parts? It'll start making sense now. Once they understand the basics (and trust me: most of them still won't, regardless of how often you repeat it), they can do the harder parts by themselves. If they don't understand the basics, they're lost forever.
c) Code is scary. Intro to Java was a mandatory course which many failed, and even the sight of a line of code will send most students back into fetal position. Focus on explaining
the logic
, not the syntax. Use analogies. My favourite way of teaching the basics of programming is by telling a story that keeps on building on itself (see below).
d) Force your students to do exercises throughout the year. It's more work for you (and for them; they'll hate you for it), but it's the only way. Students that try to learn their Java right before the exam fail. There are very few exceptions, and those are usually people with prior experience anyway, who can do the exercises in a few minutes.
e) Teach your students to start programming on paper. Step 1: write down what your program must do. Step 2: high-level pseudocode. Step 3: low-level pseudocode. Step 4: boot computer and implement. Once they get more experience, they can drop step 3 if they want.
Force
them to do step 1 and 2, e.g. by including it in the exercises.
*Intro to Java story:
I always use the "dimwitted restaurant/bank employee" story. It's about a new employee that can't think for himself, but does exactly what you tell him to. He represents the program. The restaurant/bank manager is teaching him, step by step, how to do his job. He represents the programmer.
Use a few bogus "functional" methods (e.g. "greetCostumer", "getOrder", "bringFood", etc.) as filling, then start explaining different core elements step by step by continuing the story, e.g.
Phase 1: (calling functions) pretend customer, go greet and get order.
Phase 2: (conditions)
if
there is a customer, go greet and get order;
else
do the dishes.
Phase 3: (for loop) assume there are infinite customers, go greet and get the order of 50.
Phase 4: (while loop) repeat phase 3 for all the remaining customers.
Once OOP/Classes come into play, you can introduce a second character ("dimwitted chef") and have the two work together.
Ultimately, the program/story I ended up with interplay between the following:
Customers:
arrivals generated randomly [this was the setup for a simulation exercise]. Could have the state "new", "ordered" and "finished". In an alternate version, the concept of customers was replaced with "seats"/tables, of which there were a limited number.
Dimwitted Waiter:
starts idle. If a new customer enters and the waiter is idle, customer must wave() to activate the waiter (or trigger on arrivals). If new customers are available, it greets and get orders (assumed instant-decisions); if food was ready, it would bring it. When no customer/food was available, it would go idle again.
Dimwitted Chef:
starts idle. When a new order comes in, the Waiter activates him and he starts cooking. As long as orders are available, he keeps cooking. When food is finished and the waiter is idle, he yells to reactive the waiter. When no orders are available, he becomes idle again.
The final story had quite a few interesting mechanics to explain:
a) An event generator that spawns arrivals.
b) A queue for cooking orders for the chef.
c) A priority-queue for orders for the waiter (food delivery > greeting).
The year I started had the best results in student scores. I'd like to credit my teaching, but it's probably because I gave them much more hints and tips for the exam. Nevertheless, I still like using the story-method. It makes it understandable without being technical. Explaining a priority-queue to a bunch of airheads is not the easiest thing in the world, but I think a few of them still know what it is.
Jul 5, 2012 at 1:22pm UTC
Gaminic
(1551)
Just realized you only get to do a single class on the subject...
Most of the hints are still proper. Find a single, good analogy that starts simple but can be expanded to encompass everything in your class. Make it something relatable. Everyone knows a dimwitted waiter.
Speak slowly and steadily. If you run out of time, drop part of the class, rather than finishing everything at a crazy pace.
Time your lecture
. If you have an hour, aim for 45 minutes. Make sure you know which parts are vital and which aren't, in case you have to skip anything. Practice, practice, practice!
Don't be afraid to ask questions, but don't overdo it. If you get no reaction, don't bother waiting for answers too long. Reaction doesn't just mean answers; it also means facial expressions. I always order my questions by difficulty and start of with "the first X are basic; if you don't know the answer, it's time to pick up the slack". If you see a few students getting uneasy during one of those questions, you know you've lost a few. For the more difficult questions, no raised hands usually means nobody knows the answer; students are more eager to answer these questions than the really obvious ones (never understood why; I did the same as a student...).
If someone gives a wrong answer, make sure to explain
why
their answer isn't correct and why yours is. The answer itself is generally useless; the logical steps required to get to the answer are much more important. Whether a student's answer is right or wrong, have them explain how they got there.
The hardest part is finding a right tone. Some teachers are very strict, some are very friendly. I think most teachers want to be friendly, but lose control of the class and ultimately end up becoming strict to maintain order. Fear is much easier than respect.
As a TA, I'm not much older than my students (2 years at most; some students are older than I am). I'm also quite small and young-looking. Playing strict just wasn't an option, because if one thing goes wrong, all your credibility is gone. As a Friendly teacher, you can play the "disappointment card" when they get loud. Rather than actually controlling their behaviour, you just guilt-trip into behaving properly.
In closing: know your audience. I teach last-year University students, early/mid twenties. They're not very rebellious; the "bad boys" are weeded out, or just don't come to class [hint: non-mandatory attendance = less frustration]. If you're teaching late-teens, you're going to need a different approach, but I wouldn't know which. Ask the person testing you about the background of your students. How old are they? How noisy are they? How many are there?
And... if things go bad: know what power you have and don't have. Can you send them out? If they refuse to leave, do you have an alternative way of punishing them? Keep worst-case scenarios in mind. If things go wrong, you need to handle it properly, or you risk losing all your credibility. Don't bluff: if you say you're going to throw someone out, throw them out. If they call your bluff, it's over. Instead of threats, use fake-negotiations first. "If you're quiet for a few more minutes, I'll end the class early." They don't know when the class ends, so that's a few more minutes of silence for you, free of charge.
And finally...
never get angry
. It has no positive effect. You'll feel shitty, and the class won't get any better. The only person who can pull of a good moment of rage is "that friendly, calm teacher you've had for 5 years". When that guy gets angry,
shit is up
.
Jul 5, 2012 at 4:37pm UTC
Cubbi
(2324)
the sight of a line of code will send most students back into fetal position
Perhaps they are in the wrong class then?
Nice posts, though.
I don't know how relevant it would be, but to
share your experience as a teacher
:
When I was teaching (12 years ago, for two years, jumpstarting the new curriculum that I wrote for the college I graduated earlier), the last 15 minutes of *every* class were a test, where each student had to write code that makes use of what I just talked about. This wasn't unusual, our college policy actually encouraged that: midterms+finals counted for only half of the course credits.
Inspired by a math professor (excellent group theory researcher, who completely reversed my opinion about math) who did that before, I set myself the goal to give each student a personalized problem every single time. There were about 25 of them, and it was not easy coming up with problems on the spot, some of them came out too easy, some too hard, some embarassingly dull.. but it did help establish connections, I soon knew each student's level and could (some of the time) tailor the problems that would push them but not break them, or at least would help them actually learn what they just heard.
Jul 11, 2012 at 12:23pm UTC
Petee
(9)
First of all, thanks to Gaminic for the really interesting, well-written and well structred post. These are lots of really good approaches for teaching coding but also teaching in general.
I think teaching can be an amazing thing if everything works out but it could also be frustrating and exhausting.
If you you follow Gaminic's suggestions you should be just fine.
If you need some more excercise or a structure for a lesson you could also take a look at the online class codecademy.
I like their basic exercises.
Good luck and don't worry!
Petee
Topic archived. No new replies allowed.
C++
Information
Documentation
Reference
Articles
Forum
Forum
Beginners
Windows Programming
UNIX/Linux Programming
General C++ Programming
Lounge
Jobs
|
v3.1
Spotted an error? contact us | http://www.cplusplus.com/forum/lounge/74561/ | CC-MAIN-2013-20 | refinedweb | 1,982 | 74.08 |
The layout page shares the common design across all pages. There are many methods which are listed below to change the layout page dynamically in ASP.NET Core MVC.
I hope you have understood about the layout page from the preceding brief summary. Now let's implement it practically.
- While Adding View Page, Assign Layout Page .
- Using View Start Page
- Start then All Programs and select "Microsoft Visual Studio 2019".
-.
Step 2: Add user and admin controller controller.
Right click on the Controller folder in the created ASP.NET Core MVC application and add the two controller classes with names UserController and AdminController as follows.
The preceding two controller classes are added into the project which are User and Admin and create the following action methods in respective controller class.
UserController.cs
public class UserController : Controller { public IActionResult Login() { //write logic here return View(); } }
public class AdminController : Controller { [HttpPost] public IActionResult AddRole() { //write logic here return View(); } }
Step 3: Add Views and Layout pages
We can decide which layout page to be used while adding the view. Let us follow the following steps to add the layout page with view. Click on the View folder of the created ASP.NET Core MVC application as,
As shown in the preceding image, specify the view name and check the use layout page option and click the adding button, then the following default layout page will be added into the solution explorer. Now let's add another layout page named admin as in the following. Click on solution explorer and add the layout page as follows:
Now click on add button, then two layout pages are added under shared folder which are AdminLayoutPage and Layout.
Step 4: Set layout pages to view
We have created view and layout pages. Now let us assign layout pages to the views. There are many ways to assign layout page to the view which are listed as in the following:
- Using wizard
- Using ViewStart page
- Using view method
You can use wizard to set the layout page while adding the view, steps are as follows:
- Right click on view folder and select view template as,
Specify the view name and check on Use a layout page and click on browse button. The following window will appear,
Now choose layout page from preceding available Layout pages and click on ok button. The layout page will look like as follows,
Now click on add button then layout page reference added into the view page as,
Using ViewStart page
Adding reference of layout page in every page is very difficult and repetitive of code. Let us consider I have one controller which as twenty plus action method then each twenty views we need to add reference of layout page. Assume another requirement we need to set layout page according to condition basic or controller basic then we need to use Viewstart page.
Lets open the ViewStart.cshtm page and write the following code,
@{ string CurrentReq = Convert.ToString(Context.Request.HttpContext.Request.RouteValues["Controller"]);
dynamic Layout; switch (CurrentReq)
{ case "User": Layout = "~/Views/Shared/_Layout.cshtml"; break; default: //Admin layout Layout = "~/Views/Shared/_AdminLayoutPage.cshtml"; break; } }
Now run AddRole view, Then the output will look like the following,
I hope from all the preceding examples, you have learned how to work with multiple layout pages in ASP.NET Core MVC.
Note:
- Apply proper validation before using it in your project.
I hope this article is useful for all readers. If you have any suggestions, then please mention it in the comment section.
Related Tutorials
How To Create an ASP.NET MVC Application | https://www.compilemode.com/2021/07/multiple-layout-pages-in-asp-net-core-mvc.html | CC-MAIN-2022-40 | refinedweb | 600 | 63.19 |
SwiftRPlot
A Swift(‘er) framework for Real-time time series data visualization
Plotting time series data such as analog signals in real-time can be difficult.
Charting solutions currently available are not fine-tuned for real-time plots and can be CPU/RAM intensive so I’ve decided to address this.
The project will be as lightweight as possible and purely Swift & Cocoa(Touch) based to minimalize problems & update time when new OS arrives.
Features
- Two types of plot : Merged and Split
- Support multiple instances running at the same time
- Thread-safe, you can safely add data obtained from another thread to SwiftR.
- Up to 60fps rendering on macOS
- Use RAM sparingly, CPU needs further optimization
- Support scaling and resizing through view constraints
- Customizable y-tick labels
- Up to 7 predefined pastel color templates through PrismColor()
Getting Started
To use SwiftRPlot:
- Drag the SwiftR.xcodeproj to your project
- Go to your target’s settings, hit the "+" under the "Embedded Binaries" section, and select the SwiftR.framework
- In your sourcefile:
import SwiftR
Currently there is no documentation but both platform shares the same API.
Please try SwiftRDemo_macOS project or SwiftRDemo_iOS to see the example of how you can use the API.
Installing via CocoaPods
Add pod ‘SwiftRPlot’ to your Podfile.
Known Issues
- Lower fps when running multiple instances of the plot on iOS
- This is due to text drawing which is resource intensive and is on my top fix priority
Question, Issues & Feature requests
If you are having questions or problems, :
- Make sure you are using the latest version of the library. Check the release-section.
- Search known issues for your problem (open and closed)
- Create new issues (please do not create duplicate": "SwiftRPlot", "version": "0.0.1", "summary": "A swift module for real-time time series data visualization on macOS/iOS", "description": "Plotting time series data such as analog signals in real-time can be difficult. Charting solutionsn currently available are not fine-tuned for real-time plots and can be CPU/RAM intensive so I've decided to address this.nn This project was partly inspired from the tiredness of having 3rd party library fails after OS updates.n Thus, the project will be as lightweight as possible and purely Swift & Cocoa(Touch) based to minimalize problems & update time when new OS arrives.", "homepage": "", "license": { "type": "MIT", "file": "LICENSE" }, "authors": { "kalanyuz": "[email protected]" }, "social_media_url": "", "platforms": { "ios": "10.2", "osx": "10.12" }, "source": { "git": "", "commit": "b8c41f48dfab09888613da953e89037becc46e59", "tag": "v0.0.1alpha" }, "source_files": "CommonSource/common/*.{h,swift}", "ios": { "source_files": "CommonSource/iOS/*.{swift}", "frameworks": "UIKit" }, "osx": { "source_files": "CommonSource/macOS/*.{swift}", "frameworks": [ "Cocoa", "AppKit" ] }, "frameworks": "Foundation", "pushed_with_swift_version": "3.0" }
Thu, 23 Feb 2017 12:40:06 +0000 | https://tryexcept.com/articles/cocoapod/swiftrplot | CC-MAIN-2019-09 | refinedweb | 443 | 54.42 |
CInclude serves the same basic function as XInclude however it is a bit more versatile and is most likely higher performance.
Steps
You must:
- Add CInclude Namespace to your Page Element definition.
- Use the cinclude:include element in your page
- Add the CInclude transformer to your sitemap component definitions
- Add a CInclude transformer to the pipeline where you used the cinclude:include element
This page defines the first two, the second two steps are shown here: AddCIncludeToMinimalSitemap.
Basic Usage
You must include the CInclude transformer in your basic sitemap. From there you must include this:
<page xmlns:
to your Page Element. And this:
<cinclude:include
whereever you wish the content to be included.
Attributes
- src -- src is required and is a supported URL. These may be relative or direct and may use any cocoon-supported protocol.
- element -- element is optional. The string is the name of a tag which should surround cincluded content.
- prefix -- prefix is optionally used with the element attribute. If prefix is speficied then the element will be *included* with the defined Namespace.
- ns -- ns is optionally used with the element attribute. If ns is specified then the element will be *selected* with the defined Namespace.
select -- If select is defined then the elements specified by the XLink formatted string will be included. (available in 2.1)
Example of select and element attribute
<cinclude:include
This would include all children of "xformpayments" but not "xformpayments" itself, instead the root element will be "payments".
Hint: If your select is returning empty but you're sure you're naming the right nodes you may be selecting from the wrong namespace. When selecting nodes from the document's default namespace (as with most xhtml documents) try selecting the node with //*[local-name()='body']/* where body is the node name.
See this example:
AddCIncludeToMinimalSitemap for information on adding a CInclude tranformer to the basic sitemap.
Advanced use
In cocoon-2.0.4, only the basic CInclude transformer described above is available. Development versions (see WhereToGet21Dev) add a new element, <cinclude:includexml>, which is described in the Cocoon documentation. | http://wiki.apache.org/cocoon/CInclude | CC-MAIN-2013-48 | refinedweb | 347 | 56.86 |
Doctor Fortran in "The Future of Fortran"
By Steve Lionel (Intel), published on March 27, 2015…
I had prepared a short, vendor-neutral presentation on the current state of the Fortran standard and what was coming in the next standard, currently called Fortran 2015. (I expect this name to stick.) After that I opened it up for questions from the audience, which was more substantial than I expected (about 70-80 people, despite an awful time slot.) At the end invited attendees to participate in an online survey of Fortran usage, and the results were interesting (see the end of this post).
First, I spoke about the current standard, Fortran 2008 (2010) and the previous one, Fortran 2003 (2004). There is one vendor supporting all of Fortran 2008 (Cray) and three supporting Fortran 2003 (IBM, Intel, PGI/NVidia).
Fortran 2015 was envisioned as a minor revision with only two major feature sets plus corrections of inconsistencies (referred to as “wart removal”). As is usually the case, a bunch of small features have crept in, but the result is still manageable. If you want the complete list, the Introduction to the new standard lists all the changes. The current draft is here, but it doesn't yet reflect all the changes.
The current schedule calls for the “technical work” to be done in 2015, hence the name. The expectation is that the new standard would be published sometime in 2017, but it may get pushed out further, as I discuss later.
In the following sections I will highlight the major changes, but won’t go into full detail. You can look at the draft (see above) for that.
The "Interop TS"
The first big feature set was defined in a separate Technical Specification TS29113 “Further Interoperability with C”. This was approved in 2012 with the intent that vendors could implement the features now with reasonable assurance that the final standard would not be incompatible. A big driver of the “Interop TS”, as it is often called, was the needs of MPI 3.0. As is hinted by the title, Fortran 2015 extends the C interoperability features that first appeared in Fortran 2003 as follows:
- Assumed-type and assumed-rank make it easier to interoperate with C’s “void *”
- ALLOCATABLE, POINTER, assumed-shape and CHARACTER(*) arguments can be passed to and from C, communicated with a new data structure called a “C descriptor”
- New ISO_Fortran_binding.h C header file declares C descriptors and routines for manipulating them
- OPTIONAL arguments are now interoperable
- ASYNCHRONOUS was extended to more than I/O
- Relaxed restrictions on dummy arguments
Assumed-type variables are declared with the new syntax TYPE(*) and are allowed only as dummy arguments. Assumed-type variables have no type (are effectively unlimited polymorphic, so you can pass any type to one of these), but unlike ordinary polymorphic variables you can’t do a SELECT TYPE on them. In Fortran code if you want to use these at all you have to “cast” them to a Fortran pointer type using C_LOC and C_F_POINTER. Assumed-type variables may not have the ALLOCATABLE, CODIMENSION, INTENT(OUT), POINTER or VALUE attributes.
Assumed-rank variables are declared with the new syntax DIMENSION(..); DIMENSION(*) already had a meaning, so you’re supposed to think of this as a colon on its side. Again, only dummy variables may have assumed-rank. You can pass an item of any rank, including scalar, to an assumed-rank dummy argument. (If the item is assumed-type, it must also be assumed-shape or assumed-rank to pass to an assumed-rank dummy.) The rank of an assumed-rank dummy is assumed from the actual argument, which may be 0 if the actual was a scalar. These are passed by descriptor (C descriptor if the procedure is BIND(C) - see below). An assumed-rank dummy may not have the CODIMENSION or VALUE attributes. A new RANK intrinsic returns the rank of its argument.
At the February 2015 standards meeting, we accepted a proposal for a new SELECT RANK construct which is similar in concept to SELECT TYPE. It can be used on assumed-rank variables only and within each select block, the “associate variable” has the rank specified. This hasn't yet been accepted by the ISO committee, but I expect that to happen at our meeting in August 2015. Without SELECT RANK, it is difficult to do much with an assumed-rank variable in Fortran code.
C descriptors are an invention of the Fortran standard – they define a standard-conforming way to communicate extended information about procedure arguments such as shape, type, ALLOCATABLE/POINTER attributes and more. The standard specifies that the Fortran implementation provide a C header file named ISO_Fortran_binding.h that declares the typedefs for C descriptors, macros for values of various members, and definitions of functions for accessing and manipulating C descriptors. Any changes to the descriptor must be made through one of these functions. All the typedefs, macros and functions have names beginning with “CFI_”. An important point is that while the standard defines some restrictions on the layout of C descriptors, there is room for implementation-dependent members and member order, so C descriptors are not necessarily interchangeable among different Fortran implementations. Nonetheless, they allow mixed Fortran-C applications to do away with dependencies on vendor extensions.
Here’s a short example of a Fortran-C program that uses C descriptors.
use, intrinsic :: iso_c_binding interface function c_alloc (array) bind(C) import integer(C_INT) :: c_alloc real(C_FLOAT), intent(out), allocatable, dimension(:) :: array end function c_alloc end interface real(C_FLOAT), allocatable, dimension(:) :: my_array if (c_alloc(my_array) == 0) then print *, lbound(my_array), ubound(my_array); print *, my_array end if end
#include "ISO_Fortran_binding.h“ extern int c_alloc (CFI_cdesc_t * descr) { int ret, i; float * array; CFI_index_t lower = 0, upper = 10; ret = CFI_allocate (descr, &lower, &upper, 0); // No elem_len if (ret == CFI_SUCCESS) { array = descr->base_addr; for (i=lower;i<=upper;i++) {array[i] = (float) i;} } return ret; }
The “Coarray TS”
The other major feature set is contained in TS18508, “Additional Parallel Features in Fortran”, generally referred to as the “Coarray TS”. Unlike the “Interop TS”, the Coarray TS is still in flux. When I spoke about this last November, I was not optimistic that things would settle down enough to meet the already ambitious schedule. However, at the February 2015 meeting we made major changes to the most contentious aspect and I am feeling a lot better about its chances now.
There are generally four parts to the Coarray TS: teams, events, atomics and collectives. A “team” is a collection of images in a coarray application that are working together on a particular task. The application may have several (or many) teams working independently and then communicating their results to their parent. The nice thing about teams is that image numbering is relative to the team making it much easier to construct libraries that use coarrays without having to worry about the “coshape” of the entire application. Syntax has been added to allow you to specify a team variable as part of a coindex reference.
Teams can themselves form subteams, and teams can be dissolved and reformed dynamically. One reason to do this is if one of the images fails – as the number of images grows this can become more and more likely. But this was where the arguments raged – how do you know if an image has failed, or perhaps is just taking longer than it should to respond?
Initially, the TS had the notion of “stalling” where an image failed to make progress but was still somehow alive. We spent many, many meetings debating this topic and failed to come up with a solution for expressing in the language how to detect and recover from stalled images. In February we chose to toss the whole idea of stalling. Instead, an image could be deemed to have failed and status arguments were added to synchronization constructs to detect that an image had failed. (The actual mechanism for such detection is implementation dependent, of course.) A caveat is that some of the more vocal participants in the debate weren't at the February meeting, so it remains to be seen how the revised TS is accepted at the August meeting.
Another significant part to the TS is “events”, a way for one image notifying another image that a task has been completed and that it can proceed. This part has been settled for some time now and I don’t expect further changes.
Fortran 2008 added the concept of ATOMIC objects for which operations are done indivisibly, but there was limited support for them. The Coarray TS adds procedures for more atomic operations such as ADD, AND, CAS (Compare and Swap), OR and XOR.
Lastly, collectives are intrinsic procedures for performing some operation across all the images of the current team. New subroutines defined are CO_MAX, CO_MIN, CO_SUM, CO_BROADCAST and CO_REDUCE.
What Else?
In addition to the two Technical Specifications above, Fortran 2015 will have a number of smaller changes intended to improve consistency across the language. Some of these are:
- Being able to specify the type and kind of the implied DO variable in an array constructor and in DATA (this was prompted by a suggestion in our user forum!)
- SIZE= can be used with advancing input (like our Q format extension)
- G0.d can be used with integer, logical and character list items
- New intrinsics RANDOM_INIT, COSHAPE, REDUCE, OUT_OF_RANGE
- IMPORT is now allowed in a contained procedure and in BLOCK
- IMPLICIT NONE(EXTERNAL) requires explicit interfaces for all procedures
- All procedures are recursive by default – NON_RECURSIVE may be specified if desired
Finally, what the standards committee giveth, the standards committee can also taketh away. Newly declared “obsolescent” are labeled DO loops (DO 10 I…), EQUIVALENCE, COMMON and BLOCK DATA. Arithmetic IF and the non-block DO construct, where the DO range doesn't end in a CONTINUE or END DO, are finally deleted from the language, having been declared obsolescent for several revisions. Note that none of these will actually be removed from any compiler you are likely to use, but their use will be flagged if you ask for standards checking.
The Survey Says….
As I noted earlier, I invited those attending the SC14 session to respond to a survey about Fortran usage. I didn't get a lot of responses (eleven), but what I did get was interesting:
- 45% said they were using coarrays now, 64% thought they would be using coarrays in three years
- 73% were using C interoperability features now and 36% thought they would use DO CONCURRENT within three years
- C was the overwhelming (90%) “other language” used in mixed-language applications, but Python came in a surprising second (27%)
- 90% of the responders use three or more different Fortran compilers in their work
- Responders had been programming in Fortran an average of 14 years (range was 4 to 28)
- 100% thought they would still be using Fortran in five years
Want to take the survey yourself? It’s still open at
Feel free to put in your comments here on Fortran 2015 (or the survey). As always, if you need help with the compiler or with the Fortran language, please ask in our user forums. We have one for Windows users and one for Linux/Mac users.
4 commentsTop
Steve Lionel (Intel) said on Jun 1,2015
Mark, you may email me at steve.lionel at intel dot com. I have a pretty good idea of what your issue is.
Steve
Mark M. said on Jun 1,2015
Steve,
I am using Intel Visual Fortran Composer XE 2013, along with MVS 2010 professional, to compile, link and run a legacy Fortran program that was first written in the early 1980's. I recently made a slew of updates to this program and am now trying to obtain a working linked module. The compilation steps goes without a hitch (there are ~650 subroutines), but I am presented with an LNK2005 error upon linking. I've been contacting Intel indirectly, through our company designated go-between, who forwards to Intel the emails I send to her, and then she forwards to me the emails she receives back from Intel.
Nothing that I've heard from Intel has helped me in any way to find out why I'm getting the aforementioned LNK2005 error. I have tried linking with the /FORCE:MULTIPLE option, and it has reduced the error to a warning, but I'm not comfortable with this approach since I'm left not knowing what it is I "fixed" by using this option.
I work for Northrop Grumman, and we are using this software under license. Would it be possible for me to have a one-on-one email chat with you so that I can understand the root cause of my issue? I'm guessing it won't take too many email exchanges before we arrive at the answer. I've spent two weeks trying to debug this so far, and I'm at my wits end.
Appreciatively,
Mark Merritt
(562)424-3225
Steve Lionel (Intel) said on Mar 28,2015
The audience was, I think, mostly Fortran programmers, but I didn't ask. I know that a representative from at least one other vendor was there as well. An offhand comment I made about ISO wanting to place committee documents out of public view did engender some lively discussion, including members of other committees who also are fuming about this. This is why, for example, you can no longer easily find the link to the committee working drafts on the J3 web site, and why the WG5 documents are no longer updated on the NAG FTP server. But ISO now wants ALL committee documents locked up so that only committee members can see them, which the Fortran committee is fighting, as we've always operated in full view.
There was a suggestion that a MAP intrinsic be added to go along with REDUCE.
I'll check my notes and see if there was anything else of note.
FortranFan said on Mar 28,2015
Steve,
Great blog, very informative and helpful, especially for us readers to keep up with Fortran updates. Nice to read of the progress in Fortran 2015.
Re: "I led a session at SC14.. opened it up for questions from the audience, which was more substantial than I expected (about 70-80 people" - was the audience mostly Fortran users and any interesting/surprising questions from the audience?
Thanks,
Add a CommentSign in
Have a technical question? Visit our forums. Have site or software product issues? Contact support. | https://software.intel.com/en-us/blogs/2015/03/27/doctor-fortran-in-the-future-of-fortran | CC-MAIN-2019-04 | refinedweb | 2,443 | 58.01 |
NetBeans Platform Plugin Quick Start
Last reviewed on 2020-12-01
Welcome to Apache NetBeans plugin development!
This tutorial provides a simple and quick introduction to the Apache NetBeans plugin development workflow by walking you through the creation of a new toolbar for any Apache NetBeans Platform application. Once you are done with this tutorial, you will know how to create, build, and install plugins for the Apache NetBeans Platform.
After you finish this tutorial, you can move on to the NetBeans Platform learning trail. The learning trail provides comprehensive tutorials that highlight a wide range of Apache NetBeans APIs for a variety of application types. If you do not need to do a "Hello World" tutorial, you can skip the instructions that follow and jump straight to the learning trail.
The toolbar you create in this tutorial will look as follows:
The concept is that the user enters some text, presses kbd:Enter, and the IDE’s default browser opens and the text in the toolbar is sent to a Google search, with the results available in the open browser.
To create this toolbar, you will use the NetBeans APIs to enhance the Apache.
If it would help you, do some background reading before diving into this tutorial. In particular, you might like to read the Modules API Reference document, which explains what modules are and provides some context for this tutorial. Also note that there is an extensive Reference Material section on the NetBeans Platform Learning Trail. Of course you can always go back to those later, or again, whenever you would like to.
The completed tutorial source code is also available as a GitHub repository.
Setting up the Module Project
We begin by creating the source structure common to all NetBeans Platform modules. Read the Modules API Reference for details.
Choose Ctrl+Shift+N). Under Categories, expand Java with Ant, and select NetBeans Modules. Under Projects, select Module:(
.
In the Name and Location panel:
In the Project Name field, type
GoogleToolbar.
In the Project Location field, change the value to any directory on your computer where the module will be stored.
It should look similar to this:
Click Next.
In the Basic Module Configuration panel:
Type
org.myorg.googletoolbarin "Code Name Base", which defines the unique string identifying the module you are creating. The code name base is also used as the main package of the module, i.e., your main package will be "org.myorg.googletoolbar".
Do not select the "Generate OSGi Bundle" checkbox, since we will be using the default NetBeans module system, rather than OSGi.
It should look similar to this:
Click Finish.
The IDE creates the
GoogleToolbar project:
AutoUpdate-Show-In-Client: true OpenIDE-Module: org.myorg.googletoolbar OpenIDE-Module-Localizing-Bundle: org/myorg/googletoolbar/Bundle.properties OpenIDE-Module-Specification-Version: 1.0
For details on these NetBeans-specific manifest keys, check the NetBeans Modules API Javadoc description.
Coding the Module
In order to create a Google toolbar, we will need to complete the following steps:
Creating the Action.
Right-click the GoogleToolbar project node and choose New > Action:
If Action is not displayed, access it by choosing Other, then in the New File wizard under Categories, select Module Development and then Action.
Click Next.
In the Action Type panel:
Keep the default setting, which will let the Action be unconditionally enabled, as shown below.
Click Next.
In the GUI Registration panel:
Select File from the Category drop-down list. The Category drop-down list controls where an action is shown in the Keyboard Shortcuts editor in the IDE.
Deselect Global Menu Item because we will not need a menu item.
Select Global Toolbar Button. In the Toolbar drop-down list, select File, then in the Position drop-down list, select the toolbar button’s position within the toolbar as "Save All - HERE" as shown below.
Click Next.
1. In the Name, Icon, and Location panel:
In the Class Name field, type
GoogleActionListener
In the Display Name field, type
In the Icon field, browse to an icon that has a dimension of 16x16 pixels.
If needed, here are two icons you can use:
16x16:
24x24:
However, note that by the end of this tutorial:
Additional dependencies have been included in the Libraries section, and additional sources have been added. = 500) @Messages("CTL_GoogleActionListener=Google") public final class GoogleActionListener implements ActionListener { @Override public void actionPerformed(ActionEvent e) { // TODO implement action body } }
In the Projects window, right-click the
GoogleToolbarproject node and choose Run. The module is built and installed in a new instance of the IDE (which is currently set to be the target platform). By default, the default target platform is the version of the IDE you are currently working in. The target platform opens so that you can try out the new module. You should be able to see your button and click it:
Close the target platform instance:
Creating the Toolbar
In this section, we will create a
JPanel that will replace the
JButton that the Action wizard created in the previous section.
Right-click the project node and choose New > Other.
Under Categories, select Swing GUI Forms. Under File Types, select JPanel Form:
Click Next.
In the Name and Location panel, type
GooglePanelas the Class Name and select the package from the drop-down list:
Click Finish.
GooglePanel.java is added to the package and is opened in the Design view in the Source Editor.
Place the cursor at the bottom right-hand corner of the JPanel, then select the JPanel and drag the cursor to resize it, so that its width and length resemble that of a toolbar, as shown below:
Drag a Label (
JLabel) item and a Text Field (
JTextField) item from the Palette (Ctrl+Shift+8) directly into the
JPanel, then resize the
JPaneland the other two items so that they fit snugly together. Finally, press F2 on the
JLabeland change its text to
Google:, then delete the default text in the
JTextField.
If you click F2 over the
JLabel and the
JTextField , their display text will become editable. You can also do this using the properties dialog.
Your
JPanel should now resemble the image shown below:
You can set other UI properties as required.
Double-click on the JTextField (or right-click on it and choose Events > Action > actionPerformed). This generates a
jTextField1ActionPerformed()method in the
GooglePanel.javasource code, which displays in the Source Editor. Fill out the
jTextField1ActionPerformedmethod as follows (inserted text shown in bold):
private void jTextField1ActionPerformed(java.awt.event.ActionEvent evt) { try { String searchText = URLEncoder.encode(jTextField1.getText(), "UTF-8"); URLDisplayer.getDefault().showURL(new URL("" + searchText)); } catch (UnsupportedEncodingException | MalformedURLException eee) { //nothing much to do } }
If you need to, right-click in the Source Editor and choose Format (Alt+Shift+F).
Right-click in the Source Editor and choose Fix Imports (Ctrl+Shift+I). The Fix All Imports dialog displays, listing suggested paths for unrecognized classes:
Click OK.
The IDE creates the following import statements at the top of the class:
import java.io.UnsupportedEncodingException; import java.net.MalformedURLException; import java.net.URL; import java.net.URLEncoder; import org.openide.awt.HtmlBrowser.URLDisplayer;
Also notice that all errors disappear from the Source Editor.
1. = "File", id = "org.myorg.googletoolbar.GoogleActionListener" ) @ActionRegistration( lazy = false, displayName = "NOT-USED" ) @ActionReference(path = "Toolbars/File", position = 500) @Messages("CTL_GoogleActionListener=Google") public final class GoogleActionListener extends AbstractAction implements Presenter.Toolbar { @Override public void actionPerformed(ActionEvent e) { // delegated to toolbar } @Override public Component getToolbarPresenter() { return new GooglePanel(); } }
Presenter.Toolbar is provided in the Utilities library, which we need to add.
Click near to the relevant
importstatement, and select Search Module Dependency for org.openide.util.actions.Presenter:
The matching dependency is shown:
Click menu:[OK] to add the library module, which appears in the Project view.
Run the module again. This time, instead of a
JButton, you should see your
JPanel. Type a search string in the text field:
Press Enter. The IDE’s default browser starts up, if you have set one in the Options window. .
See Also
This concludes the NetBeans Plugin Quick Start. This document has described how to create a plugin that adds a Google Search toolbar to the IDE. For more information about creating and developing plugins, see the following resources:
NetBeans Platform Learning Trail
-
NetBeans API classes used in this tutorial:
-
- | https://netbeans.apache.org/tutorials/nbm-google.html | CC-MAIN-2021-17 | refinedweb | 1,385 | 55.74 |
Hi
I have Fedora Core 5 2.6.17-1.2174_FC5 with gcc version: gcc (GCC) 4.1.1 200
60525 (Red Hat 4.1.1-1)
everything compiles well, and run smoothly on my machine. But the actual pro
duction machine is Linux 2.4.20-46.9.legacysmp
with gcc version gcc (GCC) 3.2.2 20030222 (Red Hat Linux 3.2.2-5)
on the production machine, I get errors like:
1) warning #266: function declared implicitly
while ((read = getline (&line, &len, sfile)) != -1) {
2) undefined reference to `sem_init'
3) undefined reference to `sem_destroy'
while I am using the correct headers:
#define _GNU_SOURCE
#include <stdio.h>
for the getline
and
#include <semaphore.h>
for the sem_init and sem_destroy
I receive the same "warning #266: function declared implicitly" for my own d
efined functions as well, while my header is included,
any ideas where could be the problem,
sorry for asking a question that might be basic, but it is my first time to
move code between different machines in linux and compilers versions,
I appreciate any help,
thanks,
Manal | http://cboard.cprogramming.com/c-programming/83878-problems-c-program-different-machines-using-different-compilers.html | CC-MAIN-2014-23 | refinedweb | 182 | 65.12 |
.
Trees and Containers
I'm missing a key concept related to trees and data sources.
I need a tree that displays complex objects rather than just strings.
As an example let's say I have a class Person:
public class Person { private String name; private String id; // getters and setters }
I've extended HierarchicalContainer and added a method to create people and add them. So far I think I have things correct.
However, adding people to the tree results in the tree displaying that object id. What I need is to be able to display the name from that object's person field. When someone clicks on an element in the tree I'd then like to be provided with the Person object mapping to that element they clicked on, or the id that I can then look up myself in a hash.
I guess I should have tried this from the beginning.. a toString on the data object works, dunno if that's the right way and I'm not positive where captions fit in but it works.
Is there a way to implement my tree and container pair such that I can tell the tree "use this field from each item for display"?
I can see it has something to do with setting the caption, but it's not clear where I do this. Appropriate calls appear on the Tree itself, but that doesn't seem to line up with using a data source.
Something in the lines of this
HierarchicalContainer container = new HierarchicalContainer(); // Make sure you have properties defined in your container container.addContainerProperty("name", String.class, null); addPersonToContainer(container, new Person("Jeremy German")); tree.setContainerDataSource(container); // Define which property is going to be used as the item's caption in the tree tree.setItemCaptionPropertyId("name"); ... public static void addPersonToContainer(Container container, Person person) { Item item = container.addItem(person); // Item is null if the person is already in the container if(item != null) { item.getItemProperty("name").setValue(person.getName()); } }
When you select an item in the tree, an event is dispatched. The event will tell you the item id of the item which was clicked. In this case, the item id is the person object. You'll need to case the itemId to a Person,
Person person = (Person) event.getItemId(); | https://vaadin.com/forum/thread/135807/trees-and-containers | CC-MAIN-2021-43 | refinedweb | 383 | 62.88 |
Workbook 2
Serialisation

1 The use of "-ise" versus "-ize" is much debated and is sometimes erroneously attributed to a difference between American and British English. Oxford University Press is said to favour "-ize" whilst Cambridge University Press prefers "-ise" (http://en.wikipedia.org/wiki/American_and_British_English_spelling_differences). As this course is taught in Cambridge, we'll use "-ise".
The two OutputStream classes interact with each other in the above code as byte-oriented streams of data. Formally, they provide an OutputStream and use an OutputStream respectively; you've seen this interaction before with BufferedReader. This loose coupling of classes enables Java objects to be written to any class which provides an OutputStream. In the last workbook you used the OutputStream provided by instances of the Socket class. Many other classes in the Java standard library which read or write data support either an InputStream or an OutputStream respectively, a fact you will need to remember when completing this workbook!
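To make the loose coupling concrete, here is a small, self-contained sketch (not one of the workbook's own listings; the class and method names are illustrative) which serialises an object to whatever OutputStream it is given. The same helper works unchanged whether the stream comes from a ByteArrayOutputStream, a file, or a Socket:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.OutputStream;
    import java.io.Serializable;

    public class SerialiseSketch {

        // Write any Serializable object to the supplied OutputStream; the
        // serialisation code neither knows nor cares where the bytes end up.
        static void writeObjectTo(Serializable o, OutputStream out) throws IOException {
            ObjectOutputStream oos = new ObjectOutputStream(out);
            oos.writeObject(o);
            oos.flush();
        }

        public static void main(String[] args) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            writeObjectTo("hello", bos); // String implements Serializable
            System.out.println(bos.toByteArray().length + " bytes written");
        }
    }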
Important
Further information on serialisation is available at:
Serialising a simple class
Here is a simple class which implements the serialisation interface:
package uk.ac.cam.crsid.fjava.tick2;

import java.io.Serializable;

public class TestMessage implements Serializable {

    private static final long serialVersionUID = 1L;

    private String text;

    public String getMessage() {return text;}
    public void setMessage(String msg) {text = msg;}
}
1. Create a new package inside your Eclipse "Further Java" project with the appropriate name, and place the class TestMessage as defined above inside it.

2. Complete the sections marked TODO in the class TestMessageReadWrite shown below.

3. Test your implementation by instructing your program to download and print out the message contained within a serialised instance of TestMessage at the following URL: http://
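The full listing of TestMessageReadWrite is not reproduced here. Purely as a sketch of the downloading half of the exercise (the method names and error handling are illustrative guesses, not the exact skeleton distributed with the workbook), the class might look something like this. Note that URL.openStream() returns an InputStream, which can be wrapped directly in an ObjectInputStream:

    package uk.ac.cam.crsid.fjava.tick2;

    import java.io.ObjectInputStream;
    import java.net.URL;

    public class TestMessageReadWrite {

        // Download a serialised TestMessage from the given location and return it.
        static TestMessage readMessage(String location) throws Exception {
            URL url = new URL(location);
            ObjectInputStream in = new ObjectInputStream(url.openStream());
            try {
                // readObject() returns Object, so cast the result to the expected type
                return (TestMessage) in.readObject();
            } finally {
                in.close();
            }
        }

        public static void main(String[] args) throws Exception {
            TestMessage message = readMessage(args[0]);
            System.out.println(message.getMessage());
        }
    }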
Workbook 2
4
A chat client using Java objects.
Message type (class)
Direction
Description
ChangeNickMessage
Client→Server
Update nickname of the client stored by the server.
ChatMessage
Client→Server
Message written by a user is sent to the server.
RelayMessage
Server→Client
User message sent from server to all clients.
StatusMessage
Server→Client
Message generated by the server, sent to all clients.
Table 1. Message classes sent between the client and server
Serialisation
4.Download a copy of the classes which define the message types from:
teaching/current/FJava/messages.jar.
5.Open the jar file you downloaded in the previous step using the command line tool jar to
extract the Java source code for the messages. There are additional classes contained in this
Jar file which will be explained later in this workbook.
6.Create a new package and class files with appropriate names in your "Further Java" project to
contain the five message types found in the jar file..
3
This suggestion is somewhat tongue-in-cheek, and is certainly not compulsory, but the really keen student should read existing
specifications before designing their own. The XMPP RFCs () are a good place to start.
Workbook 2
5 ob-
ject
4
A quote from the 1968 epic "2001: A Space Odyssey". Directed by Stanley Kubrik, and written by Arthur C. Clarke and Stanley
Kubrick.
Workbook 2
6
operator instanceof; for example (m instanceof StatusMessage) will evaluate to true if m is
an instance of the StatusMessage class and false otherwise.
A new Java chat client
7.Look at the Java documentation for the java.text.SimpleDateFormat and the
java.util.Date classes and work out how to print the current time in the same format as
shown in the Figure.
8.Create the class ChatClient inside the package uk.ac.cam.crsid.fjava.tick2 which
implements a Java chat client as described earlier in this section. Your class should provide
a standard public static void main(String[] args) method which accepts two
arguments on the command line: a server name and a port number; errors (such as insufficient
arguments) should be handle as you did in your implementation of StringChat last week.
9.Test your implementation using the server java-1b.cl.cam.ac.uk on port number 15003.
Class loaders and reflectionIn-
put.
Workbook 2
7
Using a class loader
10. Replace the use of ObjectInputStream with DynamicObjectInputStream in your im-
plementation of ChatClient.
11. Insert additional code to ChatClient to handle receiving messages of type NewMes-
sageType and call addClass on your instance of DynamicObjectInputStream whenever
you receive a message of this type. Print out the following message:
14:54:27 [Client] New class <name> loaded.
where <name> is the name of the class you have received.
12. Modify your implementation of ChatClient so that if you receive an object that is not a
RelayMessage, a StatusMessage or a NewMessageType, your program will print:
14:54:27 [Client] New message of unknown type received.
13. Test your new version of ChatClient with java-1b.cl.cam.ac.uk, port 15004.
Your new version of ChatClient should now print out the names of several classes which were
sent by the server when the client connected, together with periodic "unknown type" messages.
If it does not, ask for help from a demonstr repre-
sentation-
Class. ap-
propriate
Workbook 2
8
example, the method getDeclaredFields will return a list of fields found in the Java class. Similarly,
getMethod will search for a method by name and return a reference to it, if it exists. The Java docu-
mentation for java.lang.Class contains information on how to use these and other methods.
Reflection
14. Remove the print statement you placed in the code in response to question 12.
15. Modify your implementation of ChatClient to extract all the declared fields from any unknown
new message type sent by the server and print out their contents. For example, if you receive
an unknown message with the name NewMessageClass with two fields, field1 (with the
value hello) and field2 (with the value world), your program should print out
14:55:27 [Client] NewMessageClass: field1(hello), field2(world)
Annotations
The two most recent releases of Java (1.5 and 1.6) support annotations. Annotations provide informa-
tion:
Workbook 2
9
other-
wise generate. For example, @SuppressWarnings("deprecation") can
be used to suppress a compiler warning generated by the use of a method
which is marked by the deprecated annotation described above; another com-
mon example is SuppressWarnings("unchecked") which removes the
warnings which result when interfacing with code which was written before the
introduction of Java generics.
Annotations can even be applied to annotations. For example, if we wish to access an annotation via re-
flection at runtime, the annotation @Retention(RetentionPolicy.RUNTIME) must be added above
the definition of the annotation. You can see an example of this in the messages.jar you imported
earlier for the Execute annotation.
Reflection and annotations
16. Add the definition of the annotation FurtherJavaPreamble as shown above to the correct
package.
17. Use the annotation FurtherJavaPreamble by annotating your ChatClient class.
18. Annotate the FurtherJavaPreamble annotation so that its values are accessible to the
Further Java testing engine when it tests your code.
19. Modify your implementation of ChatClient to check for the presence of declared methods
from any unknown new message type. If such a method takes no arguments and is annotated
with the Execute annotation (as found in messages.jar), invoke the method on the object
you've received. (In this case you are dynamically executing new code you've downloaded from
the Socket object!) Please make sure you print out the contents of any fields (question 15
above) before you invoke any methods.
Important
Read the following articles to cement your knowledge of class loading, reflection and annotations:
Your Assessor will ask you questions based on the contents of these articles next week.
Workbook 2
10
Ticklet 2 | https://www.techylib.com/el/view/farflungconvyancer/workbook_2_the_computer_laboratory | CC-MAIN-2017-34 | refinedweb | 1,287 | 55.64 |
QML mouse pressed and onEntered
How can I pressed mouse and move into a mouse area, and this mouse area could run onEntered?
With current code, once I press and hold on buttonMouseArea 1, and move mouse to another buttonMouseArea 2, Area 2 will not response onEntered.
Thanks
MouseArea { id: buttonMouseArea anchors.fill: parent hoverEnabled: true onEntered: console.log(txt.text); // onClicked: root.clicked() // onPressed: alternatesRow.visible = true // enable buble effect // onReleased: alternatesRow.visible = false // enable buble effect }
@p3c0 With current code, once I press and hold on buttonMouseArea 1, and move mouse to another buttonMouseArea 2, Area 2 will not response.
@sharethl AFAIK it doesn't trigger if the area under mouse changes while the mouse is still pressed. Some info here. BTW, are you trying to do drag-drop kind of thing ?
@p3c0 I am actually doing QML keyboard. User pressed on wrong key, they want to press and hold move to another key and release.
@sharethl Well in that case you can keep
MouseAreaas a parent item which will contain all other item i.e in your case keyboard keys. Then you can use
onPressedand
onReleasedsignal handlers where in you can get the exact child using childAt .
Consider the following example:
import QtQuick 2.4 Item { width: 140 height: 100 MouseArea { anchors.fill: parent onPressed: console.log(childAt(mouseX,mouseY).objectName) onReleased: console.log(childAt(mouseX,mouseY).objectName) Rectangle { objectName: "Rect1" width: 50 height: 50 color: "red" } Rectangle { objectName: "Rect2" x:50 width: 50 height: 50 color: "green" } } }
@p3c0 Nice. But I have nested layout, and Key are in the most bottom level. How can I use childAt() find it out?
Column{ Repeater{ Row{ Repeater{ KeyDelegate } } } }
@sharethl If the order of nesting is fixed then you can chain
childAtmethod.
Alternatively you can may be use
GridView. It has a similar method called itemAt. The advantage in this case is you can directly call that method using
GridView's
id.
use nested for loop to go through column and row, to find keys.
Thank you!
MouseArea{ anchors.fill: parent onMousePositionChanged: { var row = column.childAt(mouseX, mouseY); // clear all bubbles for (var i = 0; i < column.children.length; i++) { var r = column.children[i]; for(var j=0; j < r.children.length; j++){ if(r.children[j].objectName==="KeyButton"){ r.children[j].bubleVisible=false; } } } if(row === null){ return; } var key = row.childAt(mouseX,5); // relative use any number 0 to height of key if(key !== null && key.objectName=== "KeyButton"){ key.bubleVisible=true; } } onReleased: { var row = column.childAt(mouseX, mouseY); var key = row.childAt(mouseX,5); // relative use any number 0 to height of key if(key !== null ){ key.bubleVisible= false; console.log(key.text); } } } | https://forum.qt.io/topic/55217/qml-mouse-pressed-and-onentered | CC-MAIN-2017-47 | refinedweb | 445 | 61.73 |
Almost all uses of the bucket-based hash table have been removed save 3. We could numHashTableImplementations-- if these last 3 were finished off.
Whoa, JSAtomList and JSHashTable are rather intertwined. That one is non-trivial.
(In reply to comment #1) > Whoa, JSAtomList and JSHashTable are rather intertwined. That one is > non-trivial. No kidding! The JSAtomList starts off as a linked list of JSHashEntry (which is effectively just a generic linked list node). Once the list hits critical mass the JSAtomList table-ifies with explicitly managed stable ordering of the entries during the transition. JSAtomList's implementation manages the hash table's guts to be a multimap with special properties: a lookup that hits causes a node to be moved to the front of the hash chain; hoisted definition nodes are added at the end of the hash chain; any number of shadowed definitions can coexist in a single hash bucket. I'm mulling over a few solutions.
Wow. Well, if you are going break an abstraction, might as well break the hell out of it!
billm's taking care of the scripFilenameTable use in bug 661903 and jorendorff is self-hosting sharp var support in bug 486643. There's one more minor usage in traceviz that follows the same form as the script filename table. When billm's patch lands we can generalize a CStringHasher. Now we wait.
Created attachment 547230 [details] [diff] [review] WIP: rm jshash.h Decided to see if I could remove jshash in the engine. Only took about an hour. JS binary goes down by ~20K. However, there are still some uses in jsd! That's for another day.
Created attachment 643259 [details] [diff] [review] Sequester jshash.{h,cpp} in js/jsd/. The removal of sharps made this a lot easier. Although js/src/ barely uses jshash.{h,cpp}, js/jsd/ still does. So I just moved those files into js/jsd/. (I'm not sure if that'll show up in Bugzilla's nicely-formatted patch viewers.) I figure this is a win because I have a notion that js/jsd/ is headed for the scrap-heap -- is that right? And even if it's not, this greatly reduces the visibility of those files. A few things from those moved files were used in js/src/ and js/xpconnect/src/. Here's what I did with them: - I replaced JSHashNumber with js::HashNumber; they're both typedefs of uint32_t. - I moved JS_HashString, which has a single use, into jsstr.{h,cpp}. - I inlined the single use of JS_GOLDEN_RATIO. - And I removed ATOM_HASH because it is dead code.
Comment on attachment 643259 [details] [diff] [review] Sequester jshash.{h,cpp} in js/jsd/. Review of attachment 643259 [details] [diff] [review]: ----------------------------------------------------------------- Sweet! I agree with your reasoning on it to jsd. ::: js/src/jsatom.h @@ +100,5 @@ > #endif > + > + static const js::HashNumber goldenRatio = 0x9E3779B9U; > + > + return n * goldenRatio; Even better, I think you could replace the body with: return HashGeneric((void *)JSID_BITS(id)); from mozilla/HashFunctions.h. (This (void *) wouldn't be necessary if you added a AddToHash(uint32_t hash, uint64_t value) overload to HashFunctions.h.) ::: js/src/jsstr.h @@ +18,5 @@ > #include "vm/Unicode.h" > > +/* General-purpose C string hash function. */ > +extern JS_PUBLIC_API(js::HashNumber) > +JS_HashString(const void *key); I think you kept this around for the 1 use in ScriptFilenameHasher. For that, you could use HashString in mozilla/HashFunctions.h instead. With that change, I think you could kill JS_HashString and make it a non-public extern in jsd. Sorry, but this caused Windows bustage, so I had to back it out. e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(68) : error C2491: 'JS_NewHashTable' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(106) : error C2491: 'JS_HashTableDestroy' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(138) : error C2491: 'JS_HashTableRawLookup' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(218) : error C2491: 'JS_HashTableRawAdd' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(248) : error C2491: 'JS_HashTableAdd' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(270) : error C2491: 'JS_HashTableRawRemove' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(288) : error C2491: 'JS_HashTableRemove' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(304) : error C2491: 'JS_HashTableLookup' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(323) : error C2491: 'JS_HashTableEnumerateEntries' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(418) : error C2491: 'JS_HashTableDump' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(430) : error C2491: 'JS_HashString' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(442) : error C2491: 'JS_CompareValues' : definition of dllimport function not allowed
\o/ | https://bugzilla.mozilla.org/show_bug.cgi?id=647367 | CC-MAIN-2017-13 | refinedweb | 850 | 60.21 |
Node that provides hints about shapes. More...
#include <Inventor/nodes/SoShapeHints.h>
Node that provides hints about shapes.
By default, Open Inventor assumes very little about the shapes it renders. You can use the SoShapeHints node to indicate that vertex-based shapes (those derived from SoVertexShape) are solid, contain ordered vertices, or contain convex faces. For fastest rendering, specify SOLID, COUNTERCLOCKWISE, CONVEX shapes.
These hints allow Open Inventor to optimize certain rendering features. Optimizations that may be performed include enabling back-face culling and disabling two-sided lighting. For example, if an object is solid and has ordered vertices, Open Inventor turns on backface culling and turns off two-sided lighting. If the object is not solid but has ordered vertices, it turns off backface culling and turns on two-sided lighting. In all other cases, both backface culling and two-sided lighting are off.
Summary:
Note: Two-sided lighting is automatically enabled for VolumeViz "slice" primitives, such as SoOrthoSlice and SoVolumeSkin.
This node allows the creation of polygons with holes. See the windingType field description, SoIndexedFaceSet, and SoFaceSet for details.
The SoShapeHints node also affects how default normals are generated. When a node derived from SoVertexShape has to generate default normals, it uses the creaseAngle field to determine which edges should be smooth-shaded and which ones should have a sharp crease. The crease angle is the angle between surface normals on adjacent polygons. For example, a crease angle of .5 radians means that an edge between two adjacent polygonal faces will be smooth shaded if the normals to the two faces form an angle that is less than .5 radians (about 30 degrees). Otherwise, it will be faceted.
Smooth shaded means that the normal vectors of all the faces that share a vertex are averaged together to compute the normal vector for that vertex.
Normal generation is fastest when the creaseAngle is 0 (the default), producing one normal per facet. However, if a vertex is shared by two (or more) faces that should be faceted, the result of normal generation will be multiple normal vectors (one for each face) associated with that vertex. This is in conflict with using OpenGL Vertex Buffer Objects (VBO), which usually provide the best performance. VBO rendering only allows per-vertex (not per-face) normal vectors and only one normal vector per vertex. Some high-level shapes, like SoIndexedFaceSet, will automatically reorganize the geometry data to produce the requested appearance and still use VBOs for rendering. Lower level shapes like SoBufferedShape will ignore the creaseAngle field and compute normals using a crease angle of Pi in order to use VBOs for rendering. A creaseAngle of Pi produces one averaged normal per vertex.
See SoVertexShape for more information about Vertex Buffer Objects and shape rendering.
SoVertexShape SoIndexedFaceSet SoFaceSet
FaceSetHole, IndexedFaceSetHole, VBO, VBO
Hints about faces of shape: if all faces are known to be convex or not.
Hints about entire shape: if shape is known to be a solid object, as opposed to a surface.
Hints about ordering of face vertices: if ordering of all vertices of all faces is known to be consistent when viewed from "outside" shape or not.
Winding type possible values.
Creates a shape hints node with default settings.
Indicates if Vertex Buffer Object (VBO) is supported by your graphics board.
Set the state of the override field.
see SoNode::setOverride doc.
Reimplemented from SoNode.
Indicates the minimum angle (in radians) between two adjacent face normals required to form a sharp crease at the edge when normal vectors are computed automatically by Open Inventor.
It has no effect when normal vectors are explicitly provided by the application.
Indicates whether each face is convex.
Because the penalty for non-convex faces is very steep (faces must be triangulated expensively), the default assumes all faces are convex. Therefore, shapes with concave faces may not be displayed correctly unless this hint is set to UNKNOWN_FACE_TYPE. Use enum FaceType. Default is CONVEX.
Specifies the tolerance value to use when default normals are computed.
The default is 1e-6.
Specifically it determines which (other) points in the shape are close enough to influence the normal at each vertex. Setting a smaller tolerance value will select a smaller number of points and can reduce the time required for computing normals on very large, very dense geometry.
If the OIV_NORMGEN_TOLERANCE environment variable is set, the default is 1/OIV_NORMGEN_TOLERANCE.NOTE: field available since Open Inventor 8.0
This field controls whether subsequent shapes in the scene graph can use OpenGL Vertex Buffer Objects (VBO) to speed up rendering.
Default is TRUE (since Open Inventor 8.1). The default value can be set using the OIV_FORCE_USE_VBO environment variable (see SoPreferences). including OIV_MIN_VERTEX_VBO and OIV_MIN_VERTEX_VAVBO_NOCACHE. See SoVertexShape for more discussion about shape rendering strategy.NOTE: field available since Open Inventor 5.0
Indicates how the vertices of faces are ordered.
CLOCKWISE ordering means that the vertices of each face form a clockwise loop around the face, when viewed from the outside (the side toward which the normal points). Use enum VertexOrdering. Default is UNKNOWN_ORDERING.
Indicates the winding rule used to define holes in a polygon.
It is used by SoIndexedFaceSet and SoFaceSet to determine which parts of the polygon are on the interior and which are on the exterior and should not be filled. By default this field value is NO_WINDING_TYPE, that is, no winding rules are used, so there are no holes. Use enum WindingType..
The following figure four sets of contours shown in the following figure are used with different winding rule properties to see their effects. For each winding rule, the dark areas represent interiors. Note the effect of clockwise and counterclockwise winding.
NOTE: In LINES drawing style (see SoDrawStyle), if windingType is not NO_WINDING_TYPE, or if faceType is UNKNOWN_FACE_TYPE, the edges of the tessellated triangles will be drawn.NOTE: field available since Open Inventor 4.0 | https://developer.openinventor.com/refmans/latest/RefManCpp/class_so_shape_hints.html | CC-MAIN-2022-05 | refinedweb | 981 | 57.57 |
Deploying More than Text Content using Your DigitalOcean Droplet
The following article got me to the point where I could display text content, but that's where it stopped.
How To Set Up a Node.js Application for Production on Ubuntu 16.04
I will continue from that point.
note: I could not format this post. It contains formatting I did not want and it removed my formatting.
This interface is very buggy.
I use the current versions of: Ubuntu, Nginx (server), pm2, Node, NPM, express.
My Node app (that I'm deploying) uses React and Bootstrap 4, ...
but this post is also relative, if you're only deploying HTML with CSS.
...
DigitalOcean >etc >nginx >sites-available >default (file):
Important settings not previously discussed:
location ~* \.(css|gif|html|ico|jepg|jpg|js|jsx|pdf|php|png|scss|svg|txt|zip) { add_header Cache-Control public; add_header Cache-Control must-revalidate; } location /imgs/ {}
The above file types are blocked unless you include their extensions (as I did).
Below, is how I open a port and create a "pretty URL".
Meaning, is the URL to "that" sub-directory.
location /your-whatever-subdirectoty-name { rewrite ^/your-whatever-subdirectoty-name(.*) $1 break; proxy_pass ""; }
...
DigitalOcean >home >your-account-name >:
index.html This is my Nginx server/domain splash page.
My Node apps are in sub-directories on this directory level with Nginx using reverse proxy (localhost).
This is where the article I mentioned, left off.
I will show you my version of the server.js file.
I create a subdirectory for my app (or whatever), then (for example) I name it react-bootstrap.
DigitalOcean >home >your-account-name >react-bootstrap
I create a file and name it: r-bs-server.js (r-bs- stands for react-bootstrap).
Give your server.js unique names, so it is unique inside pm2.
Every localhost port you use will have its own server.js file, so in pm2 you want to have a
unique server.js file name so you can turn that port on and off.
Now, assign the react-bootstrap directory (folder) to port 3002, as mentioned in my
above section ... sites-available >default (file) ... proxy_pass "";
r-bs-server.js:
var express = require("express"); var app = express(); var path = require("path"); app.get('/',(req,res) => { res.sendFile(path.resolve(__dirname+'/index.html')); }); app.use(express.static(__dirname + '/static')); app.use(express.static(__dirname + '/imgs')); app.listen(3002); console.log("Running at Port 3002");``` Above:
... _dirname + '/static' is a folder I need for React.
... _dirname + '/imgs' is my images folder.```
...
Then, I put my app content (including an index.html file) inside the react-bootstrap directory.
Make sure you (also) did everything the original article mentioned + what I mentioned.
Now (hopefully), the webpage will display the contents related to the (above) index.html file ...
of the react-bootstrap directory.
...
For people using React: If your app is in a sub-directory, here's a few instructions.
#1) In your package.json file, add: "homepage": "",
Now, when you: npm run build your-app-name/ is prefixed to your app's href and src values, ...
so your server can properly connect to the relevant files.
#2) In your project's index.js file (or whatever):
import {HashRouter, Route, NavLink} from 'react-router-dom'; Use HashRouter not BrowserRouter.
This should help with reloading, 404, ... problems.
MLR | https://www.digitalocean.com/community/questions/deploying-more-than-text-content-using-your-digitalocean-droplet | CC-MAIN-2019-09 | refinedweb | 554 | 53.37 |
Testing a Large Application with Multiple UI Maps
This topic discusses how to use coded UI tests when you are testing a large application by using multiple UI maps.
When you create a new coded UI test, the Visual Studio testing framework generates code for the test by default in a UIMap class. For more information about how to record coded UI tests, see How to: Create a Coded UI Test and Anatomy of a Coded UI Test.
The generated code for the UI Map contains a class for each object that the test interacts with. For each generated method, a companion class for method parameters is generated specifically for that method. If there are a large number of objects, pages, and forms and controls in your application, the UI Map can grow very large. Also, if several people are working on tests, the application becomes unwieldy with a single large UI Map file.
Using multiple UI Map files can provide the following benefits:
Each map can be associated with a logical subset of the application. This makes changes easier to manage.
Each tester can work on a section of the application and check in their code without interfering with other testers working on other sections of the application.
Additions to the application UI can be scaled incrementally with minimal effect on tests for other parts of the UI.
Create multiple UI Maps in each of these types of situations:
Several complex sets of composite UI controls that together perform a logical operation, such as a registration page in a Web site, or the purchase page of a shopping cart.
An independent set of controls that are accessed from various points of the application, such as a wizard with several pages of operations. If each page of a wizard is especially complex, you could create separate UI Maps for each page.
To add a UI Map to your project
In Solution Explorer, to create a folder in your test project to store all the UI Maps, right-click the test project file, point to Add and then click New Folder. For example, you could name it UIMaps.
The new folder is displayed under the test project.
Right-click the UIMaps folder, point to Add, and then click New Item.
The Add New Item dialog box is displayed.
Select Coded UI Test Map from the list.
In the Name box, enter a name for the new UI Map. Use the name of the component or page that the map will represent, for example, HomePageMap.
Click Add.
The Visual Studio window minimizes and the Coded UI Test Builder dialog box is displayed.
Record the actions for the first method and click Generate Code.
After you have recorded all actions and assertions for the first component or page and grouped them into methods, close the Coded UI Test Builder dialog box.
Continue to create UI Maps. Record the actions and assertions, group them into methods for each component, and then generate the code.
In many cases, the top level window of your application remains constant for all wizards, forms, and pages. Although each UI Map has a class for the top level window, all maps are probably referring to the same top level window within which all components of your application run. Coded UI tests search for controls hierarchically from the top down, starting from the top level window, so in a complex application, the real top level window could be duplicated in every UI Map. If the real top level window is duplicated, multiple modifications will result if that window changes. This could cause performance problems when you switch between UI Maps.
To minimize this effect, you can use the CopyFrom() method to ensure that the new top level window in that UI Map is the same as the main top level window.
The following example is part of a utility class that provides access to each component and their child controls which are represented by the classes generated in the various UI Maps.
For this example, a Web application named Contoso has a Home Page, a Product Page, and a Shopping Cart Page. Each of these pages share a common top level window which is the browser window. There is a UI Map for each page and the utility class has code similar to the following:
using ContosoProject.UIMaps; using ContosoProject.UIMaps.HomePageClasses; using ContosoProject.UIMaps.ProductPageClasses; using ContosoProject.UIMaps.ShoppingCartClasses; namespace ContosoProject { public class TestRunUtility { // Private fields for the properties private HomePage homePage = null; private ProductPage productPage = null; private ShoppingCart shoppingCart = null; public TestRunUtility() { homePage = new HomePage(); } // Properties that get each UI Map public HomePage HomePage { get { return homePage; } set { homePage = value; } } // Gets the ProductPage from the ProductPageMap. public ProductPage ProductPageObject { get { if (productPage == null) { // Instantiate a new page from the UI Map classes productPage = new ProductPage(); // Since the Product Page and Home Page both use // the same browser page as the top level window, // get the top level window properties from the // Home Page. productPage.UIContosoFinalizeWindow.CopyFrom( HomePage.UIContosoWindowsIWindow); } return productPage; } } // Continue to create properties for each page, getting the // page object from the corresponding UI Map and copying the // top level window properties from the Home Page. } | http://msdn.microsoft.com/en-us/library/vstudio/ff398056(v=vs.100).aspx | CC-MAIN-2014-35 | refinedweb | 869 | 51.28 |
table of contents
NAME¶
strtod, strtof, strtold - convert ASCII string to floating-point number
SYNOPSIS¶
#include <stdlib.h>
double strtod(const char *nptr, char
**endptr);
float strtof(const char *nptr, char **endptr);
long double strtold(const char *nptr, char **endptr);
strtof(), strtold():
DESCRIPTION¶¶
These¶
ATTRIBUTES¶
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶
POSIX.1-2001, POSIX.1-2008, C99.
strtod() was also described in C89.
NOTES¶.
EXAMPLES¶
See the example on the strtol(3) manual page; the use of the functions described in this manual page is similar.
SEE ALSO¶
atof(3), atoi(3), atol(3), nan(3), nanf(3), nanl(3), strfromd(3), strtol(3), strtoul(3)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manpages.debian.org/bullseye/manpages-dev/strtof.3.en.html | CC-MAIN-2021-49 | refinedweb | 151 | 63.59 |
Compare browser type systems with the new API Catalog tool
Interoperability has long been a driving priority for the Microsoft Edge team. To make informed, data-driven decisions, we need the ability to find the similarities and differences between the type systems of all browsers, to compare how core global functions, namespaces, objects, and interfaces are exposed. We also need to know how these match or differ from existing web specifications. Finally, we need an easy way to review the data.
To achieve all this, we built the API Catalog, a new tool which we’ve now made public for the benefit of the web community. Learn more in Greg Whitworth’s talk introducing our new Platform Data site at Microsoft Edge Web Summit, or read on for a detailed overview!
A quick tour of the API Catalog
The API Catalog has two views: a visualization and a table view. The former is a high level visualization of the browser and specification landscape in the form of an interactive Venn diagram. You can select and compare any three data sets (browsers or specifications) to see a graphical representation of both intersecting and unique APIs.
The table view allows you to browse the full list of known APIs that are either defined in a specification we have crawled or detected in the type system of a browser. You can search and filter to examine specific subsets of APIs. For example, you might filter to see only APIs from the Beacon specification that are detected in all browsers:
The raw CSV data is also available on Github for in-depth analysis – we welcome your feedback on this dataset on our GitHub issues page.
Compiling the data
In the weeks since announcing the API Catalog at Microsoft Edge Web Summit, we’ve gotten some questions about where the data comes from. To have the most objective view of browser APIs, we wanted the raw, unprocessed data from browser implementations and web specifications. The API Catalog combines data from two primary sources: To collect browser data, we extract the type system data from the most recent versions of the Microsoft Edge, Google Chrome, Mozilla Firefox and Apple Safari for comparison. We then coalesce the data with the Web IDL definitions inside the specifications.
Collecting data from each browser requires looping over the type system starting at the Window object (WorkerGlobalScope in case of workers) and collecting only the members defined on an interface. We then verify each member (using <object>.hasOwnProperty) collected to verify if it is actually a member belonging to that object. This is used to eliminate members from the prototype chain. Results of that filter provide a view of the object that is consistent with what is defined in the specifications allowing the catalog to compare the data in a consistent format.
To collect specification data, we use specref.org, a database of over 14,000 (and growing) specifications and documents that are helping define what should be implemented in each browser. We use this database to look up the URLs for the specifications in order to gather the necessary Web IDL data. We look up the latest published documents and the latest editors draft URLs from the specref.org database. Each URL is loaded and the document is then parsed for Web IDL sections. This builds the specification definitions for the catalog’s interface and members. Today, we are using a subset of notable specifications to populate API Catalog. We plan to expand the specification data collection to all specifications in the near future – we’d love your feedback on which specifications to prioritize that we may be missing!
Understanding and using the data
The API Catalog shows a view of the API members grouped by interfaces while not mixing in the prototype chain members. In looking at the data, a number of low level questions about the specifications and the types systems of the browsers arose:
- When the specification definition does not match the reality of browser implementations, is it a specification issue?
- Do the browser implementations define the member in the incorrect place on the prototype chain?
- Is this a browser proprietary member?
- Is this member just defined wrong or in the wrong place?
- How are ES 2015 features exposed on the browser’s platform types?
Each specification and every implementation has many definitions for interfaces and members. With so many definitions, it is often the case where members are incorrectly defined or missed altogether. Using the API Catalog, we can quickly see how implementations vary, what each browser may have implemented, and where it may have implemented the members. Note that missing APIs are not necessarily indicative of whether a browser supports a given feature―it may simply be that they don’t expose it in the type system in a way that can be detected using the methods we’ve described.
With API Catalog, you can also find that not every specification is defined correctly. For example, if we look at the WebGL 2 Specification, we will find a Web IDL definition for WebGLRenderingContextBase interface where there are typedef members incorrectly defined (see Issue 1600). Specifications also have definitions that may be at odds with implementations. This may be the case where the browser vendors will want to drive changes into the specification based on real world implementation (see Issue 182). Alternatively, there may just be a lack of implementations (see Issue 151).
Finally, the API Catalog also helps expose bugs in each browser’s type system. For example, Microsoft Edge in EdgeHTML 13 shows that it doesn’t support the length property for many interfaces. Length is actually supported, but is unfortunately not in the correct location on the prototype chain, which potentially could cause interoperability issues (e.g., Issue 110711).
The API catalog can be used to expose or illustrate bugs in other browser type systems as well – for example, Webkit bug 49739 and Chrome bug 43394. Our goal for API Catalog is to show an unbiased view of each browser’s type system. With this data we believe all browser vendors are armed to make type systems more interoperable.
A resource for the web community
Through the API Catalog, we try to make every browser’s type system easy to navigate. The views we expose are similar to what a developer might create herself by running scripts within the page, or from each browser’s developer console. Browser vendors and the web community can work together to expand the “interoperable intersection” of the web platform. We welcome collaboration from browser vendors and contributors from the web community to improve this tool, and look forward to your feedback!
– Justin Rogers, Principal Software Engineer
– Arron Eicholz, Program Manager
– Joseph Shum, Program Manager | https://blogs.windows.com/msedgedev/2016/04/28/introducing-api-catalog/ | CC-MAIN-2021-10 | refinedweb | 1,127 | 51.07 |
Although CGI may not be the most popular platform to host WSGI applications, with the intent of trying to promote the cause of writing portable WSGI application code, in mod_wsgi the decision was made to restrict access to sys.stdin and sys.stdout to highlight when non portable WSGI code was being written.
The result of doing this is that when 'print' was used in a WSGI application hosted by mod_wsgi, a Python exception would be raised of the type:
IOError: sys.stdout access restricted by mod_wsgi
This was all done with good intention, but what has been found is that people can't be bothered reading the documentation which explains why it was done and even when they do, they still can't be bothered fixing up the code not to use 'print'. It seems the convenience of using 'print' out weighs the ideal of writing code that may actually work across different WSGI hosting mechanisms.
More annoying is that whenever questions arise about this error on the irc channels, rather than people being told to read the documentation and/or fix their code not to use 'print', voodoo is summoned and they are instead told to use the magic incantation of:
sys.stdout = sys.stderr
Yes this is given as one of the workarounds in the documentation, the other being to disable the restriction using the configuration directive specifically for the purpose, but the only reason the workaround is given is for where you have no choice because you cannot change the code to remove the 'print' statement. People aren't told this though, all they are told is to make that change and effectively ignore the whole issue.
The whole mythology that is developing around this is now getting to the extent that some have been saying that neither 'sys.stdout' or 'sys.stderr' are working in mod_wsgi. The suggestion is starting to come out now that if you want to get any debug output from your WSGI application that you have to use a separate log file of your own creation, optionally hooked up to the 'logging' module. In one case, a BuildOut recipe is explicitly providing an option to define the separate log file that they believe has be used to replace 'sys.stdout' and 'sys.stderr'.
So, what is the real answer? Well, if you care about writing portable WSGI application code, then do not use 'print' by itself, instead redirect it to 'sys.stderr' by writing:
print >> sys.stderr, 'message ...'
This is especially important if you are writing framework libraries or plugins to be used in some other application or by other users. You shouldn't be making an assumption that 'sys.stdout' can always be used. If it is a debug or error message, then use 'sys.stderr' as it is meant to be.
If for some reason you really don't want to care about the issue, then rather than use the magic voodoo above, you should simply disable the restrictions that mod_wsgi puts into place altogether. This is done by putting in the main Apache configuration file:
WSGIRestrictStdin Off
WSGIRestrictStdout Off
Anyway, because of all the contention arising over all of this, in mod_wsgi 3.0 I will be giving up and will be making the restrictions off by default. If you want to write non portable WSGI application, you can quite happily do so. If you do care about portable WSGI application code, then you will be able to optionally reenable the restriction using the same directives above.
9 comments:
I'm sorry to see you having to make this change to appeal to the unwashed masses who can't read documentation. I will certainly turn that feature back on in mod_wsgi 3.0.
I am trying hard to figure out why I should care if my debugging code is cross system compatible. Its debugging code, written to run on the system I am working on right now.
It's going to be gone once the bug is tracked down.
The WSGIRestrictStdin Off
WSGIRestrictStdout Off
settings would be much more useful for developers if they would work in the .htaccess
Since (I think) the average developer does not have access to the httpd.conf he or she may feel that there is no way to get stdout. Since for them there is not.
What about providing a third option, a data sink, like /dev/null. Provide a fake stdout file object that instead of raising an error or sending the output back to CGI, it just silently discards it. Maybe
WSGIRestrictStdin Discard
This could also help in those situations where you have to use a third-party module, that unfortunately sometimes does a stray print.
@garylinux
It isn't just debug code we are talking about here, it could be quite valid error messages which the application wants to log to the error log file, but people have wrongly used 'print' by itself to send it to stdout instead of stderr. Any WSGI framework library which used stdout for logging error messages would fail to work properly if used with a WSGI application hosted by a CGI/WSGI bridge.
One cant have WSGIRestrictStdin and WSGIRestrictStdout in the .htaccess file because the settings need to be known at the time that the Python interpreter is initialised. This is why they are global directives which affect the main interpreter and all sub interpreters which are created. The .htaccess file is way too late for that as the interpreters are already created before that point.
@Deron
The issue isn't to have the messages logged to stdout to vanish, it is to get people to log them to the proper output stream, which is stderr.
One thing that both yours and the other response show is that even people who have some experience with Python development don't necessarily understand the greater issues around this. So, what hope is there for an absolute newbie who wouldn't even know about the concepts of standard input, standard output and standard error in the first place. This is why one can never win and you just have to make it accepting of bad programming practice.
For Google people that find this post in the future, a cause of:
"sys.stdin access restricted by mod_wsgi" when moving from dev to production is because of an errant:
import pdb; pdb.set_trace();
Frustratingly, wsgi will throw an error from the NEXT line so you won't know that pdb caused it :-)
Your suggested "portable" version of
print >> sys.stderr, 'message ...'
is no more portable than the
print 'message ...'
than it's replacing.
It should be:
environ['wsgi.errors'].write('message ...')
Jon, the documentation at '' mentions 'wsgi.errors'. The problem is that most frameworks hide it or it is non obvious how to get access to it. Even more, practically no one realises it is there and uses it and they always just use 'print' without even redirecting it. The 'wsgi.errors' stream is also not available outside of the request context, so useless at global scope when importing modules or in background threads, or library code. Finally, using 'write()' on 'wsgi.errors' is also technically not sufficient anyway as WSGI specification says that to guarantee display of message straight away, must also be flushed. Normally 'sys.stderr' would at least be automatic flush at end of line or earlier, except for broken systems like mod_python. | http://blog.dscpl.com.au/2009/04/wsgi-and-printing-to-standard-output.html | CC-MAIN-2014-15 | refinedweb | 1,238 | 60.75 |
Data::Pareto - Computing Pareto sets in Perl
Version 0.05
use Data::Pareto; # only first and third columns are used in comparison # the others are simply descriptive my $set = new Data::Pareto( { columns => [0, 2] } ); $set->add( [ 5, "pareto", 10, 11 ], [ 5, "dominated", 11, 9 ], [ 4, "pareto2", 12, 12 ] ); # this returns [ [ 5, "pareto", 10, 11 ], [ 4, "pareto2", 12, 12 ] ], # the other one is dominated on selected columns $set->get_pareto_ref;
This module makes calculation of Pareto set. Given a set of vectors (i.e. arrays of simple scalars), Pareto set is all the vectors from the given set which are not dominated by any other vector of the set. A vector
X is said to be dominated by
Y, iff
X[i] >= Y[i] for all
i and
X[i] > Y[i] for at least one
i.
Pareto sets play an important role in multiobjective optimization, where each non-dominated (i.e. Pareto) vector describes objectives value of "optimal" solution to the given problem.
This module allows occurrence of duplicates in the set - this makes it rather a bag than a set, but is useful in practice (e.g. when we want to preserve two solutions giving the same objectives value, but structurally different). This assumption influences dominance definition given above: two duplicates never dominate each other and hence can be present in the Pareto set. This is controlled by
duplicates option passed to new(): if set to
true value, duplicates are allowed in Pareto set; otherwise, only the first found element of the subset of duplicated vectors is preserved in Pareto set.
The values are allowed to be invalid. The meaning of 'invalid' is 'the worst possible'. It's different concept than 'unknown'; unknown value make the definition of domination less clear.
By default, the comparison of column values is numerical and the smaller value dominates the larger one. If you want to override this behaviour, pass your own dominator sub in arguments to new().
By default, a vector is passed around as a ref to array of consecutive column values. This means you shouldn't mess with it after passing to
add method.
Creates a new object for calculating Pareto set.
The first argument passed is a hashref with options; the recognized options are:
columns
Arrayref containing column numbers which should be used for determining domination and duplication. Column numbers are
0-based array indexes to data vectors.
Only values at those positions will be ever compared between vectors. Any other data in the vectors may be present and is not used in any way.
At least one column number should be passed, for obvious reasons.
duplicates
If set to
true value, duplicated vectors are all put in Pareto set (if they are Pareto, of course). If set to
false, duplicates of vectors already in the Pareto set are discarded.
invalid
The value considered invalid in pareto set. Such value is dominated by any value and dominates only invalid value.
However, computations of domination in presence of invalid values can be considerably slower, as much as 5 times. So it probably will be faster to first parse the data and replace invalid markers with some huge-and-surely-dominated values.
column_dominator
The sub(s) used to compare specific column values and determining domination between them. Scalar, sub ref or hash ref. If not set, the default is that the numerically smaller value dominates the other one.
When the scalar is passed, it is assumed to be the name of a predefined dominator. This is a much faster option to specifying the sub of your own. Recognized dominators are:
minnumerically smaller value dominates
maxnumerically greater value dominates
lexiearlier in collation order value dominates (lexicographical order)
lexi_revlater in collation order value dominates (reversed lexicographical order)
stdstandard, i.e.
mindominator
During creation of Pareto set, the dominator sub is called with three arguments: column number, first vector's value, second vector's value, and should return
true, when the second value dominates the first one, assuming they appeared in the specified column.
Make sure that your sub returns
true when two passed values are the same. This is necessary to obey the whole Pareto set domination contract.
There are two approaches possible when the values in different columns are of different types, in the sense of domination. First, you can use passed column number to decide the domination check function. Alternatively, you can pass a hash ref with mapping from the column number to the sub ref used to compare the given column:
my $lexi_dominator = sub { my ($col, $dominated, $by) = @_; return ($dominated ge $by); }; my $min_dominator = sub { my ($col, $dominated, $by) = @_; return ($dominated >= $by); } my $set = new Data::Pareto({ columns => [0, 2], column_dominator => { 0 => $lexi_dominator, 2 => $min_dominator } }); $set->add(['a', 'label 1', 12], ['b', 'label 2', 9]);
The rest of arguments are assumed to be vectors, and passed to add() method.
Tests vectors passed as arguments and adds the non-dominated ones to the Pareto set.
Returns the current content of Pareto set as a list of vectors.
Returns the current content of Pareto set as a ref to array with vectors. The return value references the original array, so treat it as read-only!
Checks if the first vector passed is dominated by the second one. The comparison is made based on the values in vectors' columns, which were passed to new().
The vectors passed are never duplicates of each other when this method is called from inside this module.
Returns
true, when the first vector from arguments list is dominated by the other one, and
false otherwise.
Checks if the given value is considered invalid for the current object. Every value is valid by default.
Allow specifying built-in dominators inside dominator hash.
For large data sets calculations become time-intensive. There are a couple of techniques which might be applied to improve the performance:
Przemyslaw Wesolek,
<jest at go.art.pl>
Please report any bugs or feature requests to
bug-data-pareto at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
You can find documentation for this module with the perldoc command.
perldoc Data::Pareto
You can also look for information at:
This program is free software; you can redistribute it and/or modify it under the terms of the Artistic License 2.0. For details, see the full text of the license in the file LICENSE. | http://search.cpan.org/~pwes/Data-Pareto-0.05/lib/Data/Pareto.pm | CC-MAIN-2015-22 | refinedweb | 1,081 | 54.22 |
The list function & practical uses of array destructuring in PHP
| 5 min read.
list() 101
The
list() construct (it’s actually not a function but a language construct like
array()) has been around since PHP 4.
list() allows you to pull variables from an array based on their index. Let’s start with a quick primer!
Here’s what a basic
list() call looks like:
<?php list($a, $b, $c) = ['foo', 'bar', 'baz']; echo $a; // "foo" echo $b; // "bar" echo $c; // "baz"
You can give
list() less arguments than the array’s size if you don’t care about the rest of the items.
<?php list($a, $b) = ['foo', 'bar', 'baz']; echo $a; // "foo" echo $b; // "bar"
If you try to pull out an index that isn’t set, an undefined offset error is thrown.
<?php list($a, $b, $c) = ['foo', 'bar']; echo $a; // "foo" echo $b; // "bar" echo $c; // Undefined offset: 1
To skip an intermediate value, you can provide an empty argument.
<?php list($a, , $b) = ['foo', 'bar', 'baz']; echo $a; // "foo" echo $b; // "baz"
As of PHP 7.1, you can specify the key of the item you want to unpack from an associative array. (RFC here)
<?php $person = [ 'name' => 'Sebastian', 'job' => 'Developer', ]; list('job' => $job) = $person; echo $job; // "Developer"
Fun fact: since
list() is an assignment operator, you could also assign things in arrays. Not saying this is particularly useful, but it’s possible!
<?php $people = [ ['name' => 'Freek', 'role' => 'Developer'], ['name' => 'Sebastian', 'role' => 'Developer'], ['name' => 'Willem', 'role' => 'Designer'], ]; $names = []; foreach ($people as $person) { list('name' => $names[]) = $person; } var_dump($names); // ["Freek", "Sebastian", "Willem"];
The
list() operator is pretty cool, but has one caveat: it’s ugly. PHP 7.1 fixes this with some syntactic sugar: plain old vanilla square brackets. These two statements do the same:
<?php // Oldschool list($a, $b, $c) = ['foo', 'bar', 'baz']; // PHP 7.1 [$a, $b, $c] = ['foo', 'bar', 'baz'];
While it’s not as powerful as JavaScript’s array destructuring, it’s still a cool tool to have baked in the language.
Now that we know how array destructuring works, let’s move on to some practical examples.
Exhibit A: Some Classic Unpacking Examples
In this exhibit, we’ll glide over some with some real world situations.
You’ve exploded a string and want to immediately assing the result as two seperate variables.
<?php [$user, $repository] = explode('/', 'spatie/laravel-medialibrary', 2);
You’ve created a few objects at once, and want them all as their own variables.
<?php [$userA, $userB, $userC] = factory(User::class, 3)->create();
You want to swap two variables.
<?php $a = 'hello'; $b = 'world'; [$a, $b] = [$b, $a]; echo $a; // "world" echo $b; // "hello"
Voila, nothing too fancy here, let’s move over to some more specific use cases, borrowed from other languages.
Exhibit B: Tuples
The most concise definition of a tuple is a “data structure consisting of multiple parts”. Tuples are pretty much arrays with a fixed size and a predefined structure, for example a tuple of a status code and a message, a tuple of a longitude and a latitude, etc.
Tuples are useful in PHP when a key-value pair just doesn’t cut it (maybe we need duplicate keys, or would want more that one value) and when we don’t want to bother creating a value object.
Let’s start with an array of some fruits and vegetables. Every piece of produce has an
id, a
name and a
type.
<?php $produce = [ [1, 'apple', 'fruit'], [2, 'banana', 'fruit'], [3, 'carrot', 'vegetable'], ];
Using the index to access the values requires us to reason about the array contents on every statement. Though process: “This is
$item[0], which means is the first item in the array. Next is
$item[1], which means it’s the second item, etc.”
<?php $mappedProduce = []; foreach ($produce as $produce) { $mappedProduce[] = [ 'id' => $produce[0], 'name' => $produce[1], 'type' => $produce[2], ]; }
By destructuring, we can immediately assign the values to a variable, making the contents of the loop clearer. Thought process: “I have an array containing entries that have an id, a name and a type.”
<?php $mappedProduce = []; foreach ($produce as [$id, $name, $type]) { $mappedProduce[] = [ 'id' => $id, 'name' => $name, 'type' => $type, ]; }
Bonus snippet: we could use compact to create the array with a single statement (although I don’t like compact because it’s just as ugly as
list()).
<?php $mappedProduce = []; foreach ($produce as [$id, $name, $type]) { $mappedProduce[] = compact('id', 'name', 'type'); }
A more real world example: I recently had to make a pretty large form that only contained simple text inputs. I defined every input as a
[$name, $value, $required] tuple and looped over all of them.
@foreach([ ['name', $user->name, true], ['telephone', $user->telephone, true], ['fax', $user->fax, false], // ... ] as [$name, $value, $required]) <div> <label for="{{ $name }}">{{ __($name) }}</label> <input type="{{ $text }}" name="{{ $name }}" id="{{ $name }}" value="{{ $value }}" @if($required) required @endif /> <div>@endforeach</div> </div>
Further reading: Tuples in Python, Tuples in Elixir
Exhibit C: Multiple Returns
Some languages allow multiple returns. Consider a function that expects an integer, and returns two integers, one of them being
$input + 5, the other
$input - 5. Here’s what that would look like in Go:
func addAndRemoveFive(i int) (int, int) { return i + 5, i - 5 } func main() { x, y := addAndRemoveFive(10) }
We could achieve something similar by returning an array in PHP and immediately destructuring:
<?php function addAndRemoveFive(int $i): array { return [$i + 5, $i - 5]; } [$x, $y] = addAndRemoveFive(10); echo $x; // 15 echo $y; // 5
One large downside here: there’s currently no way to document this, neither with PHP 7’s return types nor with PHPDoc (unless both returns have the same type, then you could hint with
@return int[]).
A more real world situation where multiple returns can be useful is cases where you’re expecting a status and a message that you may or may not care about, like validation.
<?php [$valid, $reason] = $validator->validate($data); if (! $valid) { return new JsonResponse(422, ['reason' => $reason]); } return new JsonResponse(200);
The previous example could also be interpreted as a tuple containing a status and a reason.
Further reading: Multiple returns in Go
That’s It!
That concludes today’s presentation. I’ll update the post in the future when I come accross other nifty use cases. Meanwhile, feel free to hit me up on Twitter if you’re using
list() in other cool ways! | https://sebastiandedeyne.com/the-list-function-and-practical-uses-of-array-destructuring-in-php/ | CC-MAIN-2019-47 | refinedweb | 1,066 | 58.82 |
signal()
Set handling for exceptional conditions
Synopsis:
#include <signal.h> void ( * signal( int sig, void ( * func)(int) ) )( int );
Since:
BlackBerry 10.0.0:
The signal() function is used to specify an action to take place when certain conditions are detected while a program executes. See the <signal.h> header file for definitions of these conditions, and also refer to the System Architecture manual.
In order to attach signal handlers to a process with a different real or effective user ID, your process must have the PROCMGR_AID_SIGNAL ability enabled. For more information, see procmgr_ability().
There are three types of actions that can be associated with a signal: SIG_DFL, SIG_IGN or a pointer to a function. Initially, all signals are set to SIG_DFL or SIG_IGN prior to entry of the main() routine. An action can be specified for each of the conditions, depending upon the value of the func argument, as discussed below.
func is a function().
It isn't safe to use floating-point operations in signal handlers..
func is SIG_DFL
If func is SIG_DFL, the default action for the condition is taken.
If the default action is to stop the process, the execution of that process is temporarily suspended. When a process stops, a SIGCHLD signal is generated for its parent process, unless the parent process has set the SA_NOCLDSTOP flag (see sigaction()). While a process is stopped, any additional signals that are sent to the process aren't delivered until the process is continued, except SIGKILL, which always terminates the receiving process.
Setting a signal action to SIG_DFL for a signal that is pending, and whose default action is to ignore the signal (for example, SIGCHLD), causes the pending signal to be discarded, whether or not it's blocked.
func is SIG_IGN
If func is SIG_IGN, the indicated condition is ignored.
You can't set the action for the SIGSTOP and SIGKILL signals to SIG_IGN.
Setting a signal action to SIG_IGN for a signal that's pending causes the pending signal to be discarded, whether or not it is blocked.
If a process sets the action for the SIGCHLD signal to SIG_IGN, its children won't enter the zombie state and the process can't use wait() or waitpid() to wait on their deaths.
Handling a condition
When a condition is detected, it may be handled by a program, it may be ignored, or it may be handled by the usual default action (often causing an error message to be printed on the stderr stream followed by program termination).
A condition can be generated by a program using the raise() or kill() function
Returns:
The previous value of func for the indicated condition, or SIG_ERR if the request couldn't be handled ( errno is set to EINVAL).
Examples:
#include <stdlib.h> #include <signal.h> sig_atomic_t signal_count; void MyHandler( int sig_number ) { ++signal_count; } int main( void ) { signal( SIGFPE, MyHandler ); /* set own handler */ signal( SIGABRT, SIG_DFL ); /* Default action */ signal( SIGFPE, SIG_IGN ); /* Ignore condition */ return (EXIT_SUCCESS); }
Classification:
Last modified: 2014-11-17
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/s/signal.html | CC-MAIN-2021-39 | refinedweb | 515 | 53.51 |
In the previous entry I discussed changes to Amara's mutation API. In the original discussion one of the things that came up was the old element/attribute conundrum. Take the following document:
Users like to be able to access both elements and attributes using friendly Python idiom, but here we have a name clash on the resulting
a object.Right now Amara exposes the attribute as
a.b and the element as
a.b_, using name mangling to disambiguate.
The important thing to remember, however, is that such clashes are quite rare in practice, even when you throw in namespaces, so such mangling is rarely necessary, and I personally think Amara's current behavior makes sense. But I may just have a blind spot, so I've been paying attention to suggestions from others.
Jeremy Kloth suggested just always using different idioms.
a.[u"b"] for the attribute and
a.b for the element. This is not a bad idea, but I feel that given that clashes are rare, that it complicates the common case just to aid the rare case.
Luis Miguel Morillas had an idea I consider almost the opposite. Rather than completely separate element/attribute idioms, Luis suggests embracing how Amara has unified them. Right now Amara rolls up multiple elements of the same name in a convenient way:
Works such that
a.b or
a.b[0] yields the element with
y and
a.b[1] yields the element with
z. Luis thinks that the following case should just be an extension of this:
And then
a.b or
a.b[0] would yields the attribute value (
u"x"),
a.b[1] would yield the element with
y, and
a.b[2] would yield the element with
z. I kinda think of this idea as "so crazy it almost makes perfect sense", but it's way too big a change to introduce before Amara 1.0. I'd be curious to hear what others think of it. Luis actually brings it up in the context of mutation--see his original post (scroll to the bottom)--but I figure that the mutation API will follow naturally from the access API, so I'm focusing my thoughts a bit. | http://copia.posthaven.com/elements-versus-attributes-in-amara-0 | CC-MAIN-2017-51 | refinedweb | 372 | 66.54 |
Helper functions for storing text in Keychain for iOS, macOS, tvOS and WatchOS
This is a collection of helper functions for saving text and data in the Keychain.
As you probably noticed Apple’s keychain API is a bit verbose. This library was designed to provide shorter syntax for accomplishing a simple task: reading/writing text values for specified keys:
let keychain = KeychainSwift() keychain.set("hello world", forKey: "my key") keychain.get("my key")
The Keychain library includes the following features:
- Get, set and delete string, boolean and Data Keychain items
- Specify item access security level
- Synchronize items through iCloud
- Share Keychain items with other apps
What’s Keychain?
Keychain is a secure storage. You can store all kind of sensitive data in it: user passwords, credit card numbers, secret tokens etc. Once stored in Keychain this information is only available to your app, other apps can’t see it. Besides that, operating system makes sure this information is kept and processed securely. For example, text stored in Keychain can not be extracted from iPhone backup or from its file system. Apple recommends storing only small amount of data in the Keychain. If you need to secure something big you can encrypt it manually, save to a file and store the key in the Keychain.
Setup
There are three ways you can add KeychainSwift to your Xcode project.
Add source (iOS 7+)
Simply add KeychainSwiftDistrib.swift file into your Xcode project.
Setup with Carthage (iOS 8+)
Alternatively, add
github "evgenyneu/keychain-swift" ~> 9', '~> 9.0'
Setup with Swift Package Manager
Add the following text to your Package.swift file and run
swift build.
import PackageDescription let package = Package( name: "KeychainSwift", dependencies: [ .Package(url: "", versions: Version(9,0,0)..<Version(10,0,0)) ] )
Legacy Swift versions
Setup a previous version of the library if you use an older version of Swift.
iOS 7 support
Use iOS 7 compatible version of the library.().set("Hello world", forKey: "key 1", withAccess: .accessibleWhenUnlocked)
You can use
.accessibleAfterFirstUnlock if you need your app to access the keychain item while in the background. Note that it is less secure than the
.accessibleWhenUnlocked option.
See the list of all available access options.
Synchronizing keychain items with other devices
Set
synchronizable property to
true to enable keychain items synchronization across user’s multiple devices. The synchronization will work for users who have the "Keychain" enabled in the iCloud settings on their devices.
Setting
synchronizable property to
true will add the item to other devices with the
set method and obtain synchronizable items with the
get command. Deleting a synchronizable item will remove it from all devices.
Note that you do NOT need to enable iCloud or Keychain Sharing capabilities in your app’s target for this feature to work.
// First device let keychain = KeychainSwift() keychain.synchronizable = true keychain.set("hello world", forKey: "my key") // Second device let keychain = KeychainSwift() keychain.synchronizable = true keychain.get("my key") // Returns "hello world"
We could not get the Keychain synchronization work on macOS.
Sharing keychain items with other apps
In order to share keychain items between apps on the same device they need to have common Keychain Groups registered in Capabilities > Keychain Sharing settings. This tutorial shows how to set it up.
Use
accessGroup property to access shared keychain items. In the following example we specify an access group "CS671JRA62.com.myapp.KeychainGroup" that will be used to set, get and delete an item "my key".
let keychain = KeychainSwift() keychain.accessGroup = "CS671JRA62.com.myapp.KeychainGroup" // Use your own access goup keychain.set("hello world", forKey: "my key") keychain.get("my key") keychain.delete("my key") keychain.clear()
Note: there is no way of sharing a keychain item between the watchOS 2.0 and its paired device:
Setting key prefix
One can pass a
keyPrefix argument when initializing a
KeychainSwift object. The string passed in
keyPrefix argument will be used as a prefix to all the keys used in
set,
get,
getData and
delete methods. Adding a prefix to the keychain keys can be useful in unit tests. This prevents the tests from changing the Keychain keys that are used when the app is launched manually.
Note that
clear method still clears everything from the Keychain regardless of the prefix used.
let keychain = KeychainSwift(keyPrefix: "myTestKey_") keychain.set("hello world", forKey: "hello") // Value will be stored under "myTestKey_hello" key
Check if operation was successful
One can verify if
set,
delete and
clear methods finished successfully by checking their return values. Those methods return
true on success and
false on error.
if keychain.set("hello world", forKey: "my key") { // Keychain item is saved successfully } else { // Report error }
To get a specific failure reason use the
lastResultCode property containing result code for the last operation. See Keychain Result Codes.
keychain.set("hello world", forKey: "my key") if keychain.lastResultCode != noErr { /* Report error */ }
Using KeychainSwift from Objective-C
This manual describes how to use KeychainSwift in Objective-C apps.
Known serious issue
It has been reported that the library sometimes returns
nil instead of the stored Keychain value. The issue seems to be random and hard to reproduce. It may be connected with the Keychain issue reported on Apple developer forums. If you experienced this problem feel free to create an issue so we can discuss it and find solutions.
Demo app
Running Keychain unit tests
Xcode 8 introduced additional hoops that one needs to jump through in order to run the unit test:
- Enable signing in both the demo app and the test target.
- Enable Keychain Sharing in the Capabilities tab of the demo app target.
- Select the demo app as Host Application in the test target.
The process is shown in more details in this article.
Alternative solutions
Here are some other Keychain libraries.
- DanielTomlinson/Latch
- jrendel/SwiftKeychainWrapper
- kishikawakatsumi/KeychainAccess
- matthewpalmer/Locksmith
- phuonglm86/SwiftyKey
-.
Feedback is welcome
If you notice any issue, got stuck or just want to chat feel free to create an issue. We will be happy to help you.
License
Keychain Swift is released under the MIT License.
Latest podspec
{ "name": "KeychainSwift", "version": "9.0.2", "license": { "type": "MIT" }, "homepage": "", "summary": "A library for saving text and data in the Keychain with Swift.", "description": "This is a collection of helper functions for saving text and data in the Keychain.nn* Write and read text and Data with simple functions.n* Specify optional access rule for the keychain item.n* Limit operations to a specific access group.", "authors": { "Evgenii Neumerzhitckii": "[email protected]" }, "source": { "git": "", "tag": "9.0.2" }, "screenshots": "", "source_files": "Sources/*.swift", "platforms": { "ios": "8.0", "osx": "10.10", "watchos": "2.0", "tvos": "9.0" }, "pushed_with_swift_version": "4.0" }
Thu, 28 Sep 2017 05:00:33 +0000 | https://tryexcept.com/articles/cocoapod/keychainswift | CC-MAIN-2019-47 | refinedweb | 1,114 | 59.4 |
.
Introduction
In this tutorial we will check how to obtain temperature measurements from a DHT22 sensor using the ESP32, the Arduino core and timer interrupts.
In order to interact with the DHT22 from the ESP32, we will need an auxiliary library. Please check this tutorial which explains how to install it and also how to wire the ESP32 to the DHT22.
For an introduction on the ESP32 timers, please check this previous post. It explains in detail the timer concepts, which will be important to understand the code below.
One important thing to remember about the ESP32 timers is that they are implemented with 64 bit counters and 16 bit prescalers. So later, in the coding section, we are going to be configuring the timer interrupt by setting the counter value at which the interrupt should be triggered.
For a tutorial that introduces the synchronization between a task and an Interrupt service Routine, please check here. As shown in that tutorial, we will take advantage of the FreeRTOS semaphores to achieve such synchronization.
In terms of implementation, our code will periodically read measurements from the DHT22 sensor. Nonetheless, instead of relying on polling or Arduino delays, we will use the timer interrupts to implement the periodicity of the measurements.
Note however that interrupt handling functions should run as fast as possible, so we should not communicate with the DHT22 inside those functions. So, the interrupts will only be responsible for signaling the Arduino main loop when its time to get a new measurement.
We will use an interval of 10 seconds between each measurement, which is a lot of time when we think about machine instructions and clock frequencies. So, between measurements, it doesn’t make sense to leave the main loop active, wasting precious CPU time.
So, we will use the semaphores to keep the main loop “blocked” while it is waiting for the next interrupt, leaving the CPU available for the scheduler to execute other tasks. We could have used a dedicated FreeRTOS task but since for this example we will only fetch and print measurements, we will keep the code simpler by using the main loop.
Naturally, this blocking architecture doesn’t add much to our simple example where our application doesn’t do anything else than interacting with the DHT22. Nonetheless, in a real application use case where multiple tasks may be executing concurrently, freeing the CPU from a task when it is not doing anything is of extreme importance.
On a final introductory note, please take in consideration that the DHT22 allows to both get temperature and humidity measurements. Nonetheless, for simplicity and to focus on the synchronization, we will only fetch temperature measurements.
To facilitate the interaction with the mentioned sensor, I’m using a DFRobot DHT22 module which already has all the electronics needed and exposes a wiring terminal that facilitates the connections to the ESP32.
The tests were performed using a DFRobot’s ESP32 module integrated in a ESP32 development board.
The code
Includes and global variables
The first thing we are going to do is including the DHTesp.h library, so we can interact with the DHT22 using a high level API rather than having to worry about the lower level details of the communication protocol between the ESP32 and this sensor.
#include "DHTesp.h"
Then, we will need an object of class DHTesp, which we will use to get the measurements from the sensor. We will do it by calling some methods of this object, as we will see below.
DHTesp dht;
In order to be able to configure the timer, we will need to declare a pointer to a variable of type hw_timer_t. This will be used later to configure the timer.
hw_timer_t * timer;
To finalize the global variable declarations, we will need a semaphore, which will be used to synchronize the main loop and the Interrupt Service Routine.
SemaphoreHandle_t syncSemaphore;
Moving on to the setup function, we will start by opening a serial connection, so we can later output the measurements obtained from the DHT22.
Serial.begin(115200);
Next, we will create the semaphore and assign it to our global variable, so it can be used to synchronize the main loop and the Interrupt Service Routine. We will create a binary semaphore with a call to the xSemaphoreCreateBinary function. This function takes no arguments and will return a SemaphoreHandle_t that we should use in the next semaphore related calls.
Note that there are other types of semaphores in FreeRTOS, such as counting semaphores, but since we are going to do some simple synchronization with just need a binary one.
Another important thing to take in consideration is always making sure to initialize the synchronization primitive before setting up the interrupts, to avoid an interrupt occurring sooner than expected and making use of an uninitialized semaphore.
syncSemaphore = xSemaphoreCreateBinary();
Before configuring the interrupts, we also need to take care of initializing the DHT22 sensor interface. We do this by calling the setup method on the DHTesp object, passing as input the number of the microcontroller pin connected to the sensor. I’m using pin 27, but you can try with others.
Note that if we don’t pass any additional argument, the library will try to detect automatically the sensor connected to the ESP32. This procedure is performed because the library supports other sensors.
If you experience problems with this automatic detection, my recommendation is to explicitly pass as second argument of this method the value DHT22, which is defined here.
dht.setup(27);
Next, we will configure the timer and bind it to an interrupt. The details about the ESP32 timers were already covered here, so please take a look at that post if you are not yet familiar with them.
So, we will initialize the timer with a call to the timerBegin method, which will return a pointer to a structure of type hw_timer_t, which we will store in our previously declared global variable.
As first input, we need to pass the number of the timer we want to use. The ESP32 has 4 hardware timers and we should specify which one to use by passing a number from 0 to 3. We will be using timer 0.
As second argument, we need to pass the prescaler value, which will allow to divide the ESP32 timers’ clock signal frequency by a factor from 2 to 65536 (the prescaler has 16 bits). Since ESP32 boards clock frequency for the timers is usually 80 MHz (the value used by the Firebeetle board), we can divide it by 80 and we will get a signal of 1 MHz.
With this value, it means that in each second the counter of the timer will be incremented 1000000 times. In other words, this means that the counter value will be incremented at each microsecond.
So, when later we are specifying the counter value that will trigger the interrupt, we will be setting the value in microseconds.
The third value to pass to the timerBegin method is a Boolean value that will simply indicate if the counter should count up or down (true and false, respectively). We are going to set it to count up.
timer = timerBegin(0, 80, true);
Note that at this point the timer is not yet enabled, since we did not configure the value that should trigger the interrupt. But before we do that, we need to bind it to an interrupt handling function.
We do this by calling the timerAttachInterrupt function. As first input, it will receive the pointer to the initialized timer we obtained in the previous call.
As second input, it will receive the address of the Interrupt Service Routine function, which we will call onTimer. We will check its implementation below.
As third argument, we need to pass a Boolean value indicating if the interrupt to be generated is edge (true) or level (false). You can check the difference about them here or here. We will pass the value true so the generated interrupt is of edge type.
timerAttachInterrupt(timer, &onTimer, true);
After this we need to set the counter value that will trigger the interrupt. We do this with a call to the timerAlarmWrite function.
As first input, we will pass the pointer to the hw_timer_t variable, which we have stored before.
As second parameter, we need to specify the counter value that will trigger the interrupt. We will be assuming that an interrupt should be triggered every 10 seconds. Since with the prescaler used we are specifying the value in microseconds, then we will pass 10000000. You can try with other values as long as you respect the DHT22 minimum sampling period.
As third and final argument, we need to pass a Boolean value indicating if the counter should reload automatically upon generating the interrupt. We will pass the value true, so it reloads and the timer keeps firing periodically.
timerAlarmWrite(timer, 10000000, true);
Finally, we enable the timer by calling the timerAlarmEnable function and passing as input the pointer to the timer.
timerAlarmEnable(timer);
The final setup function can be seen below.
void setup() { Serial.begin(115200); syncSemaphore = xSemaphoreCreateBinary(); dht.setup(27); timer = timerBegin(0, 80, true); timerAttachInterrupt(timer, &onTimer, true); timerAlarmWrite(timer, 10000000, true); timerAlarmEnable(timer); }
Main loop
Moving on to the main loop, we will handle there the sensor measurements. Nonetheless, we are going to be relying on interrupts instead of using the Arduino delay function, so our approach is a little bit different from the previous tutorials about interacting with the DHT22.
So, the first thing we will do is trying to obtain the semaphore. If there is no unit available to take in the semaphore, then the main loop should block until it becomes available, thus freeing the CPU for the scheduler to assign to other tasks.
We will try to obtain the semaphore with a call to the xSemaphoreTake function, passing as first input the semaphore and as second the value portMAX_DELAY, which will ensure that task stays blocked indefinitely until the unit becomes available.
xSemaphoreTake(syncSemaphore, portMAX_DELAY);
After the execution passes this point, it means an interrupt has occurred and unblocked the main loop, which means it is time to get another temperature measurement from the DHT22. We do this by calling the getTemperature method of the dht object, which will return a float. We will print its value to the serial port.
float temperature = dht.getTemperature(); Serial.print("Temperature: "); Serial.println(temperature);
After this, the main loop will get back to the beginning and try to get the semaphore again, being locked until a new interrupt occurs. The full Arduino loop is shown below
void loop() { xSemaphoreTake(syncSemaphore, portMAX_DELAY); float temperature = dht.getTemperature(); Serial.print("Temperature: "); Serial.println(temperature); }
Interrupt Service Routine
To finalize, we need to write Interrupt Service Routine implementation. First, we need to recall from the previous post that we need to add the IRAM_ATTR attribute to its declaration, so it is placed in IRAM by the compiler.
In terms of implementation, it will be very simple. When the interrupt happens, we only want to unblock the Arduino main loop, so it executes the code to read from the sensor. As we have seen, the main loop is blocked on the semaphore, waiting indefinitely for a unit to be added to that semaphore.
So, the only thing we need to do is calling the xSemaphoreGiveFromISR function to add a unit to the semaphore, thus unblocking the main loop to execute a new iteration.
Note the “FromISR“ at the end of the function name, which indicates it is safe to call it from inside an Interrupt Service Routine. Inside an ISR, you should not use the xSemaphoreGive function, but always the xSemaphoreGiveFromISR function.
This function receives as first input the semaphore and as second an argument that we don’t need to use, so we can set it to NULL. You can read more about that second argument here.
void IRAM_ATTR onTimer() { xSemaphoreGiveFromISR(syncSemaphore, NULL); }
The full code
The final complete code can be seen below.
#include "DHTesp.h" DHTesp dht; hw_timer_t * timer; SemaphoreHandle_t syncSemaphore; void IRAM_ATTR onTimer() { xSemaphoreGiveFromISR(syncSemaphore, NULL); } void setup() { Serial.begin(115200); syncSemaphore = xSemaphoreCreateBinary(); dht.setup(27); timer = timerBegin(0, 80, true); timerAttachInterrupt(timer, &onTimer, true); timerAlarmWrite(timer, 10000000, true); timerAlarmEnable(timer); } void loop() { xSemaphoreTake(syncSemaphore, portMAX_DELAY); float temperature = dht.getTemperature(); Serial.print("Temperature: "); Serial.println(temperature); }
Testing the code
To test the code, simply compile it and upload it to the ESP32 using the Arduino core, assuming that the wiring between the device and the DHT22 sensor is already done and the power is on.
Once the procedure finishes, open the Arduino IDE serial monitor. It should start printing the measurements periodically, as shown below a figure 1.
Figure 1 – Output of the program, showing the DHT22 measurements.
One Reply to “ESP32 Arduino: Getting DHT22 sensor measurements with interrupts” | https://techtutorialsx.com/2018/09/15/esp32-arduino-getting-dht22-sensor-measurements-with-interrupts/ | CC-MAIN-2019-43 | refinedweb | 2,150 | 51.58 |
User account creation filtered due to spam.
When compiled with -Wall,
int foo (void) { if (9007199254740991.4999999999999999999999999999999995 ==
9007199254740991.5) return 1;}
does not complain about a missing return statement, showing that GCC is folding
the comparison to true. However, the LHS should round down.
With one less 9 GCC gets it correct.
I'm not convinced it's worthwhile to get perfect rounding, but Joseph wanted me
to file a report.
We probably don't even get it right for all cases with DECIMAL_DIG digits for
all long double formats (required by Annex F).
Here is the example requested on IRC showing that we don't get it right with
DECIMAL_DIG digits and IEEE quad long double. Tested with current mainline on
sparc-sun-solaris2.10 as an example target where long double is IEEE quad.
Different examples may be needed for 64-bit hosts where we use more than 160
bits (but still not enough). DECIMAL_DIG is 36 for IEEE quad.
int
f (void)
{
if (0.000000000000000000000000000000000691986517009071585211899336165210746L
== 0x0.000000000000000000000000000397cecf54a352d2cf251cf4c2d40ap+0L)
return 1;
}
/* Above decimal is
0x3.97cecf54a352d2cf251cf4c2d40900000000000000000000000000000006cfd...p-112,
which to within 0.5ulp (113-bit mantissa) should round up to above
hex value. Thus, there should be no warning with -O2 -Wall. */.
Confirmed.
Now we always require MPFR, it should be easy to fix this by using MPFR's facilities for parsing numbers from strings.
(In reply to comment #1)
> We probably don't even get it right for all cases with DECIMAL_DIG digits for
> all long double formats (required by Annex F).
(In reply to comment #2)
>.
I think to do DECIMAL_DIG digits correctly for long-double requires over 11,500 bits of precision.
I'm assuming above you're permitted to attach an exponent to your DECIMAL_DIG digits; that's how I read the definition of DECIMAL_DIG anyway. The exponents push up the required precision enormously.
(In reply to comment #5)
>.
Here's an example to prove this assertion. When compiled with GCC 4.1.2 or 4.1.3 with -std=c99, assuming a correctly-rounding libc (e.g. NetBSD's; glibc also seems to get this correct) you get the following output:
0x1.8p-147 0x1.4p-147 8.40779078594890243e-45
So not only is it rounded incorrectly, but the number it is rounded to, when converted back to decimal, does not even match the input number in the first digit.
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
float f1 = 7.7071415537864938e-45;
float f2 = strtof ("7.7071415537864938e-45", NULL);
printf( "%a %a %0.18g\n", f1, f2, f1);
return 0;
}?
Maybe we use mpft_t at the tree level and convert to REAL_VALUE_TYPE when converting over to rtl.
Another option would be to put an mpfr_t inside struct real_value instead of all those bitfields.
Subject: Re: real.c rounding not perfect
On Mon, 20 Apr 2009, ghazi at gcc dot gnu dot org wrote:
>?
I sort of imagine that real.c should provide a wrapper layer that checks
whether the real number is decimal (in which case it calls the dfp.c code
which uses libdecnumber) or binary (in which case it uses MPFR, taking
care to use the correct precision, range, subnormal handling etc. for the
type in question) or split-binary (IBM long double) in which case it
should use more complicated GCC-local code that ultimately uses MPFR
underneath (bug 19779, bug 26374) - certainly the implementation should
avoid causing problems for future split-binary folding implementation.
(A base class from which implementations for decimal, binary and
split-binary derive, if you wish.) I don't see any need for GCC to have
its own implementation of the binary arithmetic - but it will need its own
encoding/decoding support for all the many supported formats.
I also wonder if the code for handling target formats and arithmetic on
them should in principle be a library shared with GDB (which presently
uses host floating-point arithmetic) but wouldn't like to try to persuade
GDB to require MPFR.
> Another option would be to put an mpfr_t inside struct real_value instead of
> all those bitfields.
You'd need to deal with the memory allocation/deallocation requirements,
the MPFR allocation model is rather different from the fixed-size
structures presently used.
I discovered two other examples of incorrect rounding in gcc:
1) 0.500000000000000166533453693773481063544750213623046875
2) 1.50000000000000011102230246251565404236316680908203125
These are both halfway cases. Example 1 has bit 53 equal to 1, so it should round up; gcc rounds down. Example 2 has bit 53 equal to 0, so it should round down; gcc rounds up.
I've written an article describing these examples in more detail: .
BTW, why doesn't gcc use David Gay's dtoa.c () for correct rounding?
Subject: Re: real.c rounding not perfect
On Fri, 4 Jun 2010, exploringbinary at gmail dot com wrote:
> BTW, why doesn't gcc use David Gay's dtoa.c ()
> for correct rounding?
Anything using the host's "double" or other floating-point types is
fundamentally unsuitable for GCC floating-point handling, since GCC needs
to handle many different floating-point formats for different targets on a
single host, including target formats with no corresponding host format.
The correct solution for GCC is well-established: use GNU MPFR for
decimal-to-binary conversions (and for other arithmetic), which also makes
it much more straightforward to handle the discontiguous mantissa cases of
IBM long double.
From:
---.
---
(In reply to comment #11)
>.
Alternatively, you can write code based on MPFR without using the ternary value. The algorithm would be:
1. Round to the target precision.
2. If the result is in the subnormal range (this can be detected by looking at the exponent of the result), then deduce the "real" precision from the exponent, and recompute the result in this precision directly.
Given that the correct MPFR isn't widely available, is that
possible to fix rounding in real.c?
Otherwise, how about taking code from the glibc implementation of strtof/strtod/strtold? Code in strtod was recently fixed. I don't know about strtold...
The glibc code is pretty complicated (using glibc's copies of mpn_*
low-level GMP functions for multiple-precision arithmetic) and entangled
with other bits of glibc (it needs to handle things such as locales /
thousands grouping characters, which are not relevant to GCC). And of
course there is no guarantee that the host has any floating-point type
corresponding to the required type on the target. Even working around the
absence of a reliable ternary value in some supported MPFR versions, using
MPFR for this would be much simpler than adapting the glibc code for use
in GCC - it's the natural thing to do, given the use of MPFR for built-in
function evaluation in GCC. (MPFR should also be used to replace
real_sqrt - real.c doesn't use enough bits internally to get a correctly
rounded sqrt result in all cases. fold_builtin_sqrt already does use
mpfr_sqrt.)
I can no longer find any conversions that gcc (I'm using 4.6.3) performs incorrectly, including the examples cited above. It doesn't look like there has been any related code changes in real.c though. Is this an architecture-dependent bug perhaps? When I first tested this three years ago I was on a 32-bit Intel machine; now I'm on a 64-bit machine. Thanks.
I confirm that it is an architecture-dependent bug. I can't reproduce any error with your test program on even with old gcc versions, from an x86_64 machine.
With GCC 4.3.5 (Debian 4.3.5-4) on Debian 6.0.7 (squeeze):
0.500000000000000166533453693773481063544750213623046875
Correct = 0x1.0000000000002p-1
gcc = 0x1.000000000000cp-31
1.50000000000000011102230246251565404236316680908203125
Correct = 0x1.8p+0
gcc = 0x1.8p+0
strtod = 0x1.8p+0
9007199254740991.4999999999999999999999999999999995
Correct = 0x1.fffffffffffffp+52
gcc = 0x1.fffffffffffffp+52
strtod = 0x1.fffffffffffffp+52
If I add the -m32 option, I get the same output except:
and the -mfpmath=387 option doesn't change anything.
On a Debian 6.0.7 (squeeze) 32-bit machine, with GCC 4.3.5 (Debian 4.3.5-4) and GCC 4.4.5 (Debian 4.4.5-8):
0.500000000000000166533453693773481063544750213623046875
Correct = 0x1.0000000000002p-1
gcc = 0x1.000000000000
1.50000000000000011102230246251565404236316680908203125
Correct = 0x1.8p+0
gcc = 0x1.8000000000001p+0
strtod = 0x1.8p+0
9007199254740991.4999999999999999999999999999999995
Correct = 0x1.fffffffffffffp+52
gcc = 0x1p+53
strtod = 0x1.fffffffffffffp+52
Since the computations are done at compile time, it is the architecture of the GCC code that matters, not the one of the generated code (so that using -m32 on a 64-bit machine doesn't have any effect on the GCC side).
(In reply to Vincent Lefèvre from comment #17)
> I confirm that it is an architecture-dependent bug. I can't reproduce any
> error with your test program on
>-
> glibc/ even with old gcc versions, from an x86_64 machine.
Thanks for the quick and thorough response!
Besides those testcases, I wrote an automated testcase to test gcc conversions; I have not found any incorrect ones. (In a loop, the program generates a random decimal string and puts it in a small C program that it compiles and executes on the fly. The generated C program then checks the result against David Gay's strtod(), the result of which I also coded into the program.)
As an aside, I see that you must have an older version of glibc; I have been testing strtod() in glibc 2.18 and I no longer get incorrect conversions. According to this,, it was fixed in 2.17.
(I will be writing this up on my blog soon to reflect the new status of these problems.)
I've looked into this and found that real.c/real.h use a precision of SIGNIFICAND_BITS, which is dependent on an architecture-dependent value called HOST_BITS_PER_LONG. In addition, SIGNIFICAND_BITS limits the precision to 192 bits on a 64-bit system, and I was able to find an example of an incorrect conversion there: 5.0216813883093451685872615018317116712748411717802652598273e58.
(Please see my article for details.)
I suppose that for any 54-bit[*] odd integer multiplied by a power of two with a large exponent (in absolute value), some decimal numbers close to this value will be affected.
[*] 54-bit for double, 25-bit for float, 65-bit for long double if x87 extended precision, 114-bit for long double if quadruple precision.
Author: jsm28
Date: Wed Nov 20 14:34:49 2013
New Revision: 205119
URL:
Log:
PR middle-end/21718
* real.c: Remove comment about decimal string conversion and
rounding errors.
(real_from_string): Use MPFR to convert nonzero decimal constant
to REAL_VALUE_TYPE.
testsuite:
* gcc.dg/float-exact-1.c: New test.
Added:
trunk/gcc/testsuite/gcc.dg/float-exact-1.c
Modified:
trunk/gcc/ChangeLog
trunk/gcc/real.c
trunk/gcc/testsuite/ChangeLog
Fixed for 4.9, at least if MPFR 3.1.1p2 or later is used (I don't know if the bug with mpfr_strtofr ternary value in older versions would affect the way mpfr_strtofr is now used in GCC, rounding towards zero with a higher precision and using the ternary value to set a sticky bit if the value was inexact, so that GCC's final rounding to the destination format is correct).
*** Bug 55145 has been marked as a duplicate of this bug. ***
I don't understand -- won't "mpfr_init2 (m, SIGNIFICAND_BITS);" have the same problem? Don't we need to change the computation of SIGNIFICAND_BITS in real.h?
Rounding to zero and setting a sticky bit based on inexactness works as long as the internal precision has at least two more bits than the final precision for which correctly rounded results are required. (This is what Boldo and Melquiond call rounding to odd in their fma algorithm, but the basic idea is much older than that.)
So the problem was never lack of precision -- just lack of stickiness. We were seeing higher precision making more conversions work (e.g., more worked with 192 bits than 160), but ultimately, stickiness was required to augment the limited precision. This will fix the architecture-dependent aspect of the bug as well.
I suppose you could have fixed the existing algorithm, although admittedly using MPFR is a simple and elegant solution. BTW, why was this fixed now, and not when an MPFR based solution was discussed four/five/six years ago? You certainly could have done it a few weeks ago, before I wrote all those articles :).
No, it was lack of precision. MPFR may need to use many more than SIGNIFICAND_BITS internally in order to compute a result that is correctly rounded towards zero to SIGNIFICAND_BITS (plus the ternary value), but GCC doesn't need to know or care how many bits MPFR uses; all it needs to know is that SIGNIFICAND_BITS is at least 2 more than the largest number in any supported binary floating-point format.
Having got the C11 features I wanted into 4.9 I then looked for bugs on my list of conformance issues that would be more suited to development stage 1 (ends today) than to subsequent stabilization stages. This was one.
Yes, that makes sense. I originally (mistakingly) thought that SIGNIFICAND_BITS was the intermediate precision for mpfr_strtofr(), like it was for gcc's original algorithm. Then I talked myself out of the "needs more precision" argument based on my interpretation of your response.
So then is the round to zero/stickiness just to avoid double rounding (as opposed to using round to nearest/no stickiness)?
BTW, I'm testing out the code. I tried a test I found in float-exact-1.c: it's the literal assigned to the double named d1c. When I run it (on a 64-bit system) with the fix I get 0x0.0000000000001p-1022; strtod() (David Gay's and glibc's) gives me 0. Also, before the fix, I get "warning: floating constant truncated to zero" and the conversion is correct; after the fix, no message, and an incorrect conversion.
GCC supports lots of different floating-point formats, not all IEEE, including variants such as "no subnormals" or "no infinities". Using round to zero/stickiness allows the existing GCC code that knows about the peculiarities of these formats to do the final rounding for them, instead of needing to replicate that logic specially for string conversions in order to use mpfr_subnormalize. Simple use of round-to-nearest without setting exponent limits and mpfr_subnormalize would indeed result in problems with double rounding.
d1c is slightly above half the least subnormal (d1a is slightly below, d1b is exactly half the least subnormal), so should give the least subnormal as result, which is what I get with current glibc. Note that if GCC is running with an MPFR version before 3.1.1p2, it's possible MPFR bugs will cause incorrect results (I don't know if the bug in question affects the way GCC uses MPFR). (On 32-bit i686-pc-linux-gnu you'd have issues with excess precision meaning constants are first interpreted as long double then later rounded to their semantic type, hence the FLT_EVAL_METHOD conditionals.)
I was running with the unfixed glibc, so that mislead me into thinking that, since it matched David Gay's output, it was right. But the fixed gcc and fixed glibc both get it right (0x0.0000000000001p-1022), and David Gay's strtod() gets it wrong (0)!
(That's one nasty input -- bit position 1075 is a 1, and the next 1 after that is at bit position 3582.)
Thanks for enlightening me. Now it's time to bug Dave Gay...
Gay's strtod() bug was reported and fixed: | https://gcc.gnu.org/bugzilla/show_bug.cgi?id=21718 | CC-MAIN-2017-34 | refinedweb | 2,608 | 66.23 |
Delete?
Table of Contents
Concurnas provides a mechanism where by a local variable can be removed from scope via the
del keyword. This has the additional side effect for non array Object types of invoking the delete method on them if one is defined. This is incredibly useful for performing resource management and is used in supporting the Concurnas gpu parallel computing as well as the off heap memory frameworks. Additionally Concurnas offers first class citizen support for the
del keyword when applied to maps and lists.
Removing values from maps?
Values can be removed from a map as follows:
mymap = {taste -> 10} del mymap['taste']//value corresponding to 'taste' is deleted
This saves us from having to write:
mymap.remove('taste')
Removing values from lists?
Values can be removed from a list as follows:
mylist = [5, 34, 2, 5, 11] del mylist[1]//the second value, '34', is removed from the list
This saves us from having to write:
mylist.remove(1)
Deleting Objects?
Calling the delete operator on a variable in Concurnas does two things:
The variable is removed from scope
The delete method is called on the object pointed to by the variable
Here is an example of this in action:
deleteCalled = false class DeleteMe{ override delete() void => deleteCalled = true } todel = DeleteMe() del todel //todel is now out of scope and cannot be referenced assert deleteCalled
Overriding the
delete method as above will allows us to implement functionality to be invoked upon the
del operator being applied to the object (or the
delete method being called directly). For instance, closing or otherwise managing resources.
@DeleteOnUnusedReturn Annotation?
The
@com.concurnas.lang.DeleteOnUnusedReturn annotation can be used to denote that the delete method should be invoked on the return value of a method or function if it is unused by the caller (i.e. popped off the stack). This affords a degree of flexibility in API design around objects which require resource management as one does not need to worry about the caller correctly dealing with a return value which they may to be optional. This is used heavily in the support for Concurnas parallel GPU computing framework and is useful with off heap memory management. For example:
from com.concurnas.lang import DeleteOnUnusedReturn @DeleteOnUnusedReturn class ClassWithResource{ def delete(){ //... } } def doWorkAndGetClassWithResouce(){ ret = ClassWithResource() //do some work here ret } doWorkAndGetClassWithResouce()//the delete method on the returned ClassWithResource object will be called
In the above example the delete method will be called on the returned
ClassWithResource object as it is unused by the caller. If it were used by the caller (e.g. assigned to a value, or used in as part of a nested method or function invocation) then the delete method will not be called.
Notes:
The delete method will not be called if the returned object is null.
If a method is decorated with
@DeleteOnUnusedReturnthen instances in all subclasses will also inherit this decoration.
The annotation is not applied to refs to methods or functions which have been decorated with it.
Additionally, for refs which are returned from a function or method and are immediately extracted by the caller, these will also have the delete method called on the ref if the
@DeleteOnUnusedReturn annotation is used. For example:
from com.concurnas.lang import DeleteOnUnusedReturn @DeleteOnUnusedReturn class ClassWithResource{ def delete(){ //... } } def doWorkAndGetClassWithResouce(){ ret = ClassWithResource() //do some work here ret://returned as a local ref } got = doWorkAndGetClassWithResouce()//the delete method on the returned Local ref holding the ClassWithResource object will be called (but not on the ClassWithResource itself) | https://concurnas.com/docs/delete.html | CC-MAIN-2022-21 | refinedweb | 588 | 50.16 |
A tiny library for sending OTP and SMS via MSG91
Project description
sms91-sms-otp
This a tiny library for sending sms and otp using sms91.
Basic usage
You could simply send a message, OTP and verify otp.
from sms91.sendsms import SMSClient cli = SMSClient() #initializing app # get this key from key = "31*****************P1" # sender can be of anything in the length of 6. in inbox it will be "AX-TESTIN". sender = "TESTIN" # [optional] no.of char in otp. minimum 4 char otp_size = 6 cli.initialize(key,sender,otp_size) #send sms #phno list should have 10 digit indian numbers. phno = ["98XXXXXXX0,'77XXXXXXXX5"] message = "hi! this is a test message" cli.sendMessgae(message,phno) #send otp phno = ["98XXXXXXX0,'77XXXXXXXX5"] cli.sendOtp(phno)
Yes, it is as simple as this.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/msg91-sms-otp/ | CC-MAIN-2021-49 | refinedweb | 157 | 69.89 |
etc. are halted by the webserver with a message 'The type of page is not served', anybody with console access to the system can still open and read the database connection strings, which might contain the password to the database in an unencrypted manner.
After a great deal of search in MSDN and other sites, I found a simple way to encrypt strings using a minimum of 8 character string (8 characters should be okay since even MSN Hotmail recommends a minimum of 8 character passwords for all accounts). Sections have been taken from ASPAlliance example but the method has been kept as a static method for the simple reason that you need not create object for every encryption and decryption strategy.
The attached example makes use of DESCryptoServiceProvider that is available in System.Security.Cryptography namespace.
DESCryptoServiceProvider
System.Security.Cryptography
For example sake, I have given both the key and the encrypted string in web.config. But for security reasons, it would be advisable to keep the key elsewhere in the file system and read the key dynamically from this file from the specified location. Additional care has to be taken that the place where we store the key is accessible only to System Administrators and other authorized personnel. With this strategy and trick in place, the database connection string could be made relatively safe for a particular web application.
Include the following two lines in web.config:
<add key="cKey" value="LavanyaDeepak"/>
<add key="cDb" value="C0AHny7FDFewTPE7eTp5RA=="/>
To any of your test applications, unzip the files in the archive (Cryptography.cs and Test.Aspx and Test.Cs). Include them in a project in Visual Studio .NET. Build the application and run test.aspx from the web browser.
I hope the above article would be very useful for .NET developers worldwide to make effective and secure use of database connection strings that are put in Web.Config. Many thanks to developers whose ideas and pieces of code have been helping me out in drafting these static. | http://www.codeproject.com/Articles/3559/Enhanced-and-Secure-Connection-Strings-in-Web-Conf?fid=14143&df=90&mpp=10&sort=Position&spc=None&tid=3611261 | CC-MAIN-2015-48 | refinedweb | 337 | 53.31 |
With monday.com’s project management tool, you can see what everyone on your team is working in a single glance. Its intuitive dashboards are customizable, so you can create systems that work for you.
public class test extends JFrame{ int [] people ; public test(){ Person [] people = new Person[4]; float grades[] = {90,90,91,92,92,93,93,94,94,95,96,96,95,95,95}; ArrayList<Person> list = new ArrayList<Person>(4); list.add(new Teacher("Teacher ", "One", 420)); list.add(new Student("Student ", "One", 11111, grades)); list.add(new Teacher("Teacher ", "Two", 421)); list.add(new Student("Student ", "Two", 22222, grades)); setLayout(new GridLayout(4, 2, 5, 5)); for (int i=0; i< list.size(); ++i){ add(new JTextField("" + list.get(i)), BorderLayout.WEST); } } public static void main(String[] args){ test frame = new test(); frame.setTitle("Listing"); frame.setSize(400, 400); frame.setLocationRelativeTo(null); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); //allow x to exit frame.setVisible(true); } }
Are you are experiencing a similar issue? Get a personalized answer when you ask a related question.
Have a better answer? Share it in a comment.
Of course you won't be able to do that for the Person class, as they are all people.
(Iterator "it" = list.iterator())
. . . which is a rather excellent point.
The revolutionary project management tool is here! Plan visually with a single glance and make sure your projects get done. | https://www.experts-exchange.com/questions/28301471/Java-arrayList.html | CC-MAIN-2018-13 | refinedweb | 232 | 53.98 |
zap-automation
This is a library utilising the ZAP API, with pre configured steps to run a spider attack and then an active scan.
Run the unit tests for the libraryRun the unit tests for the library
sbt test
Adding to your buildAdding to your build
In your SBT build add:
resolvers += Resolver.bintrayRepo("hmrc", "releases") libraryDependencies += "uk.gov.hmrc" %% "zap-automation" % "x.x.x"
How to use the libraryHow to use the library
- You will likely want to have a way to run some tests from the UI of the service/application you are testing, so that ZAP can learn about the URLs it needs to test.
- You will also need to install and setup ZAP either locally, or on your build machine in order to use this library.
- You will need a way to run the library, we have done this by using this file:
package utils.Support import uk.gov.hmrc.{ZapAlertFilter, ZapTest} class ZapRunner extends ZapTest{ /** * zapBaseUrl is a required field - you'll need to set it in this file, for your project to compile. * It will rarely need to be changed. We've included it as an overridable field * for flexibility and just in case. */ override val zapBaseUrl: String = "xxx" /** * testUrl is a required field - you'll need to set it in this file, for your project to compile. * It needs to be the URL of the start page of your application (not just localhost:port). */ override val testUrl: String = "xxx" /** * alertsBaseUrl is not a required field. This is the url that the zap-automation library * will use to filter out the alerts that are shown to you. Note that while Zap is doing * testing, it is likely to find alerts from other services that you don't own - for example * from logging in, therefore we recommend that you set this to be the base url for the * service you are interested in. */ override val alertsBaseUrl: String = "xxx" /** * contextBaseUrl is not a required field. This url is added as the base url to your * context. * A context is a construct in Zap that limits the scope of any attacks run to a * particular domain (this doesn't mean that Zap won't find alerts on other services during the * browser test run). * This would usually be the base url of your service - eg.* */ override val contextBaseUrl: String = "xxx.*" /** * desiredTechnologyNames is not a required field. We recommend you don't change this * field, as we've made basic choices for the platform. We made it overridable just in case * your service differs from the standards of the Platform. * * The technologies that you put here will limit the amount of checks that ZAP will do to * just the technologies that are relevant. The default technologies are set to * "OS,OS.Linux,Language,Language.Xml,SCM,SCM.Git". */ //override val desiredTechnologyNames: String = "" /** * routesToBeIgnoredFromContext is not a required field. You may set this if you have any routes * that are part of your application, but you do not want tested. For example, if you had any * test-only routes, you could force Zap not to test them by adding them in here as a regex. */ //override val routeToBeIgnoredFromContext: String = "xxx" /** * If, when you run the Zap tests, you find alerts that you have investigated and don't see as a problem * you can filter them out using this code, on the cweid and the url that the alert was found on. * The CWE ID is a Common Weakness Enumeration (), you can * find this by looking at the alert output from your tests. url can either be a normal string or a regex * (for example you may wish to use a regex where a url includes an ID that differs with each test run) * * As dots '.' and question marks '?' are used to build both regular expressions and URLs you need to be careful * when instiating the filter that includes them (make sure that you escape them if they are not intended to be a * regex quantifier). * For example: * www\.google\.com/search\?q=blah will match */ val alertToBeIgnored1: ZapAlertFilter = ZapAlertFilter(cweid = "16", url = "xxx") override val alertsToIgnore: List[ZapAlertFilter] = List(alertToBeIgnored1) /** * Not a required field. You should set this to be true if you are testing an API. * By default this assumes you are testing a UI and therefore is defaulted to be false. */ //override val testingAnApi: Boolean = false }
- You’ll need to be able to create a new browser profile for ZAP, and switch your browser to this new profile. We’ve done this using this code:
def createZapDriver: WebDriver = { val profile: FirefoxProfile = new FirefoxProfile profile.setAcceptUntrustedCertificates(true) profile.setPreference("network.proxy.type", 1) profile.setPreference("network.proxy.http", "localhost") profile.setPreference("network.proxy.http_port", 11000) profile.setPreference("network.proxy.share_proxy_settings", true) profile.setPreference("network.proxy.no_proxies_on", "") val options: FirefoxOptions = new FirefoxOptions options.setLegacy(true) val firefoxCapabilities: DesiredCapabilities = new DesiredCapabilities() firefoxCapabilities.setCapability(FirefoxDriver.PROFILE, profile) firefoxCapabilities.setCapability("marionette", false) options.addDesiredCapabilities(firefoxCapabilities) val capabilities = options.toDesiredCapabilities println("Running ZAP Firefox Driver") new FirefoxDriver(capabilities) } def createZapChromeDriver: WebDriver = { var options = new ChromeOptions() var capabilities = DesiredCapabilities.chrome() options.addArguments("test-type") options.addArguments("--proxy-server=") capabilities.setCapability(ChromeOptions.CAPABILITY, options) val driver = new ChromeDriver(capabilities) val caps = driver.getCapabilities val browserName = caps.getBrowserName val browserVersion = caps.getVersion println( "Browser name & version: "+ browserName+" "+browserVersion) driver }
Run the ZAP tests on your machineRun the ZAP tests on your machine
- Start your application locally
- Start ZAP from the command line:
- Change directory to where ZAP is installed (default Mac installation is in the root Applications directory: /Applications)
- Run this command: ZAP\ 2.6.0.app/Contents/Java/zap.sh -daemon -config api.disablekey=true -port 11000
- Run your acceptance tests pointing at your new ZAP profile. Our command to do this looks like this:
sbt -Dbrowser=zap -Denvironment=Local ‘test-only Suites.RunZapTests’
You need to make sure you run enough UI tests to hit all the urls that you want to run your ZAP tests on. This may be all of your tests or a subset, it’s up to you. Run the penetration tests (using your new ZapRunner file) - our command to do this looks like this:
sbt "testOnly utils.Support.ZapRunner"
How do we read the output of the tests?How do we read the output of the tests?
Green build - no html report is created as there are no alerts to give you more information about. If you are surprised about getting a green build, if may be that you need to adjust the variables you are passing to the library, or you may not have run enough UI tests proxying through ZAP. If you have doubts please contact us. Red build - alerts are printed in the console and a html report is created on the workspace. Note that the report will be deleted each time a new build is started.
Supported browsersSupported browsers
We have tested the library using Chrome and Firefox.
LicenseLicense
This code is open source software licensed under the Apache 2.0 License. | https://index.scala-lang.org/hmrc/zap-automation/zap-automation/0.19.0-2-g5ee7be8?target=_2.11 | CC-MAIN-2020-40 | refinedweb | 1,167 | 55.95 |
."
Not news (Score:5, Interesting)
~~~~
Re:Not news (Score:5, Insightful)
Some presumably do deface the pages, but I don't find it terribly surprising that somebody that primarily uses wikipedia would be more reliable than somebody that spends most of their time building a reputation. There's just so much more incentive to fix it if you are using it. That isn't to say that named contributers are inherently bad.
Re: (Score:2, Interesting)
~~~~
Re: (Score:2)
Re: (Score:2)
Although I do believe grammar is becoming a specialty skill, I'm also glad that such "specialists" regularly edit the submissions. Forum trolls (really, let's keep the word "trolls" -- "poo-flinging monkeys" may be accurate, but is a slight on our simian bre
Re: (Score:2, Insightful)
Re:Of course... (Score:4, Informative)
Happened to me once. I noticed a list of "movies about the Mafia" was full of titles just about crime, so I deleted those I knew were not Mafia related. Then later I see they've been reverted by some asshole (later I worked out it was a bot) that had decided I was a vandal (as stated in the comment).
Re:Of course... (Score:4, Informative)
The bots are not infallible. They do catch a ton of the really ridiculous crap that people add to Wikipedia, but they miss some, and have a few false positives as well.
If you are not some random vandal, one thing that you could (actually, should do, as I strongly recommend it) is that you specify why you remove content in the "Edit summary" box. If you say, "Removing movies unrelated to mafia", the bot leaves you alone, or if someone sees the bot revert your removal for an invalid reasons, they can always revert the bot. I've done that myself many a time.
Remember: Humans watch the Recent changes [wikipedia.org] feed too. If you provide a reason for the human, the human may leave you alone. Otherwise, you're just a random IP that is removing content for no reason whatsoever, which happens all day, every day. ~~~~
Re: (Score:2)
Re: (Score:2)
Well, did you provide a reason why you deleted the content?
Yes. That's why I was pissed off, as well as insulted at being labelled a "vandal" when I was actually correcting errors. This happened within minutes of my edit, so I guess it was totally automated. Which is even more annoying, being reverted by a bot set loose by some smarmy teenager (I looked up his profile).
Re:Of course... (Score:5, Insightful)
Re: (Score:3, Interesting)
Yes, I did try to communicate with him, and no I wasn't abusive. He ignored me.
I'm not "way too angry". I only mentioned it at all in answer to a direct question. At the time I was pissed because I'd spent time and thought "giving back" to the community only to have it deleted and be insulted for my trouble. I got over it a few minutes later and haven't mentioned it to anyone till now. Now I know Wikipedia is infested with self-important twats who like to play power games, so I don't waste my
Re: (Score:3, Informative)
You realise that you are calling me a liar?
had his mother raped. Guy Number Three had his ancestors' graves desecrated,
I told you exactly what happened. I didn't exaggerate or claim malice, just careless arrogance. Some twat sent a bot to delete stuff without bothering to check what it was, and ignored my attempts to discuss it with him. Fuck you if you don't believe me.
Re: (Score:2)
I did not call you a liar. Read my post again. Yes, I said I'm not inclined to believe you, but that does not equate me calling you a liar. Basically, I'm saying you may well be right, but your claim is pretty outlandish. As I mentioned, these claims always come up in wikipedia discussions on slashdot, and I've yet to see evidence for them. Naturally, my inclination to believe these claims decerese for every time I see them without any evidence to back them up.
Have a pint and a smoke and don't take th
Re: (Score:3, Interesting)
Yes it does. Weasel words.
these claims always come up in wikipedia discussions on slashdot, and I've yet to see evidence for them
I don't "always make these claims". I'm talking, for the first time, about something that happened to me, personally. So if you don't believe me, you are calling me, personally, a liar. Your smug response is exactly what exacerbates these situations.
If these "claims" are not true, why
Brilliant (Score:3, Insightful)
Why does your one data point override the dozens of data points you've seen other people post? And the poster you're responding to is obviously a liar, since his experience is different than yours.
Anecdote is not the singular of data, and it's pretty clear that there are lots of folks out there who've seen petty, ridiculous pissing contents by twits. But, of course, the important thing is to blindly defend the glorious Wikipedia from criticism, right?
Cue the mantra: Anyone can edi
Re: (Score:3, Interesting)
Re:Of course... (Score:4, Interesting)
Really? I just checked the half dozen edits Ive made over the past six months. The 5 trivial ones are all intact, and the extensive one (transportation in the town I live in) was edited & rearranged, for the better.
Perhaps the quality of the edits is important.
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Re: (Score:2, Interesting)
What's the motivation to register? (Score:3, Interesting)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Additionally, as an anonymous editor, you can't edit semi-protected pages, but you cannot upload images either. You cannot move pages either, nor create pages in the article namespace. You can still create talk pages, but if you want to create an article, you
Blind Wikipedians? (Score:2, Informative)
Re: (Score:3, Interesting)
(By the way, in US english, commas and periods should ALWAYS go inside the q
Re:Not news (Score:5, Funny)
Re: (Score:3, Funny)
Re:Not news (Score:5, Interesting)
Every time I have done so, it has been rolled back within minutes, which I assume means that registered editors are watching for anonymous changes and removing them no matter what. As a result, my current attitude towards Wiki editors can be summarized with the words "fuck you."
Hopefully, some of those pricks will read this article and change their attitude, but I doubt it.
Re: (Score:3, Funny)
[citation needed]
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Ah, but that's when it is the most fun.
:-D
Re: (Score:2)
The problem is that not all of us have the time to be monitoring the changes we do to the Wikipedia. I have made some changes as anonymous some time. But a lot of times the article is locked so I can not edit it myself and what I do is just write the comment, correction or idea in the discussion page. If the article is as "important" to have be locked then it ha
Re: (Score:2)
(By the way, in US english, commas and periods should ALWAYS go inside the quotes.)
No they "don't".
Re: (Score:3, Interesting)
Yes, but for a strange reason. American newspapermen couldn't be bothered with the nuances of the English language, even though some of the nuances were linguistically valuable. In this case, placement of punctuation inside or outside quotation marks relays information about the quote. I'm American so, you know, screw the redcoats and all that, but I think our stateside grammarians dropped the ball on this one.
Re:Not news (Score:5, Interesting)
The American style is inconsistent. We put question marks and exclamation points inside the quotes only if it is part of the quoted content. Note the difference in these sentences:
Similarly, we treat punctuation with regards to parentheses in this way. A period goes inside the parentheses only if it is a complete sentence.
However, the American style says to put periods and commas inside the quotation marks in all cases. I would argue that American usage is simply wrong here, as it is thoroughly inconsistent with all other punctuation combinations. Thus, I make it a point (despite being American) to ignore it and follow the much more rational British rules. The rules for punctuation should logically be determined by whether the punctuation is truly part of what is being quoted or not.
For example, if I use a term in an ironic way, I might put that in quotes.In that context, if that were at the beginning or end of a sentence, punctuation should logically go outside the quotes.
Putting it inside the quotes implies that you are quoting the period as though "nice girl" were a complete sentence or some reasonable facsimile thereof. It just doesn't make sense. If a wookiee can live on Endor....
The same holds true if you are using quotes to define new terminology (though this is less frequently done with quotes these days and more frequently done with boldface text or other typographic conventions).
This stands in contrast with cases in which punctuation would logically be part of the quote.
But I digress.
Re:Not news (Score:4, Interesting)
The graph they show is practically meaningless.
The rest of the study may possibly have some good stuff in it, but the incompetence that has gone into this graph leaves me with grave reservations. I notice, for example, that the main body on p. 15 refers to "FIGURE 2 ABOUT HERE". I also notice that there is no figure 2. I don't think this study has a shred of credibility.
Re: (Score:2)
I don't find it surprising either. It sounds like the expected results to me. Basically, when a person first discovers it's possible to edit something on wikipedia, and wants to do so, they probably don't have an account. Therefore their first (perhaps even first few) submissions or edits are likely to be anonymous. These new users fall into two basic categories: Well-meaning individuals, and idiots who deface wikipedia. The idiots continue to submit anonymously because a named account doesn't last lon
ha! (Score:5, Funny)
... and more reliable than Slashdot summaries (Score:5, Informative)
That study was published by Dartmouth College. Dartmouth University is an unrelated entity in Canada.
...Dartmouth? (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
Or... (Score:4, Insightful)
Re: (Score:2, Interesting)
It would be interesting to try that study with a commercial entity.
Yo, mod point tanks (Score:3, Interesting)
At what point do these posters become registered? (Score:5, Interesting)
Re:At what point do these posters become registere (Score:4, Interesting)
If I'm not already logged in and see a minor problem in an article, I'll usually fix anonymously. Not worth the time to log in.
Re: (Score:2)
Re: (Score:2)
But yeah, I don't even do typo fixes on partially locked or "this article is disputed" pages because I figure it'll just get lost in the revert war or undone by a knee jerk "OMG an edit it must be vandalism on my precious page!" reaction.
Even if they do... (Score:2)
Re:At what point do these posters become registere (Score:5, Informative)
A - 0
B - 0
C - 0
D - 2
E - 0
F - 0
G - 0
H - 0
I - 0
J - 0
In short: most of the people registering accounts had made no edits prior to registering. It's common knowledge on Wikipedia that something like half of all accounts registered never make any edits at all, so this makes sense.
Re: (Score:3, Informative)
How many of those new users you selected have made edits since registering? I think many of those you sampled will never edit, period. Not before, not after. To mak
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
Can't speak for the anonymous posters at Wikipedia or even any of the ACs here but for myself. The validity of a statement is the greatest when it can stand on its on without the benefit or detriment allocated to the statement by its maker. Posting as AC often draws extra scrutiny as to the validity and worth of the posting. If a moderator perceivers value in it worth their mod point application then
Re: (Score:2)
Lost Passwords? (Score:4, Interesting)
Re: (Score:2)
I read it on wikipedia (Score:4, Funny)
Even better, the number of these "Good Samaritans" has tripled in the last six months!
well duh (Score:5, Insightful)
Re: (Score:2, Funny)
I was going to agree with you, then I noticed I've broken the 5,000 post mark on Slashdot. So apparently I do have the time and can't make fun of the wikipedians.
Re: (Score:2)
Re:well duh (Score:4, Funny)
Digg users? Come on now, that's just being unfair. They have no "darlings" in any sense of the word.
Re:well duh (Score:5, Insightful)
Luckily, not everyone views volunteering as a waste of time, or indicative of fanaticism. Many people contribute to Wikipedia because they value information and education. They enjoy challenging their mind. This is their hobby (instead of Sudoku and crossword puzzles), or perhaps even their passion. This is their way of contributing to a greater good. You are more than welcome to ignore the free spread of information and impromptu musical gatherings, and focus on all the important things in your adult life. However it is rather unfortunate that you cannot see the value in what other communities achieve when they willingly devote time from their busy schedules to a communal project.
Re: (Score:3, Funny)
Depends, (Score:3, Funny)
Re: (Score:3, Insightful)
Wikkipedia has had it's share of experts that amount to people lieing about their credentials to fix a page in a certain way and keep the tones of pages agenda driven. This is especially true for anything political or even emotional. I don't know how many times I h
True on slashdot too (Score:5, Funny)
I used to run a small site... (Score:2)
Maybe I was below the traffic threshold for trolls to show up.
Re: (Score:2)
FUCK OYU
Oyu? (Score:5, Funny)
Questionable methodology, questionable results (Score:4, Interesting)
This metric makes sense if the wiki is new, and most of the edits are adding new content. The metric is virtually meaningless if the wiki is established, and most of the edits by a group of people are vandalism or reverts - people fixing the article will have a lower score by virtue of the fact that they are making the same edit (more or less) over and over again.
Normally, you'd expect that the more edits a user makes, the more trustworthy he is. If he were vandalizing, he wouldn't make more than a few before being blocked. If he's making hundreds, he should be considered more trust worthy (and have a higher retention rate) than if he's new. The results here show the exact opposite for anonymous users. In short, the methodology is flawed and the results are wrong.
Re: (Score:2)
"Vandalism : Revert : Vandalism : Revert" is counted such that Revert is credited with 100% retention and Vandalism is credited with 0% retention. What matters is which characters of an edit are preserved going forward, not if they were preserved in an immediate sense.
The "per-contribution" aspects refers to averaging/normalization.
Re: (Score:3, Interesting)
Road to hell paved with good intentions (Score:5, Interesting)
Unfortunately, not all edits which are good-intended actually contribute to the overall quality. Of course, edits which fix simple things like revert vandalism, fix a typo, update a number etc, are all good. But the rest pose a potential problem. First off, newcomers, while well-intented, simply do not know the way Wikipedia works. They may include unsourced or poorly sourced material, insert a POV without even realising it, piss off another editor by being careless (and thus start an edit war) etc.
But even those edits which do not break any Wikipedia rules or guidelines still can cause damage, this time much more subtle. The thing is, a (good) Wikipedia article is not just a collection of facts, even if every single fact is relevant, neutral, sourced, and deserves to be in the article. An article is a unified piece of work. It should flow to the reader, not bump. Information must be properly organized and related to each other. A major suffering of Wikipedia is the so-called "contribution creep", where people just keep dumping more and more facts into the article. The result is grossly disproportional coverage of some sections compared to others, a huge overemphasis on bullet-point lists rather than coherent paragraphs, lots of small factoids which while each good on their own right, do not belong together, parts of articles being outdated compared to other parts, and a lot of other problems which make Wikipedia look like a search result by Google rather than a real encyclopedia.
Early on, Wikipedia's first priority was to fill its databank with stuff, and all contributions (other than those breaking policy) were welcome. Recently, WP is at the stage of more stringent enforcement of policies, as well as guidelines and styleguides. And by all means, that is very important and should be the first priority. But it's not enough to be a good encyclopedia. Making sure everything is neutral, notable, verifiable, attributed, legal, and formatted according to style, is all sub-article tasks, which you apply to a particular sentence, paragraph, or image. But then you have to pause for a moment and look at an article at the big picture. Does it flow smoothly? Are all sections balanced? Are all parts equally updated? Would an average reader get a proportional representation from the article?
You can easily handle the sub-article problems (those that break a clearcut policy or guideline) contributions from anonymous edits (as well as non-anonymous edits). But "Contribution creep" is biggest problem to the overall article, where there is no clearcut right or wrong. And that's why, no matter how important anonymous edits are to Wikipedia (and they certainly are), the already developed articles should be marked as "revised" and new contributions screened before updating them. Not because of potential vandalism or policy violations (those are easy to fix), but precisely to manage contribution creep and make sure well-intented contributions don't introduce speedbumps to an article and break its coherent organization and flow.
Re:Road to hell paved with good intentions (Score:5, Informative)
There are edits that are obviously unhelpful; there are others that are clearly helpful. But there is a gray area of edits that falls in between, and for which editors' reactions vary a lot.
A good example is an anonymous/new editor adding unsourced information to a carefully-sourced Featured article. You can't let the information just remain there, as editors have gone through that page, double-checked the citations and validity of the statements, and generally polished the article to have its prose crisp and clean. But you cannot just revert the edit wholesale, as the edit was not done in bad faith. While sometimes the edits can be fixed, there are many times that the edits are incorrigible, and need to be completely reworked or removed (such as introducing widespread, irrelevant rumors on the biography of a celebrity).
So, at this time, some editors remove the text, with an explanation in the edit summary. Sometimes anonymous editors read the edit summaries, sometimes they don't. Often they wonder why their text got removed, justifiably so. Some users take that personally and begin accusing us of being "grammar Nazis", or even "suppressors of the truth" (I've heard that one before). But in a way, we're just trying to keep everything in order.
~~~~
Re: (Score:2)
Re: (Score:2, Insightful)
~~~~
Re: (Score:2)
The problem, as I saw when I was still active, was not that mop-wielders were editing, but that mop-wielders often forgot to act in bad faith, and were also overwhelmingly deletionist in almost every line of thinking and discussion. While there are many good admins, there are many more bad admins, and the systemic nature of the problem is such that many bad admins think they are seriously doing good work by simpl
Re: (Score:2)
All it requires is a server cluster,a copy of wiki, minimum rules and some advertisement.
Re: (Score:2, Insightful)
Re: (Score:2, Interesting)
So if someone sees something lacking, they should be free to add to it. If the content they add isn't formatted correctly or
whatever, that is what editors should fix, no?
As it stands, I've made minor modifications to wikipedia as well as attempting to add at least two pages that have links
to the page already, but aren't filled out. The modifications went ok, but last I checked the pages I
Wikipedia should return to its early days (Score:2)
Early on, Wikipedia's first priority was to fill its databank with stuff, and all contributions (other than those breaking policy) were welcome.
And ideally it should still be like this.
Recently, WP is at the stage of more stringent enforcement of policies, as well as guidelines and styleguides
Which is a very Bad Thing, IMO. Wikipedia is still incomplete, and the more paranoid it becomes about 'protecting' its content, the less contributions it's going to get. There is now too much unnesessary bureaucracy on Wikipedia that makes everyone's life very difficult.
Makes sense ? (Score:2)
"An anonymous user who makes a single edit is probably a good guy who spotted a mistake, an anonymous user who makes lots of edit is probably a vandal, if his contribution were good he'd probaly register to get credit. A registred user who does bad edits would be kicked pretty soon therefore registered user with large number of edits probably do quality edits".
Duh ?
If the finding were the
Not too surprising (Score:2)
This makes sense, right? If someone is editing anonymously, why are they editing anonymously? If they edit the Wikipedia frequently and just haven't bothered to get an account, it seems likely that they're lazy, stupid, or have something to hide. If they're anonymous because they don't make frequent edits and don't see the point in making an accoun
Re: (Score:2)
Re: (Score:2, Insightful)
~~~~
This is true... (Score:4, Interesting)
I would imagine that most single edits are like that - someone with a good depth of knowledge on a subject, noticing something that's not quite right. The threshold for action is high enough that you'd only do it if it was worthwhile.
ease of logging in (Score:5, Interesting)
Dynamic IP (Score:2, Insightful)
I'd think this would be obvious..... (Score:2)
And if you're going to go though the trouble of making an account, you're either going to do a quick piece of vandalism and leave, or be an active contributor.
I don't even think this is hindsight bias kicking in for calling this obvious. It really is just common sense. I'
I give you 10 EUR via PayPal if you write a paper (Score:3, Insightful)
I'm surprised someone actually got money to research this.
Research is useful even if it's obvious. Previously we couldn't cite anyone if we wanted to say that anons who edit once or twice make good edits. Now, thanks to this research, we can. While it's true that these researchers could spend their time and money in better questions, for example examining P=NP, but this research is still useful, if not for everything else, at least for putting it in the references of some other wiki-related research. Now, if I want to write a paper on wikis, I can cite their
Re:I give you 10 EUR via PayPal if you write a pap (Score:2)
Who gives a fsck about reputation? (Score:2)
I find it strange that it is suggested that logged-in users care about their reputation. I am such a logged-in user [wikipedia.org] and I don't care that much about reputation or what people may think or say about me, even though my Wikipedia account is linked with my real name. I mostly care to improve articles or correct misunderstandings. If I find that in some specific occasion I can make the encyclopedia better at the cost of making 500 people hate me, I won't give a fsck what the people are going to think or say a
my experiance (Score:3, Interesting)
I find this to be true (Score:3, Interesting)
Re: (Score:2, Insightful)
Re: (Score:2) | https://slashdot.org/story/07/10/17/2249246/infrequent-anonymous-cowards-reliable-on-wikipedia | CC-MAIN-2017-30 | refinedweb | 4,231 | 60.95 |
Defines a FirstClass node which will keep track of changes in the SASA and hpatch score. FirstClassNode is defined and implemented in AdditionalBackgroundNodesInteractionGraph. More...
#include <HPatchInteractionGraph.hh>
Defines a FirstClass node which will keep track of changes in the SASA and hpatch score. FirstClassNode is defined and implemented in AdditionalBackgroundNodesInteractionGraph.
destructor – no dynamically allocated data, does nothing
bookkeeping to follow a neighbors state substitution. this method gets called when a HPatchNode commits a sub and then broadcasts that change to all its neighboring fc nodes via the incident HPatchEdges. basically we need to set current state equal to alt state here. (Hopefully alt state is still correct!!) Since there's no way for a HPatchNode to know what other HPatchNodes are connected to it except via HPatchEdges, the calls seem a bit complicated. A HPatchNode has to call acknowledge_state on each Edge. The Edges have to figure out which Node is changing/not changing and then they call an inform_non_changing node of change method. That method then makes the call to this method on the correct HPatchNode. The inform_non_changing method can not be removed, because it's used during the substitution evaluations as well.
Assign the node to state 0 – the "unassigned" state.
A node in state 0 produces no hpatch score. Its neighbors have to adjust their scores appropriately. This method iterates over all the edges emanating from this node and tells them to acknowledge that they've been zeroed out.
References core::pack::interaction_graph::TR_NODE().
Returns the change in energy induced by changing a node from its current state into some alternate state for the PD energy terms only.
This function always gets called for every substitution. Only the consider_alt_state() call can get procrastinated.
Sets the current state to the alternate state this node was asked to consider. Copies appropriate score information. Notifies all of its neighbors that it is going through with the state substitution it had been considering.
There's a potential situation with considers() and commits() that needs to be checked for here. It's possible that a consider() call is made which causes a set of Nodes to update their alt states correspondingly. Since the consider() only gets processed by (or actually the call only goes out to) Nodes which are neighbor graph neighbors, only those Nodes will have their counts updated. Assume that the first consider() is really bad and no commit goes out. If we then consider() another sub, the alt state counts at the previous consider()'s set of nodes are incorrect. If get_deltaE is called by this consider() on some of those nodes, some of them will have their alt states reset. But when the commit goes out to ALL Nodes that are neighbors (NOT just the neighbor graph neighbors) it's possible that some of the Nodes will save the wrong alt state count. One way I think this can be avoided to is check at this node, if the alt state count is different from current, whether the node that originally changed is a neighbor graph neighbor of this node. If it is, that means the counts changed because of that node. If it's not, then this Node must have been one of the ones that fell out of sync.
Oooooh, I just thought of another way. When the SIG consider() method is called, I could have a reset alt state counts method that will deal with Nodes that are out of sync. That's more elegant than yet another if statement here!
I'm not sure the above is really a problem in the case of this IG. It seems like even if a non-committed consider() call occurs that alters the alt state counts/dots at some set of nodes, when a following sub does get commit'd(), then the nodes that will get the commit message should have all been updated. Some nodes will still have alt_state counts that are weird, but when a consider() call comes back around to them, it should reset the alt_state before doing anything.
References core::pack::interaction_graph::TR_NODE().
Instructs the Node to update the alt state information held by it and its neighbors in response to switching from the current state to an alternate state.
References core::pack::interaction_graph::TR_NODE().
Returns the amount of dynamic memory used by this Node object.
Returns the amount of static memory used by this Node object.
determines if any atom for any rotamer of this vertex overlaps with any atom from some background residue. called by BGNodes for the detect_background_residue_and_first_class_residue_overlap phase of the prep for simA call in the HPatchIG.
Referenced by core::pack::interaction_graph::HPatchBackgroundNode< V, E, G >::detect_overlap().
Returns current state. Only used by the unit tests.
Returns the alternate state SASA for the passed in atom index.
Returns a const reference to the atom-x-atom-pair vector-of-vectors of bools that specifies which atoms are overlapping, in the given state.
Returns current state. Only used by the unit tests.
returns the amount of sasa this node has in its current state assignment
RotamerDots objects know when they are in the unassigned state, so nothing special needs to be done to handle the 0 state. But, we could also query parent using get_current_state() to check if it's nonzero as an alternative.
Returns the current state SASA for the passed in atom index.
Returns the deltaE for just the PD terms. Separate method from the one above because this one can be called from within a commit_sub call that didn't go through consider_sub().
Returns a constant OP to the rotamer/residue object for the given state.
Need to save a reference to the rotamer_set so that we can determine what a particular state change will do to the score
Returns alt_state_rotamer_dots_.get_sasa() - curr_state_dots.get_sasa() except when either the current state or the alternate state is the unassigned state; state 0.
This method requires that the variables current_state_ and alternate_state_ correspond to the rotamers held in current_state_rotamer_dots_ and alt_state_rotamer_dots_. Usually, alternate_state_ holds meaningful data only if a vertex is considering a state substitution. This method will be invoked by a vertex as it considers a state substitition; it will also be invoked by each of its neighbors. That means the statement alternate_state_ = current_state_; must be present in HPatchNode::update_state_for_neighbors_substitution().
Not implemented, but needs to be!
Initializes the atom_atom_overlaps vector.
The atom_atom_overlaps vector stores a boolean for the intra-residue atom-atom overlap for every state possible at this Node. During simulated annealing, the IG has to determine the connected components after every sub. To do this, it has to know which atoms are exposed as well as overlapping. Instead of recomputing atom-atom overlaps after every sub, do it once and cache the values on the Node.
References core::kinematics::tree::distance_squared(), core::pack::interaction_graph::RotamerDots::get_atom_radius(), and core::scoring::sasa::get_legrand_atomic_overlap().
This method computes and caches each set of overlaps this node's states have with a background residue.
The node stores these overlaps in two places: 1) in the input vector from the HPatchBackgroundEdge object, and 2) in its own array of dot coverage counts for the self-and-background sphere overlaps.
References core::pack::interaction_graph::RotamerDots::get_num_atoms().
Initializes rotamer self overlap; called by HPatchInteractionGraph;:prepare_for_sim_annealing right before simulated annealing begins.
returns true if any sphere for any atom of any state on this vertex overlaps with any atom on any sphere on a neighboring vertex.
neighbor - [in] - the vertex that neighbors this vertex
References core::pack::interaction_graph::HPatchNode< V, E, G >::self_and_bg_dots_for_states_.
invokes V's prep_for_simA method. Also computes the "self-overlap" that each of its rotamers has.
References core::pack::interaction_graph::TR_NODE().
useful for debugging
useful for debugging - writes information about a node to the tracer
References core::pack::interaction_graph::TR_NODE().
Sets the alt state rotamer dots to the current state rotamer dots. See comments in SIG and commit_considered_substitution for more information about why this method exists.
References core::pack::interaction_graph::TR_NODE().
stores the coordinates for a state.
Currently RotamerCoords stores the atoms in the order they are created for the pose. In the future, this might be changes to store the atoms in special trie order to reduce the number of sphere calculations necessary.
self_and_bg_dots_for_states_ is a vector1 of RotamerDots objects. the size of the vector is set during Node construction to parent::get_num_states().
Need to save a reference to the rotamer_set so that we can determine what a particular state change will do to the score
References core::chemical::ResidueType::atom_type(), core::chemical::AtomType::element(), core::chemical::ResidueType::nheavyatoms(), and core::pack::interaction_graph::TR_NODE().
Updates the vector alt_state_exp_hphobes_ by checking the sasa of every atom in the residue currently on this Node.
returns the change in sasa for this node induced by a state substitution at a neighboring node. The node increments the dot coverage count for the RotamerDots object representing the alternate state dots for that neighbor.
The procedure is simple at heart – the caching makes it complex.
alt_state_rotamer_dots_ = current_state_rotamer_dots_; //copy dot coverage counts alt_state_rotamer_dots_.decrement( neighbors_curr_state_overlap_with_this ); alt_state_rotamer_dots_.increment_both( neighbors_alternate_state ); return ( alt_state_rotamer_dots_.get_score() - current_state_rotamer_dots_.get_score() );
Extensive caching techniques save time. Each HPatchEdge stores the dot coverage for each pair of HPatchNodes in their current state. These are stored in the RotamerDotsCache objects. Let's name the vertices: this vertex is vertex B. Vertex B is projecting its hpatch deltaE while vertex A is considering a substitution from one state to another. The edge connecting A and B provides node B with the set of masks for the overlap by the atoms of A's alternate state on all atoms of vertex B.
What do we do when the current state is unassigned. Then that results in the alt_state starting from being unassigned and we get problems because the RotamerDots object doesn't have any memory assigned to it if it's in the unassigned state.
it's probable that a sub that was considered before wasn't committed, so the alt state count at this node needs to go back to what the current state count is. the problem that may crop up with the reset here is that on a previous consider call, a set of nodes will update their counts. if commit is not called, and a second consider is called, then this node will have its alt state count reset but what about all the other nodes in the previous set. how will their counts get reset to the current state count? perhaps a check in the commit method can be added. nope, a new method has been added to Nodes and BGNodes to handle this case. alt_state_total_hASA_ = curr_state_total_hASA_;
References core::pack::interaction_graph::TR_NODE(). | https://www.rosettacommons.org/manuals/latest/core+protocols/d8/d4d/classcore_1_1pack_1_1interaction__graph_1_1_h_patch_node.html | CC-MAIN-2020-45 | refinedweb | 1,776 | 55.74 |
Opened 3 years ago
Closed 3 years 3 years ago by
comment:2 Changed 3 years ago by
comment:3 Changed 3 years ago by 3 years ago 3 years ago by 3 years ago by
StreamingHttpResponse could still do with some example code in the docs, even if it doesn't replace the existing example.
comment:7 Changed 3 years ago by
Any ideas regarding what type of example should be given in the docs for StreamingHttpResponse?
comment:8 Changed 3 years ago by
comment:9 Changed 3 years ago by 3 years ago by: 13 Changed 3 years ago by
comment:12 Changed 3 years ago by
You can also test this with an infinite series, such as the classic Fibonacci function, if you replace the range generator with something like:
def fib(): a, b = 0, 1 while 1: yield a a, b = b, a + b
I tested this and the memory use did not increase significantly even after streaming over a gigabyte of data for a single request.
comment:13 Changed 3 years ago by
The example above looks good to me. Please do submit a pull request - thanks.
comment:14 Changed 3 years ago by
I have opened a pull request here:
I am using a slight variation of the above example, using Python 3 friendly code and some additional comments, as suggested by
bmispelon.
comment:15 Changed 3 years ago by
Resubmitted a new pull request:
Yes, and also should link to this, in the text "For instance, it’s useful for generating large CSV files" | https://code.djangoproject.com/ticket/21179 | CC-MAIN-2017-04 | refinedweb | 261 | 52.16 |
10 March 2011 19:07 [Source: ICIS news]
TORONTO (ICIS)--?xml:namespace>
“As we move into 2012, we believe our portfolio of new feedstock projects will begin to dramatically improve our supply position,” Woelfel told analysts during the company’s 2010 fourth-quarter results conference call.
In
NOVA Chemicals’ production hub at Joffre, Alberta, will receive additional ethane from a gas plant in North Dakota by late 2012 when a supply pipeline is due to start up, Woelfel said.
Also in
These deals, and a project to extract olefins-rich feedstocks from the upgrading of oilsands in
In
The arrangement with Caiman is subject to NOVA arranging pipeline transportation to move the ethane from West Virgina to
NOVA is also working with other gas firms to secure additional ethane for Corunna, he said.
A possible ethane pipeline project with US firm Buckeye, announced early last year, “remained an attractive option”, he added.
At the same time, NOVA will upgrade its Corunna cracker to position that plant as “fully light-feed capable”, Woelfel said.
NOVA’s Corunna ethylene plant, often described as a “flexi-cracker”, runs on both heavy and light feedstocks. It can currently take ethane for almost one third of its overall feedstock intake, Woelfel said.
Preparations for the Corunna upgrade would be implemented during a planned turnaround this year, he said.
“We continue to believe that the
On Wednesday, NOVA reported that its 2010 fourth-quarter net income more than tripled to $60m (€43.2m) as demand and selling prices increased for olefins and polyolefins.
NOVA is a wholly-owned subsidiary of
($1 = €0.72) | http://www.icis.com/Articles/2011/03/10/9442805/canadas-nova-chemicals-sees-optimism-on-feedstock-outlook-ceo.html | CC-MAIN-2015-06 | refinedweb | 267 | 58.42 |
IT is the king of capital expenditure in the Hotel Group of Cendant Corp., according to the company’s CFO, so a good working relationship between the CIO and CFO is important to the success of the entire division. The group is the world’s leading franchiser of hotels, counting Travelodge, Ramada, and Days Inn among its nine brands.
“As a franchising business, our sole capex area is in the IT and infrastructure side,” Cendant Hotel Group CFO Bob Loewen said. Loewen dedicates one of his three finance directors solely to working with the IT department.
Mike Kennedy, the CIO of the Hotel Group, is thankful that Loewen has taken the time to understand that IT has different methods of allocating labor—whether it be for projects or core services.
Kennedy also appreciates that he and Loewen can work together to see that some large IT costs, such as communications, are allocated properly to the different business units rather than just coming from one big IT pot.
Kennedy and Loewen have avoided most of the common misunderstandings that occur in these executive relationships by keeping the lines of communication open and working together to ensure that IT projects make good business sense. CIOs have to take time to learn about their colleagues and their work in the financial department to have a similarly successful working relationship.
Mastering a different language
One source of conflict between CIOs and CFOs is language, said consultant Tom Pisello, a former Gartner VP and current president of Alinean, a financial consulting company. CFOs think about revenue; cost of goods sold; sales, general, and administration expenses; depreciation; and shareholder equity, Pisello said, while CIOs tend to talk in project terms—schedules, budgets, technical architecture.
“Until those two kinds of languages are married, I think this problem is going to continue,” Pisello said.
Kennedy has learned to speak in financial terms and understands the importance of the business side of IT projects. When preparing budgets, he tries to make sure that he, like Loewen, is speaking “that kind of universal language—the dollar sign.”
If a CIO doesn’t take time to make a compelling dollars-and-cents case for a project, CFOs have good reason to be skeptical of its feasibility. Pisello cites a Standish Group survey from 2000, which found that only 28 percent of IT projects are completed on time, within budget, and with all features functioning as planned.
With that high risk of failure, companies often have stringent standards for deciding whether or not to fund projects. In addition to ROI, CFOs may want to know the payback period (breakeven point), net present value (NPV), and internal rate of return (IRR).
Learn more about financial terms
The CCH Business Owner’s Toolkit and Alinean’s glossary offer definitions of common accounting terms.
“I really think that the CIOs need to actively learn more about how accounting is done at the particular company where they work,” said Linda Hughes, a business and technology consultant with North Highland. Hughes worked with CFOs as a CIO for a large nonprofit corporation and, prior to that, as a divisional CIO for a major consumer products company. In her current position, she helps IT departments communicate their business cases to executive management.
Hughes also recommends that CIOs who don’t have a rudimentary understanding of finance take an all-day seminar or attend a conference on managing IT spending.
Peter Faletti, a former CFO and Hughes’ colleague at North Highland, said that the biggest issue from the CFO’s perspective is to understand how the IT initiatives proposed in the budget relate to business strategies and objectives.
“A lot of times, people feel that the IT pool becomes one great big bundle, and it’s hard to separate what applies to any of the specific initiatives that the company might have underway—whether it’s breaking into a new market, launching a new product, or creating a different approach for dealing with its customers—people want to see that linkage,“ Faletti said.
One way to begin to sort out IT expenses is to place each budget request in one of three categories, Faletti said:
- Operating expenses
- Upgrade expenses
- New project expenses
The first category is the money needed to support all current systems and infrastructure. The second is money to be spent on enhancing current systems—an upgrade to the accounts payable system, for example. The third is anything that’s a new project. The new projects, especially, need to be prioritized according to their importance to the overall business strategy, in case money runs out at the end of the budgeting process.
Revising the role of the IT department and its budget
Unfortunately, when learning about accounting at their companies, some CIOs are frustrated to find that IT is seen mainly as a source of short-term operational efficiencies rather than as a long-term strategic advantage. Larry Downes, business technology consultant and author of The Strategy Machine, said, “The CFO, in many companies, still is laboring under the old paradigm of thinking about IT that has a 12- to 18-month return on investment.” In those companies, an automation project that would cut the bookkeeping staff with a six-month breakeven point would sail to approval, but efforts to ready the firm for Web services might be rejected.
Hughes’ experiences, on the other hand, give her hope that the old paradigm is fading away.
“A lot of CFOs are being asked to be more than the traditional bean counters,” Hughes said. “They are really trying to understand the business reasons for why things need to be done, so treat your CFO as any other business partner and don’t only talk numbers with them.”
Collaboration allows more innovation
Establishing a solid working relationship with the financial department can help CIOs with short-term and long-term budget needs. Cendant’s CIO Kennedy and CFO Loewen are acting as business partners in a new initiative that goes far beyond the old role of IT. The IT department is building a customer loyalty program for all nine Cendant hotel branches that will also have the participation of some other Cendant brands, such as RCI vacation time-sharing and Avis car rental. Like a “traditional” IT project, Loewen said the company will realize some operational efficiencies in its Parsippany, NJ, offices. But he says the project will excel at the second criterion he uses for evaluating projects, revenue generation, by adding value for the customer.
“We think this will allow us to improve our competitiveness within the marketplace, allowing our customers to earn points across all our different units and to be able to redeem them not only for hotel rooms, but also for time-shares or car rental,” Kennedy said.
And, just as Kennedy is aware of the business goals of the project, Loewen knows that the project to link the 6,600 locations is technically complex.
Kennedy and Loewen have settled on a project plan that includes outsourcing about 50 percent of the work. The IT department will also do some original coding, principally to integrate third-party tools.
“We’re working with partners of Cendant, like Trilegiant, who will actually be providing us with some of the back-end fulfillment capabilities,” Kennedy said. The system will also use Informix database software, Tibco messaging middleware, and Trillium for data cleansing. It will integrate customer information from a number of sources, including point-of-sale systems at the hotels, call centers, and consumer Web sites.
CFOs like Loewen, who consider the strategic value of IT projects in addition to short-term payoffs, can help IT get the funds for innovation. But, like Kennedy, CIOs also have the responsibility to make the business case for those longer-term projects.
Hughes has a final piece of advice for CIOs who want to work successfully with CFOs: “Talk with CFOs at times other than just when the budgets are due.” | https://www.techrepublic.com/article/learning-to-speak-the-cfos-language-can-help-ensure-it-dollars-are-spent-wisely/ | CC-MAIN-2017-47 | refinedweb | 1,329 | 53.75 |
Capitalization is counter intuitive in a URL. Since it's forced by the naming of this project, I recommend changing this project name to "fedora-council" to make it easier to discover. I'm also asking pagure search to be made case-insensitive if possible, but both fixes make sense IMHO. :smiley:
+1 I guess, though I'm not bothered either way really.
I personally do not care about lower/upper cases, but I understand it might be sometimes irritating for someone. So +1 for this.
I noticed this too time ago, but would prefer to make pagure case-insensitive. "Council" with uppercase makes still sense to me, because it's a name in our case, like FAmSCo. I guess you won't have a project called famsco, so +1 but for the pagure option.
Just want to add I browsed the projects yesterday and saw Fedora-Council is indeed the only one which uses uppercase. I even decided to follow the others for FAmSCo trac and named it all lowercase.
If the change for being insensitive takes too much time we should change the name, but we need to update also the wiki pages then (or any other doc we have around).
+1
FWIW Badges is also Mixed Case:
If we change it, is it easy to add a redirect? @pingou , @pfrields ?
Pagure itself doesn't support redirect, so options are:
Turn off issues, pull-request, doc in the settings of the project and edit the README to with a text pointing where the project lives now
Turn off issues, pull-request, doc in the settings of the project and edit the README to with a text pointing where the project lives now
This is further complicated because the actual issues are in a project called "tickets" within the Fedora-Council namespace. Fedora-Council exists a project so that doesn't 404.
I have changed my vote to "DO NOTHING BECAUSE THIS IS NOT WORTH THE RIP UP".
That's pretty much where I'm at too. I can make a fedora-council project which tells people to go here — but I'm not sure that's really helping.
That's pretty much where I'm at too. I can make a fedora-council project which tells people to go here — but I'm not sure that's really helping.
Please no.
I'm going to mark this declined. Sorry, Paul. :)
Metadata Update from @bex:
- Issue close_status updated to: declined
- Issue status updated to: Closed (was: Open)
to comment on this ticket. | https://pagure.io/Fedora-Council/tickets/issue/92 | CC-MAIN-2019-39 | refinedweb | 423 | 70.13 |
An SVG document is contained within an <svg> element, which is required to be the outermost element in an SVG document.
An SVG "document" can range from a single SVG graphics element such as a <rect> to a complex, deeply nested collection of container elements and graphics elements. Also, an SVG document can be embedded inline as a fragment within a parent document (an expectedly common situation within XML Web pages) or it can stand by itself as a self-contained graphics file.
The following example shows a simple SVG document embedded as a fragment within a parent XML document. Note the use of XML namespaces to indicate that the <svg> and <rect> elements belong to the SVG namespace:
<?xml version="1.0" standalone="yes"?> <parent xmlns="" xmlns: <!-- parent stuff here --> <svg:svg <svg:ellipse </svg:svg> <!-- ... --> </parent>
Download this example
This example shows a slightly more complex (i.e., it contains multiple rectangles) stand-alone, self-contained SVG document:
<?xml version="1.0" standalone="no"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG July 1999//EN" ""> <svg width="4in" height="3in"> <desc>Four separate rectangles </desc> <rect width="20" height="60"/> <rect width="30" height="70"/> <rect width="40" height="80"/> <rect width="50" height="90"/> </svg>
Download this example
<svg> elements can appear in the middle of SVG documents. This is the mechanism by which SVG documents can be embedded within other SVG documents.
Another use for <svg> elements within the middle of SVG documents is to establish a new viewport and alter the meaning of CSS unit specifiers. See Establishing a New Viewport: the <svg> element within an SVG document and Redefining the meaning of CSS unit specifiers.
Attribute definitions:
The <g> element is the element for grouping and naming collections of drawing elements. If several drawing elements share similar attributes, they can be collected together using a <g> element. For example:
<?xml version="1.0" standalone="no"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG July 1999//EN" ""> <svg width="4in" height="3in"> <desc>Two groups, each of two rectangles </desc> <g style="fillcolor:red"> <rect x="100" y="100" width="100" height="100" /> <rect x="300" y="100" width="100" height="100" /> </g> <g style="fillcolor:blue"> <rect x="100" y="300" width="100" height="100" /> <rect x="300" y="300" width="100" height="100" /> </g> </svg>
Download this example
A group of drawing elements, as well as individual objects, can be given a name. Named groups are needed for several purposes such as animation and re-usable objects. The following example organizes the drawing elements into two groups and assigns a name to each group:
<?xml version="1.0" standalone="no"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG July 1999//EN" ""> <svg width="4in" height="3in"> <desc>Two named groups </desc> <g id="OBJECT1"> <rect x="100" y="100" width="100" height="100" /> </g> <g id="OBJECT2"> <circle cx="150" cy="300" r="25" /> </g> </svg>
Download this example
A <g> element can contain other <g> elements nested within it, to an arbitrary depth. Thus, the following is valid SVG:
<?xml version="1.0" standalone="no"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG July 1999//EN" ""> <svg width="4in" height="3in"> <desc>Groups can nest </desc> <g> <g> <g> </g> </g> </g> </svg>
Download this example
Any drawing)"/>
In SVG, among the facilities that allow URI references are:
URI references are defined in either of the following forms:
<URI-reference> = [ <absoluteURI> | <relativeURI> ] [ "#" <elementID> ] -or- <URI-reference> = [ <absoluteURI> | <relativeURI> ] [ "#id(" <elementID> ")" ]
where <elementID> is the ID of the referenced element.
SVG supports two types of URI references:
The following rules apply to the processing of URI references:. In those cases where a list of property values are possible and one of the property values is an undefined reference, then the list will be processed as if the invalid reference were removed from the list.
The <defs> element is used to identify those objects which will be referenced by other objects later in the document. It is a requirement that all referenced objects be defined within a <defs> element. (See References and the <defs> element.)
The child elements within a <defs> element are not drawn.
To provide some SVG user agents with an opportunity to implement efficient implementations in streaming environments, creators of SVG documents July 1999//EN" ""> <svg width="4in" height="3in"> <desc>Local URI references within ancestor's <defs> element.</desc> <defs> <linearGradient id="Gradient01"> <stop offset="30%" style="color:#39F"/> </linearGradient> </defs> <g> <g> <rect x="0%" y="0%" width="100%" height="100%" style="fill:url(#Gradient01)" /> </g> </g> < user agent would ignore (i.e., not display) the <desc> <desc>>
The <symbol> element is used to define graphical objects which are meant for any of the following uses:
Closely related to the <symbol> element are
the <marker> and
<pattern> elements.
Any <svg>, <symbol>, <g>, or graphics element that is a child of a <defs> element and has been assigned an ID is potentially a template object that can be re-used (i.e., "instanced") anywhere in the SVG document via a <use> element. The <use> element references another element and indicates that the graphical contents of that element should be included/drawn at that given point in the document. The <use> element conforms to XLink [??? Include reference to XLink]. (Note that the XLink specification is currently under development and is subject to change. The SVG working group will track and rationalize with XLink as it evolves.)
The <use> element can reference either:
Unlike <image>, the <use> element cannot reference entire files.
In the example below, the first <g> element has inline content. After this comes a <use> element whose href value indicates which graphics element should be included/rendered at that point in the document. Finally, the second <g> element has both inline and referenced content. In this case, the referenced content will draw first, followed by the inline content.
<?xml version="1.0" standalone="no"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG July 1999//EN" ""> <svg width="4in" height="3in"> <defs> <symbol id="TemplateObject01"> <!-- symbol definition here --> </symbol> </defs> <desc>Examples of inline and referenced content </desc> <!-- <g> with inline content --> <g> <!-- Inline content goes here --> </g> <!-- referenced content --> <use xlink: <!-- <g> with both referenced and inline content --> <g> <use xlink: <!-- Inline content goes here --> </g> </svg>
The <use> element has optional attributes x, y, width and height which are used to map the graphical contents of the referenced element onto a rectangular region within the current coordinate system.
Any graphics attributes specified on a <use> element override the attributes specified on the template/referenced element. For example, if the stroke-width on the template object is 10 but the <use> specifies a stroke-width of 20, then the object will draw with a stroke-width of 20.
The <use> element
does not do the equivalent of a macro expansion.
The SVG Document Object Model (DOM) only contains a
<use> element and its
attributes.
Attribute definitions:
The <image> element indicates that the contents of a complete file should be rendered into a given rectangle within the current user coordinate system. The <image> element can refer to raster image files such as PNG or JPEG or to files with MIME type of "image/svg". Conforming SVG viewers need to support at least PNG, JPEG and SVG format files.
Unlike <use>,
the <image> element cannot reference elements within
an SVG file.
Attribute definitions:
A valid example:
<?xml version="1.0" standalone="no"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG July 1999//EN" ""> <svg width="4in" height="3in"> <desc>This graphic links to an external image </desc> <image x="200" y="200" width"100px" height="100px" xlink: <title>My image</title> </image> </svg>
A well-formed example:
<?xml version="1.0" standalone="yes"?> <svg width="4in" height="3in" xmlns=''> <desc>This links to an external image </desc> <image x="200" y="200" width"100px" height="100px" xlink: <title>My image</title> </image> </svg> | https://www.w3.org/1999/07/30/WD-SVG-19990730/struct.html | CC-MAIN-2022-40 | refinedweb | 1,320 | 51.58 |
QT5.2.1 :-1: error: Unknown module(s) in QT: serialport
Hi,
I am trying to work with serial port communication in a little console application on linux and after some research i saw that the module for serialport is in the basic installation qt5.1 or superior.
I added :
'''QT += serialport'''
to my .pro file and :
'''#include <QtSerialPort/QtSerialPort>'''
to my .cpp but it gives me the error :
'''Unknown module(s) in QT: serialport'''
I checked in the installation folder of qt5.2.1 and i found files related to this module but it seems qt don't find them. Also, when including <QtSerialPort/QtSerialPort> it don't find it but it seems to be able to find <QtSerialPort/QSerialPortInfo>.
Do i have to transfert files to my application folder or am i missing a line somewhere in my .pro/.cpp files ?
#include <QtSerialPort/QtSerialPort>
Hi,
I think the right header file is
#include <QtSerialPort/QSerialPort>. Probably an error in the Module's page
- cybercatalyst last edited by
On which OS are you working? Under Ubuntu you have to install the Qt serialport package.
@cybercatalyst
I am working on ubuntu 14.04.
I tried getting the source ( ) but i always had the error :
fatal: unable to connect to code.qt.io:
code.qt.io[0: 54.77.201.214]: errno=Connection timed out
Is there any other way to download them ?
EDIT : I downloaded it here ::
I tried to follow the qtcreator instruction from here :
I added install argument to my make and then tried to rebuild the project but i have a issue :
/home/spi/Bureau/qtserialport/qt-qtserialport/src/serialport/qserialport.h:40: error: QtSerialPort/qserialportglobal.h: No such file or directory
I checked, the file is in the project and in the src folder.
- JKSH Moderators last edited by
I checked in the installation folder of qt5.2.1
- What is the full path to your installation folder?
- How did you install Qt 5.2.1?
If you installed it from the Ubuntu repositories, install Qt Serial Port from the Ubuntu Software Center (libqt5serialport5-dev).
I tried getting the source ( )
Don't do that. There is a high risk that you will end up with multiple copies of the library in your computer, and you will have a hard time debugging if your program links to the wrong version.
Delete all the source code you downloaded and any binaries you built yourself. Use the official installers.
@JKSH
But if i use the install from the ubuntu software center it will only install it in the qt5 version i installed using the USC, right ?
I use qt creator with a qt5 kit and a qt4 kit, i manually installed the qt4 version and i would need to use the serial port too with this version.
- JKSH Moderators last edited by
Please answer the 2 questions in my previous post.
But if i use the install from the ubuntu software center it will only install it in the qt5 version i installed using the USC, right ?
Right.
I use qt creator with a qt5 kit and a qt4 kit, i manually installed the qt4 version and i would need to use the serial port too with this version.
You only asked for Qt 5 in your original post, so I gave you a Qt 5 solution. Anyway, try that first; focus on getting it working for Qt 5 first. Worry about Qt 4 later. | https://forum.qt.io/topic/53597/qt5-2-1-1-error-unknown-module-s-in-qt-serialport/1 | CC-MAIN-2019-47 | refinedweb | 576 | 63.59 |
Gecko 2 introduces a new parser, based on HTML5. The HTML parser is one of the most complicated and sensitive pieces of a browser. It controls how your HTML source code is turned into web pages and, as such, changes to it are rare. The new parser is faster, complies with the HTML5 standard, and enables a lot of new functionality as well.
The new parser introduces these major improvements:
- You can now use SVG and MathML inline in HTML5 pages, without XML namespace syntax.
- Parsing is now done in a separate thread from Firefox’s main UI thread, improving overall browser responsiveness.
- Calls to
innerHTMLare a lot faster.
- Dozens of long-standing parser related bugs are now fixed.
The HTML5 specification provides a more detailed description than previous HTML standards of how to turn a stream of bytes into a DOM tree. This will result in more consistent behavior across browser implementations. In other words, in supporting HTML5, Gecko, WebKit, and Internet Explorer (IE) will behave more consistently with each other.
Changed parser behaviors
Some changes to the way that the Gecko 2 parser behaves, as compared to earlier versions of Gecko, may affect web developers, depending on how you've written your code in the past and what browsers you've tested it on.
Tokenization of left angle-bracket within a tag
Given the string
<foo<bar>, the new parser reads it as one tag named
foo<bar. This behavior is consistent with IE and Opera, and is different from Gecko 1.x and WebKit, which read it as two tags,
foo and
bar. If you previously tested your code in IE and Opera, then you probably don't have any tags like this. If you tested your site only with Gecko 1.x or WebKit (for example, Firefox-only intranets or WebKit-oriented mobile sites), then you might have tags that match this pattern, and they will behave differently with Gecko 2.
Calling document.write() during parsing
Prior to HTML5, Gecko and WebKit allowed calls to
document.write() during parsing to insert content into the source stream. This behavior was inherently racy, as the content was inserted into a timing-dependent point in the source stream. If the call happened after the parser was done, the inserted content replaced the document. In IE, such calls are either ignored or imply a call to
document.open(), replacing the document. In HTML5,
document.write() can only be called from a script that is created by a
<script> tag that does not have the
async or
defer attributes set. With the HTML5 parser, calls to
document.write() in any other context either are ignored or replace the document.
Some contexts from which you should not call
document.write() include:
- scripts created using document.createElement()
- event handlers
- setTimeout()
- setInterval()
<script async
<script defer
If you use the same mechanism for loading script libraries for all browsers including IE, then your code probably will not be affected by this change. Scripts that serve racy code to Firefox, perhaps while serving safe code to IE, will see a difference due to this change. Firefox writes a warning to the JavaScript console when it ignores a call to
document.write().
Lack of Reparsing
Prior to HTML5, parsers reparsed the document if they hit the end of the file within certain elements or within comments. For example, if the document lacked a
</title> closing tag, the parser would reparse to look for the first '<' in the document, or if a comment was not closed, it would look for the first '>'. This behavior created a security vulnerability. If an attacker could force a premature end-of-file, the parser might change which parts of the document it considered to be executable scripts. In addition, supporting reparsing led to unnecessarily complex parsing code.
With HTML5, parsers no longer reparse documents. This change has the following consequences for web developers:
- If you omit the closing tag for <title>, <style>, <textarea>, or <xmp>, the page will fail to be parsed. IE already fails to parse documents with a missing </title> tag, so if you test with IE, you probably don't have that problem.
- If you forget to close a comment, the page will most likely fail to be parsed. However, unclosed comments often already break in existing browsers for one reason or another, so it's unlikely that you have this issue in sites that are tested in multiple browsers.
- In an inline script, in order to use the literal strings
<script,
</script>,and
<!--, you should prevent them from being parsed literally by expressing them as
\u003cscript
,
\u003c/script>, and
\u003c!--. The older practice of escaping the string
</script>by surrounding it with comment markers, while supported by HTML5, is problematic in cases where the closing comment marker is omitted (see preceding point). You can avoid such problems by using the character code for the initial '<' instead. (It is valid to use an escape character, e.g.,
<\/script>, but this strategy does not work for
<scriptand
<!--, because
\sand
\!are not valid JavaScript escapes; the character code strategy is more general-purpose.)
Inline SVG and MathML support
As a completely new parsing feature, HTML5 introduced support for inline SVG and MathML in
text/html. This means that you can now use SVG and MathML inline in
text/html similarly to what has previously been possible in
application/xhtml+xml.
.
Performance improvement with speculative parsing
Unrelated to the requirements of HTML5 specification, the Gecko 2 parser uses speculative parsing, in which it continues parsing a document while scripts are being downloaded and executed. This results in improved performance compared to older parsers, because most of the time, Gecko can complete these tasks in parallel.
To best take advantage of speculative parsing, and help your pages to load as quickly as possible, ensure that when you call document.write(), you write a balanced sub-tree within that chunk of script. A balanced sub-tree is HTML code in which any elements that are opened are also closed, so that after the script, the elements left open are the same ones that were open before the script. The open and closing tags do not need to be written by the same
document.write() call, as long as they are within the same
<script> tag.
Please note that you shouldn't use end tags for void elements that don't have end tags:
<area>,
<base>,
<br>,
<col>,
<command>,
<embed>,
<hr>,
<img>,
<input>,
<keygen>,
<link>,
<meta>,
<param>,
<source> and
<wbr>. (There are also some element whose end tags can be omitted in some cases, such as
<p> in the example below, but it's simpler to always use end tags for those elements than to make sure that the end tags are only omitted when they aren't necessary.)
For example, the following code writes a balanced sub-tree:
<script> document.write("<div>"); document.write("<p>Some content goes here.</p>"); document.write("</div>"); </script> <!-- Non-script HTML goes here. -->
In contrast, the following code contains two scripts with unbalanced sub-trees, which causes speculative parsing to fail and therefore the time to parse the document is longer.
<script>document.write("<div>");</script> <p>Some content goes here.</p> <script>document.write("</div>");</script>
For more information, see Optimizing your pages for speculative parsing. | https://developer.mozilla.org/en-US/docs/HTML/HTML5/HTML5_Parser?redirect=no | CC-MAIN-2017-13 | refinedweb | 1,220 | 62.38 |
So far, we've learned how to display messages in labels, and we've met Tkinter core concepts along the way. Labels are nice for teaching the basics, but user interfaces usually need to do a bit more; like actually responding to users. The program in Example 8-10 creates the window in Figure 8-7.
Figure 8-7. A button on the top
Example 8-10. PP3E\Gui\Intro\gui2.py
import sys from Tkinter import * widget = Button(None, text='Hello widget world', command=sys.exit) widget.pack( ) widget.mainloop( )
Here, instead of making a label, we create an instance of the
Tkinter
Button class. It's attached
to the default top level as before on the default
TOP packing side. But the main thing to
notice here is the button's configuration arguments: we set an option
called
command to the
sys.exit function.
For buttons, the
command
option is the place where we specify a callback handler function to be
run when the button is later pressed. In effect, we use
command to register an action for Tkinter to
call when a widget's event occurs. The callback handler used here
isn't very interesting: as we learned in an earlier chapter, the
built-in
sys.exit function simply
shuts down the calling program. Here, that means that pressing this
button makes the window go away.
Just as for labels, there are other ways to code buttons. Example 8-11 is a version that packs the button in place ...
No credit card required | https://www.safaribooksonline.com/library/view/programming-python-3rd/0596009259/ch08s07.html | CC-MAIN-2018-17 | refinedweb | 256 | 65.22 |
Continuing on the historical vein, once upon a time there was a package included in Red Hat Linux called
pythonlib. One of the things I helped do was to finish killing it off. We went along and then a few releases later, wanted to share some python code again. Thus was born
rhpl – the Red Hat Python Library. It started out simply enough — some wrappers for translation stuff and one or two other little things. And then it began to grow, as these things do over time. Some of the things made sense, some less so. Over time, pieces have moved around into other things (including
rhpxl — the Red Hat Python Xconfig library)
Fast-forward to today and it’s a bit of a mess with things contributed by various people and used in one config tool (or two) and barely maintained. Also a lot of the things being wrapped have gotten a lot better in the python standard library. The
gettext module is leaps and bounds better than the one from python 1.5 and also the
subprocess module is awesome for spawning processes.
Therefore, I think it’s time to continue the cycle and kill off
rhpl for Fedora 12. I’m starting to make patches and file them for packages using
rhpl to transition them over. Help much appreciated from anyone that wants to join in.
For the
rhpl.translate ->
gettext case, you generally want to replace the import of _ and N_ from rhpl.translate with something like
import gettext
_ = lambda x: gettext.ldgettext(domain, x)
N_ = lambda x: x | http://velohacker.com/2009/06/30/repeating-the-cycle-time-to-kill-rhpl/ | CC-MAIN-2014-35 | refinedweb | 265 | 73.98 |
> Maybe we can begin to talk about all
>the other compatibility issues other then True = -1, Short Circuiting, etc,
>*and* how to address them.
Right.
I have been thinking about the control arrays problem. Control arrays are
heavily used, and while delegation gives us what seems to be a far more powerful
mechanism for achieving similar results, it is obviously one area where considerable
incompatability exists.
The way I see it, the problem affects us in two ways, each of which should
perhaps be addressed differently.
1) Migration. This does not seem too much of an issue. In my experience,
the migration wizard does an OK job of control arrays, and the various compatibility
objects in Microsoft.VisualBasic.Compatibility.VB6 namespace would seem to
cater for most "classic" control array needs.
If you want me to post some code on this, just ask - it is really quite simple.
2) New development. Using delegation is more powerful than classic conrol
arrays, but correspondingly less simple - ie the need to write extra code
rather than just work with properties. The added power of delegation is not
meaningful for those who only intend to use it to get the functionality they
already have in VB6 control arrays.
One obvious solution to all this is to add the classic control array back
in to the Windows forms package. This seems to me to be a huge overkill,
considering the ease of duplicating control array functionality, but I am
sure others would disagree. How difficult it would be, I don't know - it
would probably be more likely that it could be fixed up to look and behave
like a classic control array.....
If that one option, then let's consider others.
Addressing point 1 (migration).
Well, control arrays migrate well, and remain functional in my experience.
The code changes a little, that is to be expected, but is easily understandable.
I reiterate my opinion - there seems to be little concern for migrating control
arrays. The wizard does a good job - I will post code upon request.
Addressing Point 2 (New development).
Well writing control array functionality is a little more convoluted, I grant
that. Basically it consists of adding/extending the "Handles" clause to an
existing event handler, which is not, in itself, difficult, but can become
tedious if you are coding for many events. Some automation of this process
would be greatly beneficial....
I propose this: an extension to the property pages. One new item, perhaps
titled "Array Name" or something,combination textbox/combo, & its functionality
to be explained below, and a second new item "Index" behaving just like the
classic VB index property.
Here is how it could work. You add a new command button, call it "cmdTest0".
In "Array Name" property type "cmdTest" ie creating a base control array
handler. "Index" property would then default to 0. Changing the name of this
control should update the name in the "Array Name" property of any other
controls....
Then, add a second button, call it "cmdTest1" (Don't worry about the naming
just yet - I will get to that). In "Array Name" property select "cmdTest"
(it would be there in a dropdown). "Index" property would then default to
1.
From these properties, you can see how the behind-the-scenes code would work.
The IDE could easily handle keeping track of the event handlers for different
controls. Control referencing with the Name(Index) form could be handled
similar to the way the wizard handles the migration of control arrays - basically
by creating an object in code to handle the indexed references. In this example,
that object would be called "cmdTest" as entered in the "Array Name" property,
and it would basically manafacture references to controls by way of an index,
and would be iteratable. we would have functional control arrays without
any real change to the existing WinForms package. Seems like a reasonable
solution to me.
I welcome and ancourage any comment - this is pretty off-the-cuff. Are there
any obvious holes in this that I have overlooked?
Cheers,
Paul
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center | http://forums.devx.com/showthread.php?53180-Re-A-moderate-view-Adressing-Control-Arrays.....&p=180250&mode=threaded | CC-MAIN-2017-47 | refinedweb | 696 | 55.34 |
OK, I guess I misunderstood you. I don't know how SafeHaskell works, so I don't know whether there might be some interaction. I know that profiling is a static flag which must be set when you initialise the session and cannot be changed afterwards. I assume you are doing that. I checked the source code for getHValue (in 7.0.4) and it calls linkDependencies if the name is external (not 100 percent sure what that means). There is an interesting comment in linkDependencies, though: -- The interpreter and dynamic linker can only handle object code built -- the "normal" way, i.e. no non-std ways like profiling or ticky-ticky. -- So here we check the build tag: if we're building a non-standard way -- then we need to find & link object files built the "normal" way. This is what I've was referring to in my previous mail. Even though you're compiling to machine code, you are using the in-memory linker (i.e., the GHCi linker). It seems like that this is a fundamental limitation of the internal linker. You may be using it in a way that doesn't trigger the sanity check and end up causing a panic. I suggest you pose this question on the glasgow-haskell-users mailing list. On 28 August 2011 17:57, Chris Smith <cdsmith at gmail.com> wrote: > On Sun, 2011-08-28 at 17:47 +0100, Thomas Schilling wrote: >> I don't think you can link GHCi with binaries compiled in profiling >> mode. You'll have to build an executable. > > Okay... sorry to be obtuse, but what exactly does this mean? I'm not > using GHCi at all: I *am* in an executable built with profiling info. > > I'm doing.log_action = addErrorTo codeErrors > } > GHC.setSessionDynFlags dflags' > target <- GHC.guessTarget filename Nothing > GHC.setTargets [target] > r <- fmap GHC.succeeded (GHC.load GHC.LoadAllTargets) > > and then if r is true: > > mods <- GHC.getModuleGraph > let mainMod = GHC.ms_mod (head mods) > Just mi <- GHC.getModuleInfo mainMod > let tyThings = GHC.modInfoTyThings mi > let var = chooseTopLevel varname tyThings > session <- GHC.getSession > v <- GHC.liftIO $ GHC.getHValue session (GHC.varName var) > return (unsafeCoerce# v) > > Here, I know that chooseTopLevel is working, but the getHValue part only > works without profiling. So is this still hopeless, or do I just need > to find the right additional flags to add to dflags'? > > -- > Chris Smith > > > -- Push the envelope. Watch it bend. | http://www.haskell.org/pipermail/haskell-cafe/2011-August/095034.html | CC-MAIN-2014-35 | refinedweb | 404 | 69.38 |
Brian Goetz, Tim Peierls
Mentioned 201
Provides information on building concurrent applications using Java.
I want to run a thread for some fixed amount of time. If it is not completed within that time, I want to either kill it, throw some exception, or handle it in some way. How can it be done?
One way of doing it as I figured out from this thread is to use a TimerTask inside the run() method of the Thread.
Are there any better solutions for this?
EDIT: Adding a bounty as I needed a clearer answer. The ExecutorService code given below does not address my problem. Why should I sleep() after executing (some code - I have no handle over this piece of code)? If the code is completed and the sleep() is interrupted, how can that be a timeOut?
The task that needs to be executed is not in my control. It can be any piece of code. The problem is this piece of code might run into an infinite loop. I don't want that to happen. So, I just want to run that task in a separate thread. The parent thread has to wait till that thread finishes and needs to know the status of the task (i.e. whether it timed out, some exception occurred, or it succeeded). If the task goes into an infinite loop, my parent thread keeps on waiting indefinitely, which is not an ideal situation.
I think you should take a look at proper concurrency handling mechanisms (threads running into infinite loops doesn't sound good per se, btw). Make sure you read a little about the "killing" or "stopping" Threads topic.
What you are describing sounds very much like a "rendezvous", so you may want to take a look at the CyclicBarrier.
There may be other constructs (like using CountDownLatch for example) that can resolve your problem (one thread waiting with a timeout for the latch, the other should count down the latch if it has done its work, which would release your first thread either after a timeout or when the latch countdown is invoked).
I usually recommend two books in this area: Concurrent Programming in Java and Java Concurrency in Practice.
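For the specific "run a task with a time limit" part, one common pattern (just an illustrative sketch, not code from the original thread; the timeout values and task body are made up) is to submit the work to an ExecutorService and wait on the Future with a timeout. Note that cancel(true) can only stop the task if its code responds to interruption; a loop that ignores the interrupt flag will keep running.

import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // Submit the (possibly never-ending) task and wait at most 5 seconds for it.
        Future<String> future = executor.submit(() -> {
            Thread.sleep(10_000); // stand-in for the task that is not under my control
            return "success";
        });
        try {
            String result = future.get(5, TimeUnit.SECONDS);
            System.out.println("Task finished: " + result);
        } catch (TimeoutException e) {
            System.out.println("Task timed out");
            future.cancel(true); // interrupts the worker thread, if the task is interruptible
        } catch (ExecutionException e) {
            System.out.println("Task threw: " + e.getCause());
        } finally {
            executor.shutdownNow();
        }
    }
}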
I sort of understand that AtomicInteger and other Atomic variables allow concurrent accesses. In what cases is this class typically used though?
There are two main uses of AtomicInteger:
As an atomic counter (incrementAndGet(), etc) that can be used by many threads concurrently
As a primitive that supports compare-and-swap instruction (compareAndSet()) to implement non-blocking algorithms.
Here is an example of a non-blocking random number generator from Brian Goetz's Java Concurrency In Practice:
public class AtomicPseudoRandom extends PseudoRandom {
    private AtomicInteger seed;

    AtomicPseudoRandom(int seed) {
        this.seed = new AtomicInteger(seed);
    }

    public int nextInt(int n) {
        while (true) {
            int s = seed.get();
            int nextSeed = calculateNext(s);
            if (seed.compareAndSet(s, nextSeed)) {
                int remainder = s % n;
                return remainder > 0 ? remainder : remainder + n;
            }
        }
    }
    ...
}
As you can see, it basically works almost the same way as incrementAndGet(), but performs arbitrary calculation (calculateNext()) instead of increment (and processes the result before return).
How can I use AtomicBoolean and what is that class for?
The AtomicBoolean class gives you a boolean value that you can update atomically. Use it when you have multiple threads accessing a boolean variable.
The java.util.concurrent.atomic package overview gives you a good high-level description of what the classes in this package do and when to use them. I'd also recommend the book Java Concurrency in Practice by Brian Goetz.
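As a small illustration (my own sketch, not from the original answer; the class and method names are invented), the typical compareAndSet idiom ensures some action happens exactly once even when many threads race to do it:

import java.util.concurrent.atomic.AtomicBoolean;

public class OneTimeInit {
    private final AtomicBoolean initialized = new AtomicBoolean(false);

    public void initIfNeeded() {
        // compareAndSet(false, true) succeeds for exactly one thread, no matter how many race here
        if (initialized.compareAndSet(false, true)) {
            System.out.println("initializing once in " + Thread.currentThread().getName());
        }
    }
}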
Is there any condition where finally might not run in Java? Thanks.
Here are some conditions which can bypass a finally block: System.exit() is called from the try block, the JVM crashes or the process is killed, the try block never finishes (for example an infinite loop or a block that waits forever), or the machine loses power.
I have seen a lot of code invoke the method Thread.currentThread().interrupt() in the catch block. Why?
This is done to keep state.
When you catch the InterruptedException and swallow it, you essentially prevent any higher-level methods/thread groups from noticing the interrupt. Which may cause problems.
By calling Thread.currentThread().interrupt(), you set the interrupt flag of the thread, so higher-level interrupt handlers will notice it and can handle it appropriately.
Java Concurrency in Practice discusses this in more detail in Chapter 7.1.3: Responding to Interruption. Its rule is:
Only code that implements a thread's interruption policy may swallow an interruption request. General-purpose task and library code should never swallow interruption requests.
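A minimal sketch of that rule (illustrative only; the Worker class and the queue element type are made up): library-level code that cannot rethrow the InterruptedException restores the flag so its owner can see the interrupt:

import java.util.concurrent.BlockingQueue;

class Worker implements Runnable {
    private final BlockingQueue<String> queue;

    Worker(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                process(queue.take()); // take() throws InterruptedException when interrupted
            }
        } catch (InterruptedException e) {
            // This code doesn't own the thread, so it can't decide what interruption means
            // for the whole application; restore the flag for the owner to act on.
            Thread.currentThread().interrupt();
        }
    }

    private void process(String item) {
        System.out.println("processing " + item);
    }
}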
Why is
i++ not atomic in Java?
To get a bit deeper in Java I tried to count how often the loop in threads are executed.
So I used a
private static int total = 0;
in the main class.
I have two threads.
System.out.println("Hello from Thread 1!");
System.out.println("Hello from Thread 2!");
And I count the lines printed by thread 1 and thread 2. But the lines of thread 1 + lines of thread 2 don't match the total number of lines printed out.
Here is my code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.logging.Level;
import java.util.logging.Logger;

public class Test {

    private static int total = 0;
    private static int countT1 = 0;
    private static int countT2 = 0;
    private boolean run = true;

    public Test() {
        ExecutorService newCachedThreadPool = Executors.newCachedThreadPool();
        newCachedThreadPool.execute(t1);
        newCachedThreadPool.execute(t2);
        try {
            Thread.sleep(1000);
        } catch (InterruptedException ex) {
            Logger.getLogger(Test.class.getName()).log(Level.SEVERE, null, ex);
        }
        run = false;
        try {
            Thread.sleep(1000);
        } catch (InterruptedException ex) {
            Logger.getLogger(Test.class.getName()).log(Level.SEVERE, null, ex);
        }
        System.out.println((countT1 + countT2 + " == " + total));
    }

    private Runnable t1 = new Runnable() {
        @Override
        public void run() {
            while (run) {
                total++;
                countT1++;
                System.out.println("Hello #" + countT1 + " from Thread 1! Total hello: " + total);
            }
        }
    };

    private Runnable t2 = new Runnable() {
        @Override
        public void run() {
            while (run) {
                total++;
                countT2++;
                System.out.println("Hello #" + countT2 + " from Thread 2! Total hello: " + total);
            }
        }
    };

    public static void main(String[] args) {
        new Test();
    }
}
i++ is a statement which simply involves 3 operations:
Read the current value of i
Increment the value
Store the new value back into i
These three operations are not meant to be executed in one step; in other words, i++ is not a compound operation. As a result, all sorts of things can go wrong when more than one thread is involved in a single but non-compound operation.
i++ is not a compound operation. As a result all sorts of things can go wrong when more than one threads are involved in a single but non-compound operation.
As an example imagine this scenario:
Time 1:
Thread A fetches i
Thread B fetches i
Time 2:
Thread A overwrites i with a new value say -foo-
Thread B overwrites i with a new value say -bar-
Thread B stores -bar- in i
// At this time thread B seems to be more 'active'. Not only does it overwrite
// its local copy of i but also makes it in time to store -bar- back to
// 'main' memory (i)
Time 3:
Thread A attempts to store -foo- in memory effectively overwriting the -bar- value (in i) which was just stored by thread B in Time 2. Thread B has nothing to do here. Its work was done by Time 2. However it was all for nothing as -bar- was eventually overwritten by another thread.
And there you have it. A race condition.
That's why
i++ is not atomic. If it was, none of this would have happened and each
fetch-update-store would happen atomically. That's exactly what
AtomicInteger is for and in your case it would probably fit right in.
P.S.
An excellent book covering all of those issues and then some is this: Java Concurrency in Practice
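To make the connection concrete, here is a hedged sketch (not from the original answer) of the same two-thread counter written with AtomicInteger, whose incrementAndGet() performs the read-increment-store as one atomic step, so the counts always add up:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    private static final AtomicInteger total = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                total.incrementAndGet(); // atomic read-modify-write, no lost updates
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(total.get() + " == 200000"); // always prints 200000 == 200000
    }
}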
I'm writing a Java program which uses a lot of CPU because of the nature of what it does. However, lots of it can run in parallel, and I have made my program multi-threaded. When I run it, it only seems to use one CPU until it needs more then it uses another CPU - is there anything I can do in Java to force different threads to run on different cores/CPUs?
First, I'd suggest reading "Concurrency in Practice" by Brian Goetz.
This is by far the best book describing concurrent java programming.
Concurrency is 'easy to learn, difficult to master'. I'd suggest reading plenty about the subject before attempting it. It's very easy to get a multi-threaded program to work correctly 99.9% of the time, and fail 0.1%. However, here are some tips to get you started:
There are two common ways to make a program use more than one core:
At the lowest level, one can create and destroy threads. Java makes it easy to create threads in a portable cross platform manner.
As it tends to get expensive to create and destroy threads all the time, Java now includes Executors to create re-usable thread pools. Tasks can be assigned to the executors, and the result can be retrieved via a Future object.
Typically, one has a task which can be divided into smaller tasks, but the end results need to be brought back together. For example, with a merge sort, one can divide the list into smaller and smaller parts, until one has every core doing the sorting. However, as each sublist is sorted, it needs to be merged in order to get the final sorted list. Since this is "divide-and-conquer" issue is fairly common, there is a JSR framework which can handle the underlying distribution and joining. This framework will likely be included in Java 7.
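As an illustration of the thread-pool approach (my own sketch with made-up numbers, not the poster's code), sizing the pool to the number of available processors and splitting a CPU-bound job into independent Callables lets the JVM and OS spread the work across cores:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        List<Callable<Long>> chunks = new ArrayList<>();
        for (int c = 0; c < cores; c++) {
            final int offset = c;
            chunks.add(() -> {
                long sum = 0;
                // each task handles a disjoint slice of the work, so tasks never contend
                for (long i = offset; i < 10_000_000L; i += cores) {
                    sum += i;
                }
                return sum;
            });
        }

        long total = 0;
        for (Future<Long> f : pool.invokeAll(chunks)) {
            total += f.get();
        }
        pool.shutdown();
        System.out.println("total = " + total);
    }
}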
Anyone have a good rule of thumb for choosing between different implementations of Java Collection interfaces like List, Map, or Set?
For example, generally why or in what cases would I prefer to use a Vector or an ArrayList, a Hashtable or a HashMap?
About your first question...
List, Map and Set serve different purposes. I suggest reading about the Java Collections Framework.
To be a bit more concrete: use a List when you need an ordered collection that may contain duplicates, a Set when you need a collection with no duplicates, and a Map when you need to look up values by key.
About your second question...
The main difference between Vector and ArrayList is that the former is synchronized, the latter is not synchronized. You can read more about synchronization in Java Concurrency in Practice.
The difference between Hashtable (note that the T is not a capital letter) and HashMap is similiar, the former is synchronized, the latter is not synchronized.
I would say that there is no rule of thumb for preferring one implementation over another; it really depends on your needs.
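For a quick feel of the trade-off between the synchronized and unsynchronized variants (illustrative sketch only), the usual modern choices look like this:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CollectionChoices {
    public static void main(String[] args) {
        // Single-threaded (or externally synchronized) use: the plain implementation
        Map<String, Integer> plain = new HashMap<>();

        // Legacy-style synchronization (what Hashtable/Vector give you): every call locks the whole map
        Map<String, Integer> wrapped = Collections.synchronizedMap(new HashMap<>());

        // Concurrent use: finer-grained locking and better scalability than a synchronized wrapper
        Map<String, Integer> concurrent = new ConcurrentHashMap<>();

        plain.put("a", 1);
        wrapped.put("b", 2);
        concurrent.put("c", 3);
        System.out.println(plain + " " + wrapped + " " + concurrent);
    }
}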
I wrote the Singleton class below. I am not sure whether this is a thread-safe singleton class or not?
public class CassandraAstyanaxConnection {
    private static CassandraAstyanaxConnection _instance;
    private AstyanaxContext<Keyspace> context;
    private Keyspace keyspace;
    private ColumnFamily<String, String> emp_cf;

    public static synchronized CassandraAstyanaxConnection getInstance() {
        if (_instance == null) {
            _instance = new CassandraAstyanaxConnection();
        }
        return _instance;
    }
    /** ... */

Help me with this? Any thoughts on my above Singleton class will be of great help.
Updated Code:-
I am trying to incorporate Bohemian's suggestion in my code. Here is the updated code I got:
public class CassandraAstyanaxConnection {
    private static class ConnectionHolder {
        static final CassandraAstyanaxConnection connection = new CassandraAstyanaxConnection();
    }

    public static CassandraAstyanaxConnection getInstance() {
        return ConnectionHolder.connection;
    }
    /** ... */

Take a look and let me know if this time I got it right or not?
Thanks for the help.
No, it's not thread-safe if the values returned from the public methods are changeable objects.
For this class to be thread-safe, one way is to change it to be immutable.
To do that, you could change these methods like this:
public Keyspace getKeyspace() {
    // make a copy to prevent external users from modifying it, or ensure that Keyspace is
    // immutable; in that case you don't have to make a copy
    return new Keyspace( keyspace );
}

public ColumnFamily<String, String> getEmp_cf() {
    // Same principle here. If ColumnFamily is immutable, you don't have to make a copy.
    // If it's not, then make a copy
    return new ColumnFamily( emp_cf );
}
In the book Java Concurrency in Practice you can see the principle of immutability.
I am still pretty new to the concept of threading, and am trying to understand more about it. Recently, I came across a blog post on What Volatile Means in Java by Jeremy Manson, where he writes:
When one thread writes to a volatile variable, and another thread sees that write, the first thread is telling the second about all of the contents of memory up until it performed the write to that volatile variable. [...] all of the memory contents seen by Thread 1, before it wrote to
[volatile] ready, must be visible to Thread 2, after it reads the value
true for ready. [emphasis added by myself]
Now, does that mean that all variables (volatile or not) held in Thread 1's memory at the time of the write to the volatile variable will become visible to Thread 2 after it reads that volatile variable? If so, is it possible to puzzle that statement together from the official Java documentation/Oracle sources? And from which version of Java onwards will this work?
In particular, if all Threads share the following class variables:
private String s = "running"; private volatile boolean b = false;
And Thread 1 executes the following first:
s = "done"; b = true;
And Thread 2 then executes afterwards (after Thread 1 wrote to the volatile field):
boolean flag = b; // read from volatile
System.out.println(s);
Would this be guaranteed to print "done"?
What would happen if instead of declaring
b as
volatile I put the write and read into a
synchronized block?
Additionally, in a discussion entitled "Are static variables shared between threads?", @TREE writes:
Don't use volatile to protect more than one piece of shared state.
Why? (Sorry; I can't comment yet on other questions, or I would have asked there...)
Would this be guaranteed to print "done"?
As said in Java Concurrency in Practice:
When thread A writes to a volatile variable and subsequently thread B reads that same variable, the values of all variables that were visible to A prior to writing to the volatile variable become visible to B after reading the volatile variable.
So YES, this is guaranteed to print "done".
What would happen if instead of declaring b as volatile I put the write and read into a synchronized block?
This too will guarantee the same.
Don't use volatile to protect more than one piece of shared state.
Why?
Because volatile guarantees only visibility; it doesn't guarantee atomicity. If we have two volatile writes in a method which is being accessed by a thread A, and another thread B is accessing those volatile variables, then while thread A is executing the method it might be possible that thread A will be preempted by thread B in the middle of the operations (e.g. after the first volatile write but before the second volatile write by thread A). So to guarantee the atomicity of the operation, synchronization is the most feasible way out.
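To make the visibility-versus-atomicity distinction concrete, here is a small sketch of my own (field names invented): a volatile flag is fine for publishing a single value, but a volatile counter still loses updates because ++ is a read-modify-write, so a compound update needs synchronization or an atomic class:

import java.util.concurrent.atomic.AtomicInteger;

public class VisibilityVsAtomicity {
    private volatile boolean ready = false;            // good volatile use: a simple flag
    private volatile int brokenCounter = 0;            // broken as a counter: ++ is not atomic
    private final AtomicInteger counter = new AtomicInteger(); // safe counter under contention

    void writer() {
        counter.incrementAndGet(); // atomic read-modify-write
        brokenCounter++;           // two threads here can lose updates despite volatile
        ready = true;              // single volatile write publishes the state above
    }

    void reader() {
        if (ready) {
            System.out.println(counter.get() + " vs " + brokenCounter);
        }
    }
}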
I saw some examples in Java where they do synchronization on a block of code to change some variable while that variable was declared volatile originally. I saw that in an example of a singleton class where they declared the unique instance as volatile and they synchronized the block that initializes that instance. My question is: why do we declare it volatile while we synchronize on it? Why do we need to do both? Isn't one of them sufficient for the other?
public class someClass {
    volatile static someClass uniqueInstance = null;

    public static someClass getInstance() {
        if (uniqueInstance == null) {
            synchronized (someClass.class) {
                if (uniqueInstance == null) {
                    uniqueInstance = new someClass();
                }
            }
        }
        return uniqueInstance;
    }
}
thanks in advance.
This post explains the idea behind volatile.
It is also addressed in the seminal work, Java Concurrency in Practice.
The main idea is that concurrency not only involves protection of shared state but also the visibility of that state between threads: this is where volatile comes in. (This larger contract is defined by the Java Memory Model.)
For years and years, I've tried to understand the part of Java specification that deals with memory model and concurrency. I have to admit that I've failed miserably. Yes' I understand about locks and "synchronized" and wait() and notify(). And I can use them just fine, thank you. I even have a vague idea about what "volatile" does. But all of that was not derived from the language spec - rather from general experience.
Here are two sample questions that I am asking. I am not so much interested in particular answers, as I need to understand how the answers are derived from the spec (or may be how I conclude that the spec has no answer).
I won't try to explain these issues here but instead refer you to Brian Goetz's excellent book on the subject.
The book is "Java Concurrency in Practice", can be found at Amazon or any other well sorted store for computer literature.
I'm not going to attempt to actually answer your questions here - instead I'll redirect you to the book which I have seen recommended for advice on this topic: Java Concurrency in Practice.
One word of warning: if there are answers here, expect quite a few of them to be wrong. One of the reasons I'm not going to post details is because I'm pretty sure I'd get it wrong in at least some respects. I mean no disrespect whatsoever to the community when I say that the chances of everyone who thinks they can answer this question actually having enough rigour to get it right is practically zero. (Joe Duffy recently found a bit of the .NET memory model that he was surprised by. If he can get it wrong, so can mortals like us.)
I will offer some insight on just one aspect, because it's often misunderstood:
There's a difference between volatility and atomicity. People often think that an atomic write is volatile (i.e. you don't need to worry about the memory model if the write is atomic). That's not true.
Volatility is about whether one thread performing a read (logically, in the source code) will "see" changes made by another thread.
Atomicity is about whether there is any chance that if a change is seen, only part of the change will be seen.
For instance, take writing to an integer field. That is guaranteed to be atomic, but not volatile. That means that if we have (starting at foo.x = 0):
Thread 1: foo.x = 257; Thread 2: int y = foo.x;
It's possible for
y to be 0 or 257. It won't be any other value, (e.g. 256 or 1) due to the atomicity constraint. However, even if you know that in "wall time" the code in thread 2 executed after the code in thread 1, there could be odd caching, memory accesses "moving" etc. Making the variable
x volatile will fix this.
I'll leave the rest up to real honest-to-goodness experts.
I am applying my new found knowledge of threading everywhere and getting lots of surprises
Example:
I used threads to add numbers in an array. And the outcome was different every time. The problem was that all of my threads were updating the same variable and were not synchronized.
sidenote:
(I renamed my program
thread_add.java to
thread_random_number_generator.java:-)
I notice you are writing in java and that nobody else mentioned books so Java Concurrency In Practice should be your multi-threaded bible.
The following piece of code tries to accomplish this.
The code loops forever and checks if there are any pending requests to be processed. If there are any, it creates a new thread to process the request and submits it to the executor. Once all the threads are done, it sleeps for 60 seconds and again checks for pending requests.
public static void main(String a[]) {
    //variables init code omitted
    ExecutorService service = Executors.newFixedThreadPool(15);
    ExecutorCompletionService<Long> comp = new ExecutorCompletionService<Long>(service);
    while (true) {
        List<AppRequest> pending = service.findPendingRequests();
        int noPending = pending.size();
        if (noPending > 0) {
            for (AppRequest req : pending) {
                Callable<Long> worker = new RequestThread(something, req);
                comp.submit(worker);
            }
        }
        for (int i = 0; i < noPending; i++) {
            try {
                Future<Long> f = comp.take();
                long name;
                try {
                    name = f.get();
                    LOGGER.debug(name + " got completed");
                } catch (ExecutionException e) {
                    LOGGER.error(e.toString());
                }
            } catch (InterruptedException e) {
                LOGGER.error(e.toString());
            }
        }
        TimeUnit.SECONDS.sleep(60);
    }
}
My question is: most of the processing done by these threads deals with the database. And this program will run on a Windows machine. What happens to these threads when someone tries to shut down or log off the machine? How do I gracefully shut down the running threads and also the executor?
The book "Java Concurrency in Practice" states:
7.4. JVM Shutdown
The JVM can shut down in either an orderly or abrupt manner. An orderly shutdown is initiated when the last "normal" (nondaemon) thread terminates, someone calls System.exit, or by other platform-specific means (such as sending a SIGINT or hitting Ctrl-C). [...]. [...]
The important bits are, "The JVM makes no attempt to stop or interrupt any application threads that are still running at shutdown time; they are abruptly terminated when the JVM eventually halts." so I suppose the connection to the DB will abruptly terminate, if no shutdown hooks are there to do a graceful clean up (if you are using frameworks, they usually do provide such shutdown hooks). In my experience, session to the DB can remain until it is timed out by the DB, etc. when the app. is terminated without such hooks.
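A hedged sketch of one way to clean up (names and timeouts are made up; note that a shutdown hook only runs on an orderly shutdown such as Ctrl-C or a normal JVM exit, not when the process is killed abruptly):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    public static void main(String[] args) {
        ExecutorService service = Executors.newFixedThreadPool(15);

        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            service.shutdown(); // stop accepting new work and let running tasks finish
            try {
                if (!service.awaitTermination(30, TimeUnit.SECONDS)) {
                    service.shutdownNow(); // interrupt whatever is still running
                }
            } catch (InterruptedException e) {
                service.shutdownNow();
                Thread.currentThread().interrupt();
            }
            // close database connections / pools here as well
        }));

        // ... submit work to 'service' as in the question ...
    }
}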
Suppose I have a static complex object that gets periodically updated by a pool of threads, and read more or less continually in a long-running thread. The object itself is always immutable and reflects the most recent state of something.
class Foo {
    int a, b;
}

static Foo theFoo;

void updateFoo(int newA, int newB) {
    f = new Foo();
    f.a = newA;
    f.b = newB;
    // HERE
    theFoo = f;
}

void readFoo() {
    Foo f = theFoo;
    // use f...
}
I do not care in the least whether my reader sees the old or the new Foo, however I need to see a fully initialized object. IIUC, The Java spec says that without a memory barrier in HERE, I may see an object with f.b initialized but f.a not yet committed to memory. My program is a real-world program that will sooner or later commit stuff to memory, so I don't need to actually commit the new value of theFoo to memory right away (though it wouldn't hurt).
What do you think is the most readable way to implement the memory barrier ? I am willing to pay a little performance price for the sake of readability if need be. I think I can just synchronize the assignment to Foo and that would work, but I'm not sure it's very obvious to someone reading the code why I do that. I could also synchronize the whole initialization of the new Foo, but that would introduce more locking that actually needed.
How would you write it so that it's as readable as possible ?
Bonus kudos for a Scala version :)
If Foo is immutable, simply making the fields final will ensure complete initialization and consistent visibility of fields to all threads irrespective of synchronization.
If Foo is immutable, publication via volatile theFoo or AtomicReference<Foo> theFoo is sufficient to ensure that writes to its fields are visible to any thread reading via the theFoo reference.
Without safe publication of theFoo, reader threads are never guaranteed to see any update.
In terms of clarity, AtomicReference<Foo> is preferred, with explicit synchronization coming in second, and use of volatile coming in third.
volatile
I blame you. Now I'm hooked, I've broken out JCiP, and now I'm wondering if any code I've ever written is correct. The code snippet above is, in fact, potentially inconsistent. (Edit: see the section below on Safe publication via volatile.)
The reading thread could also see stale values (in this case, whatever the default values for a and b were) for unbounded time. You can do one of the following to introduce a happens-before edge:
Publish the reference via volatile, which creates a happens-before edge equivalent to a monitorenter (read side) or monitorexit (write side)
Use final fields and initialize the values in a constructor before publication
Introduce synchronization when writing/reading the theFoo object
Use AtomicInteger fields
These get the write ordering solved (and solve their visibility issues). Then you need to address visibility of the new
theFoo reference. Here,
volatile is appropriate -- JCiP says in section 3.1.4 "Volatile variables", (and here, the variable is
theFoo):
If you do the following, you're golden:
class Foo {
    // it turns out these fields may not be final, with the volatile publish,
    // the values will be seen under the new JMM
    final int a, b;
    Foo(final int a, final int b) {
        this.a = a;
        this.b = b;
    }
}

// without volatile here, separate threads A' calling readFoo()
// may never see the new theFoo value, written by thread A
static volatile Foo theFoo;

void updateFoo(int newA, int newB) {
    f = new Foo(newA, newB);
    theFoo = f;
}

void readFoo() {
    final Foo f = theFoo;
    // use f...
}
Several folks on this and other threads (thanks @John V) note that the authorities on these issues emphasize the importance of documentation of synchronization behavior and assumptions. JCiP talks in detail about this, provides a set of annotations that can be used for documentation and static checking, and you can also look at the JMM Cookbook for indicators about specific behaviors that would require documentation and links to the appropriate references. Doug Lea has also prepared a list of issues to consider when documenting concurrency behavior. Documentation is appropriate particularly because of the concern, skepticism, and confusion surrounding concurrency issues (on SO: "Has java concurrency cynicism gone too far?"). Also, tools like FindBugs are now providing static checking rules to notice violations of JCiP annotation semantics, like "Inconsistent Synchronization: IS_FIELD-NOT_GUARDED".
Until you think you have a reason to do otherwise, it's probably best to proceed with the most readable solution, something like this (thanks, @Burleigh Bear), using the
@Immutable and
@GuardedBy annotations.
@Immutable
class Foo {
    final int a, b;
    Foo(final int a, final int b) {
        this.a = a;
        this.b = b;
    }
}

static final Object theFooSync = new Object();

@GuardedBy("theFooSync")
static Foo theFoo;

void updateFoo(final int newA, final int newB) {
    f = new Foo(newA, newB);
    synchronized (theFooSync) {
        theFoo = f;
    }
}

void readFoo() {
    final Foo f;
    synchronized (theFooSync) {
        f = theFoo;
    }
    // use f...
}
or, possibly, since it's cleaner:
static AtomicReference<Foo> theFoo; void updateFoo(final int newA, final int newB) { theFoo.set(new Foo(newA,newB)); } void readFoo() { Foo f = theFoo.get(); ... }
volatile
First, note that this question pertains to the question here, but has been addressed many, many times on SO:
In fact, a google search: "site:stackoverflow.com +java +volatile +keyword" returns 355 distinct results. Use of
volatile is, at best, a volatile decision. When is it appropriate? The JCiP gives some abstract guidance (cited above). I'll collect some more practical guidelines here:
"volatile can be used to safely publish immutable objects", which neatly encapsulates most of the range of use one might expect from an application programmer.
"volatile is most useful in lock-free algorithms" summarizes another class of uses: special purpose, lock-free algorithms which are sufficiently performance sensitive to merit careful analysis and validation by an expert.
Following up on @Jed Wesley-Smith, it appears that
volatile now provides stronger guarantees (since JSR-133), and the earlier assertion "You can use
volatile provided the object published is immutable" is sufficient but perhaps not necessary.
Looking at the JMM FAQ, the two entries How do final fields work under the new JMM? and What does volatile do? aren't really dealt with together, but I think the second gives us what we need.
I'll note that, despite several rereadings of JCiP, the relevant text there didn't leap out to me until Jed pointed it out. It's on p. 38, section 3.1.4, and it says more or less the same thing as this preceding quote -- the published object need only be effectively immutable, no
final fields required, QED.
One comment: Any reason why
newA and
newB can't be arguments to the constructor? Then you can rely on publication rules for constructors...
Also, using an
AtomicReference likely clears up any uncertainty (and may buy you other benefits depending on what you need to get done in the rest of the class...) Also, someone smarter than me can tell you if
volatile would solve this, but it always seems cryptic to me...
In further review, I believe that the comment from @Burleigh Bear above is correct --- (EDIT: see below)
you actually don't have to worry about out-of-sequence ordering here, since you are publishing a new object to
theFoo. While another thread could conceivably see inconsistent values for
newA and
newB as described in JLS 17.11, that can't happen here because they will be committed to memory before the other thread gets ahold of a reference to the new
f = new Foo() instance you've created... this is safe one-time publication. On the other hand, if you wrote
void updateFoo(int newA, int newB) { f = new Foo(); theFoo = f; f.a = newA; f.b = newB; }
But in that case the synchronization issues are fairly transparent, and ordering is the least of your worries. For some useful guidance on volatile, take a look at this developerWorks article.
However, you may have an issue where separate reader threads can see the old value for
theFoo for unbounded amounts of time. In practice, this seldom happens. However, the JVM may be allowed to cache away the value of the
theFoo reference in another thread's context. I'm quite sure marking
theFoo as
volatile will address this, as will any kind of synchronizer or
AtomicReference.
How do threads that rely on one another communicate in Java?
For example, I am building a web crawler with threads that need data that comes from other threads.
That depends on the nature of the communication.
The simplest and most advisable form of inter-thread communication is simply to wait for the completion of other threads. That's most easily done by using
Future:
ExecutorService exec = Executors.newFixedThreadPool(50);

final Future f = exec.submit(task1);

exec.submit(new Runnable() {
    @Override
    public void run() {
        f.get();
        // do stuff
    }
});
The second task won't execute until the first completes.
Java 5+ has many concurrent utilities for dealing with this kind of thing. This could mean using
LinkedBlockingQueues,
CountDownLatch or many, many others.
For an in-depth examination of concurrency Java Concurrency in Practice is a must-read.
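For the crawler case specifically, a producer/consumer hand-off through a BlockingQueue is the usual building block. The sketch below is illustrative only (URLs and thread structure are invented); put() and take() block, so the threads coordinate without explicit locks:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CrawlerPipeline {
    public static void main(String[] args) {
        BlockingQueue<String> urls = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> {
            try {
                urls.put("http://example.com/a"); // would block if a bounded queue were full
                urls.put("http://example.com/b");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String url = urls.take(); // blocks until a URL is available
                    System.out.println("fetching " + url);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}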
Many people at SO adviced to dive into Java concurrency by reading Java Concurrency in Practice (JCIP), sometimes Doug Lea's book of 1999 is mentioned as well:
After reading JCIP, I still feel the need for recapitulation/consolidation of the topic. This is mainly because I feel the lack of examples in JCIP, however the book touches almost all aspects of Java multithreading.
Can you recommend any book / resources that would supplement JCIP by lots of examples of java.util.concurrent.* usage?
Any advice or links are welcome. Thanks a lot.
We have developed an Android Application which involves a service in the background. To implement this background service we have used
IntentService. We want the application to poll the server every
60 seconds. So in the
IntentService, the server is polled in a while loop. At the end of the while loop we have used
Thread.sleep(60000) so that the next iteration starts only after 60 seconds.
But in the
Logcat, I see that sometimes it takes the application more than 5 minutes to wake up (come out of that sleep and start the next iteration). It is never
1 minute as we want it to be.
What is the reason for this? Should background Services be implemented in a different way?
Problem2
Android kills this background process (intent service) after sometime. Can't exactly say when. But sometimes its hours and sometimes days before the background service gets killed. I would appreciate it if you would tell me the reason for this. Because Services are not meant to be killed. They are meant to run in background as long as we want it to.
Code :
@Override
protected void onHandleIntent(Intent intent) {
    boolean temp = true;
    while (temp == true) {
        try {
            //connect to the server
            //get the data and store it in the sqlite data base
        } catch (Exception e) {
            Log.v("Exception", "in while loop : " + e.toString());
        }

        //Sleep for 60 seconds
        Log.v("Sleeping", "Sleeping");
        Thread.sleep(60000);
        Log.v("Woke up", "Woke up");

        //After this a value is extracted from a table
        final Cursor cur = db.query("run_in_bg", null, null, null, null, null, null);
        cur.moveToLast();
        String present_value = cur.getString(0);

        if (present_value == null) {
            //Do nothing, let the while loop continue
        } else if (present_value.equals("false") || present_value.equals("False")) {
            //break out of the while loop
            db.close();
            temp = false;
            Log.v("run_in_bg", "false");
            Log.v("run_in_bg", "exiting while loop");
            break;
        }
    }
}
But whenever the service is killed, it happens when the process is asleep. The last log reads -
Sleeping : Sleeping. Why does the service gets killed?
You could use ScheduledExecutorService designed specifically for such purpose.
Don't use Timers, as demonstrated in "Java Concurrency in Practice" they can be very inaccurate.
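A minimal sketch of the scheduled approach (the interval and task body are placeholders; on Android, AlarmManager is often suggested instead for long intervals, but the scheduling idea is the same):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Poller {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Poll every 60 seconds without a hand-written while/sleep loop.
        scheduler.scheduleWithFixedDelay(() -> {
            try {
                System.out.println("polling server...");
                // connect to the server, store the data, etc.
            } catch (Exception e) {
                // swallow so that one failed poll doesn't cancel the whole schedule
                e.printStackTrace();
            }
        }, 0, 60, TimeUnit.SECONDS);
    }
}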
Question: How can I make sure my application is thread-safe? Are there any common practices, testing methods, things to avoid, things to look for?
Background: I'm currently developing a server application that performs a number of background tasks in different threads and communicates with clients using Indy (using another bunch of automatically generated threads for the communication). Since the application should be highly available, a program crash is a very bad thing and I want to make sure that the application is thread-safe. No matter what, from time to time I discover a piece of code that throws an exception that never occurred before and in most cases I realize that it is some kind of synchronization bug, where I forgot to synchronize my objects properly. Hence my question concerning best practices, testing of thread-safety and things like that.
mghie: Thanks for the answer! I should perhaps be a little bit more precise. Just to be clear, I know about the principles of multithreading, I use synchronization (monitors) throughout my program and I know how to differentiate threading problems from other implementation problems. But nevertheless, I keep forgetting to add proper synchronization from time to time. Just to give an example, I used the RTL sort function in my code. Looked something like
FKeyList.Sort (CompareKeysFunc);
Turns out that I had to synchronize FKeyList while sorting. It just didn't come to my mind when initially writing that simple line of code. It's these things I want to talk about. What are the places where one easily forgets to add synchronization code? How do YOU make sure that you added sync code in all the important places?
M2C - Java Concurrency in Practice is really good.
I'm trying to get a handle on how to implement threading in a Java application that uses Spring for transaction management. I've found the TaskExecutor section in the Spring documentation, and ThreadPoolTaskExecutor looks like it would fit my needs;
ThreadPoolTaskExecutor.
However, I have no idea how to go about using it. I've been searching for good examples for a while now with no luck. If anyone can help me out I would appreciate it.
Have a look through the Brian Goetz web site. His book Java Concurrency in Practice covers the java.util.concurrent package really well.
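As a rough sketch (not taken from the book or the Spring docs verbatim; the pool sizes and tasks are made up), configuring and using a ThreadPoolTaskExecutor programmatically might look like this:

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class TaskExecutorExample {
    public static void main(String[] args) {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(100);
        executor.initialize();

        // Hand the executor some placeholder work.
        for (int i = 0; i < 10; i++) {
            final int id = i;
            executor.execute(() -> System.out.println(
                    "Task " + id + " on " + Thread.currentThread().getName()));
        }

        executor.shutdown();
    }
}

In a Spring application you would normally define the executor as a bean and inject it, rather than constructing it by hand as above.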
Here's the deal. I have a hash map containing data I call "program codes", it lives in an object, like so:
class Metadata {
    private HashMap validProgramCodes;

    public HashMap getValidProgramCodes() {
        return validProgramCodes;
    }

    public void setValidProgramCodes(HashMap h) {
        validProgramCodes = h;
    }
}
I have lots and lots of reader threads each of which will call getValidProgramCodes() once and then use that hashmap as a read-only resource.
So far so good. Here's where we get interesting.
I want to put in a timer which every so often generates a new list of valid program codes (never mind how), and calls setValidProgramCodes.
My theory -- which I need help to validate -- is that I can continue using the code as is, without putting in explicit synchronization. It goes like this: At the time that validProgramCodes are updated, the value of validProgramCodes is always good -- it is a pointer to either the new or the old hashmap. This is the assumption upon which everything hinges. A reader who has the old hashmap is okay; he can continue to use the old value, as it will not be garbage collected until he releases it. Each reader is transient; it will die soon and be replaced by a new one who will pick up the new value.
Does this hold water? My main goal is to avoid costly synchronization and blocking in the overwhelming majority of cases where no update is happening. We only update once per hour or so, and readers are constantly flickering in and out.
No, the code example is not safe, because there is no safe publication of any new HashMap instances. Without any synchronization, there is a possibility that a reader thread will see a partially initialized HashMap.
Check out @erickson's explanation under "Reordering" in his answer. Also I can't recommend Brian Goetz's book Java Concurrency in Practice enough!
Whether or not it is okay with you that reader threads might see old (stale) HashMap references, or might even never see a new reference, is beside the point. The worst thing that can happen is that a reader thread might obtain reference to and attempt to access a HashMap instance that is not yet initialized and not ready to be accessed.
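As a minimal sketch of the fix (reusing the names from the question; the generic types are assumptions), making the field volatile and publishing a fully built, read-only map gives you the safe publication being described:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class Metadata {
    // volatile guarantees readers see a fully constructed map, never a partial one
    private volatile Map<String, String> validProgramCodes = new HashMap<>();

    public Map<String, String> getValidProgramCodes() {
        return validProgramCodes;
    }

    // Called by the timer: build the new map completely, then publish it in one write.
    public void setValidProgramCodes(Map<String, String> newCodes) {
        validProgramCodes = Collections.unmodifiableMap(new HashMap<>(newCodes));
    }
}

An AtomicReference<Map<String, String>> would work equally well; the important part is that the swap is a single write of a reference to an already complete object.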
I'm midway through programming a Java program, and I'm at the stage where I'm debugging far more concurrency issues than I'd like to be dealing with.
I have to ask: how do you deal with concurrency issues when setting out your program mentally? In my case, it's for a relatively simple game, yet issues with threads keep popping up - any quick-fix almost certainly leads to a new issue.
Speaking in very general terms, what techniques should I use when deciding how my application should 'flow' with out all my threads getting in a knot?
Read up on concurrency, or better yet take a graduate-level course on concurrent programming if you are still in college. See The Java Tutorials: Lesson: Concurrency. One famous book for Java concurrency is Java Concurrency in Practice. Java has a lot built into the framework to deal with concurrency issues, including concurrent collections and
synchronized methods.
In what cases is it necessary to synchronize access to instance members? I understand that access to static members of a class always needs to be synchronized- because they are shared across all object instances of the class.
My question is when would I be incorrect if I do not synchronize instance members?
for example if my class is
public class MyClass {
    private int instanceVar = 0;

    public void setInstanceVar() {
        instanceVar++;
    }

    public int getInstanceVar() {
        return instanceVar;
    }
}
in what cases (of usage of the class
MyClass) would I need to have methods:
public synchronized void setInstanceVar() and
public synchronized int getInstanceVar()?
Thanks in advance for your answers.
Roughly, the answer is "it depends". Synchronizing your setter and getter here would only have the intended purpose of guaranteeing that multiple threads couldn't read variables between each other's increment operations:
synchronized void increment() { i++; }
synchronized int get() { return i; }
but that wouldn't really even work here, because to ensure that your caller thread got the same value it incremented, you'd have to guarantee that you're atomically incrementing and then retrieving, which you're not doing here - i.e. you'd have to do something like
synchronized int incrementAndGet() {
    increment();
    return get();
}
Basically, synchronization is useful for defining which operations need to be guaranteed to run threadsafe (in other words, you can't create a situation where a separate thread undermines your operation and makes your class behave illogically, or undermines what you expect the state of the data to be). It's actually a bigger topic than can be addressed here.
This book Java Concurrency in Practice is excellent, and certainly much more reliable than me.
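For what it's worth, here is a minimal sketch of the atomic increment-and-read idea using AtomicInteger, which sidesteps hand-rolled synchronization entirely:

import java.util.concurrent.atomic.AtomicInteger;

public class MyClass {
    private final AtomicInteger instanceVar = new AtomicInteger(0);

    // Increments and returns the new value in one atomic step.
    public int incrementAndGetInstanceVar() {
        return instanceVar.incrementAndGet();
    }

    public int getInstanceVar() {
        return instanceVar.get();
    }
}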
My Rails web app has dozens of methods from making calls to an API and processing query result. These methods have the following structure:
def method_one
  batch_query_API
  process_data
end

..........

def method_nth
  batch_query_API
  process_data
end

def summary
  method_one
  ......
  method_nth
  collect_results
end
How can I run all query methods at the same time instead of sequential in Rails (without firing up multiple workers, of course)?
Edit: all of the methods are called from a single instance variable. I think this limits the use of Sidekiq or Delay in submitting jobs simultaneously.
Assuming that your problem is a slow external API, a solution could be the use of either threaded programming or asynchronous programming. By default when doing IO, your code will block. This basically means that if you have a method that does an HTTP request to retrieve some JSON your method will tell your operating system that you're going to sleep and you don't want to be woken up until the operating system has a response to that request. Since that can take several seconds, your application will just idly have to wait.
This behavior is not specific to just HTTP requests. Reading from a file or a device such as a webcam has the same implications. Software does this to prevent hogging up the CPU when it obviously has no use of it.
So the question in your case is: Do we really have to wait for one method to finish before we can call another? In the event that the behavior of
method_two is dependent on the outcome of
method_one, then yes. But in your case, it seems that they are individual units of work without co-dependence. So there is a potential for concurrency execution.
You can start new threads by initializing an instance of the Thread class with a block that contains the code you'd like to run. Think of a thread as a program inside your program. Your Ruby interpreter will automatically alternate between the thread and your main program. You can start as many threads as you'd like, but the more threads you create, the longer the turns your main program will have to wait before returning to execution. However, we are probably talking microseconds or less. Let's look at an example of threaded execution.
def main_method
  Thread.new { method_one }
  Thread.new { method_two }
  Thread.new { method_three }
end

def method_one
  # something_slow_that_does_an_http_request
end

def method_two
  # something_slow_that_does_an_http_request
end

def method_three
  # something_slow_that_does_an_http_request
end
Calling
main_method will cause all three methods to be executed in what appears to be parallel. In reality they are still being sequentially processed, but instead of going to sleep when
method_one blocks, Ruby will just return to the main thread and switch back to
method_one thread, when the OS has the input ready.
Assuming each method takes 2 ms to execute minus the wait for the response, that means all three methods are running after just 6 ms - practically instantly.
If we assume that a response takes 500 ms to complete, that means you can cut down your total execution time from 2 + 500 + 2 + 500 + 2 + 500 to just 2 + 2 + 2 + 500 - in other words from 1506 ms to just 506 ms.
It will feel like the methods are running simultaneously, but in fact they are just sleeping simultaneously.
In your case, however, you have a challenge because you have an operation that is dependent on the completion of a set of previous operations. In other words, if you have tasks A, B, C, D, E and F, then A, B, C, D and E can be performed simultaneously, but F cannot be performed until A, B, C, D and E are all complete.
There are different ways to solve this. Let's look at a simple solution, which is creating a sleepy loop in the main thread that periodically examines a list of return values to make sure some condition is fulfilled.
def task_1
  # Something slow
  return results
end

def task_2
  # Something slow
  return results
end

def task_3
  # Something slow
  return results
end

my_responses = {}

Thread.new { my_responses[:result_1] = task_1 }
Thread.new { my_responses[:result_2] = task_2 }
Thread.new { my_responses[:result_3] = task_3 }

# Prevents the main thread from continuing until the three spawned threads
# are done and have dumped their results in the hash.
while (my_responses.count < 3)
  # This will cause the main thread to sleep for 100 ms between each check.
  # Without it, you will end up checking the response count thousands of
  # times pr. second which is most likely unnecessary.
  sleep(0.1)
end

# Any code at this line will not execute until all three results are collected.
Keep in mind that multithreaded programming is a tricky subject with numerous pitfalls. With MRI it's not so bad, because while MRI will happily switch between blocked threads, MRI doesn't support executing two threads simultaneously and that solves quite a few concurrency concerns.
If you want to get into multithreaded programming, I recommend this book:
It's centered around Java, but the pitfalls and concepts explained are universal.
I am testing spawning off many threads running the same function on a 32 core server for Java and C#. I run the application with 1000 iterations of the function, which is batched across either 1,2,4,8, 16 or 32 threads using a threadpool.
At 1, 2, 4, 8 and 16 concurrent threads Java is at least twice as fast as C#. However, as the number of threads increases, the gap closes and by 32 threads C# has nearly the same average run-time, but Java occasionally takes 2000ms (whereas both languages are usually running about 400ms). Java is starting to get worse with massive spikes in the time taken per thread iteration.
EDIT This is Windows Server 2008
EDIT2 I have changed the code below to show using the Executor Service threadpool. I have also installed Java 7.
I have set the following optimisations in the hotspot VM:
-XX:+UseConcMarkSweepGC -Xmx 6000
but it still hasn't made things any better. The only difference between the code samples is that I'm using the below threadpool, and for the C# version we use:
Is there a way to make the Java more optimised? Perhaps you could explain why I am seeing this massive degradation in performance?
Is there a more efficient Java threadpool?
(Please note, I do not mean by changing the test function)
import java.io.DataOutputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.PrintStream; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.ThreadPoolExecutor; public class PoolDemo { static long FastestMemory = 2000000; static long SlowestMemory = 0; static long TotalTime; static int[] FileArray; static DataOutputStream outs; static FileOutputStream fout; static Byte myByte = 0; public static void main(String[] args) throws InterruptedException, FileNotFoundException { int Iterations = Integer.parseInt(args[0]); int ThreadSize = Integer.parseInt(args[1]); FileArray = new int[Iterations]; fout = new FileOutputStream("server_testing.csv"); // fixed pool, unlimited queue ExecutorService service = Executors.newFixedThreadPool(ThreadSize); ThreadPoolExecutor executor = (ThreadPoolExecutor) service; for(int i = 0; i<Iterations; i++) { Task t = new Task(i); executor.execute(t); } for(int j=0; j<FileArray.length; j++){ new PrintStream(fout).println(FileArray[j] + ","); } } private static class Task implements Runnable { private int ID; public Task(int index) { this.ID = index; } public void run() { long Start = System.currentTimeMillis(); int Size1 = 100000; int Size2 = 2 * Size1; int Size3 = Size1; byte[] list1 = new byte[Size1]; byte[] list2 = new byte[Size2]; byte[] list3 = new byte[Size3]; for(int i=0; i<Size1; i++){ list1[i] = myByte; } for (int i = 0; i < Size2; i=i+2) { list2[i] = myByte; } for (int i = 0; i < Size3; i++) { byte temp = list1[i]; byte temp2 = list2[i]; list3[i] = temp; list2[i] = temp; list1[i] = temp2; } long Finish = System.currentTimeMillis(); long Duration = Finish - Start; TotalTime += Duration; FileArray[this.ID] = (int)Duration; System.out.println("Individual Time " + this.ID + " \t: " + (Duration) + " ms"); if(Duration < FastestMemory){ FastestMemory = Duration; } if (Duration > SlowestMemory) { SlowestMemory = Duration; } } } }
Below are the original response, update 1, and update 2. Update 1 talks about dealing with the race conditions around the test statistic variables by using concurrency structures. Update 2 is a much simpler way of dealing with the race condition issue. Hopefully no more updates from me - sorry for the length of the response but multithreaded programming is complicated!
The only difference between the code is that im using the below threadpool
I would say that is an absolutely huge difference. It's difficult to compare the performance of the two languages when their thread pool implementations are completely different blocks of code, written in user space. The thread pool implementation could have enormous impact on performance.
You should consider using Java's own built-in thread pools. See ThreadPoolExecutor and the entire java.util.concurrent package of which it is part. The Executors class has convenient static factory methods for pools and is a good higher level interface. All you need is JDK 1.5+, though the newer, the better. The fork/join solutions mentioned by other posters are also part of this package - as mentioned, they require 1.7+.
You have race conditions around the setting of
FastestMemory,
SlowestMemory, and
TotalTime. For the first two, you are doing the
< and
> testing and then the setting in more than one step. This is not atomic; there is certainly the chance that another thread will update these values in between the testing and the setting. The
+= setting of
TotalTime is also non-atomic: a test and set in disguise.
Here are some suggested fixes.
TotalTime
The goal here is a threadsafe, atomic
+= of
TotalTime.
// At the top of everything
import java.util.concurrent.atomic.AtomicLong;
...
// In PoolDemo
static AtomicLong TotalTime = new AtomicLong();
...
// In Task, where you currently do the TotalTime += piece
TotalTime.addAndGet(Duration);
FastestMemory / SlowestMemory
The goal here is testing and updating
FastestMemory and
SlowestMemory each in an atomic step, so no thread can slip in between the test and update steps to cause a race condition.
Simplest approach:
Protect the testing and setting of the variables using the class itself as a monitor. We need a monitor that contains the variables in order to guarantee synchronized visibility (thanks @A.H. for catching this.) We have to use the class itself because everything is
static.
// In Task
synchronized (PoolDemo.class) {
    if (Duration < FastestMemory) {
        FastestMemory = Duration;
    }
    if (Duration > SlowestMemory) {
        SlowestMemory = Duration;
    }
}
Intermediate approach:
You may not like taking the whole class for the monitor, or exposing the monitor by using the class, etc. You could do a separate monitor that does not itself contain
FastestMemory and
SlowestMemory, but you will then run into synchronization visibility issues. You get around this by using the
volatile keyword.
// In PoolDemo
static Integer _monitor = new Integer(1);
static volatile long FastestMemory = 2000000;
static volatile long SlowestMemory = 0;
...
// In Task
synchronized (PoolDemo._monitor) {
    if (Duration < FastestMemory) {
        FastestMemory = Duration;
    }
    if (Duration > SlowestMemory) {
        SlowestMemory = Duration;
    }
}
Advanced approach:
Here we use the
java.util.concurrent.atomic classes instead of monitors. Under heavy contention, this should perform better than the
synchronized approach. Try it and see.
// At the top of everything
import java.util.concurrent.atomic.AtomicLong;
....
// In PoolDemo
static AtomicLong FastestMemory = new AtomicLong(2000000);
static AtomicLong SlowestMemory = new AtomicLong(0);
.....
// In Task
long temp = FastestMemory.get();
while (Duration < temp) {
    if (!FastestMemory.compareAndSet(temp, Duration)) {
        temp = FastestMemory.get();
    }
}

temp = SlowestMemory.get();
while (Duration > temp) {
    if (!SlowestMemory.compareAndSet(temp, Duration)) {
        temp = SlowestMemory.get();
    }
}
Let me know what happens after this. It may not fix your problem, but the race condition around the very variables that track your performance is too dangerous to ignore.
I originally posted this update as a comment but moved it here so that I would have room to show code. This update has been through a few iterations - thanks to A.H. for catching a bug I had in an earlier version. Anything in this update supersedes anything in the comment.
Last but not least, an excellent source covering all this material is Java Concurrency in Practice, the best book on Java concurrency, and one of the best Java books overall.
I recently noticed that your current code will never terminate unless you add
executorService.shutdown(). That is, the non-daemon threads living in that pool must be terminated or else the main thread will never exit. This got me to thinking that since we have to wait for all threads to exit, why not compare their durations after they finished, and thus bypass the concurrent updating of
FastestMemory, etc. altogether? This is simpler and could be faster; there's no more locking or CAS overhead, and you are already doing an iteration of
FileArray at the end of things anyway.
The other thing we can take advantage of is that your concurrent updating of
FileArray is perfectly safe, since each thread is writing to a separate cell, and since there is no reading of
FileArray during the writing of it.
With that, you make the following changes:
// In PoolDemo

// This part is the same, just so you know where we are
for (int i = 0; i < Iterations; i++) {
    Task t = new Task(i);
    executor.execute(t);
}

// CHANGES BEGIN HERE

// Will block till all tasks finish. Required regardless.
executor.shutdown();
executor.awaitTermination(10, TimeUnit.SECONDS);

for (int j = 0; j < FileArray.length; j++) {
    long duration = FileArray[j];
    TotalTime += duration;

    if (duration < FastestMemory) {
        FastestMemory = duration;
    }
    if (duration > SlowestMemory) {
        SlowestMemory = duration;
    }

    new PrintStream(fout).println(FileArray[j] + ",");
}

. . .

// In Task

// Ending of Task.run() now looks like this
long Finish = System.currentTimeMillis();
long Duration = Finish - Start;
FileArray[this.ID] = (int) Duration;
System.out.println("Individual Time " + this.ID + " \t: " + (Duration) + " ms");
Give this approach a shot as well.
You should definitely be checking your C# code for similar race conditions.
It looks like I have muddled up Java threads, OS threads, and interpreted languages.
Before I begin, I do understand that Green Threads are Java Threads where the threading is taken care of by the JVM and the entire Java process runs only as a single OS Thread. Thereby on a multi processor system it is useless.
Now to my question. I have two Threads A and B, each with 100 thousand lines of independent code. I run these threads in my Java program on a multiprocessor system. Each Thread will be given a native OS Thread to RUN which can run on a different CPU, but since Java is interpreted these threads will need to interact with the JVM again and again to convert the byte code to machine instructions? Am I right? If yes, then for smaller programs Java Threads won't be a big advantage?
Once the Hotspot compiles both these execution paths both can be as good as native Threads ? Am I right ?
[EDIT] : An alternate question can be, assume you have a single Java Thread whose code is not JIT compiled, you create that Thread and start() it ? How does the OS Thread and JVM interact to run that Bytecode ?
thanks
Each Thread will be given a native OS Thread to RUN which can run on a different CPU but since Java is interpreted these threads will require to interact with the JVM again and again to convert the byte code to machine instructions ? Am I right ?
You are mixing two different things: JIT done by the VM and the threading support offered by the VM. Deep down inside, everything you do translates to some sort of native code. A byte-code instruction which uses threads is no different from JIT'ed code which accesses threads.
If yes, than for smaller programs Java Threads wont be a big advantage ?
Define small here. For short lived processes, yes, threading doesn't make that big a difference since your sequential execution is fast enough. Note that this again depends on the problem being solved. For UI toolkits, no matter how small the application, some sort of threading/asynchronous execution is required to keep the UI responsive.
Threading also makes sense when you have things which can be run in parallel. A typical example would be doing heavy IO in one thread and computation in another. You really wouldn't want to block your processing just because your main thread is blocked doing IO.
Once the Hotspot compiles both these execution paths both can be as good as native Threads ? Am I right ?
See my first point.
Threading really isn't a silver bullet, esp when it comes to the common misconception of "use threads to make this code go faster". A bit of reading and experience will be your best bet. Can I recommend getting a copy of this awesome book? :-)
@Sanjay: In fact, now I can reframe my question. If I have a Thread whose code has not been JIT'd, how does the OS Thread execute it?
Again I'll say it, threading is a completely different concept from JIT. Let's try to look at the execution of a program in simple terms:
java pkg.MyClass -> VM locates method to be run -> Start executing the byte-code for method line by line -> convert each byte-code instruction to its native counterpart -> instruction executed by OS -> instruction executed by machine
When JIT has kicked in:
java pkg.MyClass -> VM locates method to be run which has been JIT'ed -> locate the associated native code for that method -> instruction executed by OS -> instruction executed by machine
As you can see, irrespective of the route you follow, the VM instruction has to be mapped to its native counterpart at some point in time. Whether that native code is stored for further re-use or thrown away is a different thing (optimization, remember?).
Hence to answer your question, whenever you write threading code, it is translated to native code and run by the OS. Whether that translation is done on the fly or looked up at that point in time is a completely different issue.
AtomicBoolean stores its value in:
private volatile int value;
Then, for example, extracting its value is done like this:
public final boolean get() { return value != 0; }
What is the reason behind it? Why
boolean was not used?
This probably is to be able to base several of the
Atomic classes on the same base (
Unsafe), which uses integer and provides the compare and swap operation.
Concurrency in Practice provides a good explanation of the inner workings.
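As a rough sketch of the idea (this is an illustration, not the actual JDK source), the boolean is simply encoded as 0/1 so that int-based compare-and-swap machinery can be reused:

import java.util.concurrent.atomic.AtomicInteger;

// Simplified illustration of layering a boolean on top of an int-based CAS.
public class SimpleAtomicBoolean {
    private final AtomicInteger value = new AtomicInteger(0);

    public boolean get() {
        return value.get() != 0;
    }

    public boolean compareAndSet(boolean expect, boolean update) {
        return value.compareAndSet(expect ? 1 : 0, update ? 1 : 0);
    }

    public void set(boolean newValue) {
        value.set(newValue ? 1 : 0);
    }
}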
Say I have two threads and an object. One thread assigns the object:
public void assign(MyObject o) { myObject = o; }
Another thread uses the object:
public void use() { myObject.use(); }
Does the variable myObject have to be declared as volatile? I am trying to understand when to use volatile and when not, and this is puzzling me. Is it possible that the second thread keeps a reference to an old object in its local memory cache? If not, why not?
Thanks a lot.
I am trying to understand when to use volatile and when not
You should mostly avoid using it. Use an AtomicReference instead (or another atomic class where appropriate). The memory effects are the same and the intent is much clearer.
I highly suggest reading the excellent Java Concurrency in Practice for a better understanding.
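A minimal sketch of the AtomicReference approach, assuming the assign/use methods and MyObject type from the question (the Holder class name is made up):

import java.util.concurrent.atomic.AtomicReference;

public class Holder {
    private final AtomicReference<MyObject> myObject = new AtomicReference<>();

    // Thread A publishes the object.
    public void assign(MyObject o) {
        myObject.set(o);
    }

    // Thread B always sees the most recently published object (or null if none yet).
    public void use() {
        MyObject o = myObject.get();
        if (o != null) {
            o.use();
        }
    }
}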
How do I make sure my java servlets web application is thread safe? What do I need to do in regards to session variables, static variables of a class, or anything else that could be a thread-safety problem?
Wow, that's a loaded question.
To put it simply, you need to ensure that access to any shared data is carefully synchronized. E.g., you may want to synchronize access to a static variable with a mutex or a synchronized function.
Note that you may also need to synchronize at higher levels if you need atomic transactions that modify multiple shared resources at the same time.
Designing a concurrent application is not simple, and there is no magic bullet (unfortunately). I highly recommend the book "Java Concurrency in Practice" for more information on writing safe concurrent code.
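As one small illustration (a made-up hit counter, not from the book), the usual pattern is to keep any state shared across requests in a thread-safe holder rather than a plain static or instance field:

import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CounterServlet extends HttpServlet {
    // Shared across all requests and threads, so it must be thread-safe.
    private static final AtomicLong hitCount = new AtomicLong();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        long count = hitCount.incrementAndGet();
        resp.getWriter().println("Hits so far: " + count);
    }
}

Session attributes deserve the same caution: the same session can be touched by concurrent requests, so treat anything you store there as shared state.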
I have a quadcore processor and I would really like to take advantage of all those cores when I'm running quick simulations. The problem is I'm only familiar with the small Linux cluster we have in the lab and I'm using Vista at home.
What sort of things do I want to look into for multicore programming with C or Java? What is the lingo that I want to google?
Thanks for the help.
I read that the following code is an example of "unsafe construction" as it allows the this reference to escape. I couldn't quite get how 'this' escapes. I am pretty new to the Java world. Can anyone help me understand this?
public class ThisEscape {
    public ThisEscape(EventSource source) {
        source.registerListener(new EventListener() {
            public void onEvent(Event e) {
                doSomething(e);
            }
        });
    }
}
I just had the exact same question while reading "Java Concurrency In Practice" by Brian Goetz.
Stephen C's answer (the accepted one) is excellent! I only wanted to add on top of that one more resource I discovered. It is from JavaSpecialists, where Dr. Heinz M. Kabutz analyzes exactly the code example that devnull posted. He explains what classes are generated (outer, inner) after compiling and how
this escapes. I found that explanation useful so I felt like sharing :)
issue192 (where he extends the example and provides a race condition.)
issue192b (where he explains what kind of classes are generated after compiling and how
this escapes.)
The example you have posted in your question comes from "Java Concurrency In Practice" by Brian Goetz et al. It is in section 3.2 "Publication and escape". I won't attempt to reproduce the details of that section here. (Go buy a copy for your bookshelf, or borrow a copy from your co-workers!)
The problem illustrated by the example code is that the constructor allows the reference to the object being constructed to "escape" before the constructor finishes creating the object. This is a problem for two reasons:
If the reference escapes, something can use the object before its constructor has completed the initialization and see it in an inconsistent (partly initialized) state. Even if the object escapes after initialization has completed, declaring a subclass can cause this to be violated.
According to JLS 17.5, final attributes of an object can be used safely without synchronization. However, this is only true if the object reference is not published (does not escape) before its constructor finished. If you break this rule, the result is an insidious concurrency bug that might bite you when the code is executed on a multi-core / multi-processor machines.
The
ThisEscape example is sneaky because the reference is escaping via the
this reference passed implicitly to the anonymous
EventListener class constructor. However, the same problems will arise if the reference is explicitly published too soon.
Here's an example to illustrate the problem of incompletely initialized objects:
public class Thing {
    public Thing(Leaker leaker) {
        leaker.leak(this);
    }
}

public class NamedThing extends Thing {
    private String name;

    public NamedThing(Leaker leaker, String name) {
        super(leaker);      // 'this' has already leaked by the time the next line runs
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
If the
Leaker.leak(...) method calls
getName() on the leaked object, it will get
null ... because at that point in time the object's constructor chain has not completed.
Here's an example to illustrate the unsafe publication problem for
final attributes.
public class Unsafe {
    public final int foo = 42;

    public Unsafe(Unsafe[] leak) {
        leak[0] = this;  // Unsafe publication
        // Make the "window of vulnerability" large
        for (long l = 0; l < /* very large */ ; l++) {
            ...
        }
    }
}

public class Main {
    public static void main(String[] args) {
        final Unsafe[] leak = new Unsafe[1];
        new Thread(new Runnable() {
            public void run() {
                Thread.yield();  // (or sleep for a bit)
                new Unsafe(leak);
            }
        }).start();

        while (true) {
            if (leak[0] != null) {
                if (leak[0].foo == 42) {
                    System.err.println("OK");
                } else {
                    System.err.println("OUCH!");
                }
                System.exit(0);
            }
        }
    }
}
Some runs of this application may print "OUCH!" instead of "OK", indicating that the main thread has observed the
Unsafe object in an "impossible" state due to unsafe publication via the
leak array. Whether this happens or not will depend on your JVM and your hardware platform.
Now this example is clearly artificial, but it is not difficult to imagine how this kind of thing can happen in real multi-threaded applications.
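For completeness, the usual fix is a private constructor plus a static factory method, so the listener is only registered after construction has finished. (This is a sketch from memory of the book's SafeListener idiom, reusing the EventSource/EventListener types from the question.)

public class SafeListener {
    private final EventListener listener;

    private SafeListener() {
        listener = new EventListener() {
            public void onEvent(Event e) {
                doSomething(e);
            }
        };
    }

    public static SafeListener newInstance(EventSource source) {
        SafeListener safe = new SafeListener();
        // 'safe' is fully constructed before its listener is published here.
        source.registerListener(safe.listener);
        return safe;
    }

    void doSomething(Event e) {
        // handle the event
    }
}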
I have a program that continually polls the database for change in value of some field. It runs in the background and currently uses a while(true) and a sleep() method to set the interval. I am wondering if this is a good practice? And, what could be a more efficient way to implement this? The program is meant to run at all times.
Consequently, the only way to stop the program is by issuing a kill on the process ID. The program could be in the middle of a JDBC call. How could I go about terminating it more gracefully? I understand that the best option would be to devise some kind of exit strategy by using a flag that will be periodically checked by the thread. But, I am unable to think of a way/condition of changing the value of this flag. Any ideas?
This is really too big an issue to answer completely in this format. Do yourself a favour and go buy Java Concurrency in Practice. There is no better resource for concurrency on the Java 5+ platform out there. There are whole chapters devoted to this subject.
On the subject of killing your process during a JDBC call, that should be fine. I believe there are issues with interrupting a JDBC call (in that you can't?) but that's a different issue.
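A minimal sketch of the flag idea (class and method names are made up), using a volatile flag plus interruption so a sleeping thread doesn't delay shutdown:

public class DatabasePoller implements Runnable {
    private volatile boolean running = true;
    private Thread worker;

    public void start() {
        worker = new Thread(this, "db-poller");
        worker.start();
    }

    // Call this from a shutdown hook instead of killing the process.
    public void stop() {
        running = false;
        worker.interrupt();   // wakes the thread if it is sleeping
    }

    @Override
    public void run() {
        while (running) {
            try {
                pollDatabase();
                Thread.sleep(60_000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // restore interrupt status
                running = false;
            }
        }
    }

    private void pollDatabase() {
        // JDBC call goes here
    }
}

Registering the stop call via Runtime.getRuntime().addShutdownHook(...) gives you a reasonably graceful exit even when the process receives a normal kill signal.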
I've just started looking at Java's
Executors class and the
newCachedThreadPool( ) method. According to the API, the resulting thread pool reuses existing
Thread objects for new tasks.
I'm a bit puzzled how this is implemented because I couldn't find any method in the
Thread API that lets you set the behaviour of an existing
Thread object.
For example, you can create a new
Thread from a
Runnable object, which makes the
Thread call the
Runnable's
run( ) method. However, there is no setter method in the
Thread API that takes a
Runnable as an argument.
I'd appreciate any pointers.
The threadpool has threads that look for runnable jobs. Instead of starting a new thread from the
Runnable the thread will just call the function
run(). So a thread in a
ThreadPool isn't created with the
Runnable you provide, but with one that just checks whether any tasks are ready to be executed and calls them directly.
So it would look something like this:
while (needsToKeepRunning()) {
    if (hasMoreTasks()) {
        getFirstTask().run();
    } else {
        waitForOtherTasks();
    }
}
Of course this is overly simplified; the real implementation, with the waiting, is much more elegant. A great source of information on how this really works can be found in Concurrency in Practice.
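A slightly more concrete sketch of that idea (simplified, nothing like the real ThreadPoolExecutor internals), where each pool thread loops over a shared BlockingQueue of submitted tasks:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TinyThreadPool {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();

    public TinyThreadPool(int nThreads) {
        for (int i = 0; i < nThreads; i++) {
            Thread worker = new Thread(() -> {
                // The worker thread itself never changes; it just keeps taking
                // submitted Runnables off the queue and calling run() on them.
                while (true) {
                    try {
                        Runnable task = tasks.take();   // blocks until a task arrives
                        task.run();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;                          // crude shutdown
                    }
                }
            });
            worker.start();
        }
    }

    public void execute(Runnable task) {
        tasks.add(task);
    }
}

This is why a Runnable never needs to be "set" on an existing Thread: the pool's own Runnable is the loop, and your task is just data it pulls off the queue.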
I've read various bits and pieces on concurrency, but was hoping to find a single resource that details and compares the various approaches. Ideally taking in threads, co-routines, message passing, actors, futures... whatever else that might be new that I don't know about yet!
Would prefer a lay-coders guide than something overtly theoretical / mathematical.
Thank you.
I recommend An Introduction to Parallel Programming by Pacheco. It's clearly written, and a good intro to parallel programming.
If you don't care about something being tied to a language, then Java Concurrency in Practice is a great resource.
Oracle's online tutorial is free, but probably a bit more succinct than what you're looking for.
That being said, the best teacher for concurrency is probably experience. I'd try to get some practice, myself. Start out by making a simulation of the Dining Philosophers problem. It's a classic.
First, let's see whether you're interested in the topic or not. To grasp the big picture of concurrency, a good starting point is operating systems books, like Operating Systems: Internals and Design Principles by Stallings or Modern Operating Systems by Tanenbaum. They can give you an intuition about what this is all about.
There's also an old book named Concurrent Programming by Ben-Ari. If you can find it, it can be helpful.
Besides reading textbooks, it's good to get your hands dirty by writing some concurrent programs. Python is a very good choice if you want to start using threads. Every Python book has a part dedicated to this topic. Also, with a simple search on the web you can find a lot of resources about it, but I give these two higher preference:
Multithreaded Programming (POSIX pthreads Tutorial), A very comprehensive introduction to concurrency and multi-threading. It's mainly about C multi-threading.
The other one is Thread Synchronization Mechanisms in Python.
Now if you still find yourself interested in concurrent programming, it's time to go deeper. You have the basic knowledge of concurrency; the best practice at this level is to start solving problems and become familiar with patterns. To achieve this goal, you can use The Little Book of Semaphores. It's one of the best books in the field and it's also free. This is a book that can head you toward proficiency.
These should be enough if you want to approach concurrent programming, but if you have enough time and you're eager, it's good to take a look at some other paradigms of concurrent programming, like the actors used in Erlang. I'd say it's worth reading some chapters of the book Seven Languages in Seven Weeks, especially the chapters about Erlang and Io. At first glance, it might be hard and strange, but it's good to become familiar with other solutions to concurrency.
I wrote a server-client communication program and it worked well.
import java.io.*;
import java.net.*;

class Client {
    public static void main(String argv[]) throws Exception {
        String sentence;
        String modifiedSentence;
        while (true) {
            BufferedReader inFromUser = new BufferedReader(new InputStreamReader(System.in));
            Socket clientSocket = new Socket("myname.domain.com", 2343);
            DataOutputStream out = new DataOutputStream(clientSocket.getOutputStream());
            BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
            System.out.println("Ready");
            sentence = in.readLine();
            out.writeBytes(sentence + '\n');
            modifiedSentence = in.readLine();
            System.out.println(modifiedSentence);
        }
        clientSocket.close();
    }
}
import java.io.*;
import java.net.*;

public class Server {
    public static void main(String args[]) throws Exception {
        String clientSentence;
        String cap_Sentence;
        ServerSocket my_Socket = new ServerSocket(2343);
        while (true) {
            Socket connectionSocket = my_Socket.accept();
            BufferedReader in = new BufferedReader(new InputStreamReader(connectionSocket.getInputStream()));
            DataOutputStream out = new DataOutputStream(connectionSocket.getOutputStream());
            clientSentence = in.readLine();
            cap_Sentence = "Received:" + clientSentence + '\n';
            out.writeBytes(cap_Sentence);
        }
    }
}
The above is the code for single client-server communication; now I want multiple clients to interact with that server. I googled and found that it can be done by using a thread for each client that talks to the server, but since I am a beginner I don't know exactly how to implement it. So could somebody please tell me how to do it or give me some idea about it?
You want to look into Java concurrency. That's the concept of one Java program doing multiple things at once. At a high level you will be taking your
while(true) { //... } block and running it as part of the
run() method of a class implementing Runnable. You'll create instances of Thread that invoke that
run() method, probably one per client you expect.
For a really good, deep understanding of all that Java offers when it comes to concurrency, check out Java Concurrency in Practice.
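A minimal sketch of the thread-per-client idea (class names are made up; error handling kept to a minimum):

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class MultiClientServer {
    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(2343);
        while (true) {
            // accept() blocks until a client connects; each client gets its own thread
            Socket connectionSocket = serverSocket.accept();
            new Thread(new ClientHandler(connectionSocket)).start();
        }
    }
}

class ClientHandler implements Runnable {
    private final Socket socket;

    ClientHandler(Socket socket) {
        this.socket = socket;
    }

    @Override
    public void run() {
        try {
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            String clientSentence = in.readLine();
            out.writeBytes("Received: " + clientSentence + '\n');
            socket.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Once this works, a natural next step is to replace the raw new Thread(...) with an ExecutorService so the number of concurrent clients is bounded.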
I've been ask to build a multi-threaded java application using the
java.util.concurrent library. I'm not familiar with this library, but have a good understanding of problems with multi-threaded code.
I'm looking for a tutorial and example code that shows this java library in use and it's best practises.
If you are a fast learner, I recommend the site (Java API by Example).
Here's the full link for the concurrent package:
EDIT: If you can spend some cash (and isn't in a hurry), I recommend this book: Java Concurrency in Practice
It is really full of examples and good practices.
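To give a flavour of the library (a made-up example, not taken from either resource), here's a small ExecutorService/Future round trip:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SquareService {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);

        // Submit a few Callables; each returns its result via a Future.
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 1; i <= 5; i++) {
            final int n = i;
            futures.add(executor.submit((Callable<Integer>) () -> n * n));
        }

        for (Future<Integer> f : futures) {
            System.out.println(f.get());   // blocks until that task completes
        }

        executor.shutdown();
    }
}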
I've been playing with functional programming lately and there are pretty good treatments on the topic of side effects, why they should be contained, etc. In projects where OOP is used, I'm looking for some resources which lay out some strategies for minimizing side effect and/or state.
A good example of this is the book RESTful Web Services which gives you strategies for minimizing state in a web application. What others exist?
Remember I'm not looking for another OOP analysts/design patterns book (though good encapsulation and loose coupling help avoid side effects) but rather a resource where the topic itself is state/side effects.
Some compiled answers
I don't think you'll find a lot of current material in the OO world on this topic, simply because OOP (and most imperative programming, for that matter) relies on state and side effects. Consider logging, for instance. It's pure side-effect, yet in any self-respecting J2EE app, it's everywhere. Hoare's original QuickSort relies on mutable state, since you have to swap values around a pivot, and yet it too is everywhere.
This is why many OO programmers have trouble wrapping their heads around functional programming paradigms. They try to reassign the value of "x," discover that it can't be done (at least not in the way it can in every other language they've worked in), and they throw up their hands and shout "This is impossible!" Eventually, if they're patient, they learn recursion and currying and how the map function replaces the need for loops, and they calm down. But the learning curve can be very steep for some.
The OO programmers these days who care most about avoiding state are those working on concurrency. The reasons for this are obvious -- mutable state and side effects cause huge headaches when you're trying to manage concurrency between threads. As a result, the best discussion I've seen in the OO world about avoiding state is Java Concurrency in Practice.
The question I am asking is related to StringBuilder and StringBuffer in Java but not the same. I want to see what really happens if a StringBuilder is modified by two threads at the same time.
I wrote the following classes:
public class ThreadTester { public static void main(String[] args) throws InterruptedException { Runnable threadJob = new MyRunnable(); Thread myThread = new Thread(threadJob); myThread.start(); for (int i = 0; i < 100; i++) { Thread.sleep(10); StringContainer.addToSb("a"); } System.out.println("1: " + StringContainer.getSb()); System.out.println("1 length: " + StringContainer.getSb().length()); } } public class MyRunnable implements Runnable { @Override public void run() { for (int i = 0; i < 100; i++) { try { Thread.sleep(10); } catch (InterruptedException e) { e.printStackTrace(); } StringContainer.addToSb("b"); } System.out.println("2: " + StringContainer.getSb()); System.out.println("2 length: " + StringContainer.getSb().length()); } } public class StringContainer { private static final StringBuffer sb = new StringBuffer(); public static StringBuffer getSb() { return sb; } public static void addToSb(String s) { sb.append(s); } }
Initially I kept a StringBuffer in the StringContainer. Since StringBuffer is thread-safe, at a time, only one thread can append to it, so the output is consistent - either both threads reported the length of the buffer as 200, like: 1 length: 200 2 length: 200
or one of them reported 199 and the other 200, like: 2 length: 199aba 1 length: 200
The key is that the last thread to complete reports a length of 200.
Now, I changed StringContainer to have a StringBuilder instead of StringBuffer i.e.
public class StringContainer {
    private static final StringBuilder sb = new StringBuilder();

    public static StringBuilder getSb() {
        return sb;
    }

    public static void addToSb(String s) {
        sb.append(s);
    }
}
I expect some of the writes to be over-written, which is happening. But the contents of the StringBuilder and the lengths do not match sometimes:
1: ababbabababaababbaabbabababababaab 1 length: 137 2: ababbabababaababbaabbabababababaab 2 length: 137
As you can see the printed content has only 34 chars, but the length is 137. Why is this happening?
@Extreme Coders - I just did one more test run: length: 150 2 length: 150
Java version: 1.6.0_45 and I am using eclipse version: Eclipse Java EE IDE for Web Developers. Version: Juno Service Release 2 Build id: 20130225-0426
UPDATE 1: I ran this outside eclipse and now they seem to be matching, but I am getting ArrayIndexOutOfBoundsException sometimes:
$ java -version java version "1.6.0_27" OpenJDK Runtime Environment (IcedTea6 1.12.5) (6b27-1.12.5-0ubuntu0.12.04.1) OpenJDK Server VM (build 20.0-b12, mixed mode) $ java ThreadTester 1 length: 123 2 length: 123 $ java ThreadTester 2 length: 115 1 length: 115 $ java ThreadTester Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException at java.lang.System.arraycopy(Native Method) at java.lang.String.getChars(String.java:862) at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:408) at java.lang.StringBuilder.append(StringBuilder.java:136) at StringContainer.addToSb(StringContainer.java:14) at ThreadTester.main(ThreadTester.java:14) 2: abbbbbbababbbbabbbbababbbbaabbabbbaaabbbababbbbabaabaabaabaaabababaabbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb 2 length: 114
The ArrayIndexOutOfBoundsException is also happening when running from eclipse.
UPDATE 2: There are two problems happening. The first problem of the contents of the StringBuilder not matching the length is happening only in Eclipse and not when I run in command line (at least the 100+ times I ran it on command line it never happened).
The second problem with ArrayIndexOutOfBoundsException should be to do with the internal implementation of StringBuilder class, which keeps an array of chars and does an
Arrays.copyOf when it expands the size. But it still beats me how a write is happening before the size is expanded, no matter what the order of execution is.
BTW, I am inclined to agree with @GreyBeardedGeek's answer that this whole exercise is a huge waste of time :-). Sometimes we get to see only the symptoms i.e. the output of some code and wonder what is going wrong. This question declared a priori that two threads are modifying a (very well-known) thread unsafe object.
UPDATE 3: Here is the official answer from Java Concurrency in Practice p. 35:
In the absence of synchronization, the compiler, processor and runtime can do some downright weird things to the order in which operations appear to execute. Attempts to reason about the order in which memory actions "must" happen in insufficiently synchronized multithreaded programs will almost certainly be incorrect.
Reasoning about insufficiently synchronized concurrent programs is prohibitively difficult.
There is also a nice example
NoVisibility in the book on p. 34.
The behavior of a non-threadsafe class when accessed concurrently by multiple threads is by definition "undefined".
Any attempt to ascertain deterministic behavior in such a case is, IMHO, just a huge waste of time.
In Java Concurrency in Practice one of the examples that I think surprises people (at least me) is something like this:
public class Foo {
    private int n;

    public Foo(int n) {
        this.n = n;
    }

    public void check() {
        if (n != n)
            throw new AssertionError("huh?");
    }
}
The surprise (to me at least) was the claim that this is not thread safe, and not only it's not safe, but also there is a chance that the check method will throw the assertion error.
The explanation is that without synchronization / marking n as volatile, there is no visibility guarantee between different threads, and that the value of n can change while a thread is reading it.
But I wonder how likely it is to happen in practice. Or better, if I could replicate it somehow.
So I was trying to write code that will trigger that assertion error, without luck.
Is there a straight forward way to write a test that will prove that this visibility issue is not just theoretical?
Or is it something that changed in the more recent JVMs?
EDIT: related question: Not thread safe Object publishing
But I wonder how likely it is to happen in practice.
Highly unlikely, especially as the JIT can turn
n into a local variable and only read it once.
The problem with highly unlikely thread safety bugs is that one day, you might change something which shouldn't matter, like your choice of processor or JVM and suddenly your code breaks randomly.
Or better, if I could replicate it somehow.
There is no guarantee you can reproduce it either.
Is there a straight forward way to write a test that will prove that this visibility issue is not just theoretical?
In some cases, yes, but this one is a hard one to prove, partly because the JVM is not prevented from being more thread-safe than the JLS says is the minimum.
For example, the HotSpot JVM often does what you might expect, not just the minimum in the documentation. E.g. System.gc() is only a hint according to the javadoc, but by default the HotSpot JVM will do it every time.
The ultimate goal is to add extra behavior to ListenableFutures based on the type of the
Callable/
Runnable argument. I want to add extra behavior to each of the Future methods. (Example use cases can be found in AbstractExecutorService's javadoc and section 7.1.7 of Goetz's Java Concurrency in Practice)
I have an existing ExecutorService which overrides newTaskFor. It tests the argument's type and creates a subclass of
FutureTask. This naturally supports submit as well as invokeAny and invokeAll.
How do I get the same effect for the ListenableFutures returned by a ListeningExecutorService?
Put another way, where can I put this code
if (callable instanceof SomeClass) {
    return new FutureTask<T>(callable) {
        public boolean cancel(boolean mayInterruptIfRunning) {
            System.out.println("Canceling Task");
            return super.cancel(mayInterruptIfRunning);
        }
    };
} else {
    return new FutureTask<T>(callable);
}
such that my client can execute the
println statement with
ListeningExecutorService executor = ...;
Collection<Callable> callables = ImmutableSet.of(new SomeClass());
List<Future<?>> futures = executor.invokeAll(callables);
for (Future<?> future : futures) {
    future.cancel(true);
}
Here's a list of things I've already tried and why they don't work.
Pass
MyExecutorService to MoreExecutors.listeningDecorator.
Problem 1: Unfortunately the resulting ListeningExecutorService (an
AbstractListeningExecutorService) doesn't delegate to the ExecutorService methods, it delegates to the execute(Runnable) method on Executor. As a result, the
newTaskFor method on
MyExecutorService is never called.
Problem 2:
AbstractListeningExecutorService creates the Runnable (a ListenableFutureTask) via static factory method which I can't extend.
Inside
newTaskFor, create
MyRunnableFuture normally and then wrap it with a
ListenableFutureTask.
Problem 1: ListenableFutureTask's factory methods don't accept RunnableFutures, they accept
Runnable and
Callable. If I pass
MyRunnableFuture as a Runnable, the resulting
ListenableFutureTask just calls
run() and not any of the
Future methods (where my behavior is).
Problem 2: Even if it did call my
Future methods,
MyRunnableFuture is not a
Callable, so I have to supply a return value when I create the
ListenableFutureTask... which I don't have... hence the
Callable.
Let MyRunnableFuture extend
ListenableFutureTask instead of
FutureTask
Problem:
ListenableFutureTask is now final (as of r10 / r11).
Let
MyRunnableFuture extend ForwardingListenableFuture and implement RunnableFuture. Then wrap the
SomeClass argument in a
ListenableFutureTask and return that from
delegate()
Problem: It hangs. I don't understand the problem well enough to explain it, but this configuration causes a deadlock in FutureTask.Sync .
Source Code: As requested, here's the source for Solution D which hangs:
import java.util.*; import java.util.concurrent.*; import com.google.common.collect.ImmutableSet; import com.google.common.util.concurrent.*; /** See */ public final class MyListeningExecutorServiceD extends ThreadPoolExecutor implements ListeningExecutorService { // ===== Test Harness ===== private static interface SomeInterface { public String getName(); } private static class SomeClass implements SomeInterface, Callable<Void>, Runnable { private final String name; private SomeClass(String name) { this.name = name; } public Void call() throws Exception { System.out.println("SomeClass.call"); return null; } public void run() { System.out.println("SomeClass.run"); } public String getName() { return name; } } private static class MyListener implements FutureCallback<Void> { public void onSuccess(Void result) { System.out.println("MyListener.onSuccess"); } public void onFailure(Throwable t) { System.out.println("MyListener.onFailure"); } } public static void main(String[] args) throws InterruptedException { System.out.println("Main.start"); SomeClass someClass = new SomeClass("Main.someClass"); ListeningExecutorService executor = new MyListeningExecutorServiceD(); Collection<Callable<Void>> callables = ImmutableSet.<Callable<Void>>of(someClass); List<Future<Void>> futures = executor.invokeAll(callables); for (Future<Void> future : futures) { Futures.addCallback((ListenableFuture<Void>) future, new MyListener()); future.cancel(true); } System.out.println("Main.done"); } // ===== Implementation ===== private static class MyRunnableFutureD<T> extends ForwardingListenableFuture<T> implements RunnableFuture<T> { private final ListenableFuture<T> delegate; private final SomeInterface someClass; private MyRunnableFutureD(SomeInterface someClass, Runnable runnable, T value) { assert someClass == runnable; this.delegate = ListenableFutureTask.create(runnable, value); this.someClass = someClass; } private MyRunnableFutureD(SomeClass someClass, Callable<T> callable) { assert someClass == callable; this.delegate = ListenableFutureTask.create(callable); this.someClass = someClass; } @Override protected ListenableFuture<T> delegate() { return delegate; } public void run() { System.out.println("MyRunnableFuture.run"); try { delegate.get(); } catch (InterruptedException e) { e.printStackTrace(); } catch (ExecutionException e) { e.printStackTrace(); } } @Override public boolean cancel(boolean mayInterruptIfRunning) { System.out.println("MyRunnableFuture.cancel " + someClass.getName()); return super.cancel(mayInterruptIfRunning); } } public MyListeningExecutorServiceD() { // Same as Executors.newSingleThreadExecutor for now super(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>()); } @Override protected <T> RunnableFuture<T> newTaskFor(Runnable runnable, T value) { if (runnable instanceof SomeClass) { return new MyRunnableFutureD<T>((SomeClass) runnable, runnable, value); } else { return new FutureTask<T>(runnable, value); } } @Override protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) { if (callable instanceof SomeClass) { return new MyRunnableFutureD<T>((SomeClass) callable, callable); } else { return new FutureTask<T>(callable); } } /** Must override to supply co-variant return type */ @Override public ListenableFuture<?> submit(Runnable task) { return (ListenableFuture<?>) super.submit(task); } /** Must override to supply co-variant return type */ @Override public <T> ListenableFuture<T> submit(Runnable task, T result) { return (ListenableFuture<T>) super.submit(task, 
result); } /** Must override to supply co-variant return type */ @Override public <T> ListenableFuture<T> submit(Callable<T> task) { return (ListenableFuture<T>) super.submit(task); } }
Based on this question and a couple of other discussions I've had recently, I'm coming to the conclusion that
RunnableFuture/
FutureTask is inherently misleading: Clearly you submit a
Runnable, and clearly you get a
Future back, and clearly the underlying
Thread needs a
Runnable. But why should a class implement both
Runnable and
Future? And if it does, which
Runnable is it replacing? That's bad enough already, but then we introduce multiple levels of executors, and things really get out of hand.
If there's a solution here, I think it's going to require treating
FutureTask as an implementation detail of
AbstractExecutorService. I'd focus instead on splitting the problem into two pieces:
Future.
Runnable/Future distinction.)
(grumble Markdown grumble)
class MyWrapperExecutor extends ForwardingListeningExecutorService {
    private final ExecutorService delegateExecutor;

    @Override
    public <T> ListenableFuture<T> submit(Callable<T> task) {
        if (callable instanceof SomeClass) {
            // Modify and submit Callable (or just submit the original Callable):
            ListenableFuture<T> delegateFuture = delegateExecutor.submit(new MyCallable(callable));
            // Modify Future:
            return new MyWrapperFuture<T>(delegateFuture);
        } else {
            return delegateExecutor.submit(callable);
        }
    }

    // etc.
}
Could that work?
Can anybody explain how thread priority works in Java? The confusion here is that if Java doesn't guarantee the implementation of the
Thread according to its priority then why is this
setpriority() function used for.
My code is as follows :
public class ThreadSynchronization implements Runnable {

    public synchronized void run() {
        System.out.println("Starting Implementation of Thread " + Thread.currentThread().getName());
        for (int i = 0; i < 10; i++) {
            System.out.println("Thread " + Thread.currentThread().getName() + " value : " + i);
        }
        System.out.println("Ending Implementation of Thread " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        System.out.println("Program starts...");

        ThreadSynchronization th1 = new ThreadSynchronization();
        Thread t1 = new Thread(th1);
        t1.setPriority(1);
        synchronized (t1) {
            t1.start();
        }

        ThreadSynchronization th2 = new ThreadSynchronization();
        Thread t2 = new Thread(th2);
        t2.setPriority(9);
        synchronized (t2) {
            t2.start();
        }

        System.out.println("Program ends...");
    }
}
In the above program even if I change the priority I find no difference in the output. Also a real time application of how thread priority can be used would be of great help. Thanks.
Thread priority is just a hint to the OS task scheduler and is dependent on the underlying OS. The OS will try to allocate more resources to a high-priority thread, but it does not guarantee it. So if your program is dependent on thread priorities, then you are in turn binding your program to the underlying OS, which is bad.
From Java Concurrency in Practice:
Avoid the temptation to use thread priorities, since they increase platform dependence and can cause liveness problems. Most concurrent applications can use the default priority for all threads.
Hi,
I'm using Scala 2.10 with the new futures library and I'm trying to write some code to test an infinite loop. I use a
scala.concurrent.Future to run the code with the loop in a separate thread. I would then like to wait a little while to do some testing and then kill off the separate thread/future. I have looked at
Await.result but that doesn't actually kill the future. Is there any way to timeout or kill the new Scala 2.10 futures?
I would prefer not having to add external dependencies such as Akka just for this simple part.
No - you will have to add a flag that your loop checks. If the flag is set, stop the loop. Make sure the flag is at least
volatile.
See Java Concurrency in Practice, p 135-137.
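As a rough sketch of the flag idea (shown here in Java rather than Scala, and with made-up names), the loop body polls a volatile boolean that the test code flips when it is done:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StoppableLoop {
    // volatile so the change made by the test thread is visible to the loop thread
    private volatile boolean running = true;

    public void loop() {
        while (running) {
            // ... the work under test ...
        }
    }

    public void stop() {
        running = false;
    }

    public static void main(String[] args) throws InterruptedException {
        StoppableLoop loop = new StoppableLoop();
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(loop::loop);            // run the "infinite" loop on another thread
        TimeUnit.MILLISECONDS.sleep(200);       // do some testing while it runs
        loop.stop();                            // ask the loop to finish
        executor.shutdown();
    }
}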
I am aware that the purpose of volatile variables in Java is that writes to such variables are immediately visible to other threads. I am also aware that one of the effects of a synchronized block is to flush thread-local memory to global memory.
I have never fully understood the references to 'thread-local' memory in this context. I understand that data which only exists on the stack is thread-local, but when talking about objects on the heap my understanding becomes hazy.
I was hoping that to get comments on the following points:
When executing on a machine with multiple processors, does flushing thread-local memory simply refer to the flushing of the CPU cache into RAM?
When executing on a uniprocessor machine, does this mean anything at all?
If it is possible for the heap to have the same variable at two different memory locations (each accessed by a different thread), under what circumstances would this arise? What implications does this have to garbage collection? How aggressively do VMs do this kind of thing?
(EDIT: adding question 4) What data is flushed when exiting a synchronized block? Is it everything that the thread has locally? Is it only writes that were made inside the synchronized block?
Object x = goGetXFromHeap(); // x.f is 1 here Object y = goGetYFromHeap(); // y.f is 11 here Object z = goGetZFromHead(); // z.f is 111 here y.f = 12; synchronized(x) { x.f = 2; z.f = 112; } // will only x be flushed on exit of the block? // will the update to y get flushed? // will the update to z get flushed?
Overall, I think am trying to understand whether thread-local means memory that is physically accessible by only one CPU or if there is logical thread-local heap partitioning done by the VM?
Any links to presentations or documentation would be immensely helpful. I have spent time researching this, and although I have found lots of nice literature, I haven't been able to satisfy my curiosity regarding the different situations & definitions of thread-local memory.
Thanks very much.
It is really an implementation detail if the current content of the memory of an object that is not synchronized is visible to another thread.
Certainly, there are limits, in that all memory is not kept in duplicate, and not all instructions are reordered, but the point is that the underlying JVM has the option if it finds it to be a more optimized way to do that.
The thing is that the heap is really "properly" stored in main memory, but accessing main memory is slow compared to access the CPU's cache or keeping the value in a register inside the CPU. By requiring that the value be written out to memory (which is what synchronization does, at least when the lock is released) it forcing the write to main memory. If the JVM is free to ignore that, it can gain performance.
In terms of what will happen on a one CPU system, multiple threads could still keep values in a cache or register, even while executing another thread. There is no guarantee that there is any scenario where a value is visible to another thread without synchronization, although it is obviously more likely. Outside of mobile devices, of course, the single-CPU is going the way of floppy disks, so this is not going to be a very relevant consideration for long.
For more reading, I recommend Java Concurrency in Practice. It is really a great practical book on the subject.
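A minimal sketch of the visibility issue being described (class and field names are made up for illustration): without the volatile modifier, the reader thread is permitted to keep using a cached value of the flag and may never observe the write.

public class VisibilityDemo {
    // Remove 'volatile' and the reader may loop forever on some JVMs/CPUs,
    // because nothing forces the cached value to be refreshed from main memory.
    private static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) {
                // spin until the write becomes visible
            }
            System.out.println("Saw the update");
        });
        reader.start();

        Thread.sleep(100);
        ready = true;   // volatile write: guaranteed to become visible to the reader
        reader.join();
    }
}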
I know that finally blocks in daemon threads may not be executed. But my meticulous nature wants to understand why, and what happens in the JVM that is so special that it cannot call the code under this block.
I think it is somehow related to the call stack not being unwound, but I don't know how. Can someone please shed some light on this? Thanks.
If the JVM exits while the try or catch code is being executed, then the finally block may not execute.
Normal Shutdown - this occurs either when the last non-daemon thread exits OR when Runtime.exit() is called. Daemon threads should be used sparingly - few processing activities can be safely abandoned at any time with no cleanup. In particular, it is dangerous to use daemon threads for tasks that might perform any sort of I/O. Daemon threads are best saved for "housekeeping" tasks, such as a background thread that periodically removes expired entries from an in-memory cache.
I've got a highly multithreaded app written in Ruby that shares a few instance variables. Writes to these variables are rare (1%) while reads are very common (99%). What is the best way (either in your opinion or in the idiomatic Ruby fashion) to ensure that these threads always see the most up-to-date values involved? Here are some ideas I have so far (although I'd like your input before I overhaul this):
synchronize blocks in my code and I don't see an easy way to avoid it.
The freeze method (see here), although it looks equally cumbersome and doesn't give me any of the synchronization benefits that the first option gives.
These options both seem pretty similar but hopefully anyone out there will have a better idea (or can argue well for one of these ideas). I'd also be fine with making the objects immutable so they aren't corrupted or altered in the middle of an operation, but I don't know Ruby well enough to make the call on my own and this question seems to argue that objects are highly mutable.
Using the lock is the most appropiate way to do this. You can see this presentation by Jim Weirich on the subject: What All Rubyist Should Know About Threading.
Also, freezing an object won't help you here since you want to modify these variables. Freezing them in place means that no further modifications will be applicable to these (and therefore your 1% of writes won't work).
This question relates to Java collections - specifically Hashtable and Vector - but may also apply elsewhere.
I've read in many places how good it is to program to interfaces and I agree 100%. The ability to program to a List interface, for instance, without regard for the underlying implementation is most certainly helpful for decoupling and testing purposes. With collections, I can see how an ArrayList and a LinkedList are applicable under different circumestances, given the differences with respect to internal storage structure, random access times, etc. Yet, these two implementations can be used under the same interface...which is great.
What I can't seem to place is how certain synchronized implementations (in particular Hashtable and Vector) fit in with these interfaces. To me, they don't seem to fit the model. Most of the underlying data structure implementations seem to vary in how the data is stored (LinkedList, Array, sorted tree, etc.), whereas synchronization deals with conditions (locking conditions) under which the data may be accessed. Let's look at an example where a method returns a Map collection:
public Map<String, String> getSomeData();
Let's assume that the application is not concerned at all with concurrency. In this case, we operate on whatever implementation the method returns via the interface...Everybody is happy. The world is stable.
However, what if the application now requires attention on the concurrency front? We now cannot operate without regard for the underlying implementation - Hashtable would be fine, but other implementations must be catered for. Let's consider 3 scenarios:
1) Enforce synchronization using synchronization blocks, etc. when adding/removing with the collection. Wouldn't this, however, be overkill in the event that a synchronized implementation (Hashtable) gets returned?
2) Change the method signature to return Hashtable. This, however, tightly binds us to the Hashtable implementation, and as a result, the advantages of programming to an interface are thrown out the window.
3) Make use of the concurrent package and change the method signature to return an implementation of the ConcurrentMap interface. To me, this seems like the way forward.
Essentially, it just seems like certain synchronized implementations are a bit of a misfit within the collections framework in that, when programming to interfaces, the synchronization issue almost forces one to think about the underlying implementation.
Am I completely missing the point here?
Thanks.
What you are struggling with is the fact that in a multi-threaded environment, a client cannot naively use an object that has mutable, shared state. The collection interface, by itself, tells you nothing about how the object can be used safely. Returning a ConcurrentMap helps give some additional information but only for that particular case.
Normally, you have to communicate the thread safety issues separately in documentation (e.g., javadoc) or by using custom annotations as is described in Java Concurrency in Practice. The client of the returned object will have to use its own locking mechanism or one that you provide. The interface is usually orthogonal to the thread safety.
It's not a problem if the client knows that all the implementations are from the Concurrent implementations, but that information is not communicated by the interface itself.
From Java Concurrency in practice Chapter 3.3.3. ThreadLocal
Thread-local variables are often used to prevent sharing in designs based on mutable Singletons or global variables.
If we wrap the mutable Singleton guy in a ThreadLocal each thread will have its own copy of the Singleton ? How will it remain a singleton then ? Is this what the authors meant or am I missing something pretty obvious here ?
If we wrap the mutable Singleton guy in a ThreadLocal
AFAIK you do not wrap the singleton class itself with ThreadLocal, but rather the object contained within the singleton which is mutable or non-thread-safe. As the example correctly discusses, the JDBC Connection is not thread safe and would require additional protection, which in turn increases contention.
So in cases where the Singleton is used just for the purpose of sharing, replacing it with a ThreadLocal is a good idea, as all the threads then have their own Connection and no added protection is required.
Another good example of a ThreadLocal use case is random number generation: if there is a single
Random object then there is contention between threads for the "seed", so if each thread has its own Random object there is no contention any more, and that makes sense.
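A small sketch of that last point, using the plain java.util.Random class (since Java 7 the JDK also ships ThreadLocalRandom for exactly this purpose):

import java.util.Random;

public class PerThreadRandom {
    // Each thread lazily gets its own Random instance, so there is no
    // contention on a shared seed.
    private static final ThreadLocal<Random> RANDOM =
            ThreadLocal.withInitial(Random::new);

    public static int roll() {
        return RANDOM.get().nextInt(6) + 1;
    }

    public static void main(String[] args) {
        Runnable task = () ->
                System.out.println(Thread.currentThread().getName() + " rolled " + roll());
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
    }
}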
I heard many times that Java Swing threading model is wrong. I don't fully understand why, I know that the problem is related to the fact that you can draw on a
Drawable from another thread other than the main UI thread. I know that there are utility functionalities like
SwingUtilities.invokeAndWait and
SwingUtilities.invokeLater that let you do your painting in a
Runnable, which in turn is run by the Event Dispatch thread. I guess that this way you ensure that painting is done synchronously and this doesn't leave the buffer in an inconsistent state.
My question is: how do "good" UI toolkits behave? What solutions are adopted?
Brian Goetz's Java Concurrency in Practice,
9.1 Why are GUIs single-threaded?:
...In the old days, GUI applications were single-threaded and GUI events were processed from a “main event loop”. Modern GUI frameworks use a model that is only slightly different: they create a dedicated event dispatch thread (EDT) for handling GUI events....
I'm currently reviewing/refactoring a multithreaded application which is supposed to be multithreaded in order to be able to use all the available cores and theoretically deliver a better / superior performance (superior is the commercial term for better :P)
What are the things I should be aware when programming multithreaded applications?
I mean things that will greatly impact performance, maybe even to the point where you don't gain anything with multithreading at all but lose a lot by design complexity. What are the big red flags for multithreading applications?
Should I start questioning the locks and looking to a lock-free strategy or are there other points more important that should light a warning light?
Edit: The kind of answers I'd like are similar to the answer by Janusz. I want red flags to look for in code; I know the application doesn't perform as well as it should, and I need to know where to start looking, what should worry me, and where I should put my efforts. I know it's kind of a general question, but I can't post the entire program, and if I could choose one section of code then I wouldn't need to ask in the first place.
I'm using Delphi 7, although the application will be ported / remade in .NET (C#) next year, so I'd rather hear comments that are applicable as general practice or, if they must be language-specific, applicable to either one of those languages.
You should first be familiar with Amdahl's law.
If you are using Java, I recommend the book Java Concurrency in Practice; however, most of its help is specific to the Java language (Java 5 or later).
In general, reducing the amount of shared memory increases the amount of parallelism possible, and for performance that should be a major consideration.
Threading with GUI's is another thing to be aware of, but it looks like it is not relevant for this particular problem.
I need to run some code for a predefined length of time, when the time is up it needs to stop. Currently I am using a TimerTask to allow the code to execute for a set amount of time but this is causing endless threads to be created by the code and is just simply not efficient. Is there a better alternative?
Current code;
// Calculate the new lines to draw Timer timer3 = new Timer(); timer3.schedule(new TimerTask(){ public void run(){ ArrayList<String> Coords = new ArrayList<String>(); int x = Float.valueOf(lastFour[0]).intValue(); int y = Float.valueOf(lastFour[1]).intValue(); int x1 = Float.valueOf(lastFour[2]).intValue(); int y1 = Float.valueOf(lastFour[3]).intValue(); //Could be the wrong way round (x1,y1,x,y)? Coords = CoordFiller.coordFillCalc(x, y, x1, y1); String newCoOrds = ""; for (int j = 0; j < Coords.size(); j++) { newCoOrds += Coords.get(j) + " "; } newCoOrds.trim(); ClientStorage.storeAmmendedMotion(newCoOrds); } } ,time);
If you are using Java5 or later, consider
ScheduledThreadPoolExecutor and
Future. With the former, you can schedule tasks to be run after a specified delay, or at specified intervals, thus it takes over the role of
Timer, just more reliably.
The
Timer facility manages the execution of deferred ("run this task in 100 ms") and periodic ("run this task every 10 ms") tasks. However,
Timer has some drawbacks, and
ScheduledThreadPoolExecutor should be thought of as its replacement. [...]
A
Timer creates only a single thread for executing timer tasks. If a timer task takes too long to run, the timing accuracy of other
TimerTasks can suffer. If a recurring
TimerTask is scheduled to run every 10 ms and another
TimerTask takes 40 ms to run, the recurring task is either called several times in rapid succession after the long-running task completes or "misses" several runs entirely. Another problem with
Timer is that it behaves poorly if a
TimerTask throws an unchecked exception. The
Timer thread doesn't catch the exception, so an unchecked exception thrown from a
TimerTask terminates the timer thread.
Timer also doesn't resurrect the thread in this situation; instead, it erroneously assumes the entire
Timer was cancelled. In this case,
TimerTasks that are already scheduled but not yet executed are never run, and new tasks cannot be scheduled.
From Java Concurrency in Practice, section 6.2.5.
And
Futures can be constrained to run at most for the specified time (throwing a
TimeoutException if it could not finish in time).
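For example (a sketch only - the empty task body and the 50 ms budget are stand-ins for your own code), you can submit the work to an executor and bound how long you wait for it, cancelling it if it overruns:

import java.util.concurrent.*;

public class BoundedRunDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(() -> {
            // ... the work that must not run longer than the budget ...
        });
        try {
            future.get(50, TimeUnit.MILLISECONDS);  // wait at most 50 ms for completion
        } catch (TimeoutException e) {
            future.cancel(true);                    // interrupt the task if it is still running
        } catch (ExecutionException e) {
            e.getCause().printStackTrace();         // the task itself failed
        } finally {
            executor.shutdown();
        }
    }
}

Note that cancel(true) only interrupts the worker thread; the task still has to respond to interruption (or finish naturally) to actually stop.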
If you don't like the above, you can make the task measure its own execution time, as below:
int totalTime = 50000; // in nanoseconds long startTime = System.nanoTime(); boolean toFinish = false; while (!toFinish) { System.out.println("Task!"); ... toFinish = (System.nanoTime() - startTime >= totalTime); }
In Java concurrency, what makes a thread "active"? Just the fact that it's not idling? Is a "waiting" or "suspended" thread still considered, technically, active?
In this context I take "active" to mean that they are executing code. Inactive threads--those that are blocked on I/O calls or awaiting locks--consume only memory resources without affecting the CPU (or only marginally).
However, it really depends on what your threads are doing. If each thread is iterating over numbers to calculate primes, they are fully CPU-bound, and you should really only have one per core to maximize throughput. If they are making HTTP requests or performing file I/O, you can afford to have quite a few per core.
In the end, a blanket statement covering all threads in general without regard for what they are doing is pretty worthless.
I highly recommend the book Java Concurrency in Practice for a high-quality treatment of the topic of concurrent Java programming.
Will thread priority increase the accuracy of
Thread.sleep(50);?
As we know, threads aren't accurate when you call sleep for 50 ms. But does it increase accuracy by any means if the thread is set to
MAX_PRIORITY?
Will be thankful for any kind of explanation.
Yes it may make it more accurate.
Nevertheless, from Java Concurrency in Practice, by Brian Goetz:
The thread priority mechanism is a blunt instrument, and it's not always obvious what effect changing priorities will have; boosting a thread's priority might do nothing or might always cause one thread to be scheduled in preference to the other, causing starvation.
It is generally wise to resist the temptation to tweak thread priorities. As soon as you start modifying priorities, the behavior of your application becomes platform-specific and you introduce the risk of starvation. You can often spot a program that is trying to recover from priority tweaking or other responsiveness problems by the presence of
Thread.sleep or
Thread.yield calls in odd places, in an attempt to give more time to lower-priority threads.
Therefore avoid changing the thread priorities and re-think your design if you really need your
Thread.sleep(50) to be that accurate!
When using threads I sometimes visualise them as weaving together 3 or more dimensional interconnections between Objects in a Spatial context. This isn't a general use case scenario, but for what I do it is a useful way to think about it.
Concurrency is a deep and complicated topic to cover. Books like Java Concurrency in Practice may help.
See Concurrency Utilities Overview for APIs on threading. BlockingQueue<E> can be useful for example.
A Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.
See CountDownLatch
A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
and CyclicBarrier for some interesting behavior.
A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point.
Edit: I am reading Java Concurrency in Practice now. It's very good.
From the famous book Java Concurrency in Practice chapter 3.4.1 Final fields
Just as it is a good practice to make all fields private unless they need greater visibility[EJ Item 12] , it is a good practice to make all fields final unless they need to be mutable.
My understanding of final references in Java: a final reference/field just prevents the field from being reassigned, but if it references a mutable object, we can still change that object's state, rendering it mutable. So I am having difficulty understanding the above quote. What do you think?
final fields prevent you from changing the field itself (by making it "point" to some other instance), but if the field is a reference to a mutable object, nothing will stop you from doing this:
public void someFunction (final Person p) { p = new Person("mickey","mouse"); //cant do this - its final p.setFirstName("donald"); p.setLastName("duck"); }
the reference p above is immutable, but the actual Person pointed to by the reference is mutable. you can, of course, make class Person an immutable class, like so:
public class Person { private final String firstName; private final String lastName; public Person(String firstName, String lastName) { this.firstName = firstName; this.lastName = lastName; } //getters and other methods here }
such classes once created, cannot be modified in any way.
I am not understanding this concept at all.
public class SomeName { public static void main(String args[]) { } }
This is my class SomeName. Now, what is the thread here?
You might want to think of thread as CPU executing the code that you wrote.
A thread is a single sequential flow of control within a program.
From Java concurrency in practice:
Thread-safe classes encapsulate any needed synchronization so that clients need not provide their own.
I'd like to kill threads that are stuck in deadlock state. First, we can detect thread ids in deadlock state using the
findDeadlockedThreads() method of the
ThreadMXBean class in
java.lang.management.
Then, I'd like to kill the threads by thread ids, and thus I have two related questions:
(1) How to get the control of a thread by thread id?
(2) How to kill a blocked thread? I think that invoking the interrupt() method will give an exception to the thread and will kill the thread.
Even Thread.stop() is unable to stop a thread deadlocked on an intrinsic lock. If a thread is blocked waiting for an intrinsic lock, there is nothing you can do to stop it short of ensuring that it eventually acquires the lock and makes enough progress that you can get its attention some other way. It does work with Lock.lock(), though.
Let's say I have the following,
public class Foo{ private String bar; public String getBar(){ return bar; } public void setBar(String bar){ this.bar = bar; } }
Are these methods automatically threadsafe due to the immutable nature of the
String class, or is some locking mechanism required?
No, this is not threadsafe.
Foo is mutable, so if you want to ensure that different threads see the same value of
bar – that is, consistency – either:
make bar volatile, or
make the getter and setter synchronized, or
use an AtomicReference<String>.
The reads and writes of
bar are themselves atomic, but atomicity is not thread safety.
For in-depth coverage of Java concurrency, grab a copy of Java Concurrency in Practice (aka JCIP).
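As a minimal sketch of the first option above, the volatile modifier is enough when the getter and setter do nothing but read and write the field:

public class Foo {
    private volatile String bar;   // writes become visible to all reader threads

    public String getBar() {
        return bar;
    }

    public void setBar(String bar) {
        this.bar = bar;
    }
}

If you ever need a compound action such as "set only if unchanged", switch to an AtomicReference<String> and use compareAndSet instead.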
Q1. What is a condVar in Java? If I see the code below, does a condition variable necessarily have to be within the 'mutex.acquire()' and 'mutex.release()' block?
public void put(Object x) throws InterruptedException { mutex.acquire(); try { while (count == array.length) notFull.await(); array[putPtr] = x; putPtr = (putPtr + 1) % array.length; ++count; notEmpty.signal(); } finally { mutex.release(); } }
I have three threads myThreadA, myThreadB, myThreadC running which call the same function commonActivity() which triggers the function myWorkReport() e.g.
public void myWorkReport(){ mutexMyWork.acquire(); try{ while(runMyWork){ doWork(); conditionMyWork.timedwait(sleepMyWork); } } finally{ mutexMyWork.release() } } public void commonActivity(){ try{ conditionMyWork.signal(); }finally{ //cleanup } } public void myThreadA(){ mutexA.acquire(); try{ while(runningA){ //runningA is a boolean variable, this is always true as long as application is running conditionA.timedwait(sleepA); commonActivity(); } } finally{ mutexA.release(); } } public void myThreadB(){ mutexB.acquire(); try{ while(runningB){ //runningB is a boolean variable, this is always true as long as application is running conditionB.timedwait(sleepB); commonActivity(); } } finally{ mutexB.release(); } } public void myThreadC(){ mutexC.acquire(); try{ while(runningC){ //runningC is a boolean variable, this is always true as long as application is running. conditionC.timedwait(sleepC); commonActivity(); } } finally{ mutexC.release(); } }
Q2. Is using timedwait a good practice? I could have achieved the same by using sleep(). If using the sleep() call is bad, why?
Q3. Is there any better way to do the above stuff?
Q4. Is it mandatory to have a condition.signal() for every condition.timedwait(time)?
Q1: A Condition object is associated (and acquired from) a Lock (aka mutext) object. The javadoc for the class is fairly clear as to its usage and application. To wait on the condition you need to have acquired the lock, and it is good coding practice to do so in a try/finally block (as you have). As soon as the thread that has acquired the lock waits on a condition for that lock, the lock is relinquished (atomically).
Q2: Using a timed wait is necessary to ensure liveness of your program in case the condition you are waiting for never occurs. It's definitely a more sophisticated form, and it is entirely useless if you do not check for the fact that you have timed out and take action to handle the timeout condition.
Using sleep is an acceptable form of waiting for something to occur, but if you are already using a Lock ("mutex") and have a condition variable for that lock, it makes NO sense not to use the timed wait method of the condition:
For example, in your code, you are simply waiting for a given period but you do NOT check to see whether the condition occurred or whether you timed out. (That's a bug.) What you should be doing is checking whether your timed call returned true or false. (If it returns false, then it timed out and the condition has NOT occurred (yet).)
public void myThreadA(){ mutexA.acquire(); try{ while(runningA){ //runningA is a boolean variable if(conditionA.await (sleepATimeoutNanos)) commonActivity(); else { // timeout! anything sensible to do in that case? Put it here ... } } } finally{ mutexA.release(); } }
Q3: [edited] The code fragments require a more detailed context to be comprehensible. For example, it's not entirely clear whether the conditions in the threads are all the same (but I am assuming that they are).
If all you are trying to do is ensure that commonActivity() is executed by only one thread at a time, AND certain sections of commonActivity() do NOT require contention control, AND you do require the facility to time out on your waits, then you can simply use a Semaphore. Note that Semaphore has its own set of methods for timed waits.
If ALL of commonActivity() is critical, AND you really don't mind waiting (without timeouts), simply make commonActivity() a synchronized method.
[final edit:)] To be more formal about it, conditions are typically used in scenarios where you have two or more thread co-operating on a task and you require hand offs between the threads.
For example, you have a server that is processing asynchronous responses to user requests and the user is waiting for fulfillment of a Future object. A condition is perfect in this case. The future implementation is waiting for the condition and the server signals its completion.
In the old days, we would use wait() and notify(), but that was not a very robust (or trivially safe) mechanism. The Lock and Condition objects were designed precisely to address these shortcomings.
(A good online resource as a starting point)
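A bare-bones sketch of that hand-off (the result type and class names are illustrative only): the waiting side awaits the condition under the lock, and the completing side sets the result and signals.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ResultHolder<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition done = lock.newCondition();
    private T result;
    private boolean completed;

    public void complete(T value) {
        lock.lock();
        try {
            result = value;
            completed = true;
            done.signalAll();        // wake up anyone blocked in get()
        } finally {
            lock.unlock();
        }
    }

    public T get() throws InterruptedException {
        lock.lock();
        try {
            while (!completed) {     // guard against spurious wake-ups
                done.await();
            }
            return result;
        } finally {
            lock.unlock();
        }
    }
}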
I am reading through Java code to ensure it is thread safe.
As I understand it, any local variable within a method is thread safe, since it lives on the stack. Class/instance variables are not thread safe, as they live on the heap, which is shared with other threads.
As a rule of thumb, I can put the synchronized keyword on every method which touches the class variables.
Is there any eclipse plugin, or rules I can analyze / prevent multi-threading issues?
I don't think there is anything that will definitively check for thread safety, there are some tools that have already been mentioned, like findbugs that will do a reasonable job of finding the obvious mistakes.
It is very much up to the programmer to ensure that their program is not leaking variables or references into different threads and, where things are used in multiple threads, to ensure that each thread sees the 'correct' value.
Design for safety before performance: you might find that it performs fine for your needs, but if you put optimisation in you increase complexity and the potential for failure, and it might not end up being the bottleneck anyway.
I would specifically recommend reading Java Concurrency in Practice; you may also find Effective Java helpful.
I have always been kind of confused by threads, and my class right now makes heavy use of them. We are using java.util.concurrent but I don't even really get the basics. UpDownLatch, Futures, Executors; these words just fly over my head. Can you guys suggest any resources to help learn what I need from the ground up?
Thanks a lot in advance!
Read "Java Concurrency In Practice" by Brian Goetz. Great book.
Or Doug Lea's "Concurrent Programming In Java". Old school, terrific stuff. Pre-dates the concurrent package, but it's the basis for a lot of it.
I'm assuming that you already went through the Java tutorial's threading chapter?
There are many good books on threading in general, but also specifically in Java.
For example, Java Concurrency in Practice
Or is it?
I have a thread object from:
Thread myThread = new Thread(pObject);
Where pObject is an object of a class implementing the Runnable interface and then I have the start method called on the thread object like so:
myThread.start();
Now, my understanding is that when start() is called, the JVM implicitly (and immediately) calls the run() method which may be overridden (as it is in my case)
However, in my case, it appears that the start() method does not take effect immediately (as desired), but only after the other statements/methods in the calling block are completed, i.e. if I had a method after the start() call like so:
myThread.start(); doSomethingElse();
doSomethingElse() gets executed before the run() method is run at all.
Perhaps I am wrong with the initial premise that run() is always called right after start() is called. Please help! The desired behaviour, again, is to have run() execute right after start(). Thanks.
You've started a new thread. That thread runs in parallel to the thread that started it so the order could be:
pObject.run(); doSomethingElse();
or
doSomethingElse(); pObject.run();
or, more likely, there will be some crossover.
pObject.run() may run in the middle of
doSomethingElse() or vice versa or one will start before the other finishes and so on. It's important to understand this and understand what is meant by an atomic operation or you will find yourself with some really hard-to-find bugs.
It's even more complicated if two or more threads access the same variables. The value in one may never be updated in one thread under certain circumstances.
I highly suggest:
You don't make your program multi-threaded unless you absolutely need to; and
If you do, buy and read from cover to cover Brian Goetz's Java Concurrency in Practice.
Please point me to a Java multithreaded application (with source) that I can refer to and debug in order to understand how multithreading actually works in Java.
Sun has a decent tutorial. But in all honesty, multithreaded programming is extremely difficult. There is a well regarded book, Java Concurrency in Practice. If you really want to learn how to take advantage of multiple cores, look into Clojure or Scala.
There are certain algorithms whose running time can decrease significantly when one divides up a task and gets each part done in parallel. One of these algorithms is merge sort, where a list is divided into infinitesimally smaller parts and then recombined in a sorted order. I decided to do an experiment to test whether or not I could I increase the speed of this sort by using multiple threads. I am running the following functions in Java on a Quad-Core Dell with Windows Vista.
One function (the control case) is simply recursive:
// x is an array of N elements in random order public int[] mergeSort(int[] x) { if (x.length == 1) return x; // Dividing]; // Sending them off to continue being divided mergeSort(a); mergeSort(b); // Recombining++; } return x; }
The other is in the 'run' function of a class that extends thread, and recursively creates two new threads each time it is called:
public class Merger extends Thread { int[] x; boolean finished; public Merger(int[] x) { this.x = x; } public void run() { if (x.length == 1) { finished = true; return; } // Divide]; // Begin two threads to continue to divide the array Merger ma = new Merger(a); ma.run(); Merger mb = new Merger(b); mb.run(); // Wait for the two other threads to finish while(!ma.finished || !mb.finished) ; // Recombine++; } finished = true; } }
It turns out that the function that does not use multithreading actually runs faster. Why? Do the operating system and the Java virtual machine not "communicate" effectively enough to place the different threads on different cores? Or am I missing something obvious?
As others said; This code isn't going to work because it starts no new threads. You need to call the start() method instead of the run() method to create new threads. It also has concurrency errors: the checks on the finished variable are not thread safe.
Concurrent programming can be pretty difficult if you do not understand the basics. You might read the book Java Concurrency in Practice by Brian Goetz. It explains the basics and explains constructs (such as Latch, etc) to ease building concurrent programs.
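To make the two fixes concrete, here is a self-contained sketch of the pattern this answer suggests (the printouts stand in for sorting the two halves): start() instead of run(), and join() instead of polling a non-volatile finished flag.

public class StartVsRun {
    public static void main(String[] args) throws InterruptedException {
        Thread left  = new Thread(() ->
                System.out.println("left half handled by " + Thread.currentThread().getName()));
        Thread right = new Thread(() ->
                System.out.println("right half handled by " + Thread.currentThread().getName()));

        left.start();    // start() runs the body on a new thread; run() would execute it inline
        right.start();

        left.join();     // wait for both halves to finish before merging:
        right.join();    // this replaces the unsafe busy-wait on the 'finished' flags

        System.out.println("merge the two halves here");
    }
}

Even with that fix, spawning two new threads per recursion level quickly creates far more threads than cores; a fork/join-style pool that stops splitting below a size threshold is the usual remedy.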
What is the difference between intrinsic locking, client-side locking and extrinsic locking?
What is the best way to create a thread-safe class?
Which kind of locking is preferred, and why?
I would highly recommend you to read "Java Concurrency In Practice" by Brian Goetz. It is an excellent book that will help you to understand all the concepts about concurrency!
About your questions, I am not sure if I can answer them all, but I can give it a try. Most of the time, if the question is "what is the best way to lock" etc., the answer is: it depends on what problem you are trying to solve.
Question 1:
The things you are trying to compare here are not exactly comparable;
Java provides a built-in mechanism for locking, the
synchronized block. Every object can implicitly act as a lock for purposes of synchronization; these built-in locks are called intrinsic locks.
What is interesting with the term
intrinsic is that the ownership of a lock is per thread and not per method invocation. That means that only one thread can hold the lock at a given time. What you might also find interesting is the term
reentrancy, which allows the same thread to acquire the same lock again. Intrinsic locks are reentrant.
Client-side locking, if I understand what you mean, is something different. When you don't have a thread-safe class, your clients need to take care of this themselves. They need to hold locks so they can make sure that there are no race conditions.
Extrinsic locking means, instead of using the built-in mechanism of the synchronized block which gives you implicit locks, specifically using explicit locks. It is a more sophisticated way of locking. There are many advantages (for example you can set priorities). A good starting point is the Java documentation about locks
Question 2: It depends :) The easiest for me is to try to keep everything immutable. When something is immutable, I don't need to care about thread safety anymore
Question 3: I kind of answered it on your first question
How do I manage the thread count with respect to available memory in Java? That is, I want to control the number of running threads, in code, with respect to the memory available on the server. Any suggestions, tips, tutorials, or libraries are welcome.
Yes, use an ExecutorService. In Java Concurrency in Practice it is actually recommended to set the thread count based on the number of processors. I think the formula was thread count = number of processors + one, but I may remember it wrong...
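A small sketch of that sizing advice (the +1 is the rule of thumb mentioned above; the book actually derives different pool sizes for CPU-bound and I/O-bound work):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SizedPool {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // CPU-bound work: roughly one thread per core (plus one spare)
        ExecutorService pool = Executors.newFixedThreadPool(cores + 1);

        for (int i = 0; i < 100; i++) {
            pool.submit(() -> {
                // ... the actual work ...
            });
        }
        pool.shutdown();
    }
}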
I am new to multithreading and have just got to know about the functionality of wait, notify and notifyAll. I want three threads to execute one after another and print the alphabet from A to Z. I have tried the code below and it seems to work, but I doubt whether this is the best possible way to tackle the problem. Is there any other way I can make it simpler and better? It seems some portion of my code is repeated.
package demo.threading; class Flags { boolean flagA = true; boolean flagB = false; boolean flagC = false; } class Container { Flags flags = new Flags(); int charVal = (int) 'A'; void producer1() { try { while (charVal <= (int) 'Z') { synchronized (this) { if (!flags.flagA) wait(); else { System.out.println(Thread.currentThread().getName() + " Produced : " + (char) charVal); flags.flagA = false; flags.flagB = true; charVal++; notifyAll(); Thread.sleep(1000); } } } } catch (InterruptedException ex) { ex.printStackTrace(); } } void producer2() { try { while (charVal <= (int) 'Z') { synchronized (this) { if (!flags.flagB) wait(); else { System.out.println(Thread.currentThread().getName() + " Produced : " + (char) charVal); flags.flagB = false; flags.flagC = true; charVal++; notifyAll(); Thread.sleep(1000); } } } } catch (InterruptedException ex) { ex.printStackTrace(); } } void producer3() { try { while (charVal <= (int) 'Z') { synchronized (this) { if (!flags.flagC) wait(); else { System.out.println(Thread.currentThread().getName() + " Produced : " + (char) charVal); flags.flagC = false; flags.flagA = true; charVal++; notifyAll(); Thread.sleep(1000); } } } } catch (InterruptedException ex) { ex.printStackTrace(); } } } public class Main { public static void main(String[] args) { Container container = new Container(); Thread t1 = new Thread(() -> container.producer1(), "Thread 1"); Thread t2 = new Thread(() -> container.producer2(), "Thread 2"); Thread t3 = new Thread(() -> container.producer3(), "Thread 3"); t1.start(); t2.start(); t3.start(); } }
Output should be :
Thread 1 Produced : A Thread 2 Produced : B Thread 3 Produced : C Thread 1 Produced : D Thread 2 Produced : E Thread 3 Produced : F
As pointed out before, if you want to do this "one after another", you actually don't need multiple threads. However, you can achieve this by using a
Semaphore:
int numberOfThreads = 3; Semaphore semaphore = new Semaphore(1); for (int i = 1; i <= numberOfThreads; i++) { new Thread(() -> { try { semaphore.acquire(); for (char c : "ABCDEFGHIJKLMNOPQRSTUVWXYZ".toCharArray()) { System.out.println(Thread.currentThread().getName() + " produced: " + c + "."); } } catch (InterruptedException e) { // NOP } finally { semaphore.release(); } }, "Thread " + i).start(); }
I recommend exploring
java.util.concurrent which is available since Java 5. It's a great help to keep your concurrent code concise and simple compared with Java's low-level concurrency primitives such as
wait and
notify. If you're really interested in that topic, Brian Goetz's "Java Concurrency in Practice" is a must-read.
EDIT:
public class ConcurrentAlphabet { private Thread current; public static void main(String[] args) { new ConcurrentAlphabet().print(3, "ABCDEFGHIJKLMNOPQRSTUVWXYZ".toCharArray()); } public void print(int numberOfThreads, char[] alphabet) { Thread[] threads = new Thread[numberOfThreads]; for (int i = 1; i <= numberOfThreads; i++) { int offset = i - 1; threads[offset] = new Thread(() -> { Thread me = Thread.currentThread(); Thread next = threads[(offset + 1) % numberOfThreads]; for (int index = offset; index < alphabet.length; index += numberOfThreads) { synchronized (this) { while (me != current) { try { wait(); } catch (InterruptedException e) { /* NOP */ } } System.out.println(me.getName() + " produced: " + alphabet[index] + "."); current = next; notifyAll(); } } }, "Thread " + i); } current = threads[0]; for (Thread t : threads) { t.start(); } } }
I can see that
ReentrantLock is around 50% faster than
synchronized and
AtomicInteger around 100% faster. Why is there such a difference in the execution time of these three synchronization methods:
synchronized blocks,
ReentrantLock and
AtomicInteger (or whatever class from the
Atomic package).
Are there any other popular and extended synchronizing methods aside than these ones?
I think that you are making a common mistake in evaluating those 3 elements for comparison.
Basically, a ReentrantLock is something that allows you more flexibility when you are synchronizing blocks, compared with the synchronized keyword. The Atomic classes adopt a different approach, based on CAS (Compare And Swap), to manage updates in a concurrent context.
I suggest you read, in depth, the bible of concurrency for the Java platform:
Java Concurrency in Practice - Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes & Doug Lea
There's a big difference between having a deep knowledge of concurrency, knowing what a language offers you to solve concurrency problems, and taking advantage of multithreading.
In terms of performance, it depends on the current scenario.
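To make the CAS point concrete, here is a minimal sketch of the three styles applied to the same kind of counter; the atomic version avoids blocking altogether by retrying a compare-and-swap internally:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class Counters {
    private int a;                                   // guarded by synchronized
    private int b;                                   // guarded by the explicit lock
    private final ReentrantLock lock = new ReentrantLock();
    private final AtomicInteger c = new AtomicInteger();

    public synchronized void incA() {                // intrinsic lock
        a++;
    }

    public void incB() {                             // explicit lock: more flexible (tryLock, fairness, ...)
        lock.lock();
        try {
            b++;
        } finally {
            lock.unlock();
        }
    }

    public void incC() {                             // lock-free: CAS loop inside incrementAndGet()
        c.incrementAndGet();
    }
}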
If I have the following code
class SomeClass { ... public synchronized void methodA() { .... } public synchronized void methodB() { .... } }
This would synchronize on the 'this' object.
However, if my main objective here is to make sure multiple threads don't use methodA (or methodB) at the same time, but they CAN use methodA AND methodB concurrently,
then is this kind of design restrictive? Since here thread1 locks the object (the monitor associated with the object) for running methodA, but meanwhile thread2 is also waiting on the object lock, even though methodA and methodB can run concurrently.
Is this understanding correct?
If yes, is this the kind of situation where we use a synchronized block on a private dummy object, so that methodA and methodB can run in parallel on different threads, but not methodA (or methodB) itself on different threads?
Thanks.
You've answered the question yourself: use one lock object per method and you're safe.
private final Object lockA = new Object(); private final Object lockB = new Object(); public void methodA() { synchronized(lockA){ .... } } public void methodB() { synchronized(lockB){ .... } }
For more advanced locking mechanisms (e.g.
ReentrantLock), read Java Concurrency in Practice by Brian Goetz et al. You should also read Effective Java by Josh Bloch, it also contains some items about using
synchronized.
I have a Java program that runs many small simulations. It runs a genetic algorithm, where each fitness function is a simulation using parameters on each chromosome. Each one takes maybe 10 or so seconds if run by itself, and I want to run a pretty big population size (say 100?). I can't start the next round of simulations until the previous one has finished. I have access to a machine with a whack of processors in it and I'm wondering if I need to do anything to make the simulations run in parallel. I've never written anything explicitly for multicore processors before and I understand it's a daunting task.
So this is what I would like to know: To what extent and how well does the JVM parallel-ize? I have read that it creates low level threads, but how smart is it? How efficient is it? Would my program run faster if I made each simulation a thread? I know this is a huge topic, but could you point me towards some introductory literature concerning parallel processing and Java?
Thanks very much!
Update: Ok, I've implemented an ExecutorService and made my small simulations implement Runnable and have run() methods. Instead of writing this:
Simulator sim = new Simulator(args); sim.play(); return sim.getResults();
I write this in my constructor:
ExecutorService executor = Executors.newFixedThreadPool(32);
And then each time I want to add a new simulation to the pool, I run this:
RunnableSimulator rsim = new RunnableSimulator(args); executor.execute(rsim); return rsim.getResults();
The
RunnableSimulator::run() method calls the
Simulator::play() method, neither have arguments.
I think I am getting thread interference, because now the simulations error out. By error out I mean that variables hold values that they really shouldn't. No code from within the simulation was changed, and before the simulation ran perfectly over many many different arguments. The sim works like this: each turn it's given a game-piece and loops through all the location on the game board. It checks to see if the location given is valid, and if so, commits the piece, and measures that board's goodness. Now, obviously invalid locations are being passed to the commit method, resulting in index out of bounds errors all over the place.
Each simulation is its own object right? Based on the code above? I can pass the exact same set of arguments to the
RunnableSimulator and
Simulator classes and the runnable version will throw exceptions. What do you think might cause this and what can I do to prevent it? Can I provide some code samples in a new question to help?
You can also look at the new fork/join framework by Doug Lea. One of the best books on the subject is certainly Java Concurrency in Practice. I would strongly recommend you take a look at the fork/join model.
I'm looking for a good online introduction to memory barriers and the usual pitfalls in Java code:
synchronized too often or not often enough
volatile and
final
I'd be especially interested in code which shows the behavior and/or examples how to solve common problems (like creating a map that several threads can access and where values are added lazily).
I know you said online, but Java Concurrency In Practice is the java concurrency guide these days.
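For the concrete example mentioned above - a map that several threads read and whose values are added lazily - the usual modern answer is ConcurrentHashMap.computeIfAbsent, which handles both the visibility and the "compute only once" concerns; a sketch (the value class is a made-up placeholder):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class LazyCache {
    private final ConcurrentMap<String, ExpensiveValue> cache = new ConcurrentHashMap<>();

    public ExpensiveValue get(String key) {
        // Atomically computes the value the first time the key is seen;
        // later readers see the fully constructed value (safe publication).
        return cache.computeIfAbsent(key, ExpensiveValue::new);
    }

    static class ExpensiveValue {
        ExpensiveValue(String key) {
            // ... costly initialisation ...
        }
    }
}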
I'm trying to implement multithreading in my Java GUI application to free up the interface when a couple of intensive methods are run. I'm primarily from a C# development background and have used Threads in that environment a couple of times, not having much difficulty of it all really.
Roughly:
C#
Now onto the Java app itself: it's a GUI application that has a few buttons that perform different actions. The application plays MIDI notes using the MIDI API, and I have functions such as play, stop and adding individual notes. (A key thing to note is that I do not play MIDI files but manually create the notes/messages, playing them through a track).
There are three particular operations I want to run in their own thread
I have a class called MIDIControl that contains all of the functionality necessary such as the actual operations to play,stop and generate the messages I need. There is an instance of this object created in the FooView.Java class for the GUI form itself, this means for example:
I've looked at implementing threads through Java and from what I've seen it's done in a different manner to the C# method, can anybody explain to me how I could implement threads in my situation?
I can provide code samples if necessary, thanks for your time.
Java Concurrency in Practice is your guide. Please also have a look at SwingWorker. Remember that all UI-related changes (either to a component's model or its properties) should always be done on the Event Dispatch Thread.
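A skeletal SwingWorker sketch for one of those long-running MIDI operations (the play action and button are placeholders standing in for the MIDIControl instance and UI from the question): the heavy work runs off the EDT, and the UI update happens back on the EDT in done().

import javax.swing.JButton;
import javax.swing.SwingWorker;

public class PlaybackWorker extends SwingWorker<Void, Void> {
    private final Runnable playAction;   // stand-in for midiControl::play from the question
    private final JButton playButton;

    public PlaybackWorker(Runnable playAction, JButton playButton) {
        this.playAction = playAction;
        this.playButton = playButton;
    }

    @Override
    protected Void doInBackground() {
        playAction.run();                // long-running work, off the Event Dispatch Thread
        return null;
    }

    @Override
    protected void done() {
        playButton.setEnabled(true);     // runs on the EDT: safe to touch Swing components
    }
}

// Usage, from a button's ActionListener (already on the EDT):
//   playButton.setEnabled(false);
//   new PlaybackWorker(midiControl::play, playButton).execute();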
I'm using a service that reads messages from
Kafka and pushes it into
Cassandra.
I'm using a threaded architecture for the same.
There are say,
k threads consuming from Kafka topic. These write into a queue, declared as:
public static BlockingQueue<>
Now there are a number of threads, say
n, which write into Cassandra. Here is the code that does that:
public void run(){ LOGGER.log(Level.INFO, "Thread Created: " +Thread.currentThread().getName()); while (!Thread.currentThread().isInterrupted()) { Thread.yield(); if (!content.isEmpty()) { try { JSONObject msg = content.remove(); // JSON for(String tableName : tableList){ CassandraConnector.getSession().execute(createQuery(tableName, msg)); } } catch (Exception e) { } } } }
content is the BlockingQueue used for read-write operations.
I'm extending the
Thread class in the implementation of threading and there are a fixed number of threads that continue execution unless interrupted.
The problem is, this is using too much CPU. Here is the first line of the
top command:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 46232 vishran+ 20 0 3010804 188052 14280 S 137.8 3.3 5663:24 java
Here is the output of
strace on a thread of this process:
strace -t -p 46322 Process 46322 attached ....and so on
The reason I am using
Thread.yield() is because of this
If you want any other information for debugging, please let me know.
Now the question is, how can CPU utilization be minimized?
From the looks of your code, it seems that your consumer threads are always checking whether content is available. Therefore, your threads are always running and never idle (waiting for someone to notify them), so your CPU is always doing something, even if it is just repeatedly yielding the current thread.
while (!Thread.currentThread().isInterrupted()) {
Thread.yield();
if (!content.isEmpty()) {
You are clearly trying to solve the producer-consumer problem that many of us have faced somewhere in our programming careers.
What you're currently doing is having the consumer proactively and continually check whether it has something to consume.
The easiest and least CPU-intensive way of solving it is:
Check out this example as it contains the simplest way to do it. You may want to revisit Java Concurrency in Practice for more in-depth help.
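A minimal, self-contained sketch of the consumer side built around the blocking behaviour (the string messages and the process() call are stand-ins for the JSON messages and Cassandra writes in the question): take() parks the thread until an element arrives, so there is no spinning and essentially no CPU cost while idle.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueConsumerDemo {
    private static final BlockingQueue<String> CONTENT = new ArrayBlockingQueue<>(1000);

    public static void main(String[] args) {
        Thread consumer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    // take() blocks until a message is available: no busy-wait, no yield()
                    String msg = CONTENT.take();
                    process(msg);                       // stand-in for the database write
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();     // restore the flag and let the thread exit
            }
        });
        consumer.start();
        CONTENT.offer("hello");                         // the producer side (the Kafka readers)
    }

    private static void process(String msg) {
        System.out.println("writing: " + msg);
    }
}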
In the following code (copied from Java Concurrency in Practice Chapter 2, section 2.5, Listing 2.8):
@ThreadSafe public class CachedFactorizer implements Servlet { @GuardedBy("this") private BigInteger lastNumber; @GuardedBy("this") private BigInteger[] lastFactors; @GuardedBy("this") private long hits; @GuardedBy("this") private long cacheHits; public synchronized long getHits() { return hits; } public synchronized double getCacheHitRatio() { return (double) cacheHits / (double) hits; } public void service(ServletRequest req, ServletResponse resp) { BigInteger i = extractFromRequest(req); BigInteger[] factors = null; synchronized (this) { ++hits; if (i.equals(lastNumber)) { ++cacheHits; factors = lastFactors.clone(); // questionable line here } } if (factors == null) { factors = factor(i); synchronized (this) { lastNumber = i; lastFactors = factors.clone(); // and here } } encodeIntoResponse(resp, factors); } }
why the
factors,
lastFactors arrays are cloned? Can't it be simply written as
factors = lastFactors; and
lastFactors = factors;? Just because the
factors is a local variable and it is then passed to
encodeIntoResponse, which can modify it?
Hope the question is clear. Thanks.
Is the following code threadsafe ?
public static Entity getInstance(){ //the constructor below is a default one. return new Entity(); }
Thread safety is about access to shared data between different threads. The code in your example doesn't access shared data by itself, but whether it's thread-safe depends on whether the constructor accesses data that could be shared between different threads.
There are a lot of subtle and hard issues to deal with with regard to concurrent programming. If you want to learn about thread safety and concurrent programming in Java, then I highly recommend the book Java Concurrency in Practice by Brian Goetz.
It boils down to one thread submitting a job via some service. The job is executed in some TPExecutor. Afterwards this service checks for results and throws an exception in the original thread under certain conditions (the job exceeds the maximum number of retries, etc.). The code snippet below roughly illustrates this scenario in legacy code:
import java.util.concurrent.CountDownLatch; public class IncorrectLockingExample { private static class Request { private final CountDownLatch latch = new CountDownLatch(1); private Throwable throwable; public void await() { try { latch.await(); } catch (InterruptedException ignoredForDemoPurposes) { } } public void countDown() { latch.countDown(); } public Throwable getThrowable() { return throwable; } public void setThrowable(Throwable throwable) { this.throwable = throwable; } } private static final Request wrapper = new Request(); public static void main(String[] args) throws InterruptedException { final Thread blockedThread = new Thread() { public void run() { wrapper.await(); synchronized (wrapper) { if (wrapper.getThrowable() != null) throw new RuntimeException(wrapper.getThrowable()); } } }; final Thread workingThread = new Thread() { public void run() { wrapper.setThrowable(new RuntimeException()); wrapper.countDown(); } }; blockedThread.start(); workingThread.start(); blockedThread.join(); workingThread.join(); }
}
Sometimes (not reproducible on my box, but it happens on a 16-core server box) the exception isn't reported to the original thread. I think this is because happens-before is not enforced (e.g. 'countDown' happens before 'setThrowable') and the program continues to work (but it should fail). I would appreciate any help on how to resolve this case. The constraints are: release in a week, and minimal impact on the existing codebase.
The code above (as now updated) should work as you expected without the use of further synchronisation mechanisms. The memory barrier and its corresponding 'happens-before' relationship is enforced by the use of the
CountDownLatch
await() and
countDown() methods.
Actions prior to "releasing" synchronizer methods such as Lock.unlock, Semaphore.release, and CountDownLatch.countDown happen-before actions subsequent to a successful "acquiring" method such as Lock.lock, Semaphore.acquire, Condition.await, and CountDownLatch.await on the same synchronizer object in another thread.
If you are dealing with concurrency on a regular basis get yourself a copy of 'Java Concurrency in Practice', it's the Java concurrency bible and will be well worth its weight on your bookshelf :-).
I've programmed in a number of languages, but I am not aware of deadlocks in my code.
I took this to mean it doesn't happen.
Does this happen frequently (in programming, not in the databases) enough that I should be concerned about it?
If you get a chance take a look at first few chapters in Java Concurrency in Practice.
Deadlocks can occur in any concurrent programming situation, so it depends on how much concurrency you deal with. Several examples of concurrent programming are: multi-process, multi-thread, and libraries that introduce multi-threading. UI frameworks and event handling (such as timer events) can be implemented with threads. Web frameworks can spawn threads to handle multiple web requests simultaneously. With multicore CPUs you might see concurrency issues more visibly than before.
If A is waiting for B, and B is waiting for A, the circular wait causes the deadlock. So it also depends on the type of code you write. If you use distributed transactions, you can easily cause that type of scenario. Without distributed transactions, you risk things like money being lost between bank accounts.
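The classic lock-ordering form of that circular wait, as a sketch: each thread holds one lock and waits forever for the other.

public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (LOCK_A) {
                pause();                         // give the other thread time to grab LOCK_B
                synchronized (LOCK_B) { }        // waits forever: B is held by the other thread
            }
        }).start();

        new Thread(() -> {
            synchronized (LOCK_B) {
                pause();
                synchronized (LOCK_A) { }        // waits forever: A is held by the first thread
            }
        }).start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}

Acquiring the locks in a single global order in both threads removes the cycle and hence the deadlock.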
I was just wondering if it is still necessary to ensure synchronicity in an invokeLater() Runnable.
I am encountering deadlock and need to overcome it while maintaining concurrency.
Would this be an example of good code?:
private String text; private void updateText() { SwingUtilities.invokeLater(new Runnable() { public void run() { synchronized(FrameImpl.this) { someLabel.setText(text); } } }); }
Sorry for the rather bad example, but we must assume that
text is being modified by different threads, cannot be injected, and is reliant on a correct value.
Is this the proper solution or will I unintentionally create a deadlock problem by sending synchronized code off into an unknown context..?
Thanks.
A better solution would be something like this:
public class Whatever { private String text; private final Object TEXT_LOCK = new Object(); public void setText(final String newText) { synchronized (TEXT_LOCK) { text = newText; } SwingUtilities.invokeLater(new Runnable() { public void run() { someLabel.setText(newText); } }); } public String getText() { synchronized (TEXT_LOCK) { return text; } } }
This will ensure that if two threads try to call
setText concurrently then they will not clobber each other. The first thread in will set the value of
text and enqueue a UI update with that value. The second thread will also set the value of
text and enqueue a second UI update.
The end result is that the UI will eventually show the most recent text value, but the internal
text variable will immediately contain the most recent value.
A couple of notes:
Using a separate lock object (TEXT_LOCK) means you are not vulnerable to code somewhere else locking the monitor on the Whatever instance and inadvertently causing a deadlock. Best to always keep tight control of your lock objects. It's also best to minimize the size of your synchronized blocks.
You could instead make the setText method synchronized, subject to the caveat that it does make you potentially vulnerable to deadlock as above.
Access to text also needs to be synchronized even though Strings are immutable. There are subtleties to the Java memory model that mean you always need to synchronize around variables that can be read/written by multiple threads.
Check out Brian Goetz's Java Concurrency in Practice for a great dive into the tricky parts of concurrency (including the memory model weirdness).
A common way of gaining access to a field is to synchronize the getters and setters. A simple example with an int would look like:
private int foo = 0;

public synchronized int get() { return this.foo; }

public synchronized void set(int bar) { this.foo = bar; }
Now, while this is a safe way of making the access thread safe, it also reveals that only one thread can read
foo at a time.
If many threads were to read foo very often, and only sometimes update this variable, it would be a big waste. The getter could instead be called by multiple threads simultaneously without any problem.
Are there any established patterns about how to deal with this? Or how would you get around this in the most elegant way?
If you are just setting and getting the variable, then you can use volatile and remove the synchronized keyword from the methods.
If you are doing any operations on the integer like addition you should use
AtomicInteger.
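A minimal sketch of both options (assuming the field really is either just read and written, or incremented):

import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    // Fine if threads only ever assign or read the value: reads never block.
    private volatile int foo = 0;

    public int get() { return foo; }
    public void set(int bar) { foo = bar; }

    // Needed as soon as you do read-modify-write operations such as "add".
    private final AtomicInteger total = new AtomicInteger();

    public int addAndGetTotal(int delta) { return total.addAndGet(delta); }
}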
EDIT:
If there is a scenario where the field is read many times and updated only a few times, then there is a pattern called IMMUTABLE. This is one way to achieve thread safety.
class ImmutableClass {
    private final int a;

    public ImmutableClass(int a) {
        this.a = a;
    }

    public int getA() {
        return a;
    }

    /*
     * No setter methods, making it immutable and thread safe.
     */
}
For more detailed knowledge on immutability, Java Concurrency in Practice is the best resource I would suggest you read.
For more advanced needs there is also the read/write lock; a sketch follows below.
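A minimal sketch of the read/write-lock approach, which lets many readers proceed concurrently while a writer gets exclusive access:

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class SharedValue {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private int foo = 0;

    public int get() {
        lock.readLock().lock();        // many readers may hold this at once
        try {
            return foo;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void set(int bar) {
        lock.writeLock().lock();       // exclusive: blocks readers and other writers
        try {
            foo = bar;
        } finally {
            lock.writeLock().unlock();
        }
    }
}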
I am writing a multithreading application that has several threads (approximately 25), with each thread performing a specific process and then updating the database, which then gives the next thread permission to process and do the same. Basically, thread 1 does its process, then updates the db as complete; when thread 2 reads the db as complete, it begins processing, and the process continues until thread 25. Does anyone know how this is possible in Java?
Your question is incredibly general but the approach I'd take would be something like:
designate one thread as the controller thread. Its job is to listen for the worker threads to complete their processing. The simplest way to do this is with a semaphore object and the wait/notify methods - the controller thread would take a lock on the semaphore and then call wait.
create your worker threads, each with its own semaphore object against which each thread takes a lock on and again calls
wait.
the trigger to start the processing (this could be the application running, the user clicking a button, etc) obtains a lock on the controller's semaphore and calls
notify against it waking the controller thread. The controller's job is to pick one of the worker threads from the pool, obtain a lock on its semaphore and the call
notify causing the worker to awake. The controller then calls
wait on its own semaphore.
the worker thread can then read the database, do the processing and write back to the database before it calls notify on the controller's semaphore, causing the process to start again with the controller calling notify on one of the worker threads' semaphores and wait against its own.
Finally, a word of warning: this is a very brief outline of what's required to implement the general behaviour you've described. Threading is possibly one of the most misunderstood topics in computer science imho and very, very easy to get wrong. Before leaping into a multi-threaded system make sure at the very least you've read Brian Goetz's Java Concurrency In Practice.
"Since a ConcurrentHashMap cannot be locked for exclusive access, we cannot use client-side locking to create new atomic operations such as put-if-absent, as we did for Vector"
Why can't we just acquire the lock in order to implement additional atomic methods and keep the collection thread-safe (like the synchronized collections returned by the Collections.synchronizedXxx factory)?
Why? Because the implementation does not support it. Straight from the
ConcurrentHashMap JavaDocs:
There is not any support for locking the entire table in a way that prevents all access
...which is, by definition, "exclusive access."
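Instead of client-side locking, ConcurrentHashMap already exposes the common compound operations as atomic methods of its own; putIfAbsent is a standard ConcurrentMap method. A minimal sketch:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class PutIfAbsentDemo {
    private final ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();

    public Integer register(String key, Integer value) {
        // Atomic: returns the existing value, or null if this call inserted the mapping.
        return map.putIfAbsent(key, value);
    }
}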
The bible is Java Concurrency in Practice.
Concurrent Programming in Java(TM): Design Principles and Patterns (2nd Edition) by Doug Lea. This is the book by the author of java.util.concurrent package. Java Concurrency in Practice is a very good book, too.
Can somebody tell me how I can find out "how many threads are in deadlock condition" in a Java multi-threading application? What is the way to find out the list of deadlocked threads?
I heard about Thread Dump and Stack Traces, but I don't know how to implement it.
Please let me know with your comments and suggestions.
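One programmatic route is the standard java.lang.management API; findDeadlockedThreads and getThreadInfo are real JDK methods, while the surrounding class below is only an illustrative sketch. A thread dump taken with jstack (or kill -3 / Ctrl+Break) shows the same information manually.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockDetector {
    public static void printDeadlockedThreads() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] ids = bean.findDeadlockedThreads();   // null if no threads are deadlocked
        if (ids == null) {
            System.out.println("No deadlocked threads");
            return;
        }
        for (ThreadInfo info : bean.getThreadInfo(ids, true, true)) {
            System.out.println(info);                // includes the lock each thread is waiting on
        }
    }
}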
If you want to learn about the new concurrent features in Java 5 you could do a lot worse than getting a copy of Java Concurrency in Practice by Brian Goetz (Brian Goetz and a number of the coauthors designed the Java 5 concurrency libraries). It is both highly readable and authoritative, combining practical examples and theory.
The executive summary of the new concurrent utilities is as follows:
Currently I have a Thread running a Socket listening for connections. When it receives a connection, it needs to upload data gathered in the main thread (i.e. grab data from main thread). However I pass an instance of the Object, but it's never updated with the data that's collected while waiting for a connection.
Is there a proper way to do this? I've googled around and can't seem to find a concrete answer.
Could someone point me in the right direction?
Hopefully this makes sense, but i'll try to explain more with an examples.
class MainThread {
    private void MainThread() {
        SomeObj obj = new SomeObj("DATA Needed");
        SecondThread second = new SecondThread(obj);
        second.start();
    }
}

class SecondThread extends Thread {
    SomeObj obj;

    public void SecondThread(Object obj) {
        this.obj = obj;
    }

    public void run() {
        // Listening for connection
        // Connection get!
        // Get updated data (Object obj) from main thread.
        // Upload
    }
}
I appreciate any help you can give me. Please let me know if I am approaching this completely wrong! I would rather learn AND get answers than just get answers.
Thanks so much!
There is a limited set of events that ensure a write in one thread is visible in another. Thread creation is one of them, so any data written into obj initially should be available in the second thread.
One option would be to synchronize on obj. If the main thread only modified it by calling its synchronized methods, and the second thread got the data from a synchronized method on obj, the main thread's writes would be visible in the second thread.
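Concretely, the "synchronize on obj" option might look like this minimal sketch (SomeObj's methods here are illustrative, not the poster's actual class):

class SomeObj {
    private String data;

    public synchronized void setData(String data) {  // the main thread writes through here
        this.data = data;
    }

    public synchronized String getData() {           // the second thread reads through here
        return data;
    }
}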
If you really want to learn about multi-threading in Java, I recommend Java Concurrency in Practice
From the book Java Concurrency in Practice, Chapter 12.1 Testing for Correctness, specifically in sub-section 12.1.3 Testing Safety (where the author wants to set up test cases for testing the data-race safety of a bounded buffer class): "Random number generation can create couplings between classes and timing artifacts because most random number generator classes are thread safe and therefore introduce additional synchronization. Giving each thread its own RNG allows a non-thread-safe RNG to be used."
I do not understand the point made by the author against using Random number generators for generating the test inputs. Specifically the line Random number generation can create couplings between classes and timing artifacts is not clear to me.
Random number generation can create couplings between classes and timing artifacts is not clear to me.
This is more clear by taking into account the next sentence:
because most random number generator classes are thread safe and therefore introduce additional synchronization
It's the memory synchronization that may change the timing of your program. If you look into
Random, you can see that it uses an
AtomicInteger under the covers so using it will cause read and write memory barriers as part of the generation of the test data which may change how the other threads see data and the timing of your application overall.
Which classes and timing artifacts is he referring to here?
Any class that uses threads and relies on memory synchronization may be affected. Basically all threads and classes that they call.
What kind of couplings the RNG can create?
As @Bill the Lizard commented on, the book is saying that by using a RNG, the timing of the program then is relying on or affected by the RNG synchronization.
The real lesson here is that the test data you inject into program should not change the timing of your program if possible. It is often difficult and can be impossible but the goal is to simulate the application behavior (timing, input, output, ...) as much as possible in the test.
In terms of a solution, you could use another simple random algorithm that was not synchronized. You could also generate a class that stored 10000 random numbers (or however many you need) beforehand and then handed them out without synchronization. But by using a class in your tests that did memory synchronization, you are changing the timing of your program.
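A minimal sketch of the "own, non-thread-safe RNG per thread" idea (a plain xorshift generator; in current JDKs ThreadLocalRandom serves a similar purpose):

class TestRandom {
    private long seed;

    TestRandom(long seed) { this.seed = (seed == 0) ? 1 : seed; }

    // Plain xorshift: no synchronization, so it adds no memory barriers to the test.
    int nextInt() {
        seed ^= seed << 13;
        seed ^= seed >>> 7;
        seed ^= seed << 17;
        return (int) seed;
    }
}

// Each test thread gets its own generator, e.g. seeded from its thread id:
// TestRandom rng = new TestRandom(Thread.currentThread().getId());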
Hi I have a GUI application that is working fine. I created a socket server. When I create a new object of the Server class in program the GUI application stops responding.
This is my server class. If I do
Server s = new Server();
in my main application it stops working. How should I add it? Making a new thread? I tried
Thread t = new Thread(new Server()); t.start();
but the problem persisted. Please, I'll appreciate your help.
package proj4;

import java.net.*;
import java.io.*;

public class Server implements Runnable {
    ServerSocket serverSocket = null;
    Socket clientSocket = null;
    ObjectOutputStream out = null;
    ObjectInputStream in = null;
    int port;
    static int defaultPort = 30000;
    boolean isConnected = false;
    Thread thread;
    DataPacket packet = null;

    public Server(int _port) {
        try {
            serverSocket = new ServerSocket(_port);
            serverSocket.setSoTimeout(1000 * 120); // 2 minutes time out
            isConnected = true;
            System.out.println("server started successfully");
            thread = new Thread(this);
            thread.setDaemon(true);
            //thread.run();
        } catch (IOException e) {
            System.err.print("Could not listen on port: " + port);
            System.exit(1);
        }
        try {
            System.out.println("Waiting for Client");
            clientSocket = serverSocket.accept();
            System.out.println("Client Connected");
            thread.run();
        } catch (IOException e) {
            System.err.println("Accept failed.");
            System.exit(1);
        }
        try {
            out = new ObjectOutputStream(clientSocket.getOutputStream());
            System.out.println("output stream created successfully");
        } catch (IOException e) {
            e.printStackTrace();
        }
        try {
            in = new ObjectInputStream(clientSocket.getInputStream());
            System.out.println("input stream created successfully");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public Server() {
        this(defaultPort); // server listens to port 30000 as default
    }

    public void run() {
        System.out.println("Thread running, listening for clients"); // debugging purposes
        while (isConnected) {
            try {
                packet = this.getData();
                Thread.sleep(0);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public DataPacket getData() {
        try {
            packet = (DataPacket) in.readObject();
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }
        return packet;
    }

    public void sendData(DataPacket dp) {
        try {
            out.writeObject(dp);
        } catch (IOException e) {
            e.printStackTrace();
        }
        try {
            out.flush();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void closeConnection() throws IOException {
        out.close();
        in.close();
        clientSocket.close();
        serverSocket.close();
    }
}
Your
Server constructor blocks, potentially indefinitely, in
accept().
Two things about Swing programs:
This means that if the server is being started from the Swing event thread -- that is, if it is being started in response to a button click or the like -- then yes you must spawn another thread for your Server object. Otherwise you guarantee that the Swing event thread will be blocked until your thread exits.
You say that your application still stops responding even when you spawn another thread for your server? Ensure that you're calling
Thread.start() and not
run(), or you will accidentally still block yourself by running the "new Thread" actually in your own Thread.
NOTES:
Thread.sleep(0); in your run() loop is not guaranteed to do anything whatsoever. If you have a single CPU machine, this may fairly be implemented as a no-op, allowing the same thread to keep running.
You probably want isConnected to be volatile -- otherwise there is no guarantee that changes to this variable will be seen by any thread other than the one where it is changed.
You never set isConnected to false, anywhere, so your run() will run until the JVM is stopped or until that Thread takes a RuntimeException.
Don't call accept on your ServerSocket until you are in your Thread's run() method! Otherwise your constructor will block waiting for a connection and will not return control to the event thread!
your code is:
thread = new Thread(this); thread.setDaemon(true); //thread.run();
When you had
thread.run() not commented out, you were not starting a new Thread! To do that, you need to do
thread.start(). Instead, you were running this new Thread (which will never stop, for reason #3 above) in the same thread that invoked the constructor. The way your code is written right now, all IOExceptions are logged, but otherwise swallowed. You probably want to set
isConnected to
false on any
IOException, as well as in
closeConnection().
How to determine part of what Java code needs to be synchronized? Are there any unit testing technics?
Samples of code are welcome.
Code needs to be synchronized when there might be multiple threads that work on the same data at the same time.
Whether code needs to be synchronized is not something that you can discover by unit testing. You must think and design your program carefully when your program is multi-threaded to avoid issues.
A good book on concurrent programming in Java is Java Concurrency in Practice; a small illustration of the problem follows below.
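To make "multiple threads working on the same data" concrete, here is a minimal sketch of the classic lost-update problem and one possible fix (the class and numbers are made up for illustration):

class UnsafeCounter {
    private int count = 0;

    public void increment() { count++; }                    // read-modify-write: not atomic, needs synchronization

    public synchronized void safeIncrement() { count++; }   // one possible fix

    public static void main(String[] args) throws InterruptedException {
        UnsafeCounter c = new UnsafeCounter();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.count);                         // usually less than 200000 -- updates were lost
    }
}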
I am currently learning the basics of threads in Java and I am trying to write a simple ThreadGroup program. I wrote it the same as the tutorial website, though I'm getting a different type of output. Below is my code, for which I'm getting different output.
public class ThreadGroupDemo implements Runnable {
    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName()); // get the name of the current thread.
    }

    public static void main(String[] args) {
        ThreadGroupDemo runnable = new ThreadGroupDemo();
        ThreadGroup tg1 = new ThreadGroup("Parent Group"); // Creating thread Group.

        Thread t1 = new Thread(tg1, new ThreadGroupDemo(), "one");
        t1.start();
        t1.setPriority(Thread.MAX_PRIORITY);

        Thread t2 = new Thread(tg1, new ThreadGroupDemo(), "second");
        t2.start();
        t2.setPriority(Thread.NORM_PRIORITY);

        Thread t3 = new Thread(tg1, new ThreadGroupDemo(), "Three");
        t3.start();

        System.out.println("Thread Group name : " + tg1.getName());
        tg1.list();
    }
}
I am getting Output :
Thread Group name : Parent Group Three java.lang.ThreadGroup[name=Parent Group,maxpri=10] second one Thread[one,10,Parent Group] Thread[second,5,Parent Group] Thread[Three,5,Parent Group]
The output should be like :
one two three Thread Group Name: Parent ThreadGroup java.lang.ThreadGroup[name=Parent ThreadGroup,maxpri=10] Thread[one,5,Parent ThreadGroup] Thread[two,5,Parent ThreadGroup] Thread[three,5,Parent ThreadGroup]
I am not able to understand why this is happening. Can setting the priority help with it?
You can't predict the order of execution of your threads, even with priorities set. You have no control over the scheduling; it's your OS that decides.
A good book about concurrency in Java: Java Concurrency in Practice.
I am currently studying concurrent programming patterns. Consider the solution to the producer consumer problem with bounded buffer using semaphores, presented on wikipedia.
What if, at some point, the producer says: this is the last item I'm making. How could I make the program terminate?(the consumer will still wait until it is informed that there is something in the buffer).
Similarly, what if the consumer says: I don't want to consume anymore. How can the producer be informed, so that the program exits? (the producer is waiting to have an available spot to put something).
From Java Concurrency in Practice Book 7.2.3
Another way to convince a producer consumer service to shut down is with a poison pill: a recognizable object placed
on the queue that means "when you get this, stop."
Go through the 7th chapter of this book
Cancellation and Shutdown
I am a Java EE developer, and I want to gain skills in concurrent development.
Could you provide me with some assignments, ideas, or other resources - just for learning and practising concurrent programming?
There's a brilliant book about Java concurrency called "Java Concurrency in Practice". I think this is the best starting point for diving deep into advanced concurrency.
Java Concurrency in Practice (Amazon)
I have blogged about new concurrency solutions with the Spring framework 3 and Java EE 6 here.
It explains how to execute asynchronous methods declaratively with the
@Async or the Java EE's
@Asynchronous annotation.
These annotations are just a way to abstract away the complex concurrency logic.
You can configure Spring to use the excellent
Executor class to do the concurrency logic. The Executor class was introduced in Java 5 and is explained well in the Java Concurrency in Practice book together with the other classes in the
java.util.concurrent package.
The article also demonstrates how to use the same
Executor service in the code and by the Spring framework. Which enables you to use the same thread pool for both your programmatic concurrency logic and your concurrency logic handled by an application container.
Else, you can learn a lot from the Java documentation. Read about all the classes in the concurrent package and especially the Executor class. This is at least my most used documentation.
I am new to Java and the following might be obvious, but it is puzzling to me. Consider the following code:
while (1 > 0) {
    if (x != 0) {
        // do something
    }
}
The x variable is changed in a different thread. However, the code in the if statement is never executed even when x is not zero. If I change the code by the following
while (1 > 0) {
    System.out.println("here");
    if (x != 0) {
        // do something
    }
}
the code in the if statement is now executed when x is no longer zero. I suspect this has to do with the rules of the Java compiler, but it is very confusing to me. Any help clarifying this would be greatly appreciated.
If
x is changing in a different thread then you are probably seeing a side-effect of the fact that you have not synchronized access to that variable.
The Java memory and threading model is pretty complex, so I'd recommend you get a copy of Java Concurrency in Practice by Brian Goetz and have a read.
The short answer is to make sure that access to
x is enclosed in a
synchronized block:
while (1 > 0) {
    int temp;
    synchronized (this) {
        temp = x;
    }
    if (temp != 0) {
        // Do something
    }
}
And similarly in the code that modifies
x.
Note that this example stores
x in a temporary variable, because you want synchronized blocks to be as small as possible - they enforce mutual exclusion locks so you don't want to do too much in there.
Alternatively, you could just declare
x to be
volatile, which will probably be sufficient for your use case. I'd suggest you go with the
synchronized version because you'll eventually need to know how to use
synchronized properly, so you might as well learn it now.
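For completeness, the volatile variant might look like this minimal sketch (wrapped in a hypothetical class only to keep it self-contained):

class Poller {
    private volatile int x;                 // every read of x now sees the latest write from any thread

    void setX(int value) { x = value; }     // called from the other thread

    void poll() {
        while (true) {
            if (x != 0) {
                // do something
                break;
            }
        }
    }
}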
I am reading the book Java Concurrency in Practice where it says,
CyclicBarrier allows a fixed number of parties to rendezvous repeatedly at a barrier point and is useful in parallel iterative algorithms that break down a problem into a fixed number of independent subproblems.
Can someone give an example of how it breaks down a problem into multiple independent subproblems?
You have to break the problem down into multiple independent subproblems yourself.
Barriers ensure that each party completes the first subproblem before any of them start on the second subproblem. This ensures that all of the data from the first subproblem is available before the second subproblem is started.
A CyclicBarrier specifically is used when the same barrier is needed again and again when each step is effectively identical. For example, this could occur when doing any sort of multithreaded reality simulation which is done in steps. The CyclicBarrier would ensure that each thread has completed a given step before all threads will begin the next step.
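A minimal sketch of that step-wise pattern (the worker count, step count and the per-worker "slice" of the array are made up for illustration):

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class StepSimulation {
    private static final int WORKERS = 4;
    private static final int STEPS = 10;

    public static void main(String[] args) {
        double[] cells = new double[WORKERS * 100];
        // The barrier action runs once per step, after every worker has finished that step.
        CyclicBarrier barrier = new CyclicBarrier(WORKERS, () -> System.out.println("step finished"));

        for (int w = 0; w < WORKERS; w++) {
            final int slice = w;
            new Thread(() -> {
                try {
                    for (int step = 0; step < STEPS; step++) {
                        updateSlice(cells, slice);   // independent subproblem for this worker
                        barrier.await();             // wait for the other workers before the next step
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }

    private static void updateSlice(double[] cells, int slice) {
        for (int i = slice * 100; i < (slice + 1) * 100; i++) {
            cells[i] += 1.0;
        }
    }
}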
I am seeing a lot of classes being added to Java which are not thread safe.
For example, StringBuilder is not thread safe while StringBuffer is, and StringBuilder is recommended over StringBuffer.
Also various collection classes are not thread safe.
Isn't being thread safe a good thing?
Or am I just stupid and don't yet understand the meaning of being thread safe?
Because thread safety makes things slower, and not everything has to be multi-threaded.
Consider reading this article to learn the basics of thread safety:
When you are comfortable enough with threads (or not), consider reading this book; it has great reviews:
I have read an article concerning atomic operations in Java but still have some doubts that need to be clarified:
volatile int num;

public void doSomething() {
    num = 10;                 // write operation
    System.out.println(num);  // read
    num = 20;                 // write
    System.out.println(num);  // read
}
So I have done 4 operations (w-r-w-r) in 1 method; are they atomic operations? What will happen if multiple threads invoke the doSomething() method simultaneously?
volatile ensures that if you have a thread A and a thread B, any change to that variable will be seen by both. So if at some point thread A changes this value, thread B could in the future look at it.
Atomic operations ensure that the execution of the said operation happens "in one step." This is somewhat confusing because looking at the code 'x = 10;' may appear to be "one step", but it actually requires several steps on the CPU. An atomic operation can be formed in a variety of ways, one of which is by locking using
synchronized:
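A minimal sketch, not the original poster's exact code, of what such a synchronized grouping might look like:

class NumHolder {
    private int num;

    public synchronized void doSomething() {
        num = 10;                      // no other thread holding this lock can interleave here
        System.out.println(num);
        num = 20;
        System.out.println(num);
    }

    public synchronized int getNum() { // readers must use the same lock to see a consistent value
        return num;
    }
}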
As you asked in a comment earlier, even if you had three separate atomic steps that thread A was executing at some point, there's a chance that thread B could begin executing in the middle of those three steps. To ensure the thread safety of the object, all three steps would have to be grouped together to act like a single step. This is part of the reason locks are used.
A very important thing to note is that if you want to ensure that your object can never be accessed by two threads at the same time, all of your methods must be synchronized. You could create a non-synchronized method on the object that would access the values stored in the object, but that would compromise the thread safety of the class.
You may be interested in the
java.util.concurrent.atomic library. I'm also no expert on these matters, so I would suggest a book that was recommended to me: Java Concurrency in Practice
I'm just starting to dip into concurrency, so bear with me if I ask some obvious/dumb things. I'm trying to take the first steps to revamp a model I have to take advantage of Java concurrency. Without getting into specifics, I have a portion of the model that loads some file and then, when given requests, returns some corresponding data on the file. My challenge is to make this run on its own thread now, so that while it still only processes one request at a time, it can queue up requests made by other parts of the code running on their own threads.
After trying to teach myself concurrency through the great tutorials at jenkov.com I used what I learned and basically created something a lot like a BlockingQueue where there's an object that acts as a lock where requests go in as threads that queue up, and when the current thread is finished processing it unlocks for the next in line. So threads are continually being created, put into wait, started, then eventually destroyed, as each request is a new thread.
What I thought of now, though, is instead doing it more like what I originally pictured, where there is only one thread that waits for instructions, then processes those instructions. So instead of requests coming in as threads, there's some singular thread that waits until it has a request, processes that, processes any other requests that have queued up, and if there's no more, waits again. The (supposed) advantage being that requests come in as variables/instructions and threads aren't continually being created/destroyed.
So the question is: is there an advantage to rewriting it to be more like that? I know creating/destroying threads probably doesn't create a whole lot of overhead (as long as I'm using the wait/notify functions instead of, say, a busy wait), but this is a type of model that has to run literally millions of iterations sometimes, and even marginal gains would multiply in situations like that.
Don't create a bunch of Threads; use an ExecutorService, initialize it with a SingleThreadExecutor, and give your users (client classes) an API they can call to submit jobs to the Executor. This gives you a lot of future flexibility by just replacing (or specializing) your executor.
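A minimal sketch of that design (RequestService and the job shape are made-up names, not a specific library API):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RequestService {
    // One worker thread; submitted jobs queue up and run strictly one at a time.
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // The API your client classes call instead of creating threads themselves.
    public Future<String> submitRequest(Callable<String> job) {
        return executor.submit(job);
    }

    public void shutdown() {
        executor.shutdown();
    }
}

// Usage from another thread might look like:
// Future<String> answer = service.submitRequest(() -> lookUpDataFor("some key"));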
Here's a second vote for the comment: Go read "Java Concurrency In Practice" by Brian Goetz - I cannot recommend this highly enough.
While you are waiting for your book to arrive:
I'm reading the book Java concurrency in practice and when I read about the relation between immutability and thread-safety I tried to get deeper. So, I discovered that there is at least a use case in which the construction of an immutable class in Java can lead to the publishing of a non properly constructed object.
According to this link, if the fields of the class are not declared final, the compiler could reorder the statements that need to be done in order to construct the object. In fact, according to this link, to build an object the JVM needs to do these non-atomic operations:
My question is: what about Scala? I know that Scala is based on the concurrency model of Java, so it is based on the same Java Memory Model. For example, are
case classes thread-safe wrt the above construction problem?
Thanks to all.
I've made some deep search on Stackoverflow and on the Internet. There is not so much information about the question I've made. I found this question on SO that has an interesting answer: Scala final vs val for concurrency visibility.
As proposed by @retronym I've used
javap -p A.class to destructure a
.class file containing a
case class and compiled by
scalac. I found that the class
case class A(val a: Any)
is compiled by the scala compiler into a corresponding Java class that declares its unique attribute
a as
final.
Compiled from "A.scala" public class A implements scala.Product,scala.Serializable { // Final attribute private final java.lang.Object a; public static <A extends java/lang/Object> scala.Function1<java.lang.Object, A > andThen(scala.Function1<A, A>); public static <A extends java/lang/Object> scala.Function1<A, A> compose(scala .Function1<A, java.lang.Object>); public java.lang.Object a(); public A copy(java.lang.Object); public java.lang.Object copy$default$1(); public java.lang.String productPrefix(); public int productArity(); public java.lang.Object productElement(int); public scala.collection.Iterator<java.lang.Object> productIterator(); public boolean canEqual(java.lang.Object); public int hashCode(); public java.lang.String toString(); public boolean equals(java.lang.Object); public A(java.lang.Object); }
As we know, a
case class in Scala generates automatically a bunch of utilities for us. But also a simple class like this
class A1(val a: Any)
is translated into a Java class that has a
final attribute.
Summarizing, I think we can say that a Scala class that has only
val attributes is translated into a corresponding Java class that has
final attributes only. Due to the JMM of the JVM, this Scala class should be thread-safe during the construction process.
I'm trying to process multiple csv at the same time. My code looks like this :
public class CSVMain {
    private static int count = 3;

    public static void main(String[] a) {
        ExecutorService e = Executors.newFixedThreadPool(300);
        for (int i = 0; i < count; i++)
            e.execute(new WebRunner(""));
        e.shutdown();
    }

    static class WebRunner implements Runnable {
        private final String url;

        public WebRunner(String url) {
            this.url = url;
        }

        @Override
        public void run() {
            try {
                long now = System.currentTimeMillis();
                URL MyUrl = new URL(url);
                HttpURLConnection conn = (HttpURLConnection) MyUrl.openConnection();
                conn.connect();
                IOUtils.toByteArray(MyUrl.openStream());
                System.out.println(new DateTime().toString("HH:mm:ss,SSS") + " finish thread"
                        + Thread.currentThread().getId() + " in " + (System.currentTimeMillis() - now));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
If I set the static
count variable to 1, it would complete in 600-700 ms with my connection. When it's 2, I'll get around 1100-1400ms, when 3, it's 1700-1900ms and so on.
The statistics feels sequential, not parallel.
Am I missing something here?
There are several points that contribute to this.
First, you are measuring the run-time of individual Runnables, not the total run-time for the sequential runs (i.e. you run the
WebRunner three times, one after the other) versus the concurrent runs (you give them to a thread pool, like you are doing here).
Another important fact is that concurrency doesn't mean that things will execute in parallel. Concurrency is not parallelism. They might, or they might not. Concurrency is also highly non-deterministic. That is, when the threads are allowed to run and for how long they run depends on a lot of things, including the operating system. Depending on the circumstances, they might even end up running sequentially (regardless of the size of your thread pool). And they might even appear to be executing in parallel, but they might do so in a round-robin fashion, without much benefit from concurrency and with the overhead of switching. There are a lot of subtleties to concurrency. For an in-depth guide, the best resource is this book.
Also, even if the tasks do run in parallel, there are other restrictions on performance. For example, you might have limited network bandwidth, and when the tasks run in parallel they might end up competing for that limited resource. The same is true for both computing power and memory.
Another very important thing is that what you are doing is a naive version of micro-benchmarking. In order to get more reliable benchmarking stats, you should probably use something like the JMH framework. Here's a relevant article on the pitfalls of naive Java benchmarking and an article on JMH.
The TL;DR; version is: do not try to assign meaning to the performance of concurrent applications without a lot of rigor.
I think I found more bugs in my web application. Normally, I do not worry about concurrency issues, but when you get a ConcurrentModificationException, you begin to rethink your design.
I am using JBoss Seam in conjuction with Hibernate and EHCache on Jetty. Right now, it is a single application server with multiple cores.
I briefly looked over my code and found a few places that haven't thrown an exception yet, but I am fairly sure they can.
The first servlet filter I have basically checks if there are messages to notify the user of an event that occurred in the background (from a job, or another user). The filter simply adds messages to the page in a modal popup. The messages are stored on the session context, so it is possible another request could pull the same messages off the session context.
Right now, it works fine, but I am not hitting a page with many concurrent requests. I am thinking that I might need to write some JMeter tests to ensure this doesn't happen.
The second servlet filter logs all incoming requests along with the session. This permits me to know where the client is coming from, what browser they're running, etc. The problem I am seeing more recently is on image gallery pages (where there are many requests at about the same time), I end up getting a concurrent modification exception because I'm adding a request to the session.
The session contains a list of requests, this list appears to be being hit by multiple threads.
@Entity public class HttpSession { protected List<HttpRequest> httpRequests; @Fetch(FetchMode.SUBSELECT) @OneToMany(mappedBy = "httpSession") public List<HttpRequest> getHttpRequests() {return(httpRequests);} ... } @Entity public class HttpRequest { protected HttpSession httpSession; @ManyToOne(optional = false) @JoinColumn(nullable = false) public HttpSession getHttpSession() {return(httpSession);} ... }
In that second servlet filter, I am doing something of the sort:
httpSession.getHttpRequests().add(httpRequest); session.saveOrUpdate(httpSession);
The part that errors out is when I do some comparison to see what changed from request to request:
for(HttpRequest httpRequest:httpSession.getHttpRequests())
That line there blows up with a concurrent modification exception.
Things to walk away with: 1. Will JMeter tests be useful here? 2. What books do you recommend for writing web applications that scale under concurrent load? 3. I tried placing synchronized around where I think I need it, ie on the method that loops through the requests, but it still fails. What else might I need to do?
I added some comments:
I had thought about making the logging of the http requests a background task. I can easily spawn a background task to save that information. I am trying to remember why I didn't evaluate that too much. I think there is some information that I would like to have access to on the spot.
If I made it asynchronous, that would speed up the throughput quite a bit - well I'd have to use JMeter to measure those differences.
I would still have to deal with the concurrency issue there.
Thanks,
Walter
It's caused because the list has been modified by another request while you're still iterating over it in one request. Replacing the List with a ConcurrentLinkedQueue should fix this particular problem.
As to your other questions:
1: Will JMeter tests be useful here?
Yes, it is certainly useful to stress-test webapplications and spot concurrency bugs.
2: What books do you recommend for writing web applications that scale under concurrent load?
Not specifically tied to web applications, but more to concurrency in general, the book Java Concurrency in Practice is the most recommended one in that area. You can perfectly apply what you learn to web applications as well; they are a perfect real-world example of "heavy concurrent" applications.
3: I tried placing synchronized around where I think I need it, ie on the method that loops through the requests, but it still fails. What else might I need to do?
You basically need to synchronize any access to the list on the same lock. But just replacing by
ConcurrentLinkedQueue is easier.
This is listing 8.1 in Java Concurrency In Practice:
public class ThreadDeadlock {
    ExecutorService exec = Executors.newSingleThreadExecutor();

    public class RenderPageTask implements Callable<String> {
        public String call() throws Exception {
            Future<String> header, footer;
            header = exec.submit(new LoadFileTask("header.html"));
            footer = exec.submit(new LoadFileTask("footer.html"));
            String page = renderBody();
            // Will deadlock -- task waiting for result of subtask
            return header.get() + page + footer.get();
        }
    }
}
It's in
Chapter 8: Thread Pools > Section 8.1.1 Thread starvation deadlock
and has the caption:
"Task that deadlocks in a single-threaded
Executor. Don't do this."
Why does this result in a deadlock? I thought
header.get() is called, and then
footer.get() is called, with each result appended to the string. Why wouldn't a single-threaded Executor be enough to run these one after the other?
Relevant chapter text:
ThreadDeadlock in Listing 8.1 illustrates thread starvation deadlock. RenderPageTask submits two additional tasks to the Executor to fetch the page header and footer, renders the page body, waits for the results of the header and footer tasks, and then combines the header, body, and footer into the finished page. With a single-threaded executor, ThreadDeadlock will always deadlock. Similarly, tasks coordinating amongst themselves with a barrier could also cause thread starvation deadlock if the pool is not big enough.
The actual deadlock will occur as soon as an instance of RenderPageTask is submitted to the very same executor instance where it submits its subtasks.
For example, add
exec.submit(new RenderPageTask());
and you will experience a deadlock.
Of course this can be considered a problem of the surrounding code
(i.e., you could simply define and document that your
RenderPageTask should not be submitted to this executor instance), but a good design would avoid such pitfalls entirely.
A possible solution for this would be to use
ForkJoinPool, which uses work stealing to avoid this form of possible deadlocks.
What would you suggest as a road map for becoming very proficient in writing multithreaded applications - beyond "tinkering"?
I am a C# developer - would branching off into some other languages help this endeavor?
Does the parallel addition to .NET 4.0 hide things that you should know in order to make it easier?
Do Java classes have an instance on machine (JVM) level if they contain only static methods and fields?
And if yes, what are the effects of static methods and fields when doing multithreading? Any rules of thumb?
There is no such thing as "static classes" in Java. There are static inner classes, but I presume that your question is not about that type of class.
Classes are loaded once per classloader, not per virtual machine; this is an important difference. For example, application servers like Tomcat have different classloaders per deployed application; this way each application is independent (not completely independent, but better than nothing).
The effects for multithreading are the effects of shared data structures in multithreading, nothing special to Java. There are a lot of books on this subject, some centered on Java and others that explain different concurrency models (really interesting books).
Essentially, what I want to do is start all my threads, pause them all, then resume them all, using the multithreading approach. I am just looking for a simple solution to this. I'm not sure if I have to use a timer or what. Right now when I run it, the threads are like being executed in random order (I guess the PC is just randomly picking which ones it wants to run at a certain time).
class ChoppingThread extends Thread {
    public void run() {
        for (int j = 40; j != 0; j -= 10)
            System.out.println("Chopping vegetables...(" + j + " seconds left)");
    }
}

class MixingThread extends Thread {
    public void run() {
        for (int k = 60; k != 0; k -= 10)
            System.out.println("Mixing sauces...(" + k + " seconds left)");
    }
}

class TenderizingThread extends Thread {
    public void run() {
        for (int j = 50; j != 0; j -= 10)
            System.out.println("Tenderizing meat...(" + j + " seconds left)");
    }
}

class MultiThreadTasking {
    public static void main(String[] args) {
        ChoppingThread ct = new ChoppingThread();
        MixingThread mt = new MixingThread();
        TenderizingThread tt = new TenderizingThread();

        System.out.println("\nWelcome to the busy kitchen.");

        // putting threads into ready state
        ct.start();
        mt.start();
        tt.start();
    }
}
To coordinate them use a CyclicBarrier.
To launch them all at the same time use a CountDownLatch.
Google the two classes above for many examples and explanations.
To fully understand what is happening read the Java Concurrency In Practice book.
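As a rough sketch of the CountDownLatch "start gate" idea applied to the kitchen example (only one of the tasks is shown; the other tasks would be wired the same way):

import java.util.concurrent.CountDownLatch;

class KitchenStartGate {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch startSignal = new CountDownLatch(1);

        Runnable chopping = () -> {
            try {
                startSignal.await();                       // wait here until the gate opens
                for (int j = 40; j != 0; j -= 10)
                    System.out.println("Chopping vegetables...(" + j + " seconds left)");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        new Thread(chopping).start();
        new Thread(chopping).start();                      // the mixing and tenderizing tasks would follow the same pattern

        System.out.println("\nWelcome to the busy kitchen.");
        startSignal.countDown();                           // release all waiting threads at once
    }
}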
For learning purposes I have tried to implement a queue data structure plus a consumer/producer chain that is thread-safe; also for learning purposes I have not used the wait/notify mechanism:
SyncQueue :
package syncpc;

/**
 * Created by Administrator on 01/07/2009.
 */
public class SyncQueue {
    private int val = 0;
    private boolean set = false;

    boolean isSet() {
        return set;
    }

    synchronized public void enqueue(int val) {
        this.val = val;
        set = true;
    }

    synchronized public int dequeue() {
        set = false;
        return val;
    }
}
Consumer :
package syncpc;

/**
 * Created by Administrator on 01/07/2009.
 */
public class Consumer implements Runnable {
    SyncQueue queue;

    public Consumer(SyncQueue queue, String name) {
        this.queue = queue;
        new Thread(this, name).start();
    }

    public void run() {
        while (true) {
            if (queue.isSet()) {
                System.out.println(queue.dequeue());
            }
        }
    }
}
Producer :
package syncpc;

import java.util.Random;

/**
 * Created by Administrator on 01/07/2009.
 */
public class Producer implements Runnable {
    SyncQueue queue;

    public Producer(SyncQueue queue, String name) {
        this.queue = queue;
        new Thread(this, name).start();
    }

    public void run() {
        Random r = new Random();
        while (true) {
            if (!queue.isSet()) {
                queue.enqueue(r.nextInt() % 100);
            }
        }
    }
}
Main :
import syncpcwn.*;

/**
 * Created by Administrator on 27/07/2015.
 */
public class Program {
    public static void main(String[] args) {
        SyncQueue queue = new SyncQueue();
        new Producer(queue, "PROCUDER");
        new Consumer(queue, "CONSUMER");
    }
}
The problem here is that if the isSet method is not synchronized, I get an output like this:
97, 55
and the program just continues running without outputting any value, while if the isSet method is synchronized the program works correctly.
I don't understand why; there is no deadlock, and the isSet method just queries the set instance variable without setting it, so there is no race condition.
set needs to be
volatile:
private volatile boolean set = false;
This ensures that all readers see the updated value when a write completes. Otherwise they will end up seeing the cached value. This is discussed in more detail in this article on concurrency, and also provides examples of different patterns that use
volatile.
Now the reason that your code works with
synchronized is probably best explained with an example.
synchronized methods can be written as follows (i.e., they are equivalent to the following representation):
public class SyncQueue {
    private int val = 0;
    private boolean set = false;

    boolean isSet() {
        synchronized (this) {
            return set;
        }
    }

    public void enqueue(int val) {
        synchronized (this) {
            this.val = val;
            set = true;
        }
    }

    public int dequeue() {
        synchronized (this) {
            set = false;
            return val;
        }
    }
}
Here, the instance is itself used as a lock. This means that only thread can hold that lock. What this means is that any thread will always get the updated value because only one thread could be writing the value, and a thread that wants to read
set won't be able to execute
isSet until the other thread releases the lock on
this, at which point the value of
set will have been updated.
If you want to understand concurrency in Java properly you should really read Java: Concurrency In Practice (I think there's a free PDF floating around somewhere as well). I'm still going through this book because there are still many things that I do not understand or am wrong about.
As matt forsythe commented, you will run into issues when you have multiple consumers. This is because they could both check
isSet() and find that there is a value to dequeue, which means that they will both attempt to dequeue that same value. It comes down to the fact that what you really want is for the "check and dequeue if set" operation to be effectively atomic, but it is not so the way you have coded it. This is because the same thread that initially called
isSet may not necessarily be the same thread that then calls
dequeue. So the operation as a whole is not atomic which means that you would have to synchronize the entire operation.
From Sun's tutorial:.
Q1. Do the above statements mean that if an object of a class is going to be shared among multiple threads, then all instance methods of that class (except getters of final fields) should be made synchronized, since instance methods process instance variables?
In order to understand concurrency in Java, I recommend the invaluable Java Concurrency in Practice.
In response to your specific question, although synchronizing all methods is a quick-and-dirty way to accomplish thread safety, it does not scale well at all. Consider the much maligned Vector class. Every method is synchronized, and it works terribly, because iteration is still not thread safe.
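To make that concrete, here is the standard illustration (a sketch, not from the tutorial): size() and get() are each synchronized, but the pair is not atomic, so traversal needs client-side locking on the Vector itself.

import java.util.Vector;

class VectorIteration {
    // Unsafe: another thread may remove an element between size() and get(i),
    // causing an ArrayIndexOutOfBoundsException even though both calls are synchronized.
    static void printAllUnsafe(Vector<String> v) {
        for (int i = 0; i < v.size(); i++) {
            System.out.println(v.get(i));
        }
    }

    // Client-side locking on the Vector makes the whole traversal atomic.
    static void printAllSafe(Vector<String> v) {
        synchronized (v) {
            for (int i = 0; i < v.size(); i++) {
                System.out.println(v.get(i));
            }
        }
    }
}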
This is a general question; it could apply to C as well, I guess.
If I have (in my case) an HTTP request class that is invoked from a wrapper function, and this wrapper function is the public API, then inside the wrapper function I create a new Request object that is supposed to perform the request with the parameters coming from the wrapper function.
Do I need to wrap the request object in a thread (I have a thread pool class that executes worker threads)?
Or will creating an object on the stack for each request do?
for example:
public void Wrapper(String a, String b) {
    // im doing ..
    MyRequest req = new MyRequest(a, b); // will do the http request
}

or to do:

public void Wrapper(String a, String b) {
    // im doing ..
    MyThreadPool.GetInstance().RunTask(new MyRequest(a, b)); // will do the http request
}
The question isn't very clear, but from what can be inferred, the pertinent question is whether creating local variables is sufficient for thread-safety. The answer is yes.
From Java Concurrency in Practice:
Accessing shared, mutable data requires using synchronization;one way to avoid this requirement is to not share. If data is only accessed from a single thread, no synchronization is needed.
It should be remembered that all objects are stored on the heap. The items on the stack are primitives and references to objects on the heap, and are termed as local variables and are always one-word wide (except for long and double values); these variables are not to be confused with the concept of method-local variables in the Java programming language (which people incorrectly assume to be stored on the stack).
By using local variables, one ensures that the objects on the heap are accessible only to the current thread of execution, unless of course, any attempt was made to share such objects with other threads (in which case appropriate synchronization techniques needs to be employed).
Here is my understanding of how Java threads get scheduled when Java launches a thread via the start() API of the java.lang.Thread class, running on modern implementations of an OS (like Solaris 9).
Used the term
LWP because term
kernel thread is generally used in kernel programming to launch a thread.
So, every Java thread created using the java.lang.Thread::start() API has a 1:1 mapping with a native thread created using pthread_create(), thr_create() or CreateThread() on POSIX, Solaris and Windows platforms respectively. In turn, each native thread has a 1:1 mapping with an LWP.
My question:
1) Can I say there is no Java thread scheduling policy in the user-space JVM to schedule Java threads anymore, based on the 1:1 threading model in the above diagram?
2) Supplementary: in a dual-core processor scenario, do those 2 LWPs (each representing a JVM process) have an equal chance to execute simultaneously (in parallel)?
Note: as a Java beginner, I need this clarity.
The diagram details the inner workings of the jvm, and how it abstracts from the lower level operating system threads to the java thread model. This is how the JVM runs on the above operating systems.
To answer your questions directly:
1) Every time you create a thread, you create a thread for the operating system to manage. The JVM has a layer of abstraction between you and the operating system, so that you can work with the same thread model on different systems. In general, for unoptimized Java code, a Java thread is an OS thread.
2) You don't have any guarantees on executing in parallel. Depending on the CPU load, the operating system may need the resources elsewhere for higher priority systems. When writing multithreaded applications, write as though whatever is going on outside the thread is unknown.
If you want to know more about the java multithreaded model, I recommend this book. This book is old, but it is still relevant. It was written by the Java language architect, and it goes into details about the Java multithreading model
I am confused about whether we should create our own threads in a servlet or not, as servlets have a threading mechanism internally. If yes, how can we make sure the program is thread safe? How do we implement a thread-safe mechanism in servlets?
You are asking two different questions:
I am confused about whether we should create our own threads in a servlet or not, as servlets have a threading mechanism internally.
Normally, you should not start threads in a Java EE application. If you need separate threads, make sure you use a scheduler service that your application knows about, so that it has the chance to shut down the threads when the application is shut down. Quartz is what's used most of the time.
If yes, how can we make sure the program is thread safe? How do we implement a thread-safe mechanism in servlets?
Servlets are just like any other Java class. Find a tutorial on thread safety or read Java Concurrency in Practice.
I am reading the book Java Concurrency in Practice; in section 3.2, it gives the following code example to illustrate implicitly allowing the this reference to escape (don't do this, especially in a constructor):
public class ThisEscape {
    public ThisEscape(EventSource source) {
        source.registerListener(
            new EventListener() {
                public void onEvent(Event e) {
                    doSomething(e);
                }
            }
        );
    }
}
The book then says :
When ThisEscape publishes the EventListener, it implicitly publishes the enclosing ThisEscape instance as well, because inner class instances contain a hidden reference to the enclosing instance.
I understand the above words from Java's perspective, but I can't come up with an example of how the above code's EventListener escaping the enclosing this reference could be harmful. In what way?
For example, if I create a new instance of
ThisEscape:
ThisEscape myEscape = new ThisEscape(mySource);
Then, what? How is it harmful now? In which way it is harmful?
Could someone please use above code as the base and explain to me how it is harmful?
======= MORE ======
The book is trying to say something like the anonymous
EventListener holds a hidden reference to the containing class instance which is not yet fully constructed. I want to know in example, how could this un-fully constructed reference be misused, and I prefer to see a code example about this point.
The book gives the right way of doing things, which is to use a static factory method as below:
public static SafeListener newInstance(EventSource source) {
    SafeListener safe = new SafeListener();
    source.registerListener(safe.listener);
    return safe;
}
I just don't get the point of the whole thing.
Consider this slightly modified example:
public class ThisEscape {
    private String prefixText = null;

    private void doSomething(Event e) {
        System.out.println(prefixText.toUpperCase() + e.toString());
    }

    public ThisEscape(EventSource source) {
        source.registerListener(
            new EventListener() {
                public void onEvent(Event e) {
                    doSomething(e); // hidden reference to `ThisEscape` is used
                }
            }
        );
        // What if an event is fired at this point from another thread?
        // prefixText is not yet assigned,
        // and doSomething() relies on it being not-null
        prefixText = "Received event: ";
    }
}
This would introduce a subtle and very hard-to-find bug, for example in multithreaded applications.
Consider that the event source fires and event after
source.registerListener(...) has completed, but before
prefixText was assigned. This could happen in a different thread.
In this case, the
doSomething() would access the not-yet-initialized
prefixText field, which would result in a
NullPointerException. In other scenarios, the result could be invalid behavior or wrong calculation results, which would be even worse than an exception. And this kind of error is extremely hard to find in real-world applications, mostly due to the fact that it happens sporadically.
The hidden reference to the enclosing instance would hinder the garbage collector from cleaning up the "enclosing instance" in certain cases.
This would happen if the enclosing instance isn't needed anymore by the program logic, but the instance of the inner class it produced is still needed.
If the "enclosing instance" in turn holds references to a lot of other objects which aren't needed by the program logic, then it would result in a massive memory leak.
A code example.
Given a slightly modified
ThisEscape class form the question:
public class ThisEscape {
    private long[] aVeryBigArray = new long[4711 * 815];

    public ThisEscape(EventSource source) {
        source.registerListener(
            new EventListener() {
                public void onEvent(Event e) {
                    doSomething(e);
                }

                private void doSomething(Event e) {
                    System.out.println(e.toString());
                }
            }
        );
    }
}
Please note that the inner anonymous class (which extends/implements
EventListener) is non-static and thus contains a hidden reference to the instance of the containing class (
ThisEscape).
Also note that the anonymous class doesn't actually use this hidden reference: no non-static methods or fields from the containing class are used from within the anonymous class.
Now this could be a possible usage:
// Register an event listener to print the event to System.out
new ThisEscape(myEventSource);
With this code we wanted to achieve that an event is registered within
myEventSource. We do not need the instance of
ThisEscape anymore.
But assuming that the
EventSource.registerListener(EventListener) method stores a reference to the event listener created within
ThisEscape, and the anonymous event listener holds a hidden reference to the containing class instance, the instance of
ThisEscape can't be garbage-collected.
I've intentionally put a big non-static
long array into
ThisEscape, to demonstrate that the
ThisEscape class instance could actually hold a lot of data (directly or indirectly), so the memory leak can be significant.
I am working on an application that processes incoming messages. I am not proficient in Java multithreading and I am asking for your help, folks. Is there anything wrong with the following app structure?
There is a main application class with a stopRequested boolean field, and there is an internal Runnable class that listens for incoming messages and processes them. There is also another thread that sets stopRequested to true.
Is this approach workable and reliable, or am I wrong?
Below there is a part of my code:
class ApplicationClass {
    // we set this var in another thread
    // when it is necessary to stop
    private boolean stopRequested = false;

    public ApplicationClass() {
        // starting message processing thread
        (new Thread(new MessageProcessing())).start();
    }

    private class MessageProcessing implements Runnable {
        public void run() {
            while (!stopRequested) {
                if (getNewMessagesCount() > 0) {
                    processNewMessages();
                }
            }
        }
    }
}
Thank you.
There are a few things to think about.
stopRequested needs to be volatile to resolve visibility problems (a thread on another core may not see the change otherwise).
If getNewMessagesCount() doesn't block, then your while loop will spin and consume the core; this will give you the lowest latency but ties up the entire core.
Because the thread is started from the constructor, getMessageCount() and processNewMessages() can be invoked before ApplicationClass is finished being created. Since the instance of ApplicationClass could be in an incomplete state you could find a rather nasty bug. (For the same reason you never want to have your code subscribe as a listener to events from a constructor, by the way.) Check out Effective Java for more background on this topic.
while (!stopRequested && !Thread.currentThread().isInterrupted())
Writing correct concurrent programs is hard. I highly recommend reading Java Concurrency in Practice; it will save you a lot of pain.
I don't like to lock up my code with synchronized(this), so I'm experimenting with using AtomicBooleans. In the code snippet, XMPPConnectionIF.connect() makes a socket connection to a remote server. Note that the variable _connecting is only ever used in the connect() method; whereas _connected is used in every other methods that needs to use the _xmppConn. My questions are listed after the code snippet below.
private final AtomicBoolean _connecting = new AtomicBoolean( false );
private final AtomicBoolean _connected = new AtomicBoolean( false );
private final AtomicBoolean _shuttingDown = new AtomicBoolean( false );

private XMPPConnection _xmppConn;

/**
 * @throws XMPPFault if failed to connect
 */
public void connect() {
    // 1) you can only connect once
    if( _connected.get() )
        return;

    // 2) if we're in the middle of completing a connection,
    //    you're out of luck
    if( _connecting.compareAndSet( false, true ) ) {
        XMPPConnectionIF aXmppConnection = _xmppConnProvider.get();
        boolean encounteredFault = false;
        try {
            aXmppConnection.connect();                   // may throw XMPPException
            aXmppConnection.login( "user", "password" ); // may throw XMPPException
            _connected.compareAndSet( false, true );
            _xmppConn = aXmppConnection;
        }
        catch( XMPPException xmppe ) {
            encounteredFault = true;
            throw new XMPPFault( "failed due to", xmppe );
        }
        finally {
            if( encounteredFault ) {
                _connected.set( false );
                _connecting.set( false );
            }
            else
                _connecting.compareAndSet( true, false );
        }
    }
}
Based on my code, is it thread safe to the point that if 2 threads attempt to call connect() at the same time, only one connection attempt is allowed?
In the finally block, I am executing two AtomicBoolean.set(..) calls in succession. Will there be a problem, since during the gap between these 2 atomic calls, some threads might call _connected.get() in other methods?
When using _xmppConn, should I do a synchronized( _xmppConn ) ?
UPDATE Added missing login call into the method.
Keep in mind that using 3 AtomicBooleans is not the same as guarding those three variables with a single lock. It seems to me like the state of those variables constitutes a single state of the object and thus that they should be guarded by the same lock. In your code using atomic variables, it's possible for different threads to update the state of _connected, _connecting, and _shuttingDown independently -- using atomic variables will only ensure that access to the same variable is synchronized between multiple threads.
That said, I don't think synchronizing on this is what you want to do. You only want to synchronize access to the connection state. What you could do is create an object to use as the lock for this state, without taking the monitor on this. Viz:
class Thing {
    Boolean connected;
    Boolean connecting;
    Boolean shuttingDown;

    Object connectionStateLock = new Object();

    void connect() {
        synchronized (connectionStateLock) {
            // do something with the connection state.
        }
    }

    void someOtherMethodThatLeavesConnectionStateAlone() {
        // free range thing-doing, without getting a lock on anything.
    }
}
If you're doing concurrent programming in Java, I would highly recommend reading Java Concurrency In Practice.
I have a general doubt regarding publishing data and data changes across threads. Consider for example the following class.
public class DataRace {
    static int a = 0;

    public static void main() {
        new MyThread().start();
        a = 1;
    }

    public static class MyThread extends Thread {
        public void run() {
            // Access a, b.
        }
    }
}
Lets focus on main().
Clearly
new MyThread().start();
a = 1;
Here we change the shared variable a after MyThread is started, so this may not be a thread-safe publication.
a = 1;
new MyThread().start();
However this time the change in a is safely published across the new thread, since Java Language Specification (JLS) guarantees that all variables that were visible to a thread A when it starts a thread B are visible to thread B, which is effectively like having an implicit synchronization in Thread.start().
new MyThread().start();
int b = 1;
In this case, when a new variable is allocated after both threads have been spawned, is there any guarantee that the new variable will be safely published to all threads? I.e., if var b is accessed by the other thread, is it guaranteed to see its value as 1? Note that I'm not talking about any subsequent modifications to b after this (which certainly need to be synchronized), but the first allocation done by the JVM.
Thanks,
I'm not sure what you're asking here. I think you're talking about thread-safe access to the "a" variable.
The issue is not the order of invocation but the fact that:
- access to a is not thread-safe. So in an example with multiple threads updating and reading a, you won't ever be able to guarantee that the "a" you're reading is the same value as what you updated from (some other thread may have changed the value).
- in a multithreaded environment the JVM does not guarantee that the updated values for a are kept in sync. E.g.
Thread 1: a=1
Thread 2: a=2
Thread 1: print a   <- may return 1
You can avoid this by declaring a "volatile".
As written there are no guarantees at all about the value of a.
BTW, Brian Goetz's Java Concurrency in Practice is a great book on this subject ( and I say that not having gotten all the way through it yet ;) - it really helped me to understand just how involved threading issues can get.
I have a piece of code that looks like this:
Algorithm a = null;
while(a == null) {
    a = grid.getAlgorithm();
}
getAlgorithm() in my Grid class returns some subtype of Algorithm depending on what the user chooses from some options.
My problem is that even after an algorithm is selected, the loop never terminates. However, that's not the tricky bit, if I simply place a System.out.println("Got here"); after my call to getAlgorithm(), the program runs perfectly fine and the loop terminates as intended.
My question is: why does adding that magic print statement suddenly make the loop terminate?
Moreover, this issue first came up when I started using my new laptop, I doubt that's related, but I figured it would be worth mentioning.
Edit: The program in question is NOT multithreaded. The code for getAlgorithm() is:
public Algorithm getAlgorithm () {
    return algorithm;
}
Where algorithm is initially null, but will change value upon some user input.
There are two scenarios:
You did not mention any context around how this code is run. If it is a console based application and you started from a 'main' function, you would know if there was multi-threading. I am assuming this is not the case since you say there is no multithreading. Another option would be that this is a swing application in which case you should read Multithreaded Swing Applications. It might be a web application in which case a similar case to swing might apply.
In any case you could always debug the application to see which thread is writing to the 'algorithm' variable, then see which thread is reading from it.
I hope this is helpful. In any case, you may find more help if you give a little more context in your question. Especially for a question with such an intriguing title as 'Weird Java problem, while loop termination'.
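If the variable really is written from a different thread (for example from a Swing event handler), the usual minimal fix, sketched here under that assumption, is to declare the field volatile so the write becomes visible to the looping thread:

private volatile Algorithm algorithm;   // writes by the UI thread become visible to the polling loop

public Algorithm getAlgorithm() {
    return algorithm;
}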
I have a relatively large amount of code written without threads in mind. I am new at Java, and programming in general, so I am having a little trouble figuring out how to run the program I have built in threads without going back into all my classes and changing them to have a run() method. I can't even imagine how that's possible where I have multiple methods that are meant to be called separately from other classes.
I can't seem to find a way to create a thread for (every/a) new call to the code from the GUI. Say I have a method for inserting data into a database. The method is named and written. I could put a call to that class and method into my main, but what then when I want to call some other method? I have 25+ methods at least, and I can't really see my main class just being overloaded 25 times as "Best practice." Is there a way to create a thread and give it an object to handle dynamically, so to say?
In short: I want to use threads in my program without overloading my main, how do I do so?
Honestly, there is no universal way of converting singlethreaded application to multithreaded because it requires different design depending on your goal. And "multithreaded application" is not really a goal :)
You didn't specify how you built your UI. But if you use Swing and want to do some lengthy task without freezing your user interface, use SwingWorker.
On the general note I'd recommend reading book "Java Concurrency in Practice". It's pretty good.
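A rough sketch of the SwingWorker idea mentioned above, assuming a hypothetical insertIntoDatabase(data) method and a statusLabel component (neither is from the original post): the heavy work runs off the Event Dispatch Thread, and done() runs back on it.

import javax.swing.SwingWorker;

SwingWorker<Void, Void> worker = new SwingWorker<Void, Void>() {
    @Override
    protected Void doInBackground() throws Exception {
        insertIntoDatabase(data);               // long-running call, off the EDT
        return null;
    }

    @Override
    protected void done() {
        statusLabel.setText("Insert finished"); // safe: runs on the EDT
    }
};
worker.execute();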
I'm building a GUI'd application with javaFX that supports a long-running CPU intensive operation, something like Prime95 or Orthos.
One of the problems I've run into is trying to get counters to increment nicely. If you think about an ElapsedTime field with an incrementing counter with millisecond resolution, what I need is a job on the UI thread calling elapsedTimeTextField.setText("00:00:00.001") to happen 1ms before a corresponding call to elapsedTimeTextField.setText("00:00:00.002"). I also need to let the UI thread do more important jobs between those two calls.
Structuring code to do this has been tedious, and has resulted in a number of our controller classes creating threads that simply loop on code similar to:
Thread worker = new Thread(this::doUpdates);
worker.start();

//...

private void doUpdates(){
    while(true){
        String computedTime = computeTimeToDisplay();
        runLaterOnUI(() -> textField.setText(computedTime));
        sleep(DUTY_CYCLE_DOWNTIME);
    }
}
While this does the job, it's unfavorable because:
- it paces itself with sleep()s
- if an exception is thrown in the computeTimeToDisplay() method (or for that fact, in the runLaterOnUI call or the sleep() call) the text field will no longer be updated
- it doesn't have any kind of backoff: if the UI thread is flooded with jobs this code is going to exacerbate the problem
I have addressed each of these concerns reasonably well individually, but I don't have any obvious and reusable idiom for tackling these three problems.
I suspect that the Future, Task, Executor, ServiceExecutor, etc. classes (the classes in the java.util.concurrent package that aren't a lock or a collection) can help me to this goal, but I'm not sure how to use them.
Can somebody suggest some documentation to read and some idioms to follow that will help me in pursuit of these goals? Is there an agreed on idiom --that doesn't involve anonymous classes and contains minimal boiler-plate-- for this kind of concurrent-job?
You question is multi-faceted and I am not going to pretend that I understand all of it. This answer will address only one part of the question.
It doesn't have any kind of backoff: if the UI thread is flooded with jobs this code is going to exacerbate the problem. Some kind of requeueing scheme, whereby the downtime takes into account the latency of the job and some kind of hard-coded sleep is preferable since it means that if the UI job is flooded we're not asking it to do work unduly.
The built-in javafx.concurrent classes such as Task, Service and ScheduledService include facilities to send message updates from a non-UI thread to a UI thread in a way that does not flood the UI thread. You could use those classes directly (which would seem advisable, though perhaps that perception is naive of me as I don't fully understand your requirements). Or you can implement a similar custom facility in your code if you aren't using javafx.concurrent directly.
Here is the relevant code from the Task implementation:
/**
 * Used to send message updates in a thread-safe manner from the subclass
 * to the FX application thread. AtomicReference is used so as to coalesce
 * updates such that we don't flood the event queue.
 */
private AtomicReference<String> messageUpdate = new AtomicReference<>();

private final StringProperty message = new SimpleStringProperty(this, "message", "");

/**
 * Updates the <code>message</code> property. Calls to updateMessage
 * are coalesced and run later on the FX application thread, so calls
 * to updateMessage, even from the FX Application thread, may not
 * necessarily result in immediate updates to this property, and
 * intermediate message values may be coalesced to save on event
 * notifications.
 * <p>
 * <em>This method is safe to be called from any thread.</em>
 * </p>
 *
 * @param message the new message
 */
protected void updateMessage(String message) {
    if (isFxApplicationThread()) {
        this.message.set(message);
    } else {
        // As with the workDone, it might be that the background thread
        // will update this message quite frequently, and we need
        // to throttle the updates so as not to completely clobber
        // the event dispatching system.
        if (messageUpdate.getAndSet(message) == null) {
            runLater(new Runnable() {
                @Override
                public void run() {
                    final String message = messageUpdate.getAndSet(null);
                    Task.this.message.set(message);
                }
            });
        }
    }
}
The code works by ensuring that a runLater call is only made if the UI has processed (i.e. rendered) the last update.
Internally the JavaFX 8 system runs on a pulse system. Unless there is an unusually long time consuming operation on the UI thread or general system slowdown, each pulse will usually occur 60 times a second, or approximately every 16-17 milliseconds.
You mention the following:
what I need is a job on the UI thread to call elapsedTimeTextField.setText("00:00:00.001") to happen 1ms before a corresponding call elapsedTimeTextField.setText("00:00:00.002").
However, you can see from the JavaFX architecture description that updating the text more than 60 times a second is pointless as the additional updates will never be rendered. The sample code above from Task, takes care of this by ensuring that a UI update request is only ever issued at a time that the UI update thread can actually reflect the new value in the UI.
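If you do use the javafx.concurrent classes directly, one hedged sketch of the elapsed-time case could look like this; computeTimeToDisplay() and elapsedTimeTextField are taken from the question, and the thread name is my own choice. updateMessage() coalesces updates for you, and the binding keeps the field current on the FX thread.

Task<Void> timerTask = new Task<Void>() {
    @Override
    protected Void call() throws Exception {
        while (!isCancelled()) {
            updateMessage(computeTimeToDisplay());  // safe from this background thread
            Thread.sleep(10);                       // extra updates are coalesced; UI renders at most ~60 fps
        }
        return null;
    }
};

elapsedTimeTextField.textProperty().bind(timerTask.messageProperty());

Thread t = new Thread(timerTask, "elapsed-time-updater");
t.setDaemon(true);
t.start();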
Some General Advice
This is just advice, it does not directly solve your problem, take it for what you will, some of it might not even be particularly relevant to your situation or problem.
I'm wondering what the difference is between these ways of synchronization
List<Integer> intList = Collections.synchronizedList(new ArrayList<Integer>());

synchronized (intList) {
    //Stuff
}
and using an object lock
Object objectLock = new Object();
List<Integer> intList = new ArrayList<Integer>();

synchronized (objectLock) {
    //Stuff
}
The code you show is not necessarily thread safe yet!!
The only difference between one excerpt and the other is the object you use as a monitor for synchronization. This difference determines which object other threads must synchronize on when they need access to the mutable data you're trying to protect.
A great read for this: Java Concurrency in Practice.
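As a minimal sketch of why the choice of monitor matters with the synchronizedList wrapper: the wrapper synchronizes its own methods on the list itself, so compound actions and iteration in your code must use that same monitor, not a separate lock object.

List<Integer> intList = Collections.synchronizedList(new ArrayList<Integer>());

// Compound action (check-then-act): lock on the list itself, the monitor the wrapper uses internally.
synchronized (intList) {
    if (!intList.contains(42)) {
        intList.add(42);
    }
}

// Iteration also needs the same monitor, otherwise another thread's add/remove
// can cause a ConcurrentModificationException mid-loop.
synchronized (intList) {
    for (Integer i : intList) {
        System.out.println(i);
    }
}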
I've been having this memory leak issue for days and I think I have some clues now. The memory of my java process keeps growing but yet the heap does not increase. I was told that this is possible if I create many threads, because Java threads uses memory outside of the heap.
My Java process is a server-type program, so there are 1000-2000 threads, created and destroyed on an ongoing basis. How do I reclaim the memory used by a Java thread? Do I simply erase all references to the thread object and make sure that it has terminated?
From the Java API docs threads die when:
All threads that are not daemon threads have died, either by returning from the call to the run method or by throwing an exception that propagates beyond the run method.
Threads die when they return from their run() method. When they die they are candidates for garbage collection. You should make sure that your threads release all references to objects and exit the run() method.
I don't think that nulling references to your threads will really do the trick.
You should also check out the new threading facilities in Java 5 and up. Check the package java.util.concurrent in the API documentation here.
I also recommend you check out the book Java Concurrency in Practice. It has been priceless for me.
I've inherited some code that uses Executors.newFixedThreadPool(4); to run the 4 long-lived threads that do all the work of the application.
Is this recommended? I've read the Java Concurrency in Practice book and there does not seem to be much guidance around how to manage long-lived application threads.
What is the recommended way to start and manage several threads that each live for the entire live of the application?
I want to master concurrent programming.
I heard that there are good books on concurrent programming in Java by Doug Lea.
Which book should I read first? Are there other books? If anyone can tell me, please also guide me on how to practice this topic.
Java Concurrency in Practice is the more recent of the two, so I recommend that. It covers lots of stuff, including the new concurrency utilities, which the other doesn't. However, CPiJ also contains stuff which is still relevant and is not repeated in JCiP, so you may want to check that out later too.
Usually I use the first implementation. Couple of days ago I found another. Can anyone explain me the difference between these 2 implementations ? The 2nd implementation is thread safe? What is the advantage of using inner class in the 2nd example?
//--1st Impl
public class Singleton{

    private static Singleton _INSTANCE;

    private Singleton() {}

    public static Singleton getInstance(){
        if(_INSTANCE == null){
            synchronized(Singleton.class){
                if(_INSTANCE == null){
                    _INSTANCE = new Singleton();
                }
            }
        }
        return _INSTANCE;
    }
}

//--2nd Impl
public class Singleton {

    private Singleton() {}

    private static class SingletonHolder {
        private static final Singleton _INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return SingletonHolder._INSTANCE;
    }
}
The first implementation uses what is called a "double checked lock". This is a Very Bad Thing. It looks thread-safe, but in fact it is not.
The second implementation is, indeed, thread-safe.
The explanation for why the first implementation is broken is fairly involved, so I'd recommend you get a copy of Brian Goetz's Java Concurrency in Practice for a detailed explanation. The short version is that the compiler is allowed to assign the _INSTANCE variable before the constructor has completed, which can cause a second thread to see a partially-constructed object.
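For completeness, a hedged sketch of the usual repair for the first version: making the field volatile (on Java 5+) forbids the reordering that lets other threads observe a half-built object, though the holder-class idiom in the second version is still the simpler choice.

public class Singleton {

    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();   // safe: volatile prevents early publication
                }
            }
        }
        return instance;
    }
}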
The Java Tutorials listed a couple of books for further reading regarding threading / concurrency:
Concurrent Programming in Java: Design Principles and Pattern
Java Concurrency in Practice
Concurrency: State Models & Java Programs
(Since going through a book could take a hundred hours,) Out of these three books, which would be the most comprehensive one?
I propose something slightly different: Programming Concurrency on the JVM.
This will explain to you the different models and the different problems with concurrency on the JVM. Not entirely targeted at Java, but at the JVM ecosystem, it will give you a deep understanding, along with the technical tools.
Can synchronized statements be reordered? I.e., can:
synchronized(A) {
    synchronized(B) {
        ......
    }
}
become :
synchronized(B) {
    synchronized(A) {
        ......
    }
}
Yes and no.
The order must be consistent.
Suppose you are creating a transaction between two bank accounts, and always grab the sender's lock first, then grab the receiver's lock. Problem is - say both Dan and Bob want to transfer money to each other at the same time.
Thread 1 might grab Dan's lock, as it processes Dan's transaction to Bob.
Then thread 2 grabs Bob's lock, as it processes Bob's transaction to Dan.
Then, bam, deadlock.
The morals are:
So this is the part of the answer where I guess at other things you might have been trying to ask instead, because the expectation is firmly on me that I act psychic.
The JVM will not acquire the locks in an order different from which you have programmed. How do I know this? Because otherwise it would not be possible to solve the problem in the first half of my answer.
public void execute(Runnable command)
This command object contains the submitted object, but it seems to have been wrapped.
How can I access the submitted object from within a custom thread pool executor? Or is it not such a good idea to try and access the submitted object from inside a ThreadPoolExecutor's before/after/execute methods?
Don't use execute, use submit, which returns a Future, which is a handle to the task. Here's some example code:
ExecutorService service = Executors.newCachedThreadPool();

Callable<String> task = new Callable<String>() {
    public String call() throws Exception {
        return "hello world";
    }
};

Future<String> future = service.submit(task);
Although you can't access the task directly, you can still interact with it:
future.cancel(false);          // won't start task if not already started
String result = future.get();  // blocks until thread has finished calling task.call() and returns result
future.isDone();               // true if complete
You can also interact with the service:
service.shutdown(); //etc
EDITED TO INCORPORATE COMMENTS:
If you want to do some logging, use an anonymous class to override the afterExecute() method, like this:

ThreadPoolExecutor executor =
    new ThreadPoolExecutor(1, 1, 1, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1)) {

        @Override
        protected void afterExecute(Runnable r, Throwable t) {
            // Do some logging here
            super.afterExecute(r, t);
        }
    };
Override other methods as required.
Quick plug: IMHO, the bible for this subject is Java Concurrency in Practice - I recommend you buy it and read it.
How can I make use of parallel in Java? Or do I use normal threads?
What a big topic! A typical solution is to use multi-threading: you need a class that implements the Runnable interface and put your work into its run() method. For details, I suggest you buy a copy of Java Concurrency in Practice.
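A minimal sketch of that idea (the Worker name and printed message are my own, purely illustrative): wrap the work in a Runnable and hand it to a Thread, or, for anything beyond toy examples, to an ExecutorService.

public class Worker implements Runnable {

    @Override
    public void run() {
        System.out.println("working in " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(new Worker());
        Thread t2 = new Thread(new Worker());
        t1.start();   // both pieces of work now run in parallel
        t2.start();
    }
}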
I am implementing a game and I want to ask the player to click on a specific view.
I want my control thread to wait until I get a value back (I have clicked on the view and handled the result). Currently I am doing this by creating a thread, running a method that asks them to click and then entering a while loop that is terminated when the mouse click event changes a variable used in the while loop.
I am writing a game where I have a thread constantly receiving events. On a specific event, I want to prompt the user for a response, but to do this would require me to be on the JavaFX thread (to my knowledge).
Is there a better way of doing this in JavaFX? Thanks!
There are several tools "hidden" in the JDK documentation on threads that can help you resolve this type of issue. Usually when we make code wait for some condition before it can proceed, we use thread synchronizers.
I want my control thread to wait until I get a value back [...]
CountDownLatch, CyclicBarrier and FutureTask may be classes that can solve your problem. Their functionality is quite simple: they stop threads and release them when some condition is met. The difference between these classes is just the semantics applied to terminating and releasing threads. Read the documentation of each and see which one is most comfortable for you.
You can also take a look at other sources of study. There is no better source in the world (in my opinion) than the content of the book "Java Concurrency In Practice" by Brian Goetz. I assure you that you will be able to manipulate threads easily if you buy this book (or at least gain incredible knowledge on the subject). To be clear, you do not need to buy the book to solve your current problem; buying it is just my suggestion for gaining more knowledge about threads. You will probably solve your problem by looking at the documentation of the thread-synchronizer classes I mentioned.
Good luck in your projects. ;)
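As a hedged sketch of the CountDownLatch option: the control thread parks in await() until the click handler, running on the JavaFX Application Thread, counts the latch down. Answer, readAnswerFrom() and targetView are placeholders for the game's own types, not from any library.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;
import javafx.application.Platform;
import javafx.scene.Node;

Answer waitForClick(Node targetView) throws InterruptedException {
    CountDownLatch clicked = new CountDownLatch(1);
    AtomicReference<Answer> answer = new AtomicReference<>();

    Platform.runLater(() ->                         // install the handler on the FX thread
        targetView.setOnMouseClicked(e -> {
            answer.set(readAnswerFrom(e));          // placeholder: interpret the click
            clicked.countDown();                    // release the waiting control thread
        }));

    clicked.await();                                // control thread blocks here, no busy loop
    return answer.get();
}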
I am learning Java concepts... I just want to understand synchronization with multi-threading once and for all. When we are using multi-threading, we generally go for synchronization in order to keep the transactions in sync.
But by adding this you end up using more time. How do you keep an application in sync and still make use of multi-threading? Are there any concepts in Java which solve this?
You're thinking of Amdahl's law, which implies that from a big-O standpoint, yes, synchronized sections completely defeat the point of multithreading.
The point is to keep synchronized sections as small as possible so they don't become bottlenecks at the scale you're actually anticipating. Or to use other concurrency patterns that simulate synchronization but do not require a lock. Read Java: Concurrency in Practice.
I am learning Java concurrency and know that the following singleton is not completely thread safe. A thread may get the instance before it is initialized, because of instruction reordering. A correct way to prevent this potential problem is to use the volatile keyword.
public class DoubleCheckedLocking {

    private static Instance instance;

    public static Instance getInstance() {
        if (instance == null) {
            synchronized (DoubleCheckedLocking.class) {
                if (instance == null)
                    instance = new Instance();
            }
        }
        return instance;
    }
}
I tried to reproduce the potential problem without volatile keyword and wrote a demo to show that using the above code may cause a NullPointerException in multithreading environment. But I failed to find a way to explicitly let the Java compiler perform instructions reordering and my demo with the above singleton always works pretty well without any problems.
So my question is how to explicitly enable/disable Java compiler to reorder instructions or how to reproduce the problem without using volatile keyword in a double-checked locking singleton?
This is an excerpt from the Java Concurrency in Practice book: only shows up in production.
So you can give it a try. But there is no 100% sure way of enabling reordering.
I am trying to understand Java multi-threading constructs, and I am trying to write a simple implementation of blocking queue. Here is the code I have written:
class BlockingBoundedQueue<E> {

    @SuppressWarnings("unchecked")
    BlockingBoundedQueue(int size) {
        fSize = size;
        fArray = (E[]) new Object[size];
        // fBlockingQueue = new ArrayBlockingQueue<E>(size);
    }

    BlockingQueue<E> fBlockingQueue;

    public synchronized void put(E elem) {
        if(fCnt==fSize-1) {
            try {
                // Should I be waiting/locking on the shared array instead ? how ?
                wait();
            } catch (InterruptedException e) {
                throw new RuntimeException("Waiting thread was interrupted during put with msg:",e);
            }
        } else {
            fArray[fCnt++]=elem;
            //How to notify threads waiting during take()
        }
    }

    public synchronized E take() {
        if(fCnt==0) {
            try {
                // Should I be waiting/locking on the shared array instead ? how ?
                wait();
            } catch (InterruptedException e) {
                throw new RuntimeException("Waiting thread was interrupted during take with msg:",e);
            }
        }
        return fArray[fCnt--];
        //How to notify threads waiting during put()
    }

    private int fCnt;
    private int fSize;
    private E[] fArray;
}
I want to notify threads waiting in Take() from put() and vice versa. Can someone please help me with the correct way of doing this.
I checked the java.utils implementation and it uses Condition and ReentrantLocks which are a little complex for me at this stage. I am okay of not being completely robust[but correct] for the sake of simplicity for now.
Thanks !
The short answer is, call notifyAll() where you have the comments //How to notify threads waiting during take().
Now for the more complete answer...
The reference to read is : Java Concurrency in Practice. The answer to your question is in there.
However, to briefly answer your question: in Java, threads synchronize by locking on the same object and using wait() and notify() to safely change state. The typical simplified flow is:
- Thread A enters a synchronized block on a lock object and checks the condition it needs
- if the condition is not met, thread A calls wait(), which is a blocking call that "releases" the lock so other code synchronized on the same lock object can proceed
- when another thread changes the state and calls notifyAll(), thread A will wake up and recheck the condition and (may) proceed
Some things to remember about synchronization are:
waitand
notifyabout the same "subject". Each part of state should be guarded by its own lock object - usually a private field, e.g.
private Object lock = new Object();would be fine
thisas the lock object - doing this is easy but potentially expensive, because you are locking for every call, instead of just when you need to
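Putting the advice together, here is a minimal, hedged correction of the queue above (it keeps the asker's simple array-plus-counter approach, so removal order is stack-like rather than strictly FIFO): the waits sit in while loops to guard against spurious wakeups, and each state change calls notifyAll() so waiters on the other side wake up.

class BlockingBoundedQueue<E> {

    private final Object[] items;
    private int count;

    BlockingBoundedQueue(int size) {
        items = new Object[size];
    }

    public synchronized void put(E elem) throws InterruptedException {
        while (count == items.length) {
            wait();                   // queue full: wait for a take()
        }
        items[count++] = elem;
        notifyAll();                  // wake threads blocked in take()
    }

    @SuppressWarnings("unchecked")
    public synchronized E take() throws InterruptedException {
        while (count == 0) {
            wait();                   // queue empty: wait for a put()
        }
        E elem = (E) items[--count];
        items[count] = null;          // let the element be garbage-collected
        notifyAll();                  // wake threads blocked in put()
        return elem;
    }
}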
I created a FutureTask in an analogous way to what is presented in Brian Goetz's book Java Concurrency in Practice (the code sample can be found here, listing 5.12).
The problem is that the task times out even when given 10 seconds. The task just returns true, so there shouldn't be a reason for this to happen:
public static void main(String[] args) throws Exception {
    FutureTask<Boolean> task = new FutureTask<>(new Callable<Boolean>() {
        @Override
        public Boolean call() throws Exception {
            return true;
        }
    });

    System.out.println(task.get(10, TimeUnit.SECONDS));
}
This code prints:
Exception in thread "main" java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask.get(Unknown Source)
    at Main.main(Main.java:19)
You haven't executed the task. There will never be a result available. The javadoc states
This class provides a base implementation of Future, with methods to start and cancel a computation, query to see if the computation is complete, and retrieve the result of the computation. The result can only be retrieved when the computation has completed.

Submit the task to an ExecutorService to be run asynchronously.
Executors.newSingleThreadExecutor().submit(task); // ideally shutdown the ExecutorService afterwards
or run it synchronously
task.run();
In the links you've given, I'm assuming the start() method, which runs the FutureTask in a new Thread, is called before attempting to get the result.
I don't know whether this question will be closed or not, though I hope no as I ask this quite seriously.
I want to learn the details of Java threading, maybe a little bit of the low level of how exactly Java controls threads. The purpose of this is mainly to prepare for my future job hunting.
I have Googled it and I found it seems many people chose these two books
Concurrent Programming in Java™: Design Principles and Pattern (2nd Edition)
Java Concurrency in Practice
The 1st one (Concurrent Programming in Java™) is also recommended by the famous Googler Steve Yegge in his post Get that job at Google
But I found it quite old (published in 1999) and in the reviews at Amazon, also some people say it is too old.
The 2nd one is also recommended by many others.
I don't know how to choose. Could anyone give me some good advices?
P.S. I know a good advice would be "Just Buy both and read both", but I may not be able to take this advice because my budget is not that much and more importantly I don't have that much time to finish studying both. I have to choose only one.
Edit: Of course, if you have better choices (other than these two ) to recommend, please say it loudly
This is not opinion, but fact.
Any book on Java concurrency written in 1999 cannot cover the following:
the java.util.concurrent classes introduced in Java 5, and
the formalization of the Java Memory Model that came with the Java 5 language specification.
Both of these topics are very important.
If I have empirical data on what locks were acquired in what orders by which thread and line of code, how can I then use that data to determine if locking order has deadlock potential?
l = lock
u = unlock
e.g.: these are in conflict and might deadlock
thread 1: l1, l2, u2, u1
thread 2: l2, l1, u1, u2
Or even this single thread is in conflict with itself since I don't really know that the second half of the sequence wouldn't be run on a separate thread in a different use case.
thread 1: l1, l2, u2, u1, l2, l1, u1, u2
Is there a suitable algorithm that can be used to determine that from the data?
Note I am asking not what to find (different lock acquisition orders), but what algorithm or data structure to use to find them given a set of empirical data.
If I understand your problem correctly, you have just run into the classic lock-ordering deadlock problem.
A program will be free of lock-ordering deadlocks if all threads acquire the locks they need in a fixed global order. (from Java Concurrency in Practice)
If you detect in your data that different threads acquired the same locks in different order, then your program can potentially suffer a lock-ordering deadlock with unlucky timing. So that's the pattern to look for in your data; it's that simple.
UPDATE: Here is an algorithm how I would do it. I don't know whether it has a name.
For each lock event li from the left to right:
- find its corresponding unlock event (or use the end of the sequence if it is never released)
- add all enclosed lock events as pairs (i,j), where j is an enclosed lock event
- then go to the next lock event and repeat.
An example follows.
For example, for the first event lA it means scanning through the sequence to find the first occurrence of uA. This gives us the subsequence: lA lB uA. For each lock event in this subsequence add a pair into a set. In this case, save (A,B) in the set. (If we had another lock event in this subsequence, say lA lB lD uA, we would add the pair (A,D) as well to the set.)
Now let's prepare the next subsequence. For the next lock event in the original sequence, that is lB, find the first uB following it. This gives the subsequence lB uA lC uB, and the only pair that needs to be saved in the set is (B,C).
For the third subsequence, for the lC event there is no pair to be saved, as there is no lock event in the lC uB subsequence.
The set of thread 1 contains the pairs (A,B) and (B,C). I would simply create another set containing the reversed pairs (B,A) and (C,B); let's call it the forbidden set.
I would repeat this procedure for thread 2, preparing the container with the pairs telling which locks were acquired in what order by thread 2. Now, if the set intersection of this set and the forbidden set of thread 1 is not empty, then we have detected a potential lock-ordering deadlock.
Hope this helps.
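Below is a hedged sketch of the algorithm just described, using the question's own notation: each event is a string such as "lA" (lock A) or "uA" (unlock A). The class and method names (LockOrderCheck, orderPairs, mayDeadlock) are mine, not from any library.

import java.util.*;

class LockOrderCheck {

    // For every lock event, pair its lock with each lock acquired before the matching unlock.
    static Set<String> orderPairs(List<String> trace) {
        Set<String> pairs = new HashSet<>();
        for (int i = 0; i < trace.size(); i++) {
            String e = trace.get(i);
            if (e.charAt(0) != 'l') continue;
            String lock = e.substring(1);
            for (int j = i + 1; j < trace.size(); j++) {
                String f = trace.get(j);
                if (f.charAt(0) == 'u' && f.substring(1).equals(lock)) break;  // matching unlock found
                if (f.charAt(0) == 'l') pairs.add(lock + "->" + f.substring(1));
            }
        }
        return pairs;
    }

    // Conflict if thread 2 ever acquired some pair in the order forbidden by thread 1.
    static boolean mayDeadlock(List<String> t1, List<String> t2) {
        Set<String> forbidden = new HashSet<>();
        for (String p : orderPairs(t1)) {
            String[] ab = p.split("->");
            forbidden.add(ab[1] + "->" + ab[0]);   // reversed pairs
        }
        forbidden.retainAll(orderPairs(t2));       // set intersection
        return !forbidden.isEmpty();
    }

    public static void main(String[] args) {
        List<String> t1 = Arrays.asList("lA", "lB", "uB", "uA");
        List<String> t2 = Arrays.asList("lB", "lA", "uA", "uB");
        System.out.println(mayDeadlock(t1, t2));   // true: (A,B) vs (B,A)
    }
}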
I have code similar to this, which is inside the run() method of a Runnable, and multiple instances of that Runnable get launched:
String contractNum = null;
do {
    try {
        contractNum = contractNums.take();
    } catch (InterruptedException e) {
        logger.error(e.getMessage(), e);
    }
} while (!("*".equals(contractNum)));
Where contractNums is a BlockingQueue<String> shared by multiple threads. There are separate Runnables putting elements into this queue.
I am not sure about the next steps after catching InterruptedException: should I terminate this thread by re-throwing a RuntimeException (so my while loop terminates), or try to take the next element from the contractNums queue again, ignoring the InterruptedException?
I am not sure if InterruptedException should be treated as a fatal condition for the thread to terminate, or whether to stay in the while loop.
7.1.2 Interruption policies
Just as tasks should have a cancellation policy, threads should have an interruption policy. An interruption policy determines how a thread interprets an interruption request—what it does (if anything) when one is detected, what units of work are considered atomic with respect to interruption, and how quickly it reacts to interruption. The most sensible interruption policy is some form of thread-level or service-level cancellation: exit as quickly as practical, cleaning up if necessary, and possibly notifying some owning entity that the thread is exiting.
7.1.3 Responding to interruption
As mentioned before, when you call an interruptible blocking method such as Thread.sleep or BlockingQueue.put, there are two practical strategies for handling InterruptedException:
• Propagate the exception (possibly after some task-specific cleanup), making your method an interruptible blocking method, too; or
• Restore the interruption status so that code higher up on the call stack can deal with it.
Java Concurrency in Practice Chapter 7.
Specifically, in your code you will need to make sure that if the thread is interrupted your application logic is not broken. And it is indeed better to catch your interruption exception. What to do with it is up to you; just try to make sure that you don't break the application logic.
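A hedged adaptation of the loop above following the second strategy (restore the interrupt status, then let the loop terminate cleanly); whether exiting is the right policy for your application is your call:

String contractNum = null;
do {
    try {
        contractNum = contractNums.take();
    } catch (InterruptedException e) {
        logger.error(e.getMessage(), e);
        Thread.currentThread().interrupt();   // restore the status for code higher up the stack
        break;                                // stop processing on interruption
    }
} while (!("*".equals(contractNum)));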
I have a function that serially (single-threaded-ly) iterates through a directory of files, changing all tab indentation to three-space indentation.
I'm using it as my first attempt at multi-threading. (Am most of the way through Java Concurrency in Practice...surprised it's eight years old now.)
In order to keep its current single-threaded functionality, but add in the additional possibility of multi-threading, I'm thinking of changing the function to accept an additional Executor parameter, where the original single-threaded behaviour would now be a call to it, passing in a single-threaded executor.
Is this an appropriate way to go about it?
One way is as @Victor Sorokin suggests in his answer: wrap the processing of every file in a Runnable and then either submit it to an Executor or just invoke run() from the main thread.
Another possibility is to always do the same wrapping in a Runnable and submit it to an always-given Executor. Whether processing of each file is executed concurrently or not would then depend on the given Executor's implementation.
For parallel processing, you could invoke your function passing it e.g. a ThreadPoolExecutor as an argument, whereas for sequential processing you could pass in a fake Executor, i.e. one that runs submitted tasks in the caller thread:
public class FakeExecutor implements Executor {
    @Override
    public void execute(Runnable task) {
        task.run();
    }
}
I believe this way is the most flexible approach.
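A sketch of what the calling code might then look like; retabDirectory is a stand-in for the poster's own tab-to-spaces function (assumed to take a directory and an Executor), not an existing method:

Executor sequential = new FakeExecutor();                  // original single-threaded behaviour
retabDirectory(new File("src"), sequential);

ExecutorService pool = Executors.newFixedThreadPool(4);    // parallel behaviour, same function
retabDirectory(new File("src"), pool);
pool.shutdown();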
I have an @ApplicationScoped bean for all users, that stores the ids -> names mapping (and vice versa) in Trove & java.util maps.
I just build the maps once at construction of the bean, or on a manual refresh by the website admin.
Inside the bean methods, I am just using get() on the maps, so I am not modifying them. Is this going to be thread safe since it is used only for read purposes? I am not sharing the maps with any other beans outside & not modifying the maps (adding/removing entries) anytime in my code.
Also, is it necessary in this case to make the fields final?
Bean code as follows:
@ApplicationScoped
@ManagedBean(name="directory", eager=true)
public class directory {

    private static TIntObjectHashMap<String> idsToNamesMap;
    private static TreeMap<String, Integer> namesToIdsMap;

    @PostConstruct
    public void buildDirectory(){
        // building directory here ....
    }

    public String getName(int topicId){
        return idsToNamesMap.get(topicId);
    }

    public List<Entry<String, Integer>> searchTopicsByName(String query){
        return new ArrayList(namesToIdsMap.subMap(query, true, query+"z", true).entrySet());
    }
}
There could be a visibility issue after the object is constructed. That is, in the immediate aftermath of your constructor calls, the maps may appear populated to the thread that populated them, but not necessarily to other threads, at least not right away. This type of issue is extensively discussed in chapter 3 of Java Concurrency in Practice. However, I think that if you declare the maps as volatile:
private static volatile TIntObjectHashMap<String> idsToNamesMap;
private static volatile TreeMap<String, Integer> namesToIdsMap;
You should be OK.
Update
I just realized something while looking at your code again. The maps are static - why are they being populated in an instance context by a constructor? First off, it is confusing to the reader. Second, if more than one instance of the object is created, then you will have additional writes to the maps, not just one, possibly while other threads are reading them.

You should either make them non-static, or populate them in a static initialization block.
I have a number of shared variables x, y, z, all of which can be modified in two different methods running in two different threads (say method 1 in thread 1 and method 2 in thread 2). If I declare these two methods as synchronized, does it guarantee the consistency of the variables x, y and z? Or should I separately use a lock on each of those variables?
Yes, your approach will guarantee consistency, assuming those variables are all private and are not accessed (read or write) outside of the two synchronized methods.
Note that if you read those variables outside of a synchronized block then you could get inconsistent results:
class Foo {
    private int x;

    public synchronized void foo() {
        x = 1;
    }

    public synchronized void bar() {
        x = 2;
    }

    public boolean baz() {
        int a = x;
        int b = x;
        // Unsafe! x could have been modified by another thread between the two reads
        // That means this method could sometimes return false
        return a == b;
    }
}
EDIT: Updated to address your comment about the static variable.
Each class should own its own data. If class A allows direct access to a variable (by making it public) then it is not owning its own data and it becomes very difficult to enforce thread safety.
If you do this:
class A {
    private static int whatever;

    public static synchronized int getWhatever() {
        return whatever;
    }

    public static synchronized void setWhatever(int newWhatever) {
        whatever = newWhatever;
    }
}
then you'll be fine.
Remember that synchronized enforces a mutex on a single object. For synchronized methods it's this (or the Class object for static methods). Synchronized blocks on other objects will not interfere.
class A {
    public synchronized void doSomething() {...}
}

class B {
    public synchronized void doSomethingElse() {...}
}
A call to doSomething will not wait for calls to doSomethingElse because they're synchronized on different objects: in this case, the relevant instance of A and the relevant instance of B. Similarly, two calls to doSomething on different instances of A will not interfere.
I highly recommend that you take a look at Java Concurrency in Practice. It is an excellent book that explains all the subtleties of the Java thread and memory models.
I have a web app where I load components lazily. There is a lot of
static Bla bla;
...
if(bla == null)
    bla = new Bla();
spread throughout the code. What do I need to do to make sure this is thread safe? Should I just wrap every one of these initializations in a synchronized block? Is there any problem with doing that?
The lazy instantiation is only really a part of the problem. What about accessing these fields?
Typically in a J2EE application you avoid doing this kind of thing as much as you can so that you can isolate your code from any threading issues.
Perhaps if you expand one what kind of global state you want to keep there are better ways to solve the problem.
That being said, to answer your question directly, you need to ensure that access to these fields is synchronized, both reading and writing. Java 5 has better options than using synchronized in some cases. I suggest reading Java Concurrency in Practice to understand these issues.
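A minimal sketch of one safe pattern, assuming Bla is the class from the question: all reads and writes of the field go through a synchronized accessor, so initialization happens at most once and the published instance is visible to every thread.

public class BlaHolder {

    private static Bla bla;

    public static synchronized Bla getBla() {
        if (bla == null) {
            bla = new Bla();   // lazily created exactly once, under the class lock
        }
        return bla;
    }
}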
I have gone through Head First Java and some other sites but I couldn't find complete material on Threads and the additional concurrency packages in one place.
Please suggest a book/website which covers Threads completely, with more details like
Java Concurrency in Practice is great for coverage of the higher-level stuff in java.util.concurrent, but if you want the authoritative answers on synchronized and volatile, you need to go to the source. No, not the source code, that would be insane. I mean the spec: Java Language Specification, Third Edition — Chapter 17: Threads and Locks
Or if you want it in book form: The Java™ Language Specification (3rd Edition)
The must-read book about concurrent programming in Java is Java Concurrency in Practice.
Also see Concurrency in Sun's Java Tutorials.
I have written a custom MyLogger library based on the Observer design pattern. What I am trying to achieve is this: every time I call the writeLog(LOG_LEVEL, "Text") method I want it to execute in a new thread. Can someone please suggest the way to achieve this? As in, where should I create the threads?
This is how my Logger call looks.
public class Logger extends Subject {
    void writeLog(String type, String message) {
        setData(message);
        notifyy(type);
    }
}
And this is how I am calling writeLog
appLogger.writeLog("ERROR", "This is error");
You can use an ExecutorService:
// Somewhere in your logging framework
ExecutorService service = Executors.new...(); // Choose the one you want

...

public class MessageRelayingTask implements Runnable {

    // private fields
    ...

    public MessageRelayingTask(String type, String message) {
        ...
    }

    @Override
    public void run() {
        setData(message);
        notifyy(type);
    }
}

public class Logger extends Subject implements Runnable {
    void writeLog(String type, String message) {
        service.submit(new MessageRelayingTask(type, message));
    }
}
Some pointers to get you started:
I'm having trouble setting up Spring with Hibernate under GWT framework. I am fairly new to GWT. I have the application context set up and loading without output errors, but my main issue at the moment is that the my Service layer implementation (PobaseServiceImpl) requires a DAO that I set up in the appcontext but its not wrapping the DAO. Naturally my RPC is attempting to call the dao methods resulting in a NullPointerException. The pobaseDao is not being set by the TransactionProxyFactoryBean when I initialize it.
In summary: The DAO should be created by (that is, configured into) Spring just like the rest of my services. Then injected to the services via Spring. Then with the DAO, wrap it in a Spring transaction proxy (org.springframework.transaction.interceptor.TransactionProxyFactoryBean) and give it a Hibernate SessionFactory (org.springframework.orm.hibernate4.LocalSessionFactoryBean). But for some reason its not setting the dao
So my application context is being loaded through a ServletContextListener. Here is my application context:
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" ""> <!-- Spring Framework application context definition for the POBASE Website. --> <beans> <!-- Configurer that replaces ${...} placeholders with values from a properties file --> <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <list> <value>classpath:/pobase.properties</value> <value>file:${user.home}/pobase.properties</value> </list> </property> <property name="ignoreResourceNotFound" value="no"/> </bean> <!-- Hibernate Data Source --> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="${pobase.database.driver}" /> <property name="url" value="${pobase.database.url}" /> <property name="username" value="${pobase.database.user}" /> <property name="password" value="${pobase.database.password}" /> </bean> <!-- Hibernate SessionFactory --> <bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource" /> </property> <property name="packagesToScan" value="nz.co.doltech.pobase.client.entity"/> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">${pobase.hibernate.dialect}</prop> <prop key="hibernate.show_sql">${pobase.hibernate.show_sql}</prop> <prop key="javax.persistence.validation.mode">none</prop> </props> </property> </bean> <!-- Transaction manager for a single Hibernate SessionFactory (alternative to JTA) --> <bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory" /> </bean> <!-- Default transaction proxy, defining the transactional behaviour for a typical Dao configuration --> <bean id="baseDaoTransactionProxy" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean" abstract="true"> <property name="transactionManager" ref="transactionManager" /> <property name="transactionAttributes"> <value>*=PROPAGATION_MANDATORY</value> </property> </bean> <!-- Default transaction proxy, defining the transactional behaviour for a typical Service configuration --> <bean id="baseServiceTransactionProxy" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean" abstract="true"> <property name="transactionManager" ref="transactionManager" /> <property name="transactionAttributes"> <value>*=PROPAGATION_REQUIRED</value> </property> </bean> <!-- ========================= BUSINESS OBJECT DEFINITIONS ========================= --> <bean id="pobaseDao" parent="baseDaoTransactionProxy"> <property name="target" ref="pobaseDaoTarget" /> </bean> <bean id="pobaseDaoTarget" class="nz.co.doltech.pobase.server.dao.PobaseHibernateDao"> <property name="sessionFactory" ref="sessionFactory" /> </bean> <bean id="pobaseService" parent="baseServiceTransactionProxy"> <property name="target" ref="pobaseServiceTarget" /> </bean> <bean id="pobaseServiceTarget" class="nz.co.doltech.pobase.server.service.PobaseServiceImpl"> <property name="pobaseDao" ref="pobaseDao" /> <!-- property <property name="lookupService" ref="lookupService"/> <property name="notificationService" ref="notificationService"/ --> </bean> </beans>
and here is my RPC servlet implementation:
package nz.co.doltech.pobase.server.service; /** * Extends RSS and implements the PobaseService * @author Ben */ @SuppressWarnings("serial") public class PobaseServiceImpl extends RemoteServiceServlet implements PobaseService { @SuppressWarnings("unused") private static final Logger logger = Logger.getLogger(PobaseServiceImpl.class.getName()); private PobaseDao pobaseDao; private final HashMap<Integer, PobaseEntity> pobaseEntities = new HashMap<Integer, PobaseEntity>(); private void fetchExternPobase() { pobaseEntities.clear(); List<PobaseEntity> pobaseList = pobaseDao.getAllPobase(); for (int i = 0; i < pobaseList.size(); i++) { PobaseEntity en = pobaseList.get(i); if(en != null) { pobaseEntities.put(en.getId(), en); } } } public void setPobaseDao(PobaseDao dao) { this.pobaseDao = dao; } public PobaseDao getPobaseDao() { return this.pobaseDao; } public PobaseData addLocalPobase(PobaseData pobase) { PobaseEntity entity = new PobaseEntity(); entity.mirrorObjectData(pobase); pobase.setId(pobaseEntities.size()); pobaseEntities.put(pobase.getId(), entity); return entity.getDataObject(); } public PobaseData updateLocalPobase(PobaseData pobase) { PobaseEntity entity = new PobaseEntity(); entity.mirrorObjectData(pobase); pobaseEntities.remove(entity.getId()); pobaseEntities.put(entity.getId(), entity); return entity.getDataObject(); } public Boolean deleteLocalPobase(int id) { pobaseEntities.remove(id); return true; } public ArrayList<PobaseData> deleteLocalPobases(ArrayList<Integer> ids) { for (int i = 0; i < ids.size(); ++i) { deleteLocalPobase(ids.get(i)); } return getLocalPobaseData(); } public ArrayList<PobaseData> getLocalPobaseData() { ArrayList<PobaseData> pobaseList = new ArrayList<PobaseData>(); Iterator<Integer> it = pobaseEntities.keySet().iterator(); while(it.hasNext()) { PobaseData pobase = pobaseEntities.get(it.next()).getDataObject(); pobaseList.add(pobase); } return pobaseList; } public PobaseData getLocalPobase(int id) { return pobaseEntities.get(id).getDataObject(); } public ArrayList<PobaseData> resyncExternPobase() { fetchExternPobase(); return getLocalPobaseData(); } }
Also here is the start-up log for the web application:
ServletContextListener started Nov 12, 2012 8:20:33 PM nz.co.doltech.pobase.SpringInitialiser initSpringContext INFO: Creating new Spring context. Configs are [/nz/co/doltech/pobase/appcontext.xml] Nov 12, 2012 8:20:33 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@8423321: startup date [Mon Nov 12 20:20:33 NZDT 2012]; root of context hierarchy Nov 12, 2012 8:20:33 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions INFO: Loading XML bean definitions from class path resource [nz/co/doltech/pobase/appcontext.xml] Nov 12, 2012 8:20:33 PM org.springframework.core.io.support.PropertiesLoaderSupport loadProperties INFO: Loading properties file from class path resource [pobase.properties] Nov 12, 2012 8:20:33 PM org.springframework.core.io.support.PropertiesLoaderSupport loadProperties INFO: Loading properties file from URL [file:/home/ben/pobase.properties] Nov 12, 2012 8:20:33 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@4c56666d: defining beans [org.springframework.beans.factory.config.PropertyPlaceholderConfigurer#0,dataSource,sessionFactory,transactionManager,baseDaoTransactionProxy,baseServiceTransactionProxy,pobaseDao,pobaseDaoTarget,pobaseService,pobaseServiceTarget]; root of factory hierarchy Nov 12, 2012 8:20:33 PM org.springframework.jdbc.datasource.DriverManagerDataSource setDriverClassName INFO: Loaded JDBC driver: org.postgresql.Driver Nov 12, 2012 8:20:33 PM org.hibernate.annotations.common.Version <clinit> INFO: HCANN000001: Hibernate Commons Annotations {4.0.1.Final} Nov 12, 2012 8:20:33 PM org.hibernate.Version logVersion INFO: HHH000412: Hibernate Core {4.1.7.Final} Nov 12, 2012 8:20:33 PM org.hibernate.cfg.Environment <clinit> INFO: HHH000206: hibernate.properties not found Nov 12, 2012 8:20:33 PM org.hibernate.cfg.Environment buildBytecodeProvider INFO: HHH000021: Bytecode provider name : javassist Nov 12, 2012 8:20:34 PM org.hibernate.dialect.Dialect <init> INFO: HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect Nov 12, 2012 8:20:34 PM org.hibernate.engine.jdbc.internal.LobCreatorBuilder useContextualLobCreation INFO: HHH000424: Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException Nov 12, 2012 8:20:34 PM org.hibernate.engine.transaction.internal.TransactionFactoryInitiator initiateService INFO: HHH000399: Using default transaction strategy (direct JDBC transactions) Nov 12, 2012 8:20:34 PM org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory <init> INFO: HHH000397: Using ASTQueryTranslatorFactory Nov 12, 2012 8:20:34 PM org.springframework.orm.hibernate4.HibernateTransactionManager afterPropertiesSet INFO: Using DataSource [org.springframework.jdbc.datasource.DriverManagerDataSource@55acc0d4] of Hibernate SessionFactory for HibernateTransactionManager Starting Jetty on port 8888
Anyone have any ideas as to why it's not working? Some specs here:
Appreciate any help I can get!
Cheers, Ben
Your PobaseServiceImpl class is not thread-safe.
In Java, any remotely called service (and of course the same goes for singletons in the container too) should be prepared for its methods to be called from different threads, and for the same method to be executed simultaneously in different threads.
The entire class needs a detailed check.
First of all, you should use ConcurrentHashMap instead of HashMap. And all methods need to be carefully rewritten - for example the methods that return an ArrayList (such as public ArrayList<PobaseData> resyncExternPobase()) should make a defensive copy of the data.
As an alternative you could consider making the public methods of your service synchronized; however, that will force the methods to be executed sequentially, which will undermine performance if the service is expected to be actively used from different threads.
For reference, consider reading Brian Goetz's Java Concurrency In Practice, chapter 5 (Building Blocks), especially 5.1 Synchronized collections and 5.2 Concurrent collections.
Update: your PobaseServiceImpl seems to be a servlet that is instantiated by the servlet container - to populate its fields with Spring beans you should use Spring utility methods - for example WebApplicationContextUtils - there are many examples on the internet, like
I'm reading a book which says not to use code such as this:
private volatile Thread myThread;
....
myThread.stop();
Instead one should use:
if (myThread != null) {
    Thread dummy = myThread;
    myThread = null;
    dummy.interrupt();
}
Unfortunately the subject is not elaborated any further... Could someone explain this to me?
stop() is deprecated. Never, never, never use stop(). You can use the java.util.concurrent utilities instead.
This method is inherently unsafe. Stopping a thread with Thread.stop causes it to unlock all of the monitors that it has locked (as a natural consequence of the unchecked ThreadDeath exception propagating up the stack). If any of the objects previously protected by these monitors were in an inconsistent state, the damaged objects become visible to other threads, potentially resulting in arbitrary behavior. Many uses of stop should be replaced by code that simply modifies some variable to indicate that the target thread should stop running. If the target thread waits for long periods (on a condition variable, for example), the interrupt method should be used to interrupt the wait.
Have a look at Java Concurrency in Practice chapter 7 (Cancellation and Shutdown)
I'm trying to convert existing single thread flood fill algorithm(s) to multithread one(s).
Input:
- 2d bit array and its dims
- xy coords where fill should begin

Output:
- same 2d bit array with updated bits

Problem:
- only 1 thread at a time can write to a given 64 bits (8x8 pixels) in the array; also, no other thread can read this 64-bit chunk at write time
I've started with queue approach and thread pool so once thread finishes its job it can take another task from the queue.
How would you organize thread synchronization conforming to the 'problem' statement above?
The main problem is how to assign read/write locks to a given memory chunk.
Generally you want to divide the data as coarsely as possible and minimize communication between threads. Communication includes shared data structures, even the lock free ones. Especially the ones where there are shared variables with write access.
Above general "coarse" policy avoids the common pitfalls (for example false sharing) which prevent scaling.
As for your specific problem, I have to confess, I'm not intimate with flood fill algorithms so I'm not immediately able to sketch out a coarse approach.
However if a coarse approach is not feasible and a strategy for locking individual cells is needed, lock striping could be an approach worth investigating in this case.
A lock free implementation is another possibility. Maybe use compare-and-swap type operation to do the writes (InterlockedCompareExchange64 on VS) combined with retry logic if another thread wrote in the same 8x8 pixel 64bit block.
It could be possible to relax the read locking completely. If 2 threads end up painting the same pixels, it may only waste some cycles, but not corrupt the results.
A lock free implementation could be several times faster.
If you are working in Java Java Concurrency in Practice by Goetz is a great book on things like lock striping.
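For what it's worth, a Java counterpart of that compare-and-swap retry idea might look roughly like this; the one-long-per-8x8-block layout and the class name are assumptions for illustration:

import java.util.concurrent.atomic.AtomicLongArray;

public class Canvas {
    private final AtomicLongArray blocks;   // one long = one 8x8 pixel block

    public Canvas(int blockCount) {
        blocks = new AtomicLongArray(blockCount);
    }

    // set the bits in 'mask' inside block 'index', retrying if another thread wrote concurrently
    public void orBits(int index, long mask) {
        while (true) {
            long current = blocks.get(index);
            long updated = current | mask;
            if (current == updated || blocks.compareAndSet(index, current, updated)) {
                return;   // nothing to change, or the CAS succeeded
            }
            // otherwise another thread modified the block in between; loop and retry
        }
    }
}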
I'm used to C++/Qt's concept of signals (emit/listen) and now I'm doing a project in Java which requires some sort of data sending/receiving mechanism.
My needs are:
Is this possible in Java, and how? (I'll appreciate a small compilable example/link)
Java by default doesn't have a simple event handling mechanism such as .Net's events or Qt's Signals and Slots. It does have the notion of Listeners in various java GUI frameworks but I don't think that's what you're looking for.
You should consider a pub-sub library like Google Guava's EventBus framework.
If you don't want to use a third party lib then I suggest you start looking into using one of the sub-classes of
BlockingQueue. See the FileCrawler example from page 62 of Java Concurrency in Practice to see how to use a BlockingQueue to send events/data to worker threads.
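A tiny sketch of that producer/consumer idea (the event type and the queue capacity are arbitrary choices, not from the question):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> events = new ArrayBlockingQueue<String>(100);

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String event = events.take();   // blocks until something is available
                        System.out.println("handled: " + event);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // stop consuming on interrupt
                }
            }
        });
        consumer.start();

        events.put("first event");    // blocks only if the queue is full
        events.put("second event");
    }
}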
If you're looking for a more complicated solution for message/event notifications across the process boundary or the local machine boundary then you may want to look into:
I'm using a third-party library in my thread which involves some heavy DB operations. Sometimes, due to a lock on the DB or other reasons, execution of the thread gets stuck. I want to kill the thread, irrespective of what it is doing, after a particular time interval. Thread.interrupt() is not working out for me because my thread spends most of its time in the third-party library, which I can't modify, and the library doesn't throw any InterruptedException that I could handle in my code. Thread.stop() is not advised since it is deprecated. But I have to kill the thread anyway.
void mainThread() { Thread1 t1 = new Thread1(); t1.start(); Thread.sleep(time); if(t1.getState() != State.TERMINATED) { // code for killing the thread } } Thread1() { //doing heavy DB operations using third party Library }
Any suggestions for other designs are welcome. Is there any alternative method for killing a thread instead of Thread.stop()?
Wrap your DB access in a
FutureTask and get the result using
FutureTask.get(long, TimeUnit) (it will return before the supplied time, or throw a
TimeoutException).
Read Chapter 6 of Java Concurrency in Practice by Brian Goetz et al for more info
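A hedged sketch of that approach; doHeavyDbWork(), the String result type and the 30-second limit are placeholders, not your actual code:

import java.util.concurrent.*;

public class BoundedDbCall {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Future<String> future = pool.submit(new Callable<String>() {
            public String call() throws Exception {
                return doHeavyDbWork();     // the call into the third-party library
            }
        });

        try {
            String result = future.get(30, TimeUnit.SECONDS);  // wait at most 30 seconds
            System.out.println(result);
        } catch (TimeoutException e) {
            future.cancel(true);   // best effort: interrupts the worker, which the library may ignore
        } finally {
            pool.shutdownNow();
        }
    }

    private static String doHeavyDbWork() { return "done"; }  // stand-in for the real work
}

Note that if the library really ignores interruption, the worker thread may keep running in the background; the timeout only stops your code from waiting on it.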
I'm working on a small CRUD application that uses plain JDBC, with a Connection enum-based singleton. After reading the first part of Java Concurrency in Practice I liked the ThreadLocal approach to writing thread-safe code, so my question is:
Is wrapping a global JDBC connection in a ThreadLocal considered good practice?
Is wrapping a global JDBC connection in a ThreadLocal considered good practice?
Depends a lot on the particulars. If there are a large number of threads, then each one of them is going to open its own connection, which may be prohibitive. You will also end up with connections that stagnate as threads lie dormant.
It would be better to use a reentrant connection pool. Then you can reuse connections that are already open but not currently in use but limit the number of connections to the minimum you need to work concurrently. Apache's DBCP is a good example and is well thought of.
Their docs describe the pooling behaviour and configuration in more detail.
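As a loose sketch of the pooled alternative (assuming Commons DBCP 2; the class and setter names should be checked against the version you actually use, and the driver, URL and credentials are placeholders):

import java.sql.Connection;
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolHolder {
    private static final BasicDataSource DS = new BasicDataSource();
    static {
        DS.setDriverClassName("org.postgresql.Driver");
        DS.setUrl("jdbc:postgresql://localhost/mydb");
        DS.setUsername("user");
        DS.setPassword("secret");
        DS.setMaxTotal(8);   // cap on concurrent connections
    }

    public static Connection getConnection() throws Exception {
        return DS.getConnection();   // closing the connection returns it to the pool
    }
}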
I'm writing an application that calculates the delta and roots of a second-degree equation, accepting its coefficients as input. Later, I want to give it a GUI.
This is the class that calculates everything:
package functions; import java.util.*; public class Calculate implements Runnable { double a=0; double b=0; double c=0; double delta = 0; double r1=0; double r2=0; Vector data=new Vector (); public Calculate (Vector v) { synchronized (data) { synchronized (v) { data = v; } a =(double) data.elementAt(0); b =(double) data.elementAt(1); c =(double) data.elementAt(2); } } public double calcDelta () { delta = b*b-4*a*c; return delta; } public double root1 () { r1 = (-b+Math.sqrt(delta))/(2*a); return r1; } public double root2 () { r2 = (-b-Math.sqrt(delta))/(2*a); return r2; } public void createData (Vector z) { synchronized (z) { while (z.size()!=0) { z.removeElementAt(0); } z.add(delta); z.add(r1); z.add(r2); } } public void run () { calcDelta(); root1 (); root2 (); //try { createData (data); //} catch (InterruptedException e) {} } }
which i tested and is working well. The problem is in the test code i wrote for it:
import java.util.*; import functions.*; public class Test { double a=0; double b=0; double c=0; Vector v = new Vector (); public Test (double arturo, double bartolomeo, double cirinci) { a=arturo; b=bartolomeo; c=cirinci; synchronized (v) { v.add(a); v.add(b); v.add(c); } } public Vector makevector () { return v; } public static void main (String [] args) { double art = (double) Integer.parseInt (args[0]); double bar = (double) Integer.parseInt (args[1]); double car = (double) Integer.parseInt (args[2]); Test t = new Test (art, bar, car); Thread launch; Vector data = t.makevector(); Calculate res = new Calculate (data); launch = new Thread (res); launch.start(); if (data.size()!=0) { System.out.println ("Delta: "+data.elementAt(0)); System.out.println ("Radice 1: "+data.elementAt(1)); System.out.println ("Radice 2: "+data.elementAt(2)); } } }
and specifically in the output for Delta. In fact, the roots are correctly shown, but instead of delta it prints the a coefficient (for example, if I pass 1 1 -6 I expect delta to be 25, but it shows 1; if it's 2 2 -12, delta should still be 25, but it shows 2). Somehow, the first element of this vector doesn't get deleted and replaced, but I don't know why; I just know it's not a matter of synchronisation, since I tried to delete all of the syncs and the output was the same.
So, what's my mistake? Thank you.
Your problem is (probably) that the calculations haven't finished before you print them.
Take a look at this part of your code:
launch.start();
if (data.size() != 0) {
    System.out.println("Delta: " + data.elementAt(0));
    System.out.println("Radice 1: " + data.elementAt(1));
    System.out.println("Radice 2: " + data.elementAt(2));
}
When you call launch.start() the calculations start, but in another thread; the main thread is still running, so it starts to print the elements of the data Vector. And those elements haven't been updated yet.
Try adding
Thread.sleep(2000); before the
if (data.size()!=0) and see if the results change to what you would expect. This way you let one thread finish its job before the other one prints the output.
This is not the solution of course - it only shows where the problem lies. If you want a real solution, look at java.util.concurrent; there you can find something useful like CountDownLatch.
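A small sketch of the CountDownLatch idea, assuming you are willing to add a latch parameter to Calculate (that extra parameter is an addition for illustration, not part of your current class):

// import java.util.concurrent.CountDownLatch;
CountDownLatch done = new CountDownLatch(1);
Calculate res = new Calculate(data);        // e.g. new Calculate(data, done) with the extra parameter
new Thread(res).start();

// at the very end of Calculate.run():  done.countDown();

done.await();                               // the main thread blocks here until run() has finished
System.out.println("Delta: " + data.elementAt(0));

(For this simple case, keeping a reference to the Thread and calling join() on it before printing would work just as well.)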
Moreover you're using the
synchronized keyword much too often and not necessarily properly. You should always be careful when using
synchronized.
Try this book:
but keep in mind that concurrency in Java is quite an advanced subject.
I needed to use these nested classes because a nested class can use the variables of the enclosing class. How do I move these classes into their own .java files to simplify my code while the classes still keep control of the GUI class's members, such as the JLabels?
this is the cleaned version to show the important part
public class GUI { public GUI(){ VitaminDEngineStarter vdes = new VitaminDEngineStarter(); Registry registry = null; try { registry = LocateRegistry.getRegistry(); } catch (RemoteException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } try { vd = (VitaminD)registry.lookup(VitaminD.SERVICE_NAME); } catch(Exception e) { e.printStackTrace(); } SMS a = new SMS(5); try { arduino.connect("COM3"); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } System.out.println("connecting:"+ a.connect()); System.out.println("connected? :" + a.checkConnect()); System.out.println("signal: "+a.checkSignal()); System.out.println("deliver report :" + a.DeliveryReportOn()); SMS.Read read = a.new Read(arduino); } class ShowSense implements Runnable { @Override public void run() { String[] temp; String light = ""; String temperature = ""; String hum = ""; String sens = ""; boolean humanact = false; // TODO Auto-generated method stub while (true){ try { humanact = vd.gethumanActivity(); } catch (RemoteException e) { // TODO Auto-generated catch block e.printStackTrace(); } sens = arduino.getSensor(); temp = sens.split(","); light = temp[1]; temperature = temp[0]; hum = temp[2]; LightIntensity.setText(light); Temperature.setText(temperature); humidity.setText(hum); if (humanact){ personActivity.setText("in place"); } else{ personActivity.setText("absent"); } } } } private JPanel getInputs() { if (Inputs == null) { personActivity = new JLabel(); personActivity.setBounds(new Rectangle(114, 137, 77, 27)); personActivity.setText(""); personActivityLabel = new JLabel(); personActivityLabel.setBounds(new Rectangle(7, 137, 99, 25)); personActivityLabel.setText("Person Activity:"); humidity = new JLabel(); humidity.setBounds(new Rectangle(106, 91, 84, 27)); humidity.setText(""); humidityLabel = new JLabel(); humidityLabel.setBounds(new Rectangle(6, 92, 88, 26)); humidityLabel.setText("Humidity:"); Temperature = new JLabel(); Temperature.setBounds(new Rectangle(101, 50, 89, 30)); Temperature.setText(""); TemperatureLabel = new JLabel(); TemperatureLabel.setBounds(new Rectangle(4, 50, 91, 30)); TemperatureLabel.setText("Temperature:"); LightIntensity = new JLabel(); LightIntensity.setBounds(new Rectangle(110, 6, 84, 34)); lightLabel = new JLabel(); lightLabel.setBounds(new Rectangle(5, 5, 97, 34)); lightLabel.setText("Light Intensity:"); Inputs = new JPanel(); Inputs.setLayout(null); Inputs.setBounds(new Rectangle(14, 63, 200, 183)); Inputs.add(lightLabel, null); Inputs.add(LightIntensity, null); Inputs.add(TemperatureLabel, null); Inputs.add(Temperature, null); Inputs.add(humidityLabel, null); Inputs.add(humidity, null); Inputs.add(personActivityLabel, null); Inputs.add(personActivity, null); th.start(); } return Inputs; } class autopilotthread implements Runnable{ /** The temp. */ private String[] temp; /** The lightintensty. */ private double lightintensty ; /** The temperature. */ private double temperature ; /** The hum. */ private double hum ; /** The sens. */ private String sens = null; /** The humanact. */ private double humanact; /** The result. */ private boolean [] result = {false , false}; /** The fan. */ private boolean fan =false; /** The light. */ private boolean light = false; /** The pstop. 
*/ boolean pstop = false; /* (non-Javadoc) * @see java.lang.Runnable#run() */ @Override public void run() { System.out.println("thread start!"); while(true){ System.out.println("thread loop!"); try { if(vd.gethumanActivity()){ humanact = 250; }else{ humanact = 0; } } catch (RemoteException e) { // TODO Auto-generated catch block e.printStackTrace(); } sens = arduino.getSensor(); temp = sens.split(","); lightintensty = Double.parseDouble(temp[1]); temperature = Double.parseDouble(temp[0]); hum = Double.parseDouble(temp[2]); double [] out ={humanact ,lightintensty , hum, Time.now(),temperature }; System.out.println(""+out[0]+" "+out[1]+" "+out[2]+" "+out[3]+" "+out[4]); result = Matlab.output(out); light = result[1]; fan = result[0]; System.out.println("light:" + light); System.out.println("fan:" + fan ); if(light){ try {X10.lightsOn();} catch (IOException e) {e.printStackTrace();} }else{ try {X10.lightsOff();} catch (IOException e) {e.printStackTrace();} } if(fan){ try {X10.fanOn();} catch (IOException e) {e.printStackTrace();} }else{ try {X10.fanOff();} catch (IOException e) {e.printStackTrace();} } try {TimeUnit.SECONDS.sleep(10);} catch (InterruptedException e) {e.printStackTrace();} if (pstop){ break; } } System.out.println("thread stop!"); } } class Pilotmouse implements MouseListener{ /** The p thread. */ autopilotthread pThread = null; /** The pt. */ Thread pt = null; /** * Instantiates a new pilotmouse. */ Pilotmouse(){ } /* (non-Javadoc) * @see java.awt.event.MouseListener#mouseClicked(java.awt.event.MouseEvent) */ @Override public void mouseClicked(MouseEvent arg0) { // TODO Auto-generated method stub } /* (non-Javadoc) * @see java.awt.event.MouseListener#mouseEntered(java.awt.event.MouseEvent) */ @Override public void mouseEntered(MouseEvent arg0) { // TODO Auto-generated method stub } /* (non-Javadoc) * @see java.awt.event.MouseListener#mouseExited(java.awt.event.MouseEvent) */ @Override public void mouseExited(MouseEvent arg0) { // TODO Auto-generated method stub } /* (non-Javadoc) * @see java.awt.event.MouseListener#mousePressed(java.awt.event.MouseEvent) */ @Override public void mousePressed(MouseEvent arg0) { // TODO Auto-generated method stub } /* (non-Javadoc) * @see java.awt.event.MouseListener#mouseReleased(java.awt.event.MouseEvent) */ @Override public void mouseReleased(java.awt.event.MouseEvent e) { if ((autopilotlable.getText().equalsIgnoreCase("off"))){ autopilotlable.setText("on"); pThread = new autopilotthread(); pt = new Thread(pThread); pt.start(); } else if ((autopilotlable.getText().equalsIgnoreCase("on"))){ autopilotlable.setText("off"); pThread.pstop = true; } } } private JButton getAutopilot() { if (autopilot == null) { autopilot = new JButton(); autopilot.setBounds(new Rectangle(18, 14, 112, 28)); autopilot.setText("Auto Pilot"); autopilot.addMouseListener(new Pilotmouse()); } return autopilot; } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { public void run() { GUI application = new GUI(); application.getJFrame().setVisible(true); } }); } }
As Jochen mentioned you could use Eclipse Refactoring Tools. That won't solve design issues though.
Generally:
The most important advice: please try to write unit tests for this functionality first using (for example) JUnit and Mockito, and good design will come naturally. Believe me!
EDIT:
Good book about concurrency.
EDIT:
The Clean Code Talks - "Global State and Singletons"
I need to create a File System Manager (more or less) which can read or write data to files.
My problem is how do I handle concurrency?
I can do something like
public class FileSystemManager {
    private ReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    public byte[] read(String path) {
        readWriteLock.readLock().lock();
        try {
            ...
        } finally {
            readWriteLock.readLock().unlock();
        }
    }

    public void write(String path, byte[] data) {
        readWriteLock.writeLock().lock();
        try {
            ...
        } finally {
            readWriteLock.writeLock().unlock();
        }
    }
}
But this would mean all access to the write (for example) will be locked, even if the first invocation is targeting /tmp/file1.txt and the second invocation is targeting /tmp/file2.txt.
Any ideas how to go about this?
I would look deeply into Java 5 and the java.util.concurrent package. I'd also recommend reading Brian Goetz' "Java Concurrency in Practice".
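One possible shape for per-path locking is a lock-per-path map; this sketch uses Java 8's computeIfAbsent (on older JDKs use putIfAbsent), and it grows by one lock per distinct path:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FileSystemManager {
    private final ConcurrentMap<String, ReadWriteLock> locks = new ConcurrentHashMap<>();

    private ReadWriteLock lockFor(String path) {
        return locks.computeIfAbsent(path, p -> new ReentrantReadWriteLock());
    }

    public byte[] read(String path) {
        ReadWriteLock rw = lockFor(path);
        rw.readLock().lock();
        try {
            // ... read the file at 'path'
            return new byte[0];
        } finally {
            rw.readLock().unlock();
        }
    }

    public void write(String path, byte[] data) {
        ReadWriteLock rw = lockFor(path);
        rw.writeLock().lock();
        try {
            // ... write 'data' to the file at 'path'
        } finally {
            rw.writeLock().unlock();
        }
    }
}

This way /tmp/file1.txt and /tmp/file2.txt are guarded by different locks, so writes to different files no longer serialize each other.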
I need to know how to create a Thread object other than by extending the Thread class or implementing the Runnable interface.
This question was asked in one of my interviews.
Thanks
Ever since Java 1.5, you should not create threads manually, you should use high level concurrency tools (see for example Effective Java Item 68: Prefer executors and tasks to threads).
See the Executors page of the Oracle Concurrency trail or better yet, read Java Concurrency in Practice.
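As a minimal illustration of the executor/task style (the numbers and pool size are arbitrary):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        Future<Integer> result = pool.submit(new Callable<Integer>() {
            public Integer call() {
                return 6 * 7;   // the unit of work; no Thread subclass in user code
            }
        });

        System.out.println(result.get());   // prints 42
        pool.shutdown();
    }
}

(Strictly speaking the pool still creates Thread objects internally via a ThreadFactory; the point is that your code deals in tasks, not threads.)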
I tried to make an event dispatcher in Java that dispatches events as threads, so all the EventListener classes essentially implement the Runnable interface. Like how firing of events works traditionally, a method in the event dispatcher class loops through a list of EventListeners and then invokes their handler method, except that this time I invoke these handlers as threads by putting the listeners into new Thread(handlerObject).start(). The actual handling is done in the run() method of the EventListener.
So it looks something like:
for (EventListener listener : listenerList) {
    if (listener instanceof Runnable) {
        new Thread(listener).start();
    }
}
So all instructions to handle the event in the listener are put inside the run() method, which will be executed when the thread starts.
But the problem is the threads often go into a situation where one of the threads got stuck somewhere and didn't manage to continue. Sometimes, several threads may also get stuck while some managed to run through all instructions in the run() method in the listener. I looked up and this sounds like what it is called a deadlock.
I tried to put the "synchronized" modifier to all my methods but it still has this problem. I thought the synchronized keyword would simply just queue any threads trying to run a similar method until a current thread running the method has finished. But this doesn't solve the problem still. Why doesn't synchronized solve the problem especially when I already have it on all my methods and it should queue any concurrent access that may potentially cause a deadlock? I didn't use any wait() or notify() methods. Just a simple event dispatcher that attempts to run its event listener as a thread.
I am pretty new to threads but have found it very difficult to even debug it because I don't know where has gone wrong.
Thanks for any help.
Your real problem is that you don't understand concurrency well enough to understand why your program is not working, let alone how to solve this. (FWIW - adding
synchronized to all of your methods is only making the problem worse.)
I think that your best plan is take time out to do some reading on concurrency in Java. Here are a couple of good references:
@wheaties has a micro-explanation of what a deadlock is, and @matt_b offers useful advice on how to diagnose a deadlock. However, these won't help a lot unless you know the right way to design and write your multi-threaded code.
I want to know about the threading concept in BlackBerry.
In Android we have async task or handlers to communicate. Is there something similar available in BlackBerry? How do two threads communicate? How do you pass data from a background thread to the UI thread?
Concurrency is not a trivial thing. It's really difficult to craft a Thread-safe solution. In Blackberry Java the situation is even worse, as only JavaME Threading APIs are available, meaning you can't use all the Java SE high level classes (like executors, locks, collections, etc).
My advice would be not to try to port Android's
AsyncTask or any other high level concurrency-related class on your own, since very likely you'll make mistakes (unless you are well versed in concurrent programming). I myself would avoid it as much as possible. Instead keep the concurrent code as simple and small as you can. Most of the times all you need is to refresh the GUI from a worker thread. This can be done easily with
UiApplication.invokeLater and
UiApplication.invokeAndWait, and you don't need to write concurrent code at all.
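A minimal sketch of that pattern; statusLabel is a hypothetical LabelField, and only the invokeLater call itself is the point here:

// called from a worker thread when the background work is done
UiApplication.getUiApplication().invokeLater(new Runnable() {
    public void run() {
        // this runs on the event (UI) thread, so touching UI fields is safe here
        statusLabel.setText("Download finished");
    }
});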
If you want to learn more about concurrency, I'd start with this tutorial by Oracle. It's aimed for JavaSE, but almost the first half is useful for JavaME as well. In case you want to learn more advanced concurrent programming, this book is a must read.
Maybe someone could recommend some good examples concerning thread execution and thread management. Maybe not only examples but an article or tutorial, if you will, with examples.
Generally, I have a problem where I need to download a bunch of files from the web, but connections are limited to two. So when I gather up all the URLs to the files I need, I'd like to download, say, 100 files, but do so asynchronously, two at a time, until all the threads finish their job.
Thank you for your support.
I don't have an article, but I do know of a good book that covers general multi-threaded programming using Java. It is called Java Concurrency in Practice. It does cover general usage patterns, etc.
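A rough sketch of the two-at-a-time idea with a fixed pool; the download() helper and the one-hour wait are placeholders:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Downloader {
    public static void downloadAll(List<String> urls) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);   // at most two downloads at once
        for (final String url : urls) {
            pool.submit(new Runnable() {
                public void run() {
                    download(url);
                }
            });
        }
        pool.shutdown();                              // no new tasks accepted
        pool.awaitTermination(1, TimeUnit.HOURS);     // wait for the queued downloads to drain
    }

    private static void download(String url) {
        // ... open the URL and save the file to disk
    }
}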
Maybe this is a recurrent question, but I need some customization with my context.
I'm using Spring Batch 3.0.1.RELEASE
I have a simple job with some steps. One step is a chunk like this:
<tasklet transaction- <batch:chunk </batch:chunk>
<bean id="myProcessor" class="org.springframework.batch.item.support.CompositeItemProcessor" scope="step"> <property name="delegates"> <list> <bean class="...MyFirstProcessor"> </bean> <bean class="...MySecondProcessor"> </bean> </list> </property>
With this configuration, my job works perfectly.
Now, I want to convert this to a multi-threaded job. Following the documentation for basic multi-threaded jobs, I included a SimpleAsyncTaskExecutor in the tasklet, but it failed.
I have read that JdbcCursorItemReader does not work properly with multi-threaded execution (is that right?). I changed the reader to a JdbcPagingItemReader, and it has been a nightmare: the job does not fail and the writing process is OK, but data gets mixed among the threads, and the customer data was not right or coherent (customers ended up with services, addresses, etc. from others).
So, why does it happen? How could I change to a multi-thread job?
I'm very locked and confused with this, so any help would be very appreciated. Thanks a lot.
[EDIT - SOLVED]
Well, the right and suitable fix for my issue is to design the job for multithreaded, thread-safe execution from the beginning. It's common to practice first with single-threaded step execution, to understand Spring Batch concepts; but once you consider that phase behind you, considerations like immutable objects, thread-safe lists, maps, etc. must be raised.
The current fix, in the current state of my issue, is what I describe below. After testing Martin's suggestions and taking into account Michael's guidelines, I have finally fixed my issue as well as I could. The next steps aren't good practice, but I couldn't rebuild my job from the beginning:
So, if the delegated bean was:
<bean class="...MyProcessor"> <property name="otherBean" ref="otherBeanID" />
Change to:
<bean class="...MyProcessor"> <property name="otherBean" value="otherBeanID" />
And, inside MyProcessor, get a single instance for otherBeanID from the context; otherBeanID must be configured with scope="prototype".
As I said before, this is not good style, but it was my best option, and I can assert that each thread has its own, distinct item instance and bean instance.
It shows that some classes were not designed for correct multithreaded execution.
Martin, Michael, thanks for your support.
I hope it helps to anyone.
You have asked a lot in your question (in the future, please break this type of question up into multiple, more specific questions). However, item by item:
Is
JdbcCursorItemReader thread-safe?
As the documentation states, it is not. The reason for this is that the
JdbcCursorItemReader wraps a single
ResultSet which is not thread safe.
Are the composite processor and writer right for multithread?
The
CompositeItemProcessor provided by Spring Batch is considered thread safe as long as the delegate
ItemProcessor implementations are thread safe as well. You provide no code in relation to your implementations or their configurations so I can't verify their thread safety. However, given the symptoms you are describing, my hunch is that there is some form of thread safety issues going on within your code.
You also don't identify what
ItemWriter implementations or their configurations you are using so there may be thread related issues there as well.
If you update your question with more information about your implementations and configurations, we can provide more insight.
How could I make a custom thread-safe composite processor?
There are two things to consider when implementing any ItemProcessor; one of them is to make your ItemProcessor implementation idempotent - this will prevent side effects from multiple trips through a processor.
Maybe could it be the JDBC reader: Is there any thread-safe JDBC reader for multi-thread?
As you have noted, the
JdbcPagingItemReader is thread safe, and noted as such in the documentation. When using multiple threads, each chunk is executed in its own thread. If you've configured the page size to match the commit-interval, that means each page is processed in the same thread.
Other options for scaling a single step
While you went down the path of implementing a single, multi-threaded step, there may be better options. Spring Batch provides 5 core scaling options:
- Parallel steps - since you use a composite ItemProcessor and composite ItemWriters in the same step, this may be something to explore (breaking your current composite scenarios into multiple, parallel steps).
- Async ItemProcessor/ItemWriters - this option allows you to execute the processor logic in a different thread. The processor spins the thread off and returns a Future to the AsyncItemWriter, which will block until the Future returns to be written. This is useful when the ItemProcessor logic is the bottleneck in the flow.
You can read about all of these options in the documentation for Spring Batch here:
Thread safety is a complex problem. Just adding multiple threads to code that used to work in a single threaded environment will typically uncover issues in your code.
Since I've read the book Java Concurrency in Practice I was wondering how I could use immutability to simplify synchronization problems between threads.
I perfectly understand that an immutable object is thread-safe. Its state cannot change after initialization, so there cannot be "shared mutable states" at all. But immutable objects have to be used properly to be useful in synchronization problems.
Take, for example, this piece of code, which describes a bank that owns many accounts and exposes a method through which we can transfer money among accounts.
public class Bank { public static final int NUMBER_OF_ACCOUNT = 100; private double[] accounts = new double[NUMBER_OF_ACCOUNT]; private Lock lock; private Condition sufficientFunds; public Bank(double total) { double singleAmount = total / 100D; for (int i = 0; i < NUMBER_OF_ACCOUNT; i++) { accounts[i] = singleAmount; } lock = new ReentrantLock(); sufficientFunds = lock.newCondition(); } private double getAdditionalAmount(double amount) throws InterruptedException { Thread.sleep(1000); return amount * 0.04D; } public void transfer(int from, int to, double amount) { try { // Not synchronized operation double additionalAmount = getAdditionalAmount(amount); // Acquiring lock lock.lock(); // Verifying condition while (amount + additionalAmount > accounts[from]) { sufficientFunds.await(); } // Transferring funds accounts[from] -= amount + additionalAmount; accounts[to] += amount + additionalAmount; // Signaling that something has changed sufficientFunds.signalAll(); } catch (InterruptedException e) { e.printStackTrace(); } finally { lock.unlock(); } } public double getTotal() { double total = 0.0D; lock.lock(); try { for (int i = 0; i < NUMBER_OF_ACCOUNT; i++) { total += accounts[i]; } } finally { lock.unlock(); } return total; } public static void main(String[] args) { Bank bank = new Bank(100000D); for (int i = 0; i < 1000; i++) { new Thread(new TransferRunnable(bank)).start(); } } }
In the above example, which comes from the book Core Java Volume I, synchronization is done through explicit locks. The code is clearly difficult to read and error prone.
How can we use immutability to simplify the above code? I've tried to create an immutable Accounts class to hold the account values, giving the Bank class a volatile instance of Accounts. However, I've not reached my goal.
Can anybody explain to me whether it is possible to simplify synchronization using immutability?
---EDIT---
Probably I did not explain myself well. I know that an immutable object cannot change its state once it is created. And I know that, per the rules of the Java Memory Model (JSR-133), immutable objects are guaranteed to be seen fully constructed after their initialization (with some caveats).
Then I tried to use these concepts to remove explicit synchronization from the Bank class. I developed this immutable Accounts class:
class Accounts { private final List<Double> accounts; public Accounts(List<Double> accounts) { this.accounts = new CopyOnWriteArrayList<>(accounts); } public Accounts(Accounts accounts, int from, int to, double amount) { this(accounts.getList()); this.accounts.set(from, -amount); this.accounts.set(to, amount); } public double get(int account) { return this.accounts.get(account); } private List<Double> getList() { return this.accounts; } }
The accounts attribute of the Bank class has to be published using a volatile variable:
private volatile Accounts accounts;
Clearly, the transfer method of the
Bank class will be changed accordingly:
public void transfer(int from, int to, double amount) {
    this.accounts = new Accounts(this.accounts, from, to, amount);
}
Using an immutable object (
Accounts) to store the state of a class (
Bank) should be a publishing pattern, that is described at paragraph 3.4.2 of the book JCIP.
However, there is still a race condition somewhere and I can't figure out where (and why!!!).
Suppose I have a class like this:
package com.spotonsystems.bulkadmin.cognosSDK.util.Logging; public class RecordLogging implements LittleLogging{ private LinkedList <String> logs; private boolean startNew; public RecordLogging() { logs = new LinkedList<String>(); } public void log(String log) { logHelper(log); startNew = true; } public void logPart(String log) { logHelper(log); startNew = false; } private void logHelper(String log){ // DO STUFF } public LinkedList<String> getResults() { return logs; } }
Now suppose that I need a thread-safe version of this code. I need the thread-safe version to implement LittleLogging. I want the thread-safe copy to have the same behavior as this class, except that I would like it to be thread safe. Is it safe to do this:
package com.spotonsystems.bulkadmin.cognosSDK.util.Logging; public class SyncRecordLogging extends RecordLogging { public SyncRecordLoging() { super(); } public syncronized void log(String log) { super.log(log); } public syncronized void logPart(String log) { super.log(log); } public syncronized LinkedList<String> getResults() { return logs; } }
Bonus question: where should I look for documentation about synchronization and threading?
You can use composition instead. Also note that getResults creates a copy of the list:
public class SyncRecordLogging implements LittleLogging {
    private final RecordLogging _log;

    public SyncRecordLogging() {
        _log = new RecordLogging();
    }

    public synchronized void log(String log) {
        _log.log(log);
    }

    public synchronized void logPart(String log) {
        _log.logPart(log);
    }

    public synchronized LinkedList<String> getResults() {
        // returning a copy to avoid 'leaking' the underlying reference
        return new LinkedList<String>(_log.getResults());
    }
}
Best read: Java Concurrency In Practice
The documentation of the
tryLock method says that it is a non-blocking method
which allows you to obtain/acquire the lock (if that's possible at the time of calling the method).
But I wonder: how can you obtain a lock and still guarantee at the same time that
your method (
tryLock) is non-blocking?! Acquiring the lock implies that you're
trying to access a guarded section of code so it should block (if you're not lucky
i.e. you should block at least in certain scenarios). Could anyone explain the logic
behind this? Purely from a logical standpoint: I don't quite understand how this can
be done at all (guaranteeing that the method doesn't block). Unless they use another
thread of course within the code of the tryLock itself...
Most implementations of these mechanisms use so-called CAS CPU instructions to do atomic actions based on a variable. CAS means Compare And Swap. These look at the value of a variable and, if it is what you expect, change it. This provides a thread-safe (non-blocking, non-locking) way to work with data shared between threads.
A CAS instruction does the following atomically:
private int stored = 0;

public int compareAndSwap(int expectedValue, int newValue) {
    int oldValue = stored;
    if (expectedValue == stored) {
        stored = newValue;
    }
    return oldValue;   // the caller compares this with expectedValue to see whether the swap happened
}
These non blocking mechanisms generally just retry the above function until it succeeds (the returned value is the expected value). Because the retry loop is very short the chances of a thread interrupting on each iteration are tiny (or in practice the OS scheduler will even make it impossible).
The actual java locks (
Lock is just the interface they implement) are all much more complex because they offer extra features. But in essence the CAS mechanism is the base for most non-blocking threadsafe classes.
If you are interested in the inner workings of locking, Java Concurrency in Practice is a great source. Starting gently with what Java concurrency can do and advancing in how it does it. (it is a great source even for non java programmers). Your question is handled in chapter 15.
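For completeness, typical non-blocking use of tryLock looks roughly like this (the "guarded work" stands in for whatever your critical section is):

import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void doWorkIfFree() {
        if (lock.tryLock()) {           // returns immediately; true only if the lock was free
            try {
                // guarded work
            } finally {
                lock.unlock();
            }
        } else {
            // the lock was busy: do something else instead of blocking
        }
    }
}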
This code apparently runs well, but I read about pthreads and two threads can't read/write at the same time.
class Sound{ private: std::vector<int> waveColors; int progress; public: void analyze(){ do{ //Perform FFT and update progress waveColors.push_back(color) }while(progress < 100); } void getWaveColors(std::vector<int> *colors, int offset, int length){ for(int i=offset;i<offset+length;i++){ colors.push_back(waveColors[i]); } } int getProgress(){ return (progress); } } Sound *sound = new Sound(); void *analyzeThread(void *arg){ sound->analyze(); } pthread_t analyzeThreadId; pthread_create (&analyzeThreadId, NULL, analyzeThread, 0); jintArray Java_package_getWaveColors(JNIEnv* env, jobject thiz, jint offset, jint limit){ std::vector<int> colors; sound->getWaveColors(&colors, offset, limit); jintArray out = env->NewIntArray(colors.size()); env->SetIntArrayRegion(out, 0, colors.size(), (jint *)&colors[0]); return out; } jint Java_package_getProgress(JNIEnv* env, jobject thiz){ return (jint)sound->getProgress(); }
What is the correct way (semaphores, mutual exclusion?) to keep the UI update loop running in this class?
Thanks in advance
Based on the guidance on the front page of the Android NDK, I'd suggest writing your application in Java, and then re-writing a very small performance-intensive part of it in C++ - not any of the UI parts.
The C++ part might be purely the FFT computation itself. Structure it so you can carry out a limited but significant "chunk" of the work in a single step, and have the chunk-size controlled by a variable so you can try adjusting it later.
Finally, when using a background worker thread in this way, create a thread-safe queue to use for communication purposes. The worker can push chunks of results on to the back of the queue as it completes them, and the foreground thread can pop those completed chunks from the front of the queue each time it refreshes the UI. Of course, you need to ensure that you don't modify the chunk objects once they are "published" to the queue.
Reference for thread-safe queues in Java:
Ideally you should read this:
Why do you have to specify, which object has locked a synchronized block of code?
You don't have to specify which object has locked a synchronized method as it is always locked by 'this' (I believe).
I have two questions:
I have read chapter nine of SCJP for Java 6, but I am still not clear on this.
I realize it is probably a basic question, but I am new to Threading.
It is not recommended to lock each method with
this, as it reduces concurrency in most cases. So it is recommended to use lock striping, in which only the specific part of the code that needs to be protected is kept in a synchronized block.
It is a practice that is explained well in Java Concurrency in Practice. But note this book is useful only when you have some basic experience with threading.
Some nuggets to keep in mind:
Use different locks to protect two unrelated entities, which will increase the chances of concurrency; otherwise, threads reading or writing two unrelated entities will block on the same lock.
public void incrementCounter1() {
    synchronized (lockForCounter1) {
        counter1++;
    }
}

public void incrementCounter2() {
    synchronized (lockForCounter2) {
        counter2++;
    }
}
Possible Duplicate:
Difference between volatile and synchronized in JAVA (j2me)
I am a bit confused with the 2 java keywords
synchronized and
volatile.
From what I understand, since Java is a multi-threaded language, using the keyword synchronized will force a block to be executed by one thread at a time. Am I correct?
And does volatile also do the same thing?
Java multi-threading involves two problems: ensuring that multiple operations can be done consistently, without mixing actions by different threads, and making a change in a variable's value visible to threads other than the one doing the change.
In reality, a variable does not naturally exist at a single location in the hardware. There may be copies in the internal state of different threads, or in different hardware caches. Simply assigning to a variable automatically changes its value from the point of view of the thread doing the assignment.
If the variable is marked "volatile" other threads will get the changed value.
"synchronized" also ensures changes become visible. Specifically, any change done in one thread before the end of a synchronized block will be visible to reads done by another thread in a subsequent block synchronized on the same object.
In addition, blocks that are synchronized on the same object are forced to run sequentially, not in parallel. That allows one to do things like adding one to a variable, knowing that its value will not change between reading the old value and writing the new one. It also allows consistent changes to multiple variables.
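A small sketch contrasting the two (not production code, just the idea):

class SharedState {
    // volatile: the new value becomes visible to other threads, but there is no atomicity
    private volatile boolean running = true;

    // synchronized: the read-modify-write below stays consistent across threads
    private int counter = 0;

    void stopRunning() { running = false; }
    boolean isRunning() { return running; }

    synchronized void increment() { counter++; }        // would be broken with volatile alone
    synchronized int current()    { return counter; }
}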
The best way I know to learn what is needed to write solid concurrent code in Java is to read Java Concurrency in Practice
Basically this code has two threads created in two classes, and they are called from the third class. Each thread has a loop, and it sleeps after each iteration.
(code is in the end)
The output is:
CHECK 0 CHECK CHECK 1 CHECK run one in thread1 CHECK 2 CHECK run two in thread2
1) I am not getting any idea why it works this way. I mean it is okay that CHECK 0 CHECK should be printed first. But why does CHECK 1 CHECK get printed before Thread1 (whereas it comes after Thread1 is called in the code) and same for CHECK 2 CHECK and Thread2?
2) If I replace CHECK 2 CHECK with System.exit(0): in the case above, printing CHECK 2 CHECK, which comes right after Thread2 in the code, takes place before Thread2 runs. Why, then, does System.exit(0) happen after Thread2 runs in this case?
Output for second case:
CHECK 0 CHECK CHECK 1 CHECK run one in thread1 run two in thread2
Please tell me why this is happening. Why are the threads and the code in the methods getting mixed up this way? I think I don't really understand how threads are managed by Java. I tried searching a lot, but could not find anything that I could understand.
Code:
public class Thread1 implements Runnable { public Thread1() { new Thread(this).start(); } public void run() { // TODO Auto-generated method stub System.out.println("run one"); try { for(int i = 0; i < 5;i++) { System.out.println("in thread1 "); Thread.sleep(1000); } } catch(Exception e) { //e.printStackTrace(); } } } public class Thread2 implements Runnable { public Thread2() { new Thread(this).start(); } public void run() { // TODO Auto-generated method stub System.out.println("run two"); try { for(int i=0;i<5;i++) { System.out.println("in thread2 "); Thread.sleep(1000); } } catch(Exception e) { //e.printStackTrace(); } } } public class Threadjava { public static void main(String[] str) { System.out.println("CHECK 0 CHECK"); new Thread1(); System.out.println("CHECK 1 CHECK"); new Thread2(); System.out.println("CHECK 2 CHECK"); //The above is deleted in the second case System.exit(0); System.out.println("CHECK 3 CHECK"); } }
Well, this is a common misconception, that java programs are single threaded by nature, because they are not. When you start a java program it's being executed inside a Java Virtual Machine, which starts several other threads to execute your code. Check this nice blog:
In your case, the most important thing is that you start a main thread, which executes the main method. From there you start two separate threads, Thread1 and Thread2, which are scheduled to be executed, but you don't know when they will be picked up by the OS scheduler to actually run. It's not deterministic, for many reasons:
Java concurrency is a hard topic and the blog entry that I've sent you is a good place to start, go with it. For serious reading go here.
Good luck.
Consider a distributed bank application, wherein distributed agent machines modify the value of a global variable : say "balance"
So, the agent's requests are queued. A request is of the form wherein value is added to the global variable on behalf of the particular agent. So,the code for the agent is of the form :
agent { look_queue(); // take a look at the leftmost request on queue without dequeuing lock_global_variable(balance,agent_machine_id); ///////////////////// **POINT A** modify(balance,value); unlock_global_variable(balance,agent_machine_id); /////////////////// **POINT B** dequeue(); // once transaction is complete, request can be dequeued }
Now, if an agent's code crashes at POINT B, then obviously the request should not be processed again, otherwise the variable will be modified twice for the same request. To avoid this, we can make the code atomic, thus :
agent { look_queue(); // take a look at the leftmost request on queue without dequeuing *atomic* { lock_global_variable(balance,agent_machine_id); modify(balance,value); unlock_global_variable(balance,agent_machine_id); dequeue(); // once transaction is complete, request can be dequeued } }
I am looking for answers to these questions :
Q> How to identify points in code which need to be executed atomically 'automatically' ?
A> Any time, when there's anything stateful shared across different contexts (not necessarily all parties need to be mutators, enough to have at least one). In your case, there's
balance that is shared between different agents.
Q> IF the code crashes during executing, how much will "logging the transaction and variable values" help ? Are there other approaches for solving the problem of crashed agents ?
A> It can help, but it has high costs attached. You need to rollback X entries, replay the scenario, etc. Better approach is to either make it all-transactional or have effective automatic rollback scenario.
Q> Again, logging is not scalable to big applications with large number of variables. What can we in those case - instead of restarting execution from scratch ?
A> In some cases you can relax consistency. For example, CopyOnWriteArrayList does a concurrent write-behind and switches data on for new readers after when it becomes available. If write fails, it can safely discard that data. There's also compare and swap. Also see the link for the previous question.
Q> In general,how can identify such atomic blocks in case of agents that work together.
A> See your first question.
Q> If one agent fails, others have to wait for it to restart ?
A> Most of the policies/APIs define maximum timeouts for critical section execution, otherwise risking the system to end up in a perpetual deadlock.
Q> How can software testing help us in identifying potential cases, wherein if an agent crashes, an inconsistent program state is observed.
A> It can to a fair degree. However testing concurrent code requires as much skills as to write the code itself, if not more.
Q> How to make the atomic blocks more fine-grained, to reduce performance bottlenecks?
A> You have answered the question yourself :) If one atomic operation needs to modify 10 different shared state variables, there's nothing much you can do apart from trying to push the external contract down so it needs to modify more. This is pretty much the reason why databases are not as scalable as NoSQL stores - they might need to modify depending foreign keys, execute triggers, etc. Or try to promote immutability.
If you were Java programmer, I would definitely recommend reading this book. I'm sure there are good counterparts for other languages, too.
Hey guys. I am trying to write a file transfer application in Java and so far it's been OK: I start the server and the client and then transfer the file. I'm having trouble connecting multiple clients to the same server. I googled it and found out that my server side should run in threads. How can I do that with my application? Thanks.
Server:
package filesharing; import java.io.*; import java.net.*; public class Server { public static void main(String args[])throws Exception { System.out.println("Server pornit..."); /* Asteapta pe portul 1412 */ ServerSocket server = new ServerSocket(1412); /* Accepta socketul */ Socket sk = server.accept(); System.out.println("Client acceptat de catre server pe portul: "+server.getLocalPort()); InputStream input = sk.getInputStream(); BufferedReader inReader = new BufferedReader(new InputStreamReader(sk.getInputStream())); BufferedWriter outReader = new BufferedWriter(new OutputStreamWriter(sk.getOutputStream())); /* Citeste calea fisierului */ String filename = inReader.readLine(); if ( !filename.equals("") ){ /* Trimite status READY catre client */ outReader.write("READY\n"); outReader.flush(); } /* Creaza fila noua in directorul tmp */ FileOutputStream wr = new FileOutputStream(new File("C://tmp/" + filename)); byte[] buffer = new byte[sk.getReceiveBufferSize()]; int bytesReceived = 0; while((bytesReceived = input.read(buffer))>0) { /* Scrie in fila */ wr.write(buffer,0,bytesReceived); } } }
Client:
package filesharing; import javax.swing.*; import java.awt.*; import java.awt.event.*; import java.net.*; import java.io.*; public class Client extends JFrame implements ActionListener { private JTextField txtFile; public static void main(String args[]){ /* Creare pannel client */ Client clientForm = new Client(); clientForm.Display(); } public void Display(){ JFrame frame = new JFrame(); frame.setTitle("Client"); FlowLayout layout = new FlowLayout(); layout.setAlignment(FlowLayout.LEFT); JLabel lblFile = new JLabel("Fisier:"); txtFile = new JTextField(); txtFile.setPreferredSize(new Dimension(150,30)); JButton btnTransfer = new JButton("Transfer"); btnTransfer.addActionListener(this); JPanel mainPanel = new JPanel(); mainPanel.setLayout(layout); mainPanel.add(lblFile); mainPanel.add(txtFile); mainPanel.add(btnTransfer); frame.getContentPane().add(mainPanel); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.pack(); frame.setVisible(true); } public void actionPerformed(ActionEvent e) { /* Casuta File Open Dialog pentru selectarea fisierului */ JFileChooser fileDlg = new JFileChooser(); fileDlg.showOpenDialog(this); String filename = fileDlg.getSelectedFile().getAbsolutePath(); txtFile.setText(filename); try{ /* Incearca conectarea la serverul localhost pe portul 1412 */ Socket sk = new Socket("localhost", 1412); OutputStream output = sk.getOutputStream(); /* Trimite numele fisierului la server */ OutputStreamWriter outputStream = new OutputStreamWriter(sk.getOutputStream()); outputStream.write(fileDlg.getSelectedFile().getName() + "\n"); outputStream.flush(); /* Asteapta raspunsul de la server */ BufferedReader inReader = new BufferedReader(new InputStreamReader(sk.getInputStream())); String serverStatus = inReader.readLine(); // Citeste prima linie /* Daca serverul e READY trimite fisierul */ if ( serverStatus.equals("READY") ){ FileInputStream file = new FileInputStream(filename); byte[] buffer = new byte[sk.getSendBufferSize()]; int bytesRead = 0; while((bytesRead = file.read(buffer))>0) { output.write(buffer,0,bytesRead); } output.close(); file.close(); sk.close(); JOptionPane.showMessageDialog(this, "Transfer complet"); } } catch (Exception ex){ /* Catch pentru eventuale erori */ JOptionPane.showMessageDialog(this, ex.getMessage()); } } }
To define and start a thread you need an implementation of the Runnable interface that is passed to a Thread instance, on which you then call start().
This question is very similar and will provide you a place to start. One answer points to a socket tutorial that shows how to use multiple clients. If you haven't read the Socket tutorials, you definitely should.
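The usual shape is an accept loop that hands each connection to its own handler; ClientHandler here is a hypothetical Runnable holding the per-connection code that currently lives in your main():

ServerSocket server = new ServerSocket(1412);
while (true) {
    Socket sk = server.accept();                 // one socket per connected client
    new Thread(new ClientHandler(sk)).start();   // handle it without blocking further accepts
}

(A fixed-size ExecutorService instead of raw new Thread(...) calls would also cap the number of concurrent transfers.)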
But don't stop at knowing just the basics of threading or you'll run into problems. Concurrency is a hard problem. You will want to read up on the great world of Java concurrency.
Ideally you will be inspired enough to read Java Concurrency in Practice, an amazing reference on concurrency in Java.
I am planning to attend a one week course on this subject. I am primarily involved in Java projects and have decent knowledge of C and C++ too. And, I am interested in learning more on concurrent programming and would like to get feedback on this course. Has someone read the book or found these concepts relevant in contemporary programming?
More information on the course:
I would definitely suggest you go with this. But I would like to add another really important resource, specific to Java - as you labeled the question 'java' - which is Java Concurrency in Practice.
I'd like a little help understanding how setting an object to null works in java. I have a situation where, seemingly, at first glance it appears that an object that is set to null, is suddenly not null, but obviously this can't be the case.
I have a class in which I create an object. This object is a scene. This is an Open GL ES 2.0 project, so this scene's render() and updateLogic() methods are called from onDrawFrame (this is controlled via a Scene Manager so we can easily switch scenes).
So, I might have something like this (code cut down for the purpose of the question):
public class MyGLRenderer implements GLSurfaceView.Renderer{ MyScene myScene; SomeOtherScene someOtherScene; public void createScenes(){ myScene = new MyScene(this); someOtherScene = new SomeOtherScene(this); SceneManager.getInstance().setCurrentScene(myScene); } public void cleanUp(){ myScene = null; Log.v("tag","myScene (from MyGLRenderer) is: "+myScene); SceneManager.getInstance().setCurrentScene(someOtherScene); //Scene changed but this won't take effect until the next 'tick' } @Override public void onDrawFrame(GL10 gl) { SceneManager.getInstance().getCurrentScene().updateLogic(); SceneManager.getInstance().getCurrentScene().render(); } }
In the above situation, processing is turned over to myScene which would look something like this:
public class MyScene implements Scene{ MyGLRenderer renderer; public myScene(MyGLRenderer renderer){ this.renderer = renderer; } @Override public void render(){ //Render something here } @Override public void updateLogic(){ doSomething(); //The condition here could be anything - maybe the user taps the sceen and a flag is set in onTouchEvent for example if (someConditionIsMet){ renderer.cleanup(); } Log.v("tag","myScene (from within myScene) is: "+this); } }
So, when I set the scene using my scene manager, processing is turned over to that scene and it's updateLogic and render methods get called from onDrawFrame continuously.
When I ran my code, I was surprised it didn't crash with a NullPointerException. The logs were like this:

myScene (from MyGLRenderer) is: null
myScene (from within myScene) is: com.program.name.MyScene@26354632
As you can see, 'myScene' is valid up until the cleanUp() method is called and sets it to null. But the code then returns to myScene to finish off, where it's still valid (not null).
I'd really like to understand how this works in Java - why does the field seem to be null one minute (or from one place) and then not null (from a different place)?
Looks like you have run into a thread safety bug.
Other threads can see stale values of a variable unless you "safely publish" the change of value. (You have one CPU changing a value in its cache line, but another CPU is still seeing stale cache data - you need to force the cache to write up to main memory but this is expensive so Java does not do this until it is told to).
There are many ways to safely publish a value depending on your exact requirements. Looking at your code it seems that all you need to do is declare the myScene field to be volatile. It probably should be private too. So try:
private volatile Scene myScene;
If you want to properly understand thread safety in Java, then I highly recommend "the Train Book":
I went through the article "". They mentioned that "The Lock framework is a compatible replacement for synchronisation". I understood that by using reentrant locks we can hold a lock across methods and wait for the lock for a certain period of time (which is not possible using synchronised blocks or methods). My doubt is: is it possible to replace an application's synchronisation mechanism with reentrant locks?
For example, I want to implement a thread-safe stack data structure where all the push, pop, and getTop methods are synchronized, so in a multi-threaded environment only one thread can access one synchronized method at a time (if one thread is using the push method, no other thread can access push, pop, getTop, or any other synchronized method of the Stack class). Is it possible to implement the same thread-safe stack data structure using a ReentrantLock? If possible, please provide an example to help understand this.
I absolutely agree because IMHO this:
synchronized (lock) {
    // ...
}
Is way more readable and less error prone than this:
try {
    lock.lock();
    // ...
} finally {
    lock.unlock();
}
Long story short: from a technical point of view, yes, you could replace
synchronized with
ReentrantLock, but I wouldn't do it per se.
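That said, since you asked for an example: a lock-based stack is a fairly mechanical translation of the synchronized version (this is a sketch; the backing Deque choice is arbitrary):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

public class LockBasedStack<T> {
    private final Deque<T> items = new ArrayDeque<T>();
    private final ReentrantLock lock = new ReentrantLock();

    public void push(T item) {
        lock.lock();
        try {
            items.push(item);
        } finally {
            lock.unlock();
        }
    }

    public T pop() {
        lock.lock();
        try {
            return items.pop();     // throws NoSuchElementException if empty, like Deque.pop()
        } finally {
            lock.unlock();
        }
    }

    public T getTop() {
        lock.lock();
        try {
            return items.peek();
        } finally {
            lock.unlock();
        }
    }
}

Because every method locks the same ReentrantLock, only one thread is inside any of push/pop/getTop at a time, which matches the behaviour of making them all synchronized.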
Also checkout these questions:
I have a class "A" with method "calculate()". Class A is of type singleton(Scope=Singleton).
public class A {
    public void calculate() {
        // perform some calculation and update DB
    }
}
Now, I have a program that creates 20 threads. All threads need to access the method calculate(). I have a multicore system, so I want parallel processing of the threads.
In the above scenario, can I get a performance benefit? Can all threads access the calculate method at the same instant?
Or, since class A is a singleton, do the threads need to block and wait?
I have found similar questions on the web/Stack Overflow, but I cannot get a clear answer. Would you please help me?
Statements like "singletons need synchronization" or "singletons don't need synchronization" are overly simplistic, I'm afraid. No conclusions can be drawn only from the fact that you're dealing with the singleton pattern.
What really matters for purposes of multithreading is what is shared. If there are data that are shared by all threads performing the calculation, then you will probably need to synchronize access to them. If there are critical sections of code that cannot run simultaneously between threads, then you will need to synchronize those.
The good news is that often times it will not be necessary to synchronize everything in the entire calculation. You might gain significant performance improvements from your multi-core system despite needing to synchronize part of the operation.
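To make that concrete, here is a sketch of a singleton whose calculate() only serializes the shared part; the helper methods and the exact boundary of the "shared part" are assumptions about your code, not taken from it:

public class A {
    private final Object dbLock = new Object();   // guards only the shared update

    public void calculate() {
        double result = doHeavyComputation();      // thread-confined work: no lock, runs in parallel

        synchronized (dbLock) {
            writeToDatabase(result);               // only the shared-state/DB update is serialized
        }
    }

    private double doHeavyComputation() { return 42.0; }     // stand-in
    private void writeToDatabase(double value) { /* ... */ } // stand-in
}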
The bad news is that these things are very complex. Sorry. One possible reference:
I have an ImageView in which the picture switches every 5 s, and I am trying to add a pause and resume button that can stop and restart the action. I am using a Handler, Runnable, and postDelay() for the image switch, and I put the code in onResume. I am thinking about using wait and notify for the pause and resume, but that would mean creating an extra thread. So far, for the thread, I have this:
class RecipeDisplayThread extends Thread {
    boolean pleaseWait = false;

    // This method is called when the thread runs
    public void run() {
        while (true) {
            // Do work

            // Check if should wait
            synchronized (this) {
                while (pleaseWait) {
                    try {
                        wait();
                    } catch (Exception e) {
                    }
                }
            }

            // Do work
        }
    }
}
and in the main activity's onCreate():
Button pauseButton = (Button) findViewById(R.id.pause);
pauseButton.setOnClickListener(new View.OnClickListener() {
    public void onClick(View view) {
        while (true) {
            synchronized (thread) {
                thread.pleaseWait = true;
            }
        }
    }
});

Button resumeButton = (Button) findViewById(R.id.resume);
resumeButton.setOnClickListener(new View.OnClickListener() {
    public void onClick(View view) {
        while (true) {
            // Resume the thread
            synchronized (thread) {
                thread.pleaseWait = false;
                thread.notify();
            }
        }
    }
});
The pause button seems to work, but after that I can't press any other button, such as the resume button.
Thanks.
i am using a Handler, Runnable, and postDelay() for image switch
Why? Why not use
postDelayed() and get rid of the
Thread and
Handler?
void doTheImageUpdate() {
    if (areWeStillRunning) {
        myImageView.setImageResource(R.drawable.whatever);
        myImageView.postDelayed(updater, 5000);
    }
}

Runnable updater = new Runnable() {
    public void run() {
        doTheImageUpdate();
    }
};
When you want to start updating, set
areWeStillRunning to
true and call
doTheImageUpdate(). When you want to stop updating, set
areWeStillRunning to
false. You'll need to work out the edge case of where the user presses pause and resume within 5 seconds, to prevent doubling things up, but I leave that as an exercise for the reader.
If you really want to use a background thread, you will want to learn more about how to use background threads. For example,
while(true) {} without any form of exit will never work. There are some good books on the subject, such as this one.
I have a single-thread pool for task execution. As far as I know, continuing to work after an
OutOfMemoryError has occurred is very dangerous; we should terminate the application if this happens. So, consider the following:
ExecutorService es = Executors.newSingleThreadExecutor();
es.submit(new Runnable() {
    @Override
    public void run() {
        throw new OutOfMemoryError();
    }
});
es.shutdown();
es.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
System.out.println("After throwing OutOfMemoryError");
In this code, the task throws
OutOfMemoryError. But even after the error is thrown, the program keeps running and prints
After throwing OutOfMemoryError.
Is it safe? I mean, we may end up with data corruption... Should we be prepared for this sort of scenario and design tasks to terminate the application if an
Error is thrown?
There are two threads in the given example -- the main thread that starts the program and a thread for executing the submitted tasks. Each thread is allocated its own stack, independent of the others. However, all threads use the same memory heap. This is why an
OutOfMemoryError affects the whole program, not just a single thread.
Generally speaking, upon termination (successful or otherwise), the task-executing thread does not affect the execution flow of any other thread (unless that is what it was designed to do). That is why the main thread in the provided example keeps running even though the task thread was terminated.
I would highly recommend studying the Java Concurrency in Practice book to get a better overall understanding of Java concurrency and parallelism.
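If you do decide that the process must die when an Error escapes a task, one possibility (my own suggestion, not something the answer above prescribes) is to catch Error inside the task and halt the JVM explicitly. A minimal sketch:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HaltOnError {
    public static void main(String[] args) {
        ExecutorService es = Executors.newSingleThreadExecutor();
        es.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    throw new OutOfMemoryError(); // stand-in for real work that fails
                } catch (Error e) {
                    e.printStackTrace();
                    // Bring the whole JVM down instead of continuing with a possibly corrupt heap.
                    Runtime.getRuntime().halt(1);
                }
            }
        });
        es.shutdown();
    }
}

With this wrapping, the "After throwing OutOfMemoryError" line would never be reached, because halt() terminates every thread immediately.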
I have a Spring Batch job running 4 times a day. Once the job is started, it checks the status of the previous job; if the previous job's status is "started", then the job is aborted. I have a query to check that. The problem is that if one job fails while checking the previous job's status, its status will be "failed" and the next job will start processing, so two instances end up running in parallel and we have to stop one manually. This forces us to constantly monitor the job. I have also tried changing the query to check the previous 5 job runs, which is fine. But is there any other way, using the JVM or some other technique, that we can meet this requirement?
Please suggest..!
If no 2 jobs can run at the same time, it is probably because there is a resource that you want to protect. Locking mechanisms are best suited for this. Read Java Concurrency In Practice for more info on that. If you don't have quick access to this excellent book, you can start at
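As a concrete illustration of the locking idea (this is a plain-Java sketch with hypothetical names, and it assumes all job runs happen inside the same JVM; it will not protect you across separate processes or machines), a non-blocking tryLock lets a new run abort immediately if the previous one is still in progress:

import java.util.concurrent.locks.ReentrantLock;

public class JobGuard {
    private static final ReentrantLock JOB_LOCK = new ReentrantLock();

    public void runJob() {
        if (!JOB_LOCK.tryLock()) {
            System.out.println("Previous run still in progress - aborting this run");
            return;
        }
        try {
            doBatchWork(); // hypothetical method containing the actual job logic
        } finally {
            JOB_LOCK.unlock();
        }
    }

    private void doBatchWork() {
        // the real batch processing goes here
    }
}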
I have been trying to wrap my head around callbacks and have been struggling to grasp the concept. The following code is an example that I found here
starting from first to last I understand the flow to be such:
- CallMe is instantiated, thus calling the constructor of said class
- en is set, subsequently instantiating the EventNotifier class and calling its constructor, which is passed a reference to the object CallMe
- ie is set to the object CallMe which was passed into the constructor
- somethingHappened is set to false (I would assume some conditional statement would be used to determine whether or not to set the value otherwise)
I do not understand this code. How does
doWork get called? How does this signify an event? Why would one not simply call
interestingevent from the constructor of
callme .... For that matter why not just call
dowork in place of whatever would change the value of
somethinghappened?
Try as I might I cannot seem to grasp the idea. I understand that callbacks are used primarily to signify an event has occurred such as a mouse or button click but how does it make the connection between the event occurring and the methods being called? Should there not be a loop that checks for changes, and thus triggers the event?
Can someone please provide a (not over-simplified) explanation of callbacks in java and help clarify how something like this could be useful?
public interface InterestingEvent {
    public void interestingEvent ();
}

public class EventNotifier {
    private InterestingEvent ie;
    private boolean somethingHappened;

    public EventNotifier (InterestingEvent event) {
        ie = event;
        somethingHappened = false;
    }

    public void doWork () {
        if (somethingHappened) {
            ie.interestingEvent ();
        }
    }
}

public class CallMe implements InterestingEvent {
    private EventNotifier en;

    public CallMe () {
        en = new EventNotifier (this);
    }

    public void interestingEvent () {
        // Wow! Something really interesting must have occurred!
        // Do something...
    }
}
EDIT: please see the comments in the approved answer... ---this--- link was very helpful for me =)
There is no main method or static blocks. Nothing is actually run from the code you posted; hence,
doWork() is never called. I read the article and looked at the code, and it appears to be incomplete, or perhaps some code is left out because the author felt that it didn't need to be explained.
Here's the gist:
We have an
interface InterestingEvent, a
class EventNotifier, and another class
CallMe, which
implements InterestingEvent.
EventNotifier takes an
InterestingEvent in its constructor, and sets
somethingHappened to
false.
The constructor for
CallMe initializes its
EventNotifier instance member by passing the
EventNotifier constructor a reference to the
CallMe object, itself.
The following is not in the code, but if we detect that some particular action takes place, we set
somethingHappened = true. So after that, if
doWork() is called for an
EventNotifier,
interestingEvent() will be called on that
EventNotifier's
InterestingEvent ie. We can do this, since
CallMe implements InterestingEvent.
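To make that flow visible, here is a sketch of the missing driver code. Note that getNotifier() and setSomethingHappened() do not exist in the article's classes; they are hypothetical additions I am assuming purely so the example can be run end to end:

public class Demo {
    public static void main(String[] args) {
        CallMe callMe = new CallMe();                  // registers itself with its EventNotifier
        EventNotifier notifier = callMe.getNotifier(); // hypothetical getter for the notifier

        notifier.doWork();                             // nothing happens: somethingHappened is false

        notifier.setSomethingHappened(true);           // hypothetical setter; simulates "the event occurred"
        notifier.doWork();                             // now interestingEvent() is called back on CallMe
    }
}

The second doWork() call is the "callback": EventNotifier does not know it is talking to a CallMe, only to some InterestingEvent, which is the whole point of the pattern.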
NB: This article was from 1996 and much has changed since then. You mentioned how to detect mouse click events, but this is different. The point of the article, I assume, was to show how you can use objects in conjunction with interfaces and booleans to see if something occurred.
To actually detect a mouse click, take a look at this tutorial. Here's another tutorial on Writing Event Listeners. Finally, since you asked about threading in a comment, here's a great book: Java Concurrency in Practice.
I have java.util.List declared as follows:
private static List<String> extensions = null;
It is populated by single thread always.
But multiple threads can call simultaneously method
contains(E e) on
extensions.
Is it threadsafe ?
List is populated by single thread at startup and cannot be modified afterwards. Is now contains threadsafe?
An object used in that way is called "effectively immutable"---the program is allowed to change the object, but it doesn't. The book, Java Concurrency in Practice by Brian Goetz has a section that deals specifically with safe publication of effectively immutable objects.
The short answer (assuming that I am remembering it correctly) is that if you populate the list inside a constructor, and if no other thread can access the list before the constructor returns, then the list is safely published. I am assuming, of course, that the items you put in the list are also effectively immutable.
"Safe publication" means that other threads will be guaranteed to see the list in its intended, final state. If the list were not safely published, then threads running on different processors could see different versions of the list (potentially including some version where the list was in an inconsistent state that could crash your program when you tried to access it.)
I have just started Android development. I want to make a thread in my application which gets the current location every thirty seconds. Kindly give me some hints or any tutorial link which you think would be useful for this problem.
I have lots of legacy code, which to a large extent consists of classes with following structure:
public interface MyFunctionalBlock {
    // Setters for the inputs
    void setInput1(final int aInput1);
    void setInput2(final Object aInput2);

    // Inside the method run inputs are converted into results
    void run();

    // If this building block needs functionality from some other building blocks,
    // it gets a reference to them from the Google Guice injector.
    void setInjector(final Injector aInjector);

    // Getters for the results
    long getResult1();
    Object getResult2();
    Map<String,String> getResult3();
}

public class MyFunctionalBlockFactory implements Factory<MyFunctionalBlock> {
    public MyFunctionalBlock create() {
        return new DefaultMyFunctionalBlock();
    }
}

class DefaultMyFunctionalBlock implements MyFunctionalBlock {
    private int input1;
    private Object input2;
    private long result1;
    private long result2;
    private Map<String,String> result3;
    private Injector injector;

    @Override
    public void run() {
        // Here the calculations are performed.
        // If this functional block needs another one, it gets a reference to it using the injector.
        // AnotherFunctionalBlock is the public interface. Implementations of the interface are
        // intentionally hidden using injector and package-private declaration.
        final AnotherFunctionalBlock fb = injector.getInstance(AnotherFunctionalBlock.class);

        // First, we set the inputs
        fb.setInput1(...);
        fb.setInput2(...);
        [...]
        fb.setInputN(...);

        // Now we run the calculation
        fb.run();

        // Now we can use the results
        fb.getResult1();
        fb.getResult2();
        [...]
        fb.getResultN();
    }

    // Implementation of getters and setters omitted
}
Basically, the entire application consists of such building blocks, which use each other.
Up to now, the application was used in a single-threaded mode. Now I need to modify it such that
How can I do this?
I thought about putting the code from setting the first input to reading the last result into
synchronized block (something like the code example below), but it would require rewriting the entire application.
final AnotherFunctionalBlock fb = injector.getInstance(AnotherFunctionalBlock.class);
synchronized(fb) {
    fb.setInput1(...);
    fb.setInput2(...);
    [...]
    fb.setInputN(...);
    fb.run();
    fb.getResult1();
    fb.getResult2();
    [...]
    fb.getResultN();
}
Update 1 (09.06.2013 21:57 MSK): A potentially important note - the concurrency stems from the fact that there are N web services, which receive a request, then use the old code to make calculations based on that request and return the results to the web service client.
A potential solution would be to add some sort of queue between web services and the old code.
Update 2:
I thought about how to make my code thread-safe with minimum possible effort and found following solution (currently, I don't care about performance).
There are several web service classes, which all have a backend property and access it concurrently.
public class WebService1 {
    private Backend backend;

    public Response processRequest(SomeRequest1 request) {
        return wrapResultIntoResponse(backend.doSomeThreadUnsafeStuff1(
            request.getParameter1(), request.getParameter2()));
    }
}

public class WebService2 {
    private Backend backend;

    public Response processRequest(SomeRequest2 request) {
        return wrapResultIntoResponse(backend.doSomeThreadUnsafeStuff2(
            request.getParameter1(), request.getParameter2(), request.getParameter3()));
    }
}
All calls to the non-threadsafe code go via the
Backend class (all web services reference one and the same
Backend instance).
If I ensure that the backend processes one request after another (and never processes two requests simultaneously), I can achieve the desired result without re-writing the entire application.
Here's my implementation of Backend class:
public class Backend {

    private synchronized boolean busy = false;

    public Object doSomeThreadUnsafeStuff1(Long aParameter1, String aParameter2) {
        waitUntilIdle();
        synchronized (this) {
            busy = true;
            // Here comes the non-thread safe stuff 1
            busy = false;
            notifyAll();
        }
    }

    public Object doSomeThreadUnsafeStuff2(Long aParameter1, String aParameter2, Map<String,String> aParameter3) {
        waitUntilIdle();
        synchronized (this) {
            busy = true;
            // Here comes the non-thread safe stuff 2
            busy = false;
            notifyAll();
        }
    }

    private void waitUntilIdle() {
        while (busy) {
            wait();
        }
    }
}
Can this solution work?
It's unclear what you're trying to accomplish beyond "making it multi-threaded". Concurrency in Java is a very complex subject, and you're not going to find a single, step-by-step answer for how to convert an entire application from single- to multi-threaded. If you do, I'd mistrust that answer thoroughly. I suggest you pick up "Java Concurrency in Practice", the de facto reference for such things. That's how you'll learn what you need to know in order to tackle this problem.
I have been reading Java Concurrency in Practice, and based on page 146 I have coded the following:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class RethrowableTask implements Runnable {
    private static final ScheduledExecutorService cancelExec = Executors.newScheduledThreadPool(1);
    private Throwable t;

    public void run() {
        try {
            while (true) {}
        } catch (Throwable t) {
            this.t = t;
        }
    }

    public static void main(String[] args) {
        RethrowableTask task = new RethrowableTask();
        final Thread taskThread = new Thread(task);
        taskThread.start();
        cancelExec.schedule(new Runnable() {
            public void run() {
                taskThread.interrupt(); // I want taskThread to catch the InterruptedException
            }
        }, 1, TimeUnit.SECONDS);
    }
}
I want taskThread to catch the InterruptedException as a Throwable, and the taskThread's isInterrupted status really is true, but taskThread never catches anything. Why?
I substitute
while(true){} with
try {
    Thread.currentThread().sleep(1000); // a blocking method
} catch (InterruptedException e) {
    System.out.println("interruptedException");
    Thread.currentThread().interrupt();
}
it does come into the catch block.
An
InterruptedException is only thrown when a thread is waiting on a blocking method call at the moment of interruption.
In all other situations, a thread must check its own interrupted status. If you want to test the class you've written, call a blocking method in your while loop.
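For example, here is a minimal sketch of a run() method that polls its own interrupted status instead of spinning forever; it assumes the Throwable field t from your class, and translating the interrupt into an InterruptedException is just one way to surface it to your existing catch block:

public void run() {
    try {
        // Poll the interrupted status instead of looping unconditionally.
        while (!Thread.currentThread().isInterrupted()) {
            // do one small unit of work per iteration
        }
        // Translate the interrupt into the exception the catch block expects.
        throw new InterruptedException("task thread was interrupted");
    } catch (Throwable t) {
        this.t = t;
    }
}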
I learnt Java back in university. It's been 4 years since I last coded Java. I develop PHP applications mainly. This time I need a language with more powerful concurrency support. I thought to myself, I'll just revise my Java in an hour and I'm ready to go.
As it turned out, there are no human-friendly tutorials (!!) that can be easily found. I searched "java tutorial" and the first results are either impossibly abstract with no useful code examples or ad-filled spit-outs of the Web 1.0 era! More complex searches just led me to more confusing/outdated posts. I just love PHP for the numerous friendly tutorials out there.
Anyway, to avoid making this a pointless post, can anyone direct me to a readable tutorial to how I can use the thread ExecutorService to 1) queue a few thousand Runnables, 2) have a maximum of 15 threads executing at a time, and 3) if a thread fails, re-queue it or just don't remove it from the Executor's pool.
Thank you in advance!
If you dislike Times New Roman, just change the browser default font to Tahoma or something like.
Then start here and click your way through Next link. Then there are the API docs, each with examples in the introductory text. E.g.
ExecutorService. Then there are books, like Concurrency in Practice.
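To make the three requirements concrete, here is a minimal sketch (class and task names are hypothetical, and a real version should cap the number of retries rather than re-submitting forever):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RetryingPoolDemo {

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(15); // at most 15 tasks run at a time

        // Queue a few thousand Runnables; the rest wait in the pool's internal queue.
        for (int i = 0; i < 5000; i++) {
            pool.submit(new RetryingTask(pool, i));
        }
        // Call pool.shutdown() only once all work, including retries, is known to be finished.
    }

    static class RetryingTask implements Runnable {
        private final ExecutorService pool;
        private final int id;

        RetryingTask(ExecutorService pool, int id) {
            this.pool = pool;
            this.id = id;
        }

        @Override
        public void run() {
            try {
                doWork();
            } catch (Exception e) {
                // "Don't remove it from the pool": re-queue the same task on failure.
                pool.submit(this);
            }
        }

        private void doWork() {
            // the actual unit of work for item 'id' goes here
        }
    }
}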
I'm developing a JavaFX application for read data from a serial device and show a notification when a new device is connected to the computer.
I have a task
DeviceDetectorTask which scans all the ports and creates an event when a new device is connected. This task must be submited every 3 seconds.
When a device is detected the user can press a button to read all the data contained in it. This is performed by another task
ReadDeviceTask. At this point and while the
ReadDeviceTask is running, scan operations should not be performed (I cannot read and scan one port at the same time), so only one of the two tasks can be running at a time.
My actual solution is:
public class DeviceTaskQueue {
    private ExecutorService executorService = Executors.newSingleThreadExecutor();

    public void submit(Runnable task) {
        executorService.submit(task);
    }
}

public class ScanScheduler {
    private ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        AddScanTask task = new AddScanTask();
        executor.scheduleAtFixedRate(task, 0, 3, TimeUnit.SECONDS);
    }
}

public class AddScanTask implements Runnable {
    @Autowired
    DeviceTaskQueue deviceTaskQueue;

    @Override
    public void run() {
        deviceTaskQueue.submit(new DeviceDetectorTask());
    }
}

public class ViewController {
    @Autowired
    DeviceTaskQueue deviceTaskQueue;

    @FXML
    private readDataFromDevice() {
        deviceTaskQueue.submit(new ReadDeviceTask());
    }
}
My question is: is it ok to add a task to the ExecutorService from the task AddScanTask which has been scheduled by the ScheduledExecutorService?
To answer your simple question in last line:
is it ok to add a task to the ExecutorService from the task AddScanTask which has been scheduled by the ScheduledExecutorService?
Yes. Certainly you can submit a
Callable/
Runnable from any other code. That the submitting code happens to be running from another executor is irrelevant, as code run from an executor is still “normal” Java code, just running on a different thread.
That is the whole point of the executor, to handle the juggling of threads in a manner convenient to you the programmer. Making multi-threaded coding easier and less error-prone is why these classes were added to Java. See the extremely helpful book, Java Concurrency in Practice by Brian Goetz et al. And see other writings by Goetz.
In your case you have two executors each with their own thread, each executing a series of submitted tasks. One has tasks submitted automatically (timed) while the other has tasks submitted manually (arbitrarily). Each executes on their own thread independent of one another. With multiple cores they may execute simultaneously.
Therein lies the bigger problem: In your scenario you don't want them to be independent. You want the reading tasks to block the scanning tasks.
The problem you present is that a regularly occurring activity (scanning) must halt when an arbitrary event (reading) happens. That means the two activities must coordinate with one another. The question is how to coordinate.
When the arbitrary event is happening, it should raise a flag. The recurring activity, when it runs, should always check for that flag. If raised, wait until the flag lowers before proceeding with scan. The
ScheduledExecutorService is designed for this, tolerating a task that may run for a time longer than the scheduled period. If one execution of the task runs long, the SES does not start the next execution concurrently, so it does not pile up a backlog of executions. That is just the behavior you want.
Vice versa, if the recurring activity is executing, it should raise a flag. The arbitrary event’s first to-do item is to check for that flag. If raised, wait until lowered. Then proceed, first raising its own flag and then proceeding with the task at hand (scanning).
Perhaps your scenario should be designed with a single flag rather than scanner and reader each having their own. I would have to think about it more and probably know more about your scenario.
The technical term for such flags is semaphore.
Unfortunately your comment says you cannot alter the scanner’s source code. So you cannot implement the semaphores and coordinate the activities. So I am stuck, cannot see a solution.
Given your frozen code, one hack solution, which I do not recommend, is that the regularly occurring activity (the scanning) not actually do the work but instead post a scanning task on another thread (another executor). That other executor would also be the same executor used to post the arbitrary activity (the reading). So there is one single queue of to-do items, a mix of scanning and reading jobs, submitted to a single-thread executor. The single-thread means they get done one at a time in sequence of their submission.
I do not like this hack because if any of the to-do items takes a long while you will begin to accumulate a backlog. That could be a mess.
By the way, no need for the
DeviceTaskQueue in your example code. Just call the instance of the
ExecutorService directly to submit a task. That is the job of an
ExecutorService, and wrapping it adds no value that I can see.
I mean, why do de facto immutable objects exist? Why do we not just use the final and static modifiers? What is so important about String that Java makes it immutable?
Making a variable final makes that reference unchangeable. But the object that the reference points to can still change, so if I define:
final List<String> list = new ArrayList<String>();
I can't swap the list out for another list, but I can still modify the contents of the list:
list.add("asdf");
But an immutable object cannot be changed once it is constructed.
(Using static only means the field is defined on the class, not on the instance. It's used for defining constant values (moreso before enums were added) but only because only one value is needed for the class. The static keyword's not directly relevant for immutability.)
Immutable objects are threadsafe and concerns about memory visibility, lost updates, etc. are not applicable, because the object's state is safely published upon construction.
They are easy to reason about because there are no state changes. For things with value-based equality, immutability is a better match for the concept being described. For Strings and numbers, that are unchanging abstractions, immutability is especially appropriate.
If you have a mutable object where a mutable field participates in its equals and hashCode implementation, then you can have a situation where you put it in a collection, then mutate the field, breaking how the collection works. It's better to avoid that kind of thing up front.
Also immutable objects are safer to share, see Java Concurrency in Practice, 3.4:
Immutable objects are also safer. Passing a mutable object to untrusted code, or otherwise publishing it where untrusted code could find it, is dangerous -- the untrusted code might modify its state, or worse, retain a reference to it and modify its state later from another thread. On the other hand, immutable objects cannot be subverted in this manner by malicious or buggy code, so they are safe to share and publish freely without the need to make defensive copies.
I'm writing a game engine which performs alpha-beta search on a game state, and I'm trying to parallelize it. What I have so far works at first, but then it seems to slow to a halt. I suspect that this is because I'm not correctly disposing of my threads.
When playing against the computer, the game calls on the getMove() function of a MultiThreadedComputerPlayer object. Here is the code for that method:
public void getMove(){
    int n = board.legalMoves.size();
    threadList = new ArrayList<WeightedMultiThread>();
    moveEvals = new HashMap<Tuple, Integer>();

    // Whenever a thread finishes its work at a given depth, it awaits() the other threads
    // When all threads are finished, the move evaluations are updated and the threads continue their work.
    CyclicBarrier barrier = new CyclicBarrier(n, new Runnable(){
        public void run() {
            for(WeightedMultiThread t : threadList){
                moveEvals.put(t.move, t.eval);
            }
        }
    });

    // Prepare and start the threads
    for (Tuple move : board.legalMoves) {
        MCBoard nextBoard = board.clone();
        nextBoard.move(move);
        threadList.add(new WeightedMultiThread(nextBoard, weights, barrier));
        moveEvals.put(move, 0);
    }
    for (WeightedMultiThread t : threadList) { t.start(); }

    // Let the threads run for the maximum amount of time per move
    try {
        Thread.sleep(timePerMove);
    } catch (InterruptedException e) { System.out.println(e); }
    for (WeightedMultiThread t : threadList) {
        t.stop();
    }

    // Play the best move
    Integer best = infHolder.MIN;
    Tuple nextMove = board.legalMoves.get(0);
    for (Tuple m : board.legalMoves) {
        if (moveEvals.get(m) > best) {
            best = moveEvals.get(m);
            nextMove = m;
        }
    }
    System.out.println(nextMove + " is the choice of " + name + " given evals:");
    for (WeightedMultiThread t : threadList) {
        System.out.println(t);
    }
    board.move(nextMove);
}
And here is the run() method of the threads in question:
public void run() {
    startTime = System.currentTimeMillis();
    while(true) {
        int nextEval = alphabeta(0, infHolder.MIN, infHolder.MAX);
        try {
            barrier.await();
        } catch (Exception e) {}
        eval = nextEval;
        depth += 1;
    }
}
I need to be able to interrupt all the threads when time is up-- how am I supposed to implement this? As of now I'm constantly catching (and ignoring) InterruptedExceptions.
The most sensible way is to use the interruption mechanism.
Thread.interrupt() and
Thread.isInterrupted() methods. This ensures your message will be delivered to a thread even if it sits inside a blocking call (remember some methods declare throwing
InterruptedException?)
P.S. It would be useful to read Brian Goetz's "Java Concurrency in Practice" Chapter 7: Cancellation and Shutdown.
In my application I have the
MainThread and I have a
SeperateThread
I have the threads working almost to perfection. The only problem is I can't shut the
SeperateThread down.
public void run() {
    isRunning = true;
    while (isRunning) {
        Log.d(TAG, "Running...");
        long currentTime = SystemClock.uptimeMillis();
    }
}

public void StopThread() {
    isRunning = false;
}

seperateThread.StopThread();
Then in the thread I have a method that just turns the
volatile boolean isRunning off. But even though I step through in the debugger and see that the thread switches the boolean to off, the thread does not stop.
You definitely want to be using the new concurrency package stuff for this, and grab a copy of Java Concurrency in Practice, and learn more about this. Threads are deceptively simple.
The reason your code isn't working probably has to do with isRunning not being set to false the way you expect. However, the java.util.concurrent package does this for you anyway.
In your case you should look at the Executor framework. So your code could look something like this:
ExecutorService exec = Executors.newFixedThreadPool(1); // A field somewhere

// Start your thread
exec.submit(new Runnable(){
    @Override
    public void run() {
        while (!exec.isShutdown()) {
            Log.d(TAG, "Running...");
            // Do stuff
        }
    }
});

// when done
exec.shutdown();
Thinking about this overnight, this might be a better way:
ScheduledExecutorService exec = Executors.newScheduledThreadPool(1);
exec.scheduleWithFixedDelay(new Runnable() {
    @Override
    public void run() {
        // Do stuff
    }
}, 0, 1, TimeUnit.MILLISECONDS);

// when done
exec.shutdown();
Is there any way to return a value (or just
return;) for the outer function from within the inner function?
My question is similar to this question: Breaking out of nested loops in Java, but the difference is that in this question the asker asked about breaking the outer loop in nested loops, and I ask about returning for the outer function in nested functions.
public void outerFunction () {
    runOnUiThread (new Runnable() {
        @Override
        public void run () {
            // Here i want to return;, so the function will not continue after runOnUiThread
        }
    });
    // Code here should not run.
}
I'm not sure I understand the question. But I assume you are not very familiar with multi-threading.
The code within
Runnable and the code after the call to
runOnUiThread actually run at the same time, but on different threads. So it is light-years away from nested loops, or from single-threaded nested methods calls.
You can actually make a "blocking" call to another thread to wait for a value, but I don't think it's appropriate to write all of this out in a stackoverflow answer. You should read up on multi-threading and I strongly recommend Java Concurrency in Practice. But you should be warned that it is not a trivial topic that you can pick up in a few days. You can also look at this tutorial; the interface
Future implements the "blocking" call I was referring to.
EDIT
I wrote the above thinking in Java. Things are more complicated with Android since in order to use a
Future, you would need to get an
ExecutorService for the UI thread, and I'm not sure that is possible.
There are many Android specific multi-threading constructs (see for example this tutorial, or the official doc). It is certainly possible to solve your problem, but we don't have enough details. Also, you should probably not try to find a quick fix for your problem, but rethink your whole design so that fits naturally within the Android multi-threading paradigm.
I would like to create a service which will manage a collection through a REST-style controller. I was thinking about what is required to make this service safe when multiple people hit it at the same time.
So basically something like this...
@Transactional
class NoteService {

    private static users = [:] // This won't be so simple in the future
    private static key = 0

    def get(id) {
        log.debug("We are inside the get")
        return users[id]
    }

    def create(obj) {
        log.debug("We are inside the create")
        update(key++, obj)
    }

    def update(id, obj) {
        log.debug("We are inside the update")
        users.put(id, obj)
    }

    def delete(id) {
        log.debug("We are inside the remove")
        user.remove(id)
    }
}
Will this work if I have multiple controller requests hitting it at the same time? My concerns are that there could be problems if two clients are trying to hit it at the same time. Also would there be a better strategy maybe using promises? I am using 2.3+
No, this is broken and not thread-safe.
If you have no mutable state, you are always thread-safe. Of course if you have no state variables at all then it's even better, but it's fine to have state variables for things like dependency-injected Spring beans, or a logger, etc. As long as you don't change those values, then they're effectively immutable (they get set during startup and aren't changed afterwards), and two concurrent callers won't interfere with each other.
But you have exactly the thing that is most problematic for concurrent access - a state variable (in this case it doesn't help or hurt that it's static because by default Grails services are singleton Spring beans, so that map could be a non-static instance variable and have the same problems) that you change in multiple methods.
The easiest thing to do would be to synchronize on the map. You can't just synchronize the methods - that would only work if only one method accessed the map. Using
synchronized would serialize the calls and guarantee no concurrent access. But you read and write from multiple methods, so serializing calls to each of those doesn't help the interactions between concurrent calls of different methods. Even if you synchronize every method you are still likely to have occasional instances where two methods get called at the same time; being synchronized doesn't help.
So you need a mechanism to synchronize across methods, and you're somewhat lucky here since you only have the one mutable field, so you can synchronize on that (but of course you could always create a dummy 'lock' object and synchronize on that if you had multiple fields being changed). Then all access to all methods (whether they're synchronized or not, and you can now un-synchronize them because that's only slowing things down) is guarded by serializing the calls "through" the map.
This is the easiest, but isn't very performant. If the time spent holding each synchronization lock is short, you probably won't notice much of an issue. Try to make the synchronized blocks as short as possible:
def update(id, obj) {
    log.debug("We are inside the update")
    synchronized(users) {
        users.put(id, obj)
    }
}
A much better solution would be to use the
java.util.concurrent.* locking and concurrency classes that were added in Java 5. This will be very performant if implemented correctly, but getting to the point where you understand how to use these APIs will take a while. The best resource is Java Concurrency in Practice. It was written in 2006 but is still very applicable (it obviously doesn't include updates in newer JDKs, but the APIs available in 1.5 and described in that book are sufficient for many use cases). The book is ~400 pages but the material is difficult (but very well explained), so plan on a multi-month time frame :)
Venkat Subramaniam's Programming Concurrency on the JVM is another great resource. It's newer (2011) and less in-depth than JCIP, so it covers less but is more approachable. And it covers multiple JVM languages including Groovy. Still a multi-month timeframe, but fewer months.
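As a rough illustration of the java.util.concurrent direction (written in plain Java rather than Groovy, with Object standing in for your real value type), the whole service can be expressed with a ConcurrentHashMap and an AtomicLong so that no explicit synchronized blocks are needed at all:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class NoteService {

    private final Map<Long, Object> users = new ConcurrentHashMap<Long, Object>();
    private final AtomicLong key = new AtomicLong();

    public Object get(long id) {
        return users.get(id);
    }

    public long create(Object obj) {
        long id = key.getAndIncrement(); // atomic replacement for key++
        users.put(id, obj);
        return id;
    }

    public void update(long id, Object obj) {
        users.put(id, obj);
    }

    public void delete(long id) {
        users.remove(id);
    }
}

The same types are available from Groovy, so this translates to a Grails service almost line for line.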
I wrote the method below to insert records using threads, but at run time I receive a "[SQLITE_BUSY] The database file is locked (database is locked)" error, and I think it could be due to a conflict between SQLite statements.
I just want to know whether I am using the ExecutorService correctly in the "insertRecord" method. Are there any other variables that should have been synchronized?
code:
public void insertRecord(String nodeID, String lat, String lng, String xmlpath) throws SQLException, ClassNotFoundException {
    if (this.isTableExists(this.TABLE_NAME)) {
        InsertRun insRun = new InsertRun(this.psInsert, nodeID, lat, lng, xmlpath);
        this.executor.execute(insRun);
    } else {
        Log.e(TAG, "insertRecord", "table: ["+this.TABLE_NAME+"] does not exist");
    }
}

public void flush() throws SQLException {
    this.psInsert.executeBatch();
    this.psInsert.close();
    this.connInsert.close();
    Log.d(TAG, "insertRecord", "the rest of the records flushed into data base table.");
}

private class InsertRun implements Runnable {
    private PreparedStatement psInsert = null;
    private String nodeID;
    private String lat;
    private String lng;
    private String xmlPath;

    public InsertRun(PreparedStatement psInsert, String nodeID, String lat, String lng, String xmlpath) {
        this.psInsert = psInsert;
        this.nodeID = nodeID;
        this.lat = lat;
        this.lng = lng;
        this.xmlPath = xmlpath;
    }

    @Override
    public void run() {
        try {
            this.psInsert.setString(1, this.nodeID);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        try {
            this.psInsert.setString(2, this.lat);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        try {
            this.psInsert.setString(3, this.lng);
        } catch (SQLException e1) {
            e1.printStackTrace();
        }
        try {
            this.psInsert.setString(4, this.xmlPath);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        try {
            this.psInsert.addBatch();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        synchronized (this) {
            if (++batchCnt == SysConsts.BATCH_SIZE) {
                try {
                    this.psInsert.executeBatch();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
                batchCnt = 0;
                Log.d(TAG, "InsertRun", SysConsts.BATCH_SIZE+" records inserted.");
            }
        }
    }
}
Additionally, good concurrency design considerations can be difficult to offer without knowing a lot more about the program. I suggest reading through Java Concurrency in Practice if you plan on writing more multi-threaded applications.
Let's consider this situation:
public class A {
    private Vector<B> v = new Vector<B>();
}

public class B {
    private HashSet<C> hs = new HashSet<C>();
}

public class C {
    private String sameString;

    public void setSameString(String s){
        this.sameString = s;
    }
}
My questions are:
1) Vector is thread-safe, so when a thread calls a method on it, for instance get(int index), is this thread the only owner of the HashSet hs?
2) If a thread calls get(int index) on v and obtains one B object, and then this thread obtains a C object and invokes the setSameString(String s) method, is this write thread-safe? Or is a mechanism such as Lock needed?
Thank you very much for the explanation.
First of all, take a look at this SO on reasons not to use
Vector. That being said:
1)
Vector locks on every operation. That means it only allows one thread at a time to call any of its operations (get,set,add,etc.). There is nothing preventing multiple threads from modifying
Bs or their members because they can obtain a reference to them at different times. The only guarantee with
Vector (or classes that have similar synchronization policies) is that no two threads can concurrently modify the vector and thus get into a race condition (which could throw
ConcurrentModificationException and/or lead to undefined behavior);
2) As above, there is nothing preventing multiple threads to access
Cs at the same time because they can obtain a reference to them at different times.
If you need to protect the state of an object, you need to do it as close to the state as possible. Java has no concept of a thread owning an object. So in your case, if you want to prevent many threads from calling
setSameString concurrently, you need to declare the method
synchronized.
I recommend the excellent book by Brian Goetz on concurrency for more on the topic.
OK. I need to make three threads: one to produce odd numbers, one to produce evens, and one to add the odds and evens together. The output would be something like this (1,2,3,3,4,7...). I'm new to threads and still shaky on how they work, but this is what I have so far:
class even extends Thread {
    public void even() {
        Thread ThreadEven = new Thread(this);
        start();
    }

    public void run() {
        try {
            for(int i = 0; i < 10; i += 2) {
                System.out.println(i);
            }
            Thread.sleep(1000);
        } catch(Exception e) {
            System.out.println("Error: Thread Interrupted");
        }
    }
}

class odd extends Thread {
    public void odd() {
        Thread ThreadOdd = new Thread(this);
        start();
    }

    public void run() {
        try {
            for(int i = 1; i < 10; i += 2)
                System.out.println(i);
            Thread.sleep(1000);
        } catch(Exception e) {
            System.out.println("Error: Thread Interrupted");
        }
    }
}

class ThreadEvenOdd {
    public static void main(String args []) {
        even e = new even();
        odd o = new odd();
    }
}
This prints out 0,2,4... and then 1,3,5. How do I interleave them? Is interleaving what I want, and should I synchronize the threads as well? What I don't understand is how to get the values of the odd and even threads into a third thread to add them. Apologies beforehand if I didn't get the formatting of the code right.
As noted, this is kind of an advanced problem and yes you should be reading lots of tutorials before attempting this. Even for this short program I could never have written it if I had not spent quite a bit of time poring over Java Concurrency in Practice. (Hint - buy the book. It's all in there.)
The classes even and odd are producers. The class sum is a consumer. The producers and the consumer share blocking queues to pass data. The producers use a negative number as a poison pill to indicate that they are finished and no more data will be forthcoming. When the consumer detects the poison pill it decrements the CountDownLatch. The main thread uses this as a signal that the work is complete.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

public class OddEven {
    public static void main(String [] args) throws InterruptedException {
        BlockingQueue<Integer> evens = new ArrayBlockingQueue<Integer>(1);
        BlockingQueue<Integer> odds = new ArrayBlockingQueue<Integer>(1);
        even e = new even(evens);
        odd o = new odd(odds);
        sum s = new sum(evens, odds);
        e.start();
        o.start();
        s.start();
        s.waitUntilDone();
    }
}

class sum extends Thread {
    private final BlockingQueue<Integer> in1;
    private final BlockingQueue<Integer> in2;
    private final CountDownLatch done = new CountDownLatch(1);

    public sum(BlockingQueue<Integer> in1, BlockingQueue<Integer> in2) {
        this.in1 = in1;
        this.in2 = in2;
    }

    public void waitUntilDone() throws InterruptedException {
        done.await();
    }

    public void run() {
        try {
            while (true) {
                int a = in1.take();
                int b = in2.take();
                if (a == -1 && b == -1) break;
                int c = a + b;
                System.out.println(a);
                System.out.println(b);
                System.out.println(c);
            }
            done.countDown();
        } catch(Exception e) {
            System.out.println("Error: Thread Interrupted");
        }
    }
}

class even extends Thread {
    private final BlockingQueue<Integer> out;

    public even(BlockingQueue<Integer> out) {
        this.out = out;
    }

    public void run() {
        try {
            for(int i = 0; i < 10; i += 2)
                out.put(i);
            out.put(-1);
        } catch(Exception e) {
            System.out.println("Error: Thread Interrupted");
        }
    }
}

class odd extends Thread {
    private final BlockingQueue<Integer> out;

    public odd(BlockingQueue<Integer> out) {
        this.out = out;
    }

    public void run() {
        try {
            for(int i = 1; i < 10; i += 2)
                out.put(i);
            out.put(-1);
        } catch(Exception e) {
            System.out.println("Error: Thread Interrupted");
        }
    }
}
Typical output is:
0 1 1 2 3 5 4 5 9 6 7 13 8 9 17 Sums are complete
See if you guys could solve this. It is driving me insane.
I have 2 instances of a Class which has private instance
File variables (NOT static, NOT volatile)
private File tmpF;
Each instance was then executed in a different thread in the same pool.
Instances 1 and 2 both create a temp file and assign it to their
File variable (NOT static). I called
tmpF = File.createTempFile("myTempFile" + unique_Id)
right before temp file creation, I debugged using IntelliJ IDEA and verified that each thread has different
unique_Id.
Here is what is driving me insane. When the latter thread created a temp file and assigned it to its own
tmpF variable, the earlier thread's
tmpF value changed to the latter thread's tmpF value. How is this possible when
tmpF is NOT static ???
When I tried changing the variable into a local method variable, the problem disappeared... so it definitely has something to do with the fact that it is a class field. Interestingly, adding synchronized doesn't help either.
The problem sounds like you are sharing mutable data between threads, which ought to be avoided in concurrent environments, as per Brian Goetz's book, Java Concurrency in Practice. You have a few different options, depending on your restrictions.
One option is to make the field final and assign it exactly once (private final File tmpF;), ensuring that it is instantiated exactly once. The file could be injected from a factory class.
Hope that helps.
Should FIFO queue be synchronized if there is only one reader and one writer?
What do you mean by "synchronized"? If your reader & writer are in separate threads, you want the FIFO to handle the concurrency "correctly", including such details as:
In the Java world there's a good book on this, Java Concurrency In Practice. There are multiple ways to implement a FIFO that handles concurrency correctly. The simplest implementations are blocking, more complex ones use non-blocking algorithms based on compare-and-swap instructions found on most processors these days.
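In Java, for example, a bounded BlockingQueue already handles that coordination for you, including blocking the writer when the queue is full and the reader when it is empty. A minimal single-producer/single-consumer sketch (class name and sizes are hypothetical):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SpscDemo {
    public static void main(String[] args) {
        final BlockingQueue<Integer> fifo = new ArrayBlockingQueue<Integer>(100);

        Thread writer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 0; i < 1000; i++) {
                        fifo.put(i); // blocks if the queue is full
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        Thread reader = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 0; i < 1000; i++) {
                        System.out.println(fifo.take()); // blocks if the queue is empty
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        writer.start();
        reader.start();
    }
}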
Suppose I have one ClassA whose responsibility is to provide a thread-safe API for external usage, and some ClassA API has to invoke a method from another ClassB to fulfill some logic (suppose ClassB's method is stateless and thread-safe). However, ClassB and ClassA cannot be merged into a single class due to business logic. For example, see the following code snippet. Is there a better way to accomplish this? Of course, I could use finer-granularity concurrency control for ClassA, such as synchronized blocks or concurrent lock-free data structures.
Thanks!
public class ClassA{
    public synchronized void method1(ClassB cb){
        //do internal stuff1
        cb.printAlog();
        //do internal stuff2
    }

    public synchronized void method2(){
        //do internal stuff3
    }
    ......
}

public class ClassB{
    public void printALog(){
        //....
    }
    ......
}
As far as I understand the question, you are wondering about guarding non-thread-safe code with a thread-safe class. From this point of view the code you provided is completely legal, as long as you don't expose classB in some way that makes it possible to break the contract of classA.
This design is also described very well in Java Concurrency in Practice.
Suppose you have a work queue and there can be thousands of work items. And assume updates for different work items keep coming into the system. Now obviously if we get multiple updates for the same work item, that needs to be locked.
In this situation we can easily run into a situation, where the system could have received 2000 (or some high number) updates at once and hence JVM needs to hold 2000 locks on different objects.
Will it degrade JVM performance a lot? Is there a maximum number of locks that the JVM can hold at once without degrading performance?
I understand you can use hashing technique to stop the number of locks from growing.
Each lock merely stores the object holding it. The first part of the question--how many locks can the JVM hold--is like asking how many
12s it can store. The amount is bounded by memory. As others have noted, performance is impacted most by lock contention.
Use the classes from
java.util.concurrent to build and store your locks and work queue as they were written for safety and performance. I highly recommend the book Java Concurrency in Practice.
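As a concrete illustration of the "one lock per work item" idea mentioned in the question (names are hypothetical; a fixed-size array of locks indexed by hash, i.e. lock striping, would bound the number of lock objects instead of letting it grow with the number of items):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class WorkItemLocks {
    // One lock object per work-item id, created lazily and shared by all updaters of that item.
    private final ConcurrentMap<String, Object> locks = new ConcurrentHashMap<String, Object>();

    public void applyUpdate(String workItemId, Runnable update) {
        locks.putIfAbsent(workItemId, new Object());
        Object lock = locks.get(workItemId);
        synchronized (lock) {
            update.run(); // only one update per work item runs at a time
        }
    }
}

Updates for different work items never contend with each other, so holding "2000 locks" is really just holding 2000 small objects; only contended locks cost anything meaningful.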
We have been trying to benchmark our application performance in multiple ways for some time now. I always believed that object creation in Java using Class.newInstance() was not slow (at least after Java 1.4). But we did a test anyway, comparing the newInstance() method with maintaining an object pool of 1000 objects. We did about 200K iterations of loading data from the DB using JDBC and populating these objects. I was amazed (even shocked) to see that the newInstance() code was almost 10 times slower than the object pool code.
These objects represent tables with about 50 fields, all of String type.
Can someone share their thoughts on this issue? I am now more confused about whether object pooling of at least some DAO instances is a better option. The pool size, as I see it right now, should be large enough to meet the size of average requests. There is a flip side in that my memory footprint will go up, but I am beginning to wonder if this kind of idea makes sense, at least for some of the DAO entities representing tables of about 50 or more columns.
Please share your ideas and let me know if this has been tried by someone, or whether I am missing some point here.
To quote from Java Concurrency in Practice:
Just say no to object pools
There is likely a simpler option to whatever problems you are having. For instance, why are you instantiating so many instances of your DAOs in the first place? Is a new DAO instance being created every time you need to retrieve data from the database?
Why not just re-use the same instance of the
WhateverDao each time you need access to the
Whatever data? Just make sure that the DAO is thread-safe.
When writing a thread-safe class, we use the synchronized keyword in two places in code:
1. Method-level synchronization
2. Synchronized blocks
As far as I can think of, interfaces like Lock (java.util.concurrent.locks.Lock) can be used in place of synchronized only at block level but not at method level. Or is there a way? I don't have any real need for it; I'm just curious, since I have heard discussions that Lock can be a replacement for synchronized.
public class SampleClass {
    public synchronized void method(){
        .....
    }
}
No, there's no way to do this since
java.util.concurrent—and therefore
Lock or any other implementation as well—is just a library, whereas
synchronized is part of the Java Language Specification.
Regarding "
Lock can be a replacement for synchronized",.
While reading java concurrency in practice (btw, an excellent book), I ended up with the following related questions. The first one is about documenting thread safety, the second is about proving that a class that depends on an interface is thread safe.
Question 1. Does it make sense to use @ThreadSafe on interfaces? Is it a way of communicating that all the implementations must be thread safe? I am still trying to make up my mind here; I'm not sure if it is a good practice or not.
Consider the following example.
public interface Point {
    int getX();
}

class ImmutablePoint implements Point {
    private final int x;

    public ImmutablePoint(int x) {
        this.x = x;
    }

    @Override
    public int getX() {
        return x;
    }
}

class MutablePoint implements Point {
    public int x;

    public MutablePoint(int x) {
        this.x = x;
    }

    @Override
    public int getX() {
        return x;
    }
}
Question 2. Let's assume that you need to decide if the following class is ThreadSafe or not
// How can someone possibly know if this class is thread safe or not?
class IsItThreadSafe {
    private final int aRandomField;
    // the custom Point defined before
    private final Point point;

    public IsItThreadSafe(int aRandomField, Point point) {
        this.aRandomField = aRandomField;
        this.point = point;
    }

    public int add(){
        return aRandomField + point.getX();
    }
}
Since this class only depends on the interface Point, and we don't know anything about the implementation at this point (haha), how can you know if your class is thread safe or not?
I can imagine the following scenario (pseudo-code):
class ConcurrentRunner{
    public void run() {
        MutablePoint mutablePoint = new MutablePoint(42);
        IsItThreadSafe isItThreadSafe = new IsItThreadSafe(13, mutablePoint);
        // here pass isItThreadSafe and mutablePoint to multiple threads
        // each thread can modify mutablePoint and then run add()
        // different threads may get different results, the class is not behaving
        // the same, so it should not be treated as thread safe.
    }
}
What do you think guys? I am sure that I am missing something fundamental here, but I don't know what!
I have seen this in a recognised sample book, so it's hard to question, but there is something I don't understand.
A class called DataflightsService contains a private static variable of type FlightFileAccess that appears to be instantiated every time we create a new DataflightsService object, since FlightFileAccess's initialization is in the constructor, i.e.:
public class DataflightsService{

    private static FlightFileAccess fileAccess = null;

    public DataflightsService(String path){
        fileAccess = new flightFileAccess(path);
    }

    public boolean removeflight(String code){
        //We use this static instance that wraps functionality to remove a flight
        fileAccess.remove(code);
    }
}
For me that means that every time we create an instance of
DataflightsService, in the constructor we are using a different object each time for the static
FlightFileAccess variable.
In the original
FlightFileAccess class we have the remove method, which synchronizes on a
RandomAccessFile:
class FlightFileAccess {

    private RandomAccessFile database = null;

    private boolean remove(String code) {
        // Other code goes here and there
        synchronized (database) {
            // Perform deletion code
        }
    }
}
So because we are using a different reference of
FlightFileAccess we are also using a different reference of
RandomAccessFile?
That means that having
FlightFileAccess as a static field in the service does not serve here to
synchronize on the
RandomAccessFile, because it is a new one every time, so each
DataflightsService instance will do its own thing on the random access file, ignoring the synchronization.
As opposed to instantiating
FlightFileAccess in a static initializer. Am I right?
I would appreciate as many explanations as possible of the best way to be able to instantiate
DataflightsService as many times as we want (let's say, imagining each client has its own instance of
DataflightsService) and, after that, to be able to synchronize on a file for removals, for example, so that there is no mess from several clients accessing the file. Sorry, I need a
DataflightsService per client because there are no cookies.
Your example won't compile because the name of the constructor doesn't match the class. But if you mean to name the constructor
public DataflightsService(), then part of the issue is that you are overwriting the static variable each time a new object is created.
It sounds like you want this static variable to be initialized only once. Normally you would just assign the variable directly with
private static final FlightFileAccess fileAccess = new FlightFileAccess(); or if you wanted to add more logic as if you had a constructor, you could use a static initializer block as follows:
public class Dataflights {

    private static final FlightFileAccess fileAccess;

    static {
        // Static initializer block gets run once when the class is first referenced.
        // Not usually used unless you want to add more logic besides just initializing variables.
        fileAccess = new FlightFileAccess();
    }

    private final String path;
    public final int id;

    public Dataflights(String path) {
        this.path = path;
        this.id = fileAccess.generateId();
    }

    static class FlightFileAccess {
        private volatile int nextId = 0;

        synchronized public int generateId() {
            return nextId++;
        }
    }

    public static void main(String[] args) {
        Dataflights d = new Dataflights("my/path");
        System.out.println("Id is: " + d.id);
    }
}
There are many ways to handle contention. I recommend Java Concurrency in Practice if you aren't familiar with Java concurrency.
You are on the right track in your FlightFileAccess class. I can't see the details, but you might also want to use the
synchronized keyword in the signature of the
remove() method to protect the entire function first. Then, once you have things working, use more tightly targeted
synchronized {...} blocks to reduce the amount of code that has to be single-threaded.
In one thread I have
write a = 0
write a = 1
write volatile flag = 1
In 2nd thread I have
read volatile flag // This always happens after I write volatile flag in thread 1
read a
Can a reordering happen so I see
read a returning 0 in the 2nd thread?
If not, could someone, please, explain in detail why?
I'm asking because I'm puzzled by this definition from the JLS:
Among all the inter-thread actions performed by each thread t, the program order of t is a total order that reflects the order in which these actions would be performed according to the intra-thread semantics of t.
It looks as if allows for reordering in this situation?
Can a reordering happen so I see read a returning 0 in the 2nd thread?
No, not if your assertion is correct, "This always happens after I write volatile flag in thread 1"
The last update that thread 1 made to the variable
a before it updated the volatile
flag will be visible to thread 2 after thread 2 has read the volatile flag.
See section 3.1.4 of Java Concurrency in Practice by Brian Goetz for a more detailed explanation:
But note! It can be considered to be bad practice to depend on reads and writes of a volatile variable to synchronize other variables. The problem is, the relationship between the volatile variable and the other variables may not be obvious to other programmers who work on the same code.
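For illustration, here is a minimal sketch of the scenario from the question (the class name is hypothetical). The write to a happens-before the volatile write to flag, and the volatile write happens-before the volatile read, so a reader that observes flag == 1 must also observe a == 1:

public class VolatileVisibility {
    static int a = 0;
    static volatile int flag = 0;

    static void writer() {   // thread 1
        a = 0;
        a = 1;
        flag = 1;            // volatile write
    }

    static void reader() {   // thread 2
        if (flag == 1) {     // volatile read
            System.out.println(a); // guaranteed to print 1
        }
    }
}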
I've been watching a lot of videos on data structures, and these terms are always being mentioned:
synchronized/not synchronized and
thread-safe/not thread-safe.
Can someone explain to me in simple words what
synchronized and
thread-safe mean in Java? What is
sync and what is
thread?
A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
So thread safety is a desired behavior of the program in case it is accessed by multiple threads. Using the
synchronized block is one way of achieving that behavior. You can also check the following:
What does 'synchronized' mean?
What does threadsafe mean?
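To make that concrete, here is a small illustration of my own (not taken from the linked posts): a plain counter is not thread-safe because ++ is a read-modify-write, while the synchronized version is:

// Not thread-safe: two threads can read the same value and both write back value + 1,
// silently losing an increment.
class UnsafeCounter {
    private int count = 0;
    public void increment() { count++; }
    public int get() { return count; }
}

// Thread-safe: the intrinsic lock makes the read-modify-write atomic
// and publishes the new value to other threads.
class SafeCounter {
    private int count = 0;
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }
}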
An attempt to signal between 2 threads with minimum wasted signals. Since it has been some time since I last attempted this, please suggest errors/improvements, if any. The intentions are: 1. no deadlock, 2. no updates being missed from being read, 3. not raising missed signals as far as possible.
import java.util.Random;

class shareful {

    class share {
        int sharedata;
        volatile boolean isbitset = false;

        public int getstats() { return sharedata; }
        public void setstats(int data) { sharedata = data; }
        public boolean getbitset() { return isbitset; }
        public void setbitset(boolean flag) { isbitset = flag; }
    }

    share so = new share();

    void runStatistics() {
        int check = 0;

        Thread t1 = new Thread(new Runnable() {
            public void run() {
                Random r = new Random();
                boolean isflagged = false;
                while (true) {
                    synchronized (so) {
                        try {
                            while (so.getbitset() == true) {
                                so.wait();
                            }
                        } catch (InterruptedException e) {}
                        so.setstats(r.nextInt(200));
                        if (so.getbitset() == false)
                            isflagged = true;
                        so.setbitset(true);
                        if (isflagged)
                            so.notify();
                    }
                }
            }
        });

        Thread t2 = new Thread(new Runnable() {
            public void run() {
                boolean isflagged = false;
                while (true) {
                    synchronized (so) {
                        try {
                            while (so.getbitset() == false) {
                                so.wait();
                            }
                        } catch (InterruptedException e) {}
                        int curr = so.getstats();
                        if (so.getbitset() == true)
                            isflagged = true;
                        so.setbitset(false);
                        if (isflagged)
                            so.notify();
                    }
                }
            }
        });

        t1.start();
        t2.start();
    }

    public static void main(String[] args) {
        shareful s = new shareful();
        s.runStatistics();
    }
}
Suggestions in order of effectiveness (or by inverse sanity, depending on your perspective):
Additional suggestion: read Brian Goetz' "Java Concurrency in Practice" to discover that you should never attempt to write code at this level in a typical application.
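For comparison, here is a sketch (my own, not from the original answer) of the same one-value hand-off built on java.util.concurrent instead of hand-rolled wait/notify; a SynchronousQueue blocks the producer until the consumer takes each value, so no update is skipped and no extra signals are raised:

import java.util.Random;
import java.util.concurrent.SynchronousQueue;

// Sketch only: the library handles all the waiting and signaling.
class SharefulWithQueue {
    private final SynchronousQueue<Integer> queue = new SynchronousQueue<>();

    void runStatistics() {
        Thread producer = new Thread(() -> {
            Random r = new Random();
            try {
                while (true) {
                    queue.put(r.nextInt(200));   // blocks until the consumer takes it
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int curr = queue.take();     // blocks until a value is available
                    System.out.println(curr);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }

    public static void main(String[] args) {
        new SharefulWithQueue().runStatistics();
    }
}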
Due to many misunderstandings, I've reformulated this question from ground up. The intention of the question is unchanged. Many comments still refer to the old question text.
The documentation about
volatile states that it ensures that other threads see memory updates in a consistent fashion. However,
volatile is used rarely.
As far as I know, the purpose of
synchronized blocks is to cause threads not to execute these critical sections simultaneously. Does
synchronized also cause consistent memory updates to other threads, like
volatile does?
Java doesn't really have a concept of "volatile memory". Instead, the
volatile keyword changes the guarantees the JVM makes about when and how a field will be written and read. Wikipedia has a decent breakdown of what Java (and other languages) mean by
volatile.
Very roughly speaking, a
volatile field is equivalent to a field that is always read-from and written-to like so:
// read
T local;
synchronized {
    local = field;
}
// ... do something with local

// write
synchronized {
    field = local;
}
In other words, reads and writes of
volatile fields are atomic, and always visible to other threads. For simple concurrency operations this is sufficient, but you may well still need explicit synchronization to ensure compound operations are handled correctly. The advantage of
volatile is that it's smaller (affecting exactly one field) and so can be handled more efficiently than a synchronized code block.
You'll notice from my pseudo-translation that any modifications or mutations to the field are not
synchronized, meaning there is no memory barrier or happens-before relationship being imposed on anything you might do with the retrieved object. The only thing
volatile provides is thread-safe field reads and writes. This is a feature because it's much more efficient than a
synchronized block when all you need is to update a field.
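As a concrete illustration of why compound operations still need a lock (my own example, not from the original answer): even on a volatile field, check-then-act and read-modify-write sequences are not atomic:

class VolatileIsNotEnough {
    private volatile int hits = 0;

    // BROKEN: hits++ is a separate read, add, and write; two threads can interleave
    // between those steps and lose updates, even though the field is volatile.
    public void recordHitBroken() {
        hits++;
    }

    // OK: the synchronized method makes the whole read-modify-write atomic.
    public synchronized void recordHit() {
        hits++;
    }

    public int getHits() {
        return hits;   // a plain volatile read is fine on its own
    }
}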
I would strongly encourage you to read Java Concurrency in Practice if you haven't before; it provides an excellent overview of everything you need to know about concurrency in Java, including the difference between
volatile and
synchronized.
Obviously, this doesn't happen, or else I would have encountered such a problem during my career. I never did.
Note that this is a fallacious line of reasoning. It is perfectly possible for you to do something wrong yet never be affected by it or simply not notice its effects. For instance over-synchronized code (like
Vector) will work correctly but run much slower than alternative solutions.
When I try to execute the following multithreading code multiple times, the output is not the same as the previous run. Is that because of JVM behaviour or maybe some other reason? Please help me, someone.
program:

package example.thread.com;

class MyThread1 implements Runnable {
    Thread t;

    MyThread1(String s) {
        t = new Thread(this, s);
        t.start();
    }

    public void run() {
        for (int i = 0; i < 5; i++) {
            System.out.println("Thread Name :" + Thread.currentThread().getName());
            try {
                Thread.sleep(2000);
            } catch (Exception e) {
            }
        }
    }
}

public class RunnableThread1 {
    public static void main(String args[]) {
        System.out.println("Thread Name :" + Thread.currentThread().getName());
        MyThread1 m1 = new MyThread1("My Thread 1");
        MyThread1 m2 = new MyThread1("My Thread 2");
    }
}
output: if i run 1st time
Thread Name :main
Thread Name :My Thread 1
Thread Name :My Thread 2
Thread Name :My Thread 1
Thread Name :My Thread 2
Thread Name :My Thread 2
Thread Name :My Thread 1
Thread Name :My Thread 1
Thread Name :My Thread 2
Thread Name :My Thread 1
Thread Name :My Thread 2
output: if i run 2nd time
Thread Name :main
Thread Name :My Thread 2
Thread Name :My Thread 1
Thread Name :My Thread 2
Thread Name :My Thread 1
Thread Name :My Thread 2
Thread Name :My Thread 1
Thread Name :My Thread 1
Thread Name :My Thread 2
Thread Name :My Thread 1
Thread Name :My Thread 2
output: if i run 3rd time
Thread Name :main
Thread Name :My Thread 1
Thread Name :My Thread 2
Thread Name :My Thread 2
Thread Name :My Thread 1
Thread Name :My Thread 1
Thread Name :My Thread 2
Thread Name :My Thread 2
Thread Name :My Thread 1
Thread Name :My Thread 2
Thread Name :My Thread 1
The ordering changes like this on every run; please suggest why.
The effect you see is only partly the result of JVM behavior. The JVM merely creates the threads and starts them. The operating system is responsible for deciding which thread runs on which processor when. Your threads will contend for use of a processor not just with each other, but with all the work your computer is doing.
When a thread sleeps, it stops being a runnable thread that can use a processor. When it gets to the end of its sleep time it goes back to being runnable, and contending for use of a processor. At some time after that, the operating system will pick it as the thread to run on a processor, and it will go on computing. It keeps the processor until it terminates, sleeps, has to wait for something else, or the operating system decides it is another thread's turn.
There is no reason to expect the ordering between the two threads to be the same from run to run. Writing multi-threaded programs that give consistent results takes some effort.
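For instance (an illustrative sketch, not the original poster's code), forcing one thread to finish before the next starts, via Thread.join(), is one simple way to get the same ordering on every run, at the cost of losing the concurrency:

public class OrderedThreads {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 5; i++) {
                System.out.println("Thread Name :" + Thread.currentThread().getName());
            }
        };

        Thread t1 = new Thread(task, "My Thread 1");
        Thread t2 = new Thread(task, "My Thread 2");

        t1.start();
        t1.join();   // wait for t1 to finish before starting t2
        t2.start();
        t2.join();
    }
}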
I suggest starting with some basic tutorials on multi-thread programming. When you are ready to learn more deeply about the subject, I recommend Java Concurrency in Practice
| http://www.dev-books.com/book/book?isbn=0321349601&name=Java-concurrency-in-practice | CC-MAIN-2019-09 | refinedweb | 61,560 | 55.95 |
There..
The only way to learn is to mess around with the code. If you do, KEEP IN MIND, it is a very good idea to always get past the document.Close(); before you terminate a Debug session.
The Help menu contains links to two important sources of information:
Try them. You will see why they are very important, especially if you are determined to create your own PDFs.
addCell to AddCell
RIGHT to ALIGN_RIGHT
Message property so that I could display the status [of a call to create one of the example PDFs] in the UI's status bar as shown in the pictures above
DoGetImageFile(...) and DoLocateImageFile(...) to help locate and load sample image files used in some of the tutorial examples. (These image files are in the Images folder.) I tend to prefix names of methods I created with a "Do" for two reasons -- first to distinguish them from methods that I did not create and second because they tend to show up together in the Intellisense pop-up
using System.Drawing; and recommend you do the same, as there are some methods and properties that appear to have slightly different meanings in the iTextSharp namespace
#if ... #else ... #endif
VVX.About is a simple class that provides a "cheap", zero maintenance, "Help | About" message box. If you don't know how to access information in an assembly, it can show you one way of extracting some information from it. VVX.MsgBox is a class to simplify access to MessageBoxes of different kinds. It helped me learn how to use message boxes more efficiently. For example, the MsgBox.Confirm(...) method allows me to do something like this:
if (VVX.File.Exists(filename)) {
    // ... do something
}

Charities: Year Up or to any similar NGO (non-governmental organization) in the world that is selflessly doing good work and helping people in need. Give a little, get a lot!
| http://www.codeproject.com/KB/graphics/iTextSharpTutorial.aspx | crawl-002 | refinedweb | 319 | 63.7 |
Visual GraphQL Programming
Tomek Poniatowicz
・1 min read
As GraphQL seems to enjoy the interest of the DEV.to community, I want to share with you a video showing how easy it is to create a GraphQL Schema with GraphQL Editor.
You can play with the editor here or check out Github repo:
graphql-editor / graphql-editor: GraphQL Visual Node Editor
GraphQL Editor
GraphQLEditor makes it easier to understand GraphQL schemas. Create a schema by joining visual blocks. GraphQLEditor will transform them into code.
With GraphQLEditor you can create visual diagrams without writing any code.
Live Demo
Here is a live demo example of GraphQLEditor.
Docs
Full docs are available here. How to use in your project, develop etc.
License
MIT
How It Works
Create GraphQL nodes and connect them to generate a database schema. You can also use builtin text IDE with GraphQL syntax validation
GraphiQL in Cloud
live demo also provides GraphiQL in cloud and faker based GraphQL mock backend
Develop or use standalone
npm i graphql-editor
import * as React from 'react'
import { render } from 'react-dom'
import { Editor } from '../src/index'

class App extends React.Component<
  {},
  {
    editorVisible: boolean;
  }…
Hope you're gonna enjoy it! Cheers!
I like the idea.
But I think giving attributes the same visual representation as the types they belong to is strange.
Hi,
We will try to differentiate them somehow later. Thanks! | https://dev.to/tomekponiat/visual-graphql-programming-38jn | CC-MAIN-2019-35 | refinedweb | 224 | 66.13 |
C | Operators | Question 2
#include <stdio.h>

int main()
{
    int i = 1, 2, 3;
    printf("%d", i);
    return 0;
}
(A) 1
(B) 3
(C) Garbage value
(D) Compile time error
Answer: (D)
Explanation: Comma acts as a separator here. The compiler creates an integer variable and initializes it with 1. The compiler fails to create integer variable 2 because 2 is not a valid identifier.
Quiz of this Question
| https://www.geeksforgeeks.org/c-operators-question-2/?ref=rp | CC-MAIN-2021-10 | refinedweb | 107 | 58.38 |
Avoid common certification failures
Review this list to help avoid issues that frequently prevent apps from getting certified, or that might be identified during a spot check after the app is published.
- Make sure that your app doesn't crash without network connectivity. Even if a connection is required to actually use your app, it needs to perform appropriately when no connection is present.
Make sure that your app's description clearly represents what your app does. For help, see our guidance on writing a great app description.
Be sure to give your app an appropriate age rating. Most apps should have a rating of 12+, unless they are specifically intended for a younger audience. If you're having trouble deciding between two age ratings for your app, choose the higher (stricter) one. Also remember that certain markets may require you to submit an age rating certificate from a ratings board.
If your app uses the commerce APIs from the Windows.ApplicationModel.Store namespace, make sure to test the app and verify that it handles typical exceptions. Also, make sure that your app uses the CurrentApp class (not the CurrentAppSimulator class, which is for testing purposes only).
Don't declare your app as accessible unless you have specifically engineered and tested it for accessibility scenarios.
- Make sure that you provide any necessary info required to use your app, such as the user name and password for a test account if your app requires users to log in to a service, or any steps required to access hidden or locked features. | https://msdn.microsoft.com/en-us/library/windows/apps/jj657968.aspx | CC-MAIN-2015-32 | refinedweb | 260 | 53.1 |
At 09:24 PM 11/24/2002, rbb@apache.org wrote:
>>.
Of course, there are none that inline functions in libraries. inline functions
need to exist in headers. We might end up with a scenario such as...
apr_mmap.h [public header]
#include "apr.h"
/* Incomplete type */
typedef struct apr_mmap_t apr_mmap_t;
/* Prototype */
APR_DECLARE_INLINE(footype) apr_mmap_foo_get(apr_mmap_t *mmap);
#if APR_HAS_INLINE && APR_USE_INLINE
# include "apr_mmap_inline.h"
#endif
apr_mmap_inline.h
/* This private header is included for inline declarations.
* use of the embedded structures binds your application
* to this specific build of APR. These structures are subject
* to change without warning. */
struct apr_mmap_t {
footype foo; ...
} apr_mmap_t;
/* Implementation */
APR_DECLARE_INLINE(typefoo) apr_mmap_foo_get(apr_mmap_t *mmap) {
}
mmap_inline.c
#undef APR_USE_INLINE
#include "apr_mmap.h"
/* capture inline functions as exported real functions */
#include "apr_mmap_inline.h"
mmap.c
#define APR_USE_INLINE /* if it's not simply global */
#include "apr_mmap.h"
APR_DECLARE(apr_status_t) apr_mmap_create(...) {
...
}
Obviously, this is skeletal. It provides that all functions are exported,
and some simple functions may be inlined.
>>.
Sorry, wrong phrase. Different target machines. Think binary builds of
packages, where the package doesn't distribute APR, but presumes it
will be installed on the machine. Binary compatibility to the inlines would
never fly, so that package builds without APR_USE_INLINE, for example.
Anyone building the package to include the specific release of APR would
use inline, of course.
>>.
I don't believe what you suggest is portable. Of course my VC is very happy
to parse a struct def with a pointer to an incomplete type anywhere within
the structure, or an undefined array at the *end* of the structure. But since
we can't partially define a single structure, we will end up with either a pointer
and two allocs, or some cruft to perform a single allocation and stash a pointer
or 'presume' that the opaque component follows the transparent structure.
Either way, you still end up with a legitimate sizeof() unless the structure
allows some incomplete component at the end of the declaration. Some
compilers don't.
>>.
Yes, but as I just stated in the last paragraph, I don't believe it's available
to all compilers.
>The idea of wrapping APR in an OO language has never been called out as a
>goal, but even if it is made into a goal, splitting the types into a
>combination of complete and incomplete will not hurt that goal at all.
No, it would not if that facility were portable :-)
Let me close by offering that we *better* not find out you are responding
to list messages while waiting in the delivery room/suite :-) We can pick
up the dialog after you get back. God bless the three of you :-)
Bill | http://mail-archives.apache.org/mod_mbox/apr-dev/200211.mbox/%3C5.1.0.14.2.20021124215316.038c37e8@pop3.rowe-clan.net%3E | CC-MAIN-2017-09 | refinedweb | 441 | 66.84 |
How to use a map (like google maps) with Qt Quick?
I want to display a map with my Qt Quick 2.1 application, but I can't find a good example of how to do that. I don't develop for a Nokia device, but for Android.
So can somebody post a working example of using a map? It could be Google Maps, OpenStreetMap or whatever :)
Or give me a hint where I can find more info on that topic that is not related to a Nokia device?
Edit: WebView doesn't work on mobile devices, so that is not an option.
One of my bookmarks: "examples":
It's QtQuick 1 but it would not be difficult for you to bring it to QQ2.
Ignore Previous, try this ""example"":
EDIT : Broken link
Thanks for your reply. The first example (which I should ignore ;)) is an example I tried and couldn't get to work.
The second example I wasn't able to get to work with QtQuick 2. But unfortunately it uses a WebView, which doesn't work on mobile devices, because of the lack of a mobile library. So it wouldn't help :(
During the last hours I built my own WebView gmaps example and saw the problem with mobile devices and WebViews :(
So I need something that works without the WebView.
Oh Yes, it is "not available": yet, afaik.
Now, I am curious how to show gmaps without WebKit!
It doesn't have to be gmaps, just any map tool would be fine, if it works on mobile platforms like Android.
There is the QTLocation Module () which should also provide a Map Element, but it seems that it isn't currently bundled and there is no stable release of it yet. But you can try to use it.
Thanks, I will give it a try, but I think I am going to have a hard time adding this external lib to the Android deployment process.
If you get it to work, please report back how you did it :)
- chrisadams
The location module should work, I think. But you'll need to either write your own data source / plugin, or find one somewhere on the web, as the mobility ones won't work with it, and I can't remember whether or not the ones included in the Qt5 QtLocation module can be used without a license.
Anyway, a webview should still work:
@
import QtQuick 2.0
import QtWebKit 3.0
WebView {
url: ""
}
@
Cheers,
Chris.
The Qt Location module is planned to be released as part of Qt 5.2, so the deployment process should become easier once Qt 5.2 is released. It includes both the nokia and Open Street Map plugins. The OSM plugin is usable out-of-the-box. To use the nokia plugin you will need to obtain application tokens by registering at "developer.here.com":
So waiting for Qt 5.2 and becoming familiar with the Qt Location module seems to be the way to go :)
@Chris
WebView doesn't work on mobile devices yet. | https://forum.qt.io/topic/31535/how-to-use-a-map-like-google-maps-with-qt-quick | CC-MAIN-2019-04 | refinedweb | 505 | 72.87 |
Play JSON Schema Validator
This is a JSON schema (draft v4) validation library for Scala based on Play's JSON library.
If you experience any issues or have feature requests etc., please don't hesitate to file an issue. Thanks!
Installation
Add an additional resolver to your
build.sbt file:
resolvers += "emueller-bintray" at ""
Then add the dependency:
libraryDependencies += "com.eclipsesource" %% "play-json-schema-validator" % "0.9.4"
For using the current 0.9.5 milestone with v7 support, use:
libraryDependencies += "com.eclipsesource" %% "play-json-schema-validator" % "0.9.5-M1"
Please also see the respective release notes.
Usage
Schemas can be parsed by passing the schema string to
Json.fromJson, for instance like this:
val schema = Json.fromJson[SchemaType](Json.parse(
  """{
    |"properties": {
    |  "id": { "type": "integer" },
    |  "title": { "type": "string" },
    |  "body": { "type": "string" }
    |} }""".stripMargin)).get
With a schema at hand, we can now validate
JsValues via the
SchemaValidator (note that since 0.8.0 the validator is a class and not an object anymore):
val validator = new SchemaValidator()
validator.validate(schema, json)
validate returns a
JsResult[A].
JsResult can either be a
JsSuccess or a
JsError.
validate is also provided with overloaded alternatives where Play's
Reads or
Writes instances can be passed additionally. This is useful for mapping
JsValues onto case classes and vice versa:
validate[A](schemaUrl: URL, input: => JsValue, reads: Reads[A]): JsResult[A]
validate[A](schemaUrl: URL, input: A, writes: Writes[A]): JsResult[JsValue]
validate[A: Format](schemaUrl: URL, input: A): JsResult[A]
Error Reporting
In case the
validate method returns an failure, errors can be converted to JSON by calling the
toJson method. Below is given an example taken from the example app:
import com.eclipsesource.schema._ // brings toJson into scope

val result = validator.validate(schema, json, Post.reads)
result.fold(
  invalid = { errors => BadRequest(errors.toJson) },
  valid = { post => ... }
)
Errors feature a
schemaPath, an
instancePath, a
value and a
msgs property. While
schemaPath and
instancePath should be self explanatory,
value holds the validated value and
msgs holds all errors related to the validated value. The value of the
msgs property is always an array. Below is an example, again taken from the example app.
{
  "schemaPath": "#/properties/title",
  "keyword": "minLength",
  "instancePath": "/title",
  "value": "a",
  "msgs": [
    "a violates min length of 3",
    "a does not match pattern ^[A-Z].*"
  ],
  "errors": []
}
The value of
schemaPath will be updated when following any refs, hence when validating
{
  "properties": {
    "foo": { "type": "integer" },
    "bar": { "$ref": "#/properties/foo" }
  }
}
the generated error report's
schemaPath property will point to
#/properties/foo.
id
In case the schema to validate against makes use of the
id property to alter resolution scope (or if the schema has been loaded via an
URL), the error report also contains a
resolutionScope property.
anyOf, oneOf, allOf
In case of
allOf,
anyOf and
oneOf, the
errors array property holds the actual sub errors. For instance, if we have a schema like the following:
{
  "anyOf": [
    { "type": "integer" },
    { "minimum": 2 }
  ]
}
and we validate the value
1.5, the
toJson method returns this error:
[ {
  "schemaPath": "#",
  "errors": {
    "/anyOf/0": [ {
      "schemaPath": "#/anyOf/0",
      "errors": {},
      "msgs": [ "Wrong type. Expected integer, was number" ],
      "value": 1.5,
      "instancePath": "/"
    } ],
    "/anyOf/1": [ {
      "schemaPath": "#/anyOf/1",
      "errors": {},
      "msgs": [ "minimum violated: 1.5 is less than 2" ],
      "value": 1.5,
      "instancePath": "/"
    } ]
  },
  "msgs": [ "Instance does not match any of the schemas" ],
  "value": 1.5,
  "instancePath": "/"
} ]
Customizable error reporting
The validator allows you to alter error messages via scala-i18n, e.g. for localizing errors reports. You can alter messages by placing a
messages_XX.txt into your resources folder (by default
conf). The keys used for replacing messages can be found here. In case you use the validator within a Play application, you'll need to convert Play's
Lang and make it implicitly available for the
SchemaValidator, e.g. via:
implicit def fromPlayLang(lang: Lang): com.osinka.i18n.Lang = com.osinka.i18n.Lang(lang.locale)
Example
An online demo of the library can be found here.
See the respective github repo for the source code. | https://index.scala-lang.org/eclipsesource/play-json-schema-validator/play-json-schema-validator/0.9.4?target=_2.12 | CC-MAIN-2020-40 | refinedweb | 668 | 50.33 |
Notes on GridFS and MongoDB
Today we made several tests with MongoDB's GridFS. And, as simple as it is, we had a hard time trying to find detailed documentation and C++ samples for this monster (in a good way).
After brushing some bits, grokking headers and a very thorough analysis of the C++ driver source code, we got a simple code snippet that stores an arbitrary file on GridFS and changes its metadata. This is the sample that worked – if you have any suggestions, please comment!
Enjoy!
...
#include "mongo/client/dbclient.h"
#include "mongo/client/gridfs.h"

using namespace mongo;

...
// The file's full path
m_NomeArquivo = ...;
QFileInfo info(m_NomeArquivo);

...
DBClientConnection c;
// local connection to MongoDB.
c.connect("localhost");

// Here we "map" GridFS with the prefix we desire
GridFS gfs = GridFS(c, "gridfs", "myfiles");
BSONObj ret = gfs.storeFile(m_NomeArquivo.toStdString(), info.fileName().toStdString());
BSONObjBuilder b;
b.appendElements(ret);
// Here we can add any additional information we want
b.append("fileInserted", "0001");
BSONObj o = b.obj();

// And now we update the document on MongoDB.
c.update("gridfs.myfiles.files", BSON("filename" << ret.getField("filename")), o, false, false);
Syndicated 2010-10-16 00:44:24 (Updated 2010-10-25 22:46:40) from #include "ebf.h" | http://www.advogato.org/person/ebf/diary.html?start=36 | CC-MAIN-2015-35 | refinedweb | 233 | 52.66 |
Find the index of the first unique character in a given string using C++
Given a string ‘s’, the task is to find the first unique character which is not repeating in the given string of characters and return its index as output. If there are no such characters present in the given string, we will return ‘-1’ as output. For example,
Input-1 −
s = “tutorialspoint”
Output −
1
Explanation − In the given string “tutorialspoint”, the first unique character which is not repeating is ‘u’ which is having the index ‘1’. Thus we will return ‘1’ as output.
Input-2 −
s = “aaasttarrs”
Output −
-1
Explanation − In the given string “aaasttarrs’, there are no unique characters. So, we will return the output as ‘-1’.
The approach used to solve this problem
To find the index of the first unique character present in the given string, we can use the hashmap. The idea is to go through all the characters of the string and create a hashmap with Key as the character and Value as its occurrences.
While traversing through each character of the string, we will store the occurrences of each character if it appears. It will take O(n) linear time to store the occurrences of each character. Then we will go through the hashmap and will check if there is any character whose frequency is less than 2 or equal to ‘1’. And we will return the index of that particular character.
Take a string ‘s’ as an Input.
An Integer function uniqueChar(string str) takes a string as an input and returns the index of the first appearing unique character.
Iterate over the string and create a hashmap of char and its occurrences while going through each of the characters of the string.
If there is a character whose frequency is less than 2 or equal to 1, then return the index of that particular character.
If there are no unique characters present in the string, return ‘-1’ as Output.
Example
#include <bits/stdc++.h>
using namespace std;

int uniqueChar(string str) {
    int ans = -1;
    unordered_map<char, int> mp;
    // Count the occurrences of each character.
    for (int i = 0; str[i] != '\0'; i++) {
        mp[str[i]]++;
    }
    // Return the index of the first character that occurs exactly once.
    for (int i = 0; i < (int)str.size(); i++) {
        if (mp[str[i]] == 1) {
            ans = i;
            break;
        }
    }
    return ans;
}

int main() {
    string s = "tutorialspoint";
    cout << uniqueChar(s) << endl;
    return 0;
}
Output
Running the above code will print the output as,
1
Explanation − The input string ‘tutorialspoint’ contains the unique characters as ‘u’, ’r’, and ‘l’, and the first unique character ‘u’ has the index ‘1’. So, we get ‘1’ as the output.
| https://www.tutorialspoint.com/find-the-index-of-the-first-unique-character-in-a-given-string-using-cplusplus | CC-MAIN-2022-33 | refinedweb | 629 | 66.98 |