Getting the current date and time works differently across operating systems and programming languages. We will look at this simple but common task on the most widely used operating systems and languages.
Linux/FreeBSD/Unix
We can read date information from the GUI panel clock, but here we will use the command line. We issue the date command to get the current date and time. The Linux date command supports many format options; the same examples apply to Unix date and bash date usage.
$ date
Only Hour and Minute
We can provide parameters to get only the time portion from the date command.
$ date +%H:%M
Year Month Day Format
We can get the current year, month and day like below:
$ date +%Y_%m_%d
Year-Month-Day Format
We can use ready-made parameters like +%F, which prints year-month-day:
$ date +%F
Month/Day/Year Format
We can use +%D to get the date in month/day/year format:
$ date +%D
Display Date and Time In Windows Command Line
The Windows operating system provides the date and time commands to print and set the current date and time. After issuing the date or time command, the current value is displayed and we are then asked for a new one. We can skip this step by pressing Enter without entering any value.
> date
> time
Python Datetime Module and Functions
Python is a scripting language; to get current date and time information, the datetime module can be used in a code file or interactively. In this example, we will use the interpreter.
from datetime import datetime

now = datetime.now()
now.strftime("%A, %d. %B %Y %I:%M%p")
We can get time information with %I:%M format string parameters.
now.strftime("%I:%M")
We can get the year, month and day as a string like below:
now.strftime("%Y-%m-%d")
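For reference, the shell format strings shown earlier map directly onto strftime directives; here is a small self-contained script (my own sketch, not from the original article) that prints the same four formats:

```python
from datetime import datetime

now = datetime.now()

# Equivalent of `date +%H:%M` -- hour and minute only
print(now.strftime("%H:%M"))

# Equivalent of `date +%Y_%m_%d`
print(now.strftime("%Y_%m_%d"))

# Equivalent of `date +%F` -- year-month-day
print(now.strftime("%Y-%m-%d"))

# Equivalent of `date +%D` -- month/day/year
print(now.strftime("%m/%d/%y"))
```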
Be Social by running SocialSite on Tomcat
Some time ago, I saw an announcement of the first release of SocialSite. Basically it is a Java implementation for adding social features to your existing website. Check out Arun Gupta's blog on the usage of SocialSite. I thought of trying it out, and it works fine on GlassFish. Out of the box there was no way of getting it to work on another container, such as Tomcat. Not very social, I thought :-)
Anyway, I thought of trying to get it to run on Tomcat. Following is the procedure to get SocialSite to work on Tomcat. If you understand what is going on in each of these steps, and why, it should be trivial to port SocialSite to run on any other container.
These instructions are for Tomcat 5.5 and MySQL. I would recommend you proceed with the source code download of SocialSite and build the distribution from there. You could instead download the distribution from the SocialSite site and apply the changes mentioned below to your downloaded bundle; I haven't tested that, but I don't see a reason why it shouldn't work either.
So go ahead, download the source code for the SocialSite as per instructions given here. Once you have the source code downloaded, follow the steps given below to get SocialSite to work on Tomcat.
SOCIALSITE_SRC_HOME is where you downloaded the source code for SocialSite.
- Configure the JNDI Resource for the database and Mail
- Update persistence related entries of SocialSite so Tomcat loves them
- Build the SocialSite distribution
- Create tables in database of your choice
- Create the Realm for SocialSite
- Enable SSO Valve
- Prepare Tomcat for being SocialSite enabled by making JARs available to apps.
- Enable JSTL in Tomcat
- Enable SSL on tomcat, or disable SSL login in SocialSite
Configure the Global JNDI Resource for the database.
SocialSite uses a JNDI resource by the name of 'jdbc/SocialSite_DB' to access your database. So you need to create a JNDI resource by that name. You can refer to the Tomcat 5.5 documentation page here to get information on how to define a per-application JNDI resource using the <Resource> tag in your context.xml. Or you can read along as I show below how to do it.
Under SOCIALSITE_SRC_HOME/web/META-INF there is a file called context.xml. Make sure it looks like the following and save the file. Of course, make sure that the entries for url, username and password match your MySQL database, so that the connection to the DB succeeds. The other JNDI resource is a javax.mail.Session that SocialSite uses to send out e-mails.
<Context path="/socialsite">
<Resource name="jdbc/SocialSite_DB" auth="Container"
type="javax.sql.DataSource" username="root" password="abc123"
driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/socialsite"
maxActive="100" maxIdle="30" maxWait="10000"/>
<Resource name="mail/SocialSite/Session" auth="Container"
type="javax.mail.Session"
description="Mail Session for SocialSite" scope="Shareable" />
</Context>
Update persistence related entries of SocialSite so Tomcat loves them
The default SocialSite install works on Glassfish. Glassfish is a container which supports the JavaEE 5 specification and hence has a persistence engine built-in. SocialSite uses the JPA features to interface to a database. So to get SocialSite to work with Tomcat we need to enable persistence capabilities in it. Fortunately SocialSite bundles the EclipseLink libraries which include a persistence provider.
However, the SocialSite source includes a persistence.xml which tries to look up the DB via JNDI. This doesn't work outside an EJB container by default, and Tomcat is not an EJB container. So to enable EclipseLink to pick up the DB details, we need to configure it as we would in a pure Java SE environment. For this you will have to modify SOCIALSITE_SRC_HOME/src/java/META-INF/persistence.xml accordingly. The entries you have to modify are the eclipselink.jdbc.* properties shown below; make sure they are correct for your DB install.
<!--
  See legal/LICENSE.txt for the specific language governing
  permissions and limitations under the License. When distributing the
  software, include this License Header Notice in each file and include
  the License file.
-->
<persistence xmlns="" version="1.0">
<persistence-unit name="SocialSite_PU">
<!--<non-jta-data-source>java:comp/env/jdbc/SocialSite_DB</non-jta-data-source>-->
<class>com.sun.socialsite.pojos.Relationship</class>
<class>com.sun.socialsite.pojos.GroupRequest</class>
<properties>
<property name="eclipselink.jdbc.platform" value="oracle.toplink.platform.database.mysql"/>
<property name="eclipselink.jdbc.driver" value="com.mysql.jdbc.Driver"/>
<property name="eclipselink.jdbc.url" value="jdbc:mysql://localhost:3306/socialsite"/>
<property name="eclipselink.jdbc.user" value="root"/>
<property name="eclipselink.jdbc.password" value="abc123"/>
</properties>
</persistence-unit>
<persistence-unit name="SocialSite_PU_Standalone">
<class>com.sun.socialsite.pojos.GroupRequest</class>
<class>com.sun.socialsite.pojos.GroupRelationship</class>
</persistence-unit>
</persistence>
Build the SocialSite distribution
From SOCIALSITE_SRC_HOME execute
ant dist
The above builds a folder called SOCIALSITE_SRC_HOME/dist under which the SocialSite distribution is created. In the dist/SocialSite/webapp folder is the socialsite.war file. Make sure the two changes you made above show up properly in the built war file.
Create tables in database of your choice
Under the SOCIALSITE_SRC_HOME/dist/SocialSite/dbscripts directory there are directories, one for each database supported. Choose MySQL (since I did the setup with MySQL). Log in to MySQL Admin and create a database named 'socialsite'. The DB name can be anything of your choice; make a note of it, as this is the name of the DB we will be using in all the JDBC URLs configured in the different steps in this blog. Now go ahead and execute the createdb.sql script in the context of the 'socialsite' database.
Create the Realm for SocialSite
SocialSite by default authenticates users against a realm named "SocialSite_UserRealm", so you must create this realm in Tomcat. You can refer to the docs for information on how to create a realm, or if you are impatient, here is how you do it. Open the file TOMCAT_HOME/conf/server.xml and add the following entry. There are several entries in this file for JDBCRealm, but they are commented out by default. Uncomment the one for MySQL and make sure your entry matches the one below. Of course, change the parameters to match your database configuration.
<Realm className="org.apache.catalina.realm.JDBCRealm"
driverName="org.gjt.mm.mysql.Driver"
connectionURL="jdbc:mysql://localhost:3306/socialsite"
connectionName="root" connectionPassword="abc123"
userTable="userapi_user" userNameCol="username" userCredCol="passphrase"
userRoleTable="userapi_userrole_view" roleNameCol="rolename"
resourceName="SocialSite_UserRealm" digest="SHA"/>
Enable SSO Valve
Enable the SSO valve in TOMCAT_HOME/conf/server.xml. It is already there and probably commented out.
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
Prepare Tomcat for being SocialSite enabled by making JARs available to apps.
Glassfish is a JavaEE compliant App server and hence has the full webservices stack bundled with it. For Tomcat you need to enable this stack, since SocialSite uses some parts of it. For that purpose,
- Download the JAX-WS RI from the JAX-WS site. Copy all jars (except sjsxp.jar) from the JAX-WS-INSTALL/lib folder to the TOMCAT_HOME/shared/lib folder.
- Copy sjsxp.jar to the TOMCAT_HOME/common/endorsed directory.
- Also download stax.jar from here and copy that jar to the TOMCAT_HOME/common/endorsed directory as well.
- Copy mail.jar and MySQL JDBC Driver (mysql-connector-java-3.1.12-bin.jar) to TOMCAT_HOME/common/lib folder.
Enable JSTL in Tomcat
Tomcat 5.5 doesn't have JSTL support by default. The Jakarta JSTL site seems to suggest using the GlassFish JSTL implementation. So following that, we copy appserv-jstl.jar from GLASSFISH_HOME/lib to the TOMCAT_HOME/common/lib folder.
Enable SSL on tomcat, or disable SSL login in SocialSite
Follow this page in the Tomcat docs if you want to enable SSL on Tomcat on port 8443. Alternatively, disable the SSL login in SocialSite.
SocialSite Admin Site
To enable SocialSite administration: under the SOCIALSITE_SRC_HOME/dist/SocialSite/bin folder there is a setup-glassfish.xml file, which contains a create-admin-user target. Run that target as follows:
ant -f setup-glassfish.xml create-admin-user
to create an admin user for your socialsite installation. You might have to fill in properties in the sjsas.props file to get it to work fine.
Posted by insidemyhead [Sun] ( November 03, 2008 10:36 AM ) Permalink | Comments[8]
Charting Portlet using JSR-286
A while ago I had written an SDN article on how to generate charts dynamically in portlets. Following up on that, I have created a portlet that uses the JSR-286 resource serving feature to dynamically generate charts by reading its data from a JDBC compliant database. The essential feature of this portlet is that the image is generated dynamically and served via a serve resource call and isn't stored in any temporary location. The portlet can be run under any portlet container that supports the JSR-286 specification. It has been tested on the OpenPortal Portlet Container, which supports draft 19 of the JSR-286 specification. You can download the portlet war file from here.
Posted by insidemyhead [Personal] ( January 02, 2008 08:28 PM ) Permalink | Comments[6]
JSR-286 compliant ajax enabled Atom and RSS news reader Portlet
A colleague of mine, Satya a few days ago talked about JSR-286 and resource serving. He was describing what it is, and how it is a part of the JSR-286 spec. Satya incidentally was one of the key folks in implementing the same for the Open Source Portlet Container. He pointed me also to a nice article by Deepak Gothe and himself about the new JSR-286 specific features in the portlet container.
After going through the article I was waiting to get started with the JSR-286 implementation of the portlet container and resource serving in particular. So I have implemented a "RSS and Atom feed reader" portlet using the Open Source Portlet Container. The portlet is AJAX enabled as well. To see a small screencast of it in action click here.
This is how it looks (click to enlarge and see in a new window):
If you want to use this portlet, or try it out, you will need a portlet container which supports the JSR-286 spec, and resource serving in particular. As mentioned earlier, the Open Source Portlet Container is one of the best. You can get it from the web site as mentioned in the link, or get it as part of the JavaEE Tools Bundle (choose "Download with Tools"). If there is interest in this portlet, let me know and I can make it available.
Read where you can get this portlet source code from on my personal blog@blogspot.
Posted by insidemyhead [Sun] ( October 17, 2007 12:34 AM ) Permalink | Comments[0]
Presence - Dial tone of this century ...
How many times you have wanted to make an important business or a personal call, and picked up the phone to call the other party, but the call ended unsuccessfully, since all you managed to reach was the other party's voice mail? Such unsuccessful calls can be a serious drag on productivity and not to mention the frustration it causes, since instead of talking to each other the parties get to talk to the infamous 'voice mail'.
Or maybe the person you thought was in office actually was on the customer site today and was not reachable on his office number you just dialed. Well this brings in the concept of 'presence' which is so prevalent in the IM world. You begin to chat with a party only when his icon in the chat indicates that he is 'present'.
Now wouldn't that be really nice if we can apply this concept to telephone dialing as well. And why just telephones, we should be able to extend the concept of presence far beyond that.
A user, may opt for several ways in which he can be reached, phone, IM, SMS, Email etc. All of these are different modes of reaching the person. Now, imagine that this same person, can specify or update a unique "address" or a "URI" for each of the means to reach him. Then under his buddy icon, in a client application ( a universal communicator client), on right clicking his name, there appear several of these options to interact with him.
Clicking on the URI corresponding to a phone, will place a IP telephony based call, clicking the URI for email, could launch an application capable of sending an email. As long as the person has kept his information updated on the "Presence" server, the other parties are always able to get in touch with the person, based on his preferences of course.
Thus it would be appropriate to call "Presence" as the dial tone of this century. SIP ( Session Initiation Protocol ) seems to be right way to achieve this kind of communications.
From the portal perspective, the concept of "Presence" makes an absolute sense, since portal is a great way of achieving collaboration and bringing people together. Using a SIP server and a proxy, it seems that getting this above mentioned concept of "presence" into a portal has great potential value. Watch this space for more information on this topic.
Posted by insidemyhead [Sun] ( October 08, 2007 10:31 AM ) Permalink | Comments[0]
Flex or Ajax .. well both!
The release of Flash Player 9 included a new External API using a class file called the ExternalInterface that allows for the communication between JavaScript and Flash applications. To create the communication link and pass data between JavaScript and Flex, the developer had to create custom code to handle each call.
The newly introduced Flex-Ajax Bridge has simplified the usage of this API. By simply including the necessary classes, the bridge exposes all of the classes within the Flex application to JavaScript. You can almost create anything in your Flex application using JavaScript instead of ActionScript.
So if you are a JavaScript pro and want to try out Flex this is a good time to do so!
Posted by insidemyhead [Personal] ( August 07, 2007 08:57 AM ) Permalink | Comments[0]
Update: Flex up your portlets
A while ago I had talked about Rich Internet Applications on my blog and talked in detail about Flex. You can read that entry here. I had promised to put a tutorial/entry on how to create a portlet out of your Flex application, but I
didn't get time to post the entry.
Recently, I helped my colleague Murali on getting his Flex app converted to a portlet and he has been prompt enough to post the instructions on his blog here. Thanks Murali.
Posted by insidemyhead [Personal] ( August 03, 2007 12:27 PM ) Permalink | Comments[0]
A Solution: Maths Fun Again.. Again
So the solution to my previous entry and the one prior to that was provided by a few folks as √2, which is actually quite trivial in my opinion. The actual question that has been bothering me is what I posted in the comments to the few replies I got:
That is, what actually is the value of the infinite tower √2^√2^√2^... ? If the answer to both questions (blog entries) is x = √2, then how come the tower is 2 as per my first blog entry and equal to 4 as per the second entry? This is the one I was actually looking for an answer to. I have kind of an idea what it could be; I will put my explanation here, and hopefully it makes some sense.

√2^√2^√2^... is actually the limit of the sequence

√2, √2^√2, √2^(√2^√2), ...
It is trivial to prove that the above sequence is increasing and is upper bounded by 2.
Coming back to the original question (the previous two blog entries marked above), which can be rewritten as x^y = y.

I tried to plot the graph of this function and it comes out as follows:

(plot of x^y = y, i.e. x = y^(1/y))

This shows that for a given value of x there are two values for y. And for x = √2, we know these values are 2 and 4. But we just indicated above that the sequence √2, √2^√2, √2^(√2^√2), ... is upper bounded by 2. So the value 4 can't be possible for √2^√2^√2^...
Hope it makes some sense.
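A quick numeric check of the above (my own sketch, not part of the original post): iterating y → (√2)^y from y = √2 climbs toward 2, while starting just above 4 the same iteration runs away, so 4 cannot be the value of the tower.

```python
import math

x = math.sqrt(2)

# Tower sequence: a1 = sqrt(2), a_{n+1} = sqrt(2) ** a_n
y = x
for _ in range(200):
    y = x ** y
print(y)  # converges to 2 (the attracting fixed point)

# Starting just above 4, the iteration moves away from 4:
z = 4.01
for _ in range(15):
    z = x ** z
print(z)  # grows past 7 -- 4 is a repelling fixed point
```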
Posted by insidemyhead [Personal] ( August 01, 2007 09:22 PM ) Permalink | Comments[1]
Maths Fun Again .. Again
Some of the folks answered this in the comments to my previous entry, giving x = √2. However, if that is true, then what is the value of x in the following equation?

x^x^x^... = 4
Getting interesting now I guess :-)
Posted by insidemyhead [Personal] ( July 27, 2007 09:17 PM ) Permalink | Comments[6]
How about a Semantic Community Portal?
Posted by insidemyhead [Sun] ( July 23, 2007 10:51 PM ) Permalink | Comments[0]
Maths Fun Again
Given that:

x^x^x^... = 2

What is the value of x?
Posted by insidemyhead [Personal] ( July 20, 2007 08:17 PM ) Permalink | Comments[2]
Locker Mania - Answer.
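(A short brute-force simulation of the puzzle described in the next entry; this script is my own sketch, not the original answer post.)

```python
# Member k toggles every k-th locker; locker n ends open iff it is
# toggled an odd number of times, i.e. iff n has an odd number of divisors.
lockers = [False] * 101  # index 0 unused; False = closed

for member in range(1, 101):
    for locker in range(member, 101, member):
        lockers[locker] = not lockers[locker]

open_lockers = [n for n in range(1, 101) if lockers[n]]
print(open_lockers)  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] -- the perfect squares
```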
Posted by insidemyhead [Personal] ( July 20, 2007 12:50 PM ) Permalink | Comments[0]
Locker Mania
There is a locker room in the club with lockers numbered from 1 to 100. The 100 members of the club enter the locker room in the following sequence. When the first member enters, he opens all the lockers. When the second member enters, he closes all even-numbered lockers. When the third member enters, he toggles the state of every locker whose number is a multiple of 3, and so on.
This process continues till all the 100 members have been through the locker room. At the end of this process, which are the lockers that are open?
Posted by insidemyhead [Personal] ( July 17, 2007 07:40 PM ) Permalink | Comments[4]
Secure_3<<
Posted by insidemyhead [Personal] ( July 11, 2007 10:59 PM ) Permalink | Comments[1]
New Version of Eclipse Portal Pack with Action Support
I have uploaded the source code for the new version of Eclipse-Portalpack. I have been playing with the idea of making portlet development easy, and not just for rendering the view mode or the edit mode of the portlet but also for processing actions easily from the portlets. This is my first attempt at making that happen. Check it out, let me know if you find it useful, and suggest any enhancement requests. I will be enhancing this further myself (there are areas I think need it, but do let me know your suggestions).
Okay back to the new version. The new version supports the following new things :
- It is now built on the latest version of JAXB, i.e. version 2.1.
- It has a new feature for adding a new portlet action very easily to your portlet. To implement the action you just need to implement one method, "execute()", in your action class; almost no code change to your portlet class is needed. You just specify what page to go to once the action is processed, and the portlet automatically sends you to the new page once the action executes successfully. If an error occurs in your action's "execute()" method, the portlet redirects you to the error page that was created automatically when the portlet project was initially created.
Enhancements to this :
There are a couple of enhancements that I am still working on. If you think you can contribute to any of these please join in and contribute.
- The GUI for action creation needs some work.
Suppose I want to develop a simple portlet which shows a form to collect some inputs from the user. The form might ask the user for his first name, last name and the email address. When the user submits this form, the data is collected at the server, and persisted to a data store. Once the data is persisted the user is shown a thank you page. Now assume the above needs to be done in a portlet. The scenario described above can be done using a portlet as shown below in the picture:
When developing this portlet the developer needs to write code in his processAction() method to do the following :
- Recognise which action this request represents. In our case there is only one action, so this case is simple.
- Collect the data from the form and populate a bean with that data.
- Put the bean in session data to be accessed in doView() mode.
- Now when the portlet container calls the doView() method, access the bean stored in session in previous step and do the actual action
- Dispatch the request to the destination JSP page.
All the above needs to be done by the developer whenever he writes a particular action. ( This is one way of doing this )
I wanted to make this a little easier to do, so I have added new code to the Eclipse Portal Pack project to make this easier. This is a very early version of it, so please bear with it, as I make further improvements. But feel free to try it out and give suggestions, or better still join on the project and contribute.
Developing the above portlet with the Eclipse Portal Pack Project
A screencast of creating a simple portlet project and a sample portlet is shown here.
Now that we have a sample portlet ready, let's go about adding the action we wanted. First, create the JSP page that will show the form to the user in view mode. By default the project creates a JSP for the view mode called view.jsp in your /WEB-INF/jsps/ folder. Open this JSP file and add the following code to it.
<portlet:actionURL var="actionURL"/>
<form method="post" action="<%= actionURL %>">
First Name: <input type="text" name="firstName"><br/>
Last Name: <input type="text" name="lastName"><br/>
Email: <input type="text" name="email"><br/>
<input type="submit" name="Submit">
</form>
The form presents the user with fields to accept his first name, last name and email address, and a submit button which submits the form to the portlet via the portlet:actionURL variable defined above.
Now this is where the new Eclipse Plugin can help you in making your life easier. Let's call the action to collect the user data and put it to datastore as "CollectData". Right click on the portlet project in Eclipse, and down at the bottom you will see a menu entry called "Portlets->Create new Portlet Action" as shown below in the picture.
So here is what we want to do: we want to define a new action called CollectData, implemented by a class of our choice, in this case let's say com.sun.portal.test.actions.CollectDataAction, and when our action is complete we want the portlet to take us to the page defined in /WEB-INF/jsps/done.jsp. So fill in the dialog box as shown below and click Ok. (Like I said, this is the first version, so I haven't added any error checking yet; it will be coming pretty soon though.)
On clicking Ok, you will see a new action class called CollectDataAction created under the package com.sun.portal.test.actions. Open this class in Eclipse and it will appear as below:
package com.sun.portal.test.actions;
import javax.portlet.ActionRequest;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;
import com.sun.portlet.core.actions.ActionExecuteException;
import com.sun.portlet.core.actions.BaseActionFormBean;
public class CollectDataAction extends BaseActionFormBean {
public CollectDataAction(ActionRequest request) {
super(request);
}
@Override
public boolean execute(RenderRequest portletRequest,
RenderResponse portletResponse) throws ActionExecuteException {
// Put your logic of executing the action for this request here.
return true;
    }
}
So as you can see, all you need to do is implement the method "execute()" in your CollectDataAction.java class. In the execute() method, you can access the form parameters that your view.jsp sent in as getParameter(String), getParameterNames() and getParameterValues(String) as usual.
Also, to invoke this action, the only other thing you need to do is add a hidden field in your view.jsp form called ACTION_NAME, whose value must be exactly the action you want executed when the form is submitted, which in our case is "CollectData". So our complete view.jsp page will look as below:
<portlet:actionURL var="actionURL"/>
<form method="post" action="<%= actionURL %>">
First Name: <input type="text" name="firstName"><br/>
Last Name: <input type="text" name="lastName"><br/>
Email: <input type="text" name="email"><br/>
<input type="hidden" name="ACTION_NAME" value="CollectData">
<input type="submit" name="Submit">
</form>
The only changed line is the hidden ACTION_NAME field.
That's all that is needed to implement this action on your side. Of course, you need to write the done.jsp file to which the portlet will dispatch the view after the action is executed.
To summarise, to implement any new action, all you need to do in addition to writing the source JSP and the target JSP is :
- Add a hidden variable called ACTION_NAME in the source JSP page form.
- Implement the execute() method in your Action Bean.
Rest of the stuff is handled for you!!
Let me know if you like it, by posting your comments.
Posted by insidemyhead [Sun] ( July 03, 2007 04:03 PM ) Permalink | Comments[46]
Possible answer to the 123 black hole
Here is an explanation of the mathemagical black hole (123), in my opinion. Consider the transformation on x as a function f(x), such that f(x) is the concatenation E(x) O(x) T(x),
where E(x) = number of even digits in x, O(x) = number of odd digits in x and T(x) = number of digits in x.
Then, for all x with four or more digits,
f(x) < x
So eventually, as we recurse this process, f(x) falls below 1000. Once f(x) is a three-digit number, T(x) = 3 and E(x) + O(x) = 3, so the possibilities for the concatenation E(x) O(x) T(x) with a nonzero leading digit are

1 2 3
2 1 3
3 0 3

Each of the above gives 123 in the next iteration (123 is a fixed point, and 213 and 303 each have one even digit, two odd digits and three digits, i.e. they map to 123). In the remaining case E(x) = 0, the concatenation 033 reads as the two-digit number 33, which also reaches 123 after a few more steps. Hence the proof.
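The argument is easy to verify by brute force; the following script (my own sketch) applies the transformation repeatedly and confirms that every starting value up to 10,000 reaches the fixed point 123:

```python
def step(x):
    digits = str(x)
    evens = sum(1 for d in digits if int(d) % 2 == 0)
    odds = len(digits) - evens
    # Concatenate E(x), O(x), T(x) as described above
    return int(f"{evens}{odds}{len(digits)}")

for start in range(1, 10001):
    x = start
    for _ in range(20):
        x = step(x)
    assert x == 123  # every chain falls into the 123 black hole

print("all values reach 123")
```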
Posted by insidemyhead [Personal] ( July 03, 2007 02:39 PM ) Permalink | Comments[0]
Groovy, a dynamic language for the JVM, has convenient support for REST through the HTTPBuilder library. The RESTClient class is an extension of HTTPBuilder, designed specifically for making REST calls easier (for example, by having convenience methods for GET, POST, etc.).
Issuing GET Requests
Given RESTClient, it's pretty easy:
import groovyx.net.http.RESTClient

def client = new RESTClient( '' )
def resp = client.get( path : 'products/3322' ) // ACME boomerang

assert resp.status == 200  // HTTP response code; 404 means not found, etc.
println resp.getData()
The response field data is the parsed response content, always buffered in-memory.
Issuing POST Requests
Equally simple, using the same class:
import groovyx.net.http.RESTClient

def client = new RESTClient( '' )
def resp = client.post( path : '/user/details',
                        body : [ firstName:'John', lastName:'Doe' ] )

assert resp.status == 200
Here, too, the response is available in resp.getData().
3 comments:
Thanks for an excellent tutorial, very well written.
Could you please include example with headers?
A sample with extracting cookies from login response and using it for further queries would be of great help
Wing IDE is an integrated development environment that can be used to write, test, and debug Python code that is run by the mod_python module for the Apache web server.
This document assumes mod_python is installed and Apache is configured to use it; please see the installation chapter of the mod_python manual for information on how to install it.
Since Wing's debugger takes control of all threads in a process, only one http request can be debugged at a time. In the technique described below, a new debugging session is created for each request and the session is ended when the request processing ends. If a second request is made while one is being debugged, it will block until the first request completes. This is true of requests processed by a single Python module and it is true of requests processed by multiple Python modules in the same Apache process and its child processes. As a result, it is recommended that only one person debug mod_python based modules per Apache instance and production servers should not be debugged.
Quick Start
- Copy wingdbstub.py from the Wing IDE installation directory into either the directory the module is in or another directory in the Python path used by the module.
- Edit wingdbstub.py if needed so the settings match the settings in your preferences. Typically, nothing needs to be set unless Wing's debug preferences have been modified. If you do want to alter these settings, see the Remote Debugging section of the Wing IDE reference manual for more information.
- Copy wingdebugpw from your User Settings Directory into the directory that contains the module you plan to debug. This step can be skipped if the module to be debugged is going to run on the same machine and under the same user as Wing IDE. The wingdebugpw file must contain exactly one line.
- Insert import wingdbstub at the top of the module imported by the mod_python core.
- Insert if wingdbstub.debugger != None: wingdbstub.debugger.StartDebug() at the top of each function that is called by the mod_python core.
- Allow debug connections to Wing by setting the Accept Debug Connections preference to true.
- Restart Apache and load a URL to trigger the module's execution.
Example
To debug the hello.py example from the Publisher chapter of the mod_python tutorial, modify the hello.py file so it contains the following code:
import wingdbstub

def say(req, what="NOTHING"):
    if wingdbstub.debugger != None:
        wingdbstub.debugger.StartDebug()
    return "I am saying %s" % what
And set up the mod_python configuration directives for the directory that hello.py is in as follows:
AddHandler python-program .py PythonHandler mod_python.publisher
Then set a breakpoint on the return "I am saying %s" % what line, make sure Wing is listening for a debug connection, and load http://[server]/[path]/hello.py in a web browser (substitute appropriate values for [server] and [path]). Wing should then stop at the breakpoint.
Notes
In some cases, we've seen Wing fail to debug the second and subsequent requests to mod_python. If this happens, try the following variant of the above code:
import wingdbstub
import time

if wingdbstub.debugger != None:
    wingdbstub.debugger.StopDebug()
    time.sleep(2)
    wingdbstub.debugger.StartDebug()
This reinitializes debugging with each page load. The time.sleep() duration may be shortened, or may need to be lengthened if Wing does not manage to drop the debug connection and start listening for a new connection quickly enough.
Related Documents
- Wing IDE Reference Manual, which describes Wing IDE in detail.
- Mod_python Manual, which describes how to install, configure, and use mod_python. | http://www.wingware.com/doc/howtos/mod_python | CC-MAIN-2014-41 | refinedweb | 609 | 54.73 |
#include <stdint.h>
#include "libavutil/pixfmt.h"
#include "avcodec.h"
Go to the source code of this file.
Definition in file internal.h.
Maximum size in bytes of extradata.
This value was chosen such that every bit of the buffer is addressable by a 32-bit signed integer as used by get_bits.
Definition at line 120 of file internal.h.
Referenced by avcodec_open2(), and avformat_find_stream_info().
Definition at line 1948 of file utils.c.
Referenced by ff_tls_deinit(), and ff_tls_init().
Definition at line 1966 of file utils.c.
Referenced by ff_codec_get_id(), mpeg_decode_frame(), MPV_common_init(), and validate_codec_tag().
Definition at line 1957 of file utils.c.
Referenced by ff_tls_deinit(), and ff_tls_init().
Check AVPacket size and/or allocate data.
Encoders supporting AVCodec.encode2() can use this as a convenience to ensure the output packet data is large enough, whether provided by the user or allocated in this function.
Definition at line 934 of file utils.c.
Referenced by pcm_encode_frame().
Return the hardware accelerated codec for codec codec_id and pixel format pix_fmt.
Definition at line 1916 of file utils.c.
Referenced by decode_slice_header(), ff_h263_decode_init(), vc1_decode_init(), and vcr2_init_sequence().
does needed setup of pkt_pts/pos and such for (re)get_buffer();
Definition at line 276 of file utils.c.
Referenced by avcodec_default_reget_buffer(), and ff_thread_get_buffer().
Determine whether pix_fmt is a hardware accelerated format.
Definition at line 299 of file imgconvert.c.
Referenced by avcodec_default_get_format().
Return the index into tab at which {a,b} match elements {[0],[1]} of tab.
If there is no such matching pair then size is returned.
Definition at line 1869 of file utils.c.
Referenced by ff_h263_encode_picture_header(), MPV_encode_init(), and svq1_write_header().
Remove and free all side data from packet.
Definition at line 34 of file avpacket.c.
Referenced by av_destruct_packet(), and avcodec_decode_video2(). | http://ffmpeg.org/doxygen/0.10/libavcodec_2internal_8h.html | CC-MAIN-2019-22 | refinedweb | 284 | 55.2 |
Check boxes
Posted on March 1st, 2001
WEBINAR: On-demand webcast
How to Boost Database Development Productivity on Linux, Docker, and Kubernetes with Microsoft SQL Server 2017 REGISTER >
A check box provides a way to make a single on-off choice; it consists of a tiny box and a label. The box typically holds a little ‘x’ (or some other indication that it is set) or is empty depending on whether that item was selected.
You’ll normally create a Checkbox using a constructor that takes the label as an argument. You can get and set the state, and also get and set the label if you want to read or change it after the Checkbox has been created. Note that the capitalization of Checkbox is inconsistent with the other controls, which could catch you by surprise since you might expect it to be “CheckBox.”
Whenever a Checkbox is set or cleared an event occurs, which you can capture the same way you do a button. The following example uses a TextArea to enumerate all the check boxes that have been checked:
//: CheckBox1.java // Using check boxes import java.awt.*; import java.applet.*; public class CheckBox1 extends Applet { TextArea t = new TextArea(6, 20); Checkbox cb1 = new Checkbox("Check Box 1"); Checkbox cb2 = new Checkbox("Check Box 2"); Checkbox cb3 = new Checkbox("Check Box 3"); public void init() { add(t); add(cb1); add(cb2); add(cb3); } public boolean action (Event evt, Object arg) { if(evt.target.equals(cb1)) trace("1", cb1.getState()); else if(evt.target.equals(cb2)) trace("2", cb2.getState()); else if(evt.target.equals(cb3)) trace("3", cb3.getState()); else return super.action(evt, arg); return true; } void trace(String b, boolean state) { if(state) t.appendText("Box " + b + " Set\n"); else t.appendText("Box " + b + " Cleared\n"); } } ///:~
The trace( ) method sends the name of the selected Checkbox and its current state to the TextArea using appendText( ) so you’ll see a cumulative list of the checkboxes that were selected and what their state is.
There are no comments yet. Be the first to comment! | https://www.codeguru.com/java/tij/tij0140.shtml | CC-MAIN-2017-51 | refinedweb | 348 | 64.41 |
import "compress/gzip"
Package gzip implements reading and writing of gzip format compressed files, as specified in RFC 1952.) }
Output:
Name: a-new-hope.txt Comment: an epic space opera by George Lucas ModTime: 1977-05-25 00:00:00 +0000 UTC A long time ago in a galaxy far, far away...
const ( NoCompression = flate.NoCompression BestSpeed = flate.BestSpeed BestCompression = flate.BestCompression DefaultCompression = flate.DefaultCompression HuffmanOnly = flate.HuffmanOnly ) Reader struct { Header // valid after NewReader or Reader.Reset // contains filtered or unexported fields }
A Reader is an io.Reader that can be read to retrieve uncompressed data from a gzip-format compressed file.
In general, a gzip file can be a concatenation of gzip files, each with its own header. Reads from the Reader return the concatenation of the uncompressed data of each. Only the first header is recorded in the Reader fields.
Gzip files store a length and checksum of the uncompressed data. The Reader will return an ErrChecksum when Read reaches the end of the uncompressed data if it does not have the expected length or checksum. Clients should treat data returned by Read as tentative until they receive the io.EOF marking the end of the data.
NewReader creates a new Reader reading the given reader. If r does not also implement io.ByteReader, the decompressor may read more data than necessary from r.
It is the caller's responsibility to call Close on the Reader when done.
The Reader.Header fields will be valid in the Reader returned.
Close closes the Reader. It does not close the underlying io.Reader. In order for the GZIP checksum to be verified, the reader must be fully consumed until the io.EOF.
Multistream controls whether the reader supports multistream files.
If enabled (the default), the Reader expects the input to be a sequence of individually gzipped data streams, each with its own header and trailer, ending at EOF. The effect is that the concatenation of a sequence of gzipped files is treated as equivalent to the gzip of the concatenation of the sequence. This is standard behavior for gzip readers.
Calling Multistream(false) disables this behavior; disabling the behavior can be useful when reading file formats that distinguish individual gzip data streams or mix gzip data streams with other data streams. In this mode, when the Reader reaches the end of the data stream, Read returns) }
Output:
Name: file-1.txt Comment: file-header-1 ModTime: 2006-02-01 03:04:05 +0000 UTC Hello Gophers - 1 Name: file-2.txt Comment: file-header-2 ModTime: 2007-03-02 04:05:06 +0000 UTC Hello Gophers - 2
Read implements io.Reader, reading uncompressed bytes from its underlying Reader.
Reset discards the Reader z's state and makes it equivalent to the result of its original state from NewReader, but reading from r instead. This permits reusing a Reader rather than allocating a new one.
type Writer struct { Header // written at first call to Write, Flush, or Close // contains filtered or unexported fields }
A Writer is an io.WriteCloser. Writes to a Writer are compressed and written to w..
Close closes the Writer by flushing any unwritten data to the underlying io.Writer and writing the GZIP footer. It does not close the underlying io.Writer..
Reset discards the Writer z's state and makes it equivalent to the result of its original state from NewWriter or NewWriterLevel, but writing to w instead. This permits reusing a Writer rather than allocating a new one.
Write writes a compressed form of p to the underlying io.Writer. The compressed bytes are not necessarily flushed until the Writer is closed.
Package gzip imports 8 packages (graph) and is imported by 19386 packages. Updated 2020-06-02. Refresh now. Tools for package owners. | https://godoc.org/compress/gzip | CC-MAIN-2020-29 | refinedweb | 629 | 59.19 |
Custom Datatype SPARQL Indexes900576 Mar 13, 2012 3:53 PM
Hello,
This content has been marked as final. Show 8 replies
1. Re: Custom Datatype SPARQL IndexesMatperry-Oracle Mar 14, 2012 8:14 AM (in response to 900576)Hi Michael,
Lexical values for RDF terms are stored in the MDSYS.RDF_VALUE$ table, and (for typed literals) functions are used to map those lexical values to native Oracle types so that FILTERs can be evaluated. The SEM_APIS.GetV$DateTimeTZVal() function that you mentioned is one such function that converts xsd:dateTime literals into Oracle TIMESTAMP WITH TIMEZONE values. There are similar functions to get NUMBER, VARCHAR2 and SDO_GEOMETRY values.
A SPARQL FILTER submitted through Jena Adapter or SEM_MATCH is converted into a native Oracle SQL query and executed against MDSYS.RDF_VALUE$ and MDSYS.RDF_LINK$. For example, FILTER (?x < "2012-03-14T12:00:00Z"^^xsd:dateTime) would translate in to something like
FROM ..., MDSYS.RDF_VALUE$ V1
WHERE SEM_APIS.GetV$DateTimeTZVal(V1.VALUE_TYPE, V1.VNAME_PREFIX, ....) < SEM_APIS.GetV$DateTimeTZVal('LIT', '2012-03-14T12:00:00Z',...)
Now, if we have a function based index on MDSYS.RDF_VALUE$ for SEM_APIS.GetV$DateTimeTZVal() and the FILTER is selective enough, an index-based evaluation will be used instead of a row-by-row table scan.
We provide a datatype indexing API for convenience that creates these function-based indexes behind the scenes:
With regards to your question about custom datatype indexes, the answer is no you cannot currently create custom datatype indexes. You could easily create custom functions and indexes on MDSYS.RDF_VALUE$ but the SPARQL-to-SQL translation logic doesn't know about these functions/indexes so the functions would not be used in the generated SQL and therefore would not be used to evaluate your SPARQL query.
We would very much like to hear more about your use case though, as we are always taking new requirements for future versions of Oracle Semantic Technologies.
For you second question, the answer is yes any datatype indexes created will be used by both SEM_MATCH and Jena Adapter.
Hope this helps,
Matt
2. Re: Custom Datatype SPARQL Indexes900576 Mar 14, 2012 8:53 AM (in response to Matperry-Oracle)Matt,
There was a previous post that explained the background about the use case we have:
Formatting Lexical Values using Jena
To summarize, we have a datatype of 'ns:dollar' that we are converting a decimal value from our RDB to the triplestore and setting it's datatype to 'ns:dollar'. As described in this post, we are storing the lexical form of that value in the triplestore (e.g. "$90,000.00"^^ns:dollar).
We would like to add a FILTER clause to be able to compare the value of this literal with a dollar amount provided.
For example:
?s ns:dollarValue ?DOLLAR
FILTER( ?DOLLAR < "$150,000.00"^^ns:dollar)
We would like this to return all objects that have a dollarValue less than $150,000.
From what we understand, simple comparisons will not work because the literals are treated as strings rather than a numeric value. The indexes would a solution since we are 'required' to store the lexical form of this literal in the triplestore.
If we could provide our own function that would translate a custom datatype from the lexical format stored in the triplestore to a comparable/native datatype within the Oracle DB, and then inform the SPARQL translator of that function (and function based index). We would (in theory) be able to perform comparisons with the 'ns:dollar' datatype in the same manner as 'xsd:dateTime'.
We appreciate your help and suggestions.
Thanks
-MichaelB
3. Re: Custom Datatype SPARQL IndexesMatperry-Oracle Mar 14, 2012 11:00 AM (in response to 900576)Hi Michael,
You can implement this custom FILTER in two ways:
1) With a larger SQL query that contains a SEM_MATCH subquery:
Assume you create a PL/SQL function
getV$DollarVal(value_name varchar2, literal_type varchar2) return NUMBER
You can use this function in the outer block of a SQL SEM_MATCH query:
SELECT s, c
FROM TABLE (sem_match(
'{ ?s :hasCost ?c }',
SEM_MODELS('DATA'),
...) ) sm
WHERE getV$DollarVal(sm.c, sm.c$rdfltyp) < 150000;
2) By creating a custom Java function to compare two ns:dollar values and adding this to the ARQ function Library for Jena Adapter:
SELECT ?s ?c
WHERE { ?s :hasCost ?c
FILTER (ns:dollarComp(?c, "150,000.00"^^ns:dollar) < 0) }
An indexed based execution is not possible with the Jena Adapter option and is highly unlikely to be used in the SQL case. Enabling an indexed-based evaluation for such custom filters/datatypes is an interesting requirement that we will definitely consider.
Thanks,
Matt
4. Re: Custom Datatype SPARQL IndexesMatperry-Oracle Mar 14, 2012 1:59 PM (in response to Matperry-Oracle)Hi Michael,
I forgot one other option.
You could get the same query as option (1) from my previous post using the ORACLE_SEM_AP_NS namespace with Jena Adapter:
- Matt
5. Re: Custom Datatype SPARQL Indexes900576 Mar 14, 2012 4:34 PM (in response to Matperry-Oracle)Thanks for the suggestions.
I have dabbled in all of them, I have to throw out option 1 due to the fact that we are selecting data with the Jena Adapter using SPARQL.
Option 2 was interesting. I created the custom function and registered it with Jena, however when the sparql was run with the FILTER clause calling the function, it appeared to be looping over a TON of data, way more that we have. So it still does not work for me.
Option 3 was also interesting however, i have not been able to get it to work through Jena. The following line was added to the sparql string (and the necesarry function was also created).
PREFIX ORACLE_SEM_AP_NS:<>
This function works stand alone (using in in Option 1) and also using it in the FILTER paramter of the SEM_MATCH function.
From what I understand, the string 'file_folder_util.parseDollar(CURRENCY1,CURRENCY1$rdfltyp)=90000' will be pulled from the PREFIX and placed in that FILTER parameter when it calls sdo_rdf_match. However that parameter was an empty string.
Let me know if you see something that I have completely wrong.
Thanks
-MichaelB
6. Re: Custom Datatype SPARQL Indexes900576 Mar 15, 2012 10:52 AM (in response to Matperry-Oracle)I turned on Debug logs and here is some interesting output. (This is attempting Option #2)
DEBUG[2012-03-15 11:27:02,151] - [OracleOpExecutor:debug:202] - buildSemIterator: sqleacleGraphBase:debug:202] - processSqlExceptionAfterQueryExecution:acleModelBase:debug:202] - getStatusForEntailedGraph: start
DEBUG[2012-03-15 11:27:02,151] - [OracleModelBase:debug:202] - getStatusForEntailedGraph: done
DEBUG[2012-03-15 11:27:02,151] - [OracleRepeatApply:debug:202] - nextStage: sqle error code 29532
DEBUG[2012-03-15 11:27:02,151] - [OracleRepeatApply:debug:213] - nextStage: fall back
java.sql.SQLException:
I am adding the function to the registry prior to executing the sparql:
FunctionRegistry fr = FunctionRegistry.get();
fr.put("", DollarValue.class);
Here is a sparql snippet that matches what I am using:
PREFIX func : <>
SELECT ?v0 ?label
WHERE {
?v0 :displayLabel ?label
?v0 :dollarAmount ?DOLLAR1 .
FILTER( func:DollarValue(?DOLLAR1, 90000) < 0 )
}
For the DollarValue function implementation, i take the dollar type variable and get the Double value by calling the unparse method of the Datatype. And then I return the Double.compareTo(Double) result.
-MichaelB
7. Re: Custom Datatype SPARQL IndexesMatperry-Oracle Mar 15, 2012 2:32 PM (in response to 900576)Hi Michael,
We have some Oracle extension functions, e.g. orageo:withinDistance(), that are handled natively. It looks like Jena Adapter is incorrectly passing the extension function through to be handled natively on the SQL side.
It would be very helpful if you could please create a small reproducible test case and email it to me at matthew dot perry at oracle dot com.
As a workaround, you could try the UEAP syntax (url encoded version of the additional predicate) with Jena.
We can easily convert the dollar value into a number with TO_NUMBER(value, format) in SQL.
So your FILTER could be implemented as a SQL expression:
SQL> select to_number('$90,000.00', '$999,999,999.00') from dual; TO_NUMBER('$90,000.00','$999,999,999.00') ----------------------------------------- 90000
We can add this additional predicate to the WHERE clause of the SQL translation with UEAP syntax (note the url encoding and the addition of ?dollar1 to the SELECT clause):
(to_number(dollar1, '$999,999,999.00') < 90000)
With debug on, you should be able to see if the expression was added correctly to the SQL query.
PREFIX ORACLE_SEM_UEAP_NS: <> SELECT ?v0 ?label ?dollar1 WHERE { ?v0 :displayLabel ?label . ?v0 :dollarAmount ?DOLLAR1 . }
(If you are using Joseki, then we need to set -Doracle.spatial.rdf.client.jena.allowAP=true)
Hope this helps,
Matt
8. Re: Custom Datatype SPARQL Indexes900576 Mar 15, 2012 3:29 PM (in response to Matperry-Oracle)Matt,
I was able to get that to work:
String prefix = "PREFIX ORACLE_SEM_UEAP_NS: <> ";
However adding this prefix is quite complex to find all literals that are less than $90,000.00 and greater than $30,000.00
I also found that if I added more than one "PREFIX ORACLE_SEM_UEAP_NS" it would imply an OR between the statements.
I will try to find some time to send you a test case with the reproducable issue.
Thanks
-Micahel | https://community.oracle.com/thread/2361582?tstart=45 | CC-MAIN-2015-14 | refinedweb | 1,540 | 53.31 |
05 August 2009 15:46 [Source: ICIS news]
By Julia Meehan
LONDON (ICIS news)--European caprolactam sellers are to seek August price increases due to higher benzene costs and a tight supply situation, producers said on Wednesday.
“We have just started our discussions and [the August caprolactam contract price] will be a three-figure increase…benzene is up another €88/tonne ($128/tonne) and [caprolactam] availability is tight, so there is lots of pressure coming from the producer side,” said one producer.
Another producer said: “I am fed up saying to polyamide 6 producers that I need at least the benzene, and for August €120/tonne ($174/tonne) is what I have mentioned.
“I am buying as much material as possible because we are short on product, not because we are not producing, but because we are getting many enquiries from all over the world,” the producer added.
Because of the sharp increase in the value of caprolactam in Asia compared with values in Europe, European producers have been flaking caprolactam for shipment to ?xml:namespace>
Caprolactam prices in
According to global chemical market intelligence service ICIS pricing, caprolactam in the key Chinese market was valued at $2,150-2,200/tonne CFR (cost and freight)
Market sources have attributed the gains to higher benzene values and tighter supply as a result of production cutbacks.
The June caprolactam contract settled at a pre-discounted price of €1,568-1,624/tonne FD (free delivered) NWE (northwest
“Caprolactam in Europe is completely undervalued and the longer the lower prices continue, the less volume there will be in
“The consumption is higher because orders are increasing…this is the situation and if this continues I foresee bigger problems, because there is no product in the chain,” the producer added.
Another supplier said: “There is a lot of flaking going on and demand is better, and this is what is driving the market.”
The selling source also spoke about the recent lack of visibility in the market, saying that this was slowly improving.
“Compared to April and May, visibility has increased and we have moved away from this two-week thinking and back to four weeks. But we are far away from planning three to four months ahead,” he said.
As European caprolactam buyers were bracing themselves for another price increase, they continued to express the difficulties they faced when trying to recover costs in the polyamide 6 market.
“Suppliers are trying to recover the maximum, but I think [the August contract price] could be half – not the full – benzene, since we paid close to the full benzene in June and July,” said a major buyer.
“We are not able to pass this increase on month by month and we are losing margin…more than benzene is not acceptable,” the buyer added.
A second European consumer described the €88/tonne benzene increase as “a disaster”.
“We [the producers] have absorbed benzene in June and July and now August. When we sell fibres, we are not recovering a single penny. It’s just not reasonable to sell polymer and buy caprolactam at the same price,” said the source.
“[Polyamide 6] producers continue to create their own problems and it would help if they bought less caprolactam and stopped producing so much polymer,” concluded a European caprolactam producer.
Major European caprolactam producers include BASF, Domo Caproleuna, DSM and LANXESS.
For more on caprolact. | http://www.icis.com/Articles/2009/08/05/9237724/europe-caprolactam-sellers-seek-august-price-hikes.html | CC-MAIN-2013-20 | refinedweb | 568 | 53.55 |
Criteria Query with Left Outer Join
In Grails app development, when using criteria queries, we often find cases for Outer joins. Lets take an example. We have two domain classes with us:
class Blog { String title String content static hasMany = [comments: Comment] static constraints = { } }
and
class Comment { String text static belongsTo = [blog: Blog] static constraints = { } }
Now, lets say, we want get the title of all the blogs along with the count of the comments on it. The query that we try is:
List blogStats = Blog.createCriteria().list { createAlias('comments', 'c') projections { groupProperty('id') groupProperty('title') count('c.id') } }
Interestingly, we do not get what we want. We do get a list of list of id, title and comments count, but only for those blogs for which comments exist. So, with creating an alias, we just created a join but what we need is a left outer. As always, stackoverflow had the answer. The createAlias method can take another parameter to specify that we want the join to be a Left outer join.
We’ll need to add the following import:
import org.hibernate.criterion.CriteriaSpecification
And the our query looks like:
List blogStats = Blog.createCriteria().list { createAlias('comments', 'c', CriteriaSpecification.LEFT_JOIN) projections { groupProperty('id') groupProperty('title') count('c.id') } }
Looks pretty simple now that I know it
How about you? Hope it helps.
CriteriaSpecification.LEFT_JOIN has become deprecated. We can use JoinType.LEFT_OUTER_JOIN for recent versions.
@Shubham: Not sure what you mean by “Creating alias in another statement”. But if you are looking at reusing a part of the criteria, may be this helps you:
after creating “Blog.createCriteria()” can i use create alias in another statement?
Very helpful blog. Thanks a lot!
amazing it is really helpful with little control if one is not willing to use hql
Simple, neat and very useful. Thanks.
Thanks a lot! It saves my life!
| https://www.tothenew.com/blog/criteria-query-with-left-outer-join/ | CC-MAIN-2019-51 | refinedweb | 312 | 60.61 |
Lot many times an application is required to do many time consuming operations at start-up. Sometimes you need to read data from a database, sometimes you may need to retrieve some data from a web service. When this happens, it's often useful to display a "Splash Screen" to the user, with a company logo or something, along with a progress bar to indicate how much longer it will take for the app to load. While it may sound simple at first, it can be a bit tricky; if you simply show a screen, and do your time consuming operations, your UI will hang and your progress bar will never update. Therefore, there's some threading involve, so we will demonstrate a simple example here.
Start off by creating a simple Windows Forms project. Once it's loaded, add another window for and call it "SplashScreen". To get the look and feel right, let's set some properties:
Now, in the properties window, find the BackgroundImage property and click the little elipsis {...} and select a picture from your hard drive. The BackgroundImageLayout property is changed to None but you can do whatever you want. Then, add a progress bar to the bottom of the form, and set the Dock property to Bottom. Here's what our splash screen looks like in the designer:
Figure 1: Splash Screen
Now, we need to give access to someone outside of this class to update the progress (the progressBar is a private member and can't be accessed.) Here's the problem; if we simply wrap the progress bar's Value property in our own getter/setter like this:
Listing1: Code displaying the wrapping of progress bar’s Value Property
Public int Progress { get { return this.progressBar1.Value; } set { this.progressBar1.Value = value; } }
While you can do that, remember, this splash screen will be shown in a separate thread. If you then try to access this property from your main thread, you'll get an InvalidOperationException that "Cross-thread operation not valid:
Control 'progressBar1' accessed from a thread other than the thread it was created on." So, in order to be able to set any of the UI elements from another thread, we need to call the form's Invoke method which takes a delegate.
Here's the complete code for the SplashScreen class:
Listing2: Code displaying the SplashScreenclass
using System.Windows.Forms; namespace SplashScreenTesting { public partial class SplashScreen : Form { private delegate void ProgressDelegate(int progress); private ProgressDelegate del; public SplashScreen() { InitializeComponent(); this.progressBar1.Maximum = 100; del = this.UpdateProgressInternal; } private void UpdateProgressInternal(int progress) { if (this.Handle == null) { return; } this.progressBar1.Value = progress; } public void UpdateProgress(int progress) { this.Invoke(del, progress); } } }
As you can see, we created a delegate that we'll use to invoke the update to the progress bar. Let's create a class that simulates a time consuming operation. Basically, it just calculates Math.Pow for numbers 1 - 100 raised to the 1 - 500,000th power. Every time we move on to another outer number (the 1 - 100) we raise an event that reports progress. Once it's done, we raise an event that we're done. Here's the complete class:
Listing3: Code displaying the complete class
using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace SplashScreenTesting { public class Hardworker { public event EventHandler<HardWorkerEventArgs> ProgressChanged; public event EventHandler HardWorkDone; public void DoHardWork() { for (int i = 1; i <= 100; i++) { for (int j = 1; j <= 500000; j++) { Math.Pow(i, j); } this.OnProgressChanged(i); } this.OnHardWorkDone(); } private void OnProgressChanged(int progress) { var handler = this.ProgressChanged; if (handler != null) { handler(this, new HardWorkerEventArgs(progress)); } } private void OnHardWorkDone() { var handler = this.HardWorkDone; if (handler != null) { handler(this, EventArgs.Empty); } } } public class HardWorkerEventArgs : EventArgs { public HardWorkerEventArgs(int progress) { this.Progress = progress; } public int Progress { get; private set; } } }
Now, on to the main part of the app; the displaying of the splash screen: In the Form1's load event, we'll spawn off another thread to actually display the splash screen and then on the main thread we'll do our "Hard Work". Once the HardWorker has reported that the progress is complete, we'll dispose of the splashscreen and display the main Form.
Listing 4: Code to display the splashing screen
using System; using System.Windows.Forms; using System.Threading; namespace SplashScreenTesting { public partial class Form1 : Form { private SplashScreen splashScreen; private bool done = false; public Form1() { InitializeComponent(); this.Load += new EventHandler(HandleFormLoad); this.splashScreen = new SplashScreen(); } private void HandleFormLoad(object sender, EventArgs e) { this.Hide(); Thread thread = new Thread(new ThreadStart(this.ShowSplashScreen)); thread.Start(); Hardworker worker = new Hardworker(); worker.ProgressChanged += (o, ex) => { this.splashScreen.UpdateProgress(ex.Progress); }; worker.HardWorkDone += (o, ex) => { done = true; this.Show(); }; worker.DoHardWork(); } private void ShowSplashScreen() { splashScreen.Show(); while (!done) { Application.DoEvents(); } splashScreen.Close(); this.splashScreen.Dispose(); } } }
In the constructor we just hook into the Load event and new up the SplashScreen. Then, in the Load event handler, we first hide the current form, because we don't want that to be seen just yet. We then create a Thread and pass in a delegate to our ShowSplashScreen method. The show splash screen method is what's actually going to be run on a seperate thread. First, it displays the SplashScreen. Then, it just sits there in a constant loop waiting for the "done" bool to be set to true. The key ingredient here is the call to Application.Doevents(). aListBox and add DoEvents to your code, your form repaints when another window is dragged over it. If you remove DoEvents from your code, your form will not repaint until the click event handler of the button is finished executing.
Basically, it allows other events to be taken care of, even though the current procedure is blocking execution. This allows our progress bar to be updated.
Back in the form load, we then new up our HardWorker and hook into it's events. Basically, every time the HardWorker reports progress, we update our progress bar. Then when it's done, we set our done flag to true, and show the main form. Finally, we actually kick it off with a call to DoHardWork();
Try it out and run it. It's actually kind neat to see it in action. This is obviously a very rough example, but it should give you a basic idea on how to get this done. Hope you enjoyed reading the tutorial.
Software Developer from India. I hold Master in Computer Applications Degree and is well versed with programming languages such as Java, .Net, C and C++ and possess good working knowledge on Mobile Platforms as well. | http://mrbool.com/how-to-make-a-splash-screen-in-csharp/26598 | CC-MAIN-2015-32 | refinedweb | 1,108 | 65.83 |
The engine is simply a translator. Just as we need a translator to interact with somebody who doesn't speak our language, JavaScript needs its own special translator, called the engine, to understand what our code means. Almost everyone who has worked with JavaScript has interacted with the V8 engine. Most people also know that JavaScript is a single-threaded language that uses a callback queue, and we often hear phrases like "JS is an interpreted language." So why should we explore all of this? We can just write code without knowing each of these principles and we'll be…
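As a small illustration of the single-threaded model and the callback queue mentioned above (a sketch of the behavior, not of V8 internals), a zero-delay setTimeout callback still has to wait until all of the synchronous code has finished:

```javascript
// Single-threaded JavaScript with a callback queue: all synchronous code
// runs first; the setTimeout callback waits in the queue until the call
// stack is empty, even with a 0 ms delay.
const order = [];

order.push("start");

setTimeout(() => {
  order.push("callback"); // queued; runs only after the synchronous code
}, 0);

order.push("end");

// At this point the callback has not run yet:
console.log(order.join(" -> ")); // "start -> end"
```

The callback only runs once the stack is empty, which is exactly why a long-running synchronous loop can freeze the whole page.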
React Hooks is a new feature that was introduced in February of 2019 with React 16.8. Hooks are a really exciting feature that aims to change the way we write certain components as well as how we write our applications. Hooks are a way for us to write functional components while gaining access to functionality that was previously only available if we wrote class components. We can't use hooks inside a class component; we can only use them inside functional components.
So let’s begin…
useState is exactly like it sounds: it allows…
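To make the idea concrete, here is a toy model of what useState does, written in plain JavaScript. This is only a simplified sketch of the concept, not React's actual implementation, and the Counter component with its increment helper is invented for illustration:

```javascript
// Toy sketch of the idea behind useState: state lives outside the
// component function, so it survives re-renders, and the hook returns
// the current value plus a setter.
let storedState; // in real React this is stored per component, per hook

function useState(initialValue) {
  if (storedState === undefined) storedState = initialValue;
  const setState = (newValue) => { storedState = newValue; };
  return [storedState, setState];
}

// A pretend functional component
function Counter() {
  const [count, setCount] = useState(0);
  return { count, increment: () => setCount(count + 1) };
}

let ui = Counter(); // first "render": count starts at 0
ui.increment();     // a user interaction updates the stored state
ui = Counter();     // re-render: the hook hands back the new value
console.log(ui.count); // 1
```

The real hook also triggers a re-render when the setter is called; here we re-run Counter by hand to mimic that.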
CSS flexbox is a one-dimensional layout pattern that makes it easy to design flexible and effective layouts: it divides space between items and controls their alignment in a given container. Flex layout also provides a lot of flexibility. With flexbox, we can arrange items from left to right or top to bottom, and at the same time control the spacing and order of the items in the container.
In flexbox, there are mainly two entities: a parent container (the flex container) and the immediate children elements (flex items).
We also have to deal with the axes: the main axis and the…
A modern introduction to some of the new features in ES6 is also known as ECMAScript 6 and ECMAScript 2015.
ES2015 important syntax addition arrow function. Arrow function provides a short and concise syntax for writing functions and they also simplified the behavior of this keyword in javascript. First things we omit the function keyword. we can just use the parentheses and followed by the greater than sign and right arrow without spacing. And the right-hand side we wrote curly braces and then execute other codes inside this curly braces.
//Normal function
function sayHello(value) {
return value;
}//Arrow function
const…
Common Javascript interview question with answer
Ans: Javascript is a lightweight Object-Oriented programming language. Which used for client-side scripting and validation. It’s introduced in 1995 when it supports only the Netscape navigator browser. Then it’s adopted all kinds of other browsers. it mostly features interact webpage directly without page reloading. Although Javascript has no connectivity with java programming.
Ans: Some of the features of javascript are:
Ans: Some of the disadvantages of Javascript.
Ans: Javascript Function accepts arguments is…
React is not a framework it’s a javascript library. We can import its built-in method and use it. React work with an entire network of tools all work together. Webpack, npm, node, J.S.X, and Babel. There are alternatives for tools but this is the most common use.
It looks like everything in the HTML tag But this is all about Javascript. In react, app JSX babel preset will transform these expressions into actual Javascript code. Babel is typically used and recommended for transforming JSX expression.
//JSX
return (
<div className="title">
<h1>Hello World!</h1>
</div>
);
This is a simple JSX code…
Here have the most commonly asked HTML5 And CSS interview questions and answers.
Ans: HTML means hypertext markup language. It helps us to create an easy graphical web page audio video image animation and also link different pages. some point I mention
Ans: HTML5 version introduces a new element <dialog> and it’s made easy to create a popup in a single line. This element makes it natural to create popup dialogs and modals on a web page.
Ans: The
<input type="date"/>defines a date picker.
Ans: HTML tags-only opening and closing tags without…
In these stories, we will jump some javascript features. That we must use in our codes.
Away we know about the static websites and dynamic websites. most website is dynamic they store data into the server and serves data depending on a client request to display data into the client-side. But when we went to some of the data we need to store on the client base like shopping cart, website theme, color,font-size. This time helps us client-side storage. client-side storage work for storing data directly within the browser. Client storage is browser-independent.
There are a few methods as we…. This method is like an array value finding.
const movie = "Thor"
console.log(movie[0])
//T
const fristName = "Jon";
const lastName = "Doe";
console.log(fristName.concat(" ",lastName))
Includes method checks if a specific value exists or not, and it returns a boolean value.
Note: The includes methods are case-sensitive.
const country = "Bangladesh…
In this tutorial we will learn very basics of redux and implement our react countDown app. let’s begin.
First, we create a react app as we know with
npx create-react-app myapp
Now we install our redux in this app with this command
npx install redux
okay now we completed our installation it’s time to run our app. I believe. you know how to run the app better than me. just open your terminal and goto project directory with cd myapp and run npm start.
Create a folder in our src directory and name it redux also create a file myRedux.js…
Front End Developer | https://jobayerdev.medium.com/?source=post_internal_links---------6---------------------------- | CC-MAIN-2021-25 | refinedweb | 958 | 66.03 |
Hi,
I have a txt file where ive split up the text using a delimiter and printed the results. I been trying to make it so the java will read the file and where a delimiter is missing it wil output the result separtely as invalid. any help/pointers in the right direction greatly appreciated. heres my code:
import java.io.*;
import java.util.Scanner;
import java.io.IOException;
public class Generator {
private File file;
public static void main(String args[]) throws FileNotFoundException {
Generator String = new Generator("results.txt"); //line to input the text file
String.processLineByLine(); //line by line reading from the text
}
public Generator(String txt) {
file = new File(txt); //open a new text file for scan after
}
public void processLineByLine() throws FileNotFoundException {
Scanner scan1 = new Scanner(file);
try { //
//Scanner to get each line
while ( scan1.hasNextLine() ){ //continue to the next line
processLine( scan1.nextLine() );
}
}
finally {
scan1.close(); //no next line = scanner close
}
}
public void processLine(String aLine){
//use a second Scanner to string the content of each line
Scanner scanner = new Scanner(aLine);
scanner.useDelimiter(",");
while ( scanner.hasNext() ){
String event = scanner.next(); // name the columns
String co1 = scanner.next();
String co2 = scanner.next();
String co3 = scanner.next();
String competitor1 = scanner.next();
String competitor2 = scanner.next();
String competitor3 = scanner.next();
System.out.println( event + ": (Gold) " + competitor1 + co1 + ", (Sliver) " + competitor2 + co2 + ", (Bronze) " + competitor3 + co3);
} //print out the result
scanner.close(); // close the scanner
} | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/12986-need-help-validating-results-printingthethread.html | CC-MAIN-2014-52 | refinedweb | 234 | 61.63 |
Use Vue.js Data Binding Options for Reactive Applications
Vue.js is known as a “progressive framework for building user interfaces”. There’s a lot to unpack in this simple statement. It’s easy to get started with Vue.js, with a minimal feature set, and then layer in more of the framework as you need it.
Unike React, it has full support for the MVC (Model View Controller) pattern out-of-the-box.
It’s easier to use and grow with than Angular.
And, if you couldn’t tell, I’m a little biased.
Vue.js has full support for ECMA 6 (sometimes referred to as ES6 or ES2015). This means it’s now very easy to make your applications modular as well as being able to support modern syntax, like:
import.
Vue.js has a lot of options for managing reactive data-binding in your application. This is the ability for views to automatically update when models (data) change.
In this post, you’ll look at three different approaches, each with their own pros and cons. For each of the three approaches, you’ll work with the same application: a progress bar that you can control with buttons. Then, you’ll dig deeper into the last option with a more complex code example.
The application also uses the BootstrapVue project which gives us a set of easy tags and components to work with for demonstration. You’ll make extensive use of the progress bar component.
Later, you’ll make use of the Vuex library for formal management of data stores. You’ll see how we can use these data stores to manage login and logout with Okta. First, let’s look at: Why use Okta?
Why Use Okta for Authentication?
While the example app in this post is focused on data binding, you’re going to be building a real-world application. The application includes authentication using the OpenID Connect standard in conjunction with Okta and stores the results of the authentication in the advanced data store for Vue.js. Okta makes identity management easier, more secure, and more scalable than what you’re used to. Okta is an API service that allows you to create, edit, and securely store user accounts and user account data, and connect them with one or more applications. As a developer, I know that I need authentication. But, I’ve seen enough horror stories from breaches over the years that I am happy to not handle credentials directly. Our API enables you to:
- Authenticate and authorize your users
- Store data about your users
- Perform password-based and social login
- Secure your application with multi-factor authentication
- And much more! Check out our product documentation for more information
To get started on this tutorial, register for a forever-free developer account, or sign in if you already have one. When you’re done, come back to learn more about building a secure SPA app with Vue.js and Vuex.
Use a Global Data Object for Simple Requirements
Using a global data object is straightforward and functional. It’s very accessible and the easiest of the approaches you’ll look at. It’s also the most fragile approach and requires duplicated code.
Let’s start by building and running the application. Clone the vue-data-binding-approaches GitHub project.
Note: The Vue CLI project was used to create each of the projects found in the repo.
Switch to the project folder, and run:
cd basic npm install npm run serve
This runs a local instance of the application. Launch your browser and navigate to:
For this section of the post, you’ll be using the
Global tab. Click Advance progress bar and you’ll see the progress bar move. Click the Two tab, and you should see the progress bar at the same point. The progress bars on tabs One and Two are kept in sync automatically through a global data object.
Let’s look at the code that backs the Global tab:
In the
main.js file, I define the global data object:
export const globalData = { state: { max: 50, score: 0 } }
Notice that within the
globalData object, there’s a
state object. Within
state are the actual properties we want to make sure are reactive. Due to the limitations of modern JavaScript, Vue.js cannot detect property addition or deletion. As long as we preserve
globalData.state, Vue.js will be able to keep variables inside it reactive.
Let’s take a look at the parts of the
basic/src/components/data-binding-global/One.vue template which manipulates the progress bar and keeps the data in sync.
Starting with the
<script> section first, you can see that the
globalData object defined in
main.js is imported into this template.
import { globalData } from '../../main'
The
data function binds the
globalData object to a local variable in this template:
data() { return { scoreState: globalData.state } }
The
advance and
reset functions manipulate the values in the
globalData object. This is done by using the local template reference, which “points” to the object within our data structure. This is what preserves the reactive nature of the data in the template and why we need to nest the data properties in
globalData.
advance: function () { if (this.scoreState.score < this.scoreState.max) { this.scoreState.score += 10 } }, reset: function () { this.scoreState.score = 0 }
Tying it all together is the template section. Here’s the progress bar:
<b-progress : </b-progress>
In the
<b-progress> tag, the value for
max is bound to the local template data object:
scoreState.max. In the
<b-progress-bar> tag, the value for
value is bound to the local template data object:
scoreState.score.
Finally, the template has buttons that when clicked call the
advance and
reset functions respectively to allow you to manipulate the progress bar.
basic/src/components/data-binding-global/Two.vue is almost an exact replica of
One.vue. And, herein lies the issue with this approach: lots of repeated code.
Try out the
Global tab on the app and you should see that however far you advance the
One tab within it, when you click the
Two tab, the progress bar will be at the same location. This is proof that our global reactive data binding is working.
If you look at
One.vue and
Two.vue in
basic/src/components/data-binding-global, you can see that there’s a lot of repeated code. We can improve on this code using the Storage Pattern to centralize initialization and logic.
Use the Storage Pattern to Centralize Data Update Logic
Click on the
Storage Pattern tab of the app. You can use the inside tabs to switch between the
One and
Two tabs. It looks just the same as the previous example. You’ll see the difference as we dig into the code.
In this version of the code, you use a centralized data store which includes not only the data object and initial states but also all the business logic.
Take a look at the
basic/src/model/scoreStore.js:
export default { state: { max: 50, score: 0 }, reset: function () { this.state.score = 0 }, score: function (score) { if (this.state.score < this.state.max) { this.state.score += score } }, bumpScore: function () { this.score(10) } }
There’s still a
state object that contains the internals of the data we want to be reactive in the application. There’s also
reset,
score, and
bumpScore functions containing the business logic that was repeated across components in the previous example.
Now, take a look at the script section of
basic/src/components/data-binding-storage/One.vue:
import scoreStore from '../../model/scoreStore' export default { name: 'One', data() { return { scoreState: scoreStore.state } }, methods: { advance: () => scoreStore.bumpScore(), reset: () => scoreStore.reset() } }
the
data() function is very similar to what we saw before. You bind the data
state from the central model store.
The local functions in the methods section simply refer to functions from the central store.
There’s a great reduction in the repeated code in the components and any additional logic can be added to the central store.
This approach is robust and functional for simple projects. There are some shortcomings, however. The central store has no record of which component changed its state. Further, we want to evolve the approach where components can’t directly change state, but rather trigger events that notify the store to make changes in an orderly manner. For more complex projects, it’s useful to have a more formal data binding paradigm.
This is where Vuex comes in.
Use Vuex for Modern Data Binding
Vuex is inspired by other modern state management frameworks, like flux. It accomplishes two primary goals:
- A centralized, reactive data store
- Components cannot directly change state
Look at
basic/src/model/scoreStoreVuex.js:
import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) export default new Vuex.Store({ state: { max: 50, score: 0 }, mutations: { bumpScore (state) { if (state.score < 50) { state.score += 10 } }, reset (state) { state.score = 0 } } })
This time, you’re instantiating a
Vuex object. Similar to the storage pattern from earlier, we define a state object in the
Vuex.Store. What’s different here, is that there’s a
mutations section. Each function in this section receives
state as a parameter. These functions are not called directly.
You can see how changes to the datastore state are made by looking at
basic/src/components/data-binding-vuex/One.vue. Take a look at the
script section:
import scoreStoreVuex from '../../model/scoreStoreVuex' export default { name: 'One', data() { return { scoreState: scoreStoreVuex.state } }, methods: { advance: () => scoreStoreVuex.commit('bumpScore'), reset: () => scoreStoreVuex.commit('reset') } }
Notice that the
advance and
reset functions call the
commit function on
scoreStoreVuex. The
commit function takes a text parameter which is the name of one of the mutations we defined in the Vuex store.
Once again, you can see that our progress meter is kept in sync across the
One and
Two views.
The progress meter is a very simple example. In the next section, we’ll examine more complex uses of Vuex.
Advanced Data Binding with Vuex
Aside from ensuring that data elements cannot be directly changed, Vuex has a number of other features that adds to its usefulness. In this section, you’ll examine store injection, a helper for computed fields and managing more complex data objects, like arrays and javascript objects.
Everything in this section can be found in the
vuex-advanced folder of the source code.
Vuex Store Injection
In this section, you use the
vuex-advanced folder in the okta-vuejs-data-binding-example project.
At the top level of your app, you can inject the Vuex store. This will make it available to all components in the project without needing to explicitly import it.
It looks like this:
const store = new Vuex.Store({ ... }) new Vue({ store, render: h => h(App), }).$mount('#app')
Now, in any component, you need only refer to:
this.$store to work with the Vuex data store.
Vuex Computed Fields
Computed fields are a key feature of Vue.js in general. Vuex provides advanced functionality to hook into the store to capture changes and make it easy to update the view automatically.
The
mapState is a helper wrapper that can be used for computed fields. It looks like this:
import { mapState } from 'vuex'; computed: mapState([ 'ary', 'obj' ])
This is a shorthand for referencing
state.ary and
state.obj from the Vuex store. With this setup, you can then reference the computed value in your template:
<h1>{{ary}}</h1>
Managing Arrays and Objects with Vuex
In the
basic section of this post, you were working with very simple values, like an integer for the progress meter.
Let’s take a look at how to manage arrays and objects with Vue.js and Vuex.
Because of the reactive nature of the data in the Vuex store, it’s important that you don’t delete or replace more complex objects and arrays. Doing so would break reactivity and updates would no longer be shown in views.
In the
vuex-advanced application, you can add and delete elements from both arrays and objects.
Take a look at the add and del functions in the Vuex store:
const store = new Vuex.Store({ state: { ary: [], obj: {} }, mutations: { addAry: function (state, elem) { state.ary.push(elem) }, delAry: function (state) { state.ary.splice(-1, 1) }, addObj: function (state, elem) { Vue.set(state.obj, elem.key, elem.value) }, delObj: function (state, name) { Vue.delete(state.obj, name); } } })
For arrays, use the
push function to add elements and the
splice function to remove elements.
For objects, use the
Vue.set to add elements and
Vue.delete to remove elements.
In either case, the original object is never destroyed, preserving reactivity.
Use Okta and Vuex for Easy Login with OpenID Connect
Now that you’ve seen various approaches for data binding with Vue.js, let’s take a look at a practical application of Vuex.
In this section, you’ll develop a small, Single Page App (SPA) that integrates with Okta for authentication.
The app makes use of OpenID Connect, so let’s start with a quick overview of this standard for authentication and identity management.
Everything in this section can be found in the
vuex-okta folder of the source code.
A Five-Minute Overview of OpenID Connect
OpenID Connect is an identity and authentication layer that rides on top of OAuth 2.0. In addition to “knowing” who you are, you can use OIDC for Single Sign-On.
OIDC is built for web applications as well as native and mobile apps. It’s a modern approach to authentication that was developed by Microsoft, Google and others. It supports delegated authentication. This means that I can provide my credentials to my authentication provider of choice (like Okta) and then my custom application (like a Vue.js app) gets an assertion in the form of an ID Token to prove that I successfully authenticated. OpenID Connect uses “flows” to accomplish delegated authentication. This is simply the steps taken to get from an unauthenticated state in the application to an authenticated state. For the SPA app, you’ll use the implicit flow for obtaining an ID Token. Here’s what the interaction looks like:
When you click the Login button in the app, you’re redirected to Okta to authenticate. This has the advantage of your app not being responsible for handling credentials. Once you’ve authenticated at Okta, you’re redirected back to the app with an ID Token. The ID Token is a cryptographically signed JSON Web Token (JWT) that carries identity information in its payload. The app can then extract user information from the token. Additionally, the app uses Vuex to store the ID Token, which can be used later to log out. To learn more about OAuth 2.0 and OIDC, check out these blog posts:
- An OpenID Connect Primer
- 7 Ways an OAuth Access Token is like a Hotel Key Card
- Is the OAuth 2.0 Implicit Flow Dead?
Note: In cases where you are using OpenID Connect to interact with an API, you’d also want to get back an access token. For these use cases, you would not want to use the implicit flow, but rather the authorization code wtih PKCE flow.
Set Up Okta for the SPA App
Head on over to to create an Okta org.
Login to your Okta org. Click Applications on the top menu. Click Add Application. Click Single-Page App and click Next.
Give your app a name. Change the
Login redirect URIs field to
Click Done.
There’s one more thing we need to configure in order to support logout.
Click Edit. Uncheck Allow Access Token with implicit grant type (we will only be using the ID Token in this example). Click Add URI next to Logout redirect URIs. Enter:
Use Vuex and the Okta Auth Javascript Library
The okta-auth-js library includes support for OpenID Connect.
You can add it to your Vue.js project like so:
npm install @okta/okta-auth-js --save
Just like before, you configure Vuex in
main.js
const store = new Vuex.Store({ state: { user: {}, idToken: '' }, mutations: { setUser: function (state, elem) { Vue.set(state.user, elem.key, elem.value); }, setIdToken: function (state, value) { state.idToken = value; } } });
In this case, the data store keeps information about the
user and the raw JWT in
idToken.
The
Home.vue file is the only component in this app. Okta will redirect back to this component both when you log in and when you log out.
Here’s the code to import the okta-auth-js library and set up some constants:
import OktaAuth from '@okta/okta-auth-js'; const ISSUER = ' const CLIENT_ID = '{yourClientId}'; const REDIRECT_URI = ' var authClient;
The
created function is run when the component is first created. In this function, the
authClient is setup:
created() { authClient = new OktaAuth({ issuer: ISSUER, clientId: CLIENT_ID, redirectUri: REDIRECT_URI }); }
The template is very simple. It shows the user information (which will be empty if you’re not logged in). It shows a Login button if you’re not currently authenticated and a Logout button if you are already authenticated.
<template> <div> <h1>Data Binding with Vue.js</h1> <h3>User Info:</h3> <div class="div-centered"> <codemirror :</codemirror> </div> <b-button Login</b-button> <b-button Logout</b-button> </div> </template>
The app uses CodeMirror to display the JSON representing the user information formatted, indented and with line numbers.
Notice
:value="userStr" in the
<codemirror> tag. If you examine the
computed section of the script, you can see how
userStr is computed:
userStr() { return JSON.stringify(this.$store.state.user, null, '\t') }
JSON.stringify is used so that codemirror can display the information properly. The important bit is:
this.$store.state.user. This retrieves the bound value from the Vuex store.
When you first browse over to the app at the view is pretty sparse:
When you click Login, you’re redirected over to Okta:
Here’s the
login function in the
methods section:
login() { authClient.token.getWithRedirect({ responseType: 'id_token', scopes: ['openid', 'email', 'profile'] }) }
The options passed into
getWithRedirect ensure that you get back an
id_token as well as specifying some default scopes in the request.
After you authenticate, Okta redirects back to The
mounted function is executed once the component is completely loaded and ready for action.
async mounted() { // check for tokens from redirect if (location.hash) { var tokenInfo = await authClient.token.parseFromUrl(); this.$store.commit( 'setUser', {key: 'claims', value: tokenInfo.claims} ); this.$store.commit('setIdToken', tokenInfo.idToken); } }
The redirect from Okta includes the
id_token value in the URL in the fragment section. It looks something like this:
The
mounted function first checks to see if there’s a hash (#) in the location URL. Note: When you first browse to the app, there is no hash in the URL, so the
if statement is not entered.
authClient.token.parseFromUrl() grabs the id_token from the url fragment, validates the cryptographic signature and extracts the json payload (the claims) from it.
The next two lines save the parsed claims as well as the raw JWT in the Vuex store. As we saw before, the code is using
this.$store.commit to take advantage of the mutations defined in the store.
Because of the data binding and computed values we setup earlier, the user info is now displayed in the component.
Now when you click Logout, the app uses the information in the Vuex store to properly execute the logout and destroy your session with Okta.
To log out with OIDC, you make a GET request of a
/logout endpoint. You pass along the ID Token (as the raw JWT), as well as a redirect URI so that Okta can redirect back to the app after logout is complete. This is all set up in the
logout function in the SPA app:
logout() { window.location.href = ISSUER + '/v1/logout?id_token_hint=' + this.$store.state.idToken + '&post_logout_redirect_uri=' + REDIRECT_URI }
This closes the loop on our SPA app, its use of the okta-auth-js library in conjunction with Vuex to manage data stores and how that data is bound to the component.
Pick the Optimal Vue.js Data Binding Approach
All the code for this post can be found on GitHub.
In the simplest cases, a global data store may suit your needs. Even for more complex applications, the storage pattern may suffice. I’ve written a number of Vue.js applications that are in production that use the storage pattern. This includes the online version of the Zork game that teaches you a little about OAuth 2.0.
Vuex offers a tradeoff between slightly more complex code and a high degree of stability and testability. Vuex makes it easy to inject the data store into your components using
this.$store and ensures that data cannot be directly updated by components.
As is almost always the case, you’ll need to pick the approach that makes the most sense for your use-case.
Learn More About Vue.js and Secure User Management
Okta’s written a Vue.js integration that makes integrating with Okta for secure auth a snap. It’s part of our open-source javascript OpenID Connect library. You can go directly to the Vue.js integration as well.
At Okta, we say: friends don’t let friends build auth! If you’re working on a project that requires secure, reliable authentication and authorization, get a free developer account from Okta.
Here are some more Vue.js posts that might interest you:
- Build Your First PWA with Vue and TypeScript
- Use Schematics with Vue and Add Authentication in 5 Minutes
- Build a Single-Page App with Go and Vue Check out the Okta Developer YouTube channel.
You can follow us on social @oktadev
Okta Developer Blog Comment Policy
We welcome relevant and respectful comments. Off-topic comments may be removed. | https://developer.okta.com/blog/2019/07/18/vuejs-data-binding-options | CC-MAIN-2022-21 | refinedweb | 3,653 | 66.44 |
menu_pattern(3) UNIX Programmer's Manual menu_pattern(3)
menu_pattern - get and set a menu's pattern buffer
#include <menu.h> int set_menu_pattern(MENU *menu, const char *pattern); char *menu_pattern(const MENU *menu);
Every menu has an associated pattern match buffer. As input events that are printable ASCII characters come in, they are appended to this match buffer and tested for a match, as described in menu_driver(3). NULL on error. The func- tion set_menu_pattern may return the following error codes: E_OK The routine succeeded. E_SYSTEM_ERROR System error occurred (see errno). E_BAD_ARGUMENT Routine detected an incorrect or out-of-range argument. E_NO_MATCH Character failed to match.. | http://mirbsd.mirsolutions.de/htman/sparc/man3/menu_pattern.htm | crawl-003 | refinedweb | 105 | 50.94 |
My 2AA battery sensor
Don't know if it's too obvious, but I haven't seen many basic designs so I thought I'd share mine. I have 10+ of these running, with a runtime of 6-8 months per battery set and very good results.
Worth noting is that this "basic" design is based on a normal 3.3V 8MHz Arduino Pro Mini with just the voltage regulator and power LED removed. The other common low-power tweaks (new bootloader, decreased clock speed and disabled brown-out voltage detection) are very nice but not part of this design, because I think they are one step beyond "basic". So keep it simple for now... (Or look here.)
It's a normal battery-powered, slim, one-piece unit with battery monitoring and room for small sensors inside. In the picture a DHT22 has been squeezed in, but with better preparation there'll be more usable space.
Some benefits I noticed with this design are:
- Easy access to the Pro Mini serial pins and reset button, and a clear view of the pin 13 LED.
- Easy to remove/replace the batteries.
- Easy in/out of components since they're just held in place by their firm but flexible wires.
- The "keyhole" in the cap is still available, so the unit can easily hang on the wall.
- The radio is located far from the wall side, with reasonably small risk of ending up in the radio shadow behind the batteries.
- Small, but still good battery life using (the most?) common standard batteries.
- (I thought I'd put good WAF here, but she said the box color was "too gray".)
The cost of building this sensor is that it requires some soldering and working with small/short wires. (I prefer small/medium size "breadboard type" wires for this.) I unsoldered the radio module pin headers and trimmed the ones on the step-up. After building 10 of these in 3 batches I've focused more on productivity and less on the product, so I'm aware of possible improvements and maybe I'll introduce some in my ongoing next batch of 10.
The 3.3V step-up supply feeds the Arduino and the sensors that require 3.3V to work. The radio is powered straight from the batteries, because of its high quality demands on its supply and because it works with a voltage all the way down to 1.9V. I still use the 4.7uF capacitor on the radio as a precaution.
Since it's battery powered and low power consumption is wanted, I always remove the voltage regulator and power LED from the Arduino Pro Mini, and the power LED from the step-up regulator (or its series resistor).
I set my power monitoring to report linearly with 1.9V as 0% (Vmin for the radio) and 3.3V as 100% (instead of 3.44V, just to increase resolution). Power consumption looks very good. I usually use normal or high frequency updates in my nodes (30s - 3min) and I've tried with really poor, used batteries, but all my battery levels in Datamine (the Vera plugin) are looking good. I've measured ~1mA AC / 0.1mA DC current draw in sleep mode. Considering the real-world results from my nodes with longer sleep durations, the DC value is closest to the truth.
More photos of a complete node can be found here below.
Main material (note that some links are for >1 pcs). All except the battery holder are from the MySensors store.
- 27x54x75mm Case
- 2AA Battery holder (never mind that it's actually a different one in this particular image.)
- Arduino Pro Mini 3.3V 8Mhz
- Radio NRF24L01+
- 3.3V step-up regulator (remove the power LED, and DO NOT FEED THE RADIO WITH THIS: just the Arduino and sensors.)
- Prototype PCB (sliced into 2-row pieces, but I suggest making 3 rows instead: it fits the step-up better, and more space is easier to work with when soldering.)
- Sensors, e.g. DHT, DS18B20s etc.
- Wires. I used these, but I'm sure any wire good for breadboarding will do.
Edit:
I forgot to list the 2 resistors and 1 capacitor for battery monitoring, as well as the sensor-dependent 4k7 pull-up resistor (if used). The battery monitoring circuit is under the 2-row proto PCB and the pull-up under the Arduino. See the build site and store for more info. With a 3-row proto PCB I'd place the pull-up there as well.
The 4.7uF cap for radio is of course straight on top between vcc and gnd, but a little above so wires can be soldered here too.
I can also mention that I find a good order for making these are:
- Glue the battery holder to the case "bottom".
- a. Pre-assemble arduino with radio soldering as short wires as possible. And attach all (~30mm) wires leaving these two.
b. Pre-a. the proto-pcb with all its components and with the step-up regulator.
c. Pre-a. sensor(s) that will be used.
- Connect all things made in the previous step.
- Connect battery when needed but don't put it all in to case until software has been loaded and first tests been done.
As always; if unsure or in trouble; test individual parts before putting them together.
Edit 2:
Here's the layout with connections. Since the schematic is simple and covered elsewhere at the Build site I found it more informative to show the design as a layout with wiring.
Edit 3:
Some example sketches are found further below in this thread. E.g.
Then the usual ads linked from the Bom above:
@Tibus Tests and forum experience show that the radio is too sensitive to noise and ripple and will perform very poorly. A relatively large low-ESR (expensive) capacitor can compensate for this, but I prefer to simply bypass the step-up for the radio. If you're not aware of this issue, you can look forward to a lot of frustration with your setup. Look here for some more info.
I updated the first post regarding sleep mode power consumption. A current draw of <0.1mA was too optimistic. My Fluke 87 detected mostly AC current. A beginner's mistake and really embarrassing. My excuse is that I do this hobby between 1 and 2 am.
Edit: This post should have been more elaborate. Please read further below for more info. To sum up - it's pretty good after all.
Since it's battery powered and low power is wanted, I always remove the voltage regulator and power LED from the Arduino Pro Mini, and the power LED (or its series resistor) from the step-up regulator.
Do you mind sharing how?
Thanks!
No. There are two parallel supplies from the same battery. One is simply battery->radio, and the other is battery->step-up regulator->Arduino (and sensors).
Since it's battery powered and low power is wanted, I always remove the voltage regulator and power LED from the Arduino Pro Mini, and the power LED (or its series resistor) from the step-up regulator.
Do you mind sharing how?
There are descriptions on the build site. Instead of cutting the traces I just unsolder the regulator (the black piece with 5 legs) and the red power LED (usually right next to it). If you're planning to reuse any of them, it's probably better to cut traces or unsolder series resistors. For the step-up in particular it's easy to see which resistor the LED connects to.
Because of the step-up regulator the internal Vref method isn't possible. I use the voltage divider from the build site and the code I posted earlier here. But for 2 AAs use VMIN=1.9V, VMAX=3.3V and volts-per-bit = 0.003363075; Vmin and Vmax you can adjust if you want.
Thanks
You're welcome.
Edit: The logic level difference between radio and Arduino is outside the specification when the radio operates at 1.9V and the Arduino at 3.3V, but from my own tests and experience 1.9V works OK as a very low limit. A battery alarm should of course be set above 0% anyhow.
As a general comment I'd like to complete the cross reference to this thread. It says why you should avoid the China step-up converter due to its high quiescent power draw (~1mA). If space is available, e.g. a 4AA supply via the APM's built-in regulator can give <25% of the sleep mode power consumption.
Edit: After a few months testing, I'm reassessing the efficiency of the China step-up. See below in this thread.
runtime 1-4 weeks
So this gives a runtime of 1-4 weeks? I am just starting out and wanted to confirm this. There are other posts claiming 6 months, but I suspect that is in theory and the reality is 1-4 weeks as you have experienced. Also, are any of your sensors interrupt driven, so they only wake up on outside events such as a door switch? - I wonder if that will give more battery life.
@MatrixIII No no, that was for how long I have had my system running when I wrote the first post in this thread. The battery time in practice highly depends on how much the node sleeps. Always on and always asleep will give you the extremes. I'd say 6 months is very reasonable. I'll see if I can show some examples...
Ok, here are some of my logged battery levels. Note that some nodes had been running before logging was turned on, and that I also like to drain already used batteries. I'll leave it to the reader to extrapolate the trends and estimate lifetime, but I suspect that the last 20% shouldn't be taken into account. The first example shows briefly why:
First, one odd but illustrative example where I used two rechargeable 700mAh AAAs with one DS18B20, (very) varying temperature and only 30s sleep. Any node's transmission activity will temporarily produce a lower voltage reading and make the curve look noisy. This band grows thick near the end of life, and these low spikes are the ones to keep an eye on. Apart from that we see the typical "tilted S"-shaped battery discharge curve.
Next one is a rather nervous DHT22 with 30s sleep and supplied by two (initially fresh) normal alkaline AAs.
This one has two DS18B20s, 30s sleep. The sensors are attached to a central heating pipe. The heating system was turned on 1 Nov, which is visible as a new, more battery consuming trend.
Here is the interrupt-driven (front) door switch sensor as requested. Unfortunately not with fresh AAs. Since it's almost always sleeping, I think the power draw is surprisingly high. The explanation is of course the 1mA to the China step-up.
Finally, there are two nodes started up at the same time, equally equipped with one BMP180 and one DHT22 each. Sleep time is a more normal 15+5 min. (15+5 because I have now learned to filter the DHT readings, and I do this here by measuring a value 5 min before the real processing, for averaging purposes and to reduce the risk of invalid readings.)
I'm confident this will survive 6 months since the decrease rate is 5% per month and it's still in the beginning of the "tilted S". The upper curve is with fresh batteries, the lower with used.
Great job with the graphs! Really interesting.
- Anticimex Contest Winner last edited by
Very good analysis. I have just bought two lipo cells and a charger and a bunch of 9V battery cables. Sometime next year I hope to get the time to evaluate their use for power source to the sensors. Then I will also test my slightly more conservative battery level circuit described here.
Thanks.
I thought someone would ask why I expect a 3000mAh battery to supply 1mA for 6 months ?
- RJ_Make Hero Member last edited by RJ_Make
Thanks.
I thought someone would ask why I expect a 3000mAh battery to supply 1mA for 6 months ?
Nice Work!! Oh and with the mods you have in place, I don't think you should have much problem getting to 6 months..
- Zeph Hero Member last edited by
I thought someone would ask why I expect a 3000mAh battery to supply 1mA for 6 months ?
Ok, I'll be the straight man here: why do you expect a 3000mAh battery to supply 1mA for around 4400 hours?
@Zeph Thank you.
Because it's 90% AC current (from a DC source), and I think the China step-up isn't really that crappy after all. Energizing and de-energizing a coil with no load will look like this without any real power being produced. Of course there are power losses and wear on the battery etc., but not comparable to 1mA DC. Deeper investigation of this should be the subject of a new thread.
- RJ_Make Hero Member last edited by
I'm getting about 3-4 months from my less active sensors, and the only hardware mod I'm using is the Arduino LED removal. So I can imagine removing the regulator and step-up LED would save a fair amount of energy.
I really need to get around to removing those....
These are my results, with a low power Arduino powered from 2 AA alkaline batteries.
Door/window sensor (front door - opens frequently). On October 4 the battery status was 64%, on December 18 it was 62% (still at 62%). The 100% spike is when I reprogram my sensor. In 75 days the battery drops 2% -> 100% in more than 10 years.
Code is available on GitHub.
EasyIoT server battery status
Temperature and humidity sensor with DHT22 and step-up regulator. The Arduino and NRF24L are powered directly from 2 AA batteries; the DHT22 is powered from the step-up regulator, but only when a measurement is taken. The battery drops 14% in 75 days. Actually this design is not so good. I'm testing a new design without the step-up regulator and with a better temperature/humidity sensor. 2 AA alkaline batteries could last 10 years.
This is my water leak sensor. I'm using 2 wasted AA batteries for testing, and the voltage actually rises. Current consumption is about 6uA (self discharge is about 10 times bigger) practically all the time.
@dopustko Looks nice. Can you please share a more detailed description of your hardware?
Edit: I searched and saw that you're running at 1MHz and soon with HTU21D. Very elegant!
@m26872 here is my complete description for MySensors door/window sensor. For other sensors I will add descriptions in the future...
@dopustko Great! I love your low-power guide. Wow! That should be mandatory for all sensors that work below 3.3V like e.g. a door switch. And you're able to use the internal battery monitoring method and all.
Also, when I looked at your diagram I realized that I probably made a mistake with my door switch connection to the Arduino and have a substantial current draw through the switch. If true, it means that the step-up is innocent.
@m26872 In fact most of sensors can work on 2 AA batteries without step up regulator if sensor is selected carefully. That also minimize number of components and battery consumption.
I'm using a 1M pull-up resistor instead of the internal resistor (50-60K) - this lowers power consumption. 1M is quite big, but the wires are short, so it's working OK.
Hi @dopustko, I've checked out your cool website. Can I ask a few questions regarding your setup?
I saw this page; in order to get low power consumption, you burned fuses and disabled brown-out detection? Is it possible to do that without a USBtinyISP?
@funky81 Thx. You can burn fuses with another Arduino - that's how I do it. Just google ArduinoISP.
- Patrick Mcgillan last edited by
Good info here. The thought comes to mind of using a small solar cell from a defective walkway light to add a little charge back into the system whenever there is any kind of light around.
Just an update on my battery levels. My VeraLite Datamine plugin won't plot for me any longer. I suspect I'm out of VeraLite memory to handle all the data. This is it. (Look above for the last graphs.)
Node 16: BatteryLevel 51
Node 101: Dead 03 Feb
Node 110: BatteryLevel 57
Node 105: BatteryLevel 80
Node 106: BatteryLevel 53
As I hoped, the decrease rate for 105 and 106 is now less than 5% per month, and the 6-month target is already half passed.
The revenge of the Chinese step-up ?!
Has someone else noticed that one of the two batteries drained by the step-up always ends up with negative(!) charge? Due to the AC load? Could it be possible to extend battery life by adding some kind of capacitor?
Nodes 105 and 106 are down just 1% since the last post (22 days ago). Not bad. Relatively stable readings and low activity, but that doesn't matter since we all expected the sleep mode consumption to be the worst.
@EasyIoT: Thank you for your nice descriptions of the low power sensors.
I have some questions regarding the temperature sensor:
Did you use a "Low power modified" Arduino for that as well?
How are you able to power the sensor via the step-up only when needed? Do you power the step-up via a digital pin or something?
@m26872: Also thanks for your design. I am thinking of building something similar. So you got now about 5% battery drop per month using the china stepup with desoldered LED?
So you got now about 5% battery drop per month using the china stepup with desoldered LED?
Today I read Node 105: 78% and Node 106: 51%. That's only 2% in the last 38 days.
Edit: Perhaps using the "%" here is a little careless, but I think everyone knows that we're talking battery level.
@daenny yes, I'm using a DO pin and a MOSFET connected to the step-up regulator. But this is a complicated solution. It's better to use a different temperature sensor which can operate down to 1.8V. Right now I'm testing a custom board with NRF24L and HTU21D (it's not cheap, but it can work down to 1.8V). Results are pretty good. A temperature and humidity sensor can operate more than 5 years on 2 AA batteries. For example, a door/window sensor or water leak sensor can work more than 10 years on 2 AA batteries. It seems that for those sensors AA batteries are overkill.
@m26872 I still don't get why I can't reach the same power efficiency as you.
I already follow all the requirements in your first post - the only difference is that I use a breadboard.
The minimum (sleep) consumption I get is still 2.4mA; compared to yours that's a huge difference. I don't know what's wrong with my setup.
My sketch wakes up every 10s (consumption up to 3.5mA), while the lowest in sleep is 2.4mA.
```
#include <avr/sleep.h> // Sleep Modes
#include <MySensor.h>
#include <SPI.h>

MySensor gw;

#define BATTERY_SENSE_PIN      // (value lost in the original post; unused below)
#define SLEEP_IN_MS 86400000   // 1 day
#define PROD false

int oldBatLevel;

void setup()
{
  gw.begin(NULL,1);
  gw.sendSketchInfo("Basic-Sketch-MySensor", "1.0");
  pinMode(2,INPUT);  digitalWrite(2, LOW);
  pinMode(3,INPUT);  digitalWrite(3, LOW);
  pinMode(4,OUTPUT); digitalWrite(4, LOW);
  pinMode(5,INPUT);  digitalWrite(5, LOW);
  pinMode(6,INPUT);  digitalWrite(6, LOW);
  pinMode(7,INPUT);  digitalWrite(7, LOW);
  pinMode(8,INPUT);  digitalWrite(8, LOW);
  oldBatLevel = -1;
  sendValue();
  Serial.println("Startup Finished");
  TurnOnLed();
}

void TurnOnLed()
{
  digitalWrite(4,HIGH);
  gw.sleep(1000);
  digitalWrite(4,LOW);
  gw.sleep(100);
}

void loop()
{
  sendValue();
  gw.sleep(1000*10);
} // end of loop

void sendValue()
{
  gw.powerUp();
  int batLevel = getBatteryLevel();
  if (!PROD) {
    gw.sendBatteryLevel(batLevel);
    TurnOnLed();
  } else {
    if (oldBatLevel != batLevel) {
      gw.sendBatteryLevel(batLevel);
      oldBatLevel = batLevel;
      TurnOnLed();
    }
  }
  gw.powerDown();
}

// Reads Vcc in mV via the internal 1.1 V reference. Parts of this function
// were garbled in the original post; the missing lines are restored from the
// standard "read Vcc" routine.
long getBatteryLevel()
{
  long result;
  ADMUX = _BV(REFS0) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1);
  delay(2);                        // Wait for Vref to settle
  ADCSRA |= _BV(ADSC);             // Start conversion
  while (bit_is_set(ADCSRA,ADSC)); // Wait until done
  result = ADCL;
  result |= ADCH<<8;
  result = 1126400L / result;      // Back-calculate AVcc in mV
  return result;
}
```
@m26872 I still dont get it. Why I cant get the same efficient power like you....
I see that your sketch uses the internal battery monitoring method. That isn't possible when using the design in this thread's subject, with a step-up regulator that makes Vbat different from Vcc. Are you sure you're not confusing my design with EasyIoT's?
Anyhow, I can't explain the sleep mode consumption, it seems like something isn't sleeping like it should.
Here's my sketch for Node 105/106:
```
// My own combo node for DHT22, BMP180 and battery monitoring.
// 20141019
// DHT-22 Working voltage: DC 3.3-5.5V (so connect after the step-up regulator).
// BMP180 (compatible with BMP085) working voltage 1.8-3.6V. Different boards
// with different power and pullups. Datasheet for ic only:

#include <SPI.h>
#include <MySensor.h>
#include <DHT.h>
#include <Wire.h>
#include <Adafruit_BMP085.h> // N.b. The new library includes the function readSealevelPressure()

#define NODE_ID 106    // Manually assign Node Id <<<<<<<<<<<<<<<<<<< ENTER THIS !!!!
#define ONE_WIRE_BUS 3 // Pin where the Dallas sensor bus is connected. <<<<<<<<<<<<<<< DON'T FORGET 4k7 PULL-UP
#define CHILD_ID_HUM 0  // (restored; these child-id defines were garbled in the post)
#define CHILD_ID_TEMP 1
#define BARO_CHILD 2
//#define TEMP_CHILD 3 // Commented since don't need the Bmp tempsensor
#define HUMIDITY_SENSOR_DIGITAL_PIN 3 // <<<<<<<<<<<<<<<<<<<<<<<< Use 4k7 Pull-up. MySensors webpage doesn't use, datasheet does. Adafruit does.

// (restored; these constants were lost in the post - values from the battery
// monitoring description earlier in this thread)
#define VMIN 1.9
#define VMAX 3.3
#define VBAT_PER_BITS 0.003363075

unsigned long SLEEP_TIME = 15*60000;  // sleep time between reads (in milliseconds)
unsigned long preSleepTime = 5*60000; // sleep time for extra pre hum-reading for simple averaging filter

Adafruit_BMP085 bmp = Adafruit_BMP085(); // Digital Pressure Sensor
MySensor gw;
DHT dht;

float lastTemp;
float lastHum;
float lastPressure = -1;
//float lastBmpTemp = -1; // Commented since don't need the Bmp tempsensor

MyMessage msgHum(CHILD_ID_HUM, V_HUM);
MyMessage msgTemp(CHILD_ID_TEMP, V_TEMP);
//MyMessage tempMsg(TEMP_CHILD, V_TEMP); // Commented since don't need the Bmp tempsensor
MyMessage pressureMsg(BARO_CHILD, V_PRESSURE);

int BATTERY_SENSE_PIN = A0; // select the input pin for the battery sense point
int oldBatteryPcnt = 0;

void setup()
{
  analogReference(INTERNAL); // use the 1.1 V internal reference for battery level measuring
  delay(500); // Allow time for radio if power used as reset <<<<<<<<<<<<<< Experimented with good result
  gw.begin(NULL,NODE_ID); // Startup and initialize MySensors library.
  dht.setup(HUMIDITY_SENSOR_DIGITAL_PIN);
  gw.sendSketchInfo("EgHumBarTemBatv2", "2.0 20141110"); // Send the Sketch Version Information to the Gateway
  if (!bmp.begin()) {
    Serial.println("Could not find a valid BMP085 sensor, check wiring!");
    while (1) { }
  }
  gw.present(CHILD_ID_HUM, S_HUM); // Register all sensors to gw (they will be created as child devices)
  gw.present(CHILD_ID_TEMP, S_TEMP);
  gw.present(BARO_CHILD, S_BARO);
  // gw.present(TEMP_CHILD, S_TEMP); // Commented since don't need the Bmp tempsensor
}

void loop()
{
  delay(dht.getMinimumSamplingPeriod());
  float humidity_1 = dht.getHumidity();
  if (isnan(humidity_1)) {
    Serial.println("Failed reading humidity_1 from DHT");
  }
  // else {
  //   Serial.print("Hum1: ");
  //   Serial.println(humidity_1);
  // }

  gw.sleep(preSleepTime); // sleep shortly until the "real" processing happens

  delay(dht.getMinimumSamplingPeriod());
  float humidity_2 = dht.getHumidity();
  if (isnan(humidity_2)) {
    Serial.println("Failed reading humidity_2 from DHT");
  }
  // else {
  //   Serial.print("Hum2: ");
  //   Serial.println(humidity_2);
  // }

  if ( isnan(humidity_1) && isnan(humidity_2) ) {
    Serial.println("Failed reading humidity_1 and humidity_2 from DHT");
  }
  else if ( isnan(humidity_2) && !isnan(humidity_1) ) { // && abs(humidity_1-lastHum)>2. ) { // Possible hysteresis
    gw.send(msgHum.set(humidity_1, 1));
    lastHum = humidity_1;
    // Serial.println("Humidity_1 sent to gw.");
  }
  else if ( isnan(humidity_1) && !isnan(humidity_2) ) { // && abs(humidity_2-lastHum)>2. ) {
    gw.send(msgHum.set(humidity_2, 1));
    lastHum = humidity_2;
    // Serial.println("Humidity_2 sent to gw.");
  }
  else {
    // Serial.println("Else mean calc");
    float humidity_m = (humidity_1+humidity_2)/2. ; // Averaging the two.
    // if ( abs(humidity_m-lastHum) > 2. ) {
      gw.send(msgHum.set(humidity_m, 1));
      lastHum = humidity_m;
      // Serial.println("The mean humidity_m sent to gw.");
    // }
  }

  float temperature = dht.getTemperature();
  if (isnan(temperature)) {
    Serial.println("Failed reading temperature from DHT");
  }
  else if (temperature != lastTemp) {
    lastTemp = temperature;
    gw.send(msgTemp.set(temperature, 1)); // Always send temperature
    // Serial.print("Sent temp: ");
    // Serial.println(temperature);
  }

  float pressure = bmp.readPressure()/100;
  // float pressureS = bmp.readSealevelPressure(75.)/100;
  // Serial.print("Pressure = "); Serial.print(pressure); Serial.println(" Pa");
  // Serial.print("SeaLevelPressure = "); Serial.print(pressureS); Serial.println(" Pa");
  if (pressure != lastPressure) {
    gw.send(pressureMsg.set(pressure, 0));
    lastPressure = pressure;
    // Serial.print("Sent pressure: ");
    // Serial.println(pressure);
  }

  // float bmpTemperature = bmp.readTemperature(); // Commented since don't need the Bmp tempsensor
  // // Serial.print("Temperature = "); Serial.print(bmpTemperature); Serial.println(" *C");
  // if (bmpTemperature != lastBmpTemp) {
  //   gw.send(tempMsg.set(bmpTemperature,1));
  //   lastBmpTemp = bmpTemperature;
  // }

  int sensorValue = analogRead(BATTERY_SENSE_PIN); // Battery monitoring reading
  // Serial.println(sensorValue);
  float Vbat = sensorValue * VBAT_PER_BITS;
  // (the lines below were garbled in the post and are restored to match the
  // behaviour described in this thread: send battery level only on change)
  int batteryPcnt = static_cast<int>(((Vbat-VMIN)/(VMAX-VMIN))*100.);
  if (oldBatteryPcnt != batteryPcnt) {
    gw.sendBatteryLevel(batteryPcnt);
    oldBatteryPcnt = batteryPcnt;
  }

  gw.sleep(SLEEP_TIME); // sleep a bit
}
```
@m26872 in my previous sketch, yes, i'm trying to use EasyIOT's (Thanks @EasyIoT )
Anyway, I'll flash with yours. I'll let you know the result.
I declare the 6 months criteria easily passed. And the chinese step up regulator to be good value for money. Next target; 12 months.
Hi!
This is also what I'm trying to do... I have the same components/materials as you and added the resistors for battery measurement. I don't get this long life (sleeps 15 min, activates and sends).
Can you post some sort of schematic or more detailed pictures so I can see your connections?
Would be great!
Here is my own try at a schematic:
A tip for much better battery life: burn a new bootloader for 1MHz. Then you can attach the Arduino and NRF directly to the batteries, and you can measure the battery with the internal voltage meter of the 328p. If you use a DHT, you attach the step-up converter and connect only the DHT to it.
@sundberg84
I've looked at your lovely drawing before and I don't see anything wrong with it, except that the 0.1uF capacitor should be in parallel with just the 470k resistor; but it shouldn't matter for battery lifetime. What would be more interesting is to see your sketch. Or do you use the same as I do?
A schematic and a few better pictures are on my to-do list, but they will probably come sooner now that you asked.
@m26872
Pretty much the same, I have stolen the majority of your code.
I've added an array to get an average battery value each hour:
```
#include <SPI.h>
#include <MySensor.h>
#include <DHT.h>

// Set these for every node!=========================
#define SketchName "Sovrum Uppe"
#define SketchVer "1.0"
#define NodeID           // (the node number was lost in the post)
#define CHILD_ID_HUM 0   // (restored; these defines were garbled in the post)
#define CHILD_ID_TEMP 1
#define HUMIDITY_SENSOR_DIGITAL_PIN 3
int BATTERY_SENSE_PIN = A0; // select the input pin for the battery sense point
unsigned long SLEEP_TIME = 15*60000;
//unsigned long SLEEP_TIME = 5000; //debug
// Also set gw.begin(,,,)
// Set these for every node!=========================

// (restored; these constants were lost in the post - see earlier in the thread)
#define VMIN 1.9
#define VMAX 3.3
#define VBAT_PER_BITS 0.003363075

float lastTemp;
float lastHum;
boolean metric = true;
int batteryPcnt = 0;
int batLoop = 0;
int batArray[4]; // (the post had batArray[3], but four values are stored below)

MySensor gw;
DHT dht;

MyMessage msgHum(CHILD_ID_HUM, V_HUM);
MyMessage msgTemp(CHILD_ID_TEMP, V_TEMP);

void setup()
{
  // use the 1.1 V internal reference
  analogReference(INTERNAL);
  delay(500); // Allow time for radio if power used as reset <<<<<<<<<<<<<< Experimented with good result
  gw.begin(NULL, NodeID, false);
  dht.setup(HUMIDITY_SENSOR_DIGITAL_PIN);
  // Send the Sketch Version Information to the Gateway
  gw.sendSketchInfo(SketchName, SketchVer);
  // Register all sensors to gw (they will be created as child devices)
  gw.present(CHILD_ID_HUM, S_HUM);
  gw.present(CHILD_ID_TEMP, S_TEMP);
  metric = gw.getConfig().isMetric;
}

void loop()
{
  int sensorValue = analogRead(BATTERY_SENSE_PIN); // Battery monitoring reading
  delay(1000);

  delay(dht.getMinimumSamplingPeriod());
  float temperature = dht.getTemperature();
  Serial.print("Temp: ");
  Serial.print(temperature);
  Serial.println(" C");
  if (temperature > 0) {
    gw.send(msgTemp.set(temperature, 1));
  }
  delay(1000);

  float humidity = dht.getHumidity();
  Serial.print("Hum: ");
  Serial.print(humidity);
  Serial.println(" %");
  if (humidity > 0) {
    gw.send(msgHum.set(humidity, 1));
  }
  delay(1000);

  float Vbat = sensorValue * VBAT_PER_BITS;
  int batteryPcnt = static_cast<int>(((Vbat-VMIN)/(VMAX-VMIN))*100.);
  Serial.print("Battery percent: ");
  Serial.print(batteryPcnt);
  Serial.println(" %");

  batArray[batLoop] = batteryPcnt;
  if (batLoop > 2) {
    batteryPcnt = (batArray[0] + batArray[1] + batArray[2] + batArray[3]);
    batteryPcnt = batteryPcnt / 4;
    Serial.print("Battery percent (Avg (4):) ");
    Serial.print(batteryPcnt);
    Serial.println(" %");
    gw.sendBatteryLevel(batteryPcnt);
    batLoop = 0;
  } else {
    batLoop = batLoop + 1;
  }

  gw.sleep(SLEEP_TIME); //sleep a bit
}
```
@Sweebee
It's already been discussed earlier in this thread (see EasyIoT's posts above); it's nevertheless a very good tip. I also have some test nodes running, but I consider it to be the next step from the "basic" design which was this thread's concept. If I post something with this low-power-tweaked design, it'll therefore be in a new thread. I should probably write something about this as an update in the first post...
@sundberg84
I'm no code expert, and the only thing I can think of is that your "delay(1000)" calls will keep the power hungry radio awake. If you need delays (apart from debugging), I'd prefer sleep instead. What does your battery lifetime look like?
UPDATE
I've updated the first post with
- A wiring layout.
- Added a text about the low-power option
- A few minor details.
- sundberg84 Hardware Contributor last edited by
Great layout/way to show your project - really nice!
It's exactly how I do it, so I'm getting a bit jealous here... One node is down after a week.
It might be the hardware, or my not-so-good soldering shorting something out.
I will try some more nodes now that you have confirmed I'm doing it the right way and that it should work.
@sundberg84
Thx, good you liked it.
With such fast battery drain it should be easier to trace the current draw. First you'll need to measure the current. A second option is of course the good old switch-and-replace troubleshooting method.
I've read about hardware and quality issues, but let's hope it isn't that...
@m26872 What kind of pro-mini do you use?
I'm using Deek-Robot 3.3V 8MHz boards and it seems like there is a problem with these: () I kill them when I try to remove the voltage regulator, so I guess I haven't succeeded with that yet.
I don't have any good equipment to measure current... I've made a new sensor today - let's see how it operates.
Here are some close-up pictures of my Node 105 (which I opened with maximum care not to spoil my long term testing)
Node 105 starts to become famous here
We all follow him(?) with great excitement.
- epierre Hero Member last edited by
The news is that my Vera Lite gave up (soft-wise) after all hacking, plug-ins and un-clean factory resets, so I gave up on her (and Z-wave too, at least for a while). We've been fighting with each other for too long, so I'd say the divorce was welcome.
Instead, in the controller jungle I decided to try Fhem, aware of the challenges as I speak neither German nor Perl. I'm still in the sense-but-no-control phase and have a few test nodes running, including "the" node 105. (Very easy transition btw, just one press of the reset button and a node is in production with a new controller.)
Back home today after 2 weeks of holiday, I think it's looking good. The only problem now is of course that my trending is broken. If anyone knows how to merge the old trend data with the new, you're very welcome. Or just how to rescue the Datamine data (from the NAS)? If possible? I haven't researched anything yet.
Just an idea... I'm not sure if someone mentioned it:
If sending consumes so much power... what about collecting sensor data every 5 minutes and then sending the data (array) once per hour? Of course there would be some calculation before logging in the controller (fhem), but this way I can save power... What do you think?
@Meister_Petz Kind of what I already do in the test nodes 105/106, if you look at the code above. Update frequency and battery lifetime is a balance that can be adjusted just as you wish. Right now these nodes send an update every 15 min and take a sample for calculation 5 min before that, which is about what I need. If you and your application are more interested in trending, history and data collection, I think your suggestion is excellent. If you need a fast response or action from your automation system, it's not.
But what I think is most unnecessary with my test nodes 105/106 is that they check the battery level every 15 min and send an update if it has changed. Since the level typically is unstable right after the first transmission, this generates a lot of useless transmissions. The only reason for me not to update the code in nodes 105/106 is to keep them as an untouched study case. In my new nodes I usually set a fixed battery check/update frequency (once a week), which also works as a heartbeat signal for less active nodes.
- TheoL Contest Winner last edited by
Not sure if anyone has had the same question. But the other day I stumbled on this little thing. It's a really small rechargeable battery. Now I don't know much about batteries. But I know that my Li-ion powered hand drill is very strong. The batteries of my hand drill hardly lose power over time.
So I was wondering if this could be used to power an Arduino and some sensors, and how long it would take for the battery to become empty. It would make my 3.3V sensors really small. I could almost install them in a small matchbox, which I think is really cool!
@TheoL There are endless discussions about different batteries in the forum. Just search and read. I recall one issue with LiXx batteries is that max voltage is too high for the NRF24L01+ radio.
Ok, another thing: why not send everything at once?
As I understand it, you are using FHEM, which allows a lot of coding on the controller side. So you could send battery level, humidity and temperature in one go.
Something like (this is no real code, just an idea):
theValues = "battlvl:hum:temp"
gw.send(msgAll.set(theValues, 1));
and in fhem you split it at ":"
So you have just one instead of 3 transmits.
@Meister_Petz I think it's a very good idea! It would be a nice option to have. Parsing with Perl should be simple even for a beginner like me, but changing the MySensors core and the fhem plugin is far beyond what I can do. If you can do it, just go ahead and please share.
ok, I did some testing and succeeded:
This is the important part of the Arduino code:
```
String allSens = String(sensor1) + " " + String(sensor2);
gw.send(msg1.set(allSens.c_str()));
```
This is the FHEM code:
```
define Sensor MYSENSORS_DEVICE 20
attr Sensor IODev MSGateway
attr Sensor mapReading_brightness1 1 brightness
attr Sensor mode node
attr Sensor version 1.5
attr Sensor userReadings Bat Deg
# split and copy to userReadings
define Sensor.copy at +*00:02:00 {my $a = ReadingsVal("Sensor","brightness1",1);; my @b = split(/\ /,$a);; fhem("setreading Sensor Bat $b[0]");; fhem("setreading Sensor Grad $b[1]");;}
```
P.S.: when I had this in fhem.cfg I had to use @@ instead of only one @
This is my complete Arduino code for a Battery Sensor with Dallas Temp:
```
#include <MySensor.h>
#include <SPI.h>

#define SketchName "Sensor-send-all"
#define SketchVer "1.2"
#define NodeID 20
unsigned long SleepTime = 828523; // 55234 = 1 minute --- 828523 = 15 minutes

MySensor gw;

// this is the one which sends everything
#define SENSOR1_ID 1 // Sensor 1 - allAtOnceMessage
//------------------------------------------------
// Sensor PINS
#define SENSOR1_PIN A0 // BattLevel
#define SENSOR2_PIN 6  // Dallas

// MyMessage V_TEMP, V_LIGHT_LEVEL, ... depending on Sensor Type
MyMessage msg1(SENSOR1_ID,V_LIGHT_LEVEL); // Sensor 5 - allAtOnceMessage

//------------------------------------------------
// Dallas Temp
#include <OneWire.h>
#include <DallasTemperature.h>
OneWire oneWire(SENSOR2_PIN);
DallasTemperature sensors(&oneWire);

boolean receivedConfig = false;
float firstRun = 1;
boolean metric = true;

void setup()
{
  // use the 1.1 V internal reference
  analogReference(INTERNAL);
  delay(500); // Allow time for radio if power used as reset <<<<<<<<<<<<<< Experimented with good result
  gw.begin(NULL, NodeID, false);
  // Send the sketch version information to the gateway and Controller
  gw.sendSketchInfo(SketchName, SketchVer);
  // present all Sensors to Gateway - S_TEMP, S_LIGHT_LEVEL,... depending on SENSOR type
  gw.present(SENSOR1_ID, S_LIGHT_LEVEL);
  sensors.begin();
}

void loop()
{
  gw.process();
  // wait some time to get all parts powered up
  if (firstRun == 1) {
    delay(5000);
    firstRun = 0;
  } else {
    delay(1000);
  }

  //----------------------
  // Sensor1 - Battery Voltage - Part 1
  int sensorRead10 = analogRead(SENSOR1_PIN);

  //----------------------
  // Sensor2 - DAL
  sensors.requestTemperatures();
  int16_t conversionTime = sensors.millisToWaitForConversion(sensors.getResolution());
  gw.sleep(conversionTime);
  float sensorRead2 = static_cast<float>(static_cast<int>((metric?sensors.getTempCByIndex(0):sensors.getTempFByIndex(0)) * 10.)) / 10.;

  //----------------------
  // Sensor1 - Battery Voltage - Part 2
  int sensorRead11 = analogRead(SENSOR1_PIN);
  int sensorRead12 = analogRead(SENSOR1_PIN); // (restored; the third reading was lost in the post)
  // battery voltage
  float sensorRead1 = (sensorRead10 + sensorRead11 + sensorRead12) / 3 * 0.0033; // should be 0.003363075, my batteries are different
  float test = sensorRead10;

  //----------------------
  // send all to fhem
  String allComb = String(sensorRead1) + " " + String(sensorRead2);
  gw.send(msg1.set(allComb.c_str()));

  gw.sleep(SleepTime);
}
```
@Meister_Petz Wow! Well done! I think this can be very useful.
In theory some power should be saved. Estimation from here is that a 250kbit/s transmission of a maximum size (32 byte) packet is >1.5ms. I think at around 20mA. Massage header is 7 byte, but a normal massage just a few bytes, i.e. just 30% of a message. Startup time for radio is fast (140us), so that shouldn't be an issue. (A side note here is that wake up atmega328p from sleep takes long (6ms?).)
Is there a reason that you wrote the Fhem sonsor.copy macro as an "at" and not a "notify" ? The intuitive way would be to copy upon a new reading.
Soon 11 months of production including a fair amount of too frequent battery uppdates. Most exciting is wheter the initially used batteries in 106 will make it 12 months or not.
- Anticimex Contest Winner last edited by
@m26872 perhaps you could reduce the redundant updates by filtering. If the current sample is lower than the "next" sample ignore it. If your node does not support charging, it will never really truly read an increase in battery voltage.
@Anticimex Yes, but in this particular case its double. Conclusion from above was that frequency and magnitude of the "low spikes" gives you information of how near the end you are. On the other hand I would never run like this if it wasn't for keeping the experiment "untouched". I think my thought for future sensors is to preserve that information, but save battery, just by reducing the number of samples.
I'm still not sure of the best filter/algorithm to predict battery end of life from present and historic data. It will probably differ a lot wheter there's a step-up or a nicer load.
@m26872 Just a suggestion: once the node dies you could take the raw data and run some filtering on it like @Anticimex suggested.
It'll give you a quick way to experiment with different filter settings.
Once you find an optimal filter you can implement it in the node(s).
@Yveaux You're right. I expect a good correlation between "spike depth" and the load profile just prior to sample. By controling this it should be quite repeatable.
This post is deleted!
This post is deleted!
- ahhk Hardware Contributor last edited by
i think it would be better to open a new thread for this, instead to discuss two topics in one thread....the 2xaa battery sensor is very interesting
The 12 months target was done a few days ago. Today's status 105: 63% and 106: 27%
I had a thought to celebrate their birthday by retrieve old Vera data and make nice plot, but that has to wait. Next, 18 months.
Just to clarify that these are no "sleeping" nodes; they transmit 100-150 messages each every day.
Battery level status report after 18 months:
Node 105: 51%
Node 106 (with used batteries): Died early January.
Next 24 months.
Hey @m26872 Congrats on the testing and results! Looks great! I was wondering if I could get your feedback on something. I'm trying to build a very low powered ( 2 x AA ) Battery powered 3 in 1 sensor. ( Platform is a Pro Mini 8Mhz, 3.3v, with an PIR HC-SR501, NRF24L01, and a DHT 11 Temp and Hum sensor. The sensor should also report battery power periodically ideally using the vcc library ( not implemented yet ). Because these components use power hungry devices ( PIR, DHT ), I have already moded the PIR to work at 3.3, and have removed the regulator / LED from the pro mini. Would it make sense to use a step up / down regulator, or power directly from the VCC pin? I can't decide what would be more effective in prolonging battery life? Also, in terms of reporting battery state. I'm not sure what would be best also, hence the question around the VCC library?
Below is my sketch. It is really taken from various posts on this forum and stitched together. If you could have a peak at it, and see if it is "optimal", as I don't believe it is today. ie: Perhaps I could only report temp / hum if it changes +/- a degree? Or send everything in a single radio announcement, then "Deep Sleep"? Lots of questions, and of course I am sure there are lots of answers. I'm just trying to build the most optimal battery efficient ( 2 x AA ) powered sensor no problem right! Below is my current code:
#define MY_GW_ID 31 // Enable debug prints #define MY_DEBUG // Enable and select radio type attached #define MY_RADIO_NRF24 //#define MY_RADIO_RFM69 #include <SPI.h> #include <MySensor.h> #include <DHT.h> int BATTERY_SENSE_PIN = A0; // select the input pin for the battery sense point //Constants #define SKETCH_NAME "3 in 1 Sensor" #define SKETCH_VERSION "1.0" #define CHILD_ID_HUM 0 // Child id for Humidity #define CHILD_ID_TEMP 1 // Child id for Temperature #define CHILD_ID_MOT 2 // Id of the sensor child #define HUMIDITY_SENSOR_DIGITAL_PIN 7 // Where is my DHT22 data pin connected to #define DIGITAL_INPUT_SENSOR 3 // The digital input you attached your motion sensor. (Only 2 and 3 generates interrupt!) #define INTERRUPT DIGITAL_INPUT_SENSOR-2 // Usually the interrupt = pin -2 (on uno/nano anyway) //Misc. variables uint8_t switchState = 2; unsigned long SLEEP_TIME = 820000; // Sleep time between reads (in milliseconds) (close to 15') unsigned long SLEEP_TIME2 = 275000; // Sleep time after int. 
(in milliseconds) (close to 5') uint8_t cycleInProgress = 0; uint8_t firstReportDone = 0; uint8_t firstReportWithZeroDone = 0; int oldBatteryPcnt = 0; MySensor gw; DHT dht; float lastTemp = 0 ; float lastHum = 0 ; boolean lastTripped = false ; boolean metric = true; MyMessage msgHum(CHILD_ID_HUM, V_HUM); MyMessage msgTemp(CHILD_ID_TEMP, V_TEMP); MyMessage msgMot(CHILD_ID_MOT, V_TRIPPED); void setup() { // use the 1.1 V internal reference #if defined(__AVR_ATmega2560__) analogReference(INTERNAL1V1); #else analogReference(INTERNAL); #endif gw.begin(); dht.setup(HUMIDITY_SENSOR_DIGITAL_PIN); // Send the Sketch Version Information to the Gateway gw.sendSketchInfo("3 in 1 Sensor", "1.0"); pinMode(DIGITAL_INPUT_SENSOR, INPUT); // sets the motion sensor digital pin as input // Register all sensors to gw (they will be created as child devices) gw.present(CHILD_ID_HUM, S_HUM); gw.present(CHILD_ID_TEMP, S_TEMP); gw.present(CHILD_ID_MOT, S_MOTION); metric = gw.getConfig().isMetric; } void loop() { delay(dht.getMinimumSamplingPeriod()); float temperature = dht.getTemperature(); float humidity = dht.getHumidity(); if (isnan(temperature)) { Serial.println("Failed reading temperature from DHT"); } else if (temperature != lastTemp) { lastTemp = temperature; if (!metric) { temperature = dht.toFahrenheit(temperature); } gw.send(msgTemp.set(temperature, 1)); Serial.print("T: "); Serial.println(temperature); } if (isnan(humidity)) { Serial.println("Failed reading humidity from DHT"); } else if (humidity != lastHum) { lastHum = humidity; gw.send(msgHum.set(humidity, 1)); Serial.print("H: "); Serial.println(humidity); } // // Read digital motion value boolean tripped = digitalRead(DIGITAL_INPUT_SENSOR) == HIGH; if (tripped != lastTripped ) { Serial.println(tripped); gw.send(msgMot.set(tripped?"1":"0")); // Send tripped value to gw } if (oldBatteryPcnt != batteryPcnt) { // Power up radio after sleep gw.sendBatteryLevel(batteryPcnt); oldBatteryPcnt = batteryPcnt; } // Sleep until 
interrupt comes in on motion sensor. Send update every two minute. gw.sleep(INTERRUPT, CHANGE, 0 ); }
thanks very much!
- AWI Hero Member last edited by
@rhuehn It looks like you are repeating the conversation in this thread. Its better to continue the discussion there.
I agree with AWI, but would like to add one thing: By using the DHT11 you are really fighting an uphill battle. There are several sensors that are more suitable for battery-powered sensors. See the study referenced by @AWI in
Thanks @mfalkvidd Starting to realize that..... I might swap to the HTU21D, but keep in mind I'm also using a PIR..... Is there a reliable, low voltage PIR sensor that works well? I haven't seen one yet ( 2 x AA BATT's, under 2.8 V )?
Thanks
I have updated the original post if anyone can offer any suggestions. it is located Here
Any feedback is greatly appreciated
- Nicklas Starkel last edited by
@EasyIoT , how are your sensors doing, its been 2 years
And also, @m26872 , the node 105. Is it alive?
@Nicklas-Starkel I've been busy for few months moving to a new house and have had my sensor network offline until now because of this. Node 105 was kept on and handled with care, but I was pretty sure it would be dead when I started my gateway yesterday. BUT, it was ALIVE ?! at 14% battery level. Amazing! 24 months with such poor and simple hardware and software. Same batteries and only one restart (due to my controller change last year).
One little remark though, it doesn't seem to send temperature och pressure, only humidity and battery level. Maybe an issue of voltage recovery time between transmissions.
- sineverba Hardware Contributor last edited by
This post is deleted!
- sineverba Hardware Contributor last edited by
Sorry for the noob question...
I would re-create your setup. At which point of it you insert the multimeter to measure the current?
Is it ok to "interrupt" the ground (proximity to the step up) and measure the current there?
Thank you
@sineverba Just cut one of the battery wires of the finished assembly. Solder together again when your done measuring.
- palmerfarmer last edited by
Has anyone got a V2 version working for this?
@palmerfarmer said in My 2AA battery sensor:
Has anyone got a V2 version working for this?
Do you mean running with MySensors 2.3?
- palmerfarmer last edited by
Any Ver 2.xx,
So I have cobbled together/hacked this and it seem to work, battery has stayed at 97% for 2 weeks now. Feel free to Hack about and improve i have this mounted on an easyPCB board, x2AA batteries, NRF radio, BME280 with a 3.3v convertor .
/* * use mysensors library 2.2.0 * ALL WORKING AND TESTED ON EASY PCB BOARD WITH BATTERY SAVING CODE 29/11/18 * * Time SET to 15 mins * also added BME280.writeMode (smSleep); to reduce quisecent to current to 40uA * BME280 sensor connections: 3.3v, 0V, SCL to A5, SDA to A4, BAT to A0 **/ //Enable debug prints to serial monitor #define MY_DEBUG // Enable and select radio type attached #define MY_RADIO_NRF24 #define MY_RF24_PA_LEVEL RF24_PA_MIN //settings are MIN, LOW, HIGH, MAX #define MY_NODE_ID 18 //#define MY_RADIO_RFM69 #include <SPI.h> #include <MySensors.h> #include <Wire.h> #include <BME280_MOD-1022.h> // BME280 libraries and variables // Bosch BME280 Embedded Adventures MOD-1022 weather multi-sensor Arduino code // Written originally by Embedded Adventures // #define BARO_CHILD 0 #define TEMP_CHILD 1 #define HUM_CHILD 2 int BATTERY_SENSE_PIN = A0; // select the input pin for the battery sense point unsigned long SLEEP_TIME = 15*60000; // sleep time between reads (seconds x*1000 milliseconds) //unsigned long SLEEP_TIME = 20000; // Debug test time 20 seconds int oldBatteryPcnt = 0; long interval = 1000; // 1 min interval at which to send (milliseconds) long previousMillis = interval; // will store last time data was sent const float ALTITUDE = 0; // <-- adapt this value to your location's altitude (in m). Use your smartphone GPS to get an accurate value!) 
}; const int LAST_SAMPLES_COUNT = 5; float lastPressureSamples[LAST_SAMPLES_COUNT]; // this CONVERSION_FACTOR is used to convert from Pa to kPa in the forecast algorithm // get kPa/h); // // Pressure in hPa --> forecast done by calculating kPa/h = getLastPressureSamplesAverage(); } else if (minuteCount == 35) { float lastPressureAvg = getLastPressureSamplesAverage(); float change = (lastPressureAvg - pressureAvg) * CONVERSION_FACTOR;) * CONVERSION_FACTOR;) * CONVERSION_FACTOR;) * CONVERSION_FACTOR;) * CONVERSION_FACTOR;) * CONVERSION_FACTOR;; } return forecast; } void setup() { // use the 1.1 V internal reference #if defined(__AVR_ATmega2560__) analogReference(INTERNAL1V1); #else analogReference(INTERNAL); #endif metric = getControllerConfig().isMetric; // was getConfig().isMetric; before MySensors v2.1.1 Wire.begin(); // Wire.begin(sda, scl) // use the 1.1 V internal reference #if defined(__AVR_ATmega2560__) analogReference(INTERNAL1V1); #else analogReference(INTERNAL); #endif } void presentation() { // Send the sketch version information to the gateway and Controller sendSketchInfo("BME280_BAT_V2", "V2e."); // Register sensors to gw (they will be created as child devices) present(BARO_CHILD, S_BARO); present(TEMP_CHILD, S_TEMP); present(HUM_CHILD, S_HUM); } // LoopBatteryLevel(batteryPcnt); oldBatteryPcnt = batteryPcnt; } // sleep(SLEEP_TIME); } unsigned long currentMillis = millis(); if(currentMillis - previousMillis > interval) { // save the last time sent the data previousMillis = currentMillis; analogReference(INTERNAL); wait(500); // need to read the NVM compensation parameters BME280.readCompensationParams(); // Normal mode for regular automatic samples BME280.writeStandbyTime(tsb_0p5ms); // tsb = 0.5ms BME280.writeFilterCoefficient(fc_16); // IIR Filter coefficient 16 BME280.writeOversamplingPressure(os16x); // pressure x16 BME280.writeOversamplingTemperature(os8x); // temperature x8 BME280.writeOversamplingHumidity(os8x); // humidity x8 
BME280.writeMode(smNormal); // Just to be sure, wait until sensor is done mesuring while (BME280.isMeasuring()) { } // Read out the data - must do this before calling the getxxxxx routines BME280.readMeasurements(); float temperature = BME280.getTemperatureMostAccurate(); // must get temp first float humidity = BME280.getHumidityMostAccurate(); float pressure_local = BME280.getPressureMostAccurate(); // Get pressure at current location float pressure = pressure_local/pow((1.0 - ( ALTITUDE / 44330.0 )), 5.255); // Adjust to sea level pressure using user altitude int forecast = sample(pressure); if (!metric) { // Convert to fahrenheit temperature = temperature * 9.0 / 5.0 + 32.0; } //**/ Serial.println(); Serial.print("Temperature = "); Serial.print(temperature); Serial.println(metric ? " °C" : " °F"); Serial.print("Humidity = "); Serial.print(humidity); Serial.println(" %"); Serial.print("Pressure = "); Serial.print(pressure); Serial.println(" hPa"); Serial.print("Forecast = "); Serial.println(weather[forecast]); Serial.println(); //*/ send(tempMsg.set(temperature, 1)); wait(50); send(humMsg.set(humidity, 1)); wait(50); send(pressureMsg.set(pressure, 2)); wait(50); BME280.writeMode (smSleep); sleep(SLEEP_TIME); } } | https://forum.mysensors.org/topic/486/my-2aa-battery-sensor/23 | CC-MAIN-2019-22 | refinedweb | 8,222 | 59.09 |
import "github.com/go-errors/errors"
Package errors provides errors that have stack-traces.
This is particularly useful when you want to understand the state of execution when an error was returned unexpectedly.
It provides the type *Error which implements the standard golang error interface, so you can use this library interchangably with code that is expecting a normal error return.
For example:
package crashy import "github.com/go-errors/errors" var Crashed = errors.Errorf("oh dear") func Crash() error { return errors.New(Crashed) }
This can be called as follows:
package main import ( "crashy" "fmt" "github.com/go-errors/errors" ) func main() { err := crashy.Crash() if err != nil { if errors.Is(err, crashy.Crashed) { fmt.Println(err.(*errors.Error).ErrorStack()) } else { panic(err) } } }
This package was original written to allow reporting to Bugsnag, but after I found similar packages by Facebook and Dropbox, it was moved to one canonical location so everyone can benefit.
error.go parse_panic.go stackframe.go
The maximum number of stackframes on any error.
Is detects whether the error is equal to a given error. Errors are considered equal by this function if they are the same object, or if they both contain the same error inside an errors stacktrace will point to the line of code that called New.
ParsePanic allows you to get an error object from the output of a go program that panicked. This is particularly useful with.
Wrap.
WrapPrefix makes an Error from the given value. If that value is already an error then it will be used directly, if not, it will be passed to fmt.Errorf("%v"). The prefix parameter is used to add a prefix to the error message when calling Error(). The skip parameter indicates how far up the stack to start the stacktrace. 0 is from the current call, 1 from its caller, etc.
Callers satisfies the bugsnag ErrorWithCallerS() interface so that the stack can be read out.
Error returns the underlying error's message.
ErrorStack returns a string that contains both the error message and the callstack.
Stack returns the callstack formatted the same way that go does in runtime/debug.Stack()
func (err *Error) StackFrames() []StackFrame
StackFrames returns an array of frames containing information about the stack.
TypeName returns the type this error. e.g. *errors.stringError.
type StackFrame struct { // The path to the file containing this ProgramCounter File string // The LineNumber in that file LineNumber int // The Name of the function that contains this ProgramCounter Name string // The Package that contains this function Package string // The underlying ProgramCounter contained this frame. 483 packages. Updated 2018-08-13. Refresh now. Tools for package owners. | https://godoc.org/github.com/go-errors/errors | CC-MAIN-2018-51 | refinedweb | 440 | 57.47 |
public class RmiBookModelImpl extends UnicastRemoteObject implements BookModel { private ArrayList observers = new ArrayList(10); public void addObserver(BookView bv) throws RemoteException{ synchronized(observers){ observers.add(bv); } } public void update(int recNo, String [] data) throws RecordNotFoundException, RemoteException{ BookView bv; Integer recObj = new Integer(recNo); //db is the Data class. db.update(recNo,data); for (int i=0; i<observers.size(); i++) { try{ bv = (BookView)observers.get(i); bv.update(recObj); }catch(Exception e){ //if failed to update.the server will believe the observer is shuted down.so the server delete it. observers.remove(i); } } } ... }
Originally posted by song bo: I define 3 interfaces for model,view ,controller.then I implement these interfaces separately for Local mode and RMI mode.
Originally posted by song bo: 1.Refreshing all the view of the clients is slow.especially some clients is disconnected.
Originally posted by song bo: 2.when the network of one client named "A" is dropped,at this time,another client book a record.the model will delete the client "A" that could not be refreshed.after a while the network of client "A" resume normal,it can be booked but the View can not be refreshed for ever.because the view of "A" has deleted.
Do you mean that you have one view for displaying records when you are in local mode, and one view for displaying records when you are in networked mode? If so, then I think you will loose points in the OO section of the marking.
I take it that your Model is on the server then. Is your Model the database itself? Or is it a model of the database?
I asked before whether 1.your model was on the server. This would mean that at a minimum your Controller(s) and possibly your View(s) need to be aware of the communications protocols involved. Right now you might only have one View. But in the future that might expand to 10 Views, all working off the same Model. And if someone then decided tp change the network protocol, all 10 Views and the 10 Controllers will need to be modified.
2.your model is the database If this is the case, then ask yourself whether there is any work done on the data at any point before it gets to the View. For example, do you convert any of the Strings into Integers, or do you convert the array of record numbers returned by the call to find() into an array of Records() (or an array of String[])). If so, then isn't this sort of conversion likely to be common to the entire application? In which case wouldn't this logic be better in the Model rather than in the View? (You don't really want much logic in the View - it is only meant to be displaying the data, not doing large amounts of database specific processing.)
I could not understand still
if you have distributed many client programms and they have all worked fine.But after a time,the boss decide to change the book rule.because the business is wrapped in the client,so you have to modify every client programm and distribute again,if you put the model on the server,you only modify the server code,the clients do not know,because the client only guther the data that user input and pass it to the model,so it need to change. | http://www.coderanch.com/t/184484/java-developer-SCJD/certification/MVC-design-RMI-reasonable | CC-MAIN-2014-42 | refinedweb | 572 | 65.22 |
QtCharts app won't run (The program has unexpectedly finished.)
I use Qt 5.7 and get the QtCharts examples up and running just fine. But when I try to build a simple app to display
chart, it's just not running (with the message "The program has unexpectedly finished."). I believe I have setup everything
to use QtCharts correctly (follow the example). Am I missing anything?
Here's my .pro file
QT += qml quick charts core gui widgets CONFIG += c++11 SOURCES += main.cpp RESOURCES += qml.qrc
Here's my simple qml to display chart.
import QtQuick 2.0 import QtCharts 2.0 Item { anchors.fill: parent //![1] } } } //![1] }
Any help would be appreciated.
:D:D:D
Found it. Right there in the example file.
// Qt Charts uses Qt Graphics View Framework for drawing, therefore QApplication must be used.
So switching to QApplication solved my problem.
Thanks!
:D:D:D | https://forum.qt.io/topic/71563/qtcharts-app-won-t-run-the-program-has-unexpectedly-finished | CC-MAIN-2018-39 | refinedweb | 151 | 80.28 |
Small library for scraping electricity usage information from Helsingin Energia website
Project description
This small script fetches per-hour electricity usage from Helsingin Energia (Helen) website. This is unofficial implementation which may break at any time, if Helen changes their website or implements any additional validations.
Installation:
pip install helen_electricity_usage
Usage:
import helen_electricity_usage helen = helen_electricity_usage.Helen(username, password, metering_point_number) helen.login() print helen.get_date("20141225")
To obtain username and password, register using web interface (Flash and paper invoice is required). After registering and signing in, metering point number is available on the top-right corner.
Sample output (python dictionary):
[ { "milestones" : [ ], "month" : 12, "value" : 0.22999, "year" : 2014, "day" : 25, "hour" : 0 }, { "milestones" : [ ], "month" : 12, "value" : 0.15, "year" : 2014, "day" : 25, "hour" : 1 }... ]
There is no way to check whether data for specific date is available. If data is not available, all fields are provided, but values are 0.0. There is no way to distinct between missing data and hours with no electricity consumption. Usually the data is available next day, but that is not guaranteed.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/helen_electricity_usage/ | CC-MAIN-2019-22 | refinedweb | 200 | 51.04 |
Jeroen Frijters wrote:
Dalibor Topic wrote:The problem is illustrated by this example from kaffe's regression test suite:import java.io.File; public class FileTest { public static void exists(File f) { String type = "File"; if (f.isDirectory()) { type = "Directory"; } System.out.print(type + " \"" + f.getPath() ); if (f.exists()) { System.out.println("\" does exist"); } else { System.out.println("\" does not exist"); } } public static void main(String[] argv) throws Exception { File g5 = new File("","NotExist2"); exists(g5); } } it should print File "/NotExist2" does not exist on Unix and File "\NotExist2" does not exist on Windows.Thanks! I made an additional change to your patch to prevent to accidental forming of a UNC path prefix (\\) by trimming the leading separators from name. It's attached.
thanks a lot! Works for me on kaffe, I've checked it in there. Feel free to check it in for GNU Classpath as our cumulative patch ;)thanks a lot! Works for me on kaffe, I've checked it in there. Feel free to check it in for GNU Classpath as our cumulative patch ;)
cheers, dalibor topic | http://lists.gnu.org/archive/html/classpath-patches/2004-08/msg00079.html | CC-MAIN-2016-30 | refinedweb | 180 | 66.64 |
()
A little unorthodox, but you can monkey-patch sys.stdin:
# We're going to monkey-patch stdin
import sys
from cStringIO import StringIO
old = sys.stdin
sys.stdin = StringIO('hello')
# Now read from stdin
result = raw_input('foo')
# And replace the regular stdin
sys.stdin = old
This will work as if you typed 'hello' as the input for raw_input. Of
course, rather than calling raw_input yourself, you'd be calling your foo
function. I think if I were going to do this more than once, I'd use a
context manager to be certain I undo the monkey patch:
import sys
from cStringIO import StringIO
class PatchStdin(object):
def __init__(self, value):
self._value = value
self._stdin = sys.stdin
def __enter__(self):
# Monkey-patch stdin
sys.stdin = StringIO(self
t0 = temp
doesn't actually perform a copy. It makes the names t0 and temp both refer
to the same array. You probably want
t0 = temp.copy()
which makes a new, independent array.
Your dataString is incorrect, you forgot a '&' between vars :
var dataString = 'mac=' + mac + '&store=' + store;
Plus there is no need for a string, just pass an object, jQuery will do the
rest :
$.ajax({
type: 'POST',
url: 'ajax/subscribe.php',
data : {mac: mac, store: store},
etc...?
Perhaps you could use php like this: running php script (php function) in
linux bash
And then you something like
domdocument() to read and the
right the xml.
Of course this is assumes you have php installed.
VAR="INPUTFILENAME"
# One solution this does not use the VAR:
touch INPUTFILENAME{1,2,3,4,5,6,7,8,9,10}
# Another
for i in `seq 1 20` ; do
touch "${VAR}${i}"
done
And there are several other ways.
Try using this
if [ $# -eq 0 ]; then
cat > temporary.txt; sed '/^$/d' <temporary.txt
else
cat $@ | sed '/^$/d'
fi
A space is needed between [ and $@ and your usage of $@ is not good. $@
represents all arguments and -eq is used to compare numeric values.
I didnt test it, but this might work:
jQuery(document).ready(function () {
$("#logo2").change(function(){
jQuery('#form_up').ajaxSubmit({
dataType: 'json',
success: bol
});
});
});
Check the Plugin API if this dont work, I just quick checked it, and saw
there is a ajaxSubmit...
One solution would be to execute the script this way:
In [1]: !python my_script.py < datafile
Obviously, this works around the run command, but it allows you to execute
your script from within IPython.
What do you need the <&1 for?
Remove it, and it works.
while read CMD; do
./test.sh < input.txt
BEGIN
get_start
END
get_stop
END
get_uknown_command
END
There are several ways to accomplish this. One would be:
function click(e) {
var elementIDs = {
company: document.getElementById("nameID").value
};
chrome.tabs.executeScript({
code: 'window.elementIDs='+JSON.stringify(elementIDs)
}, function() {
chrome.tabs.executeScript({file: "ScriptA.js"});
});
window.close();
}
This way, ScriptA will be able to access the values in window.elementIDs.
This will work because content scripts from the same extension on the same
page will share the execution environment, and the chaining of the calls to
chrome.tabs.executeScript ensures that the script defining the global
variable has run before ScriptA is run.
You should be using the function source_file() rather than eval_string().
Take a look into the parser.h file which unfortunately doesn't have a lot
of comments. The names are quite self-explanatory so you shouldn't have a
lot of problems.
Also, you're pretty much trying to reimplement Octave's source function. If
you really want to implement it again, look into the oct-parse.cc file
(generated during the build process with flex and bison).
If you crawl through the code and methodically replace the junk identifier
names, it makes a lot more sense. Most of the code and identifiers are
just there to obscure, and a lot of the rest of it just looks like
boilerplate code for browser-compatible ways to do things like attach
events.
It also makes a pretty big deal of checking that you're on Windows, for
some reason, but that doesn't change the payload that's eventually
delivered.
The piece that deploys the payload to the page is here:
var newElement = window[_document][_createElement](_div);
newElement[_innerHTML] = decodeString(key, payload);
newElement[_style][_display] = _none;
window[_document][_body][_appendChild](newElement);
The payload decrypts as:
<iframe src='.<OBFUSCATED>
errors = open('errors.txt', 'w')
try:
execfile("script.py")
except Exception as e:
errors.write(e)
try:
execfile("other.py")
except Exception as e:
errors.write(e)
errors.close()
Your usage of else if is incorrect, it should be elif
if [ -f "$i" ]; then
echo "this is a file and I'll run a script if it is a file"
elif [ -d "$i" ]; then
echo "this is a dir and I'll run a script if it is a directory"
fi
You need to pass the input stream in as the input property of an object.
var inputStream = new java.io.FileInputStream("fileInput.txt");
runCommand("somescript.sh", { input: inputStream });
If input is not an InputStream it will be converted to a string and sent to
the command directly. Similarly, you can add output and/or err properties
to capture the command's standard output and standard error (documentation
here).
I would try doing somehting like this.
os.system("appcfg.py arg1 arg2 arg3")
I would look into this portion of the os documentation.
Goodluck. | http://www.w3hello.com/questions/Getting-file-input-into-Python-script-for-praw-script | CC-MAIN-2018-17 | refinedweb | 893 | 57.87 |
1 – Introduction
Recently I worked on one of the hype topics of the moment: chatbots, also called interactive agents.

So I decided to write this series of posts to share the experience through a practical end-to-end scenario.

I will use SAP Cloud Platform, and in particular HANA MDC, as the back-end technology that exposes data through an HTTP REST service.

A chatbot tool will then consume this service, and the resulting agent will be integrated with Facebook Messenger.

The article will be divided into 3 parts:
- Back-End development.
- Using the DialogFlow tool to create an agent and consume the back-end REST API.
- Integrating the agent with Facebook Messenger.
So let’s start.
2 – Back-End – REST API & Odata Service creation
To follow this article you need to create an SAP Cloud Platform Trial Account.
Go to
Open your SAP Cloud Platform Cockpit and navigate to Databases & Schemas
Choose your MDC Database
Open the Editor link to access the SAP HANA Web-based Development Workbench
2.1 – Back-End Project development
In this step you can choose whatever technology you want: an SAP ECC on-premise system or a third-party system. In my case, to keep the article as generic as possible, I chose an MDC HANA XS instance in the cloud, because it is free of charge.
Right-click on the root level of your IDE and choose New -> Package
Choose Project Name
Create two sub-packages, data and services, inside your newly created project. After that, right-click on your project and choose New -> Create Application

A pop-up will appear; choose Empty application and click the Create button

Delete the index.html file generated by this step. The final project will look like this:
The use case is very simple: I will create an OData service that exposes sales representative information.
2.1.1 – Content of the data Package:
Right-click on the data package and choose New -> File. Enter SCHATBOTSM.hdbschema as the name
SCHATBOTSM.hdbschema source code
schema_name="SCHATBOTSM";
Right-click on the data package and choose New -> File. Enter chatbotsm.hdbdd as the name
chatbotsm.hdbdd source code
namespace ChatBotProject.data;

@Schema: 'SCHATBOTSM'
context chatbotsm {

    type CString : String(5);
    type SString : String(40);

    type tt_error {
        HTTP_STATUS_CODE: Integer;
        ERROR_MESSAGE: String(100);
        DETAIL: String(200);
    };

    @Catalog.tableType : #COLUMN
    Entity Agent {
        KEY AGENT_ID: CString;
        FIRST_NAME: SString;
        LAST_NAME: SString;
    };
};
2.1.2 – Content of services Package :
Right-click on the services package and choose New -> File. Enter ChatBotSM.xsodata as the name
ChatBotSM.xsodata source code
service namespace "ChatBotProject.services.ChatBotSM" {
    "ChatBotProject.data::chatbotsm.Agent" as "Agents";
}
2.2 – Back-End Administration Task
Log in as SYSTEM user on and open Security link.
Create a new role and call it R_CHAT_BOT_SM. Open the Object Privileges tab and add the schema created before in the XS project.
Don't forget to save those modifications.
Assign this role to your development user: select your development user under the Users node, open the Granted Roles tab, add the R_CHAT_BOT_SM role created previously, and click the Save button.
2.3 – Back-End Test OData Service
Back in the XS project, launch the OData Explorer on ChatBotSM.xsodata.
Select the Agents entity and click the Generate Data button.
Again from the XS project, select the ChatBotSM.xsodata file in the project structure and click the Run button.
The metadata content of the OData service is shown as below. The service contains one Entity “Agents”.
To display Agents Entity values change the URL on your Browser ( Chrome in my case ) as below
If all is done correctly the generated data will be displayed
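For orientation, the JSON returned by the service wraps the rows in a d.results array. The following sketch (plain JavaScript; the values are invented, but the field names match the Agent entity defined above) shows the shape and a simple traversal:

```javascript
// Hypothetical sample of the OData V2 JSON returned by
// .../ChatBotSM.xsodata/Agents?$format=json (values invented for illustration).
var payload = {
    d: {
        results: [
            { AGENT_ID: "A0001", FIRST_NAME: "Jane", LAST_NAME: "Doe" },
            { AGENT_ID: "A0002", FIRST_NAME: "John", LAST_NAME: "Smith" }
        ]
    }
};

// Rows live under d.results, so building a readable list is a simple map:
var names = payload.d.results.map(function (row) {
    return row.FIRST_NAME + " " + row.LAST_NAME;
});
console.log(names.join(", ")); // Jane Doe, John Smith
```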
2.4 – Back-End REST API
The exercise would be too easy if I consumed my OData service directly from the external side. So, to add some trickier stuff, I will add an XSJS file that consumes the ChatBotSM.xsodata service and applies some logic to transform the OData result.
2.4.1 – Create an HTTP Destination
Right-click on the services node and choose New -> File. Choose DEST_BOT_ODATA.xshttpdest as the name.
DEST_BOT_ODATA.xshttpdest would look like
2.4.2 – Create the XSJS Script
Create a new file under the services node. Choose GetAgentsList.xsjs as the name
GetAgentsList.xsjs source code
try {
    var v_dest = "DEST_BOT_ODATA";
    var v_pack = "ChatBotProject.services";
    var v_query = "/Agents?$format=json";

    switch ($.request.method) {
    case $.net.http.GET:
    case $.net.http.POST:
    case $.net.http.PUT:
        //Reading the destination properties
        var odestination = $.net.http.readDestination(v_pack, v_dest);
        //Creating HTTP Client
        var oclient = new $.net.http.Client();
        //Creating Request
        var orequest = new $.web.WebRequest($.net.http.GET, v_query);
        //Add Header param
        orequest.contentType = "application/json";
        //Call the OData service
        oclient.request(orequest, odestination);
        //Receiving OData service response
        var odata_Response = oclient.getResponse().body.asString();
        var JSONObj = JSON.parse(odata_Response);
        var botResponse;
        if (JSONObj.d.results.length > 0) {
            botResponse = "Agents List: ";
            for (var i = 0; i < JSONObj.d.results.length; i++) {
                botResponse += " ";
                // Concatenate First & Last Agents names
                botResponse += JSONObj.d.results[i].FIRST_NAME + JSONObj.d.results[i].LAST_NAME;
            }
        } else {
            botResponse = "No data Found";
        }
        $.response.status = $.net.http.OK;
        $.response.contentType = "application/json";
        $.response.setBody(JSON.stringify({
            "speech": botResponse,
            "displayText": botResponse
        }));
        break;
    default:
        $.response.status = $.net.http.METHOD_NOT_ALLOWED;
        $.response.setBody("Request method not allowed");
        break;
    }
} catch (e) {
    $.response.setBody("Execution error: " + e.toString());
}
Select GetAgentsList.xsjs and click run button.
The result is shown below.
Our XSJS requires an authentication login/password. Launch the XS Admin tool as the SYSTEM user and set the No Authentication Required option for our package. Save your configuration.
Open a new private Chrome window and enter the URL below.
The call now works without asking for a login/password. Great!!! The first step is done, and in the next post I will explain how to consume this HTTP REST service (the XSJS script) from a chatbot tool.
Hi,
when I tried to run GetAgentsList.xsjs, there is a syntax error “Execution error: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data.”.
I think I followed step-by-step without any problem, could you recommend where should I check to solve this error?
Hi Joongwon
Are you able to resolve the issue? I am also facing the same problem. Please can you let me know how can we resolve the issue?
Thanks
Srikanth
I have the similar issue like Joongwon and Srikanth. Can you guys let me know how you resolved it
Regards
Guru
I have the similar issue. Could you let me know how you resolve it ?
Regards
Soichiro
I don't use a proxy.
So I commented out the following in "DEST_BOT_ODATA.xshttpdest":
proxyType = http;
proxyHost = "proxy-trial";
proxyPort = 8080;
Then I face the following error:
Execution error: Error: HttpClient.getResponse: Can't get the response from the server: internal error occurred "Connection to xxxtrial.hanatrial.ondemand.com lost while reading response."
Soichiro
Hi, did you fix the issue? Former Member,
Is your xsjs script running? Because I am getting the error "Execution error: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data" like the others.
Hi,
I need your help to integrate DialogFlow with the XSJS SAP service. My requirement is to get a specific order detail from the XSJS service by passing a parameter from the chatbot.
I am not able to pass the parameter from the DialogFlow chatbot to the SAP service. Please help me with this.
Hi Former Member,
How did you fix this error: "Execution error: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data"?
Hi,
Facing the same issue, "Execution error: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data". Please help to overcome this error.
Thanks,
Albert Tigga
Hi,
I am also facing same issue Execution error: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data
when executing GetAgentsList.xsjs.
please help.
Hi, did anyone solve this issue?
I have two patterns of error and I am not sure which is the correct one.
var v_dest = "DEST_BOT_ODATA";
var v_pack = "ChatBotProject.services";
var odestination = $.net.http.readDestination(v_pack, v_dest);
Execution error: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data
or
var v_dest = "DEST_BOT_ODATA";
var v_pack = "ChatBotProject.Services";
var odestination = $.net.http.readDestination(v_pack, v_dest);
Execution error: Error: User is not authorized to use destination (package: ChatBotProject.services, name: DEST_BOT_ODATA)
Hi,
Did you get any solution for this error?
Thanks
Hello,
Somebody got the solution for this?
Facing same issue “Execution error: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data”.
Hi ,
I have tried the same thing using the OData service of an S/4HANA on-premise system and it works perfectly. The only issue is that it is not working with the HANA database on the cloud.
Has anyone got solution for error message
“Execution error: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data”
Thanks.
Hi,
Did you get any solution for this error?
Thanks | https://blogs.sap.com/2017/11/29/building-chatbot-using-sap-cloud-platform-part-1/ | CC-MAIN-2018-26 | refinedweb | 1,489 | 60.21 |
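Editor's note on the recurring "JSON.parse: unexpected character" reports above: that error usually means the destination returned something other than JSON, often an HTML logon or error page. A small guard before the parse makes the real cause visible. This is a sketch in plain JavaScript; the same check could be placed in GetAgentsList.xsjs just before the JSON.parse call:

```javascript
// Hypothetical helper: refuse to parse a body that cannot be JSON and
// surface its beginning instead of a cryptic parse error.
function parseODataBody(body) {
    var trimmed = body.trim();
    if (trimmed.charAt(0) !== "{") {
        throw new Error("Not JSON, got: " + trimmed.slice(0, 60));
    }
    return JSON.parse(trimmed);
}

// An HTML logon page instead of the expected OData payload:
try {
    parseODataBody("<html><body>logon required</body></html>");
} catch (e) {
    console.log(e.message); // starts with "Not JSON, got: <html>..."
}

// The expected shape parses normally:
console.log(parseODataBody('{"d":{"results":[]}}').d.results.length); // 0
```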
Red Hat Bugzilla – Bug 787242
Unexpected failure (permission denied) on attempt to change scheduling class to SCHED_FIFO.
Last modified: 2013-04-30 19:51:00 EDT
Created attachment 559323 [details]
Reproducer
Description of problem:
A program attempts to change its scheduling class to SCHED_FIFO. Program has CAP_SYS_NICE privilege.
Success when user logged in locally.
Failure (EPERM) when user logged in using VNC.
Version-Release number of selected component (if applicable):
Linux ... 2.6.38.8-35.fc15.x86_64 #1 SMP Wed Jul 6 13:58:54 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
tigervnc-server-1.0.90-4.fc15.x86_64
How reproducible:
Build and run a test program.
Steps to Reproduce:
1. Compile the program:
#include <errno.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/capability.h>
#include <sched.h>
int main() {
cap_t caps = cap_get_proc();
if (caps == NULL) {
perror("cap_get_proc()");
} else {
ssize_t s = 0;
char * p = cap_to_text(caps, &s);
if (p == NULL) {
perror("cap_to_text()");
} else {
printf("cap_to_text()" ": %s\n", p);
cap_free(p);
p = 0;
}
s = 0;
cap_free(caps);
caps = NULL;
}
struct sched_param sched;
sched.sched_priority = sched_get_priority_min(SCHED_FIFO);
if (sched_setscheduler(0, SCHED_FIFO, &sched) < 0) {
perror("sched_setscheduler(SCHED_FIFO)");
} else if (sched.sched_priority = 0, sched_setscheduler(0, SCHED_OTHER, &sched) < 0) {
perror("sched_setscheduler(SCHED_OTHER)");
} else {
}
return errno;
}
$ cc -lcap ...
2. Grant CAP_SYS_NICE to the resulting ./a.out.
3. Run ./a.out
Actual results (login via VNC):
[model@amdx2 ~]$ getcap ./a.out
./a.out = cap_sys_nice+ep
[model@amdx2 ~]$ ./a.out
cap_to_text(): = cap_sys_nice+ep
sched_setscheduler(SCHED_FIFO): Operation not permitted
Expected results (login locally):
[model@amdx2 ~]$ getcap ./a.out
./a.out = cap_sys_nice+ep
[model@amdx2 ~]$ ./a.out
cap_to_text(): = cap_sys_nice+ep
Additional info:
Program source is attached.: | https://bugzilla.redhat.com/show_bug.cgi?id=787242 | CC-MAIN-2018-26 | refinedweb | 270 | 53.78 |
Installing and Configuring HoloViews ¶
HoloViews and its required dependencies Numpy and Param are pure Python packages, and can thus be installed on any platform that has Python 2.7 or Python3.
That said, HoloViews is designed to work closely with many other libraries, which can make installation and configuration more complicated. This user guide page describes some of these less-common or not-required options that may be helpful for some users.
Other installation options ¶
HoloViews is also available on conda and can be installed using:
conda install holoviews
If the Jupyter notebook reports that the iopub data rate has been exceeded when displaying large plots, you can create or edit .jupyter/jupyter_notebook_config.py with code that changes the limit:
c = get_config()
c.NotebookApp.iopub_data_rate_limit = 100000000
hv.config settings ¶
The hv.config object holds global settings, for instance:
style_17: Enables the styling and defaults as found in HoloViews 1.7.
HoloViews also supports an rc file that is searched for in the following places:
"~/.holoviews.rc"
,
"~/.config/holoviews/holoviews.rc"
and the in parent directory of the top-level
__init__.py
file (useful for developers working out of the HoloViews git repo). A different location to find the rc file can be specified via the
HOLOVIEWSRC
environment variable.
This rc file is executed right after HoloViews imports. For instance, you can use an rc file with code like:
import holoviews as hv
hv.config(warn_options_call=True)
hv.extension.case_sensitive_completion = True
to include the various options discussed on this page. | http://holoviews.org/user_guide/Installing_and_Configuring.html | CC-MAIN-2017-30 | refinedweb | 209 | 57.87 |
[edit] Example
#include <stdio.h>
#include <locale.h>
#include <string.h>
#include <uchar.h>

int main(void)
{
    setlocale(LC_ALL, "en_US.utf8");
    char str[] = "z\xc3\x9f\xe6\xb0\xb4\xf0\x9f\x8d\x8c"; // u8"zß水🍌"
    printf("Processing %zu bytes: [ ", strlen(str));
    for (char *p = str; *p; ++p)
        printf("%#x ", +(unsigned char)*p);
    puts("]");

    mbstate_t state = {0};
    char16_t c16;
    char *ptr = str, *end = str + strlen(str);
    int rc;
    while ((rc = mbrtoc16(&c16, ptr, end - ptr + 1, &state)))
    {
        printf("Next UTF-16 char: %#x obtained from ", c16);
        if (rc == -3)
            puts("earlier surrogate pair");
        else if (rc > 0)
        {
            printf("%d bytes [ ", rc);
            for (int n = 0; n < rc; ++n)
                printf("%#x ", +(unsigned char)ptr[n]);
            puts("]");
            ptr += rc;
        }
    }
}
Output:
Processing 10 bytes: [ 0x7a 0xc3 0x9f 0xe6 0xb0 0xb4 0xf0 0x9f 0x8d 0x8c ]
Next UTF-16 char: 0x7a obtained from 1 bytes [ 0x7a ]
Next UTF-16 char: 0xdf obtained from 2 bytes [ 0xc3 0x9f ]
Next UTF-16 char: 0x6c34 obtained from 3 bytes [ 0xe6 0xb0 0xb4 ]
Next UTF-16 char: 0xd83c obtained from 4 bytes [ 0xf0 0x9f 0x8d 0x8c ]
Next UTF-16 char: 0xdf4c obtained from earlier surrogate pair
[edit] References
- C11 standard (ISO/IEC 9899:2011):
- 7.28.1.1 The mbrtoc16 function (p: 398-399) | http://en.cppreference.com/w/c/string/multibyte/mbrtoc16 | CC-MAIN-2016-30 | refinedweb | 159 | 61.6 |
Template talk:Sex/Archive1
From RationalWiki
This is an archive page, last updated 8 November 2016. Please do not make edits to this page.
12/13[edit]
Seems to be something wrong with this template - sometimes it displays 12 entries, sometimes 13. My limited Wiki skillz are insufficient to locate the problem, though. --AKjeldsenGodspeed! 12:00, 11 October 2007 (EDT)
- The template was a part of the Human Sexuality category. I made the category part "includeonly" and now get 13 results consistently (refreshed 10-20 times or so without getting 12 results), so I think it's fixed... --Sid 15:42, 11 October 2007 (EDT)
- Thanks guys! I was surprised I got it to work at all... human
17:01, 11 October 2007 (EDT)
- I don't think it worked, I just counted 12. Here's what I think is happening - it is quite possible for the template to include the article currently being viewed, and I never really worried about it, thinking if that happened at would just turn up as a black link. However, I bet that it is simply not listing it when that happens. So testing the template itself won't show that issue (now that the cat has been includeonlyed). Is there a way to eliminate the "current pagename" from consideration for the random list? Not that this is really important, except for people thinking we can't count sometimes... or that we have cheap bakers... human
17:50, 11 October 2007 (EDT)
- I know how to fix it, but I'm hesitating:).
- It's exactly what we're looking for, but "may lead to runtime errors" sounds kinda ominous - no idea how they'd show up or what they'd do. This isn't my place, I'm not a sysop, so I won't put stuff in that may do damage.
- I'll try something else now, but I felt like pointing out that there is an easy, but potentially buggy solution anyway. So "BRB". --Sid 18:38, 11 October 2007 (EDT)
- And back. I think the workaround does what I want it to do, although there might be bugs (which I doubt at this point). I took out all pages matching the title {{PAGENAME}} from the selection between the steps "all pages from this category" and "13 random articles". Checking a couple of times led to 13 articles, so I think it worked.
- My main concern had been that the matching would be too greedy, as in: the Sex page would never have Sex toys in its list. However, the matching seems to be exact, so the only worry would be two articles with the same pagename, but in different namespaces. But I don't think we actually have that case... and even then, we could probably work around that, too... maybe... --Sid 18:47, 11 October 2007 (EDT)
- Yup, you didn't break it, and might have fixed it. And if we had, say, "sex" and "fun:sex" they should either be merged, or link to each other explicitly anyway. human
18:57, 11 October 2007 (EDT)
category[edit]
Do you guys think it is good practice to automate the catting like I did? human
18:58, 11 October 2007 (EDT) | https://rationalwiki.org/wiki/Template_talk:Sex/Archive1 | CC-MAIN-2020-29 | refinedweb | 540 | 71.75 |
Gunicorn not using right settings file Django?
In my wsgi.py I am conditionally setting DJANGO_SETTINGS_MODULE to two different files (local and production).
On the server I have set the "PROD" variable in /etc/profile.
if "PROD" in os.environ: os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings") else: os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings_dev")
But I am still getting an error because the right settings file is not being used, so maybe the if condition isn't working. See the pic below.
My gunicorn.service
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=root
Group=www-data
WorkingDirectory=/home/myproject/myproject
ExecStart=/usr/local/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/myproject/myproject/myproject.sock myproject.wsgi:application

[Install]
WantedBy=multi-user.target
1 answer
- answered 2018-01-14 11:28 illagrenan
You should set the PROD environment variable in your Gunicorn systemd configuration this way:
[Service]
Environment=PROD=True
User=root
Group=www-data
WorkingDirectory=/home/myproject/myproject
ExecStart=/usr/local/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/myproject/myproject/myproject.sock myproject.wsgi:application
// EDIT: I overlooked that you set the env var in /etc/profile... You can try my solution; otherwise I will delete my answer.
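For completeness, the branch in wsgi.py can also be written as a small function, which makes it easy to check outside the server. This is a sketch using the same PROD convention and module names as the question:

```python
import os

def pick_settings(env=None):
    """Return the settings module name based on the PROD marker."""
    if env is None:
        env = os.environ
    # Mirrors the conditional from the question's wsgi.py.
    if "PROD" in env:
        return "myproject.settings"
    return "myproject.settings_dev"

# In wsgi.py:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", pick_settings())

print(pick_settings({"PROD": "1"}))   # myproject.settings
print(pick_settings({}))              # myproject.settings_dev
```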
- Django AD authentication
I have a Django project hosted on Azure IIS, and I want to validate the user id and password (not search) against my organization's AD. Can anyone suggest a way? I have already tried django_auth_ldap, but it does not offer a way to just authenticate. I tried installing python-ldap, but it shows an error about VC++ even though I already have VC++ installed. The error is:
C:\Users\username\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DHAVE_TLS -DHAVE_LIBLDAP_R -DHAVE_SASL -DLDAPMODULE_VERSION=2.5.2 "-DLDAPMODULE_AUTHOR=python-ldap project" "-DLDAPMODULE_LICENSE=Python style" -IModules -I/usr/include -I/usr/include/sasl -I/usr/local/include -I/usr/local/include/sasl -Ic:\python27\include -Ic:\python27\PC /TcModules/LDAPObject.c /Fobuild\temp.win-amd64-2.7\Release\Modules/LDAPObject.obj
LDAPObject.c
c:\users\username\appdata\local\temp\pip-build-dqfo_q\python-ldap\modules\errors.h(7) : fatal error C1083: Cannot open include file: 'lber.h': No such file or directory
error: command 'C:\\Users\\username\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\amd64\\cl.exe' failed with exit status 2.
- Dynamic field in Django forms
I have a django form where a particular field (called parameter) is repeated multiple times as per the users choice. For eg, when a user is creating an object, he/she may choose to include 1, 2 or n parameters. So I want the form to be initialized with only 1 parameter field. Then if the user clicks on say a + symbol, another parameter field would pop up. Is there a way to do this with Django forms?
- How to create Deadlock in Windows or linux with a programing language?
I want to create a deadlock situation on Windows and on Linux as well. I want to write code that will cause this situation. In what language should I write the code? I want to use Python, but I'm not sure whether it is platform independent or dependent. Any suggestions?
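As a sketch in Python (the stdlib threading module runs the same on Windows and Linux), the classic two-lock, opposite-order deadlock looks like this. The second acquire uses a timeout only so the demo can report the deadlock instead of hanging forever; drop the timeouts for a real hang:

```python
import threading

# The classic two-lock deadlock: each thread takes one lock, waits until
# the other thread holds its lock, then tries to take the second one.
lock_a = threading.Lock()
lock_b = threading.Lock()
both_held = threading.Barrier(2)   # rendezvous: both first locks are held
result = {}

def worker_1():
    lock_a.acquire()               # never released: keeps the deadlock alive
    both_held.wait()
    result["t1_got_b"] = lock_b.acquire(timeout=0.5)

def worker_2():
    lock_b.acquire()               # never released either
    both_held.wait()
    result["t2_got_a"] = lock_a.acquire(timeout=0.5)

t1 = threading.Thread(target=worker_1)
t2 = threading.Thread(target=worker_2)
t1.start(); t2.start()
t1.join(); t2.join()
print(result)  # both acquisitions fail: each thread holds what the other wants
```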
- tcl sed with variable
Hi, I am trying the sed command in Tcl but there is something I am misunderstanding.
I run a for loop over the sed command; the items of the list do occur in the file (they are read from the file) but nothing gets replaced.
I also puts the variables to check whether the items are wrong, and the targets are all what I want.
How can I modify my sed command?
for {set i 0} {$i < [llength $moduleList]} {incr i} { set mTarget [lindex $moduleList $i] puts $mTarget set mReplace [lindex $moduleReplaceList $i] #puts $mReplace exec sed -i {s/$mTarget/$mReplace/g} $verilog_new }
I found a workaround: write a csh script and then execute it, something like
set fout [open ./sedShell.sh w] puts $fout "#/bin/csh\n" for {set i 0} {$i < [llength $moduleList]} {incr i} { set mTarget [lindex $moduleList $i] set mReplace [lindex $moduleReplaceList $i] set tmp "" puts $fout [append tmp "sed -i 's/" $mTarget "/" $mReplace "/g' " $verilog_new ] } close $fout exec chmod 755 sedShell.sh exec sedShell.sh exec /bin/rm -f sedShell.sh
It correctly does what I want, but I don't know why I cannot do the same thing in Tcl.
- bash + how to avoid specific messages in the log file
When I run the bash script on my Linux machine, we get the following errors in the log.
note - we set in the script:
exec > $log 2>&1 (in order to send all standard error/output to $log)
the error messages:
tput: No value for $TERM and no -T specified
In order to filter these error messages we tried to set the following in the bash script:
export TERM=xterm
but it did not help.
After digging we found that this happens in certain cases, for example when we perform an SSH to a remote machine and run commands on it via SSH.
in order to avoid that we set TERM=xterm in the bash script as the following:
ssh user@$remote_machine "TERM=xterm /tmp/script.sh"
but this is not a very elegant solution, and because my script uses a lot of ssh and curl commands it is not practical to set this on each ssh or curl call, etc.
so my big question is
how can we filter the message "tput: No value for $TERM and no -T specified"
so that we don't get these ugly messages in the $log file?
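One option (a sketch, not from the original thread): instead of exporting TERM everywhere, pass stderr through a grep -v filter so the tput noise never reaches the log. Demonstrated below on sample text so it is easy to verify; in the script itself the same filter can wrap the stderr redirection, e.g. exec > "$log" 2> >(grep -v '^tput: No value for' >> "$log"):

```shell
# Filter the tput warning out of a stream while keeping real errors.
printf 'a real error\ntput: No value for $TERM and no -T specified\n' \
  | grep -v '^tput: No value for' > filtered.log
cat filtered.log   # prints only: a real error
```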
- What is the point of using Gunicorn when hosting a Django project on Linux VM
I have an Linux instance running on Google Compute Engine. I installed pip and django on it and cloned a Django project that I worked on locally. Like I would on localhost I ran my app like so:
python3 manage.py runserver 0.0.0.0:8080, and my server was up and running with no problems. I read online that WSGI servers are required for Python apps to run well on servers; however, I don't see why I would need something like Gunicorn to run my app.
- 'Welcome' message from Nginx shown when I run my wsgi Django project + Nginx + Gunicorn
I'm a beginner in Django and I've finished my first project. I have an Ubuntu server at DigitalOcean, and this is what I've done:
My project nginx configuration file:
server { server_name domain"'; }
My project is located in /opt/myenv/myenv/
When I execute gunicorn myproject.wsgi it looks like it's running:
Listening at: (1481)
But when I access my IP, I just see the welcome message from Nginx. What is happening? (Sorry for my bad English.)
- Gunicorn Reap workers waitpid 134
I have an issue with gunicorn which is closing workers after status 134 from os.waitpid.
def reap_workers(self):
    """\
    Reap workers to avoid zombie processes
    """
    try:
        while True:
            wpid, status = os.waitpid(-1, os.WNOHANG)
            if not wpid:
                break
            if self.reexec_pid == wpid:
                self.reexec_pid = 0
            else:
                # A worker was terminated. If the termination reason was
                # that it could not boot, we'll shut it down to avoid
                # infinite start/stop cycles.
                exitcode = status >> 8
                if exitcode == self.WORKER_BOOT_ERROR:
                    reason = "Worker failed to boot."
                    raise HaltServer(reason, self.WORKER_BOOT_ERROR)
                if exitcode == self.APP_LOAD_ERROR:
                    reason = "App failed to load."
                    raise HaltServer(reason, self.APP_LOAD_ERROR)
                worker = self.WORKERS.pop(wpid, None)
                if not worker:
                    continue
                worker.tmp.close()
                self.cfg.child_exit(self, worker)
    except OSError as e:
        if e.errno != errno.ECHILD:
            raise
So in this method from gunicorn/arbiter.py, os.waitpid returns status 134, and after that the worker is popped from the list of workers. My question is: why do I get this status? Thanks
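An editorial aside, not from the original thread: a raw wait status packs both the exit code and the signal number, and the os module can decode it without shifting bits by hand. An exit code of 134 follows the shell convention 128 + 6, i.e. something inside the worker aborted (SIGABRT):

```python
import os

def describe_status(status):
    """Decode a raw status as returned by os.waitpid (POSIX only)."""
    if os.WIFEXITED(status):
        return "exited with code %d" % os.WEXITSTATUS(status)
    if os.WIFSIGNALED(status):
        return "killed by signal %d" % os.WTERMSIG(status)
    return "stopped or continued"

# What gunicorn's `status >> 8` would see as exit code 134:
print(describe_status(134 << 8))  # exited with code 134
# A worker killed directly by SIGABRT looks different:
print(describe_status(6))         # killed by signal 6
```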
Web Components are a set of APIs that allow you to create custom HTML tags and use them alongside standard tags. They provide many advantages such as portability across projects, encapsulation of state, and access to a shadow DOM. Web Components are also functionally independent from frontend frameworks. You can use your components in a React, Angular, Vue, or vanilla project. For example, let’s say you wanted to create a simple feed of photos with related titles in a React app. Instead of writing some typical JSX like this:
photos.map(photo => {
return (
<div className="photo">…
I am a student and software developer from Texas, studying SE at UTD. I read, create websites, apps, and drone videos. Check out my projects at liam.mcmains.net | https://medium.com/@liam.john.mcmains | CC-MAIN-2021-25 | refinedweb | 126 | 57.98 |
When you run the following code using VPython's VIdle on Windows, an interactive window pops up where you can rotate the scene by holding down the right mouse button:
from visual import *
floor = box(pos=(0,0,0), length=4, width=4,
            height=0.01, color=color.blue)
xaxis = arrow(pos=(0,0,0), axis=(1,0,0), color=color.green)
yaxis = arrow(pos=(0,0,0), axis=(0,1,0), color=color.red)

Does anyone know a different way to draw and interact with 3D vectors?
2 comments:
You can draw vectors like this in VMD. See . I have no idea if the Python API supports the "draw" commands.
We use vmd to draw vectors, we hacked together a handy little tcl script which draws the vector with the origin on a given atom and draws a label at the end.
It's extra nice since you can easily draw orbitals and stuff in VMD also. | http://baoilleach.blogspot.com/2011/01/debugging-vectors-in-3d.html | CC-MAIN-2016-36 | refinedweb | 158 | 72.16 |
> -----Original Message-----
> From: Martin Sebor [mailto:msebor@gmail.com] On Behalf Of Martin Sebor
> Sent: Tuesday, July 01, 2008 11:08 PM
> To: dev@stdcxx.apache.org
> Subject: Re: Tuple status
>
> Eric Lemings wrote:
> >
> > I got this problem fixed with my latest change.
> >
> > The tuple test suite needs to be bolstered (suggestions for
> additional
> > test strategies greatly appreciated BTW) but the implementation as a
> > whole is stable enough for code review.
>
> Excellent! It looks very clean -- I like it. Just a few quick
> questions and comments on the implementation (I haven't looked
> at the tests yet):
>
> 1. <tuple> should only #include the traits header(s) for the
> the traits it needs, not all of <type_traits>, and use only
> the __rw_xxx traits (not std::xxx)
I stopped including individual headers at one point when I reached
7 or 8 headers so I just included the whole lot. That was before
my last big change however so I could probably go back to individual
headers.
>
> 2. would it be possible (and would it make sense) for <tuple>
> to forward declare reference_wrapper instead of #including the
> whole definition of the class to reduce the size of translation
> units that don't use the class?
I believe it would.
> 3. why is is the base class __rw_tuple rather than std::tuple?
> (seems there's a lot of duplicate code between the two)
It's primarily because of the specialization for pairs. For that
specialization to work with the tuple helpers, it needs to have the
same structural layout (i.e. recursive inheritance and data storage)
as the generalization. The best way for both to work in the same
helpers (mini-algorithms really) is to use a common base.
I agree though: there is a lot of duplication and if it were not for
the special pair ctors and assignment operators, there would probably
be only one class template.
>
> 4. assuming the answer to (3) is yes, are the casts in tuple
> copy ctor and copy assignment necessary or might there be
> a cleaner way to accomplish the same thing? if not, please
> consider adding a brief comment explaining why they are
> important
The casts are necessary to make sure the public ctors and operators
bind to the correct ctors and operators in the internal base class.
The copy ctor for example:
tuple (const tuple& __tuple)
: _Base (_RWSTD_STATIC_CAST (const _Base&, __tuple)) { /* empty
*/ }
The object type and argument type are both `std::tuple<_TypesT...>'.
We want this ctor to call the corresponding ctor in the base class.
Namely,
__rw_tuple (const __rw_tuple& __tuple)
...
Without the cast, the argument type passed to the internal ctor is
still `std::tuple<_TypesT...>' and, because of the template ctors,
the compiler would therefore bind to the templated copy ctor wisely
deeming this ctor a better "fit".
template <class _HeadU, class... _TailU>
__rw_tuple (const __rw_tuple<_HeadU, _TailU...>& __tuple)
...
This is not what we want. (On a side note, it appears that the C++0x
mode in GCC 4.3.x -- because of rvalue references I'm guessing -- is
much more type strict than current and past versions.)
In the helper functions, the accessors are defined in the internal
class template. (The public class template doesn't even really have
a head and tail.) So in this case, the casts are needed to expose
these internal accessors.
>
> 5. assuming the answer to (3) is yes, would it be better to
> define the [in]equality comparison for __rw_tuple as (possibly
> member) functions called _C_equal and _C_less to avoid the
> casts in the same operators in tuple and reduce the size of
> the overload set?
I tried doing it that way but the terminating specialization proved a
little tricky (they would have to be added to the empty specialization)
so I just wrote them outside the template.
>
> 6. we can and should remove the _RWSTD_NO_EXT_CXX_0X guards
> from all implementation headers under include/rw/
The assumption is that only we include those headers and presumably
we only include them in the right public headers and IF end users
include these internal headers (which they shouldn't do) then they
should get errors?
>
> 7. the right place to #define _RWSTD_XXX macros is <rw/_defs.h>
Which ones in particular?
>
> 8. why is std::ignore typedef'd to __rw_ignore when the latter
> isn't used anywhere (AFAICS), and shouldn't it be defined in
> a .cpp file to avoid multiply defined symbols?
It can be used in the tie() function which I still need to implement.
>
> 9. shouldn't tuple_element take size_t as template argument
> as per LWG issue 755
Yes.
>
> 10. the non-trivial bits could use some comments :)
Non-trivial? Which parts are non-trivial?
Hehe.
Just point them out and I'd be glad to comment them.
>
> In __rw_tuple, are the accessors necessary? Would it be possible
> (and cleaner) to declare the appropriate specialization(s) of
> __rw_tuple a friend instead and have it (them) access the data
> members directly?
I try to avoid friends as a general rule but I did consider it and
IIRC the conclusion I came to was that it would really make a mess.
>
> A few notes on style :)
>
> a) please re-read points (0), (1), and (5) here:
>
Yeah, need to update for 1 and 5. Don't see 0 though.
>
> b) please use the name _TypeT (as opposed to _Type) for template
> parameters
What's wrong with _Type? :)
Okaaayy fine. (Still doesn't make sense though and I still think this
convention ought to be relaxed.)
>
> c) in ctor initializer lists spanning multiple lines, please
> avoid dropping the colon or the comma; i.e.,
>
> struct A: Base {
> int _C_i;
> A (int i): Base (), _C_i (i) { /* empty */ }
>
> or (for long initializer lists):
>
> A (int i):
> Base (very_long_sequence_of_arguments),
> _C_i (i) {
> // empty
> }
>
> d) I realize we haven't formally closed the VOTE on the variadic
> template parameters but I think we're all in agreement so we
> might as well start using the right naming convention (_TypeT
> and _TypesT, etc.)
Not a problem... for me at least. I think it will make this code in
particular though harder to read. The reader will constantly have to
remind himself/herself "_TypeT is the head, _TypesT is the tail".
Brad. | http://mail-archives.apache.org/mod_mbox/incubator-stdcxx-dev/200807.mbox/%3CCFFDD219128FD94FB4F92B99F52D0A4901DB0612@exchmail01.Blue.Roguewave.Com%3E | CC-MAIN-2016-30 | refinedweb | 1,037 | 62.07 |
I'm fairly new to the C++ world (I did some C work years ago) so I'm a little rusty on some things :) I've got a small table in a database and I'm trying to grab some date information out of a UNIX epoch timestamp (all seconds). I've tried tackling this with time.h, but I can't seem to find anything in it that allows a 'from string' type of approach, so I thought I'd ask for some advice.
My database is on MySQL 5 (5.0.45) and is on a Fedora box. The table looks like this:
Table: weather +-------------+---------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-------------+---------+------+-----+---------+-------+ | id | char(4) | NO | PRI | | | | last_update | int(11) | YES | | 0 | | +-------------+---------+------+-----+---------+-------+
The data looks like this:
mysql> select * from site where id = 'kict'; +------+-------------+ | id | last_update | +------+-------------+ | kict | 1218689222 | +------+-------------+ 1 row in set (0.07 sec)
The date string you see in the table is pulled via cURL as the FILETIME. I checked to see whether you could specify the date format that it supplies, but this is not possible (as far as I could tell).
What I am trying to accomplish is taking the date from the above database and pulling out certain pieces such as the month, day and year.
Here is my code so far with my 'rough' implementation, but it isn't working.
#include <iostream> #include <iomanip> #include <mysql++.h> #include <time.h> using namespace std; char dbHost[20] = "localhost"; char dbUser[20] = "***"; char dbPass[20] = "***"; char dbName[20] = "weather"; int main(int argc, char *argv[]) { // Connect to the DB mysqlpp::Connection conn(false); if (!conn.connect(dbName, dbHost, dbUser, dbPass)) { cerr << "DB Connection Failed: " << conn.error() << endl; return 1; } // Query mysqlpp::Query query = conn.query("select id,last_update,from_unixtime(last_update) from site"); if (mysqlpp::StoreQueryResult res = query.store()) { for (int i = 0; i < res.num_rows(); ++i) { // Here is where I want to convert the time from the result set cout << res[i][0] << " - " << res[i][1] << endl; } } else { cerr << "Failed to get item list: " << query.error() << endl; return 1; } return 0; }
Could someone give me some pointers? Like I said, I'm at wit's end here trying to figure this out!
Thanks!
Kelly | https://www.daniweb.com/programming/software-development/threads/141151/mysql-result-set-and-time-h | CC-MAIN-2017-34 | refinedweb | 374 | 80.82 |
Hi,
My app don't have changes in native part, some days ago, my Android Environment deployed to my Google Nexus S works fine, after some web resources updated or settings changed (i.e. server port), now my Android app don't work, at app launch from emulator, I get an error 'The connection to the server was unsuccessful (), if I run it with my Google Nexus S, no errors raised but the screen don't response my submit request (it call worklight adapter to login), and nothing shown in worklight server log, I updated {showLogger: true}, but no log window shown in Android app.
And I want to check server address from Settings (even if I have checked the wlclient.properties and application-descriptor.xml) but I can't pop up options menu of the worklight app too, this is very strange, some days ago, it works fine and native application for Android is generated by Worklight Studio automatically, no changed made by manual.
I remembered that if my server port is wrong, during app launch, our app open an error dialog with 3 buttons, 3rd button can be used to change server address. now my login page shown correctly, after clicked login button, no any logs shown in worklight server.
I'm very strange what's wrong, I cannot get any logs. For comparison, The app works fine with an Android Browser, desktop browser, etc in previous mode.
Regards
Daniel
Topic
This topic has been locked.
Pinned topic The connection to the server was unsuccessful
2012-06-19T09:22:14Z |
- AntonAleksandrov 270005D80F22 Posts
Re: The connection to the server was unsuccessful2012-06-20T07:29:33Z
This is the accepted answer. This is the accepted answer.First of all, remove showLogger:true property, it is not required on Android environment.
edit your application-descriptor.xml file and make sure that the <worklightURL> element contains the valid URL that your Worklight server can be accessed at.
Uninstall the old application from the device and build/deploy new one.
In case application does not work try to open device's browser and browse to the Worklight server console (usually)
Re: The connection to the server was unsuccessful2012-07-04T01:06:40Z
This is the accepted answer. This is the accepted answer.
- AntonAleksandrov 270005D80F
- 2012-06-20T07:29:33Z
Thanks to Anton,
I verified server URL, I can run the app from preview mode (From Both PC browser and Mobile Browser), the server address is OK and worklight console is open too. I uninstall my app and delete folder /data/data/com.MyApp/
with same codes, on Google Nexus S, login page shown, after input login name/password I clicked login button nothing captured into logcat (I put a WL.Logger.debug('xxx') at begin of JavaScript function), and log messages not captured into logcat in method wlCommonInit too.
it seems that my JavaScript not run for potential error, but I can't get more errors except the following:
I/Web Console(17687): Error in success callback: Network Status2 = TypeError: Cannot call method 'apply' of undefined at.
I don't changed any generated files by worklight studio, at line phonegap.js#708, it is a logging statement only, with above-mentioned error, I can't get call stack, and don't know where the error come from.
run with emulator, it always give 'The connection to the server was unsuccessful, ()', then I updated my activity to following, but still timeout on emulator, my web pages don't refer to remote resources, all JS/CSS/Image packaged locally,
public class MyApp extends WLDroidGap {
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
super.setIntegerProperty("loadUrlTimeoutValue", 30000);
super.loadUrl(getWebMainFilePath(), 60000);
}
}
Anybody get same error or know how to trace the actual root error of above-mentioned phonegap#708?
background:
1.My app uses adapter-based authentication with mode onDemand.
2.Android source code is generated by worklight studio 4.2.1.1183 same as server's version, except setIntegerProperty() part.
thanks for any hints.
Daniel
- DA81_Celine_Calista_Leong 270005DA811 Post
Re: The connection to the server was unsuccessful2012-07-23T07:29:53Z
This is the accepted answer. This is the accepted answer.
- SystemAdmin 110000D4XK
- 2012-07-04T01:06:40Z
I have been getting the same problem as you and after adding this line in the myApp.java file located in the src folder:
super.setIntegerProperty("loadUrlTimeoutValue",60000);
My app started working fine. Maybe increasing the timeout value will do the trick for you.
Regards,
Celine
[Solved] Re: The connection to the server was unsuccessful2012-08-10T02:56:57Z
This is the accepted answer. This is the accepted answer.
- DA81_Celine_Calista_Leong 270005DA81
- 2012-07-23T07:29:53Z
thanks all,
I forget to remove second parameters from invocation to super.loadUrl(), remove it then ok.
by the way, how to make the question as resolved? | https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014845589 | CC-MAIN-2015-27 | refinedweb | 807 | 54.52 |
-- 14 January 1999 -- Leading the Web to its full potential, the World Wide Web Consortium (W3C) today. A W3C Recommendation indicates that a specification is stable, contributes to Web interoperability and has been reviewed by W3C Membership who are in favor of its adoption by the industry..
The design of "Namespaces in XML" is the direct result of W3C's experience with evolving Web technologies. Namespaces allow the Web to scale in a way that promotes interoperability. "We've seen what it takes for technology to move forward in practice. This Recommendation is engineering to make the Web capable of evolving - not just good, but capable of becoming ever better," says Tim Berners-Lee, W3C Director. "As the Web gets bigger, new technology must be able to move slowly from invention in a small community to global adoption. And that must be possible without anyone having to recode existing applications to meet the new standard."
Every XML document contains elements just as every HTML document contains elements (such as the familiar "P", "TABLE", etc.). XML, unlike HTML, allows people to create their own elements to meet their particular needs. A collection of elements is called a "namespace" and this W3C Recommendation describes how to mix two or more of these namespaces. The specification ensures that when two namespaces both contain an element with the same name, applications can distinguish the names by a prefix (just as two telephone numbers may be the same in two cities, distinguished by an area code). It is possible, for example, to mix HTML and MathML, to put structured math content in the middle of a Web page.
Namespaces are already used in W3C's current Working Draft on Reformulating HTML in XML.
Namespaces also mean that applications processing a document will work even if they don't understand all of the namespaces in that document. "Think of an invoice," suggests Dan Connolly, W3C XML Activity Lead. "Most of an invoice like the addresses and quantities and amounts are in regular commercial language. But maybe the descriptions of exactly what parts have been ordered would only be understood by experts manufacturing or using the parts. Still, many people can understand the invoice without having to understand what the part description means. XML namespaces allows a digitally coded document like this invoice to be processed -- without everyone who uses invoices having to agree on a vocabulary for turbojet engine side intake manifold monitor valve mounting nuts, or whatever."
Further information about upcoming developments of XML | http://www.w3.org/Press/1999/XML-Namespaces-REC | CC-MAIN-2015-22 | refinedweb | 419 | 51.48 |
I need to make a program that calculates the area of a circle but I have to use an int, char, and float. No matter what I do though I can't get the second and third variables to output meaningful values, only the first. This is what I have written down so far.
I'm not sure what to do and I've been working on this for hours:( Thanks for any help or advice!I'm not sure what to do and I've been working on this for hours:( Thanks for any help or advice!Code:
#include <stdio.h>
#include "conio.h"
#define PI 3.14159
int main()
{
/* The following is a program to demonstrate several different c numerical input funtions:interger, float, character.*/
float radius1, area_float; //declares the radius that will be used for float
int radius, area_int;
char radius3, area_char;
printf("Input radius for float, please? "); // Promts the user to enter a value for float
scanf("%f", &radius1 ); // obtains the value from the float promt
area_float = PI * radius1 * radius1; //Equation for area of a cricle
printf("area1 = %f\n", area_float); //Prints the answer for float
//Forces the command console to pause because c is used.
printf("Input radius for int value, please? ");
area_int = PI * radius * radius;
scanf("%d", &radius );
printf("area2 =%d\n", area_int);
_getch();
printf("Input radius for char value, please? ");
scanf("%c", &radius3);
area_char= PI * radius3*radius3;
printf("area3 =%c2\n", area_char);
_getch();
return 0;
} | http://cboard.cprogramming.com/c-programming/138558-multivariable-problem-seems-so-simple-but-i-cant-figure-out-printable-thread.html | CC-MAIN-2014-41 | refinedweb | 242 | 72.56 |
I had hear about a method to run a method like an executable, with the arguments being passed in from the commandline. Apparently, Ruby has this with the Rake project. Well, Python has it with Shovel. Only the instructions didn't work for me on windows, so I had to create a batch script to call
shovel correctly.
So here's my setup. I installed shovel using
easy_install shovel, which placed it in my python scripts folder (C:\Python27\Scripts, which is in my PATH). I created 'shovel.bat' (see source) in the same folder. So now I can call
shovel from anywhere and it will call my batch file, passing the parameters into shovel.
For completeness, here is my working directory structure (note the subfolder called shovel):
D:. │ song log.log │ Untitled_10.mp3 │ Untitled_11.mp3 │ Untitled_2.mp3 │ Untitled_3.mp3 │ Untitled_4.mp3 │ Untitled_5.mp3 │ Untitled_6.mp3 │ Untitled_7.mp3 │ Untitled_8.mp3 │ Untitled_9.mp3 │ └───shovel tasks.py
My python script is incomplete but can be run by shovel using
shovel tasks.extractSongInfo "log file name.log" from the directory above the 'shovel' directory. Here's the script:
from shovel import task @task def extractSongInfo(logFile): """Given a log file with lines like "{date} INFO Currently playing - Song: {songName} - Artist: {artist} - Length: {numSeconds}" return a collection of extracted info.""" opId = "Extracting song info" # operation ID print opId print 'Done: ' + opId
- :: @echo off
- :: This file can be called from anywhere, so we need to give the full path to the shovel file (which is in the same directory as this script)
- @setlocal
- @set directory=%~dp0
- @python %directory%shovel %*
- :: Turns out the above works, so no need to inline the contents of 'shovel' as done below. Note, the shovel python file needs to be in a subdirectory called 'shovel'
- :: python -c "__requires__ = 'shovel==0.1.9'; import pkg_resources; pkg_resources.run_script('shovel==0.1.9', 'shovel')" %*
Report this snippet
Tweet | http://snipplr.com/view/66267/getting-shovel-to-work-in-windows-so-python-methods-can-be-run-like-individual-executables/ | CC-MAIN-2014-15 | refinedweb | 314 | 56.35 |
Announces 'Google Tax'
Yeah, fair comment. It occurred to me after I wrote that that I could have better stated it that individuals have a duty to pay their taxes.
UK Announces 'Google Tax'
As the parent poster stated: "in EXCHANGE", or in other words you're paying for it. You can't rebut an statement if you've only bothered to read half of it.
Parent poster is right: western nations with higher tax rates generally do offer more stable, safer, societies, and corporations are all too happy to take advantage of that without making their contribution to that society.
And yes, governments do have the right to tax you. You can bleat about libertarianism all you like, but elected governments get to make the laws, and you can either abide by them, or piss off somewhere else without any pesky laws. I understand Somalia is nice at this time of year.
Antarctic Ice Loss Big Enough To Cause Measurable Shift In Earth's Gravity....
DMCA Claim Over GPL Non-Compliance Shuts Off Minecraft Plug-Ins
Ah, I see now. You blindly believed the Slashdot summary (which is in fact a load of bollocks) without doing any research yourself. Never a good idea.
DMCA Claim Over GPL Non-Compliance Shuts Off Minecraft Plug-Ins
You keep using the word dim as an insult towards me yet seem to repeatedly be unable to grasp the concept that offering the open sourcing of the Minecraft server source code as a solution to the problem he has decided to create is identical to asking for the Minecraft source code, I'm not sure why you struggle with with such a simple concept, but apparently you do. Maybe you're, well, a bit dim?
I've used the term dim once, so I'm not sure why you think I "keep" using it; however, I've come to the conclusion that it's pretty well merited here. I'll say it once more, in the probably vain hope that you'll actually read it: Wolfe has neither asked for nor offered the open sourcing of Mojang's server source code. Once again: you made that part up yourself.
DMCA Claim Over GPL Non-Compliance Shuts Off Minecraft Plug-Ins
I can't quite tell if you're a bit dim, or just being deliberately obtuse here. Wolfe has never demanded that Mojang release their proprietary source code (and he doesn't have the legal right to do that anyway). He's saying, correctly, that CraftBukkit cannot legally be distributed under its current license. It can certainly be argued that this is a petty move - he is basically making the entire project unavailable to everyone - and I'm in two minds about that myself, but it's a bit more than just a "pet peeve". It's two years of his work (and Wolfe was one of the biggest contributors to the project) under basically false pretences.
It's now up to Mojang to decide how to proceed. They have several choices, only one of which is re-licensing their server code under the GPL (and I very much doubt that will happen). They could alternatively shelve the Bukkit project (they are after all working on their own modding API), or they could meet Wolfe in court.
The problem is not entirely Mojang's fault, but they have basically been storing up trouble for themselves since the release of CraftBukkit in January 2011 by not dealing with the license issue then. It seems pretty clear they wanted the Bukkit project to continue (let's face it, it's certainly helped their bottom line), up to and including acquiring the project, but it was foolish of them not to immediately clarify the licence.
By the way, using the blanket insult "you FOSS zealots" says a lot more about you than it does me. You know nothing about me, or my overall stance on software licensing.
DMCA Claim Over GPL Non-Compliance Shuts Off Minecraft Plug-Ins
Stop being dense. He's not demanding the "Minecraft server source code" and no one who's actually paying attention has ever claimed that. He's saying that his GPL'd contributions can't legally be distributed in CraftBukkit along with Mojang's proprietary classes, and in that respect he's correct.
Of course this has been the case all along - CraftBukkit has never been legally licensed - but until now both Mojang and the Bukkit development team have basically swept that problem under the carpet. What gotten Wolfe (and no doubt other) contributors annoyed is that Mojang concealed their ownership of Bukkit until the current Bukkit team basically threw in the towel due to the difficulty of keeping CraftBukkit up-to-date wrt. Minecraft. At the point, Mojang basically said, "no problem, we've actually owned Bukkit all along!".
This is the fault of the original Bukkit team (who are now Mojang employees, or have been) for their poor licensing decisions, and also the fault of Mojang for failing to deal with the licensing problems either initially or when they clandestinely acquired the Bukkit project. Wolfe and other contributors may be accused of poor judgement for continuing to contribute to such a legally shaky project, but the original licensing problems are not theirs, and their contributions were made under the false belief that Bukkit remained a community-owned project.
DMCA Claim Over GPL Non-Compliance Shuts Off Minecraft Plug-Ins
Wolfe didn't do the original decompiling of Mojang's code and combining it with GPL'd and LGPL'd code. That was done by individuals who were since hired by Mojang as developers for their own planned modding API.
So while Wolfe's contributed a huge amount of GPL'd code to Bukkit/CraftBukkit, he's not the original infringer. If Mojang want to sue, they'll basically have to sue their own employees (whom they hired knowing full well that they'd been releasing decompiled Mojang code).
CraftBukkit has never been legally licensed. That does not however invalidate the copyright that Wolfe (and every other contributor) has over his own contributions. And he does maintain copyright since Mojang have never required contributors to agree to a Contributor License Agreement.
DMCA Claim Over GPL Non-Compliance Shuts Off Minecraft Plug-Ins
He's not trying to ransom it back. At no point has there been a suggestion that Wolvereness wants any kind of material gain for his code. He's unhappy that the GPL code he contributed to Bukkit is now being distributed by Mojang in breach of the GPL.
DMCA Claim Over GPL Non-Compliance Shuts Off Minecraft Plug-Ins
There's an awful lot of guesswork and name-calling going on here, and a sense that the argument is divided into "Mojang sux!" and "Mojang rulez!" camps. The truth is of course, way more complex. Here's an attempt to lay out some (hopefully) objective facts, mixed with my subjective opinion. Disclaimer: I've been involved with Bukkit for over 3 years as a plugin developer and contributor to the forums, and (very minor) contributor to the Bukkit project itself.
* The Bukkit project was started late 2010/beginning 2011 as a replacement for hMod, an earlier server mod, by 4 people: Dinnerbone, EvilSeph, Tahg and Grum.
* The Bukkit project consists of two main deliverables: the Bukkit API, licensed under the GPL, and the CraftBukkit server, licensed under the LGPL.
* The Bukkit API contains no Mojang code. It's the Java JAR against which Bukkit plugins are compiled.
* The CraftBukkit server contains a copy of the Bukkit API (via Maven shading), some original Java classes in the org.bukkit.craftbukkit Java package (most of which either implement Bukkit API interfaces or serve as glue code between Bukkit and Mojang's own code), and decompiled/semi-deobfuscated copies of Mojang classes from Mojang's official minecraft_server.jar.
* Effectively craftbukkit.jar contains code with three separate licenses: GPL, LGPL, and Mojang's proprietary license. My opinion: "what the hell were the Bukkit team thinking when they chose this license model?"
* Mojang were presumably aware of CraftBukkit from the start, it being the pre-eminent server modding platform, but chose not to take any legal action over the inclusion of (decompiled copies of) their code in a GPL'd project. In fact, Mojang even went so far as to supply deobfuscation mappings to the original Bukkit team, so it's very clear that they supported the Bukkit project. My opinion: "of course they would, it's helped their sales enormously"
* Early 2012, Mojang announced that the original Bukkit team (Dinnerbone, EvilSeph, Tahg and Grum) were being hired to work on Mojang's own planned modding API. Note that this API had been announced some time before, and has yet to materialise.
* Apparently at the same time, Mojang also acquired rights to the entire Bukkit project. This, however, was not publicised.
* The Bukkit project continued under new direct leadership (mainly feildmaster, Wolvereness and Amaranth) after the original team were hired by Mojang. feildmaster recently stated that Mojang stopped providing deobfuscation mappings shortly after the original team were hired. However, Mojang allowed the project to continue, and did not take any legal action over the CraftBukkit server.
* EvilSeph left Mojang shortly after, and is the only member of the original four to remain involved with the Bukkit project.
* Cut to last month: EvilSeph posted an announcement that the Bukkit project was being ended, due to the increasing difficulty of updating it for new Minecraft releases (remember: no more deobfuscation mappings from Mojang), and concerns of its legal status being exacerbated by Mojang's recent EULA changes.
* At this point, Mojang sprang into action, asserting ownership over the entire Bukkit project. Dinnerbone tweeted that he'd personally update Bukkit for the new Minecraft 1.8 release. Mojang's Jeb confirmed that Mojang owns Bukkit (I quote his tweet: "we checked the receipts"), having acquired it when the original four joined Mojang in 2012.
* The revelation over Mojang's ownership of Bukkit caused very significant consternation for many contributors to the project (there are around 170 individuals, including myself, who have contributed code licensed under the GPL).
* Wesley Wolfe aka Wolvereness, one of the new Bukkit dev team leaders, filed a DMCA takedown notice on Sep 5th, on the grounds that Mojang cannot legally distribute CraftBukkit when it contains both GPL'd code from him, and proprietary code from Mojang.
* In the last couple of days, pretty much all of the existing Bukkit staff (forum administration, plugin review & approval...) have resigned, effectively gutting the project.
* Also in the last couple of days, a new modding initiative called Sponge has been started by many individuals from the Bukkit project as well Forge, Spout, Flow - other leading Minecraft modding efforts. It's intended to be a higher-level abstraction layer on top of Forge, and won't be Bukkit compatible or contain any code from Bukkit. Given the calibre of many involved and the solid foundation of Forge, it has a very strong chance of success IMHO.
So those are the facts, basically. My opinions:
* The Bukkit team should not originally have chosen a LGPL license for CraftBukkit. And given that the Bukkit API is licensed under the GPL, and distributed in the CraftBukkit JAR file, this was also a poor licensing choice.
* Mojang had the opportunity to complain about this licensing decision at the start. Not only did they choose not to, but they provided tacit support to the Bukkit team, and later hired them as developers for their own API.
* Mojang's failure to make it clear that they owned the project subsequent to hiring the original dev team was unfortunate, and left many contributors feeling deceived when their ownership came to light. There is a feeling that people were being tricked into writing code for Mojang for free, when they believed they were contributing to a community-driven project.
* Wolvereness's decision to file a takedown is justified, although it's stirred up a huge amount of resentment from people who believe they're automatically entitled to software that he and others have spent a lot of time writing for little or no recompense. In addition, it may have the useful effect of forcing a solid resolution to the affair.
tl;dr Bukkit is most likely dead.
Don't Fly During Ramadan.
JavaScript Comes To Minecraft
Right, and there are also multiple Bukkit plugins which allow scripting in various languages - JRuby & Jython, for example. And WorldEdit already allows a degree of Javascript. I'm sure this particular mod is cool, but it's absolutely nothing new. People have been extending Minecraft with scripting tools for well over a year now.
Slashdot... 2011's news in 2013!
2 New Social Networks With Very Different Political Twists
For an example of her (lack of) grasp of politics, or indeed common sense:
Employees Admit They'd Walk Out With Stolen Data If Fired
So, they asked 7 people?
A Memory of Light To Be Released January 8, 2013
Not me. I enjoyed the first three books, slogged through the next five in the hope the pace would pick up, and gave up halfway through Winter's Heart. At that point, I gave up caring about how the bloody things ends
:)
Sony Music Greece Falls To Hackers
Nah.
Firefox 4.0 Beta Candidate Available
But will Firefox stay relevant? Chrome is coming up fast and Mozilla seems to be stagnating.
Until Chrome supports the use of a master password (which, since the devs won't even admit is a serious problem, seems unlikely), Firefox will continue to be my default browser. Pity, since Chrome has a lot going for it otherwise.
The Cell Phone Has Changed — New Etiquette Needed
When checking out at any store, do NOT ignore the cashier while talking on the phone. The rest of us would like to check out as well.
Agreed - that's one of the rudest, most pig-ignorant kinds of behaviour I've witnessed. It is treatable, however, with this.
Google Upgrades Chrome To Beta For OS X, Linux.
Go, Google's New Open Source Programming Language
There is basically zero quality control, anyone can put any module up they want and use any namespace. They don't have to offer ANY documentation
Sure, but since you can check the namespace and browse the docs before you choose to install the module, is that such a problem? I admit the quality control is limited, but there is a review facility which is reasonably well-used -see.
if they go AWOL and stop maintaining the module, it just stays there, festering
Just like any other open-source project then. | http://beta.slashdot.org/~Des+Herriott | CC-MAIN-2014-52 | refinedweb | 2,459 | 59.03 |
Command Query separation is nothing new. It's yet another attempt to manage the impedance mismatch between how your application uses data and how the underlying store manages data, so that transactional updates can be served with the same agility as read-only queries. Bertrand Meyer made this distinction long back when he mentioned that ..
"The features that characterize a class are divided into commands and queries. A command serves to modify objects, a query to return information about objects."

Processing a command involves manipulation of state - hence the underlying data model needs to be organized in a way that makes updation easier. A query needs to return data in the format the user wants to view them. Hence it makes sense to organize your storage likewise so that we don't need to process expensive joins in order to process queries. This leads to a dichotomy in the way the application, as a whole, requires processing of data. Command Query Responsibility Segregation (CQRS) endorses this separation. Commands update state - hence produce side-effects. Queries are like pure functions and should be designed using applicative, completely side-effect free approaches. So, the CQRS principle, as Bertrand Meyer said is ..
"Functions should not produce abstract side-effects."

Greg Young has delivered some great sessions on DDD and CQRS. In 2008 he said "A single model cannot be appropriate for reporting, searching and transactional behavior". We have at least two models - one that processes commands and feeds changes to another model which serves user queries and reports. The transactional behavior of the application gets executed through the rich domain model of aggregates and repositories, while the queries are directly served from a de-normalized data model.
CQRS and Event Sourcing
One other concept that goes alongside CQRS is that of Event Sourcing. I blogged about some of the benefits that it has quite some time back and implemented event sourcing using Scala actors. The point where event sourcing meets CQRS is how we model the transactions of the domain (resulting from commands) as a sequence of events in the underlying persistent store. Modeling persistence of transactions as an event stream helps record updates as append-only event snapshots that can be replayed as and when required. All updates in the domain model are now being translated into inserts in the persistence model. And this gives us an explicit view of all state changes in the domain model.
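To make the "updates become inserts" idea concrete, here is a minimal in-memory sketch. The names (`Event`, `EventLog`) are my own illustrative assumptions, not types from any library: every state change is captured as an event appended to a log, and the current state is rebuilt by replaying the log.

```scala
// Minimal event-sourcing sketch: updates are never made in place,
// they are recorded as events in an append-only stream.
trait Event[S] {
  def apply(state: S): S            // how this event transforms the state
}

class EventLog[S](initial: S) {
  private var events = Vector.empty[Event[S]]

  // an "update" in the domain model becomes an insert here
  def record(e: Event[S]): Unit = events = events :+ e

  // replaying the append-only stream rebuilds the current state
  def replay: S = events.foldLeft(initial)((s, e) => e(s))
}
```

Because the stream is append-only, any historical state can also be recovered by replaying a prefix of it.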
Over the last few days I have been playing around with implementing CQRS and Event Sourcing within a domain model, using the principles of functional programming and actor-based asynchronous messaging. One of the big challenges is to model updates in a functional way and store them as sequences of event streams. In this post, I will share some of the experiences and implementation snippets that I came up with over the last few days. The complete implementation, so far, can be found in my github repository. It's very much a work in progress, which I hope to enrich more and more as I get some time.
A simple domain model
First the domain model and the aggregate root that will be used to publish events .. It's a ridiculously simple model for a security trade, with lots and lots of stuff elided for simplicity ..
// the main domain class
case class Trade(account: Account, instrument: Instrument, refNo: String,
market: Market, unitPrice: BigDecimal, quantity: BigDecimal,
tradeDate: Date = Calendar.getInstance.getTime, valueDate: Option[Date] = None,
taxFees: Option[List[(TaxFeeId, BigDecimal)]] = None,
netAmount: Option[BigDecimal] = None) {
override def equals(that: Any) = refNo == that.asInstanceOf[Trade].refNo
override def hashCode = refNo.hashCode
}
For simplicity, we make reference number a unique identifier for a trade. So all comparisons and equalities will be based on reference numbers only.
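For instance, two trades with the same reference number compare equal even if every other field differs (here `acc`, `ibm` and `NewYork` are assumed to be previously constructed `Account`, `Instrument` and `Market` values):

```scala
// acc, ibm, NewYork: hypothetical, pre-built domain values
val t1 = Trade(acc, ibm, "R-1001", NewYork, unitPrice = 100, quantity = 10)
val t2 = Trade(acc, ibm, "R-1001", NewYork, unitPrice = 250, quantity = 99)

t1 == t2           // true - equality is decided by refNo alone
Set(t1, t2).size   // 1    - hashCode agrees with equals
```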
In a typical application, the entry point for users is the service layer that exposes facade methods that render use cases for the business. In a trading application, two of the most common services performed on a security trade are its value date computation and its enrichment. So when the trade passes through its processing pipeline, it gets its value date updated, then gets enriched with the applicable taxes and fees, and finally with its net cash value.
If you are a client using these services (again, overly elided for simplicity) you may have the following service methods ..
class TradingClient {
// create a trade : wraps the model method
def newTrade(account: Account, instrument: Instrument, refNo: String, market: Market,
unitPrice: BigDecimal, quantity: BigDecimal, tradeDate: Date = Calendar.getInstance.getTime) =
//..
// enrich trade
def doEnrichTrade(trade: Trade) = //..
// add value date
def doAddValueDate(trade: Trade) = //..
// a sample query
def getAllTrades = //..
}
In a typical implementation these methods will invoke the domain artifacts of repositories that will either query aggregate roots or do updates on them before being persisted in the underlying store. In a CQRS implementation, the domain model will be updated but the persistent store will record these updates as event streams.
So now we have the first problem - how do we represent updates in the functional world so that we can compose them later when we need to snapshot the persistent aggregate root?
Lenses FTW
I used type lenses for representing updates functionally. Lenses solve the problem of representing updates so that they can be composed. A lens between a set of source structures
Sand a set of target structures
Tis a pair of functions:
-
getfrom
Sto
T
-
putbackfrom
T x Sto
S
For more on lenses, have a look at this presentation by Benjamin Pierce. scalaz contains lenses as part of its distribution and models a lens as a case class containing a pair of
getand
setfunctions ..
case class Lens[A,B](get: A => B, set: (A,B) => A) extends Immutable { //..
Here are some examples from my domain model for updating a trade with its value date or enriching it with tax/fee values and net cash value ..
// add tax/fees
val taxFeeLens: Lens[Trade, Option[List[(TaxFeeId, BigDecimal)]]] =
Lens((t: Trade) => t.taxFees,
(t: Trade, tfs: Option[List[(TaxFeeId, BigDecimal)]]) => t.copy(taxFees = tfs))
// add net amount
val netAmountLens: Lens[Trade, Option[BigDecimal]] =
Lens((t: Trade) => t.netAmount,
(t: Trade, n: Option[BigDecimal]) => t.copy(netAmount = n))
// add value date
val valueDateLens: Lens[Trade, Option[Date]] =
Lens((t: Trade) => t.valueDate,
(t: Trade, d: Option[Date]) => t.copy(valueDate = d))
We will use the above lenses for updation of our aggregate root and also wrap them into closures for subsequent feed into the event stream for persistent storage. In this example I have implemented in-memory persistence for both the command and the query store. Persistence into an on disk database will be available very soon at a github repository near you :)
Combinators that abstract state processing
Let's now define a couple of combinators that encapsulate our transactional service method implementations within the domain model. Note how the lenses have also been abstracted away from the client API as implementation artifacts. For details of these implementations please visit the github repo that contains a working model along with test cases.
// closure that enriches a trade with tax/fee information and net cash value
val enrichTrade: Trade => Trade = {trade =>
val taxes = for {
taxFeeIds <- forTrade // get the tax/fee ids for a trade
taxFeeValues <- taxFeeCalculate // calculate tax fee values
}
yield(taxFeeIds map taxFeeValues)
val t = taxFeeLens.set(trade, taxes(trade))
netAmountLens.set(t, t.taxFees.map(_.foldl(principal(t))((a, b) => a + b._2)))
}
// closure for adding a value date
val addValueDate: Trade => Trade = {trade =>
val c = Calendar.getInstance
c.setTime(trade.tradeDate)
c.add(Calendar.DAY_OF_MONTH, 3)
valueDateLens.set(trade, Some(c.getTime))
}
We will now use these combinators to implement our transactional services which the TradingClient will invoke. Each of these service methods will do 2 things :-
1. effect the closure on the domain model and
2. as a side-effect stream the event into the command store
Sounds like a kestrel .. doesn't it ? Well here's a kestrel combinator and the above service methods realized in my CQRS implementation ..
// refer To Mock a Mockingbird
private[service] def kestrel[T](trade: T, proc: T => T)(effect: => Unit) = {
val t = proc(trade)
effect
t
}
// enrich trade
def doEnrichTrade(trade: Trade) =
kestrel(trade, enrichTrade) {
ts ! TradeEnriched(trade, enrichTrade)
}
// add value date
def doAddValueDate(trade: Trade) =
kestrel(trade, addValueDate) {
ts ! ValueDateAdded(trade, addValueDate)
Back to Akka!
It was only expected that I will be using Akka for transporting the event down to the command store. And this transport is implemented as a asynchronous side-effect of the service methods - just what the doctor ordered for an actor use case :)
With Event Sourcing and CQRS, one of the things that you would require is the ability to snapshot your persistent versions of the aggregate root. The current implementation is simple and does a zero based snapshotting i.e every time you ask for a snapshot, it replays the whole stream for that trade and gives you the current state. In typical real world systems, you do interval snapshotting and start replaying from the latest available snapshot in case you want to get the current state.
Here's our command store modeled as an Akka actor that processes the various events that it receives from an upstream server ..
// CommandStore modeled as an actor
class CommandStore(qryStore: ActorRef) extends Actor {
private var events = Map.empty[Trade, List[TradeEvent]]
def receive = {
case m@TradeEnriched(trade, closure) =>
events += ((trade, events.getOrElse(trade, List.empty[TradeEvent]) :+ closure))
qryStore forward m
case m@ValueDateAdded(trade, closure) =>
events += ((trade, events.getOrElse(trade, List.empty[TradeEvent]) :+ closure))
qryStore forward m
case Snapshot =>
self.reply(events.keys.map {trade =>
events(trade).foldLeft(trade)((t, e) => e(t))
})
}
}
Note how the
Snapshotmessage is processed as a
foldover all the accumulated closures starting with the base trade. Also the command store adds the event to its repository (which is currently an in memory collection) and forwards the event to the query store. There we can have the trade modeled as per the requirements of the query / reporting client. For simplicity the current example assumes the model as the same as our domain model presented above.
Here's the query store, also an actor, that persists the trades on receiving relevant events from the command store. In effect the command store responds to messages that it receives from an upstream
TradingServerand asynchronously updates the query store with the latest state of the trade.
// QueryStore modeled as an actor
class QueryStore extends Actor {
private var trades = new collection.immutable.TreeSet[Trade]()(Ordering.by(_.refNo))
def receive = {
case TradeEnriched(trade, closure) =>
trades += trades.find(_ == trade).map(closure(_)).getOrElse(closure(trade))
case ValueDateAdded(trade, closure) =>
trades += trades.find(_ == trade).map(closure(_)).getOrElse(closure(trade))
case QueryAllTrades =>
self.reply(trades.toList)
}
}
And here's a sample sequence diagram that illustrates the interactions that take place for a sample service call by the client in the CQRS implementation ..
The full implementation also contains the complete wiring of the above abstractions along with Akka's fault tolerant supervision capabilities. A complete test case is also included along with the distribution.
Have fun!
19. | http://debasishg.blogspot.com/2011/01/cqrs-with-akka-actors-and-functional.html?showComment=1385300579017 | CC-MAIN-2017-43 | refinedweb | 1,857 | 54.42 |
I am struggling to put best practises into action, converting an existing file for Cypress testing into a more appropriate format for exporting and importing.
support-file.js
export const1 = () => cy.get('#someId1'); export const2 = () => cy.get('#someId2'); export const3 = () => cy.get('#someId3'); export function myFunct1() { // Do something } export function myFunct2() { // Do something } export function myFunct3() { // Do something }
file-where-used.js
import { const1, const2, const3, myFunct1, myFunct2, myFunct3 } // usage of the consts/functs below
I have experimented with trying to get them into a format such that I do not have to import each separately, but I cannot figure it out... I thought, perhaps wrapping all as a class and exporting that, which does work but only when using a
require rather than
import... And I also found difficulty in exporting my
const variables...
export const1 = () => cy.get('#someId1'); export const2 = () => cy.get('#someId2'); export const3 = () => cy.get('#someId3'); class myClass { myFunct1() { // Do something } myFunct2() { // Do something } myFunct3() { // Do something } } module.exports = new myClass();
You can cut your issues up in several steps.
Custom Commands/functions
Firstly you are creating custom commands like this one:
export function() { // Do something }
By putting that function in the file
cypress/support/commands.js you don't have to import it in the integration files, but you do have to rewrite is like this:
Cypress.Commands.add('myFunct1', function () { // Do something })
What you end up in the integration file is this:
cy.myFunct1()
Global variables
You are assigning global variables like this:
export const1 = () => cy.get('#someId1'); export const2 = () => cy.get('#someId2'); export const3 = () => cy.get('#someId3');
Start with rewriting them to be a constant:
const const1 = () => cy.get('#someId1'); const const2 = () => cy.get('#someId2'); const const3 = () => cy.get('#someId3');
You'll always need to import them one by one, but you can combine them as long as they are in one file. You could do so by importing them into the testfile like this:
import {const1, const2, const3} from '<FILE_DIRECTORY>'
Now they are available trough the whole testfile. | https://cmsdk.com/node-js/converting-cypress-file-into-a-more-appropriate-format-for-exporting.html | CC-MAIN-2019-22 | refinedweb | 336 | 50.53 |
FiPy unable to import LTE library
- markopraakli last edited by markopraakli
Hello
I tried to use this example of my brand new FiPy:
But after "from network import LTE" I got an error:
import socket
import ssl
from network import LTE
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name LTE
Where is LTE library or I missing something?
Firmware: MicroPython v1.8.6-834-g5677703c on 2017-11-13; FiPy with ESP32
We're hoping to use Telstra Australia
- markopraakli last edited by
@jmarcelino said in FiPy unable to import LTE library:
If you tell us the carrier you're planning to use that would help.
Hi, We're plaaning to use Telia Sonera
They should have support for LTE Cat M1 allready. Currently we're using other much expensive GSM modules, if we'll can get use Pycom modules then we're super happy... 100pcs or more :)
- jmarcelino last edited by
Hi @markopraakli
Even though we use an approved modem design the nature of LTE Cat M1 means we have to do some work to certify the product before it's allowed on networks.
We shipped the final test units to carriers just before this main shipments and those are still undergoing certification. Once everything is ready we'll post an update to enable that functionality.
Note only a small number of carriers has yet deployed Cat M1. If you tell us the carrier you're planning to use that would help.
I'll see if I can put up a post today explaining this and asking for feedback on which networks people want to use as it would help with our planning.
@markopraakli
you miss this post
especially this sentence
Sigfox and LTE Cat M1 support will be coming later this week though a new version of the firmware updater tool | https://forum.pycom.io/topic/2295/fipy-unable-to-import-lte-library | CC-MAIN-2022-21 | refinedweb | 310 | 63.53 |
![if !IE]> <![endif]>
Communicating Using Sockets
The Windows Sockets API is based on the BSD Sockets API, so many of the calls are very similar. There are some minor differences in setup. The most obvious differences are the requirements for header files. Listing 6.31 shows the steps necessary to include the networking sockets functions. The Windows header automatically includes the 1.1 ver-sion of the Windows socket library. The #define WIN32_LEAN_AND_MEAN avoids the inclusion of this header and allows the application to include the 2.0 version of the Windows socket library.
The Microsoft compiler also allows the source to contain a directive indicating which libraries are to be linked into the executable. In this case, the library is ws2_32.lib; this is more convenient than having to specify it on the command line. We also allocate a global variable that will be used to store the handle of an event object. The event object will be used to ensure that the server thread is ready before the client thread sends any data.
Listing 6.31 Including the Header Files for Windows Sockets
#ifndef WIN32_LEAN_AND_MEAN
#define WIN32_LEAN_AND_MEAN
#endif
#include <windows.h>
#include <process.h>
#include <winsock2.h>
#include <stdio.h>
#pragma comment( lib, "ws2_32.lib" )
HANDLE hEvent;
Listing 6.32 shows the code for the main thread. The first action that the main thread needs to take is to start the Windows sockets library with a call into WSAStartup(); this takes two parameters. The first parameter is the version number of the library that the application requires; version 2.2 is current. The second parameter is the address of a WSADATA structure where the description of the sockets implementation will be stored.
Listing 6.32 The Main Thread Is Responsible for Starting Both the Client and Server Threads
int _tmain( int argc, _TCHAR* argv[] )
{
HANDLE serverthread, clientthread;
WSADATA wsaData;
WSAStartup( MAKEWORD(2,2), &wsaData );
hEvent = CreateEvent( 0, true, 0, 0 );
serverthread = (HANDLE)_beginthreadex( 0, 0, &server, 0, 0, 0 );
clientthread = (HANDLE)_beginthreadex( 0, 0, &client, 0, 0, 0 ); WaitForSingleObject( clientthread, INFINITE );
CloseHandle( clientthread );
CloseHandle( hEvent ); getchar();
WSACleanup();
return 0;
}
As previously mentioned, the code uses an event object to ensure that the server thread starts up before the client thread sends a request. The event is created through a call to CreateEvent(). The event is created unsignaled so that it can be signaled by the server thread, and this signaling will enable the client thread to progress. The event is set up to require a manual reset so that once it has been signaled, it remains in that state. This ensures that any later client threads will not block on the event.
The main thread then starts the server thread and the client thread using calls to _beginthreadex(). It waits until the client thread completes before exiting. The final action of the main thread is to call WSACleanup() to close down the sockets library.
Listing 6.33 shows the code for the client thread. The client thread is going to send data to the server and then print the response from the server. The client thread first opens up a socket and then waits for the event object to become signaled, indicating that the server is ready, before continuing.
Listing 6.33 Code for the Client Thread
unsigned int __stdcall client( void * data )
{
SOCKET ConnectSocket = socket( AF_INET, SOCK_STREAM, 0 );
WaitForSingleObject( hEvent, INFINITE );
struct sockaddr_in server; ZeroMemory( &server, sizeof(server) ); server.sin_family = AF_INET;
server.sin_addr.s_addr = inet_addr( "127.0.0.1" ); server.sin_port = 7780;
connect( ConnectSocket, (struct sockaddr*)&server, sizeof(server) );
printf( "Sending 'abcd' to server\n" );
char buffer[1024];
ZeroMemory( buffer, sizeof(buffer) );
strncpy_s( buffer, 1024, "abcd", 5 );
send( ConnectSocket, buffer, strlen(buffer)+1, 0 );
ZeroMemory( buffer, sizeof(buffer) );
recv( ConnectSocket, buffer, 1024, 0 );
printf( "Got '%s' from server\n", buffer );
printf( "Close client\n" );
shutdown( ConnectSocket, SD_BOTH );
closesocket( ConnectSocket );
return 0;
}
The code uses the socket to connect to port 7780 on the localhost (localhost is defined as the IP address 127.0.0.1). Once connected, it sends the string "abcd" to the server and then waits to receive a string back from the server. Once it receives the returned string, it shuts down the connection and then closes the socket.
Listing 6.34 shows the code for the server thread. The server thread does not actually handle the response to any client thread. It exists only to accept incoming connections and to pass the details of this connection onto a newly created thread that will handle the response.
Listing 6.34 Code for the Server Thread
unsigned int __stdcall server( void * data )
{
SOCKET newsocket;
SOCKET ServerSocket = socket( AF_INET, SOCK_STREAM, 0 );
struct sockaddr_in server; ZeroMemory( &server, sizeof(server) ); server.sin_family = AF_INET; server.sin_addr.s_addr = INADDR_ANY; server.sin_port = 7780;
bind( ServerSocket,(struct sockaddr*)&server, sizeof(server) ); listen( ServerSocket, SOMAXCONN );
SetEvent(hEvent);
while ( (newsocket = accept( ServerSocket, 0, 0) )!=INVALID_SOCKET )
{
HANDLE newthread;
newthread=(HANDLE)_beginthread( &handleecho, 0, (void*)newsocket);
}
printf( "Close server\n" );
shutdown( ServerSocket, SD_BOTH );
closesocket( ServerSocket );
return 0;
}
Listing 6.35 shows the code for the thread that will actually respond to the client. This thread will loop around, receiving data from the client thread and sending the same data back to the client thread until it receives a return value that indicates the socket has closed or some other error condition. At that point, the thread will shut down and close the socket.
Listing 6.35 Code for the Echo Thread
void handleecho( void * data )
{
char buffer[1024];
int count;
ZeroMemory( buffer, sizeof(buffer) );
int socket=(int)data;
while ( (count = recv( socket, buffer, 1023, 0) ) >0 )
{
printf( "Received %s from client\n", buffer ); int ret = send( socket, buffer, count, 0 );
}
printf( "Close echo thread\n" );
shutdown( socket, SD_BOTH );
closesocket( socket );
}
The first activity of the server thread is to open a socket. It then binds this socket to accept any connections to port 7780. The socket also needs to be placed into the listen state; the value SOMAXCONN contains the maximum number of connections that will be queued for acceptance. Once these steps have been completed, the server thread signals the event, which then enables the client thread to attempt to connect.
The main thread then waits in a loop accepting connections until it receives a con-nection identified as INVALID_SOCKET. This will happen when the Windows socket library is shut down and is how the server thread will exit cleanly when the other thread exits.
Every time the server thread accepts a connection, a new thread is created, and the identification of this new connection is passed into the newly created thread. It is impor-tant to notice that the call to create the thread that will actually handle the work is _beginthread(). The _beginthread() call will create a new thread that does not leave resources that need to be cleaned up with a call to CloseHandle() when it exits. In contrast, the client and server threads were created by the master thread with a call to _beginthreadex(), which means that they will have resources assigned to them until a call to CloseHandle() is made.
When the loop finally receives an INVALID_SOCKET, the server thread shuts down and then closes the socket.
The code for Windows is sufficiently similar to that for Unix-like operating systems that it is possible to convert between the two. Although the example program is rela-tively simple, it illustrates the key steps necessary for communication between two threads, two processes, or two systems.
Related Topics
Copyright © 2018-2020 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai. | https://www.brainkart.com/article/Communicating-Using-Sockets_9484/ | CC-MAIN-2019-51 | refinedweb | 1,259 | 62.17 |
Stefan Sperling wrote on Wed, Jun 01, 2011 at 01:22:40 +0200:
> On Wed, Jun 01, 2011 at 01:15:56AM +0200, Johan Corveleyn wrote:
> > [[[
> > The usual namespace rules apply: only names that begin with "SVN_"
> > and don't contain double underscores are considered part of the public
> > API. Everything else is not officially supported.
> > ]]]
> >
I just edited this paragraph to remove the first of the two claims it makes.
> > Thinking: "I don't think this should be part of the public API", so I
> > can just dispense with the "SVN_" prefix.
>
> In this case I would advocate for SVN__DISABLE_PREFIX_SUFFIX_SCANNING :)
> You're right that it doesn't need to be part of the public API.
Received on 2011-06-01 02:50:26 CEST
This is an archived mail posted to the Subversion Dev
mailing list. | http://svn.haxx.se/dev/archive-2011-06/0006.shtml | CC-MAIN-2016-22 | refinedweb | 137 | 67.28 |
Recently Browsing 0 members
No registered users viewing this page.
Similar Content
- By Chimp
(This example uses a "flash" animation and therefore needs the "flash player" to work. Since "flash player" is discontinued and no longer present on many systems, this script may not work. I have therefore prepared a new script which does not use flash, but is based only on javascript. You can find it in this new post:)
If you set within a window, a given color as transparent, every area of that window painted with that color, act as an hole through wich you can see what's behind, and click on what's behind as well.
What's funny is that)
- tarretarretarre
About AutoIt-Events
AutoIt-Events is an event Observer and is a core dependency for Autoit-Socket-IO but can be used for any Autoit project.
Example
#include "Event.au3" ; Subscribe listeners _Event_Listen(UserCreatedEvent, SendWelcomeMail) _Event_Listen(UserCreatedEvent, RegisterNewsLetter) ; Fire event _Event(UserCreatedEvent, @UserName, "tarre.islam@gmail.com") Func UserCreatedEvent(Const ByRef $oEvent, $name, $email) ; via $oEvent you can pass data to its listeners $oEvent.add("name", $name) $oEvent.add("email", $email) $oEvent.add("id", 1) EndFunc Func SendWelcomeMail(Const $oEvent) MsgBox(64, "Welcome mail sent", "Welcome mail sent to " & $oEvent.item("name") & " with email " & $oEvent.item("email")) EndFunc Func RegisterNewsLetter(Const $oEvent) MsgBox(64, "News letter registred", "News letter bound to user id " & $oEvent.item("id")) EndFunc
The code is also available at Github
Autoit-Events-1.0.0.zip
- Getting started
Download the script from AutoIt or pull it from the official github repo git@github.com:tarreislam/Autoit-Socket-IO.git and checkout the tag 4.0.0-beta Check out the documentaion Take a look in the examples/ folder Changelog
To see changes from 3.x.x and 2.x.x please checkout the 3.x branch
Version 4.0.0-beta (This update break scripts.)
Code base fully rewritten with Autoit-Events and decoupled to improve code quality and reduce bloat. The new UDF is very different from 3.x.x so please checkout the UPGRADE guide to fully understand all changes Added new documentation documentaion Success stories
Since December 2017-now I have used version 1.5.0 in an production environment for 150+ clients with great success, the only downtime is planned windows updates and power outages.
Newest version (2020-09-15!)
Older versions (Not supported anymore)
Autoit-Socket-IO-1.0.0.zip Autoit-Socket-IO-1.1.0.zip Autoit-Socket-IO-1.3.0.zip Autoit-Socket-IO-1.4.0.zip Autoit-Socket-IO-1.5.0.zip
Autoit-Socket-IO-2.0.0.zip
-
-
Recommended Posts
You need to be a member in order to leave a comment
Sign up for a new account in our community. It's easy!Register a new account
Already have an account? Sign in here.Sign In Now | https://www.autoitscript.com/forum/topic/186422-who-is-who-a-little-dragdrop-game/ | CC-MAIN-2022-05 | refinedweb | 482 | 50.94 |
"Kir Kolyshkin" <kir@swsoft.com> writes:> Eric,>> Could you please hold off the horses a bit and wait till Pavel Emelyanov> returns? It means next Monday; he's currently at a conference whose organisers> don't provide internet access.When we decided to go top down (i.e. user interface first) instead ofbottom up with the pid namespace implementation it was myunderstanding that we had agreed we would make the pid namespacesdepend on CONFIG_EXPERIMENTAL so that we wouldn't be stuck foreversupporting early ABI mistakes.So to my knowledge the conversation has already happened. I believesomething in the confusion of trying to use these options to shrinkthe kernel and the futility of that, caused whatever config optionswe had before to be dropped.Further I was happy to let Pavel and Suka work on this code becausethe appeared to know what they were doing and it freed me to do otherthings. I don't think there are any mysteries in what we are tryingto do that I need them to explain.> I feel it makes great sense to review/discuss patches first on containers@> first before submitting directly to lkml/Linus.My feel before starting to review the pid namespace patches was thatthe work was essentially done except a handful of minor details. Uponcloser examination, I found that not to be the case. My rough fixqueue had 25 or so patches as of last night to fix pid namespaceissues.I have no confidence that we will fix all of the pid namespaces issuesbefore 2.6.24-final. 
I do think we can get most of them fixed.Given that most of the remaining issues are integration issueswith the rest of the kernel having the code merged should makeit much easier to see what is going on and actually fix things.So I am not in favor of reverting this code despite seeing numerousproblems.> Speaking of this particular patch -- I don't understand how you fix> "innumerable little bugs" by providing stubs instead of real functions.> Sent from my BlackBerry; please reply to kir@openvz.orgIt doesn't fix the bugs it avoids them because there is no way toget to the them and trigger them. So far I have yet to find a bugthat is a problem with only a single pid namespace in the kernel.Since everyone agrees that there are at least some deficiencies inthe current pid namespace I think this makes sense, to markthe code as EXPERIMENTAL and have a way for people who care toshut it off just so they don't have to worry about new issues.As far as how the config option is implemented I don't much careso long as I get the -EINVAL when I pass CLONE_NEWPID as root.Essentially this patch is part of a defense in depth against pidnamespace problems hitting people. This patch is my first lineof defense. Actually fixing all of the rest of the known bugsis the other line.Eric-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2007/10/26/493 | CC-MAIN-2016-07 | refinedweb | 521 | 62.27 |
slideshow movie has stopped loading all images and stallsWildtypitch Jun 4, 2009 6:26 AM
I made a basic slideshow that loads and plays photos with comments from an xml file. I've just uploaded some new photo's and edited the xml to include them but now flash stops while loading the series of photo's and just stalls. Removing the files from the xml does not work either and now I have no movie. Tried various was of uploading etc and nothing works, the movie runs fine in testing locally. It's extremly anoying as it was designed to be very simple to update with new images via the xml.
attached are all the relevant files
- slideshow.swf 7.5 K
1. Re: slideshow movie has stopped loading all images and stallsWildtypitch Jun 4, 2009 6:27 AM (in response to Wildtypitch)
the xml (upload error)
- slideshow.xml 1.2 K
2. Re: slideshow movie has stopped loading all images and stallsWildtypitch Jun 4, 2009 6:35 AM (in response to Wildtypitch)
attached relevant files
3. Re: slideshow movie has stopped loading all images and stallsWildtypitch Jun 4, 2009 6:37 AM (in response to Wildtypitch)
files
4. Re: slideshow movie has stopped loading all images and stallskglad Jun 4, 2009 6:38 AM (in response to Wildtypitch)
during the "stall" did you wait long enough for everything to load?
are you loading sequentially or loading in a for-loop? are you using code that assumes image loading will be as rapid as when loading from your hard disk? are you using preloader code?
5. Re: slideshow movie has stopped loading all images and stallsWildtypitch Jun 4, 2009 6:49 AM (in response to kglad)
Not sure but I think preloaded. I have waited it stops halfway through. Take a look here.
here is the actionscript
import mx.transitions.Tween;
import mx.transitions.easing.*;
var _this = this;
var blend:String = 'noBlend'; // disolve, fadeInFadeOut, noBlend
var slideShowWidth:Number;
var slideShowHeight:Number;
var slides:Array = [];
var speed:Number = 1;
var slideContainer:MovieClip;
var myShowXML = new XML();
myShowXML.ignoreWhite = true;
myShowXML.load("slideshow.xml");
myShowXML.onLoad = function(success:Boolean) {
if(success){
_this.slideShowWidth = this.firstChild.attributes.width;
_this.slideShowHeight = this.firstChild.attributes.height;
_this.speed = this.firstChild.attributes.speed;
_this.blend = this.firstChild.attributes.blendMode;
for(var i:Number = 0; i < this.firstChild.childNodes.length; ++i ){
var data:Array = [];
data.url = this.firstChild.childNodes[i].attributes.url;
data.title = this.firstChild.childNodes[i].attributes.title;
slides.push(data);
}
createContainer();
loadImage(0);
}
else{
trace('ERROR (could not load slideshow.xml)');
}
};
function createContainer() {
_this.slideContainer = _this.createEmptyMovieClip("slideContainer", _this.getNextHighestDepth());
_this.slideContainer._x =215;
_this.slideContainer._y = 0;
}
function loadImage(loadCounter) {
var loader:MovieClipLoader = new MovieClipLoader();
var listener = new Object();
loader.addListener(listener);
loadCounter = undefined == loadCounter
? 0
: loadCounter;
_root.myClips_array = [];
listener.onLoadProgress = function(target) {
_this.preloader.textfield.text = "Loading.. "+(loadCounter+1)+"/"+slides.length+" Completed";
};
listener.onLoadComplete = function(target:MovieClip) {
target._alpha = 0;
_this.slides[loadCounter].mc = target;
if(++loadCounter < _this.slides.length){
loadImage(loadCounter);
}
else{
_this.preloader._visible = false;
moveSlide(0);
}
};
//trace('load: ' +_this.slides[loadCounter].url);
var mc = _this.slideContainer.createEmptyMovieClip(loadCounter, _this.slideContainer.getNextHighestDepth());
loader.loadClip(slides[loadCounter].url, mc);
}
function moveSlide(slideCounter) {
trace('slideCounter: ' +slideCounter+ ' blendMode: ' +blend);
//debug.text += slideCounter+ ' - ' + slides[slideCounter].url +'\n';
slideCounter = slideCounter < slides.length
? slideCounter
: 0;
var slide:MovieClip = slides[slideCounter].mc;
var title:String = slides[slideCounter].title;
var textfield = getTitleText();
textfield.htmlText = slides[slideCounter].title;
textfield._y = Stage.height -textfield._height - 5;
//trace(textfield._height);
if('noBlend' == blend){
slide._alpha = 100;
}
else if('fadeInFadeOut' == blend){
new Tween(slide, "_alpha", Strong.easeOut, 0, 100, 1, true);
}
else if('disolve' == blend){
new Tween(slide, "_alpha", Strong.easeOut, 0, 100, 3, true);
}
doLater(this.speed, function(){
if('noBlend' == _this.blend){
slide._alpha = 0;
}
else if('fadeInFadeOut' == _this.blend){
var fadeOut:Tween = new Tween(slide, "_alpha", Strong.easeOut, 100, 0, 1, true);
doLater(0.5, function(){
_this.moveSlide(++slideCounter);
});
fadeOut.onMotionFinished = function(){
//_this.moveSlide(++slideCounter);
};
return;
}
else if('disolve' == _this.blend){
new Tween(slide, "_alpha", Strong.easeOut, 100, 0, 5, true);
}
moveSlide(++slideCounter);
});
}
function doLater(time:Number, delegate:Function):Number {
var intervalId:Number;
var _delegate:Function = function() {
clearInterval(intervalId);
delegate();
};
intervalId = setInterval(
_delegate,
time * 1000
);
return intervalId;
}
function getTitleText():TextField {
if(this['titleTextfield']){
return this['titleTextfield'];
}
var titleTextfield:TextField = createTextField ("titleTextfield",_root.getNextHighestDepth (),15,0,180,100);
titleTextfield.autoSize = "left";
titleTextfield.wordWrap = true;
//titleTextfieldt.MySpace page</font></a>";
titleTextfield.html = true;
var tf:TextFormat = new TextFormat();
tf.font = 'Arial';
tf.align = 'left';
tf.size = 10;
tf.color = 0x999999;
titleTextfield.setNewTextFormat(tf);
return titleTextfield;
}
6. Re: slideshow movie has stopped loading all images and stallskglad Jun 4, 2009 6:57 AM (in response to Wildtypitch)
if your images start to load and display and then stall, at the point it stalls you have a file name problem. ie, if the ith image fails to load, in you xml file, myShowXML.firstChild.childNodes[i].attributes.url has a file name that flash cannot find.
if flash is finding that file when you run locally, then either:
1. you failed to upload that file or
2. that file name has a typo. for example, image22.JPG looks the same as image22.jpg locally but not online.
7. Re: slideshow movie has stopped loading all images and stallsWildtypitch Jun 4, 2009 7:27 AM (in response to kglad)
it runs locally fine. When uploaded it does not. All image files are uploaded and are fine. I have changed position of the files in the xml and the same error happens on the same number. tried removing some of the later files and the error happens at an earlier number in the series. The image files are not the problem. I've downloaded the unworking swf, xml and image files and re-ran them locally and they worked. When on the hosting server thought they stop working. visi t or with it embeded and both have the same problem so there is no html problem in the page. Maybe javascript code has a problem but it is what is supplied by dreamweaver and have no reason to doubt it.
8. Re: slideshow movie has stopped loading all images and stallskglad Jun 4, 2009 8:44 AM (in response to Wildtypitch)
instead of arguing with me, read my message above it explains your error.
9. Re: slideshow movie has stopped loading all images and stallsWildtypitch Jun 4, 2009 9:30 AM (in response to kglad)
Sorry I'm an idiot, and a terrible proof reader.
Thank you
10. Re: slideshow movie has stopped loading all images and stallskglad Jun 4, 2009 10:04 AM (in response to Wildtypitch)
you're welcome.
sorry i was so abrupt. | https://forums.adobe.com/thread/442719 | CC-MAIN-2019-04 | refinedweb | 1,111 | 53.27 |
1.835 23 Dec 2014 * Silence more compiler warnings 1.834 11 Dec 2014 * Makefile.PL: version check is missing a zero RT #100844 1.833 9 Dec 2014 * More Silence compiler warnings * 1.832 breaks bleadperl C89 build RT #100812 1.832 8 Dec 2014 * Silence compiler warnings * C++ change from blead 1.831 15 November 2013 * C99 comment is a nogo RT #90383 1.830 2 November 2013 * Memory leaks when failed to open db RT #89589 * DB_File uses AutoLoader for no reason RT #88258 1.829 7 July 2013 * make realclean: removing all files RT #68214 * Documented the issue where the error below BDB0588 At least one secondary cursor must be specified to DB->join * DB_File installs to wrong place for CPAN version RT #70420 Makefile.PL prevents INSTALLDIRS on command line. RT #68287: Makefile.PL prevents INSTALLDIRS on command line. * typo fix RT #85335 1.828 7 May 2013 * Minor change to build with Berkeley DB 6.x 1.827 25 Jan 2012 * DB_File.pm - Don't use "@_" construct [RT #79287] 1.826 25 Jan 2012 * t/db-btree.t - fix use of "length @array" [RT #74336] 1.825 24 Jan 2012 * t/db-btree.t - fix use of "length @array" [RT #74336] 1.824 6 Aug 2011 * Amendments to tests to work in blead [RT #70108] 1.823 6 Aug 2011 * croak if attempt to freeze/thaw DB_File object [RT #69985] 1.822 12 March 2011 * Link rot [rt.cpan.org #68739] 1.822 12 March 2011 * Keep DB_File's warnings in sync with perl's [rt.cpan.org #66339] from README. The page referenced doesn't exist anymore. 1.817 27 March 2008 * Updated dbinfo * Applied core patch 32299 - Re-apply change #30562 * Applied core patch 32208 * Applied core patch 32884 - use MM->parse_version() in Makefile.PL * Applied core patch 32883 - Silence new warning grep in void context warning * Applied core patch 32704 to remove use of PL_na in typemap * Applied core patch 30562 to fix a build issue on OSF 1.816 28 October 2007 * Clarified the warning about building with a different version of Berkeley DB that is used at runtime. 
* Also made the boot version check less strict. [rt.cpan.org #30013] 1.815 4 February 2007 * A few casting cleanups for building with C++ from Steve Peters. * Fixed problem with recno which happened if you changed directory after opening the database. Problem reported by Andrew Pam. documentation - reported by Jeremy Mates & Mark Jason Dominus. * dbinfo updated to report when a database is encrypted. 1.806 22nd October 2002 * Fixed problem when trying to build with a multi-threaded perl. * Tidied up the recursion detection code. * merged core patch 17844 - missing dTHX declarations. * merged core patch 17838 1.805 1st September 2002 * Added support to allow DB_File to build with Berkeley DB 4.1.X * Tightened up the test harness to test that calls to untie don't generate the "untie attempted while %d inner references still exist" warning. * added code to guard against calling the callbacks (compare,hash & prefix) recursively. * passing undef for the flags and/or mode when opening a database could cause a "Use of uninitialized value in subroutine entry" warning. Now silenced. * DBM filter code beefed up to cope with read-only $_. 1.804 2nd June 2002 * Perl core patch 14939 added a new warning to "splice". This broke the db-recno test harness. Fixed. * merged core patches 16502 & 16540. 1.803 1st March 2002 * Fixed a problem with db-btree.t where it complained about an "our" variable redeclaration. * FETCH, STORE & DELETE don't map the flags parameter into the equivalent Berkeley DB function anymore. 1.802 6th January 2002 * The message about some test failing in db-recno.t had the wrong test numbers. Fixed. * merged core patch 13942. 1.801 26th November 2001 * Fixed typo in Makefile.PL * Added "clean" attribute to Makefile.PL 1.800 23rd November 2001 * use pport.h for perl backward compatibility code. * use new ExtUtils::Constant module to generate XS constants. 
* upgrade Makefile.PL upgrade/downgrade code to toggle "our" with "use vars" 1.79 22nd October 2001 * Added a "local $SIG{__DIE__}" inside the eval that checks for the presence of XSLoader s suggested by Andrew Hryckowin. * merged core patch 12277. * Changed NEXTKEY to not initialise the input key. It isn't used anyway. 1.79 22nd October 2001 * Fixed test harness for cygwin 1.78 30th July 2001 * the test in Makefile.PL for AIX used -plthreads. Should have been -lpthreads * merged Core patches 10372, 10335, 10372, 10534, 10549, 10643, 11051, 11194, 11432 * added documentation patch regarding duplicate keys from Andrew Johnson 1.77 26th April 2001 * AIX is reported to need -lpthreads, so Makefile.PL now checks for AIX and adds it to the link options. * Minor documentation updates. * Merged Core patch 9176 * Added a patch from Edward Avis that adds support for splice with recno databases. * Modified Makefile.PL to only enable the warnings pragma if using perl 5.6.1 or better. 1.76 15th January 2001 * Added instructions for using LD_PRELOAD to get Berkeley DB 2.x to work with DB_File on Linux. Thanks to Norbert Bollow for sending details of this approach. 1.75 17th December 2000 * Fixed perl core patch 7703 * Added support to allow DB_File to be built with Berkeley DB 3.2 -- btree_compare, btree_prefix and hash_cb needed to be changed. * Updated dbinfo to support Berkeley DB 3.2 file format changes. 1.74 10th December 2000 * A "close" call in DB_File.xs needed parenthesised to stop win32 from thinking it was one of its macros. * Updated dbinfo to support Berkeley DB 3.1 file format changes. * DB_File.pm & the test hasness now use the warnings pragma (when available). * Included Perl core patch 7703 -- size argument for hash_cb is different for Berkeley DB 3.x * Included Perl core patch 7801 -- Give __getBerkeleyDBInfo the ANSI C treatment. * @a = () produced the warning 'Argument "" isn't numeric in entersub' This has been fixed. 
Thanks to Edward Avis for spotting this bug. * Added note about building under Linux. Included patches. * Included Perl core patch 8068 -- fix for bug 20001013.009 When run with warnings enabled "$hash{XX} = undef " produced an "Uninitialized value" warning. This has been fixed. 1.73 31st May 2000 * Added support in version.c for building with threaded Perl. * Berkeley DB 3.1 has reenabled support for null keys. The test harness has been updated to reflect this. 1.72 16th January 2000 * Added hints/sco.pl * The module will now use XSLoader when it is available. When it isn't it will use DynaLoader. * The locking section in DB_File.pm has been discredited. Many thanks to David Harris for spotting the underlying problem, contributing the updates to the documentation and writing DB_File::Lock (available on CPAN). 1.71 7th September 1999 * Fixed a bug that prevented 1.70 from compiling under win32 * Updated to support Berkeley DB 3.x * Updated dbinfo for Berkeley DB 3.x file formats. 1.70 4th August 1999 * Initialise $DB_File::db_ver and $DB_File::db_version with GV_ADD|GV_ADDMULT -- bug spotted by Nick Ing-Simmons. * Added a BOOT check to test for equivalent versions of db.h & libdb.a/so. 1.69 3rd August 1999 * fixed a bug in push -- DB_APPEND wasn't working properly. * Fixed the R_SETCURSOR bug introduced in 1.68 * Added a new Perl variable $DB_File::db_ver 1.68 22nd July 1999 * Merged changes from 5.005_58 * Fixed a bug in R_IBEFORE & R_IAFTER processing in Berkeley DB 2 databases. * Added some of the examples in the POD into the test harness. 1.67 6th June 1999 * Added DBM Filter documentation to DB_File.pm * Fixed DBM Filter code to work with 5.004 * A few instances of newSVpvn were used in 1.66. This isn't available in Perl 5.004_04 or earlier. Replaced with newSVpv. 1.66 15th March 1999 * Added DBM Filter code 1.65 6th March 1999 * Fixed a bug in the recno PUSH logic. 
* The BOOT version check now needs 2.3.4 when using Berkeley DB version 2 1.64 21st February 1999 * Tidied the 1.x to 2.x flag mapping code. * Added a patch from Mark Kettenis <kettenis@wins.uva.nl> to fix a flag mapping problem with O_RDONLY on the Hurd * Updated the message that db-recno.t prints when tests 51, 53 or 55 fail. 1.63 19th December 1998 * Fix to allow DB 2.6.x to build with DB_File * Documentation updated to use push,pop etc in the RECNO example & to include the find_dup & del_dup methods. 1.62 30th November 1998 Added hints/dynixptx.pl. Fixed typemap -- 1.61 used PL_na instead of na 1.61 19th November 1998 Added a note to README about how to build Berkeley DB 2.x when using HP-UX. Minor modifications to get the module to build with DB 2.5.x Fixed a typo in the definition of O_RDONLY, courtesy of Mark Kettenis. 1.60 Changed the test to check for full tied array support 1.59 Updated the license section. Berkeley DB 2.4.10 disallows zero length keys. Tests 32 & 42 in db-btree.t and test 27 in db-hash.t failed because of this change. Those tests have been zapped. Added dbinfo to the distribution. 1.58 Tied Array support was enhanced in Perl 5.004_57. DB_File now supports PUSH,POP,SHIFT,UNSHIFT & STORESIZE. Fixed a problem with the use of sv_setpvn. When the size is specified as 0, it does a strlen on the data. This was ok for DB 1.x, but isn't for DB 2.x. 1.57 If Perl has been compiled with Threads support,the symbol op will be defined. This clashes with a field name in db.h, so it needs to be #undef'ed before db.h is included. 1.56 Documented the Solaris 2.5 mutex bug 1.55 Merged 1.16 changes. 1.54 Fixed a small bug in the test harness when run under win32 The emulation of fd when useing DB 2.x was busted. 1.53 Added DB_RENUMBER to flags for recno. 1.52 Patch from Nick Ing-Simmons now allows DB_File to build on NT. Merged 1.15 patch. 1.51 Fixed the test harness so that it doesn't expect DB_File to have been installed by the main Perl build. 
Fixed a bug in mapping 1.x O_RDONLY flag to 2.x DB_RDONLY equivalent 1.50 DB_File can now build with either DB 1.x or 2.x, but not both at the same time. 1.16 A harmless looking tab was causing Makefile.PL to fail on AIX 3.2.5 Small fix for the AIX strict C compiler XLC which doesn't like __attribute__ being defined via proto.h and redefined via db.h. Fix courtesy of Jarkko Hietaniemi.. 1.14 Made it illegal to tie an associative array to a RECNO database and an ordinary array to a HASH or BTREE database. 1.13 Minor changes to DB_FIle.xs and DB_File.pm 1.12 Documented the incompatibility with version 2 of Berkeley DB. 1.11 Documented the untie gotcha. 1.10 Fixed fd method so that it still returns -1 for in-memory files when db 1.86 is used. 1.09 Minor bug fix in DB_File::HASHINFO, DB_File::RECNOINFO and DB_File::BTREEINFO. Changed default mode to 0666. 1.08 Documented operation of bval. 1.07 Fixed bug with RECNO, where bval wasn't defaulting to "\n". 1.06 Minor namespace cleanup: Localized PrintBtree. 1.05 Made all scripts in the documentation strict and -w clean. Added logic to DB_File.xs to allow the module to be built after Perl is installed. 1.04 Minor documentation changes. Fixed a bug in hash_cb. Patches supplied by Dave Hammen, <hammen@gothamcity.jsc.nasa.govt>. Fixed a bug with the constructors for DB_File::HASHINFO, DB_File::BTREEINFO and DB_File::RECNOINFO. Also tidied up the constructors to make them -w clean. Reworked part of the test harness to be more locale friendly. 1.03 Documentation update. DB_File now imports the constants (O_RDWR, O_CREAT etc.) from Fcntl automatically. The standard hash function exists is now supported. Modified the behavior of get_dup. When it returns an associative array, the value is the count of the number of matching BTREE values. 1.02 Merged OS/2 specific code into DB_File.xs Removed some redundant code in DB_File.xs. Documentation update. Allow negative subscripts with RECNO interface. 
Changed the default flags from O_RDWR to O_CREAT|O_RDWR. The example code which showed how to lock a database needed a call to sync added. Without it the resultant database file was empty. Added get_dup method. 1.01 Fixed a core dump problem with SunOS. The return value from TIEHASH wasn't set to NULL when dbopen returned an error. 1.0 DB_File has been in use for over a year. To reflect that, the version number has been incremented to 1.0. Added complete support for multiple concurrent callbacks. Using the push method on an empty list didn't work properly. This has been fixed. 0.3 Added prototype support for multiple btree compare callbacks. 0.2 When DB_File is opening a database file it no longer terminates the process if dbopen returned an error. This allows file protection errors to be caught at run time. Thanks to Judith Grass <grass@cybercash.com> for spotting the bug. 0.1 First Release. | https://metacpan.org/changes/distribution/DB_File | CC-MAIN-2015-35 | refinedweb | 2,248 | 71 |
Sharing limited resources
Posted on March 1st, 2001
You can think of a single-threaded program as one lonely entity moving around through your problem space and doing one thing at a time. Because there’s only one entity, you never have to think about the problem of two entities trying to use the same resource at the same time, like two people trying to park in the same space, walk through a door at the same time, or even talk at the same time.
With multithreading, things aren’t lonely anymore, but you now have the possibility of two or more threads trying to use the same limited resource at once. Colliding over a resource must be prevented or else you’ll have two threads trying to access the same bank account at the same time, print to the same printer, or adjust the same valve, etc.
Improperly accessing resources
Consider a variation on the counters that have been used so far in this chapter. In the following example, each thread contains two counters that are incremented and displayed inside run( ). In addition, there’s another thread of class Watcher that is watching the counters to see if they’re always equivalent. This seems like a needless activity, since looking at the code it appears obvious that the counters will always be the same. But that’s where the surprise comes in. Here’s the first version of the program:
//: Sharing1.java // Problems with resource sharing while threading import java.awt.*; import java.awt.event.*; import java.applet.*; class TwoCounter extends Thread { private boolean started = false; private TextField t1 = new TextField(5), t2 = new TextField(5); private Label l = new Label("count1 == count2"); private int count1 = 0, count2 = 0; // Add the display components as a panel // to the given container: public TwoCounter(Container c) { Panel p = new Panel(); p.add(t1); p.add(t2); p.add(l); c.add(p); } public void start() { if(!started) { started = true; super.start(); } } public void run() { while (true) { t1.setText(Integer.toString(count1++)); t2.setText(Integer.toString(count2++)); try { sleep(500); } catch (InterruptedException e){} } } public void synchTest() { Sharing1.incrementAccess(); if(count1 != count2) l.setText("Unsynched"); } } class Watcher extends Thread { private Sharing1 p; public Watcher(Sharing1 p) { this.p = p; start(); } public void run() { while(true) { for(int i = 0; i < p.s.length; i++) p.s[i].synchTest(); try { sleep(500); } catch (InterruptedException e){} } } } public class Sharing1 extends Applet { TwoCounter[][numCounters]; for(int i = 0; i < s.length; i++) s[i] = new TwoCounter(Sharing1.this); } } public static void main(String[] args) { Sharing1 applet = new Sharing); } } ///:~
As before, each counter contains its own display components: two text fields and a label that initially indicates that the counts are equivalent. These components are added to the Container in the TwoCounter constructor. Because this thread is started via a button press by the user, it’s possible that start( ) could be called more than once. It’s illegal for Thread.start( ) to be called more than once for a thread (an exception is thrown). You can see that the machinery to prevent this in the started flag and the overridden start( ) method.
In run( ), count1 and count2 are incremented and displayed in a manner that would seem to keep them identical. Then sleep( ) is called; without this call the program balks because it becomes hard for the CPU to swap tasks.
The synchTest( ) method performs the apparently useless activity of checking to see if count1 is equivalent to count2; if they are not equivalent it sets the label to “Unsynched” to indicate this. But first, it calls a static member of the class Sharing1 that increments and displays an access counter to show how many times this check has occurred successfully. (The reason for this will become apparent in future variations of this example.)
The Watcher class is a thread whose job is to call synchTest( ) for all of the TwoCounter objects that are active. It does this by stepping through the array that’s kept in the Sharing1 object. You can think of the Watcher as constantly peeking over the shoulders of the TwoCounter objects.
Sharing1 contains an array of TwoCounter objects that it initializes in init( ) and starts as threads when you press the “start” button. Later, when you press the “Observe” button, one or more observers are created and freed upon the unsuspecting TwoCounter threads.
Note that to run this as an applet in a browser, your Web page will need to contain the lines:
<applet code=Sharing1 width=650 height=500> <param name=size <param name=observers </applet>
You can change the width, height, and parameters to suit your experimental tastes. By changing the size and observers you’ll change the behavior of the program. You can also see that this program is set up to run as a stand-alone application by pulling the arguments from the command line (or providing defaults).
Here’s the surprising part. In TwoCounter.run( ), the infinite loop is just repeatedly passing over the adjacent lines:
t1.setText(Integer.toString(count1++)); t2.setText(Integer.toString(count2++));
(as well as sleeping, but that’s not important here). When you run the program, however, you’ll discover that count1 and count2 will be observed (by the Watcher) to be unequal at times! This is because of the nature of threads – they can be suspended at any time. So at times, the suspension occurs between the execution of the above two lines, and the Watcher thread happens to come along and perform the comparison at just this moment, thus finding the two counters to be different.). That’s the problem that you’re dealing with.
Sometimes you don’t care if a resource is being accessed at the same time you.
How Java shares resources
Java has built-in support to prevent collisions over one kind of resource: the memory in an object. Since you typically make the data elements of a class private and access that memory only through methods, you can prevent collisions by making a particular method synchronized. Only one thread at a time can call a synchronized method for a particular object (although that thread can call more than one of the object’s synchronized methods). Here are simple synchronized methods:
synchronized void f() { /* ... */ } synchronized void g(){ /* ... */ }
Each object contains a single lock (also called a monitor) that is automatically part of the object (you don’t have to write any special code). When you call any synchronized method, that object is locked and no other synchronized method of that object can be called until the first one finishes and releases the lock. In the example above, if f( ) is called for an object, g( ) cannot be called for the same object until f( ) is completed and releases the lock. Thus, there’s a single lock that’s shared by all the synchronized methods of a particular object, and this lock prevents common memory from being written by more than one method at a time (i.e. more than one thread at a time).
There’s also a single lock per class (as part of the Class object for the class), so that synchronized static methods can lock each other out from static data on a class-wide basis.
Note that if you want to guard some other resource from simultaneous access by multiple threads, you can do so by forcing access to that resource through synchronized methods.Synchronizing the counters
Armed with this new keyword it appears that the solution is at hand: we’ll simply use the synchronized keyword for the methods in TwoCounter. The following example is the same as the previous one, with the addition of the new keyword:
//: Sharing2.java // Using the synchronized keyword to prevent // multiple access to a particular resource. import java.awt.*; import java.awt.event.*; import java.applet.*; class TwoCounter2 extends Thread { private boolean started = false; private TextField t1 = new TextField(5), t2 = new TextField(5); private Label l = new Label("count1 == count2"); private int count1 = 0, count2 = 0; public TwoCounter2(Container c) { Panel p = new Panel(); p.add(t1); p.add(t2); p.add(l); c.add(p); } public void start() { if(!started) { started = true; super.start(); } } public synchronized void run() { while (true) { t1.setText(Integer.toString(count1++)); t2.setText(Integer.toString(count2++)); try { sleep(500); } catch (InterruptedException e){} } } public synchronized void synchTest() { Sharing2.incrementAccess(); if(count1 != count2) l.setText("Unsynched"); } } class Watcher2 extends Thread { private Sharing2 p; public Watcher2(Sharing2 p) { this.p = p; start(); } public void run() { while(true) { for(int i = 0; i < p.s.length; i++) p.s[i].synchTest(); try { sleep(500); } catch (InterruptedException e){} } } } public class Sharing2 extends Applet { TwoCounter2[]2[numCounters]; for(int i = 0; i < s.length; i++) s[i] = new TwoCounter22(Sharing2.this); } } public static void main(String[] args) { Sharing2 applet = new Sharing); } } ///:~
You’ll notice that both run( ) and synchTest( ) are synchronized. If you synchronize only one of the methods, then the other is free to ignore the object lock and can be called with impunity. This is an important point: Every method that accesses a critical shared resource must be synchronized or it won’t work right.
Now a new issue arises. The Watcher2 can never get a peek at what’s going on because the entire run( ) method has been synchronized, and since run( ) is always running for each object the lock is always tied up and synchTest( ) can never be called. You can see this because the accessCount never changes.
What we’d like for this example is a way to isolate only part of the code inside run( ). The section of code you want to isolate this way is called a critical section and you use the synchronized keyword in a different way to set up a critical section. Java supports critical sections with the synchronized block; this time synchronized is used to specify the object whose lock is being used to synchronize the enclosed code:
synchronized(syncObject) { // This code can be accessed by only // one thread at a time, assuming all // threads respect syncObject's lock }
Before the synchronized block can be entered, the lock must be acquired on syncObject. If some other thread already has this lock, then the block cannot be entered until the lock is given up.
The Sharing2 example can be modified by removing the synchronized keyword from the entire run( ) method and instead putting a synchronized block around the two critical lines. But what object should be used as the lock? The one that is already respected by synchTest( ), which is the current object ( this)! So the modified run( ) looks like this:
public void run() { while (true) { synchronized(this) { t1.setText(Integer.toString(count1++)); t2.setText(Integer.toString(count2++)); } try { sleep(500); } catch (InterruptedException e){} } }
This is the only change that must be made to Sharing2.java, and you’ll see that while the two counters are never out of synch (according to when the Watcher is allowed to look at them), there is still adequate access provided to the Watcher during the execution of run( ).
Of course, all synchronization depends on programmer diligence: every piece of code that can access a shared resource must be wrapped in an appropriate synchronized block.Synchronized efficiency
Since having two methods write to the same piece of data never sounds like a particularly good idea, it might seem to make sense for all methods to be automatically synchronized and eliminate the synchronized keyword altogether. (Of course, the example with a synchronized run( ) shows that this wouldn’t work either.) But it turns out that acquiring a lock is not a cheap operation – it multiplies the cost of a method call (that is, entering and exiting from the method, not executing the body of the method) by a minimum of four times, and could be more depending on your implementation. So if you know that a particular method will not cause contention problems it is expedient to leave off the synchronized keyword.
Java Beans revisited
Now that you understand synchronization you can take another look at Java Beans. Whenever you create a Bean, you must assume that it will run in a multithreaded environment. This means that:
- Whenever possible, all the public methods of a Bean should be synchronized. Of course, this incurs the synchronized runtime overhead. If that’s a problem, methods that will not cause problems in critical sections can be left un- synchronized, but keep in mind that this is not always obvious. Methods that qualify tend to be small (such as getCircleSize( ) in the following example) and/or “atomic,” that is, the method call executes in such a short amount of code that the object cannot be changed during execution. Making such methods un- synchronized might not have a significant effect on the execution speed of your program. You might as well make all public methods of a Bean synchronized and remove the synchronized keyword only when you know for sure that it’s necessary and that it makes a difference.
- When firing a multicast event to a bunch of listeners interested in that event, you must assume that listeners might be added or removed while moving through the list.
The first point is fairly easy to deal with, but the second point requires a little more thought. Consider the BangBean.java example presented in the last chapter. That ducked out of the multithreading question by ignoring the synchronized keyword (which hadn’t been introduced yet) and making the event unicast. Here’s that example modified to work in a multithreaded environment and to use multicasting for events:
//: BangBean2.java // You should write your Beans this way so they // can run in a multithreaded environment. import java.awt.*; import java.awt.event.*; import java.util.*; import java.io.*; public class BangBean2 extends Canvas implements Serializable { private int xm, ym; private int cSize = 20; // Circle size private String text = "Bang!"; private int fontSize = 48; private Color tColor = Color.red; private Vector actionListeners = new Vector();(GraphicsElement(l); } public synchronized void removeActionListener( ActionListener l) { actionListeners.removeElement(l); } // Notice this isn't synchronized: public void notifyListeners() { ActionEvent a = new ActionEvent(BangBean2.this, ActionEvent.ACTION_PERFORMED, null); Vector lv = null; // Make a copy of the vector in case someone // adds a listener while we're // calling listeners: synchronized(this) { lv = (Vector)actionListeners.clone(); } // Call all the listener methods: for(int i = 0; i < lv.size(); i++) { ActionListener al = (ActionListener)lv.elementAt(i); al(); } } // Testing the BangBean2:"); } }); Frame aFrame = new Frame("BangBean2 Test"); aFrame.addWindowListener(new WindowAdapter(){ public void windowClosing(WindowEvent e) { System.exit(0); } }); aFrame.add(bb, BorderLayout.CENTER); aFrame.setSize(300,300); aFrame.setVisible(true); } } ///:~
Adding synchronized to the methods is an easy change. However, notice in addActionListener( ) and removeActionListener( ) that the ActionListeners are now added to and removed from a Vector, so you can have as many as you want.
You can see that the method notifyListeners( ) is not synchronized. It can be called from more than one thread at a time. It’s also possible for addActionListener( ) or removeActionListener( ) to be called in the middle of a call to notifyListeners( ), which is a problem since it traverses the Vector actionListeners . To alleviate the problem, the Vector is cloned inside a synchronized clause and the clone is traversed. This way the original Vector can be manipulated without impact on notifyListeners( ).
The paint( ) method is also not synchronized. Deciding whether to synchronize overridden methods is not as clear as when you’re just adding your own methods. In this example it turns out that paint( ) seems to work OK whether it’s synchronized or not. But the issues you must consider are:
- Does the method modify the state of “critical” variables within the object? To discover whether the variables are “critical” you must determine whether they will be read or set by other threads in the program. (In this case, the reading or setting is virtually always accomplished via synchronized methods, so you can just examine those.) In the case of paint( ), no modification takes place.
- Does the method depend on the state of these “critical” variables? If a synchronized method modifies a variable that your method uses, then you might very well want to make your method synchronized as well. Based on this, you might observe that cSize is changed by synchronized methods and therefore paint( ) should be synchronized. Here, however, you can ask “What’s the worst thing that will happen if cSize is changed during a paint( )?” When you see that it’s nothing too bad, and a transient effect at that, it’s best to leave paint( ) un- synchronized to prevent the extra overhead from the synchronized method call.
- A third clue is to notice whether the base-class version of paint( ) is synchronized, which it isn’t. This isn’t an airtight argument, just a clue. In this case, for example, a field that is changed via synchronized methods (that is cSize) has been mixed into the paint( ) formula and might have changed the situation. Notice, however, that synchronized doesn’t inherit – that is, if a method is synchronized in the base class then it is not automatically synchronized in the derived class overridden version.
The test code in TestBangBean2 has been modified from that in the previous chapter to demonstrate the multicast ability of BangBean2 by adding extra listeners.
There are no comments yet. Be the first to comment! | http://www.codeguru.com/java/tij/tij0156.shtml | CC-MAIN-2016-22 | refinedweb | 2,905 | 60.95 |
Salai4,389 Points
Compiler error on "Challenge Task 4 of 4". JAVA Data Sctructures
I got this error and I can't find the syntax error in BlogPost.Java Class.
./com/example/BlogPost.java:3: error: '{' expected public class BlogPost() ^ 1 error
Bummer! There is a compiler error. Please click on preview to view your syntax errors!
many thanks in advance.
package com.example; public class BlogPost() { }
import com.example.BlogPost; public class Display { public static void main(String[] args) { BlogPost bp = new BlogPost(); System.out.printf("This is a blog post: %s", bp); } }
Thomas Salai4,389 Points
Thomas Salai4,389 Points
just found my self the error..
in BlogPost.java the class name should be without " () ". | https://teamtreehouse.com/community/compiler-error-on-challenge-task-4-of-4-java-data-sctructures | CC-MAIN-2022-27 | refinedweb | 117 | 61.33 |
Common definition between mysql server & client. More...
#include <stdbool.h>
#include "my_command.h"
#include "my_io.h"
#include <mysql/udf_registration_types.h>
Go to the source code of this file.
Common definition between mysql server & client.
field is a autoincrement field
Intern: Used by sql_yacc..
field is an enum
Intern: field is marked, general purpose.
The length of the header part for each generated column in the .frm file.
Maximum length of the expression statement defined for generated columns.
Used to get fields in item tree.
Intern: Group field..
Flag used by the parser.
Kill only the query and not the connection.
Intern; Part of some key.
Maximum expression length in chars.
Flush the binary log.
Flush all storage engine logs.
Rotate only the erorr log.
Intern flag.
Flush the general log.
Refresh grant tables.
Flush host cache.
Start on new log file.
Remove all bin logs in the index and truncate the index.
Lock tables for read.
Flush the relay log.
Reset master info and restart slave thread.
Flush the slow query log.
Flush status variables.
close all tables
Flush thread cache..
field is a set
Maximum length of comments.
pre 5.6: 60 characters
Intern: Used by sql_yacc..)
Write a MySQL protocol packet to the network handler. | https://dev.mysql.com/doc/dev/mysql-server/8.0.3/mysql__com_8h.html | CC-MAIN-2018-51 | refinedweb | 207 | 73.74 |
Type-safe events
This is some code I wrote a while ago. It is (mostly) based upon Data Types a la Carte, a great pearl by Wouter Swierstra. It uses some ideas discussed in this paper to create a type-safe, extensible event-based framework in Haskell.
This blogpost is written in Literate Haskell, meaning you should be able to download and run it. It also means we’re going to have some (relatively common) language extentions and imports:
> {-# LANGUAGE FlexibleContexts, FlexibleInstances, GeneralizedNewtypeDeriving, > MultiParamTypeClasses, OverlappingInstances, TypeOperators #-}
> import Control.Applicative (Applicative) > import Control.Monad.Reader (ReaderT, ask, runReaderT) > import Control.Monad.Trans (MonadIO, liftIO)
An extensible sum type
The first job is to write an extensible sum type, which will be how we represent events. Think of it as an extended
> data SumType = A | B | C
where we can add more constructors in different files, so it’s somewhat more flexible. The
Contains a typeclass means that a value of type
s optionally contains a value of type
a. We can
wrap and
unwrap this type:
> class Contains a s where > wrap :: a -> s > unwrap :: s -> Maybe a
Our main instance is a sum type combining two other types:
> data a :+: b = L a | R b > deriving (Show) > infixr 5 :+:
Later, we will chain this sum type to a list like:
> type SomeNumber = Int :+: Float :+: Double :+: Integer
We need instances of
Contains so we can wrap and unwrap these lists:
> instance Contains a (a :+: b) where > wrap = L > unwrap (L x) = Just x > unwrap _ = Nothing
> instance Contains b (a :+: b) where > wrap = R > unwrap (R x) = Just x > unwrap _ = Nothing
> instance Contains a s => Contains a (b :+: s) where > wrap = R . wrap > unwrap (R x) = unwrap x > unwrap _ = Nothing
An event-aware monad
Now, let’s go back to our extensible, event-based framework. We’ll assume all clients of the framework can be implemented as a monad. We can abstract over this monad, creating a typeclass for monads which can respond to an event of type
e:
> class (Functor m, Monad m) => MonadResponds e m where > fire :: e -> m ()
As you probably guessed, the
fire method fires an event. We implement an instance which is a
ReaderT. This way, the underlying monad can access a function which triggers an event:
> newtype RespondsT e m a = RespondsT > { unRespondsT :: ReaderT (e -> RespondsT e m ()) m a > } deriving (Applicative, Functor, Monad, MonadIO)
> runRespondsT :: RespondsT e m a -> (e -> RespondsT e m ()) -> m a > runRespondsT (RespondsT r) e = runReaderT r e
By using this trigger, our
RespondsT becomes an instance of
MonadResponds.
> instance (Contains e s, Functor m, Monad m) => > MonadResponds e (RespondsT s m) where > fire x = RespondsT $ ask >>= unRespondsT . ($ wrap x)
Now, all we need in order to write clients is some more syntactic sugar:
> client :: (Monad m, Contains e s) => (e -> m ()) -> s -> m () > client f = maybe (return ()) f . unwrap
A logging client
Let’s start out by implementing a very simple logger as client for the framework:
> data Log = Warn String | Info String
> logger :: (MonadIO m, Contains Log s) => s -> m () > logger = client $ \event -> liftIO $ putStrLn $ case event of > Warn s -> "[Warn]: " ++ s > Info s -> "[Info]: " ++ s
A ping client
The logging client received events using
client… let’s see how we can actually send events by writing an artificial ping-pong protocol. This client uses features from the logger, so we can really compose clients by just listing the required instances in the type signature (as is commonly done with monad transformers), which is a pretty cool thing.
> data Ping = Ping Int | Pong Int
> ping :: (Contains Log s, Contains Ping s, > MonadResponds Log m, MonadResponds Ping m) > => s -> m () > ping = client $ \event -> case event of > Ping x -> fire (Pong x) > Pong x -> fire (Info $ "Received pong with token " ++ show x)
Actually running it
If you’ve followed this blogpost until now, you probably want to see how we can, in the end, combine a number of clients and run them.
To this end, we’ll write a small utility function which combines a number of handlers (our clients) by sequentially applying them to the same event).
> combine :: Monad m => [e -> m ()] -> e -> m () > combine handlers event = mapM_ ($ event) handlers
Now, let’s use this to compose our clients. At this point, we’re required to fix the type for our client:
> type Features = Log :+: Ping
> testClient :: Features -> RespondsT Features IO () > testClient = combine [logger, ping]
And then we can write a program which uses these features:
> test :: RespondsT Features IO () > test = do > fire $ Warn "Starting the engines!" > fire $ Ping 100 > fire $ Info "Engines has been started." > fire $ Ping 200
> main :: IO () > main = runRespondsT test testClient
I hope you’ve enjoyed this blogpost – all criticism is welcome. If someone feels like turning this into a proper library, you’re also welcome. | http://jaspervdj.be/posts/2011-10-16-type-safe-events.html | CC-MAIN-2014-15 | refinedweb | 803 | 59.06 |
Created on 2008-04-24 19:51 by CharlesMerriam, last changed 2008-04-26 10:55 by vsajip.
About same as problem in 2.4 Issue1470422 closed without a test case on
MacOS X/Python 2.4.
Also same as
and so on back for years.
What happens:
chasm@chasm-laptop:~/py$ cat x.py
import logging
logging.basicConfig(level=logging.DEBUG,
format="%(levelname)s:%(pathname)s:%(lineno)d:%(message)s")
from logging import debug
if __name__ == "__main__":
debug("Hello")
chasm@chasm-laptop:~/py$ python x.py
DEBUG:logging/__init__.py:1327:Hello
What should happen:
It should print DEBUG: x.py:3:Hello
Why it fails:
Because logging guesses that the right sys._getframe(level) should be
level 3 in __init__.py:71, in currentFrame
if hasattr(sys, '_getframe'): currentframe = lambda: sys._getframe(3)
What should happen:
It shouldn't guess. In Python 2.5, the lambda might count. In any
case, the level is off by one (4). I suggest that it get set by walking
up the stack from until it exits the stack frame.
oops, last line should be "exits the stack frames for the logging
module. This should be a once-per-program-execution event"
Hmm.. tracker should have a preview button.?
In my installation, line 1327 is within the logging.debug() function,
specifically at the
call to apply(root.debug, (msg,)+args, kwargs)
chasm@chasm-laptop:~/py$ rm *.pyc
chasm@chasm-laptop:~/py$ python x.py
DEBUG:logging/__init__.py:1327:Hello
chasm@chasm-laptop:~/py$ uname -a
Linux chasm-laptop 2.6.22-14-generic #1 SMP Tue Feb 12 07:42:25 UTC
2008 i686 GNU/Linux
chasm@chasm-laptop:~/py$ python -V
Python 2.5.1
-and then-
chasm@chasm-laptop:/usr/lib/python2.5$ sudo rm -rf *.pyc *.pyo */*.pyc
*/*.pyo */*/*.pyc */*/*.pyo
chasm@chasm-laptop:/usr/lib/python2.5$ cd ~/py
chasm@chasm-laptop:~/py$ python x.py
DEBUG:x.py:7:Hello
chasm@chasm-laptop:~/py$
So it was somewhere in the library brunches. The uname -a translates
to "Kbuntu Gutsy". Python, and extras like pylint, coverage, and
nose, were installed via Kbuntu's package manager.
-- Charles
On Fri, Apr 25, 2008 at 3:27 AM, Vinay Sajip <report@bugs.python.org> wrote:
>
> Vinay Sajip <vinay_sajip@yahoo.co.uk> added the comment:
>
>?
>
> ----------
> assignee: -> vsajip
> nosy: +vsajip
>
>
>
> __________________________________
> Tracker <report@bugs.python.org>
> <>
> __________________________________
>
This is not a logging bug, but rather due to the circumstance that
.pyc/.pyo files do not correctly point to the source files that produced
them. There is another issue about this (#1180193) .
Closing this, as it's not a logging issue. | http://bugs.python.org/issue2684 | crawl-002 | refinedweb | 438 | 63.15 |
Java Enumeration
AdsTutorials
An Enumeration is an object that provides you element one at a time. This is passed through using a collection, usually of unknown size. The element is to be traversed only once at a time. you can't change the object or value in the collection,as Enumeration is read only and has forward facility.
We have declared a public class EnumerationTest,Inside the class we have a main method through which we assign an memory to the object of vector class. The vector class object is month names that is used to add the object in the collection class.
We have some method in enumeration-
1) has more element-Return true till the last object in collection returned by the next element.
2) next element -Return the next object in collection class
Output on Command Prompt
Advertisements
Posted on: April 17, 2011 If you enjoyed this post then why not add us on Google+? Add us to your Circles
Advertisements
Ads
Ads
Discuss: Java Enumeration
Post your Comment | https://www.roseindia.net/java/beginners/java-enumeration-exception.shtml | CC-MAIN-2017-43 | refinedweb | 172 | 60.95 |
0
I have another string dilemma. I am asked to write a program that counts the number of words in a sentence entered by the user. This is what I have right now, although I don't think that it is anywhere near being correct:
import string def main(): print "This program calculates the number of words in a sentence" print p = raw_input("Enter a sentence: ") words = string.split(p) wordCount = string.count(words, p) print "The total word count is:", wordCount main()
I'm pretty sure that the string.count() is all wrong but I don't understand what I'm supposed to put in the parentheses. Assistance would be greatly appreciated. Thank you. | https://www.daniweb.com/programming/software-development/threads/72128/word-count-issues | CC-MAIN-2018-39 | refinedweb | 115 | 73.17 |
I consider the difference between builtin and def'ed functions to be
something of an implementation wart -- one that I would like to see
someday removed if sensibly possible. How is a beginner to know that
the parameter names used in the docs and help() responses are not really
parameter names?
In the meanwhile, I think something like the following in the doc would
help: "(Note: an implementation may provide builtin functions whose
positional parameters do not have names, even if they are 'named' for
the purpose of documentation, and which therefore cannot be supplied by
keyword.)"
Also in the meanwhile, the OP can def-wrap builtins
import builtins
def abs(number): return builtins.abs(number)
# but some like int require more care with its no-default option | https://bugs.python.org/msg65820 | CC-MAIN-2019-26 | refinedweb | 128 | 55.17 |
Hey Thomas, thanks for the response. I fixed my problem but I can tell
everyone how just for future reference. I am using Cloudscape 10.0 and
Torque 3.1.1. The problem was addressed in the wiki and here was the answer
Jorge Ortega gave: "Another error may be at the OM generation. The database
name was not specifed in then XML schema file. <database
defaultIdMethod="idbroker" name="databaseName">". So it wasn't anything
wrong with my Java syntax.
I checked my schema file and sure enough the name attribute was not
specified. The XML file was made from an Ant JDBC task though! So I opened
the Torque source, found the TorqueJDBCTransformTask.java file (it's in
org.apache.torque.task), and found where it added the database element.
Since there is no particular explicit variable defined in that Java file for
the name of the database I parsed the dbUrl string and got the name of the
database. So here was the line in the Java file to begin with where it
creates the database tag in the XML file:
databaseNode = doc.createElement("database");
I simply added this line immediately after it (using the syntax for setting
the attributes of a table):
((Element)databaseNode).setAttribute("name",
dbUrl.substring(dbUrl.lastIndexOf("/")+1) );
Of course then I had to recompile that file, then find the .class file it
created, open the torque-gen-3.1.1.jar, replace the existing
TorqueJDBCTransformTask.class with my new one, and then recreate the jar and
copy it into my project directory. But now everything works and my database
tag shows up with something like <database name="blah"> now. :)
-Brandon
----- Original Message -----
From: "Thomas Fischer" <tfischer@apache.org>
To: <torque-user@db.apache.org>
Sent: Wednesday, March 09, 2005 5:28 AM
Subject: RE: problems with ordering
> Hi,
>
> your code looks ok. There is something similar in the Runtimetest,
> public void testCriteriaOrderBy()
> {
> List books = null;
> try
> {
> Criteria criteria = new Criteria();
> criteria.addAscendingOrderByColumn(BookPeer.TITLE);
> criteria.addAscendingOrderByColumn(BookPeer.ISBN);
>
> books = BookPeer.doSelect(criteria);
> }
> catch (Exception e)
> {
> e.printStackTrace();
> fail("Exception caught : "
> + e.getClass().getName()
> + " : " + e.getMessage());
> }
>
> which runs without problems.
> Can you please send in the stacktrace of the NullPointerException, and
> state which Torque version you are using ?
>
> Thomas
>
> Original Message:
> I'm having trouble with ordering columns. I'm not sure why, I've tried
> debugging, but I always seems to get a null pointer exception, yet I can't
> figure out what's null...maybe it's because I don't have the Torque source
> in the debugger, so I lose my abililty to step through and really see
> what's
> happening once I leave my generated classes. But normally I'm getting a
> list back, it's just not sorted. So I installed p6spy to check the SQL
> and
> see what was really going on. So I made this simple little tester class
> to
> see what's up and I get an exception when I try to order. I have a DB
> that's a fake doctor's office say, and it holds records of patients. Any
> idea why the code below wouldn't work, and any suggestions on how to add
> ordering? According to the Criteria tutorial, my code should work
> perfectly. All they have is
> "criteria.addAscendingOrderByColumn(RolePeer.NAME);" and I do something
> very
> similar. The problem is obviously with the ordering method call. If I
> comment out that line, the code works and my list prints out...it's just
> not
> sorted which is what I want to do. Thanks! -Brandon
>
>
>
> public class Tester
> {
> public static void main(String[] args)
> {
> try
> {
>
> Torque.init("torque.properties");
>
> Criteria c = new Criteria();
>
> c.addAscendingOrderByColumn(PatientPeer.LAST_NAME);
> List li = PatientPeer.doSelect(c);
> Iterator i = (Iterator)li.iterator();
>
> System.out.println("list 1");
> while (i.hasNext())
> {
> Patient p = (Patient)i.next();
>
> System.out.println(p.toString());
> }
> }
> catch (Exception e)
> {e.printStackTrace();}
> }
> }
>
>
> ---------------------------------------------------------------------
> | http://mail-archives.apache.org/mod_mbox/db-torque-user/200503.mbox/%3CBAY12-DAV22C202BE54770DFCE94DEE7510@phx.gbl%3E | CC-MAIN-2018-30 | refinedweb | 648 | 60.31 |
JabChapter 10
Pointers for Further Development
The previous two chapters have demonstrated how Jabber can be used to build applications and solutions in many functional areas. They expanded upon and indeed went beyond the theme of instant messaging (IM) to employ the fundamental features of contextual messaging, presence, and request/response sequences, in a wide range of scenarios.
While these scenarios have in some ways been inward looking, they are natural progressions that originated inside the IM world and matured into applications and solutions that retain much of the messaging flavor. Let's consider what else Jabber has to offer as a messaging and routing mechanism.
This chapter explores some "outward looking" scenarios, to give you pointers and ideas for the future. With Demo::JBook, we consider the possibility of "Jabber without Jabber"—in other words, using Jabber as an infrastructure, in this case as a data store, without focus on any particular Jabber client or IM functionality. We also explore how Jabber is the perfect transport partner for procedure calls formalized in XML: JabberRPCRequester and JabberRPCResponder are scripts that exchange method calls and responses encoded in XML-RPC.
Using Jabber as a conduit to foreign systems is also a theme of this chapter. With ldapr, we build a script that reflects the hierarchy and contents of an LDAP data store, allowing that store to be navigated from a Jabber client. Finally, we look to the business world, employing Jabber in a tiny but crucial role as a conduit between SAP systems and their users.
A Simple Jabber-Based Address Book
With the availability of many different off-the-shelf Jabber clients and the use of these clients as generic tools to interact with diverse Jabber-based services, it's easy to lose sight of the fact that Jabber can also be used to contribute to infrastructure solutions. That is, applications and utilities can be built using the Jabber protocols, in conjunction with Jabber server-based services, without the need for a Jabber client.
By way of illustration, let's build a simple two-level address book using Jabber services. We'll call it Demo::JBook. We'll use this address book to look up details of our friends and colleagues while we're on the move. The ideal platform for this is going to be a web browser, in that it's accessible from personal workstations, airport web consoles, cybercafés, and personal digital assistants (PDAs) that offer access to the Internet.
The point of this illustration is not to show that there's a single solution to the problem of disconnected, incompatible, and unsynchronized directory information (because, despite any answers that you may get to the contrary—no such solution exists). Instead, the goal is to show that it's possible to make use of Jabber services and get to information stored and managed by those services without having to use a Jabber client.
Using the JUD and vCards
The two levels in our address book are going to reflect two distinct (but related) mechanisms in Jabber. We're going to base our address book on the Jabber User Directory (JUD) component and supply further information, in a "drill-down" action, using vCards.
- The JUD: Our address book will act as a query frontend for a user directory in the form of a JUD. It doesn't matter which JUD we use; obviously, that depends on how the application is to be deployed. On one hand, it might be appropriate to point it at your company's internal JUD, if you have one. On the other hand, it might also be just as appropriate to point it at one of the larger public JUDs, such as the one connected to the Jabber server running on jabber.org (which is users.jabber.org).

- vCards: Every Jabber entity—users, components, and servers—has the potential to have a vCard. We saw in Chapter 4 that the Jabber server itself, and many of the components connected to it, had a vCard definition. While the vCard standard is still fluid, the implementation within Jabber, as described in Section 6.5.1 in Chapter 6, is enough to be useful.

The key to the application is that both the Jabber mechanisms that it relies upon—the JUD and vCards—can be accessed independently of the availability of the users that the information stored in those mechanisms represents. The JUD runs as an independent component and manages the directory information using its own data store. With the default JUD and XML Database (XDB) component configurations, this data store will be in the Jabber server's spool directory in a file called jud/global.xdb.
Users interact with the JUD to manage their information, using IQ elements qualified by the jabber:iq:register namespace. User vCard information is also stored at the server side, and users manage the vCard contents using IQ elements qualified by the vcard-temp namespace. Again, with the default configuration, the vCard information, stored along with the rest of the user data relevant to the Jabber server in user-specific spool files, will be held in the Jabber server's spool directory in files called [hostname]/[user].xml.
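To make the shape of that registration conversation concrete, here is a sketch of the IQ-set a user might send to a JUD, using the storable fields we'll see listed in Example 10-3. The JID, resource, and field values here are invented for illustration:

```xml
SEND: <iq type="set" to="jud.gnu.mine.nu">
        <query xmlns="jabber:iq:register">
          <name>Joseph Adams</name>
          <first>Joseph</first>
          <last>Adams</last>
          <nick>joseph</nick>
          <email>joseph@pipetree.com</email>
          <text>Plays the piano</text>
        </query>
      </iq>

RECV: <iq type='result' from='jud.gnu.mine.nu'
          to='joseph@gnu.mine.nu/home'/>
```

An empty IQ-result from the JUD confirms that the registration (that is, the storage of the directory entry) succeeded.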
What Demo::JBook Will Do
The JUD can be queried using a normal Jabber client. In Figure 10-1, we can see the search form of Jabber Instant Messenger (JIM).
The search fields are not fixed; instead, they're dynamically generated, according to the result of an initial IQ-get in the jabber:iq:search namespace. Just as an IQ-get in the jabber:iq:register namespace (as illustrated in Section 8.3) is used for registering users, an IQ-get in the jabber:iq:search namespace is used for search requests. This is illustrated in Example 10-1, where an IQ-get is sent to the JUD at jud.gnu.mine.nu.
Example 10-1. An IQ-get to a JUD in the jabber:iq:search namespace

SEND: <iq type="get" id="9138" to="jud.gnu.mine.nu">
        <query xmlns="jabber:iq:search"/>
      </iq>

RECV: <iq from='jud.gnu.mine.nu' id='9138'
          to='qmacro@jabber.org/study' type='result'>
        <query xmlns='jabber:iq:search'>
          <first/>
          <last/>
          <instructions>
            Fill in a field to search for any matching Jabber users.
          </instructions>
        </query>
      </iq>

In response to the IQ-get request, the JUD sends back a list of fields with which the directory can be searched, along with some simple instructions.
The actual search follows the same registration pattern (using the jabber:iq:register namespace) that we saw in Section 8.3. After receiving a list of possible search fields, a search request is made with an IQ-set, as shown in Example 10-2. The results, returned in an IQ-result from the JUD, are listed with each entry contained in an <item/> tag. The search results are shown in Figure 10-2.
[[image:0596002025-jab_1002.png|Searching the JUD: the search results|center|350px]]
Example 10-2. The JUD search request and response

SEND: <iq type="set" id="2627" to="users.jabber.org">
        <query xmlns="jabber:iq:search">
          <last>adams</last>
        </query>
      </iq>

RECV: <iq from='users.jabber.org' id='2627'
          to='qmacro@jabber.org/study' type='result'>
        <query xmlns='jabber:iq:search'>
          <item jid='qmacro@jabber.org'>
            <nick>qmacro</nick>
            <first>DJ</first>
            <email>dj.adams@gmx.net</email>
            <last>Adams</last>
          </item>
          <item jid='qmacro@jabber.com'>
            <nick>dj</nick>
            <first>DJ</first>
            <email>dj.adams@gmx.net</email>
            <last>Adams</last>
          </item>
          <item jid='joseph@gnu.mine.nu'>
            <nick>joseph</nick>
            <first>Joseph</first>
            <email>joseph@pipetree.com</email>
            <last>Adams</last>
          </item>
          ... (more items) ...
        </query>
      </iq>
Searching a JUD
The first level that we'll build into Demo::JBook is the ability to query a JUD. The address book is to be browser-based, so we'll generate HTML on the fly according to the search fields we receive in response to our IQ-get. It should look something like that shown in Figure 10-3.
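Given the two search fields and the instructions returned in Example 10-1, the HTML form that Demo::JBook generates might look something like this (the exact markup, including the page title, is illustrative rather than prescribed):

```html
<html><head><title>JBook</title></head><body>
<p>Fill in a field to search for any matching Jabber users.</p>
<form method="get" action="/jbook">
  First: <input type="text" name="first"><br>
  Last: <input type="text" name="last"><br>
  <input type="submit" value="Search">
</form>
</body></html>
```

Submitting the form sends the field values back to the /jbook URL as query-string arguments, which the handler then forwards to the JUD as a search request.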
The items returned in the search results don't really provide much information:

<item jid='joseph@gnu.mine.nu'>
  <nick>joseph</nick>
  <first>Joseph</first>
  <email>joseph@pipetree.com</email>
  <last>Adams</last>
</item>
Here, we have the user's first and last names, his nickname, and his
email address. One of the functions of the JUD is to determine which
fields are storable.
{{Sidebar|JUD Fields|
The fields in a JUD that are used to store the directory data are determined either by a hardcoded list in the JUD source itself or a configurable list that's stored and maintained separate from the JUD code. The list of fields allowed for searching a JUD may or may not be the same as the list used for storing the information; it may be a subset (it makes no sense, of course, for it to be a superset). For example, if the fields used to store information in a JUD are:

<name/> <email/> <first/> <last/> <nick/> <text/>

it may be that the fields allowed for searching may be just:

<first/> <last/>

as shown in Example 10-1.
}}
In case you're wondering how to discover the fields that can be used to populate the information in the JUD, an IQ-get can be used in the jabber:iq:register namespace to qualify a registration conversation with the JUD, which is where the information is "registered," or stored. This is shown in Example 10-3.
Example 10-3. An IQ-get to a JUD in the jabber:iq:register namespace

SEND: <iq type='get' to='jud.gnu.mine.nu'>
        <query xmlns='jabber:iq:register'/>
      </iq>

RECV: <iq from='jud.gnu.mine.nu'
          to='qmacro@jabber.com/study' type='result'>
        <query xmlns='jabber:iq:register'>
          <nick/>
          <first/>
          <email/>
          <last/>
          <text/>
          <name/>
          <instructions>
            Fill in all of the fields to add yourself to the JUD.
          </instructions>
        </query>
      </iq>

We can see in Figure 10-4 how the results will typically be rendered.
[[image:jab_1004.png|JUD search results as rendered in Demo::JBook|center|350px]]
Retrieving vCard information
As well as registering with a JUD, it's possible that a user has maintained more information about himself—in his vCard. Depending on the Jabber client used, various user information can be stored in a personal vCard, which is stored on the server side. In Figure 10-5, we see the JIM client's vCard maintenance window, titled User Profile.
The result of entering information and clicking the OK button in Figure 10-5 can be seen in Example 10-4, where an IQ-set in the vcard-temp namespace is made to store the information (some of the vCard tags have been omitted to keep the example short). Notice how no to attribute is specified in the IQ-set and how the result appears to come from the sender (qmacro@jabber.com/study). The storage of personal—user-specific—vCard information is a function of the Jabber Session Manager (JSM), which is where the <iq/> element will be routed automatically, as it is coming in over a client connection (defined by the jabber:client stream-level namespace). This is further discussed in Section 5.4.3.1.
[[image:jab_1005.png|Updating personal vCard information with JIM|center|350px]]
Example 10-4. Setting vCard information in the vcard-temp namespace

SEND: <iq type="set">
        <VCARD version="3.0" xmlns="vcard-temp">
          <N>
            <GIVEN>DJ</GIVEN>
            <FAMILY>Adams</FAMILY>
            <MIDDLE/>
          </N>
          <NICKNAME>qmacro</NICKNAME>
          <EMAIL>
            <INTERNET/>
            <PREF/>
            dj.adams@gmx.net
          </EMAIL>
          ...
        </VCARD>
      </iq>

RECV: <iq type='result' from='qmacro@jabber.com/study'
          to='qmacro@jabber.com/study'/>

As well as being storable, the information in a personal vCard is also retrievable by anyone, anytime. The idea of a personal vCard is that the information it contains is permanently available. Because the vCard data is stored server side, it can be retrieved whether or not the user is online.
The key (literally and metaphorically) of each of the search result items returned is a JID. We can see this in Example 10-2, where each <item/> tag has a jid attribute. You won't be surprised to know that the key to accessing someone's vCard is his JID too. So we have a great way for Demo::JBook to jump from level 1, which displays JUD search results, to level 2 by retrieving and displaying a vCard via the JID. The results of jumping from a JUD result entry to a vCard display for the selected user, via the JUD, can be seen in Figure 10-6.
[[image:jab_1006.png|A vCard as displayed in Demo::JBook|center|350px]]
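The vCard retrieval that takes Demo::JBook from level 1 to level 2 is another IQ-get, this time qualified by the vcard-temp namespace and addressed to the JID taken from the search result item. Sketched below with the joseph@gnu.mine.nu JID from Example 10-2 (and with the handler's own JID, jbook@gnu.mine.nu/jbook, built from the script's SERVER, USER, and RESOURCE constants), the exchange looks something like this; the field values in the response mirror the structure we saw in Example 10-4:

```xml
SEND: <iq type="get" to="joseph@gnu.mine.nu">
        <vcard xmlns="vcard-temp"/>
      </iq>

RECV: <iq type='result' from='joseph@gnu.mine.nu'
          to='jbook@gnu.mine.nu/jbook'>
        <VCARD version='3.0' xmlns='vcard-temp'>
          <N>
            <GIVEN>Joseph</GIVEN>
            <FAMILY>Adams</FAMILY>
          </N>
          <NICKNAME>joseph</NICKNAME>
          ...
        </VCARD>
      </iq>
```

Because the request goes to the user's bare JID, the JSM answers on the user's behalf from its server-side store, which is exactly why this works whether or not the user is connected.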
Using Demo::JBook as an Apache Handler
The Demo::JBook application is going to exist as an Apache mod_perl handler; that is, we'll use the power of Perl's integration into the Apache web server to write an Apache module in Perl. You can find out more about mod_perl at http://perl.apache.org.
Being a mod_perl module, Demo::JBook exists in the form of a Perl module and is configured in Apache to service calls to http://[hostname]/jbook. The configuration can be comfortably placed in the main Apache configuration file, httpd.conf, or, as is common in mod_perl installations, in an extra file usually called perl.conf, which is linked to httpd.conf as follows:
<IfModule mod_perl.c>
  Include conf/perl.conf
</IfModule>
The configuration for Demo::JBook is placed in the perl.conf file,
as shown in Example 10-5.
Example 10-5. Configuring the module as a handler in Apache

PerlRequire conf/startup.pl
...
<Location /jbook>
  SetHandler perl-script
  PerlHandler Demo::JBook
</Location>

The PerlHandler directive refers to the Demo::JBook module, which is the JBook.pm file that exists in the Demo/ directory. The module will be invoked to handle calls to the relative URL /jbook. You can add the location of the Demo::JBook module to mod_perl's list of directories in the BEGIN section of the startup.pl script, in the conf/ directory, like this:
BEGIN {
  use Apache ();
  use lib '[the directory location of Demo::JBook]';
}
The Demo::JBook Script
Before taking the Demo::JBook script apart, let's have a look at the script in its entirety, shown in Example 10-6. Written in Perl, Demo::JBook uses the Jabber::Connection library.
Example 10-6. The Demo::JBook script, written in Perl

package Demo::JBook;

use strict;

use Jabber::Connection;
use Jabber::NodeFactory;
use Jabber::NS qw(:iq :misc);

use constant SERVER   => 'gnu.mine.nu';
use constant USER     => 'jbook';
use constant PASS     => 'pass';
use constant RESOURCE => 'jbook';
use constant JUD      => 'jud.gnu.mine.nu';

sub handler {

  # Apache request object, response header, and query-string arguments
  my $r = shift;
  $r->content_type('text/html');
  $r->send_http_header;
  my @a = split(/[=&]/, $r->args);

  $r->print("<html><head><title>JBook</title></head><body>");

  my $nf = Jabber::NodeFactory->new;

  # Connect to the Jabber server
  my $c = Jabber::Connection->new(server => SERVER);
  unless ($c->connect) {
    $r->print("Sorry, no connection to Jabber available at ".SERVER);
    $r->print("</body></html>");
    return;
  }
  $c->auth(USER, PASS, RESOURCE);

  # No arguments: present the search form
  if (!@a) {

    # Construct IQ-get in iq:search namespace
    my $iq = $nf->newNode('iq');
    $iq->attr('to', JUD);
    $iq->attr('type', IQ_GET);
    $iq->insertTag('query', NS_SEARCH);

    # Send the IQ-get
    my $result = $c->ask($iq);
    if ($result->attr('type') eq IQ_ERROR) {
      $r->print("sorry, no connection to JUD available at ".JUD);
      $r->print("</body></html>");
      $c->disconnect;
      return;
    }
    my $info = $result->getTag('', NS_SEARCH);

    # Build a search form from the fields the JUD offers
    my $instructions = $info->getTag('instructions');
    $r->print("<p>".$instructions->data."</p>") if $instructions;
    $r->print("<form method=\"get\" action=\"/jbook\">\n");
    foreach my $field ($info->getChildren) {
      next if $field->name eq 'instructions';
      $r->print(ucfirst($field->name)
               .": <input type=\"text\" name=\"".$field->name."\"><br>\n");
    }
    $r->print("<input type=\"submit\" value=\"Search\"></form>\n");
  }

  # Multiple arguments: JUD lookup
  elsif (scalar @a > 1) {

    # Treat the arguments as a hash
    my %a = @a;

    # Construct an IQ-set
    my $iq = $nf->newNode('iq');
    $iq->attr('to', JUD);
    $iq->attr('type', IQ_SET);
    my $query = $iq->insertTag('query', NS_SEARCH);
    while (my($name, $val) = each(%a)) {
      $query->insertTag($name)->data($val) if $val;
    }

    # Make call
    my $result = $c->ask($iq);
    if ($result->attr('type') eq IQ_ERROR) {
      $r->print("sorry, cannot query JUD");
      $r->print("</body></html>");
      $c->disconnect;
      return;
    }
    my $info = $result->getTag('', NS_SEARCH);

    my $items = 0;
    $r->print("<p><strong>".JUD."</strong></p>\n");
    $r->print("<table border=\"1\">\n");
    foreach my $item ($info->getChildren) {

      # Heading
      unless ($items) {
        $r->print("<tr>");
        foreach my $tag ($item->getChildren) {
          $r->print("<th>".ucfirst($tag->name)."</th>");
        }
        $r->print("</tr>\n");
      }

      # Make the first non-empty cell a link to the vCard lookup
      $r->print("<tr>");
      my $flag = 0;
      foreach my $tag ($item->getChildren) {
        unless (length($tag->data) == 0 or $flag++) {
          $r->print("<td><a href=\"/jbook?".$item->attr('jid')."\">");
          $r->print($tag->data."</a></td>");
        }
        else {
          $r->print("<td>".$tag->data."</td>");
        }
      }
      $r->print("</tr>\n");
      $items++;
    }
    $r->print("</table>\n");
    $r->print("<p>$items results found</p>");
  }

  # Single argument: vCard lookup
  else {

    # Construct query
    my $iq = $nf->newNode('iq');
    $iq->attr('to', $a[0]);
    $iq->attr('type', IQ_GET);
    $iq->insertTag('vcard', NS_VCARD);

    # Make call and retrieve results
    my $result = $c->ask($iq);
    if ($result->attr('type') eq IQ_ERROR) {
      $r->print("sorry, cannot retrieve vCard for $a[0]");
      $r->print("</body></html>");
      $c->disconnect;
      return;
    }
    my $vcard = $result->getTag('', NS_VCARD);

    $r->print("<strong>$a[0]</strong>\n");

    # Display each of the top-level tags if they contain data
    foreach my $tag ($vcard->getChildren) {
      $r->print("<br/>".$tag->name." : ".$tag->data."\n") if $tag->data;
    }
  }

  $r->print("</body></html>");
  $c->disconnect;
  return;
}

1;
Taking Demo::JBook Step by Step
Now that we've got a hold on the scope and scale of Demo::JBook, let's take the script apart—step by step—to see how it works.
Declarations
Because Demo::JBook is an Apache handler, it exists as a Perl module—hence the package declaration at the top of the file:
package Demo::JBook;
use strict;
use Jabber::Connection;
use Jabber::NodeFactory;
use Jabber::NS qw(:iq :misc);
use constant SERVER   => 'gnu.mine.nu';
use constant USER     => 'jbook';
use constant PASS     => 'pass';
use constant RESOURCE => 'jbook';
use constant JUD      => 'users.jabber.org';

This code exists within the Demo::JBook package that was declared as the handler for the http://[hostname]/jbook location, shown in Example 10-5. We're going to make full use of the Jabber::Connection library, bringing in all three of its modules: Jabber::Connection for managing the Jabber server connection and dispatching elements that arrive, Jabber::NodeFactory for building and manipulating elements, and Jabber::NS for common Jabber programming constants. In the case of Jabber::NS, we need only a few namespaces to manage our JUD and vCard queries, so the :iq and :misc tags will be used to refer to a collection of constants in Jabber::NS.
The constants SERVER, USER, PASS, and RESOURCE define the connection to the Jabber server. This connection doesn't have to be made to the JUD that will be queried by Demo::JBook. By way of illustration, we have jabber.org's JUD (users.jabber.org) specified as the value for the JUD constant. For the purpose of this example, this will be the JUD that Demo::JBook will query.
You can use the reguser script, described in Section 7.4, to create the Demo::JBook user. See Section 8.2.1.1 for an example of how this can be done.
General handler preparation
Following the script's declarations, it's time to define the handler function that Apache will call to handle incoming requests to the http://[hostname]/jbook location. The name of the handler must be handler():

sub handler {

  my $r = shift;
  my @a = $r->args;

  my $nf = Jabber::NodeFactory->new;

  $r->print("<html><head><title>JBook</title></head><body>");
  $r->print("<h1><a href=\"/jbook\">JBook</a></h1>");

  # Connect to home Jabber server
  my $c = Jabber::Connection->new(server => SERVER);
  unless ($c->connect) {
    $r->print("Sorry, no connection to Jabber available at ".SERVER);
    $r->print("</body></html>");
    return;
  }

  $c->auth(USER, PASS, RESOURCE);
The mod_perl mechanism hands the handler() function an argument that is stored in a variable called $r. This is the HyperText Transfer Protocol (HTTP) request that has been made, which is handled by the function.
Calling the args() method on our request object gives us a list of arguments. These arguments follow the question mark in a typical HTTP GET request. In Figure 10-3, the URL in the Location bar is http://[hostname]/jbook. In this URL, there is neither a question mark nor any arguments. The assignment to @a here:
my @a = $r->args;
would leave @a empty.
{{Note|The first part of the URL containing the hostname is truncated by the size of the browser window.}}

However, if the URL in the Location bar (shown in Figure 10-4) contained a question mark and arguments, such as:
http://[hostname]/jbook?name=&first=&last=adams&nick=&email=
the arguments that @a would receive follow the question mark; they are separated into pairs with ampersands and further split with equals signs. So here, @a would receive the arguments:

name, (blank), first, (blank), last, adams, nick, (blank), email, (blank)

In the final URL example, shown in Figure 10-6, the array received by @a is just a single element:

dj@gnu.mine.nu

The content of @a defines which stage of the script takes the appropriate action. Queries will be generated in the form of IQ-gets and IQ-sets to retrieve information from the JUD as well as to retrieve personal vCards. To do this, we need to create an instance of a node factory, so we can build elements:

my $nf = Jabber::NodeFactory->new;
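The flattening of query-string arguments into a single list of alternating names and values, which is what mod_perl's args() hands to @a, can be sketched with Python's standard urllib.parse module. This is an illustration only; the query string below is the one from the Figure 10-4 example:

```python
from urllib.parse import parse_qsl

# The query string from the search-form submission
query = "name=&first=&last=adams&nick=&email="

# keep_blank_values=True preserves the empty fields, mirroring the
# flat name/value list that $r->args hands to @a
pairs = parse_qsl(query, keep_blank_values=True)
flat = [part for pair in pairs for part in pair]

print(flat)
# ['name', '', 'first', '', 'last', 'adams', 'nick', '', 'email', '']
```

Note how a lone value with no equals sign (the vCard case, ?dj@gnu.mine.nu) would not follow this pair structure, which is why the script distinguishes the states by the number of elements received.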
After retrieving the request arguments and creating a node factory
instance, it's time to generate some HTML that will be common to all of
Demo::JBook's features. The print() method will be used on
the request object $r to send a response back to the requesting
web browser:
$r->print("<html><head><title>JBook</title></head><body>");
$r->print("<h1><a href=\"/jbook\">JBook</a></h1>");
Next, we need to create a connection to the Jabber server for each
request:
# Connect to home Jabber server
my $c = Jabber::Connection->new(server => SERVER);
unless ($c->connect) {
  $r->print("Sorry, no connection to Jabber available at ".SERVER);
  $r->print("</body></html>");
  return;
}

$c->auth(USER, PASS, RESOURCE);
It will be much more efficient to create a persistent connection that could be used to serve further requests. One way to do this is to fork a daemon that connects to the Jabber server, acting as a proxy for this script. Rather than make direct requests to the Jabber server, Demo::JBook is used to send the IQ elements to the daemon, which in turn uses its persistent Jabber connection to make queries on Demo::JBook's behalf. A setup like this is shown in Figure 10-7.
State 1: Build the JUD query form
Based upon how many arguments we have in the @a array, as described earlier, we can build the response, which will represent one of three states:
- JUD query form
- JUD query results
- vCard display

If Demo::JBook is called without arguments (i.e., http://[hostname]/jbook), which we test for like this:
if (scalar @a == 0) {
then we want to build and present the JUD search form. This form—the
fields that the form consists of—will be specific to the particular JUD
that will be searched.
We construct an IQ-get to send to the JUD to ask for a list of search fields and instructions. What we're looking for is something like the IQ-get shown in Example 10-1:

SEND: <iq type='get' to='users.jabber.org'>
        <query xmlns='jabber:iq:search'/>
      </iq>
To construct this, we start with a new node (element) created by
using the node factory:
my $iq = $nf->newNode('iq');
$iq->attr('to', JUD);
$iq->attr('type', IQ_GET);
$iq->insertTag('query', NS_SEARCH);
In previous recipes, we've called methods in the Jabber libraries
(Jabberpy, JabberBeans, Net::Jabber, and
Jabber::Connection) to send an element to the Jabber server.
Typically, such a method is called a send() method. Here, we
don't use a send() method. Instead, we use
Jabber::Connection's ask(). Like
Net::Jabber's SendAndReceiveWithID() method, and
Jabberpy's method of the same name, ask() not only
sends the element to the Jabber server, it waits for a reply.
A reply—in Jabber terms—is a response in the form of an element that comes back along the stream with a matching id attribute. In this case, we're making an IQ-get and are expecting a response back from the recipient of that IQ-get. One way of expecting and handling the response would be to use a predefined callback specified for <iq/> elements and send our IQ-get with the send() method. However, this means that the script receives the response in an element callback that's somewhat independent of our call to send(). As well as catching the response in the callback, we also need some way of matching it up with the original request, not to mention getting back on track with the flow of execution that represented the logical sequence of events we were in the middle of following when we called send().
There's an easier way, if we want to make a request and wait for the response to that request before continuing on in the script. This way makes use of the ask() method.
What the ask() method does is avoid the need to catch responses in callbacks and match them up with their originating requests. It send()s the element to the Jabber server and blocks until an element with a matching id attribute value is received. Other elements that might be received while waiting for the response are duly dispatched as normal. It's not as if elements get queued up if the response takes a moment or two to arrive. When the matching element arrives on the stream, it is passed directly back to the caller of the ask() method, in the same form as if it had been handed to a callback—in object form, as an instance of a Jabber::NodeFactory::Node object, in this case.
my $result = $c->ask($iq);
So here, $result receives the response to the
<iq/> element just sent. This response will look
something like this:
RECV: <iq from='users.jabber.org' id='43' to='jbook@gnu.mine.nu/jbook' type='result'>
        <query xmlns='jabber:iq:search'>
          <nick/>
          <first/>
          <email/>
          <last/>
          <instructions>
            Fill in a field to search for any matching Jabber users.
          </instructions>
        </query>
      </iq>
It's addressed to the JID that the script is using,
jbook@gnu.mine.nu/jbook. But hold on—there's something that
doesn't look quite right here, that is, when compared to the example of
the IQ-get we just sent. Indeed—there's an id attribute in the
response. We didn't specify one in the Perl code that built the
<iq/> element. The ask() method did it for us.
Knowing that it's going to have to check the id attribute on
every incoming element to find a match for the element it's just sent
off for us, the ask() method makes sure that the outgoing
element actually has an id attribute. If it doesn't, it
adds one, giving it a unique value. That way, it stands a fighting
chance of returning to the caller something this side of the end of
time.
{{Note|Talking of time, if you're uneasy about calling blocking functions in general, you can always set an alarm() to interrupt the call after a certain length of time.}}

For more information on matching requests and responses, see Section 2.5 in Chapter 2.
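The essence of ask() is small: assign a unique id to the outgoing element if it lacks one, then block until an element with a matching id arrives, dispatching everything else as normal. The following Python sketch models that behaviour; it is an illustration of the pattern, not Jabber::Connection's actual code, and the dicts stand in for <iq/> stanzas:

```python
import itertools

_ids = itertools.count(1)

def dispatch(element):
    """Stand-in for normal callback dispatch of unrelated elements."""
    pass

def ask(send, receive, element):
    """Send an element, then block until the reply with a matching id arrives.

    `send` and `receive` stand in for the socket layer.
    """
    # Make sure the outgoing element carries a unique id attribute
    if 'id' not in element:
        element['id'] = str(next(_ids))
    send(element)
    while True:
        reply = receive()              # blocks for the next incoming element
        if reply.get('id') == element['id']:
            return reply               # the matching response
        dispatch(reply)                # anything else is handled as usual

# Demo: the first incoming element doesn't match and is dispatched;
# the second carries the auto-assigned id and is returned.
incoming = [{'id': '999'}, {'id': '1', 'type': 'result'}]
reply = ask(lambda el: None, lambda: incoming.pop(0), {'type': 'get'})
print(reply)   # {'id': '1', 'type': 'result'}
```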
Now let's move on to the response to the IQ-get, which we now have in $result. While we're expecting an IQ-result in response to the IQ-get, the request might not have been successful, and we simply bail out gracefully if it isn't, ending our connection with the Jabber server:
if ($result->attr('type') eq IQ_ERROR) {
  $r->print("Sorry, no connection to the JUD available at ".JUD);
  $r->print("</body></html>");
  $c->disconnect;
  return;
}
Otherwise, we're expecting a result containing the search fields and some instructions. These will be contained within the <query/> tag qualified by the jabber:iq:search namespace. (The tag is usually called query, but here, as elsewhere in the script, we're not taking any chances and are looking for "the first (hopefully the only!) occurrence of a child tag qualified by the jabber:iq:search namespace.")
my $info = $result->getTag('', NS_SEARCH);
An HTML form is built from the instructions and the search fields; the
instructions are retrieved with:
$info->getTag('instructions')->data
which retrieves the <instructions/> tag and extracts its
contents.
The getChildren() method is called upon our <query/> tag to discover what fields are available. For all those fields, barring the "instructions" one, we create an input text field:
# Build form containing the search fields
$r->print("<p>".$info->getTag('instructions')->data."</p>");
$r->print("<form method=\"get\" action=\"/jbook\">");
foreach my $field ($info->getChildren) {
  next if $field->name eq 'instructions';
  $r->print(ucfirst($field->name).": ");
  $r->print("<input type=\"text\" name=\"".$field->name."\"><br/>");
}
$r->print("<input type=\"submit\" value=\"Search\">");
$r->print("</form>");
}
Once the form has been built, the work for this stage is complete—the form is relayed to the user, who will submit a completed form, thereby invoking the next state.
State 2: Query the JUD
The submission of the HTML form will cause a number of name/value pairs to be passed as part of the HTTP GET request. These names and values are captured into the @a array as described earlier. If we have more than one entry in @a, we know it's a form submission and must respond to that by querying the JUD and returning the results. In this case, as we know that the contents of @a are name/value pairs, we can view those contents as a hash, using a new variable %a:
elsif (scalar @a > 1) {

  my %a = @a;
Now, %a will contain entries where the keys are the names of
the search fields and the values are the values entered in the form.
In the same way that we constructed an IQ-get to query the JUD for the search fields and instructions, we construct an IQ-set to perform the actual query:
my $iq = $nf->newNode('iq');
$iq->attr('to', JUD);
$iq->attr('type', IQ_SET);
my $query = $iq->insertTag('query', NS_SEARCH);
Using the information in %a, we insert tags for each of the search fields for which a value was specified in the form. For example, if only the value adams was specified, in the field representing the <last/> search field, as shown in Figure 10-3, we would only want to insert:
<last>adams</last>
as a child of the IQ-set's <query/> tag:
while (my($name, $val) = each(%a)) {
$query->insertTag($name)->data($val) if $val;
}
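The same construction, inserting a child tag only for each filled-in criterion, can be sketched with Python's standard ElementTree module. This is an illustration of the element being built, not Jabber::Connection code:

```python
import xml.etree.ElementTree as ET

def build_search_iq(jud, criteria):
    """Build a jabber:iq:search IQ-set, skipping empty criteria."""
    iq = ET.Element('iq', {'to': jud, 'type': 'set'})
    query = ET.SubElement(iq, 'query', {'xmlns': 'jabber:iq:search'})
    for name, value in criteria.items():
        if value:                      # only include filled-in fields
            ET.SubElement(query, name).text = value
    return ET.tostring(iq, encoding='unicode')

xml = build_search_iq('users.jabber.org',
                      {'name': '', 'first': '', 'last': 'adams'})
print(xml)
# <iq to="users.jabber.org" type="set"><query xmlns="jabber:iq:search"><last>adams</last></query></iq>
```

Leaving out the empty criteria matters: a JUD search on an empty <first/> tag, say, would constrain the search rather than ignore the field.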
Also in a similar way to handling state 1, we make the call by using the ask() method; the response will be received into the $result variable as an object representation of the IQ-result (or IQ-error) element:
my $result = $c->ask($iq);
We deal simply with any error situation:
if ($result->attr('type') eq IQ_ERROR) {
  $r->print("Sorry, cannot query the JUD");
  $r->print("</body></html>");
  $c->disconnect;
  return;
}
otherwise proceeding to extract the <query/> tag from the search result. This tag will contain the <item/>s representing the JUD entries found to match the search criteria submitted (see the response in Example 10-2):
my $info = $result->getTag('', NS_SEARCH);
We want to display a simple table of results, with each table row
representing an <item/>:
my $items = 0;
$r->print("<p><strong>".JUD."</strong></p>\n");
$r->print("<table border=\"1\">\n");
foreach my $item ($info->getChildren) {
During the first iteration of the foreach loop, that is, on our first child tag of <query/>, we take the opportunity to write out a heading row, using the HTML <th/> (table heading) tags:
unless ($items) {
  $r->print("<tr>");
  foreach my $tag ($item->getChildren) {
    $r->print("<th>".ucfirst($tag->name)."</th>");
  }
  $r->print("</tr>\n");
}
The main part of our loop translates the information found in each <item/> tag:
<item jid='joseph@gnu.mine.nu'>
  <nick>joseph</nick>
  <first>Joseph</first>
  <last>Adams</last>
</item>
into HTML table rows, with one table cell (<td/>) for
each item field:
$r->print("<tr>");
my $flag = 0;
foreach my $tag ($item->getChildren) {
  unless (length($tag->data) == 0 or $flag++) {
    $r->print("<td><a href=\"/jbook?".$item->attr('jid')."\">");
    $r->print($tag->data."</a></td>");
  }
  else {
    $r->print("<td>".$tag->data."</td>");
  }
}
$r->print("</tr>\n");
In this section of the code, we also make the link between the first and second level of the address book. The JID, specified in each <item/> tag's jid attribute, is the key to the JUD entry that the item represents and also the key to the vCard of the user that has that JID. For each of the item lines in the table, we need to build a selectable link to lead the user to the second level, which allows him to view the vCard. This is what we want each link to look like:

/jbook?joseph@gnu.mine.nu

That is, a single argument, representing the JID, specified after the question mark. The condition:
unless (length($tag->data) == 0 or $flag++)
serves to ensure that the link is made on a single, nonempty field in
the JUD item. When registering with a standard JUD, none of the fields
are compulsory, so it's quite possible for there to be missing values
returned in the search results. So we want to make sure that the
<a href="...">...</a> link that we build actually
surrounds some value; otherwise, it wouldn't be clickable. The
$flag variable just ensures we build only one link and not a
link for every nonempty field.
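The first-nonempty-field rule is compact in the Perl above; here is the same logic spelled out in Python for clarity, a standalone illustration using the item from the earlier example:

```python
def item_cells(values, jid):
    """Render table cells, hyperlinking only the first nonempty value."""
    cells, linked = [], False
    for value in values:
        if value and not linked:
            cells.append('<td><a href="/jbook?%s">%s</a></td>' % (jid, value))
            linked = True              # only one link per row
        else:
            cells.append('<td>%s</td>' % value)
    return ''.join(cells)

row = item_cells(['joseph', 'Joseph', 'Adams'], 'joseph@gnu.mine.nu')
print(row)
# <td><a href="/jbook?joseph@gnu.mine.nu">joseph</a></td><td>Joseph</td><td>Adams</td>
```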
Finally, the number of items found is displayed with:
$items++;
}
$r->print("</table>\n");
$r->print("<p>$items results found</p>");
}
State 3: Retrieve a vCard
If we receive a single argument in the request, we'll take it to be a JID, passed from the link in the table built in the previous state, and immediately build an IQ-get to retrieve the vCard. The JID is to be found in the first element of the @a array, $a[0].
else {
my $iq = $nf->newNode('iq');
$iq->attr('to', $a[0]);
$iq->attr('type', IQ_GET);
$iq->insertTag('vcard', NS_VCARD);
Notice how the name of the query tag in this query is not query, but vcard. See Section 6.5.1 in Chapter 6 for details.
The retrieval query is in the form of an IQ-get, rather than an IQ-set. As we needed to send information in our JUD query (the search criteria), an IQ-set was appropriate. Here, an IQ-get is appropriate as we're not including any information to qualify our request; all we need to send is this:
<iq type='get' to='dj@gnu.mine.nu'>
  <vcard xmlns='vcard-temp'/>
</iq>
After sending the <iq/> element, waiting for the
response, and checking for any errors, we extract the vCard detail and
display it:
my $result = $c->ask($iq);
if ($result->attr('type') eq IQ_ERROR) {
  $r->print("Sorry, cannot retrieve the vCard for $a[0]");
  $r->print("</body></html>");
  $c->disconnect;
  return;
}
The structure of the vCard namespace is rather complicated and long-winded, and it's common for many of the fields to remain unfilled. So to keep the script simple, we're going to display all the top-level fields that aren't empty:
my $vcard = $result->getTag('', NS_VCARD);
$r->print("<strong>$a[0]</strong>\n");

# Display each of the top-level tags if they contain data
foreach my $tag ($vcard->getChildren) {
  $r->print("<br/>".$tag->name." : ".$tag->data."\n") if $tag->data;
}
}
General handler close
Once we've dealt with the possible states, we send the closing HTML statements common to all three of them, and disconnect from the Jabber server:
$r->print("</body></html>");
$c->disconnect;
return;
At this stage, there's nothing more for the module to do. Having discerned the state from the arguments in the URL (and thereby the appropriate action) and having carried out that action, the module ends, handing control back to its mod_perl host. Remembering that Demo::JBook is an Apache handler in the form of a Perl module, we need to ensure that the module itself returns a true value (as with any Perl module):
}
1;
Notes for Improvement
The Demo::JBook script is merely an example. On top of tightening up the error and exception handling, there are a few other things that you might want to consider doing to improve upon it:
- Jabber connectivity: As mentioned already, you'll probably want to improve the efficiency of the connection to the Jabber server by holding a socket open and sharing this connection across multiple calls to the handler.
- Choice of JUD: The JUD to be queried is fixed; you may prefer to allow the user to select which JUD will be searched. What's more, selection of more than one JUD would allow a powerful search across public Jabber user directories.
- Key handling: We've seen how a JUD is queried in Example 10-2. Some JUDs use the simple key-based security and pass an additional <key/> tag containing random data, a sort of session key, as described in Section 6.2.11 and Section 6.2.13. Any <key/> tag received from the JUD in response to an IQ-get must be sent back verbatim to the JUD in the subsequent IQ-set. Otherwise the search will fail, and you'll get a response similar to that shown in Example 10-7.
- Visual impact: Last but not least, the visual impact of the end result as shown here (in Figure 10-3, Figure 10-4, and Figure 10-6) lacks a certain something. You might want to do something about that: give it a grander design, make it more pleasing, or at least interesting, to the eye. The HTML has been kept deliberately basic in this recipe, so as not to cloud the real theme of "Jabber without Jabber."
Failure to return a key could cause a search to fail
RECV: <iq from='users.jabber.com' id='jud33' to='jbook@gnu.mine.nu/jbook' type='error'>
        <query xmlns='jabber:iq:search'>
          <error code='405'>Keys do not match.</error>
        </query>
      </iq>
XML-RPC over Jabber
XML-RPC is an easy way to get software that's running on different operating systems to be able to make and respond to procedure calls (the "RPC" part of the name stands for "Remote Procedure Call") over the Internet.
The basis of XML-RPC is straightforward and is described at XML-RPC's home page (). The procedure calls, each consisting of the name of the procedure (or method) to call and a set of arguments to go with that call and the corresponding responses, each consisting of a set of results, are encoded in an XML-based format. The requests and responses, so encoded, are exchanged over HTTP, carried as the payloads of POST requests.
Example 10-8 shows a typical request in XML-RPC encoding. It's calling a procedure called examples.getStateName, and passing a single integer parameter with the value 41.
An XML-RPC request
<?xml version="1.0"?>
<methodCall>
  <methodName>examples.getStateName</methodName>
  <params>
    <param>
      <value><i4>41</i4></value>
    </param>
  </params>
</methodCall>
Example 10-9 shows a typical response to that request. The response
consists of a single string value "South Dakota."
An XML-RPC response
<?xml version="1.0"?>
<methodResponse>
  <params>
    <param>
      <value><string>South Dakota</string></value>
    </param>
  </params>
</methodResponse>
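Python's standard xmlrpc.client module can marshal and unmarshal exactly these parcels. A brief sketch follows; note that the modern module marshals the integer as <int> rather than <i4>, both of which are legal XML-RPC:

```python
import xmlrpc.client

# Marshal a request like Example 10-8
request = xmlrpc.client.dumps((41,), methodname='examples.getStateName')
assert '<methodName>examples.getStateName</methodName>' in request

# Unmarshal a response like Example 10-9
response = ("<?xml version='1.0'?><methodResponse><params><param>"
            "<value><string>South Dakota</string></value>"
            "</param></params></methodResponse>")
params, method = xmlrpc.client.loads(response)
print(params[0])   # South Dakota
```

For a methodResponse, loads() returns the parameter tuple along with None for the method name, since responses carry no <methodName/> tag.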
The choice of the word "payload" to describe the encoded requests and
responses is significant: each request, headed with an XML declaration
(<?xml version="1.0"?>) and encapsulated as a single tag
(<methodCall/>), and each response, also headed with an
XML declaration and encapsulated as a single tag
(<methodResponse/>), are succinct, fully formed, and
complete parcels that have meaning independent of their HTTP carrier.
While the XML-RPC specification stipulates that these parcels be carried over HTTP, we could take advantage of the power and simplicity of the encoding and carry procedure calls and responses over Jabber.
Jabber-RPC
Jabber-RPC is the name given to the marriage of encoding from the XML-RPC specification and Jabber as the transport mechanism. As well as building upon a stable specification, Jabber-RPC brings advantages of its own to the world of procedure calls over the Internet. Many of the potential XML-RPC responders are HTTP servers behind corporate firewalls or one-way Network Address Translation (NAT) mechanisms and are therefore unreachable from the Internet. Substituting Jabber as a transport gives calls a better chance of reaching their destination. If a Jabber-RPC responder—a program that connects to a Jabber server, is addressable by a JID, and can receive (and respond to) XML-RPC-encoded calls—is connected to a Jabber server visible to the Internet, then request calls have to make it only to that Jabber server, and internal packet routing within the server will allow the parcels to reach their destinations behind the firewall, whether those destinations are client-based or component-based responders.
It should be clear by now that the idea of Jabber-RPC is to transport the XML-RPC-encoded parcels in an extension, an attachment, to an IQ element. Just as the details for a search attempt (for example of a Jabber User Directory) are carried in an IQ-set extension qualified by the jabber:iq:search namespace (as shown in Example 10-2), so the Jabber-RPC method calls are carried in an IQ-set extension qualified by a namespace of its own. This namespace is a new one and is jabber:iq:rpc.[1]
Similarly, as the results of a JUD search are returned in an IQ-result, so Jabber-RPC method responses are returned in an IQ-result. Example 10-10 shows a simple Jabber-RPC-based method call and response, using the same XML-RPC-encoded parcels as shown in Example 10-8 and Example 10-9.
A Jabber-RPC request/response conversation
SEND: <iq type='set' to='responder@company-a.com/jrpc-server' id='1'>
        <query xmlns='jabber:iq:rpc'>
          <methodCall>
            <methodName>examples.getStateName</methodName>
            <params>
              <param>
                <value><i4>41</i4></value>
              </param>
            </params>
          </methodCall>
        </query>
      </iq>
RECV: <iq type='result' to='requester@company-b.com/jrpc-client' from='responder@company-a.com/jrpc-server' id='1'>
        <query xmlns='jabber:iq:rpc'>
          <methodResponse>
            <params>
              <param>
                <value><string>South Dakota</string></value>
              </param>
            </params>
          </methodResponse>
        </query>
      </iq>

It's clear that the parcels of XML-RPC encoding lend themselves very well to being transported in a meaningful way over Jabber and that Jabber's ultimate flexibility makes this possible.
The only major difference between the payloads as carried in an HTTP POST and the payloads as carried in an <iq/> element is that there's no XML declaration. There can't be one, of course, when you consider the context of the IQ elements. They're document fragments in the XML stream between the requester (or responder) and the Jabber server. As explained in Section 5.3, these documents are fanfared with their own XML declaration, and any further declaration within the document is illegal from an XML point of view.
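So a Jabber-RPC implementation has to strip the declaration before embedding the parcel in the <iq/> extension. A minimal Python sketch of that step:

```python
def strip_xml_declaration(payload):
    """Remove a leading <?xml ...?> declaration so the XML-RPC parcel
    can legally be embedded in a Jabber <iq/> extension."""
    stripped = payload.lstrip()
    if stripped.startswith('<?xml'):
        pos = stripped.find('?>')
        if pos >= 0:
            return stripped[pos + 2:].lstrip()
    return payload

body = strip_xml_declaration("<?xml version='1.0'?>\n<methodResponse/>")
print(body)   # <methodResponse/>
```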
{{Sidebar|Jabber-RPC Requesters and Responders|A quick word about Jabber-RPC requesters and responders. Connection to a Jabber server is possible in two main ways, as we've seen in the recipes so far: as a client via the JSM or as a component connected directly to the jabberd backbone. At the simplest level, all a Jabber-RPC requester or responder is, is something that makes a Jabber connection, sends and receives IQ elements, and uses a standard XML-RPC library to encode, decode, and service the requests and responses.

It doesn't matter whether the requesters and responders are built as clients or components. One could argue that the Jabber client model fits more naturally into the role of a requester, and the Jabber component model fits more naturally into the role of a responder, but this isn't a requirement. Indeed, if you're not the Jabber server administrator and don't have access to the server configuration (to be able to insert a <service/> stanza for a new component; see Section 9.3.1.1), building a Jabber-RPC responder as a Jabber client may be the path of least resistance.}}
Building a Requester and a Responder
Let's have a look at the power and flexibility of Jabber-RPC and how to build requesters and responders. We're going to build a client-based requester, in Python, and a client-based responder, in Java. There are two great XML-RPC library implementations for Python and Java that we'll use to do the legwork for us. Figure 10-8 shows what we want to build.
To keep things fairly simple, we'll just implement and call something similar to the getStateName() function already shown in the examples: getCountyName().
[[image:jab_1008.png|The Jabber-RPC requester and responder|center|350 px]]
The responder: JabberRPCResponder
We'll start up the Java client-based Jabber-RPC responder (imaginatively called JabberRPCResponder), specifying a handler class that is to service the XML-RPC-encoded method calls, get it to connect to a Jabber server, and listen for incoming Jabber-RPC requests. It will use the Helma XML-RPC library to service incoming requests using the handler class.
Because the responder is written in Java, we'll use the JabberBeans library. We'll need to extend the library's capabilities to handle extensions in the new jabber:iq:rpc namespace.
The requester: JabberRPCRequester
The Python client-based Jabber-RPC requester will also start up and connect to a Jabber server. We'll use a different server than the one the responder is connected to; after all, the whole point of XML-RPC and Jabber-RPC is to make the RPC world a smaller and bigger place at the same time, through the power of the Internet. We're going to use the Pythonware XML-RPC library to encode a getCountyName() request and decode the response.
JabberRPCResponder
Let's look at the Jabber-RPC responder first. There is a single script, called JabberRPCResponder, but there are also a number of supporting classes that we need. Let's take things one at a time.
The RPCHandler class
The Helma XML-RPC library implementation allows you to build XML-RPC responders independent of any particular transport by using an instance of the XmlRpcServer object, which represents an abstraction of an XML-RPC server. You can then construct a class—the handler class—containing your methods to be callable via XML-RPC. The calling of these methods is coordinated by the XmlRpcServer object, which you tell about your handler class using the addHandler() method.
This is what the handler class, called RPCHandler, looks like:
// RPCHandler: A class of XML-RPC-callable methods
public class RPCHandler {

  // Note: This is a "traditional" list of counties!
  private String county[] = { /* ... county names ... */ };

  public RPCHandler() {}

  public String getCountyName(int c) {
    return county[c - 1];
  }
}

The class has three elements:
- The list of counties: The names of counties are stored in a simple array, county[].
- The constructor: We don't need to do anything in the constructor, RPCHandler(), as there's no requirement to manipulate objects directly, so the constructor remains empty.
- The single available method: All public methods in the class are available through the XmlRpcServer object. Here we have a single method for the purposes of this recipe: getCountyName() returns the name of the county for the index given. When we examine the JabberRPCResponder script (in Section 10.2.3.3) we'll see how the XmlRpcServer object is instantiated and how this RPCHandler class is used.
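Python's standard library has a similarly transport-independent dispatcher, which makes a useful mental model for what the Helma XmlRpcServer is doing. The sketch below registers a handler instance and drives it with a marshalled request directly, no HTTP involved; the county values are illustrative only, and _marshaled_dispatch is an internal (though long-stable) CPython entry point:

```python
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCDispatcher

class Handler:
    # Illustrative values only; the book's list is longer
    county = ['Avon', 'Bedfordshire']

    def getCountyName(self, c):
        return self.county[c - 1]      # 1-based index, as in RPCHandler

dispatcher = SimpleXMLRPCDispatcher(allow_none=False, encoding='utf-8')
dispatcher.register_instance(Handler())   # methods exposed by name

# Drive the dispatcher with a marshalled call, no transport in sight
request = xmlrpc.client.dumps((2,), methodname='getCountyName')
response = dispatcher._marshaled_dispatch(request)
params, _ = xmlrpc.client.loads(response)
print(params[0])   # Bedfordshire
```

Unlike Helma's addHandler("examples", ...), this registration exposes the method without the "examples." prefix; the decoupling of dispatch from transport is the point of the comparison.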
The IQRPC classes
JabberBeans deals with Jabber element extensions—<query/> and <x/> tags—using individual helper classes. We've seen this in the HostAlive recipe in Section 8.2, where the IQAuthBuilder class is used to construct an authorization extension:

<query xmlns='jabber:iq:auth'>
  <username>alive</username>
  ...
</query>
The helper classes are oriented around namespaces. Because we have a new
namespace to deal with (jabber:iq:rpc), JabberBeans
needs to have helper classes to handle extensions in that namespace.
We need a minimum of two helper classes. We need a class that represents an extension—a <query/> tag—in the jabber:iq:rpc namespace. If we are going to construct IQ elements containing such Jabber-RPC extensions, we also need a class to build those extensions. The class that represents the jabber:iq:rpc extension is called IQRPC, and the class that is the builder for the extensions is called IQRPCBuilder.
For an example we're already familiar with, look at the steps leading up to the authorization phase in the HostAlive script (and indeed our JabberRPCResponder script, which is shown in the next section); the authorization IQ element is constructed as follows:

InfoQueryBuilder iqb = new InfoQueryBuilder();
IQAuthBuilder iqAuthb = new IQAuthBuilder();

iqb.setType("set");
iqAuthb.setUsername(user);
iqAuthb.setPassword(pass);
iqAuthb.setResource(resource);

iqb.addExtension(iqAuthb.build());

If we bear in mind that this is what we want to end up with:

<iq type='set'>
  <query xmlns='jabber:iq:auth'>
    <username>...</username>
    <password>...</password>
    <resource>...</resource>
  </query>
</iq>
then we can understand what's going on:
- The <query/> tag in the jabber:iq:auth namespace is built with an authorization builder, iqAuthb, which is an instance of IQAuthBuilder.
- Values for the tags within <query/>, such as <username/> and <password/>, are set using methods belonging to that builder class.
- The actual extension is generated with a call to the build() method. Using addExtension(), the newly generated extension is added to the container representing the <iq/> element being constructed by the IQ builder iqb, an instance of InfoQueryBuilder.
Figure 8-3 shows the relationship between these builders and the things they create. (For a review of what's required to authenticate with a Jabber server, see Section 7.3.)
Although necessary, the IQRPC classes aren't the focus of this recipe and can be found in Appendix B. They are essentially modified copies of the classes for the jabber:iq:time namespace: IQTime and IQTimeBuilder. They just need to be compiled and made available in the $CLASSPATH. Putting them in the same directory as the JabberRPCResponder script will work fine.
The JabberRPCResponder script
Example 10-11 shows the JabberRPCResponder script in its entirety. In the next section we'll take it piece by piece.
The JabberRPCResponder Script, written in Java
import org.jabber.jabberbeans.*;
import org.jabber.jabberbeans.Extension.*;
import org.jabber.jabberbeans.util.JID;
import java.net.InetAddress;
import java.util.Enumeration;
import java.io.*;
import helma.xmlrpc.*;

public class JabberRPCResponder implements PacketListener {

    private String server = "gnu.mine.nu";
    private String user = "server";
    private String pass = "pass";
    private String resource = "jrpc-server";

    private XmlRpcServer responder;
    private ConnectionBean cb;

    // Constructor
    public JabberRPCResponder() {
        responder = new XmlRpcServer();
        responder.addHandler("examples", new RPCHandler());
    }

    // Main program
    public static void main(String args[]) {
        JabberRPCResponder server = new JabberRPCResponder();
        try {
            server.start();
        } catch (Exception e) {
            System.out.println("Cannot start server: " + e.toString());
        }
    }

    // Packet listener interface:
    public void receivedPacket(PacketEvent pe) {
        Packet packet = pe.getPacket();
        System.out.println("RECV:" + packet.toString());

        if (packet instanceof InfoQuery) {
            Enumeration e = ((InfoQuery)packet).Extensions();
            if (e.hasMoreElements()) {
                Extension ext = (Extension)e.nextElement();

                String request = ext.toString();
                String id = ((InfoQuery)packet).getIdentifier();
                JID from = ((InfoQuery)packet).getFromAddress();

                ByteArrayInputStream bis =
                    new ByteArrayInputStream(request.getBytes());
                String result = new String(responder.execute(bis));

                String response = result;
                int pos = result.lastIndexOf("?>");
                if (pos >= 0) {
                    response = result.substring(pos + 2);
                }

                IQRPCBuilder iqrpcb = new IQRPCBuilder();
                iqrpcb.setPayload(response);
            }
        }
    }

    public void sentPacket(PacketEvent pe) {
        Packet packet = pe.getPacket();
        System.out.println("SEND:" + packet.toString());
    }

    public void sendFailed(PacketEvent pe) {
        Packet packet = pe.getPacket();
        System.out.println("failed to send:" + packet.toString());
    }
}
Looking at JabberRPCResponder Step by Step
Now let's examine the JabberRPCResponder script step by step so we can see how it works:
import org.jabber.jabberbeans.*;
import org.jabber.jabberbeans.Extension.*;
import org.jabber.jabberbeans.util.JID;
import java.net.InetAddress;
import java.util.Enumeration;
import java.io.*;
import helma.xmlrpc.*;
We need to bring in the jabberbeans classes as shown, as well
as some core Java features that we'll see used in the script a bit
later: an InetAddress to represent the Jabber server's
hostname, an Enumeration interface to access the extensions in
the incoming IQ elements, and java.io features for feeding the
XML-RPC-encoded requests to the XmlRpcServer object. We also
bring in the classes from the Helma XML-RPC library.
public class JabberRPCResponder implements PacketListener {

    private String server = "gnu.mine.nu";
    private String user = "server";
    private String pass = "pass";
    private String resource = "jrpc-server";

    private XmlRpcServer responder;
    private ConnectionBean cb;
The definition of our JabberRPCResponder class looks similar to that of the HostAlive class in Section 8.2. However, rather than merely connecting to a Jabber server and sending packets off down the stream, we want to listen for incoming packets—in this case, IQ elements carrying jabber:iq:rpc-qualified payloads—and handle them. Accordingly, we specify that our main class implements PacketListener, a JabberBeans interface that Jabber clients use to receive notification of incoming packets. The interface describes three methods: receivedPacket(), sentPacket(), and sendFailed(). We'll define our receivedPacket() method, described later in this section, to catch and process the incoming Jabber-RPC requests.
We're going to use an XmlRpcServer object, in responder, to provide the translation services between our RPCHandler class that contains the methods we want to "expose," and the XML-RPC-encoded requests and responses.
Naturally, we also need a JabberBeans ConnectionBean, which we'll hold in cb.
public JabberRPCResponder() {
    responder = new XmlRpcServer();
    responder.addHandler("examples", new RPCHandler());
}
The JabberRPCResponder class constructor, JabberRPCResponder(), will be called in the main() method later in the script. It is used to create an instance of the Helma XmlRpcServer object, and associate our RPCHandler class with it, as a handler for method calls. The addHandler() method takes two arguments: the first is the method prefix name, such as examples in the methodName specification:
<methodName>examples.getCountyName</methodName>
and the second is the object—an instantiation of our handler class—which
contains the methods that the XmlRpcServer will use to
"service" requests. In other words, the XmlRpcServer object
will determine that a call to examples.getCountyName should be
handled, as getCountyName, by the RPCHandler object.
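The prefix-to-handler resolution can be sketched in a few lines of Python. The handler class and county list here are stand-ins for illustration, not the book's actual data:

```python
# Sketch: resolving 'prefix.method' to a method on a registered handler
# object, as addHandler()/XmlRpcServer do.
COUNTIES = ['Avon', 'Bedfordshire', 'Berkshire']   # hypothetical stand-in data

class RPCHandler:
    def getCountyName(self, n):
        # County indexes are 1-based in the recipe's examples
        return COUNTIES[n - 1]

# Equivalent of responder.addHandler("examples", new RPCHandler())
handlers = {'examples': RPCHandler()}

def dispatch(method_name, *args):
    # 'examples.getCountyName' -> handler 'examples', method 'getCountyName'
    prefix, _, name = method_name.partition('.')
    return getattr(handlers[prefix], name)(*args)
```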
The main() method is quite short:
public static void main(String args[]) {
    JabberRPCResponder server = new JabberRPCResponder();
    try {
        server.start();
    } catch (Exception e) {
        System.out.println("Cannot start server: " + e.toString());
    }
}
We instantiate our JabberRPCResponder object into server and call the start() method. Various functions in start() can raise exceptions; we take care of them all here with a single catch clause and abort if necessary, rather than include all of start()'s functions here and have multiple try and catch statements.
Most of the content of this start() function should be fairly familiar. We instantiate a ConnectionBean and will use that, in cb, throughout the script to send elements to the Jabber server. Our PacketListener is added as a listener to the ConnectionBean, causing a method of that interface (receivedPacket()) to be invoked when an incoming element appears on the stream managed by that ConnectionBean.
In the same way as described in Section 8.2.2 in Chapter 8, we build our IQ element to authenticate, and send it to the server with the ConnectionBean's send() method. We also send an initial availability, in the form of a simple <presence/> element to the server.
Having connected to and authenticated with the server and sent initial availability, we can sit back and relax. All the subsequent activity will be initiated by the arrival of IQ elements on the stream. As described already, these will be made known (and available) in the form of calls to the receivedPacket() method of the PacketListener interface:
// Packet listener interface:
public void receivedPacket(PacketEvent pe) {
    Packet packet = pe.getPacket();
    System.out.println("RECV:" + packet.toString());
The method receives a PacketEvent object, which represents both the event of a packet (an element) arriving and the packet itself, which we retrieve into packet with the getPacket() method. For debugging purposes, we print out what we get.
Now to determine what has actually arrived:
if (packet instanceof InfoQuery) {
    Enumeration e = ((InfoQuery)packet).Extensions();
    if (e.hasMoreElements()) {
        Extension ext = (Extension)e.nextElement();
We check to see if the element is an IQ element, and, if so, we retrieve
the extensions—the <query/> tags—that the element
contains. Calling the Extensions() method on the
packet object returns an Enumeration of those tags.
We're expecting only one tag, and we pull that into ext using the nextElement() method on our Enumeration.
{{Note|For the sake of simplicity, we're not checking here whether the <query/> tag received is qualified by the jabber:iq:rpc namespace. You might wish to do that in your version.}}
String request = ext.toString();
String id = ((InfoQuery)packet).getIdentifier();
JID from = ((InfoQuery)packet).getFromAddress();
We can pull the extension into string form, with the toString()
method, ready for passing to our XmlRpcServer object. At this
stage, ext contains the complete XML-RPC-encoded parcel,
starting with the <methodCall> opening tag:
<methodCall>
  <methodName>examples.getCountyName</methodName>
  <params>
    <param>
      <value><i4>14</i4></value>
    </param>
  </params>
</methodCall>
So that we can send a response back to the requester, we need two other
things from the incoming element besides the actual request payload. In
general, when an <iq/> element is sent, representing a
request, either as an IQ-get or an IQ-set, the sender is expecting the
response, either an IQ-result or an IQ-error, with the same value in
the id attribute. This is so the responses, once received, can
be matched up with the original requests. So we need to store the
id value, available to us through the getIdentifier()
method of the packet object. We also need the JID of the
sender, retrieved with the getFromAddress() method, so we can
specify it in the to attribute of the IQ-result we'll be
sending back.
Now it's time to service the request:
ByteArrayInputStream bis = new ByteArrayInputStream(request.getBytes());
String result = new String(responder.execute(bis));
With the execute() method of our XmlRpcServer object
in responder, we convert the <methodCall/> into
a <methodResponse/>, in effect. The decoding of the
XML-RPC-encoded request, the determination of which method in which
class to call (in our case it will be the getCountyName()
method in our RPCHandler class), the calling of that method,
and the encoding of the result into an XML-RPC-encoded response are all
done magically and transparently for us by XmlRpcServer (thank
goodness, or this recipe would be unbearably long!).
There's a bit of jiggling about required before we can make the execute() call, though. The method is expecting an InputStream object, and we've got a String. Not to worry, we just convert it with a ByteArrayInputStream, which can take a byte array (from request.getBytes()) and produce an InputStream object bis. Similarly, execute() produces a byte array, so that is converted to a String by wrapping the call with String().
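For comparison, the same encode/dispatch/decode round trip can be sketched with Python's standard xmlrpc modules; _marshaled_dispatch is an internal helper of SimpleXMLRPCDispatcher that does the same decode, call, and encode work as XmlRpcServer's execute(). The county list and method name are stand-ins:

```python
# Sketch: <methodCall/> in, <methodResponse/> out, with the decoding,
# dispatch, and re-encoding handled by the library.
from xmlrpc.server import SimpleXMLRPCDispatcher
from xmlrpc.client import dumps, loads

COUNTIES = ['Avon', 'Bedfordshire', 'Berkshire']   # hypothetical stand-in data

dispatcher = SimpleXMLRPCDispatcher(allow_none=False, encoding='utf-8')
dispatcher.register_function(lambda n: COUNTIES[n - 1],
                             'examples.getCountyName')

request = dumps((2,), methodname='examples.getCountyName')   # a <methodCall/>
response = dispatcher._marshaled_dispatch(request.encode())  # a <methodResponse/>
params, _ = loads(response)
```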
The incoming XML-RPC-encoded request, being carried in the IQ element, will not include an XML declaration. Luckily, the XmlRpcServer does not mind that one is not present before decoding and dispatching the request. It will, however, include one on the encoded response it emits. So we must check for that and strip it off if there is one:
String response = result;
int pos = result.lastIndexOf("?>");
if (pos >= 0) {
    response = result.substring(pos + 2);
}
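The same declaration-stripping step can be sketched in Python: drop everything up to and including the last "?>", if one is present:

```python
# Strip a leading XML declaration (e.g. <?xml version="1.0"?>) from an
# XML-RPC-encoded response, leaving the payload untouched otherwise.
def strip_xml_declaration(result):
    pos = result.rfind('?>')
    return result[pos + 2:] if pos >= 0 else result
```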
Now we have everything we need to return the XML-RPC-encoded response to the requester. It's time to call on the services of our IQRPC classes:
IQRPCBuilder iqrpcb = new IQRPCBuilder();
iqrpcb.setPayload(response);
We create an instance of the IQRPCBuilder and call the single method setPayload() to load the XML-RPC-encoded response into the <query/> tag, which is qualified by the jabber:iq:rpc namespace. This is effectively the "extension" in JabberBeans parlance.
After creating an IQ element container using an InfoQueryBuilder and setting the appropriate attributes, we generate the <query/> tag (iqrpcb.build()) and add it to the IQ element container with addExtension().
The <iq/> is then generated—it now contains our payload—and sent back to the server, where it will be routed to the original requester.
That's pretty much all there is to it. We have a couple of other methods belonging to the PacketListener interface:
public void sentPacket(PacketEvent pe) {
    Packet packet = pe.getPacket();
    System.out.println("SEND:" + packet.toString());
}
public void sendFailed(PacketEvent pe) {
    Packet packet = pe.getPacket();
    System.out.println("Failed to send:" + packet.toString());
}
}

We fill these methods with debugging-style output statements for our convenience.
JabberRPCRequester
Now that we've got our Jabber-RPC responder all set up, it's time to turn our attention to our requester, JabberRPCRequester, shown in Example 10-12. This is a Python script that uses the Jabberpy library and Pythonware's xmlrpclib library. It's a pretty simple affair.
The JabberRPCRequester script, written in Python
import jabber
import xmlrpclib
import sys
Server = 'qmacro.dyndns.org'
Username = 'client'
Password = 'pass'
Resource = 'jrpc-client'
Endpoint = 'server@gnu.mine.nu/jrpc-server'
Method = 'examples.getCountyName'
county = int(sys.argv[1])
con = jabber.Client(host=Server)
try:
    con.connect()
except IOError, e:
    print "Couldn't connect: %s" % e
    sys.exit(0)
con.auth(Username,Password,Resource)
request = xmlrpclib.dumps(((county),), methodname=Method)

iq = jabber.Iq(to=Endpoint, type='set')
iq.setQuery('jabber:iq:rpc')
iq.setQueryPayload(request)

result = con.SendAndWaitForResponse(iq)

if result.getType() == 'result':
    response = str(result.getQueryPayload())
    parms, func = xmlrpclib.loads(response)
    print parms[0]
else:
    print "Could not complete call"
con.disconnect()
Looking at JabberRPCRequester Step by Step
After importing the libraries that we will need:
import jabber
import xmlrpclib
import sys
we specify a number of parameters:
Server = 'qmacro.dyndns.org'
Username = 'client'
Password = 'pass'
Resource = 'jrpc-client'
Endpoint = 'server@gnu.mine.nu/jrpc-server'
Method = 'examples.getCountyName'

The script connects to the Jabber server defined in Server, with the username defined in Username. The resource that will be passed in the authentication request is jrpc-client. There is as much significance in this name as there is in the name of the resource used by JabberRPCResponder (jrpc-server): none. It's just a useful naming convention to adopt when writing requesters and responders.
A single parameter, which will be interpreted as the index of the county to retrieve via the call to examples.getCountyName, is expected.
county = int(sys.argv[1])
The method expects an integer, so we convert it directly. This has a
favorable secondary effect when we come to XML-RPC encode the request;
if we hadn't called the int() function and left
county as a string, this is what the XML-RPC-encoded parcel
would have looked like:
<methodCall>
  <methodName>examples.getCountyName</methodName>
  <params>
    <param>
      <value><string>1</string></value>
    </param>
  </params>
</methodCall>
However, this is what we really want:
<methodCall>
  <methodName>examples.getCountyName</methodName>
  <params>
    <param>
      <value><int>1</int></value>
    </param>
  </params>
</methodCall>
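Python's own XML-RPC encoder shows the same distinction, which makes the difference easy to demonstrate (shown here with the modern xmlrpc.client module):

```python
# A str argument is encoded as <string>, an int as <int>.
from xmlrpc.client import dumps

as_string = dumps(('1',), methodname='examples.getCountyName')
as_int = dumps((1,), methodname='examples.getCountyName')
```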
In the same way as in the previous Python recipes, we connect to the
Jabber server and authenticate:
con = jabber.Client(host=Server)
try:
    con.connect()
except IOError, e:
    print "Couldn't connect: %s" % e
    sys.exit(0)

con.auth(Username,Password,Resource)

Now all we have to do is compose our XML-RPC-encoded request and send it on its way to the Jabber-RPC responder, which in our case, identified here by the JID in the Endpoint variable, is our JabberRPCResponder script.
It's time to call on the services of the XML-RPC library. We use the dumps() function to build an XML-RPC encoding, passing the single parameter (in a tuple, which is required by dumps()) representing the county index and the name of the method to call:
request = xmlrpclib.dumps(((county),), methodname=Method)

We
build the IQ-set containing a <query/> element in the
jabber:iq:rpc namespace, by creating an instance of an
Iq object, and calling methods to specify that namespace
(setQuery()) and insert the XML-RPC encoding as the payload
(setQueryPayload()):
iq = jabber.Iq(to=Endpoint, type='set')
iq.setQuery('jabber:iq:rpc')
iq.setQueryPayload(request)

At this stage, we need to send it off and wait for a response. You have probably
noticed that, unlike in previous Python recipes, this script hasn't
defined or registered a callback to handle incoming elements. This is
because we're approaching the task in a slightly different way in this
script. The method used to send the element to the Jabber
server—SendAndWaitForResponse():
result = con.SendAndWaitForResponse(iq)
is the rather verbose cousin of the Jabber::Connection
library's ask() method, as described in Section 10.1.5.3. It
does "exactly what it says on the tin," namely, send the element off to
the server and block until an element is received that is deemed, by a
matchup of the id attribute values on both elements, to be the
response. If no id attribute exists on the outgoing element, it
is stamped with one (with a value unique within the current usage of the
jabber class). Both are also relations of
Net::Jabber's SendAndReceiveWithID().
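The contract of SendAndWaitForResponse() can be sketched in Python. Elements are plain dicts here, and send/recv are caller-supplied stand-ins for the real stream operations:

```python
# Sketch: stamp the outgoing element with an id if it lacks one, then block
# until an incoming element with a matching id arrives.
import itertools

_next_id = itertools.count(1)

def send_and_wait(send, recv, element):
    if not element.get('id'):
        element['id'] = str(next(_next_id))   # stamp with a unique id
    send(element)
    while True:
        reply = recv()
        if reply.get('id') == element['id']:  # match response to request
            return reply
```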
So result receives an element object. It's an IQ element, as that is what a response to an IQ element will be. If the call transport (as opposed to the call itself) is successful, the response will be an IQ-result. If not, for example, if the Endpoint that we specified doesn't exist, then the response will be an IQ-error.
if result.getType() == 'result':
    response = str(result.getQueryPayload())
    parms, func = xmlrpclib.loads(response)
    print parms[0]
else:
    print "Could not complete call"
Conversely, while the call transport might have succeeded, the call
itself might have failed, for example, if we had specified a nonexistent
method name in Method. However, for our purposes, we're going
to assume that the call was successful. So we grab the payload (i.e.,
the contents of the <query/> tag) and string it with
str().
The loads() function of xmlrpclib is used to extract the details from an XML-RPC-encoded parcel; it will produce two values, which we capture in parms and func. The first is the set of parameters, common to either a request or a response, and the second is the method name, if it exists in the encoding. We want to decode a <methodResponse/> here, so there's no method name present, but there should be a set of parameters.
We take the first (and only) element from the set of parameters in parms and print it out.
Jabber-RPC in Perl
Exciting as this recipe might be, it's not very visual. There are no screenshots of note to show, just a couple of STDOUTs from two scripts started at the command line. To fix this "problem," we're going to round this recipe off with a quick look at Jabber-RPC in Perl. Based on the Jabber::Connection library is a fairly new Perl library called Jabber::RPC, which sports two modules: Jabber::RPC::Client.pm and Jabber::RPC::Server.pm.
If you're slightly perplexed about what it takes to extend Jabber support in Java, take a look at Example 10-13 and Example 10-14. The first is an implementation, in Perl, of the JabberRPCResponder and all its class periphery. The second is an implementation of our JabberRPCRequester.
A Jabber-RPC Responder in Perl
use strict;
use Jabber::RPC::Server;
my @county = ( ... );    # list of county names
sub getCountyName {
  my $county = shift;
  return $county[$county - 1];
}
my $server = new Jabber::RPC::Server(
  server    => 'gnu.mine.nu',
  identauth => 'server:pass',
  resource  => 'jrpc-server',
  methods   => { 'examples.getCountyName' => \&getCountyName },
);
$server->start;
A Jabber-RPC Requester in Perl
use strict;
use Jabber::RPC::Client;
my $client = new Jabber::RPC::Client(
  server    => 'qmacro.dyndns.org',
  identauth => 'client:pass',
  resource  => 'jrpc-client',
  endpoint  => 'dj@localhost/jrpc-server',
);
print $client->call('examples.getCountyName', $ARGV[0])
|| $client->lastFault;
That's it. There's no more code than that. So don't lose heart. Just use Perl; you know it makes sense.
Browsing LDAP
Browsing, a relatively new Jabber feature (introduced in Section 2.9 and described through its associated namespace in Section 6.2.5), is an extremely powerful and flexible beast. Whereas many of the standard Jabber namespaces such as jabber:iq:oob, jabber:iq:auth, and jabber:iq:register busy themselves with providing context for relatively narrow areas of information (out-of-band information exchange, authentication, and registration, respectively), jabber:iq:browse is a namespace that qualifies, and therefore defines, a flexible container that can be used to carry all sorts of information—information that is wrapped in meaning and context and that can have virtually unlimited levels of hierarchy. What's more, it can deliver information in a standard way that can be understood by Jabber clients.
{{Note|As of this writing, the only Jabber client to implement browsing is WinJab.}}

Put another way, it means that we can extend the scope of Jabber deployment to areas outside what we traditionally see as the "Jabber world." The power and simplicity of the Jabber browsing namespace design means that we can adapt its use to whatever purpose we have in mind. As we push out the Jabber protocol and give entities within our networks the gift of speech and presence (in other words, give them a JID and graft on a Jabber connection stub), we will want to identify those entities as something more than a collection of hazy objects that sit behind Jabber addresses.
- Want to find out about dictionaries that are reachable by Jabber?
Browse to a directory and pull out those JIDs—the dictionaries' addresses—that are identified as keyword/dictionary JID-types.
- Want to allow your users to navigate an information hierarchy outside
the Jabber space but from within the comfort of their Jabber client? Build a "reflector" that navigates the hierarchy on behalf of your Jabber users and transforms it into a Jabber browsing context—by formulating the information in jabber:iq:browse terms.

Jabber browsing is one of those oddities that is so simple and so ultimately flexible that it's sometimes better to demonstrate it than to talk about it. Let's have a look at what browsing can do by building ldapr, a "reflector" for information in a database accessed by the Lightweight Directory Access Protocol (LDAP).
Building the Reflector
We're going to build the reflector as a component that connects directly to the Jabber backbone. This makes sense, as it's a service that we'll probably want to run continuously (and perhaps start up and shut down in conjunction with the Jabber server itself), rather than something more transient like a client-based 'bot, for example.
Before we go any further, have a look at Figure 10-9.
This figure shows WinJab's browser window, which, when first opened, requests the top-level browse information from the Jabber server that WinJab is connected to (cicero, in this case). This is the information in the <browse/> section of the JSM component custom configuration, as described in Section 4.4.3.8, which looks like:
<browse>
  <conference type='public' jid='conf.cicero' name='Public Chat'/>
</browse>

We can see just one icon, representing the Public Chat conference service.
WinJab sensibly uses this location—the Jabber server (specifically the JSM) itself— as a starting position for browsing navigation. From the description of jabber:iq:browse in Section 6.2.5, we know that each element within a <browse/> section is identified with a JID, in the jid attribute. Here, the Public Chat element has a JID of conf.cicero. If we click on the element's icon in WinJab's browser window, it would make a further browse request—an IQ-get with an empty extension qualified by the jabber:iq:browse namespace—to that JID. Thus is a browse hierarchy descended. Example 10-15 shows what that browse request might look like.
Browsing to the Public Chat service
<iq type='get' to='conf.cicero' id='82A'>
  <query xmlns='jabber:iq:browse'/>
</iq>
Identifying ldapr in the browsing hierarchy
This is exactly where our reflector service, in the form of a script called ldapr, will fit in. In the same way as we described our RSS news agent (newsagent) in the JSM configuration's <browse/> section, we'll now describe ldapr. We add it, like this:
<browse>
  <conference type='public' jid='conf.cicero' name='Public Chat'/>
  <service type='x-ldap' jid='ldap.cicero' name='LDAP Reflector'/>
</browse>
There are a couple of things to note in the definition:
- service/x-ldap
  The LDAP reflector is a service, so we use the service category for the tag name used to describe it. While there are already many subtypes defined within the jabber:iq:browse namespace (such as irc, aim, and jud), none of them matches what this service is going to offer, so in the same way as we invent Multipurpose Internet Mail Extensions (MIME) subtypes in the x- space, we specify x-ldap for the type attribute here.
- ldap.cicero
  Each component has a JID. In browsing, the JID is the key to navigation. When the icon representing this service is clicked, it's to this JID that the browse request is sent. To provide a smooth navigational path through the hierarchy, it's up to the component to return browse data items that are identified by JIDs appropriate for further navigation. We'll see what this means in the next section.
Navigating into the LDAP hierarchy
Having our reflector component defined in the JSM's <browse/> list makes for a smooth transition into the reflection. We're going to navigate the LDAP hierarchy in a similar way to what was described in Section 6.2.5.1. On receipt of an initial browse request, ldapr must return the top level of the LDAP hierarchy it has been set up to reflect:
<iq type='get' to='ldap.cicero' id='7'>
  <query xmlns='jabber:iq:browse'/>
</iq>
Figure 10-10 shows the LDAP hierarchy we've discussed in Section
6.2.5.1. It's part of an imaginary structure devised to represent people
and departments in an organization. The base distinguished name, or
base DN (an LDAP term meaning the common suffix used in the
identifiers of all elements in a particular LDAP structure), is
dc=demo,dc=org. A DN, or distinguished name, can be thought
of as a key for a particular element. Levels within the LDAP structure
are identified with DNs of ever-increasing lengths, as they get more
specific the deeper the hierarchy is descended.
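Obtaining a relative DN from a full one is simple string work: split on commas and drop the trailing base-DN components. A Python sketch, mirroring what the ldapr script will do:

```python
# Strip the base-DN suffix from a full DN, leaving the relative DN.
def strip_base_dn(dn, base_dn):
    parts = dn.split(',')
    base_len = len(base_dn.split(','))
    return ','.join(parts[:len(parts) - base_len])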
If ldapr were to reflect this hierarchy, we would want the response to an initial browse request to return information about the People and Groups nodes. Example 10-16 shows what this response looks like.
An initial browse request elicits a reflection of the People and Groups nodes
SEND: <iq type="get" id="B88" to="ldap.cicero">
        <query xmlns="jabber:iq:browse"/>
      </iq>

RECV: <iq id='B88' type='result' to='dj@cicero/basement' from='ldap.cicero'>
        <query xmlns='jabber:iq:browse'>
          <item jid='ldap.cicero/ou=People' name='ou=People'/>
          <item jid='ldap.cicero/ou=Groups' name='ou=Groups'/>
        </query>
      </iq>

In the response, we're returning a single level of the LDAP hierarchy. The People and Groups nodes are represented by <item/> tags, within the generic container tag <query/>.
This is in slight contrast to the browse <iq/> elements shown in Section 6.2.5.1, in which, effectively, two levels of hierarchy are returned:
RECV: <iq type='result' id='B89' to='dj@cicero/basement' from='ldap.cicero'>
        <item name='root entry' xmlns='jabber:iq:browse' jid='ldap.cicero'>
          <item name='ou=People' jid='ou=People@ldap.cicero'/>
          <item name='ou=Groups' jid='ou=Groups@ldap.cicero'/>
        </item>
      </iq>
It's really up to you to decide how many levels of information you want
each browse response to emit. It depends on circumstances (will the
requester be able to interpret multiple levels of hierarchy?) and the
type of application scenario in which you're wanting to employ Jabber
browsing. In this case, despite the difference in appearance, WinJab
will interpret the information correctly whichever way you play it.
{{Sidebar|No Query Tag?|This last example looks slightly odd, because the familiar <query/> tag, usually the container for information within various IQ namespaces, is conspicuously absent. It's actually still there in spirit, in the form of the first <item/> tag, which takes the <query/> tag's role in carrying the xmlns='jabber:iq:browse' namespace declaration.}}

So, we've got our first two LDAP levels back. Figure 10-11 shows how they're displayed in WinJab's browser window. The JID we've just browsed to—ldap.cicero—is shown in the Jabber Address field. The People and Groups nodes are represented by <item/> tags within the jabber:iq:browse-qualified result; these are translated into folder icons in the browser window.
The browser displays folder icons for these nodes because we've described them using jabber:iq:browse's generic "holder" tag <item/>. While the namespace describes many categories to represent many different things (service, conference, user, and so on), there's not really a category that fits the "LDAP hierarchy node" description. So we plump for the generic <item/>, reserved specially for such occasions.
To navigate to the next level in the hierarchy, all we do is click on one of the icons now displayed to us. WinJab will send this next browse request to the JID that's associated with the icon we click. In the case of the People icon, we know from the browse result shown in Example 10-16 that the JID is ldap.cicero/ou=People. This is what that browse request and response look like:
SEND: <iq type="get" id="B105" to="ldap.cicero/ou=People">
        <query xmlns="jabber:iq:browse"/>
      </iq>

RECV: <iq id='B105' type='result' to='dj@cicero/basement' from='ldap.cicero/ou=People'>
        <query xmlns='jabber:iq:browse'>
          <item jid='ldap.cicero/ou=UK, ou=People' name='ou=UK'/>
          <item jid='ldap.cicero/ou=France, ou=People' name='ou=France'/>
          <item jid='ldap.cicero/ou=Germany, ou=People' name='ou=Germany'/>
        </query>
      </iq>

The results of this browse request are shown in Figure 10-12. Again, notice the JID displayed in the Jabber Address window—it's the JID that we've just browsed to. The JIDs that are assigned to the items displayed here reflect the next level in the hierarchy. And so it goes on.
What the reflector is actually doing
Each of the JIDs that are browsed to look like this:
[component name]/[relative LDAP DN]

such as:

ldap.cicero/ou=UK, ou=People

The hostname part of the JID—[component_name]—is the name of the LDAP reflector component, and the resource part of the JID—[relative_LDAP_DN]—reflects the DN of the node (minus the base DN suffix) within the LDAP structure that the <item/> with that JID represents.
The crucial bit is that the [component_name] part remains the same (ldap.cicero) across every call. This means that all of the jabber:iq:browse requests made in the navigation sequence go to the component, in this case, the ldapr script.
Once the component receives these requests, it disassembles the JID, extracting the resource part, and uses it to query the LDAP server on the Jabber client's behalf. It then builds a response, in the form of an IQ-result containing the all-important jabber:iq:browse-qualified extension, containing the results of the LDAP query.
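The JID handling just described can be sketched in Python; the component name and DNs follow the examples above:

```python
# Extract the resource part of a browsed-to JID to rebuild a full search
# base, and stamp each result entry with a JID carrying its relative DN.
def jid_to_search_base(jid, base_dn):
    _, _, resource = jid.partition('/')
    return resource + ',' + base_dn if resource else base_dn

def browse_item(component, relative_dn):
    return {'jid': component + '/' + relative_dn,
            'name': relative_dn.split(',')[0]}
```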
JIDs are used to convey information to the component, as well as to have the request delivered to the right place.
{{Sidebar|Using JID Parts to Convey Information|The example from Section 6.2.5.1 showed the information being conveyed as the user part of the JID:

<item name='ou=People' jid='ou=People@ldap.yak'/>

This is perfectly fine, as long as the information we want to convey doesn't contain characters deemed illegal for that part of the JID. There are far fewer restrictions on how the resource part of a JID may be constructed. So it should usually be your first choice of location for piggybacking information:

<item jid='ldap.cicero/ou=People' name='ou=People'/>}}
The ldapr script
While the explanation might be long, the actual script, shown in Example 10-17, is relatively short. Written in Perl, the ldapr script uses the Jabber::Connection library.
The ldapr script, written in Perl
use strict;
use Jabber::Connection;
use Jabber::NodeFactory;
use Jabber::NS qw(:all);
use Net::LDAP;

my $ldapsrv = 'cicero';
my $basedn  = 'dc=demo,dc=org';

my $ldap = Net::LDAP->new($ldapsrv) or die $@;

debug("connecting to Jabber");
my $c = new Jabber::Connection(
  server    => 'localhost:9389',
  localname => 'ldap.cicero',
  ns        => 'jabber:component:accept',
);
unless ($c->connect()) { die "Oops: ".$c->lastError; }

debug("registering IQ handlers");
$c->register_handler('iq',\&iq_browse);
$c->register_handler('iq',\&iq_notimpl);

debug("authenticating");
$c->auth('pass');

debug("waiting for requests");
$c->start;
sub iq_browse {

  my $node = shift;
  return unless $node->attr('type') eq IQ_GET
            and my $query = $node->getTag('query', NS_BROWSE);

  my ($obj) = $node->attr('to') =~ /\/(.*)$/;
  debug("request: $obj");

  my $result = $ldap->search(
    base   => $obj ? join(',', $obj, $basedn) : $basedn,
    filter => "(objectclass=*)",
    scope  => 'one',
  );

  if ($result->code) {
    debug("search error: ".$result->error);
  }
  else {
    foreach my $entry ($result->all_entries) {
      my $e = strip($entry->dn, $basedn);
      debug("found: $e");
      my $item = $query->insertTag(isUser($e) ? "user" : "item");
      $item->attr('jid', join('/', 'ldap.cicero', $e));
      $item->attr('name', [split(/,/, $e)]->[0]);
    }
  }

  $node = toFrom($node);
  $node->attr('type', IQ_RESULT);

  $c->send($node);

  return r_HANDLED;

}
sub iq_notimpl {
my $node = shift; $node = toFrom($node); $node->attr('type', IQ_ERROR); my $error = $node->insertTag('error'); $error->attr('code', '501'); $error->data('Not Implemented'); $c->send($node); return r_HANDLED;
}
sub strip {
my ($fqdn, $basedn) = @_; my @fqdn = split(/,/, $fqdn); my @basedn_elements = split(/,/, $basedn); return join(',', @fqdn[0 .. ($#fqdn - scalar @basedn_elements)]);
}
sub toFrom { my $node = shift; my $to = $node->attr('to'); $node->attr('to', $node->attr('from')); $node->attr('from', $to); return $node; }
sub isUser {
my $rdn = shift; return $rdn =~ /^cn/ ? 1 : 0
}
sub debug {
print STDERR "debug: ", @_, "\n";
}
Looking at ldapr Step by Step
Taking the script step by step, we start on familiar ground:
use strict; use Jabber::Connection; use Jabber::NodeFactory; use
Jabber::NS qw(:all); use Net::LDAP;
my $ldapsrv = 'cicero'; my $basedn = 'dc=demo,dc=org'; After declaring the modules we want to use (the only one we haven't seen so far is the Net::LDAP module that we'll need to connect to and query an LDAP server), we define a couple of variables. $ldapsrv is the name of the LDAP server that ldapr is going to be reflecting, and $basedn is the base distinguished name that will be used as the suffix in all of the LDAP queries.
If you don't have an LDAP server of your own, a number of public ones are available that you could point this script at. The two variables $ldapsrv and $basedn go together—make sure you specify the correct base DN for the LDAP server you want to reflect.
{{Note|Depending on the configuration, some LDAP servers will require you to bind to them with a username and password before you can perform searches. To do this, you'll need to include an extra step in this script, using the bind() method in Net::LDAP.
</code> Having opened our connection to the LDAP server:
my $ldap = Net::LDAP->new($ldapsrv) or die $@;
we then proceed to connect to the Jabber server as a component. We're
connecting to localhost:9389, which means this component script
is going to run on the same host as the Jabber server, and connect to it
on port 9389:
debug("connecting to Jabber"); my $c = new Jabber::Connection(
server => 'localhost:9389', localname => 'ldap.cicero', ns
=> 'jabber:component:accept', );
unless ($c->connect()) { die "cannot connect: ".$c->lastError; } We'll also need a <service/> stanza in the Jabber server's configuration file to describe this component. The stanza in Example 10-18 would be appropriate. Refer to Section 9.3.1.1 for more details on connecting to Jabber as a component.
A component instance definition for ldapr
<service id='ldap.cicero'> <accept>
<ip>localhost</ip> <port>9389</port>
<secret>secret</secret> </accept> </service>
The script isn't going to do much apart from reflect
jabber:iq:browse queries as LDAP searches. Consequently, it
doesn't need to be able to handle anything apart from incoming IQ
elements:
debug("registering IQ handlers");
$c->register_handler('iq',\&iq_browse);
$c->register_handler('iq',\&iq_notimpl);
The iq_browse() function is where all the work will be done. As
in the newsagent recipe (Section 8.3)," we also have a
"catchall" function (iq_notimpl()) that cleans up any "stray"
IQ elements that it doesn't know about. Furthermore, because we aren't
registering any handlers for <message/> or
<presence/> elements, the dispatcher in
Jabber::Connection will just throw them away, leaving ldapr
blissfully ignorant of them—which is what we want.
After authenticating with the server by calling the auth() method to send the <handshake/> element:
debug("authenticating"); $c->auth('secret');
it's time to set the main event loop going. With a call to
start(), we hand over control to Jabber::Connection,
safe in the knowledge that we have a function (iq_browse()) to
deal with the incoming requests that we're supposed to deal with, and
that we don't care a hoot about anything else:
debug("waiting for requests"); $c->start;
Performing the actual reflection
Now, let's move on to the meat of the script. The main handler, iq_browse(), starts by making sure it has an IQ element:
sub iq_browse {
my $node = shift; return unless $node->attr('type') eq IQ_GET and my $query = $node->getTag(, NS_BROWSE);
What we're looking for is an IQ-get with a jabber:iq:browse-qualified query extension. If there isn't one, we exit out of the function, and dispatching continues to the iq_notimpl() function, because we didn't return the special value represented by m_HANDLED.
If we do get a valid request, we first extract the relative DN from the resource part of the JID specified in the IQ-get's to attribute—the JID. If the request was sent to ldap.cicero/ou=UK, ou=People, then we extract the relative DN ou=UK, ou=People into $obj like this:
my ($obj) = $node->attr('to') =~ /\/(.*)$/; debug("request:
$obj");
Armed with a specification of what part of the LDAP hierarchy needs to
be searched, the next step is to call the search() method on
the LDAP object in $ldap:
my $result = $ldap->search( base => $obj ? join(',',
$obj, $basedn) : $basedn, filter => "(objectclass=*)", scope =>
'one', );
As you can see, we're specifying three parameters in the
search() method:
- base
- This is the point within the LDAP hierarchy from which to start
- looking. We must specify this as a full DN, so we append the base DN
- (dc=demo,dc=org) to the relative DN received in the request
- to make an absolute DN:
ou=UK, ou=People, dc=demo, dc=org
- filter
- Specifying (objectclass=*) effectively means "look for
- anything."
- scope
- There may be many levels below the point in the hierarchy that we're
- going to start searching from. Specifying 1 for the scope
- parameter tells the search to descend only one level. After all, we
- want to return only one level back to the requester in the reflection.
- If the search failed for some reason (e.g., if a relative DN specified
- in the request didn't exist in the hierarchy),[2] then don't bother checking the
- results. Instead, a warning debug message with the error details is
- issued:
if ($result->code) { debug("search error:
".$result->error);
}
However, if the search succeeded, the results should be translated into the jabber:iq:browse extension:
else { foreach my $entry ($result->all_entries) { my $e =
strip($entry->dn, $basedn); debug("found: $e"); my $item =
$query->insertTag(isUser($e) ? "user" : "item");
$item->attr('jid', join('/', $ID, $e)); $item->attr('name',
[split(/,/, $e)]->[0]);
} }
Calling the all_entries() method on the search results returns a list of LDAP entries, in the form of objects. Calling the dn() method on one of these objects (in $entry) returns us its full DN. Searching with a base of:
ou=UK, ou=People, dc=demo, dc=org
in the demonstration database would return two entries:
cn=Janet Abrams, ou=UK, ou=People, dc=demo, dc=org cn=Paul
Anthill, ou=UK, ou=People, dc=demo, dc=org
The task of this section of the iq_browse() function is to turn
the information, in the form of these two entries, into something like
this:
<iq id='B25' type='result' to='dj@cicero/basement'
from='ldap.cicero/ou=UK, ou=People'> <query
xmlns='jabber:iq:browse'> <user name='cn=Janet Abrams'
jid='ldap.cicero/cn=Janet Abrams, ou=UK, ou=People'/> <user
name='cn=Paul Anthill' jid='ldap.cicero/cn=Paul Anthill, ou=UK,
ou=People'/> </query> </iq>
which in turn should be rendered into something like the contents of
WinJab's browser window as shown in Figure 10-13.
</code> The function strip()
takes two arguments and strips away a base DN from a fully qualified DN to leave the significant, relative part. Calling strip() on this:
cn=Janet Abrams, ou=UK, ou=People, dc=demo, dc=org
when specifying the base DN in $basedn, would return this:
cn=Janet Abrams, ou=UK, ou=People
For each of the entries found, we insert a tag into the
<query/> extension. The name of the tag is either
item, if the entry is simply an LDAP hierarchy node, or
user (a valid jabber:iq:browse category), if the entry
points to a person. We make the decision on the basis of the first part
of the DN—if it's cn=, then it's a reference to a person. The
isUser() function makes this decision for us.
Once inserted, we embellish the <item/> or <user/> tag with name and jid attributes. The jid attribute is crucial, as it represents the path to further descend within the LDAP hierarchy on the next request. It is given the whole of the relative DN as a value. The name attribute is simply given the most significant portion of the DN as a value.
Whether we found something or not, we still want to return a result to the requester:
$node = toFrom($node); $node->attr('type', IQ_RESULT);
$c->send($node);
So we swap around the to and from attributes (remembering we're a component, and not a client) change the IQ element's type from get to result, and send it back.
We end the function by telling the dispatcher that the element has been handled:
return r_HANDLED;
}
Supporting functions
The rest of the script consists of minor functions that play roles in assisting the core iq_browse().
The iq_notimpl() function is exactly the same as in the newsagent script in Section 8.3; it serves to catch stray IQ elements that in this case aren't IQ-gets containing a jabber:iq:browse query, and send them back with a "Not Implemented" error. With IQs, this is preferable to not responding at all, as responses are usually expected, even if those responses are IQ-errors. It's different with <message/> and <presence/> elements, as they can be seen as "one-way" and valid fodder for an element-sink.
sub iq_notimpl {
my $node = shift; $node = toFrom($node); $node->attr('type', IQ_ERROR); my $error = $node->insertTag('error'); $error->attr('code', '501'); $error->data('Not Implemented'); $c->send($node); return r_HANDLED;
} The strip() function, as described already, removes a base DN from a fully qualified DN:
sub strip {
my ($fqdn, $basedn) = @_; my @fqdn = split(/,/, $fqdn); my @basedn_elements = split(/,/, $basedn); return join(',', @fqdn[0 .. ($#fqdn - scalar @basedn_elements)]);
} As well as sharing the iq_notimpl() function with the newsagent script, ldapr also shares the toFrom() function, which flips around the values of the to and from attributes of an element:
sub toFrom { my $node = shift; my $to = $node->attr('to');
$node->attr('to', $node->attr('from')); $node->attr('from',
$to); return $node;
}
The isUser() function, also described earlier, makes an
arbitrary distinction between LDAP nodes that it thinks are
People and those that it thinks aren't:
sub isUser {
my $rdn = shift; return $rdn =~ /^cn/ ? 1 : 0
} Last but not least, we have a simple debug() function, which in this case is simply an abstraction of the classic "print to STDERR" method:
sub debug {
print STDERR "debug: ", @_, "\n";
}
Building an ERP Connection
To some extent, SAP's R/3, an Enterprise Resource Planning (ERP) system, has until recently been a monolithic piece of software, both from an actual and psychological perspective.
The drive to replace disparate and incompatible business system "islands" with an integrated software solution that covered business processes across the board was strong in the 1980s and 1990s. With good reason. For example, SAP delivered the ability for companies to manage their different end-to-end processes coherently, without requiring custom-built interfaces between logistics and financial accounting packages. The flow of information was automatic between the application subsystems that were standard within SAP's R/2 and R/3 products.
The upside to this was, of course, the seamless integration of data and functions within the SAP universe, a universe vast enough to offer application coverage for pretty much every conceivable business function.
The downside was, ironically, the universe itself. All-encompassing as SAP's products were, and indeed they were forever changing and expanding to meet business demands, it was never enough, if you wanted to break out of the mould formed by the omnipresent standard proprietary client called SAPGUI. Sure, these days, with a huge installed base of R/2 and R/3 systems in production around the world, there are a multitude of ways to get data in and out of SAP systems, but despite the varied successes of alternative client initiatives, user interaction with the business processes remains largely orientated around the SAPGUI.
This recipe is extremely simple compared to the others in this chapter. It breaks out of the SAPGUI mold and the monolithic software culture to use open source tools and technologies to add value to our SAP business processes. The point of this recipe is not particularly what the script looks like or how it's written, but what it does and how it does it.
Building an Order Approval Notification Mechanism
We're going to use a standard Jabber client as an SAP R/3 client—obviously not to replace SAPGUI, rather to allow someone who perhaps has a single R/3-related task to perform and connects only to SAP occasionally. In stark contrast to the SAPGUI client, an off-the-shelf Jabber client is much smaller. It takes up less screen space, memory, and CPU and generally for focused access is a great way for someone to play his part in business processes from the comfort of familiar communication surroundings—his IM client.
Of course, the available Jabber clients don't have any built-in R/3 functionality per se, but as clients that can receive messages and recognize URLs,[3] they provide enough horsepower for us to achieve our goal.
That goal is to notify a supervisor whenever a sales order is placed that requires his approval. The notification will arrive in the form of a <message/> element, carrying some descriptive text and, crucially, a URL, which points to an Apache-based handler. When invoked, the handler pulls the relevant information for the order out of R/3, requests verification of the viewer's identity, and offers a chance to approve the order with the click of a button.
Effectively we're building a miniworkflow scenario: in one direction a notification is transmitted out of the bounds of the SAP universe to the approver, and in the other direction the notification process is turned around in a one-step approval cycle via Apache. Figure 10-14 shows this scenario and where our Jabber client and script, called approv, fits in.
</code> The goal here is to get the
notification message out of R/3 and send it to the supervisor's JID along with a URL that he can follow back into the R/3 system to carry out the approval process. The return process via Apache is outside this recipe's scope and has been left as an exercise for you.
Getting the notification out of R/3
As we've already mentioned, there are many ways of getting data in and out of SAP. We're going to use a generic, lowest common denominator feature of the R/3 Basis system to invoke a script and pass it parameters.
{{Note|At this stage, if you're squeamish about R/3 Basis or SAP's ABAP language or are of another ERP persuasion, it's time to look away. What we're going to do here is not rocket science, nor is the general process specific to R/3. We're just going to call a script, at the operating system level, from within an application inside R/3.
</code> The function group SXPT encompasses a number of function modules related to the definition, management, and execution of operating system commands. Each of these commands is described within sets of configuration parameters that define how and where they can be invoked. Using the program RSLOGCOM, you can create definitions for these operating system commands manually.
We need to define such a command that refers to the approv script. Figure 10-15 shows the RSLOGCOM definition of approv as an external command that is called ZNOTIFY.
</code> Once approv has been defined
this way, it can be invoked by passing parameters with a call to a function module in the SXPT function group (SXPG_EXECUTE_COMMAND). Example 10-19 shows how the script might be invoked using this function module in ABAP. Code like this could typically be installed in a customer exit—a place in a standard R/3 application where custom processing can be added without having to go through the involved process of creating and carrying out a modification request.
Calling approv, via ZNOTIFY, from within R/3
data: sxpg_exec_protocol like btcxpm occurs 0 with header line,
sxpg_add_parms like SXPGCOLIST-PARAMETERS,
sxpg_target_system like RFCDISPLAY-RFCHOST value 'gnu.mine.nu'.
concatenate ordernumber approver into sxpg_add_parms separated by space.
call function 'SXPG_COMMAND_EXECUTE' exporting commandname = 'ZNOTIFY' additional_parameters = sxpg_add_parms operatingsystem = sy-opsys targetsystem = sxpg_target_system stdout
= 'X' stderr = 'X' terminationwait = 'X' table
exec_protocol = sxpg_exec_protocol exception others
= 1.
if sy-subrc <> 0. raise notification_failed. endif. We pass two parameters to approv via the SXPG_COMMAND_EXECUTE: the number of the order that needs approving (from ordernumber) and the JID of the approver (from approver). The SXPG mechanism invokes the approv script with the two parameters, and the notification starts its journey.
The approv Script
As mentioned already, the approv script (shown in Example 10-20) is rather small and insignificant. Its purpose is to take the order number it receives, wrap it in a descriptive message that includes a URL, and send it to the approver's JID. Written in Perl, the approv script uses a minimum set of features from the Jabber::Connection library.
The approv script, written in Perl
use strict; use Jabber::Connection;
my ($order, $approver) = @ARGV; my $c = new Jabber::Connection(server => 'qmacro.dyndns.org'); die "Cannot connect: ".$c->lastError unless $c->connect(); $c->auth('approv','secret','approv');
$c->send(<<EO_MSG); <message to="$jid"> <subject>Order Approval Required</subject> <body> An order ($order) requiring your approval has been placed. Please visit to approve.
Thank you. </body> </message> EO_MSG
$c->disconnect; The script receives the two parameters passed from R/3 into the $order and $approver variables. Having connected to the Jabber server at qmacro.dyndns.org and authenticated as the user approv, it sends off a formatted message to the approver, before disconnecting.
The script itself is extremely simple and is not of primary importance. What is crucial here is the role that Jabber is playing. The <message/> element of Jabber's protocol is used to span the R/3 world with the IM world, enabling a business process cycle to take place outside the normal boundaries of R/3 interaction. On a simple level, that's all it takes to merge the ERP business process world with the world of IM.
The return path to R/3, via the Apache-based CGI application, is initiated when the user clicks on the URL in the message, as shown in Figure 10-16.
</code> Here we see that Jarl has
recognized the http:// address and has rendered it as an active link. That's all it has to do. There's no attempt on Jarl's part to make an HTTP request to the Apache server and render the HTML received. Focusing on "the right tools for the right job," we should leave that role to a web browser, as indeed Jarl does, starting up the browser of our choice.[4]
Taking This Further
The journey through the recipes in the last three chapters of the book has taken us from the simplest CVS notification mechanism with the cvsmsg script (Section 8.1) through a fairly involved RSS component (Section 9.3) to the transporting of XML-RPC-encoded requests and responses with our JabberRPCRequester and JabberRPCResponder scripts (Section 10.3).
This chapter, and indeed the book, ends on a simple note, in the form of the approv script. Not without reason, we've completed the circle of script and application complexity. While we can build very useful and successful applications that are naturally complex, a Jabber-powered solution doesn't necessarily have to be. To employ the Jabber philosophy, the technology, and the protocol elements to build bridges between previously separate systems, and to span different areas of technology, open and proprietary alike, using Jabber's open, extensible, and flexible protocol, is what it's all about.
Finally, it's hopefully clear from the diversity of recipes shown in this part of the book that deploying solutions with Jabber, as noted in the preface, really is fun! | http://commons.oreilly.com/wiki/index.php/JabChapter_10 | CC-MAIN-2014-42 | refinedweb | 16,886 | 51.58 |
Overview
Introducing QuickTime for Java
Initializing QuickTime and Drawing an Image
Summary
Further Exploration
Overview
In
this first module, we will learn the basics of QuickTime for Java, such
as initializing QuickTime for Java, creating a window with a QTCanvas,
and drawing an image centered within the window. (See image right.)
Additionally, we will familiarize ourselves with some of the principal
concepts behind QTJ and explore the architecture. At the end of this module,
you will know how to create and prepare an application that uses
QuickTime for Java and will have a basic understanding of how to use the
QTJ API.
Introducing
QuickTime for Java
QuickTime for Java is both an API and an application framework. As an
API, it provides Java developers access to the wealth of multimedia
capabilities in QuickTime previously available only to C/C++ and Pascal
programmers. It enables access to QuickTime's native runtime libraries
which provide support for different forms of media (images, audio, and
movies), timing services, media capture, complex compositing, visual effects,
and custom controllers. QuickTime for Java has many other benefits as
well. Since the API relies on native libraries to perform its complex
and time-consuming tasks, it is extremely fast. It is also cross-platform-
it will run on all platforms that support QuickTime (Macintosh, Windows
NT, Windows 95, and Windows 98).
The application framework layer makes it easier for Java developers to
integrate QuickTime capabilities into Java applets and applications. The
framework includes:
We will be taking advantage of both of these layers in the first module
of this tutorial.
Initializing
QuickTime and Drawing an Image
QuickTime for Java is a large API and it is very easy to get lost. Perhaps
the best way to learn QuickTime for Java is to jump right in and start
writing a simple example. This module is presented in a progressive format
as if the code was being written before your eyes. This is designed to
help suggest a methodology for programming to the QuickTime for Java API.
Newly added code will appear in yellow (yellow)
while code from previous steps will appear in gray (gray).
Code that is removed or deleted will appear in red (red).
We will explain things as we go along, so open up your favorite source
editor, and let's start coding!
Creating the Zoo1 Class
Our first step will be creating a window and displaying it on the screen.
The easiest way to do this is to create a new class derived from java.awt.Frame
that looks something like this:
import java.awt.*;
public class Zoo1 extends Frame
{
public static void main( String[] args )
{
}
}
So far we haven't done anything tricky. We have a static main method because
we want to build an application instead of an applet. Now would be a good
time to save your file as "Zoo1.java". The file should have the
same name as the main class, or your compiler may complain.
Adding the Constructor
Next, we will add a constructor that takes a string parameter for the
title of the window:
public class Zoo1 extends Frame
{
public Zoo1( String s )
{
super(s);
}
public static void main( String[] args )
{
Zoo1 appWindow = new Zoo1( "QTZoo 1" );
}
}
Now we have added our constructor for the Zoo class. The constructor takes
a string argument and passes it off to our superclass constructor which
is Frame. The Frame class will automatically handle initializing the window
title for us based on our string argument. We have also created an instance
of our object in main( ), and passed a window title to the constructor.
Now we are ready to perform our initialization.
Window Initialization
Now we need to configure the parameters of the window such as the size
and resizability. Once again, this is all standard Java AWT, so we are
not going to explain most of this in great detail:
public class Zoo1 extends Frame
{
static public int WIDTH = 640;
static public int HEIGHT = 480;
public Zoo1( String s )
{
super(s);
setResizable( false );
setBounds( 0, 0, WIDTH, HEIGHT );
}
public static void main( String[] args )
{
Zoo1 appWindow = new Zoo1( "QTZoo 1" );
appWindow.show();
appWindow.toFront();
We have created two static integers to store information about the height
and width of our window. We will use these values in the constructor in
which we make the window non-resizable and set the bounds of the window.
Finally, in the main method, we call show( ) to make the window visible
and call toFront( ) to ensure that it is the front-most window.
It is useful to note that thus far, no special coding to deal with QuickTime
has been done. This serves to emphasize the fact that QuickTime for Java
is designed to work in a complementary manner with the standard AWT. It
does not enforce a specific programming style.
In the next step we will initialize QT.
Initializing QuickTime
Since QuickTime is a native library, its memory is managed outside of
the normal Java heap. In most cases, you create QuickTime for Java objects
in the same manner that you would create a standard Java object. However,
before you work with QuickTime for Java, you must allow QuickTime to allocate
memory for the native objects associated with your application and initialize
its internal data structures.
This is accomplished through one API call on the quicktime.QTSession
class:
import quicktime.QTSession;
import quicktime.QTException;
public class Zoo1 extends Frame
{
static public int WIDTH = 640;
static public int HEIGHT = 480;
public Zoo1( String s )
{
super(s);
setResizable( false );
setBounds( 0, 0, WIDTH, HEIGHT );
}
public static void main( String[] args )
{
try
{
QTSession.open();
Zoo1 appWindow = new Zoo1( "QTZoo 1" );
appWindow.show();
appWindow.toFront();
}
catch ( Exception e )
{
QTSession.close();
e.printStackTrace();
}
We have imported two classes at the beginning of our application. The
first, QTSession, allows us to open and close QuickTime sessions between
Java and the native QuickTime library. The second, QTException, is
thrown by many QTJ calls if error conditions arise.
After adding the two import statements, we add a try / catch block. This
is necessary because our call to open the QTSession which initializes
QuickTime for Java could throw an Exception (for example if QuickTime
is not installed). If we don't catch this, we will get a compiler error
because the static method QTSession.open(
) is declared as throwing an Exception.
We attempt to open the session, and if any error occurs, we close the
session in our catch block by calling the static method QTSession.close(
).
The key point of interest in this step is that we must call QTSession.open(
) to initialize QuickTime. At the end of our application, we should call
QTSession.close( ) to perform cleanup and properly shut down our QuickTime
session. You will notice that we are only closing down QTJ if an error
condition occurs. We should also close our session if the user quits the
application. We will handle this case in the following step.
Handling Application Termination
When the user quits our application by closing our window, or choosing
Quit from the Apple Menu, we need to shut down QTJ:
import java.awt.*;
import java.awt.event.*;
import quicktime.QTSession;
import quicktime.QTException;
public class Zoo1 extends Frame
{
static public int WIDTH = 640;
static public int HEIGHT = 480;
public Zoo1( String s )
{
super(s);
setResizable( false );
setBounds( 0, 0, WIDTH, HEIGHT );
addWindowListener( new WindowAdapter()
{
public void windowClosing( WindowEvent we )
{
QTSession.close();
dispose();
}
public void windowClosed( WindowEvent we )
{
System.exit( 0 );
}
});
}
public static void main( String[] args )
{
We have added an import statement for java.awt.event so that we can create
our new WindowAdapter. Our window adapter code is a bit tricky. We are
taking advantage of Java's support for a feature called anonymous inner
classes. This allows us to create a new subclass of WindowAdapter and
override two of its methods from inside the call addWindowListener( )
which is part of Frame. This allows us to receive notification when the
close box is pressed via the windowClosing( ) method and when the window
has been destroyed via the windowClosed( ) method.
In the windowClosing( ) method we shut down the QTSession and dispose
of our window. In windowClosed( ) we call System.exit( ) which quits our
program.
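As an aside, the anonymous inner class idiom itself can be illustrated on its own, without any AWT or QuickTime dependencies. In the sketch below, the Handler type is a made-up stand-in for WindowAdapter; it is not part of any real API:

```java
public class AnonymousDemo {
    // A tiny base type to subclass anonymously, standing in for
    // java.awt.event.WindowAdapter in the tutorial's code.
    public static abstract class Handler {
        public abstract String onEvent(String event);
    }

    public static Handler makeHandler() {
        // Anonymous inner class: we subclass Handler and override its
        // method inline, without declaring a separate named class --
        // exactly the pattern used with addWindowListener( ) above.
        return new Handler() {
            public String onEvent(String event) {
                return "handled: " + event;
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(makeHandler().onEvent("windowClosing"));
    }
}
```

The call to addWindowListener( ) works the same way: the `new WindowAdapter() { ... }` expression creates and passes an unnamed subclass in a single step.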
You will notice that we did not do anything special to handle the quit
menu item. On Windows, choosing quit causes close messages to get sent
to any open windows (which in our case would close QTJ because of our
WindowAdapter class). On the Mac, where this isn't the case, a default
quit handler is installed for us by QTJ that automatically calls QTSession.close(
) as the application terminates.
Now the only task that remains is drawing our image.
Drawing an Image Using QTJ
Now comes the fun part: opening an image file and using QuickTime to
display it in our window. First we need to import a few packages:
import java.awt.*;
import java.awt.event.*;
import java.io.IOException;
import quicktime.QTSession;
import quicktime.QTException;
import quicktime.app.image.GraphicsImporterDrawer;
import quicktime.app.display.QTCanvas;
import quicktime.app.QTFactory;
import quicktime.io.QTFile;
These are mostly QuickTime classes, which we will use to load and
draw our images. For example, the GraphicsImporterDrawer
class allows QuickTime to parse a number of image file formats and draw
the image. We also need IOException because QT will throw this type if
the file specified is not found. We will talk about these classes in the
code section below:
setResizable( false );
setBounds( 0, 0, WIDTH, HEIGHT );
QTCanvas myQTCanvas = new QTCanvas(
QTCanvas.kInitialSize, 0.5F, 0.5F );
add( myQTCanvas );
Here we are adding drawing functionality to our window. For simplicity's
sake, we have decided to place this code directly in the constructor.
Our first step is to create a QTCanvas.
To present QuickTime content in a Java Applet or Application, you need
a mechanism for interacting with the display and the event system. The
QTCanvas is a specialized canvas that provides access to QuickTime's native
drawing mechanism within a Java window. It essentially punches a hole
in the Java display area and manages display of that region using QuickTime.
All QuickTime drawing must occur in a QTCanvas. When we create a QTCanvas
here, we are specifying an area of our window where QuickTime will perform
imaging. The constructor of the QTCanvas takes three arguments. The first
specifies how the QTCanvas should behave when its container resizes. In
this example, we have specified the constant QTCanvas.kInitialSize.
With this sizing flag, if our window grows larger than the size of the
canvas, the canvas does not scale. It remains at its original specified
size. Similarly, if the window becomes smaller than the canvas, the canvas
does not scale either. It will be clipped to the window size. There are
other flags that allow you to have free scaling as well as ones that allow
scaling but preserve aspect ratio. These are all defined in QTCanvas.
The second and third parameters of the QTCanvas constructor are for horizontal
and vertical alignment. These specify the default position of items drawn
within the canvas. We have specified the float value of 0.5 for both the
horizontal and vertical alignment parameter. This will center items within
the Canvas. A value of zero will specify the top or left and the value
of one will specify bottom or right.
For example, let's pretend that we are creating a window and placing
a QTCanvas inside it. Let's also pretend that our client (QT object that
is capable of drawing itself in a QTCanvas) is 125 by 125 pixels (which
is slightly smaller than the window we are creating). The following series
of images demonstrates the behavior we would see based on the values of
the parameters we pass the QTCanvas. Note that the QTCanvas is invisible.
It is the area that drawing may occur in. The green area with the text
QTCanvas is the client of the Canvas that is doing the drawing:
QTCanvas( QTCanvas.kInitialSize,
0.5F, 0.5F )
Remember that the canvas is the same size as the window. The client
area is smaller than the QTCanvas it lives in. By specifying kInitialSize,
no scaling occurs. The two float values specify that the client
is to be drawn centered within the QTCanvas. If our window was smaller
than the client, part of the client would not fit in the window.
No scaling would occur.
QTCanvas( QTCanvas.kInitialSize,
0F, 0F )
This is very similar to the first example except that by specifying
0F, 0F, we are telling the canvas to draw its client object in
the upper left hand corner. Like the previous example, if the window
were resized, no scaling would occur.
QTCanvas( QTCanvas.kFreeResize,
0F, 0F )
In this example, we have specified the kFreeResize parameter. This
will cause the QTCanvas to resize with the window, and will also
cause the content area of the QTCanvas to freely scale as the QTCanvas
changes sizes.
For our application, we want the images to be centered in our window and
not scale (even if they are too big to fit in the window). These parameters
may easily be changed at runtime by calling the QTCanvas methods
setAlignment( ) and setResizeFlag( ).
Once we have created the QTCanvas, we add it to the window by calling
add( ) from Component.
Now that we have a QTCanvas to draw into, we need to load the image files
and display them:
add( myQTCanvas );
try
{
QTFile imageFile = new QTFile(
QTFactory.findAbsolutePath(
"data/zebra/ZebraBackground.jpg" ));
We need to put all of our code in a try block, because QuickTime could
throw an exception if the media is not found. Now we create a new QTFile
object that represents the location of our zebra image. Since QTFile's
constructor can take a file, and we don't want to hard code an absolute
path (which will break if our application moves), we are using QTFactory's
findAbsolutePath( ) to locate the image file based on a relative path
and create a File object that we can pass to our QTFile constructor.
Now that we have a reference to the QuickTime media file, we need to
load it and draw it.
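The two-line listing that belonged here did not survive extraction. Based on the description that follows, it was roughly the following; treat the exact GraphicsImporterDrawer and setClient( ) signatures as assumptions from memory rather than the tutorial's verbatim code:

```java
// Hedged reconstruction of the missing listing.
GraphicsImporterDrawer myImageFile =
    new GraphicsImporterDrawer( imageFile );
myQTCanvas.setClient( myImageFile, true );
```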
We declare a new GraphicsImporterDrawer
object. This object is responsible for examining the media file, decompressing
the file (if necessary), importing it into a native format that can be
easily drawn, and preparing it for drawing. If the media file is in one
of the many formats that QuickTime supports, the GraphicsImporterDrawer
object is capable of loading the file and displaying it in a QTCanvas.
Once we have loaded the media file, we call setClient(
) from the QTCanvas with that object as a parameter. This tells the
canvas that our GraphicsImporterDrawer is responsible for imaging the
contents of the QTCanvas. Thus, when the area of the window that contains
the QTCanvas needs to be redrawn, the QTCanvas will notify its client
(the GraphicsImporterDrawer) that it needs to draw.
That's all that needs to be done! The code that we just wrote is sufficient
to load and draw a media file regardless of the format! Now we will take
a step back and look at the whole constructor and add some exception handling.
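The constructor listing itself was lost in extraction; only a few closing braces survived. Based on the surrounding prose and the summary below, it would have looked roughly like this sketch. The class name, the window-closing handler, and the exact catch order are my assumptions, not the tutorial's verbatim code:

```java
// Hedged reconstruction -- names and details inferred, not original.
public ImageViewer() {
    setResizable( false );
    setBounds( 0, 0, WIDTH, HEIGHT );

    QTCanvas myQTCanvas = new QTCanvas( QTCanvas.kInitialSize, 0.5F, 0.5F );
    add( myQTCanvas );

    try {
        QTFile imageFile = new QTFile(
            QTFactory.findAbsolutePath( "data/zebra/ZebraBackground.jpg" ));
        GraphicsImporterDrawer myImageFile =
            new GraphicsImporterDrawer( imageFile );
        myQTCanvas.setClient( myImageFile, true );
    } catch ( QTException qte ) {
        qte.printStackTrace();
    } catch ( IOException ioe ) {
        ioe.printStackTrace();
    }

    // The summary mentions shutting down the QTSession when the user
    // quits; the stray "});" in the original suggests a listener like:
    addWindowListener( new WindowAdapter() {
        public void windowClosing( WindowEvent e ) {
            QTSession.close();
            System.exit( 0 );
        }
    });
}
```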
Both the QTFile constructor and the findAbsolutePath( ) routine can throw
IOExceptions. In addition, the GraphicsImporterDrawer constructor can
throw a QTException. Thus, it is vital that we catch these exceptions.
Note that we are not doing much if we get an error. We are just printing
a stack trace. In a real application, you would probably want to display
some kind of dialog to alert the user of the problem. For the purposes
of this tutorial, this level of error handling is sufficient.
That is all there is to it!
Summary
This module represents our first foray into the world of QuickTime for
Java. As such, we are taking tentative steps. We learned how to initialize
QuickTime for Java, and how to shut down our QTSession when the user quits
or an error occurs. We learned how the QTCanvas works, and how to create
a canvas and install it in our window. Finally, we learned how to load
and display media files using a GraphicsImporterDrawer and set it to be
the client of the canvas. That is actually quite a bit for such a small
example.
Looking back, there is actually not very much code required to do what
we are doing. It would take about the same code to get similar functionality
using just the AWT, but the advantage of using QuickTime for Java is that
not only do we have the flexibility of a variety of image formats that
Java itself has no support for (PNG, Photoshop, PICT, Targa, TIFF,
and BMP, to name a few), but we can also render the media very quickly.
That is the heart of QuickTime for Java: flexibility and performance.
I think Timbo has a minor point in that there aren’t (I assume) as many HTML parsers as XHTML parsers. And it’s not about scraping your own data, it’s about letting others scrape it. With that said, your site offers up a fine XML-based API via….RSS! I can’t imagine what Timbo is looking to scrape that isn’t neatly covered by an RSS feed.
I think you’ve unfairly slighted Don Ulrich, though. By use of parentheses around the X, he’s indicating that his statement applies to HTML and XHTML. It seems more a call for people to pay more attention to the spec than the hype. (my 2c anyhoo)
Must… resist… fist of death.
I think you’re right, James, that for all practical purposes HTML is just as good as (if not better than) XHTML.
HTML’s nature is much more human-oriented; our minds aren’t robotic calculators, they’re a stream of consciousness. We are imperfect beings, and XHTML’s (intended) strictness places the burden of content perfection on ourselves; HTML, in effect, allows the computer to try to “do what we mean, not what we say.”
The only mental inconvenience I have with HTML now is a syntactic one: When I see an HTML tag without a closing “/>”, my mind doesn’t yet recognize that it’s unclosed. I’m so used to XHTML that I need to acclimate myself to the different syntax.
I could see benefit in choosing one or the other in different use cases, but in most cases HTML could be a perfectly reasonable choice (i.e. when the site is intended for human consumption). As long as HTML handles imperfect syntax uniformly across browsers, that is.
I just recalled something from when xhtml came out and everyone was urged to switch.
Everybody understood the limitations you mention, but took the pill because we were all waiting for this brilliant xml future. And the minute that future arrived, you would add your xml declaration at the top and change the mime type on your server, and voila!
The xhtml movement was future-proofing, the last time you would need to touch the markup for your content. All changes after that would be redesigns where you only touched the css! It would be a paradise!!!
First off, I’m one of those fools who knowingly misuses XHTML… because he can. Even so, I’m having an awfully hard time finding anything from the earlier post with which I can disagree. I think that the bozos know you’re right, so they’re whining.
As for the rest I’m with Austin, though I’m probably more sanguine about the extent to which CSS can be leveraged even in the environment we have.
What mystifies me is how anyone finds this argument worth having. HTML? XHTML? Whatever. For typical production, the difference matters exactly how? Pick one, be done with it, and don’t consider yourself all that special either way… which sounds an awful lot like one of the things you said in the first place.
As for true professionalism, well, we need best practices above and beyond what the W3C has to offer. For that matter, there are still a ton of shops out there that will see your Recommendation and raise you an STFU: the correlation between standards-friendliness and the bottom line is still awfully weak. The correlation between trust in vendors and their bottom line is as strong as ever in the meantime, regardless of the means by which that trust is earned (or wheedled).
I think a few out there may really like Xhtml and thus try to argue for it merits.
Personally I gave up not only on XHTML but on XML also. I will refuse to use anything that has an XML config at all. (Haha that means I can't use Java…)
I dont really mind btw because I autogenerate html or xml or xhtml anyway (and if i autogenerate, i dont care what the end result is anyway, cuz i work with an intermediate), and chose to use human-readable text files, but for others who dont autogenerate it it is really a shame
This is outstanding. I will carve this on my gravestone. Right after I dig my own grave and bury myself alive.
XHTML is a redundant solution to a problem we don’t have, developed by a committee (keyword: COMMITTEE, as in, also, BUREAUCRACY, and POLITICS) that also gave us the pile of crap that is CSS 2. Thanks?
I fully agree that there is nothing wrong with HTML 4.01, and using something to enhance it can’t be bad. Although, it might not be standards, so let’s all yell & scream about that, LoL!!!
I look forward to any future entries you make on this subject, as sanity is hard to come by nowadays.
Don’t you find BeautifulSoup to be a slow tool? (I did.) Anyway, dealing with XML is much faster.
You can scrape well-formed html 4 pretty well, too.
In fact, I think the only difference between html 4 and xhtml is the closing of the image and break tags. (And who uses break tags anymore? Please!)
I still use xhtml, though. Have for years, which is why I stay there. I know exactly how it works with what css in all the browsers.
P.S. Xhtml 1.1 has fewer tags than html 4. Xhtml 1.0 continued the html 4 deprecation of a bunch of tags. Xhtml 1.1 went ahead with killing them. Also, semantic markup (often conflated with the xhtml/css movement) lends itself to fewer tags, which may be where the misperception comes from.
I can’t believe I’m remembering all this crap from 8 years ago.
Does anyone out there remember Netscape and IE 4 hacks? I swear there’s an entire ghetto in my brain filled with that crap.
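As a side note to the scraping sub-thread above, the practical difference between lenient HTML parsing and strict XML parsing can be shown with the Python standard library alone (a hypothetical snippet, not from any commenter):

```python
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

# Valid HTML 4 (an unclosed <br> is allowed) but not well-formed XML.
doc = '<p>Line one<br>Line two</p>'

class TextCollector(HTMLParser):
    """Collect the character data a lenient HTML parser sees."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

collector = TextCollector()
collector.feed(doc)
print(collector.chunks)  # ['Line one', 'Line two']

# A strict XML parser rejects the very same input.
try:
    ET.fromstring(doc)
    xml_ok = True
except ET.ParseError:
    xml_ok = False
print(xml_ok)  # False
```

In other words, well-formed HTML 4 scrapes fine with an HTML parser; only if you insist on an XML toolchain does the markup flavor start to matter.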
I think I agree with you, but out of interest I did some (possibly flawed, definitely unscientific) tests to try and work out if browsers (well, Firefox 3) rendered XHTML any faster than HTML, and concluded that XHTML was rendered ~5% faster. I’m not suggesting that’s a particularly compelling argument, but I thought it might be interesting.
My current position on the matter is that the web framework I use outputs XHTML, and I don’t see any great disadvantage in that so I’m sticking to XHTML for now.
The details of the transfer protocol always matter. Try serving HTML as
text/plain and see how well that works. (Or CSS as
image/png. You get the idea.)
Not my business, but since when did you become a monk of anti-XHTML and pro-HTML campaigns? I’ll be happy if people at least output VALID stuff; it doesn’t matter what markup it is (but for validity it needs some specs, so not just any stuff, but spec’d stuff).
So these debates are pointless, when 3/4 of the web is still outputting tag soup.
P.S: I would love OpenID support here (:
@jinzo
That is true, but people who work with XHTML served as text/html should at least be fully aware it’s not really XHTML they’re using, but invalid HTML pretending to be XHTML.
Only when they’re aware can they can make an educated decision what to use.
@Austin: yes, XHTML 1.1 cut some stuff. But it also A) introduced some things as well (see: Ruby markup) and B) since you’re not supposed to pretend it’s compatible with HTML 4, hardly anybody bothers with it.
@arien: thing is, you can take someone else’s well-formed XHTML document, serve it with the correct media type, and still create a situation where it must be considered non-well-formed. Not only that, but falling back to the source of metadata that’s most likely to be correct — e.g., the metadata in the XHTML document itself — is expressly forbidden by the relevant standards.
What I’ve learned about xhtml is there are a lot of people who take standards too seriously.
I can’t believe I’m wasting my time reading about this. Isn’t there a war going on?
I’ll go along with XHTML having been an imperfect argument against tag soup, that HTML never had to be malformed, etc., but I think the argument had to be made. Remember, we were battling proprietary hegemony that depended on confusion and slop for traction.
So maybe XHTML has done its work. And I wonder if Postel isn’t vindicated in the exercise.
LQ
Firefox only displays MathML when it appears in XHTML pages, I believe. Not sure the source of this, whether it’s by design or a quirk of their implementation.
“Nitroadict”, if nothing else, has rather dented your inferred assumption that the idiots in this dispute are all on one side.
James, thank you… I made the decision 2 years ago to go to HTML and not XHTML as I didn’t find any pro for XHTML… People said I was wrong but you prove me right.
I wonder, would using HTML also be a + for HTML 5
Dude, looks like your blog is hacked with all kind of spam keywords that are not visible (hidden style). I suggest you fix it.
In response to to the post/comments above:
WOW
To begin with, XHTML is an application of HTML, which is a variant of SGML. This application of HTML is achieved through a namespace. Served as XML it can be parsed natively by XSL. A good example of XSL would be DocBook.
dra·co·ni·an error handling This is bullshit. A note to design types: Unit testing is all the rage. It allows you to MAKE SURE you don’t deliver errors to clients. Fatal errors in XHTML docs mean you have failed an interoperability test. You want your client’s view, webpage, or semantic document to be free of errors, don’t you?
All of the arguments here seem to be based on XHTML being served as text/html. I deliver XHTML as XML and serve it as application/xhtml+xml. One word: context, people.
@Jonathan Snook THX someone has to. The lobby is full of design types. The programmers must be on the lower floor.
Hey Don, you might want to check your references, because any markup language using SGML’s syntax and rules is an “SGML application” and any markup language using XML’s syntax and rules is an “XML application”. However, these are not “applications” in the equivocating sense you seem to be promoting.
Don’t look any further than !
@James love the site design.
But: HTML does not programmatically reference SGML. However, XML is used as a substrate and the XHTML namespace is applied to XML. It is an application of XHTML over XML. The pointer is the namespace which carries the XHTML spec. So literally it is an application of XHTML over XML. HTML is a variant of SGML the same way there are variants of Linux.
Pardon my ignorance, but what is the alternative to using break tags?
@Jackson check out w3schools.com; the CSS section should be helpful.
@James My partner at work with the PhD said that if you really get into brass tacks the XHTML namespace synthesizes XHTML. (damn engineers)
When I said XHTML is an application I was referring to its complex characteristics. Versus the variant characteristics of HTML relative to SGML. It is too general to say they are merely applications of their parent technologies. Think spatial.
I have something for release ~mid July. Ever been busting to release a project but you can’t? Damn… | http://www.b-list.org/weblog/2008/jun/21/xhtml/ | crawl-002 | refinedweb | 1,990 | 73.07 |
If you have ever worked on a Computer Vision project, you might know that using augmentations to diversify the dataset is a best practice.
Let’s jump in.
As you might know, every image can be viewed as a matrix of pixels, with each pixel containing some specific information, for example, color or brightness.
To define the term, Horizontal Flip is a data augmentation technique that reverses the order of the columns of such a matrix. As a result, you will get an image mirrored horizontally, that is, flipped along the y-axis.
In the real world, people regularly confuse Horizontal and Vertical Flip as they feel alike. Still, there is a clear-cut difference: Horizontal Flip mirrors the image left to right (across its vertical axis), whereas Vertical Flip mirrors it top to bottom (across its horizontal axis).
That is it. Keep this info in mind, and you will never find yourself stuck on a thought of which augmentation to choose.
import albumentations as albu
from PIL import Image
import numpy as np

transform = albu.HorizontalFlip(p=0.5)  # p is the probability of applying the flip
image = np.array(Image.open('/some/random/image.png'))
augmented_image = transform(image=image)['image']
# with p=0.5 the flip is applied only half the time; use p=1.0 to flip always
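Under the hood, a horizontal flip is just a column reversal. Here is a dependency-free sketch on a nested-list "image" (a hypothetical helper, not part of any library):

```python
def hflip(image):
    """Mirror an image (a list of pixel rows) across its vertical axis.

    Reversing each row reverses the column order, which is exactly
    what a horizontal flip does.
    """
    return [list(reversed(row)) for row in image]

image = [[1, 2, 3],
         [4, 5, 6]]
print(hflip(image))                  # [[3, 2, 1], [6, 5, 4]]
print(hflip(hflip(image)) == image)  # True -- flipping twice is the identity
```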
Created on 2010-02-15 19:07 by JT.Johnson, last changed 2015-02-06 18:47 by BreamoreBoy.
I am running Python 2.6.4 on Windows Vista and when I try to get any command line arguments via sys.argv, it only contains sys.argv[0] and nothing else, even if I supply several parameters. The only third-party module I am using is Pygame, but even before Pygame imports, there is nothing in sys.argv. I've tried creating a shortcut with the args in it, but it still doesn't work.
Note that I also have a couple other Python versions installed (2.5, 2.7a, and 3.1) maybe somehow they are conflicting?
How do you supply parameters to your script?
Did you add the association for ".py" files yourself?
From a command prompt, type:
assoc .py
this displays a string like ".py=Python.File". Now type
ftype Python.File
(actually replace Python.File with the output from the first command)
This should display something like
Python.File="C:\Python26\python.exe" "%1" %*
If %* is missing, then arguments are not passed to the python interpreter.
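To make the role of %* concrete, here is a tiny cross-platform check of such an association string (a hypothetical helper; the real fix is editing the ftype/registry value as discussed in this thread):

```python
def passes_arguments(ftype_value):
    """Return True if a Windows association command forwards arguments.

    Windows substitutes the script path for %1 and the remaining
    command-line arguments for %*; if %* is missing, the script sees
    only sys.argv[0].
    """
    return "%*" in ftype_value

print(passes_arguments('"C:\\Python26\\python.exe" "%1" %*'))  # True
print(passes_arguments('"C:\\Python26\\python.exe" "%1"'))     # False
```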
Actually, I was playing around with it and I found that using the "python" command before it does work.
Now, when I type what you said to do, it shows that there is no association, even though I can run a program without the python command.
This is quite strange...
Sorry for a double-comment, but I thought of posting this after I already submitted the last one and I found no edit button.
Here's some output:
"Microsoft Windows [Version 6.0.6001]
C:\Users\JT>assoc .py
File association not found for extension .py
C:\Users\JT>assoc py
File association not found for extension py
C:\Users\JT\Desktop\Programming Projects>pythonversion.py
2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)]
C:\Users\JT\Desktop\Programming Projects>ftype Python.File
File type 'Python.File' not found or no open command associated with it.
C:\Users\JT\Desktop\Programming Projects>assoc py
File association not found for extension py
C:\Users\JT\Desktop\Programming Projects>assoc .py
File association not found for extension .py
C:\Users\JT\Desktop\Programming Projects>assoc .pyw
File association not found for extension .pyw
C:\Users\JT\Desktop\Programming Projects>assoc pyw
File association not found for extension pyw
C:\Users\JT\Desktop\Programming Projects>assoc pyc
File association not found for extension pyc
C:\Users\JT\Desktop\Programming Projects>assoc .pyc
File association not found for extension .pyc
C:\Users\JT\Desktop\Programming Projects>assoc .pyo
File association not found for extension .pyo
C:\Users\JT\Desktop\Programming Projects>assoc pyo
File association not found for extension pyo"
(Sorry it's so long, I cut out the blank lines as it is)
I'm getting something like this on Windows 7:
C:\>assoc .py
.py=Python.File
C:\>ftype Python.File
Python.File="C:\Python31\py31.exe" "%1" %*
C:\>args.py 1 2 3
Python version: sys.version_info(major=3, minor=1, micro=1, releaselevel='final', serial=0)
Command-line args: 1
['C:\\args.py']
C:\>\python31\py31 args.py 1 2
Python version: sys.version_info(major=3, minor=1, micro=1, releaselevel='final', serial=0)
Command-line args: 3
['args.py', '1', '2']
I've read on a random forum somewhere that % need to be doubled on Windows 7.
Tom, can you try changing the association by typing
FTYPE Python.File="C:\Python31\py31.exe" "%%1" %%*.
Closed issue 8984 as a duplicate of this; merging nosy lists.
Tom, did you ever find a solution to this problem?
There's a similar discussion about Perl on Windows.
From various other sites, and from experiments (thanks Eric Smith) it looks like the associations reported by 'assoc' and 'ftype' aren't necessarily the associations that are actually being used.
Sworddragon: can you get any useful information out of the Windows registry (e.g., using regedit, searching for Python.File, and looking under shell\open\command), or by setting file associations through the Control Panel?
I don't know enough about Python on Windows to know whether there's any hope that this is a problem that can be solved at the Python end, but I'd guess not.
I agree with Mark: there's probably nothing Python can do about this. It's almost certainly an error with how Python is being invoked.
That being said, it would be interesting to see what the registry key HKEY_CLASSES_ROOT\Python.File\shell\open\command contains.
This registry key contains "E:\Python31\python.exe" "%1" %*. I have too 2 python versions installed and manually associated the .py files to Python 3.
I have set now the key to "E:\Python31\python.exe" "%%1" %%* and it works. So Windows XP need double % too. The installer of the next version should consider this.
I made a mistake in the last post. After I have set the value, Python 2 was active and I forgot to set it to Python 3 back. This solution doesn't work. Well, I can't edit or delete the post..
So the real question is is: how does that key get that invalid value? I can't reproduce the problem.
I believe that key you mention is an alias for HKEY_CURRENT_USER\Software\Classes\Applications\python.exe\shell\open\command, which doesn't exist on my XP machine.
I have now uninstalled Python 2 and 3 and reinstalled them. First Python 2 with only "register extension" and "compile files to bytecode". After this I did the same with Python 3. In the "Open with" menu "python" was now registered, which was Python 3 (sys.version[0] from a test script told me). Until this point the arguments were accepted. After this I added the python.exe from Python 2 to the "Open with" menu. This was the point where the key became invalid. Now Python 3 is working correctly but not Python 2. Well, I can now manually use the workaround to fix it, but maybe the problem can be solved with this information.
The problem went away by itself after a while. I suspect a Windows update.
I found this page while encountering the same problem (only one argument, the scriptname, being passed in from the command line), and wanted to post the following workaround. I'm running Vista and using Python 2.6.
In summary I had to have 'python' at the beginning of the command line.
I found:
this did not work: c:\>django-admin startproject mysite
this DID work: c:\>python django-admin startproject mysite
Before finding this fix:
I had tried the 'ftype Python.File' and 'assoc .py' commands mentioned in other posts above and gotten the equivalent of 'not found'.
I actually found the correct info in the registry under HKEY_CURRENT_USER, but 'ftype' and 'assoc' don't appear to read from there.
I modified the registry under HKEY_LOCAL_MACHINE, and then I was getting the responses that were claimed to be needed in msg99369 above, but this did NOT fix the problem.
It was after the above that I found the solution - use 'python' at the beginning of the command line and all args are passed in.
(I did not go back and remove the registry edits that I made to prove conclusively that they are not part of the solution, but I doubt that they are.)
Encountered the same issue with 3.1.2 and 3.1.3 64bit on Win7 64bit. I was able to fix it in registry but did so many changes at once that I'm not able to reproduce (was really annoyed after trying to fix it for half a day...). Anyway, sending my observations:
- the root cause seems to be creation of "python.exe" and "pythonw.exe" entries under HKEY_CLASSES_ROOT. Their open command did not have %*. They were not created under HKEY_LOCAL_MACHINE. They were probably created automatically by the system when manually associating py and pyw files (see below).
- .py and .pyw files were originally associated with py_auto_file and pyw_auto_file in HKCR. The associations were probably created by the system when I manually changed the association of the .py and .pyw files from jython to python through the control panel. The py_auto_file and pyw_auto_file entries seemed to call those python.exe and pythonw.exe entries in the HKCR.
- The assoc and ftype commands changed association in HKLM but it is not propagated automatically into HKCR, not even after restart. After manually deleting .py and .pyw entries from HKCR, they were replaced by correct entries from HKLM.
- BUT!! the system still called open commands under python.exe and pythonw.exe entries in HKCR! (even if .py was associated with Python.File in HKCR and proper Python.File existed even in HKCR!) Only after deleting them, it works as should. But I deleted a lot of other python related entries as well, so this is only assumption.
If anyone else can confirm that deleting python.exe and pythonw.exe from HKCR itself corrects the issue, I think the installation program can check if these entries exist and offer to delete them.
Just for the complete picture, it works now even with .py and .pyw in PATHEXT, so the scripts can be called without an extension.
I'm sorry, I changed version to 3.3 by mistake. Did not want to do it.
I had the same problem with another version of python on Windows 7.
We are using python 2.4.2 for production and it is installed in D:\Tools\Python. For experimentation purposes, I installed Python 2.7 in the usual location. It broke a few things, so I uninstalled both Pythons and reinstalled only 2.4.2.
After that, using python 2.4.2, I got the problem described by this issue. None of my colleagues (who did not install Python 2.7) had the issue.
The registry entry HKEY_CLASSES_ROOT\py_auto_file\shell\open\command was reading: D:\Tools\Python\python.exe "%1"
It was missing %*. I added it and my issue went away.
I finally found a solution from a page on StackOverflow.
I had to add the two characters %*
to the end of the Data value for the Windows registry key:
HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command
So the Data value is now:
"C:\Python26\python.exe" "%1" %*
just as it appears above including the two sets of double quotes.
I had modified or added six other keys but only the one above solved the problem.
I am running Python 2.6 under MS Windows Vista Home Premium 32 bit.
For completeness I tested potential fixes as follows:
I had a program show_args.py which had the two lines:
import sys
print 'argv length is ', len(sys.argv)
In order for show_args.py to receive args from the command line invocation (before I did the registry fix) I had to explicitly call the python interpreter, so ...
Before the fix:
C:\...>python show_args.py aaa bbb ccc
argv length is 4
C:\...>show_args.py aaa bbb ccc
argv length is 1
After the fix:
C:\...>show_args.py aaa bbb ccc
argv length is 4
In hindsight I should have searched the registry for python paths that didn't have the ending %*.
Very briefly, for those unfamiliar with Windows' RegEdit:
Click the Start button and enter regedit in the Search box and the Regedit.exe name will appear in the list above - click it.
Repeatedly do Edit/Find python.exe
until you find a python path of the form: ...\python.exe" "%1"
instead of the form ...\python.exe" "%1" %*
In the right pane of Regedit, under Name click the word (Default), it will turn blue, then select Edit/Modify...
Simply add a space and the two chars %* to the end of the entry in the Value data box, then click OK.
Changes take effect immediately; you don't have to reboot or re-open your prompt window, but if you are using IDLE or another IDE you probably have to close and restart it.
I have v2.7, v3.2, and v3.3 installed on a Win7 64-bit machine and the exact same setup on a Win7 32-bit machine.
The 32-bit works OK. The 64-bit machine had this argv problem, too! (I tried installing either Win32/Win64 version, no difference!)
Adding %* fixed the argv problem, but I noticed there was one more. The wrong version was called.
To sum it up, I had to change only the key
HKEY_CLASSES_ROOT/py_auto_file/shell/open/command
from:
c:\Python32\Python32.exe "%1" %*
to:
c:\Python33\Python33.exe "%1" %*
or to:
c:\windows\py.exe "%1" %*
(for auto-detection, both worked)
I ran into this issue by doing the following steps, though I did not try to reproduce:
1 - install Python 3.4
2 - have windows Always Open .py files with Python
3 - install Python 2.7
Then I encountered issues where all .py scripts were opened with Python 2.7. After some unsuccessful attempts I decided to uninstall Python 2.7
4 - remove Python 2.7
At this point, I had to update the file associations because Windows could not find Python 2.7 anymore
5 - update file associations
The issue is obviously that in at least a few places in the registry, the ..\shell\open\command has a value of ..\python.exe "%1"
In my case it was under all roots:
HKEY_CLASSES_ROOT\py_auto_file
HKEY_LOCAL_MACHINE
HKEY_USERS
I corrected only the one under HKEY_USERS by adding %* and it resolved the issue.
@Alecz are you aware of the Python launcher? The easiest way to sort out 3.4 would have been to download the msi file and use the repair option.
Carey Brown wrote:Change line 102 to
calculation = Math.round((converted - 32.0)*(5.0/9.0));
Campbell Ritchie wrote: I didn’t read the whole code, but I can see a serious design error in it. You ought not to be calculating anything inside a frame class. You ought to have a temperature converter class which does the calculations. This is how you ought to have developed your app:
1: Create a Temperature class, or a TemperatureConverter class.
2: Demonstrate the workings of this class with a TemperatureDemo class.
3: Put a GUI round that.
/**
* For the second stage of developing a temperature change app, only.
* Temporary class, not intended to be used again
*/
public class TemperatureDemo
{
/**
* Requires various figures from the command line, eg
* <code>java TemperatureDemo 100 -40 36.9 98.4 0 32 212 1000000 123.45</code>
*@param args the command-line arguments: at least one floating-point number required
*@deprecated This method intended for testing purposes only
*/
@Deprecated public static void main(String[] args)
{
Temperature t1;
Temperature t2;
for (String s : args)
{
t1 = new Temperature(Double.parseDouble(s), TemperatureScales.CELSIUS);
t2 = new Temperature(Double.parseDouble(s), TemperatureScales.FAHRENHEIT);
System.out.printf("%10.2f°C = "%10.2f°F%n", t1.getCelsius(), t1.getFahrenheit());
System.out.printf("%10.2f°C = "%10.2f°F%n", t2.getCelsius(), t2.getFahrenheit());
}
}
}I’t let you work out how the TemperatureScales enum and the Temperature class work. If you use a TemperatureConverter class, that will work completely differently.
Winston Gutkowski wrote:
Because integer division is different from FP division - ie, it truncates. It's also generally worth making all components of an expression the same type, because it saves the code from having to convert them.
BTW, there is a better (in my opinion), but lesser known algorithm, which eliminates the need for having to remember the order in which you add and subtract, viz:
converted_temp = ((original_temp + 40) * factor) - 40
where factor = (5.0/9.0), when converting from F to C, and (9.0/5.0) when converting from C to F.
Winston
Lee Sigauke wrote: . . . I'd like my program to calculate the temperature rather than having predefined methods doing it for me.
Campbell Ritchie wrote:
Lee Sigauke wrote: . . . I'd like my program to calculate the temperature rather than having predefined methods doing it for me.
The predefined methods are part of your program. The whole idea of OO programming is to have one class represent one thing. Your frame class should hold the display together, and your text fields or whatever show the figures. The Temperature class (or similar) records the temperature. Divide your app into different parts. The frame class ought not to do the calculations.
There are several ways you can do it.1: Utility class with static toFahrenheit and toCelsius methods2: Temperature class with celsius and fahrenheit fields, set up in the constructor. You can make such a class immutable. That is similar to what I showed earlier.3: Temperature class (again can be made immutable) recording temperature in one scale, maybe absolute (°K), calculating C and F equivalents as required.4: There is bound to be a 4th version, but I can’t think of it just at the moment. Note this might be easier if you use a TemperatureScale enum as well. You can even enhance the TemperatureScale objects with the conversion factors, 1.8 and 0.55555555555555555555555555555555555.... You can enhance the Temperature class with absolute zero (-273.15°C) and throw an IllegalArgumentException for temperatures below that.
I like the +40-40 algorithm, which I had never seen before. That fits very nicely with the enum recording 1.8 and 0.555555555555555555555555555555555.... Not quite so good for °K or the old French Réaumur scale.
Lee Sigauke wrote: . . . divide an application into different parts doing different things at a time. Thank you for your help, I've learnt more than I though | http://www.coderanch.com/t/587597/java/java/temperature-converter-program-display-negative | CC-MAIN-2014-35 | refinedweb | 650 | 58.99 |
Declaring local variables without using them immediately may unnecessarily increase their scope. This decreases legibility, and increases the likelihood of error.
There are two common cases where a local variable is assigned some default initial
value (typically
null, 0, false, or an empty
String):
tryblock, and are thus declared and initialized just before the
tryblock (in modern code using try-with-resources, this is now relatively rare)
Here,
input and
output are examples of local variables
being initialized to
null, since they need to be visible in both
the
try and
finally blocks.
As well,
line is an example of a loop variable declared and
initialized outside the loop.
Note that this example uses JDK 6, simply to illustrate the point.
In JDK 7, try-with-resources
would be used to automatically close streams, and the issue
of stream references being initialized to
null would not occur.
import java.io.*; /** JDK 6 or before. */ public class ReadWriteTextFile { /** * Fetch the entire contents of a text file, and return it in a String. * This style of implementation does not throw Exceptions to the caller. * * @param aFile is a file which already exists and can be read. */ static public String getContents(File aFile) { //...checks on aFile are elided StringBuilder contents = new StringBuilder(); try { //use buffering, reading one line at a time //FileReader always assumes default encoding is OK! BufferedReader input = new BufferedReader(new FileReader(aFile)); try { String line = null; //not declared within while loop /* * readLine is a bit quirky : * it returns the content of a line MINUS the newline. * it returns null only for the END of the stream. * it returns an empty String if two newlines appear in a row. */ while (( line = input.readLine()) != null){ contents.append(line); contents.append(System.getProperty("line.separator")); } } finally { input.close(); } } catch (IOException ex){ ex.printStackTrace(); } return contents.toString(); } /** * Change the contents of text file in its entirety, overwriting any * existing text. * * This style of implementation throws all exceptions to the caller. * * @param aFile is an existing file which can be written to. * @throws IllegalArgumentException if param does not comply. * @throws FileNotFoundException if the file does not exist. * @throws IOException if problem encountered during write. 
*/ static public void setContents(File aFile, String aContents) throws FileNotFoundException, IOException { if (aFile == null) { throw new IllegalArgumentException("File should not be null."); } if (!aFile.exists()) { throw new FileNotFoundException ("File does not exist: " + aFile); } if (!aFile.isFile()) { throw new IllegalArgumentException("Should not be a directory: " + aFile); } if (!aFile.canWrite()) { throw new IllegalArgumentException("File cannot be written: " + aFile); } //use buffering Writer output = new BufferedWriter(new FileWriter(aFile)); try { //FileWriter always assumes default encoding is OK! output.write( aContents ); } finally { output.close(); } } /** Simple test harness. */ public static void main (String... aArguments) throws IOException { File testFile = new File("C:\\Temp\\blah.txt"); System.out.println("Original file contents: " + getContents(testFile)); setContents(testFile, "The content of this file has been overwritten..."); System.out.println("New file contents: " + getContents(testFile)); } } | http://javapractices.com/topic/TopicAction.do;jsessionid=4D0FE8213FBC2C17B7BA3A279E6C354F?Id=126 | CC-MAIN-2018-26 | refinedweb | 483 | 50.23 |
Part 2, Chapter 4
Linux Installation and Package Management (Topic 2.2)
Many resources describe Linux installation.[1] Despite its title, however, this section's Topic and Objectives do not provide an overview for the installation of any particular Linux distribution. Rather, they focus on four installation topics and two packaging tools as required for LPI Exam 102:
- Objective 1: Design a Hard Disk Layout
- The layout and partitioning of disks is a fundamental concept for almost all computer platforms. Unlike other operating systems though, Linux uses multiple partitions in a unified filesystem. This Objective covers this filesystem layout. Weight: 2.
- Objective 2: Install a Boot Manager
- Booting Linux is a process started by a boot manager. This Objective covers the use of LILO. Weight: 2.
- Objective 3: Make and Install Programs from Source
- The unique advantages of open source software allow the distribution of programs in source code form. This Objective covers compiling and installing programs from source code. (Objectives 5 and 6 deal with binary package management.) Weight: 2.
- Objective 4: Manage Shared Libraries
- One of the efficiencies of modern operating systems is the concept of shared libraries of system software. This Objective provides an overview of shared libraries and their configuration. Weight: 3.
- Objective 5: Use Debian Package Management.
- This topic covers the management of Debian Linux binary packages. Weight: 5.
- Objective 6: Use Red Hat Package Manager (RPM)
- This topic covers the management of RPM binary packages. Weight: 8.
Objective 1: Design a Hard Disk Layout
Part of the installation process for Linux is the design of the hard disk partitioning scheme. If you're used to systems that reside on a single partition, this step may seem to complicate installation. However, there are advantages to splitting the filesystem into multiple partitions, potentially on multiple disks. Details about disks, partitions, and Linux filesystem top-level directories are provided in . This Topic covers considerations for implementing Linux disk layouts.
System Considerations
A variety of factors influence the choice of a disk layout plan for Linux, including:
- The amount of disk space
- The size of the system
- What the system will be used for
- How and where backups will be performed
Limited disk space
Except for read-only filesystems (such as CD-ROMs or a shared /usr partition), most Linux filesystems should have some free space available. Filesystems holding user data should be maintained with a generous amount of free space to accommodate user activity. Unfortunately, if there are many filesystems and all of them contain free space, a significant portion of disk space could be considered wasted. This presents a tradeoff between the number of filesystems in use and the availability of free disk space. Finding the right configuration depends on system requirements and available disk resources.
When disk space is limited, it is desirable to reduce the number of filesystems, since each separate partition must carry its own margin of free space. For example, a small system might combine nearly everything into just two partitions:
- /swap
- 100 MB.
- /
- 850 MB. A large root partition holds everything on the system that's not in /boot.
The /boot partition could be combined with the root partition as long as the entire root partition fits within the 1024-cylinder limit (see "Install a Boot Manager" later in this chapter).
On older systems with smaller hard drives, Linux is often installed by spreading the directory tree across multiple physical disks. This is no different in practice than using multiple partitions on a single disk and often encourages the reuse of older hardware. An additional disk might be dedicated to /home in order to allow a larger work area for the users' home directories.
Larger systems
On larger platforms, functional issues such as backup strategies and required filesystem sizes can dictate disk layout. For example, suppose a file server is to be constructed serving 100 GB of executable datafiles to end-users via NFS. Such as system will have enough resources to compartmentalize various parts of the directory tree into separate filesystems and might look like this:
- /boot
- 50 MB. Keep kernels under the 1024-cylinder limit.
- /swap
- 100 MB.
- /
- 100 MB.
- /usr
- 1 GB. All of the executables in /usr are shared to workstations via read-only NFS.
- /var
- 500 MB. By placing log files in their own partition, they won't threaten system stability if the filesystem is full.
- /tmp
- 100 MB. By placing temporary files in their own partition, they won't threaten system stability if the filesystem is full.
- /home
- 98 GB. The remaining bulk of the disk holds the data files served to end users.
System role.
Backup.
Swap space

A common guideline sets swap space equal to the amount of physical RAM in the machine. For example, if your system has 64 MB of RAM, it would be reasonable to set your swap size to at least 64 MB. Another rule of thumb that predates Linux says swap space should equal three times the main memory size. These are just guidelines, of course, because a system's utilization of virtual memory depends on what the system does and the number and size of processes it runs. Using the size of main memory, or thereabouts, is a good starting point.
Spreading swap space across multiple disk drives can allow better swap performance because multiple accesses can occur concurrently when multiple devices are used. For even better performance, place those disks on separate controllers, increasing bandwidth. For example, you could place half of your planned swap space on each of two IDE disks in your system. Those disks could be attached to the two separate IDE interfaces.
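Such a split can be expressed in /etc/fstab. The fragment below is a hypothetical sketch: two swap partitions on separate IDE disks (and separate controllers), with equal pri= options so the kernel uses both areas in parallel rather than filling one before the other:

```
/dev/hda2    none    swap    sw,pri=1    0 0
/dev/hdc2    none    swap    sw,pri=1    0 0
```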
General Guidelines
Here are some guidelines for partitioning a Linux system:
- Keep the root filesystem ( / ) small by distributing larger portions of the directory tree to other partitions. A small root filesystem is less likely to be corrupted than a large one.
- Separate a small /boot partition below cylinder 1024 for kernels.
- Separate /var. Make certain it is big enough to handle your logs and their rotation scheme, but not so large that disk space is wasted when the rotation is filled.
- Separate /home for systems with multiple users. For production use, put it on a disk array subsystem.
- Set swap space around the same size as the main memory. If possible, try to split the swap space across multiple disks and controllers.
Objective 2: Install a Boot Manager
While it is possible to boot Linux from a floppy disk, most Linux installations boot from the computer's hard disk.[2] This is a two-step process that begins after the system BIOS is initialized and ready to run an operating system. Starting Linux consists of the following two basic phases:
- Run lilo from the boot disk
- It is Linux loader's (LILO's) job to find the selected kernel and get it loaded into memory, including any user-supplied options.
- Launch the Linux kernel and start processes
- LILO starts the loaded kernel. LILO's job at this point is complete and the hardware is placed under the control of the running kernel, which sets up shop and begins running processes.
LILO
The Linux Loader (LILO) is a small utility designed to load the Linux kernel (or the boot sector of another operating system) into memory and start it. A program that performs this function is commonly called a boot loader. While other boot loaders exist, LILO is the most popular and is installed as the default boot loader on most Linux distributions. LILO consists of two parts:
- The boot loader
- This part of LILO is a two-stage program intended to find and load a kernel.[3] The first stage of LILO usually resides in the Master Boot Record (MBR) of the hard disk. This is the code that is started at boot time by the system BIOS. It locates and launches a second, larger stage of the boot loader that resides elsewhere on disk. The second stage offers a user prompt to allow boot-time and kernel image selection options, finds the kernel, loads it into memory, and launches it.
- The lilo command
- Also called the map installer, lilo is used to install and configure the LILO boot loader. The lilo command reads a configuration file, which describes where to find kernel images, video information, the default boot disk, and so on. It encodes this information along with physical disk information and writes it in files for use by the boot loader.
The boot loader
When the system BIOS launches, LILO presents you with the following prompt:
LILO:
The LILO prompt is designed to allow you to select from multiple kernels or operating systems installed on the computer and to pass parameters to the kernel when it is loaded. Pressing the Tab key at the LILO prompt yields a list of available kernel images. One of the listed images will be the default as designated by an asterisk next to the name:
LILO: <TAB>
linux* linux_586_smp experimental
Under many circumstances, you won't need to select a kernel at boot time because LILO will boot the kernel configured as the default during the install process. However, if you later create a new kernel, have special hardware issues, or are operating your system in a dual-boot configuration, you may need to use some of LILO's options to load the kernel or operating system you desire.
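As a hypothetical illustration (the label and device names are invented), a boot-time selection with kernel parameters looks like this at the prompt:

```
LILO: linux single root=/dev/hda3
```

Everything after the image label is handed to the kernel as it loads; here, single requests single-user mode and root= overrides the configured root device.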
The LILO map installer and its configuration file
Before any boot sequence can complete from your hard disk, the boot loader and associated information must be installed by the LILO map installer utility. The lilo command writes the portion of LILO that resides in the MBR, customized for your particular system. Your installation program runs lilo for you initially; you'll run it again manually whenever you build a new kernel or change the LILO configuration.
lilo
Syntax
lilo [options]
The lilo map installer reads a configuration file and writes a map file, which contains information needed by the boot loader to locate and launch Linux kernels or other operating systems.
Frequently used options
- -C config_ file
- Read the config_ file file instead of the default /etc/lilo.conf.
- -m map_file
- Write map_ file in place of the default as specified in the configuration file.
- -q
- Query the current configuration.
- -v
- Increase verbosity.
LILO's configuration file contains options and kernel image information. An array of options is available. Some are global, affecting LILO overall, while others are specific to a particular listed kernel image. Most basic Linux installations use only a few of the configuration options. Example 2-1 shows a simple LILO configuration file.
Example 2-1: Sample /etc/lilo.conf File
boot = /dev/hda
timeout = 50
prompt
read-only
map=/boot/map
install=/boot/boot.b
image = /boot/vmlinuz-2.2.5-15
label = linux
root = /dev/hda1
Each of these lines is described in the following list:
-
boot
- The boot directive tells lilo the name of the hard disk partition device that contains the boot sector. For PCs with IDE disk drives, the devices will be /dev/hda, /dev/hdb, and so on.
-
timeout
- The timeout directive sets the timeout in tenths of a second (deciseconds) for any user input from the keyboard. To enable an unattended reboot, this parameter is required if the prompt directive is used.
-
prompt
- This directive instructs the boot loader to prompt the user. This behavior can also be triggered without the prompt directive if the user holds down the Shift, Ctrl, or Alt key when LILO starts.
-
read-only
- This directive specifies that the root filesystem should initially be mounted read-only. Typically, the system startup procedure will remount it later as read/write.
-
map
- The map directive specifies the location of the map file, which defaults to /boot/map.
-
install
- The install directive specifies the file to install as the new boot sector, which defaults to /boot/boot.b.
-
image
- An image line specifies a kernel image to offer for boot. It points to a specific kernel file. Multiple image lines may be used to configure LILO to boot multiple kernels and operating systems.
-
label
- The optional label parameter is used after an image line and offers a label for that image. This label can be anything and generally describes the kernel image. Examples include linux, or perhaps smp for a multiprocessing kernel.
-
root
- This parameter is used after each image line and specifies the device to be mounted as root for that image.
There is more to configuring and setting up LILO, but a detailed knowledge of LILO is not required for this LPI Objective. It is important to review one or two sample LILO configurations to make sense of the boot process. A discussion on using LILO to boot multiple kernels is presented in .
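For reference, booting a non-Linux operating system uses an other directive in place of image. The fragment below is a sketch only; the partition assignments are hypothetical:

```
image = /boot/vmlinuz-2.2.5-15
    label = linux
    root = /dev/hda1
other = /dev/hda2
    label = windows
```

With such a configuration, typing windows at the LILO prompt chain-loads the boot sector of /dev/hda2 instead of a Linux kernel.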
LILO locations
During installation, LILO can be placed either in the boot sector of the disk or in your root partition. If the system is intended as a Linux-only system, you won't need to worry about other boot loaders, and LILO can safely be placed into the boot sector. However, if you're running another operating system such as Windows, you should place its boot loader in the boot sector.[4]

Objective 3: Make and Install Programs from Source
Open source software is credited with offering value that rivals or even exceeds that of proprietary vendors' products. While binary distributions make installation simple, you sometimes won't have access to a binary package. In these cases, you'll have to compile the program from scratch.
Getting Open Source and Free Software
Source code for the software that makes up a Linux distribution is available from a variety of sources. Your distribution media contain both source code and compiled binary forms of many software projects. Since much of the code that comes with Linux originates from the Free Software Foundation (FSF), the GNU web site contains a huge array of software.[5] Major projects, such as Apache, distribute their own code. Whatever outlet you choose, the source code must be packaged for your use, and among the most popular packaging methods for source code is the tarball.
What's a tarball?
Code for a significant project that a software developer wishes to distribute is originally stored in a hierarchical tree of directories. Included are the source code (in the C language), a Makefile, and some documentation. In order to share the code, the entire tree must be encapsulated in a way that is efficient and easy to send and store electronically. A common method of doing this is to use tar to create a single tarfile containing the directory's contents, and then use gzip to compress it for efficiency. The resulting compressed file is referred to as a tarball. This method of distribution is popular because both tar and gzip are widely available and understood, ensuring a wide audience. A tarball is usually indicated by the use of the multiple extensions .tar and .gz, put together into .tar.gz. A combined single extension of .tgz is also popular.
Opening a tarball
The contents of a tarball are obtained through a two-step process. The file is first uncompressed with gzip and then extracted with tar. Following is an example, starting with tarball.tar.gz:
# gzip -d tarball.tar.gz
# tar xvf tarball.tar
The -d option to gzip indicates "decompress mode." If you prefer, you can use gunzip in place of gzip -d to do the same thing:
# gunzip tarball.tar.gz
# tar xvf tarball.tar
You can also avoid the intermediate unzipped file by piping the output of gzip straight into tar:
# gzip -dc tarball.tar.gz | tar xv
In this case, the -c option tells gzip to write the decompressed data to standard output, leaving the compressed file in place. By avoiding the full-sized intermediate file, disk space is saved. For even more convenience, avoid using gzip entirely and use the decompression capability in tar:[6]
# tar zxvf tarball.tar.gz
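Putting the pieces together, this hedged shell sketch creates a small tarball and then opens it again with the single-step z option (the project name and contents are invented for the example):

```shell
# Build a tiny directory tree to package (names are hypothetical).
mkdir -p project
echo 'hello' > project/README

# Create the compressed tarball in one step (z = gzip, c = create).
tar zcf project.tar.gz project

# Remove the original, then list and extract the tarball.
rm -rf project
tar ztf project.tar.gz        # t = list contents without extracting
tar zxf project.tar.gz        # x = extract, recreating project/
```

Adding v to any of these commands makes tar report each file name as it works, as shown in the examples above.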
Compiling Open Source Software
Once you've extracted the source code, you're ready to compile it. You'll need to have the appropriate tools available on your system, namely a configure script, the GNU C compiler, gcc, and the dependency checker, make.
configure
Most larger source code packages include a configure script[7] located at the top of the source code tree. This script needs no modification or configuration from the user. When it executes, it examines your system to verify the existence of a compiler, libraries, utilities, and other items necessary for a successful compile. It uses the information it finds to produce a custom Makefile for the software package on your particular system. If configure finds that something is missing, it fails and gives you a terse but descriptive message. configure succeeds in most cases, leaving you ready to begin the actual compile process.
make
make is a utility for compiling software. When multiple source-code files are used in a project, it is rarely necessary to compile all of them for every build of the executable. Instead, only the source files that have changed since the last compilation really need to be compiled again.
make works by defining targets and their dependencies. The ultimate target in a software build is the executable file or files. They depend on object files, which in turn depend on source-code files. When a source file is edited, its date is more recent than that of the last compiled object. make is designed to automatically handle these dependencies and do the right thing.
To illustrate the basic idea, consider this trivial and silly example. Suppose you're writing a program with code in two files. The C file, main.c, holds the main( ) function:
int main( ) {
printit( );
}
and printit.c contains the printit( ) function, which is called by main( ):
#include <stdio.h>
void printit( ) {
printf("Hello, world\n");
}
Both source files must be compiled into objects main.o and printit.o, and then linked together to form an executable application called hw. In this scenario, hw depends on the two object files, a relationship that could be defined like this:
hw: main.o printit.o
Using this syntax, the dependency of the object files on the source files would look like this:
main.o: main.c
printit.o: printit.c
With these three lines, there is a clear picture of the dependencies involved in the project. The next step is to add the commands necessary to satisfy each of the dependencies. Compiler directives are added next:
gcc -c main.c
gcc -c printit.c
gcc -o hw main.o printit.o
To allow for a change of compilers in the future, a variable can be defined to hold the actual compiler name:
CC = gcc
To use the variable, use the syntax $(variable) for substitution of the contents of the variable. Combining all this, the result is:
CC = gcc
hw: main.o printit.o
$(CC) -o hw main.o printit.o
main.o: main.c
$(CC) -c main.c
printit.o: printit.c
$(CC) -c printit.c
This illustrates a simple Makefile, the default control file for make. It defines three targets: hw (the application), and main.o and printit.o (the two object files). A full compilation of the hw program is invoked by running make and specifying hw as the desired target:
# make hw
gcc -c main.c
gcc -c printit.c
gcc -o hw main.o printit.o
make automatically expects to find its instructions in Makefile. If a subsequent change is made to one of the source files, make will handle the dependency:
# touch printit.c
# make hw
gcc -c printit.c
gcc -o hw main.o printit.o
This trivial example doesn't illustrate a real-world use of make or the Makefile syntax. make also has powerful rule sets that allow commands for known dependency relationships to be issued automatically. These rules would shorten even this tiny Makefile.
Installing the compiled software
Most mature source-code projects come with a predetermined location in the filesystem for the executable files created by compilation. In many cases, they're expected to go to /usr/local/bin. To facilitate installation to these default locations, many Makefiles contain a special target called install. By executing the make install command, files are copied and set with the appropriate attributes.
Example: Compiling bash
GNU's bash shell is presented here as an example of the process of compiling. You can find a compressed tarball of the bash source at the GNU FTP site. Multiple versions might be available. Version 2.03 is used in this example (you will find more recent versions). The compressed tarball is bash-2.03.tar.gz. As you can see by its name, it is a tar file that has been compressed with gzip. To uncompress the contents, use the compression option in tar:
# tar zxvf bash-2.03.tar.gz
bash-2.03/
bash-2.03/CWRU/
bash-2.03/CWRU/misc/
bash-2.03/CWRU/misc/open-files.c
bash-2.03/CWRU/misc/sigs.c
bash-2.03/CWRU/misc/pid.c
... (extraction continues) ...
Next move into the new directory, take a look around, and read some basic documentation:
# cd bash-2.03
# ls
AUTHORS NEWS
CHANGES NOTES
COMPAT README
COPYING Y2K
CWRU aclocal.m4
INSTALL alias.c
MANIFEST alias.h
Makefile.in ansi_stdlib.h
... (listing continues) ...
# less README
The build process for bash is started by using the dot-slash prefix to launch configure:
# ./configure
creating cache ./config.cache
checking host system type... i686-pc-linux-gnu
Beginning configuration for bash-2.03 for i686-pc-linux-gnu
checking for gcc... gcc
checking whether the C compiler (gcc ) works... yes
checking whether the C compiler (gcc ) is a
cross-compiler... no
checking whether we are using GNU C... yes
checking whether gcc accepts -g... yes
checking whether large file support needs explicit
enabling... yes
checking for POSIXized ISC... no
checking how to run the C preprocessor... gcc -E
... (configure continues) ...
Next, compile:
# make
/bin/sh ./support/mkversion.sh -b -s release -d 2.03 \
-p 0 -o newversion.h && mv newversion.h version.h
***********************************************************
* *
* Making Bash-2.03.0-release for a i686 running linux-gnu
* *
***********************************************************
rm -f shell.o
gcc -DPROGRAM='"bash"' -DCONF_HOSTTYPE='"i686"' \
-DCONF_OSTYPE='"linux-gnu"' -DCONF_MACHTYPE='"i686
-pc-linux-gnu"' -DCONF_VENDOR='"pc"' -DSHELL \
-DHAVE_CONFIG_H -D_FILE_OFFSET_BITS=64 -I. -I. -I./
lib -I/usr/local/include -g -O2 -c shell.c
rm -f eval.o
... (compile continues) ...
If the compile yields fatal errors, make terminates and the errors must be addressed before installation. Errors might include problems with the source code (unlikely), missing header files or libraries, and other problems. Error messages will usually be descriptive enough to lead you to the source of the problem.
The final step of installation requires that you are logged in as root in order to copy the files to the system directories:
# make install
/usr/bin/install -c -m 0755 bash /usr/local/bin/bash
/usr/bin/install -c -m 0755 bashbug /usr/local/bin/bashbug
( cd ./doc ; make \
man1dir=/usr/local/man/man1 man1ext=1 \
man3dir=/usr/local/man/man3 man3ext=3 \
infodir=/usr/local/info install )
make[1]: Entering directory `/home/ftp/bash-2.03/doc'
test -d /usr/local/man/man1 || /bin/sh ../support/mkdirs /usr/local/man/man1
test -d /usr/local/info || /bin/sh ../support/mkdirs
/usr/local/info
/usr/bin/install -c -m 644 ./bash.1
/usr/local/man/man1/bash.1
/usr/bin/install -c -m 644 ./bashbug.1
/usr/local/man/man1/bashbug.1
/usr/bin/install -c -m 644 ./bashref.info
/usr/local/info/bash.info
if /bin/sh -c 'install-info --version'
>/dev/null 2>&1; then \
install-info --dir-file=/usr/local/info/dir
/usr/local/info/bash.info; \
else true; fi
make[1]: Leaving directory `/home/ftp/bash-2.03/doc'
The installation places the new version of bash in /usr/local/bin. Now, two working versions of bash are available on the system:
# which bash
/bin/bash
# /bin/bash -version
GNU bash, version 1.14.7(1)
# /usr/local/bin/bash -version
GNU bash, version 2.03.0(1)-release (i686-pc-linux-gnu)
Objective 4: Manage Shared Libraries
When a program is compiled under Linux, many of the functions required by the program are linked from system libraries that handle disks, memory, and other functions. For example, when printf( ) is used in a program, the programmer doesn't provide the printf( ) source code, but instead expects that the system already has a library containing such functions. When the compiler needs to link the code for printf( ), it can be found in a system library and copied into the executable. A program that contains executable code from these libraries is said to be statically linked because it stands alone, requiring no additional code at runtime.
Statically linked programs can have a few liabilities. First, they tend to get large, because they include executables for all of the library functions linked into them. Also, memory is wasted when many different programs running concurrently contain the same library functions. To avoid these problems, many programs are dynamically linked. Such programs utilize the same routines but don't contain the library code. Instead, they are linked into the executable at runtime. This dynamic linking process allows multiple programs to use the same library code in memory and makes executable files smaller. Dynamically linked libraries are shared among many applications and are thus called shared libraries. A full discussion of libraries is beyond the scope of the LPIC Level 1 exams. However, a general understanding of some configuration techniques is required.
Shared Library Dependencies
Any program that is dynamically linked will require at least a few shared libraries. If the required libraries don't exist or can't be found, the program will fail to run. This could happen, for example, if you attempt to run an application written for the GNOME graphical environment but haven't installed the required GTK+ libraries. Simply installing the correct libraries should eliminate such problems. The ldd utility can be used to determine which libraries are necessary for a particular executable.
ldd
Syntax
ldd programs
Description
Display shared libraries required by each of the programs listed on the command line. The results indicate the name of the library and where the library is expected to be in the filesystem.
Example
In Objective 3, a trivial executable called hw was created. Despite its small size, however, hw requires two shared libraries:
# ldd /home/jdean/hw
/home/jdean/hw:
libc.so.6 => /lib/libc.so.6 (0x40018000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
The bash shell requires three shared libraries:
# ldd /bin/bash
(... three shared libraries listed ...)
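The `name => path (load address)` lines that ldd prints are easy to pick apart programmatically. As an illustration (not part of the book), a small Python sketch that parses ldd-style output:

```python
import re

# Matches ldd output lines such as:
#   libc.so.6 => /lib/libc.so.6 (0x40018000)
LDD_LINE = re.compile(r"^\s*(\S+)\s*=>\s*(\S+)\s*\((0x[0-9a-fA-F]+)\)")

def parse_ldd(output):
    """Return a dict mapping each required library name to its resolved path."""
    libs = {}
    for line in output.splitlines():
        m = LDD_LINE.match(line)
        if m:
            name, path, _addr = m.groups()
            libs[name] = path
    return libs

sample = """/home/jdean/hw:
\tlibc.so.6 => /lib/libc.so.6 (0x40018000)
\t/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)"""
print(parse_ldd(sample))
```

A library that ld.so cannot resolve shows up in ldd's output as `name => not found`; this sketch simply skips such lines.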
Linking Shared Libraries
Dynamically linked executables are examined at runtime by the shared object dynamic linker, ld.so. This program looks for dependencies in the executable being loaded and attempts to satisfy any unresolved links to system-shared libraries. If ld.so can't find a specified library, it fails, and the executable won't run.
To illustrate this, let's assume that the
printit( )function from the hw example in Objective 3 is moved to a shared library instead of being compiled into the program. The custom library is called libprintit.so and stored in /usr/local/lib. hw is reconfigured and recompiled to use the new library.[8] By default, ld.so doesn't expect to look in /usr/local/lib for libraries, and fails to find
printit( )at runtime:
# ./hw
./hw: error in loading shared libraries: libprintit.so:
cannot open shared object file: No such file or directory
To find the new library, ld.so must be instructed to look in /usr/local/lib. There are a few ways to do this. One simple way is to add a colon-separated list of directories to the shell environment variable
LD_LIBRARY_PATH, which will prompt ld.so to look in any directories it finds there. However, this method may not be appropriate for system libraries, because users might not set their
LD_LIBRARY_PATH correctly.
To make the search of /usr/local/lib part of the default behavior for ld.so, files in the new directory must be included in an index of library names and locations. This index is /etc/ld.so.cache. It's a binary file, which means it can be read quickly by ld.so. To add the new library entry to the cache, its directory is first added to the ld.so.conf file, which contains directories to be indexed by the ldconfig utility.
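To make that search order concrete, here is a deliberately simplified Python model of the lookup ld.so performs. The real dynamic linker also honours RPATH entries, hardware-capability subdirectories, and more, so treat this purely as a sketch; `find_library`, `cache`, and `dir_contents` are invented names for this illustration:

```python
def find_library(name, ld_library_path, cache, dir_contents):
    """Toy model of ld.so's search order for a shared library `name`.

    `ld_library_path` is the colon-separated LD_LIBRARY_PATH string,
    `cache` models /etc/ld.so.cache as a dict of name -> full path, and
    `dir_contents` models the filesystem as a dict of directory -> set
    of library names (all assumptions of this sketch, not real APIs).
    """
    # 1. Directories named in LD_LIBRARY_PATH, in order.
    for d in filter(None, ld_library_path.split(":")):
        if name in dir_contents.get(d, ()):
            return d + "/" + name
    # 2. The index built by ldconfig (/etc/ld.so.cache).
    if name in cache:
        return cache[name]
    # 3. The trusted default directories.
    for d in ("/lib", "/usr/lib"):
        if name in dir_contents.get(d, ()):
            return d + "/" + name
    return None  # ld.so would refuse to run the program at this point

fs = {"/usr/local/lib": {"libprintit.so"}, "/lib": {"libc.so.6"}}
# Before ldconfig indexes /usr/local/lib, only LD_LIBRARY_PATH helps:
print(find_library("libprintit.so", "", {}, fs))                # None
print(find_library("libprintit.so", "/usr/local/lib", {}, fs))  # /usr/local/lib/libprintit.so
# After running ldconfig, the cache resolves it with no environment help:
cache = {"libprintit.so": "/usr/local/lib/libprintit.so"}
print(find_library("libprintit.so", "", cache, fs))             # /usr/local/lib/libprintit.so
```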
ldconfig
Syntax
ldconfig [options] lib_dirs
Description
Update the ld.so cache file with shared libraries specified on the command line in lib_dirs, in trusted directories /usr/lib and /lib, and in the directories found in /etc/ld.so.conf.
Frequently used options
- -p
- Display the contents of the current cache instead of recreating it.
- -v
- Verbose mode. Display progress during execution.
Example 1
Examine the contents of the ld.so library cache:
# ldconfig -p
299 libs found in cache `/etc/ld.so.cache' (version 1.7.0)
libzvt.so.2 (libc6) => /usr/lib/libzvt.so.2
libz.so.1 (libc6) => /usr/lib/libz.so.1
libz.so.1 (ELF) => /usr/i486-linux-libc5/lib/libz.so.1
libz.so (libc6) => /usr/lib/libz.so
libx11amp.so.0 (libc6) => /usr/X11R6/lib/libx11amp.so.0
libxml.so.0 (libc6) => /usr/lib/libxml.so.0
(... listing continues ...)
Example 2
Look for a specific library entry in the cache:
# ldconfig -p | grep "printit"
libprintit.so (libc6) => /usr/local/lib/libprintit.so
Example 3
Rebuild the cache:
# ldconfig
After /usr/local/lib is added, ld.so.conf might look like this:
/usr/lib
/usr/i486-linux-libc5/lib
/usr/X11R6/lib
/usr/local/lib
Next, ldconfig is run to include libraries found in /usr/local/lib in /etc/ld.so.cache:
# ldconfig
# ./hw
Hello, world
Now the hw program can execute correctly because ld.so can find libprintit.so in /usr/local/lib. It is important to run ldconfig after any changes in system libraries to be sure that the cache is up-to-date.
Objective 5: Use Debian Package Management
The Debian package management system is a versatile and automated suite of tools used to acquire and manage software packages for Debian Linux. The system automatically handles many of the management details associated with interdependent software running on your system.
Debian Package Management Overview
A Debian package filename is built from several parts:
- A package name
- A version number
- Most package versions are the same as that of the software they contain, thus the format of package versions varies from package to package. Most are numeric, with major, patch, and release numbers, but other information may appear as well. Typical versions are 0.6.7-7, 0.96a-14, 6.05, 80b2-8, and 2.0.7.19981211. The version is separated from the package name with an underscore.
- A file extension
- By default, all Debian packages end with the .deb file extension.
Figure 2-1 illustrates a Debian package name.
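Because the underscore separates the name from the version and the extension is fixed, taking such a filename apart is mechanical. An illustrative Python sketch (not from the book; note that later Debian filenames also carry an architecture field, which this simple two-part form ignores):

```python
def parse_deb_filename(filename):
    """Split a simple Debian package filename into (name, version).

    Assumes the two-part <name>_<version>.deb form described above.
    """
    if not filename.endswith(".deb"):
        raise ValueError("not a .deb file: " + filename)
    stem = filename[: -len(".deb")]
    name, sep, version = stem.partition("_")
    if not sep:
        raise ValueError("no underscore separating name and version")
    return name, version

print(parse_deb_filename("foo_0.6.7-7.deb"))  # ('foo', '0.6.7-7')
```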
Managing Debian Packages
dpkg
Syntax
dpkg [options] action
Description
The Debian package manager command, dpkg, consists of an action that specifies a major mode of operation as well as zero or more options, which modify the action's behavior.
The dpkg command maintains package information in /var/lib/dpkg. Two files located there of particular interest are:
- available
- The list of all available packages.
- status
- Contains package attributes, such as whether it is installed or marked for removal.
These files are modified by dpkg, dselect, and apt-get, and it is unlikely that they will ever need to be edited.
Frequently used options
- -E
- Using this option, dpkg will not overwrite a previously installed package of the same version.
- -G
- Using this option, dpkg will not overwrite a previously installed package with an older version of that same package.
- -R (also --recursive)
- Recursively process package files in specified subdirectories. Works with -i, --install, --unpack, and so on.
Frequently used actions
- --configure package
- Configure an unpacked package. This involves setup of configuration files.
- -i package_file (also --install package_file)
- Install the package contained in package_file. This involves unpacking the package (backing up any files from a previously installed version, so they can be restored if something goes wrong) and configuring it.
- -S search_pattern (also --search search_pattern)
- Search for a filename matching search_pattern from installed packages.
apt-get
Syntax
apt-get [options] [command] [package_name ...]
Description
apt-get is a command-line interface to APT, the Advanced Package Tool; it retrieves packages (and the packages they depend on) from configured archives and installs, upgrades, or removes them.
Frequently used commands
- dist-upgrade
- This command is used to automatically upgrade to new versions of Debian Linux.
- install
- The install command is used to install or upgrade one or more packages by name.
- remove
- This command is used to remove the specified packages.
- update
- Running apt-get update fetches a list of currently available packages. This is typically done before any changes are made to existing packages.
- upgrade
- The upgrade command is used to safely upgrade a system's complete set of packages to current versions. Before changes are made, the list of affected packages is displayed and the user is prompted to continue; using the -y option to apt-get would eliminate this interaction. (apt-get reads its list of package archives from /etc/apt/sources.list; this file is not in the Objectives for Exam 102.)
dselect
Syntax
dselect
Description
dselect is an interactive, full-screen frontend to dpkg that is used to browse the list of available packages and select those to be installed or removed.
alien
Syntax
alien [--to-deb] [--patch=patchfile] [options] file
Description
Convert to or install a non-Debian (or "alien") package. Supported package types include Red Hat .rpm, Stampede .slp, Slackware .tgz, and generic .tar.gz files. rpm must also be installed on the system in order to convert an RPM package into a .deb package. The alien command produces an output package in Debian format by default after conversion.
Frequently used option
- -i
- Automatically install the output package and remove the converted package file.
Example
Install a non-Debian package on a Debian system using alien with the -i option:
# alien -i package.rpm
Objective 6: Use Red Hat Package Manager (RPM)
The Red Hat Package Manager is among the most popular methods for the distribution of software for Linux and is installed by default on many distributions. It automatically handles many of the management details associated with interdependent software running on your system.
RPM Overview
RPM maintains a database of installed packages and their files (stored under /var/lib/rpm); it's consulted on a file-by-file basis for dependencies when packages are removed, queried, and installed.
As with Debian packages, RPM filenames are built from several parts:
- A package name
- A version number
- Most package versions are the same as that of the software they contain, thus the format of package versions varies from package to package. Most are numeric, with major, patch, and release numbers, but other information may appear as well. Typical versions are 3.0beta5-7, 1.05a-4, 2.7-5, 1.10.5-2, 1.1.1pre2-2, 1.14r4-4, 6.5.2-free3-rsaref, and 0.9_alpha3-6. The version is separated from the name by a hyphen.
- Architecture
- Packages containing binary (compiled) files are by their nature specific to a particular type of system. For PCs, the RPM architecture designation is i386, meaning the Intel 80386 and subsequent line of microprocessors and compatibles. For Sun and Sun-compatible processors, the architecture is sparc. The architecture is separated from the version with a dot.
- A .rpm extension
- All RPM files end with .rpm extension by default.
An RPM filename is constructed by tying these elements together in one long string, as shown in Figure 2-2.
As you can see, there are three uses for hyphens in RPM filenames. They appear as word separators in package names, as a delimiter between names and versions, and as part of the version. This may be confusing at first, but the version is usually obvious, making the use of hyphens unambiguous.[9]
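Working from the right-hand end of the filename removes most of the guesswork: the final dot-separated fields are the extension and architecture, and the last two hyphens then delimit the release and version. A hedged Python sketch (it still mis-handles versions that themselves contain hyphens, such as 6.5.2-free3-rsaref above, which is exactly the ambiguity being described):

```python
def parse_rpm_filename(filename):
    """Split an RPM filename into (name, version, release, arch).

    Assumes the conventional name-version-release.arch.rpm layout.
    """
    stem = filename
    if stem.endswith(".rpm"):
        stem = stem[: -len(".rpm")]
    stem, dot, arch = stem.rpartition(".")
    if not dot:
        raise ValueError("no architecture field in: " + filename)
    # rsplit from the right keeps hyphens inside the package name intact.
    name, version, release = stem.rsplit("-", 2)
    return name, version, release, arch

print(parse_rpm_filename("netscape-communicator-4.72-3.i386.rpm"))
# ('netscape-communicator', '4.72', '3', 'i386')
```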
Running rpm
The rpm command provides for the installation, removal, upgrade, verification, and other management of RPM packages and has a bewildering array of options. Some are of the traditional single-letter style, while others are the --option variety. In most cases, both styles exist and are interchangeable. At first glance, configuring rpm may appear to be a bit daunting. However, its operation is segmented into modes, which are selected with one of the major action options described next (install/upgrade, uninstall, query, and verify).
rpm
Syntax
rpm -i (also rpm --install),
rpm -U (also rpm --upgrade)
rpm -e (also --uninstall)
rpm -q (also --query)
rpm -V
Install/Upgrade mode
The install mode ( rpm -i) is used to install new packages. A variant of install mode is the upgrade mode ( rpm -U), where an installed package is upgraded to a more recent version.
Frequently used install- and upgrade-mode options
- --force
- This option allows the replacement of existing packages and of files from previously installed packages; for upgrades, it allows the replacement of a newer package with an older one.
- -h (also --hash)
- This option adds a string of 50 hash marks (#) during installation as a sort of progress indicator.
- --nodeps
- rpm will skip dependency checking with this option enabled. This allows you to install a package without regard to dependencies.
- --test
- This option will run through all the motions except for actually writing files; it's useful to verify that a package will install correctly prior to making the attempt. Note that verbose and hash options cannot be used with --test, but -vv can.
- -v
- This option sets verbose mode.
- -vv
- This option sets really verbose mode, displaying debugging information.
Example 1
Attempt to install a package that depends on another package that is not yet installed:
# rpm -iv netscape-communicator-4.72-3.i386.rpm
error: failed dependencies:
netscape-common = 4.72 is needed by
netscape-communicator-4.72-3
To correct the problem, the dependency must first be satisfied. In this example, netscape-communicator is dependent on netscape-common, which is installed first:
# rpm -iv netscape-common-4.72-3.i386.rpm
netscape-common
# rpm -iv netscape-communicator-4.72-3.i386.rpm
netscape-communicator
Uninstall mode
This mode is used to remove installed packages from the system. By default, rpm uninstalls a package only if no other packages are dependent on it.
Frequently used uninstall-mode options
- --nodeps
- rpm skips dependency checking with this option enabled.
- --test
- This option runs through all the motions except for actually removing anything; it's useful to verify that a package can be removed cleanly.
Example
Attempt to remove a package upon which another package depends:
# rpm -e netscape-common
error: removing these packages would break dependencies:
netscape-common = 4.72 is needed by
netscape-communicator-4.72-3
Query mode
Installed packages and raw package files can be queried using the rpm -q command. Query-mode options exist for package and information selection.
Frequently used query-mode options
- -a (also --all)
- Query all installed packages.
- -f filename (also --file filename)
- Determine which installed package a particular file belongs to.
- -i
- Display summary information about a package.
- -l (also --list)
- List the files contained in a package.
- -p package_file
- Query an uninstalled package file rather than the database of installed packages.
Example 1
To determine the version of the software contained in an RPM file, use the query and package information options:
# rpm -qpi xv-3.10a-13.i386.rpm | grep Version
Version : 3.10a Vendor: Red Hat Software
For installed packages, omit the -p option and specify a package name instead of a package filename:
# rpm -qi kernel-source | grep Version
Version : 2.2.5 Vendor: Red Hat Software
Example 2
Enter query mode and list the files contained in a package:
# rpm -ql kernel-source
/usr/src/linux-2.2.5/COPYING
/usr/src/linux-2.2.5/CREDITS
/usr/src/linux-2.2.5/Documentation
/usr/src/linux-2.2.5/Documentation/00-INDEX
/usr/src/linux-2.2.5/Documentation/ARM-README
(... listing continues ...)
Example 5
Determine the package from which a particular file was installed. Of course, not all files originate from packages:
# rpm -qf /etc/issue
file /etc/issue is not owned by any package
Those that are package members look like this:
# rpm -qf /etc/aliases
sendmail-8.9.3-10
Example 6
List the packages that have been installed on the system (all or a subset):
# rpm -qa
(... hundreds of packages are listed ...)
To search for a subset with kernel in the name, pipe the previous command to grep:
# rpm -qa | grep kernel
kernel-headers-2.2.5-15
kernel-2.2.5-15
kernel-pcmcia-cs-2.2.5-15
kernel-smp-2.2.5-15
kernel-source-2.2.5-15
kernelcfg-0.5-5
kernel-ibcs-2.2.5-15
kernel-doc-2.2.5-15
Verify mode
Files from installed packages can be compared against their expected configuration from the RPM database by using rpm -V. For each file that fails verification, the output indicates which attributes (such as size, checksum, permissions, and ownership) differ from the recorded values.
Frequently used verify-mode options
- --nofiles
- Ignores missing files.
- --nomd5
- Ignores MD5 checksum errors.
- --nopgp
- Ignores PGP checking errors.
Additional operational modes
There are also modes in RPM for building, rebuilding, signing, and checking the signature of RPM files; however, these are beyond the scope of the LPIC Level 1 exams.
1. One excellent resource is Running Linux, Third Edition, by Matt Welsh, Matthias Kalle Dalheimer, and Lar Kaufman (O'Reilly & Associates).
2. This isn't to say that you can't boot from other media such as floppies--many people do.
3. It's a two-stage operation because the boot sector of the disk is too small to hold the entire boot loader program. The code located in the boot sector is compact because its only function is to launch the second stage, which is the interactive portion.
4. Multiple-boot and multiple-OS configurations are beyond the scope of the LPIC Level 1 exams.
5. Not just for Linux, either. Although Linux distributions are largely made up of GNU software, that software runs on many other Unix and Unix-like operating systems, including the various flavors of BSD (e.g., FreeBSD, NetBSD, and OpenBSD).
6. GNU tar offers compression; older tar programs didn't.
7. configure is produced for you by the programmer using the autoconf utility. autoconf is beyond the scope of the LPIC Level 1 exams.
8. Though not complicated, the compilation of libprintit.so and hw is beyond the scope of the LPIC Level 1 exams.
9. Perhaps this won't be clear at first glance, but once you're used to RPM names you'll know what to expect.
Back to: LPI Linux Certification in a Nutshell
© 2001, O'Reilly & Associates, Inc.
Creating a Java Class with Variables (Program Statement+Concepts+Sample Programs+Output)
PROGRAM STATEMENT
Create a Java class called Student with the following details as variables within it.
(i) USN
(ii) Name
(iii) Branch
(iv) Phone
Write a Java program to create n Student objects and print the USN, Name, Branch, and Phone of these objects with suitable headings.
CONCEPT
In Java, everything is encapsulated in classes. The class is the core of the Java language. A class can be defined as a
template/blueprint that describes the behaviors/states of a particular entity. A class defines a new data type. This
type can be used to create an object of that type. An object is an instance of the class. You may also think of it as the physical
existence of a logical template, the class.
A class is declared using the class keyword. A class contains both data and code that operate on that data. The data
or variables defined within a class are called instance variables and the code that operates on this data is known
as methods.
class Student {
    String USN, name, branch;
    int phoneno;
}
An object of a class is created using the new operator. This process is called instantiating an object or creating an object instance. In the following statement, obj is an instance/object of the Student class.
Student obj=new Student();
An array of objects is created just like an array of primitive type data items in the following way.
Student[] obj_Array = new Student[7];
However, in this particular case, we may use a for loop, since all Student objects are created with the same default constructor.
for (int i = 0; i < obj_Array.length; i++) {
    obj_Array[i] = new Student();
}
Constructor
A constructor in Java is a special type of method that is used to initialize an object. A Java constructor is invoked at the time of object creation. It constructs the values, i.e. provides data for the object, which is why it is known as a constructor.
Types of java constructors
There are two types of constructors:
1. Default constructor (no-arg constructor)
2. Parameterized constructor
A constructor that has no parameters is known as a default constructor.
Student() {
    // block of code
}
Student obj = new Student();
A constructor with arguments is known as a parameterized constructor:
Student(int i, String n) {
    id = i;
    name = n;
}
Student s1 = new Student(160, "RabinsXP");
PROGRAM
import java.util.Scanner;
import java.io.*;

public class student {
    String USN, name, branch;
    int phoneno;

    public void getinfo() throws Exception {
        InputStreamReader r = new InputStreamReader(System.in);
        BufferedReader br = new BufferedReader(r);
        System.out.println("Enter USN");
        USN = br.readLine();
        System.out.println("Enter Name");
        name = br.readLine();
        System.out.println("Enter Branch");
        branch = br.readLine();
        Scanner integer = new Scanner(System.in);
        System.out.println("Enter phone number");
        phoneno = integer.nextInt();
    }

    public void display() {
        System.out.printf("%s\t\t%s\t\t%s\t\t%d\n", USN, name, branch, phoneno);
    }

    public static void main(String[] args) throws Exception {
        int n, i;
        Scanner integer = new Scanner(System.in);
        System.out.println("Enter Number of student");
        n = integer.nextInt();
        // declare object with array
        student[] obj = new student[n];
        // object creation
        for (i = 0; i < n; i++) {
            obj[i] = new student();
            System.out.printf("Student : %d\n", i + 1);
            obj[i].getinfo();
        }
        System.out.println("USN\t\tName\t\tBranch\t\tPhone Number");
        for (i = 0; i < n; i++)
            obj[i].display();
    }
}
OUTPUT
Excellent and very useful post, especially for those who want to start with the Java programming language. All these guidelines/tips are very useful.
Thanks, Mr. Niraj, if you have any queries or want to know about something specific to JAVA, then you can ask. I will be happy to solve any issue. | https://www.rabinsxp.com/java/creating-java-class-variable-program-statementconceptssample-programsoutput/ | CC-MAIN-2018-51 | refinedweb | 604 | 51.95 |
Tests with Calculated Results
There are currently three test types that allow you to calculate test results using snippets of Python code. These tests include Composite, String Composite & Upload.
Composite Tests
Two example calculation procedures are described: first a simple snippet, and second a slightly more complicated multi-line snippet that collects a group of readings and calculates the average value of them.
When your script (calculation procedure) is executed, it has access to
1. the current value of all the tests in the current test list being performed
2. the Python math module, along with NumPy and SciPy
3. a META variable
The snippet below shows a composite calculation which takes advantage of the SciPy stats library to perform a linear regression and return the intercept as the result.
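That snippet is not reproduced in this copy. As a rough reconstruction of the idea in plain Python (QATrack+ exposes SciPy, so a real procedure could simply call scipy.stats.linregress; the variable names below are invented):

```python
def intercept(xs, ys):
    """Ordinary least-squares fit of y = m*x + b; returns the intercept b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return mean_y - slope * mean_x

# Readings that lie exactly on y = 2x + 1, so the fitted intercept is 1:
result = intercept([0, 1, 2, 3], [1, 3, 5, 7])
print(result)  # 1.0
```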
Composite tests made up of other composite tests
QATrack+ has a primitive dependency resolution system and it is therefore safe to create composite values that depend on other composite values as they will be calculated in the correct order.
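A dependency resolver of this kind amounts to a topological sort of the tests. The following Python sketch illustrates the idea only; it is not QATrack+'s actual implementation, and the test names are made up:

```python
def calculation_order(deps):
    """Order test names so each comes after everything it depends on.

    `deps` maps a composite test's name to the names its procedure reads.
    Raises ValueError if the tests depend on each other in a circle.
    """
    order, done, visiting = [], set(), set()

    def visit(name):
        if name in done:
            return
        if name in visiting:
            raise ValueError("circular dependency involving " + name)
        visiting.add(name)
        for dep in deps.get(name, ()):
            visit(dep)
        visiting.discard(name)
        done.add(name)
        order.append(name)

    for name in deps:
        visit(name)
    return order

# 'ratio' reads 'avg', which in turn reads the raw readings r1 and r2.
print(calculation_order({"ratio": ["avg", "baseline"], "avg": ["r1", "r2"]}))
# ['r1', 'r2', 'avg', 'baseline', 'ratio']
```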
A note about division for people familiar with Python
In Python versions 2.x the calculation a = 1/2 will result in a being set to the value 0 and not 0.5 like many people would expect. This is because Python 2.x uses integer division by default. This behaviour can be overridden so that (1/2 == 0.5) == True in Python by adding from __future__ import division to the top of your Python script.
from __future__ import division is automatically added to every composite calculation procedure. If you specifically require integer division you must explicitly use the floor division operator, two forward slashes (//).
This was done to cut down on confusion caused by people unfamiliar with the way Python handles division as well as provide compatability with the 3.x versions of Python in the future.
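A quick check of the difference (this runs the same under Python 3, where true division is already the default):

```python
from __future__ import division  # a no-op on Python 3; changes / on Python 2

print(1 / 2)    # 0.5  -- true division
print(1 // 2)   # 0    -- floor (integer) division, now explicit
print(-7 // 2)  # -4   -- floor division rounds toward negative infinity
```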
String Composite Tests
The String Composite test type are the same as the Composite test type described above with the exception that the calculated value should be a string rather than a number. An example Composite String test is shown below.
Upload Tests
Upload test types allow the user to attach arbitrary files (text, images,
spreadsheets etc) which can then be analyzed with a Python snippet similar
to the composite tests above. The uploaded file object is made available in
the calculation context with the variable name
FILE (more information
on file objects is available in the Python documentation).
The calculation procedure can return any JSON serializable object (number, string, list, dict etc) and then (optionally) other composite tests can make use of the returned results. An example of this is given below.
Example Upload Test
We can then define three composite tests to store our calculated results. The
calculation procedure required for Average Temp is simply
avg_temp = temp_stats['avg']
and the complete test definition is shown below:
An example test list made of these 4 tests is shown below as it is being performed:
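The screenshots referenced above are not reproduced in this copy. As a sketch of what such an upload test's calculation procedure might contain (assumptions of this sketch: the uploaded file holds one temperature reading per line, the result is named temp_stats, and FILE — supplied by QATrack+ at run time — is simulated here with io.StringIO):

```python
import io

def temp_stats_procedure(FILE):
    """Return a JSON-serializable dict of stats for the uploaded readings."""
    temps = [float(line) for line in FILE if line.strip()]
    return {
        "avg": sum(temps) / len(temps),
        "min": min(temps),
        "max": max(temps),
    }

# In QATrack+ the FILE variable is provided; simulate an uploaded file here.
FILE = io.StringIO("21.0\n22.5\n20.5\n")
temp_stats = temp_stats_procedure(FILE)
print(temp_stats["min"], temp_stats["max"])  # 20.5 22.5
```

The composite tests can then pick the pieces apart, e.g. avg_temp = temp_stats['avg'] as described above.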
Last edited by defragster; 09-22-2019 at 10:03 AM.
Not sure if you saw the @manitou's post #71 where there are a couple of quick fixes for the PIT issue.
Just compiled @KurtE's async sketch with 1.8.10/1.48B1 without the redef PROGMEM error.
Just for ref haven't tested uncanny on 240x240 st7789's but they should work without a problem on the T4.
So I did the PIT change posted here : pjrc.com/threads/57609-Teensyduino-1-48-Beta-1
Code is running on 1.8.10 with TD 1.48b1 - though no displaying - I think the @mjs513 'ST7735_t3-ST7735_T4_rewrite.zip' is the correct github source - I just need to get my init settings and pins made right.
Using ST7735_t3_multiple.ino from github zip ... making 7789 no CS adjustments and it compiles - need to find the other settings to get display …
Found old sketch { 5 serial port cross transfer } that worked - only one display active … that T4 w/twin 7789's has been sitting a month - wires look good - but this version using TeensyThreads …
Picking it up and checking wires something reset - PC complained about USB device bad ... - and the Tthreads and non thread version now showing on both displays - time to check init and pin settings ...
When I run it, the adafruit splash screen comes up, but nothing after that. Here are the messages on the USB serial monitor:
Code:
IOMUXC_GPR_GPR17:aaaaaaaf
IOMUXC_GPR_GPR16:7
IOMUXC_GPR_GPR14:aa0000
Initial Stack pointer: 20070000
ITCM allocated: 65536
DTCM allocated: 458752
ITCM init range: 0 - 9a40 Count: 39488
DTCM init range: 20000000 - 20001b20 Count: 6944
DTCM cleared range: 20001b20 - 200042c0 Count: 10144
Now fill rest of DTCM with known pattern
/home/meissner/Arduino/teensy-eyes/uncannyEyes_async_st7735/uncannyEyes_async_st7735.ino Sep 23 2019 09:49:06
Init
Create display #0
ST7789_t3::init mode: 8
Init ST77xx display #0
Rotate done
Display logo
$0: Using Frame buffer
420 : 141 55775
0: 108
0: 456
Long wait
0 : updateScreenAsync FAILED
Long wait
424 : 140 56173
0: 108
0: 465
0 : updateScreenAsync FAILED
Long wait
394 : 147 53498
0: 103
0: 479
0 : updateScreenAsync FAILED
Long wait
Here is my config.h:
Code:
//#define SERIAL_tt Serial // Send debug_tt output here. Must have SERIAL_tt.begin( ## )
//#include "debug_tt.h"
// Pin selections here are based on the original Adafruit Learning System
// guide for the Teensy 3.x project. Some of these pin numbers don't even
// exist on the smaller SAMD M0 & M4 boards, so you may need to make other
// selections:
// GRAPHICS SETTINGS (appearance of eye) -----------------------------------
// If using a SINGLE EYE, you might want this next line enabled, which
// uses a simpler "football-shaped" eye that's left/right symmetrical.
// Default shape includes the caruncle, creating distinct left/right eyes.
// Otherwise your choice, standard is asymmetrical
#define SYMMETRICAL_EYELID
#define SINGLE_EYE // temporarily disable 2nd eye
// Enable ONE of these #includes -- HUGE graphics tables for various eyes:
#ifdef USE_ST7789
#include "graphics/default_large.h" // 240x240
#else
#include "graphics/defaultEye.h" // Standard human-ish hazel eye -OR-
//#include "graphics/dragonEye.h" // Slit pupil fiery dragon/demon eye -OR-
//#include "graphics/noScleraEye.h" // Large iris, no sclera -OR-
//#include "graphics/goatEye.h" // Horizontal pupil goat/Krampus eye -OR-
//#include "graphics/newtEye.h" // Eye of newt -OR-
//#include "graphics/terminatorEye.h" // Git to da choppah!
//#include "graphics/catEye.h" // Cartoonish cat (flat "2D" colors)
//#include "graphics/owlEye.h" // Minerva the owl (DISABLE TRACKING)
//#include "graphics/naugaEye.h" // Nauga googly eye (DISABLE TRACKING)
//#include "graphics/doeEye.h" // Cartoon deer eye (DISABLE TRACKING)
//#include "graphics/Nebula.h" //Dont work
//#include "graphics/MyEyeHuman1.h"
//#include "graphics/Human-HAL9000.h"
//#include "graphics/NebulaBlueGreen.h"
//#include "graphics/SpiralGalaxy.h"
//#include "graphics/ChameleonX_Eye.h" //no work
//#include "graphics/MyEye.h"
#endif
// Optional: enable this line for startup logo (screen test/orient):
#if !defined(ADAFRUIT_HALLOWING) // Hallowing can't always fit logo+eye
#include "graphics/logo.h" // Otherwise your choice, if it fits
#endif
// EYE LIST ----------------------------------------------------------------
// DISPLAY HARDWARE SETTINGS (screen type & connections) -------------------
//#define TFT_SPI SPI
//#define TFT_PERIPH PERIPH_SPI
// Enable ONE of these #includes to specify the display type being used
#include <ST7735_t3.h> // TFT display library (enable one only)
#define SPI_FREQ 48000000 // TFT: use max SPI (clips to 12 MHz on M0)
// This table contains ONE LINE PER EYE. The table MUST be present with
// this name and contain ONE OR MORE lines. Each line contains THREE items:
// a pin number for the corresponding TFT/OLED display's SELECT line, a pin
// pin number for that eye's "wink" button (or -1 if not used), and a screen
// rotation value (0-3) for that eye.
eyeInfo_t eyeInfo[] = {
#ifdef ST77XX_ON_SPI_SPI2
  //CS  DC MOSI SCK RST WINK ROT INIT
  // Going to try to NO CS displays.
  {-1, 9, 11, 13, 8, -1, 0, INITR_144GREENTAB }, // LEFT EYE display-select and wink pins, no rotation
#ifndef SINGLE_EYE
  {-1, 36, 35, 37, 39, -1, 0, INITR_144GREENTAB }, // RIGHT EYE display-select and wink pins, no rotation
#endif
#else
#ifndef SINGLE_EYE
  {0, 2, 26, 27, 3, -1, 0, INITR_144GREENTAB }, // RIGHT EYE display-select and wink pins, no rotation
#endif
  {10, 9, 11, 13, 8, -1, 0, INITR_144GREENTAB }, // LEFT EYE display-select and wink pins, no rotation
#endif
};
// INPUT SETTINGS (for controlling eye motion) -----------------------------
// JOYSTICK_X_PIN and JOYSTICK_Y_PIN specify analog input pins for manually
// controlling the eye with an analog joystick. If set to -1 or if not
// defined, the eye will move on its own.
// IRIS_PIN speficies an analog input pin for a photocell to make pupils
// react to light (or potentiometer for manual control). If set to -1 or
// if not defined, the pupils will change on their own.
// BLINK_PIN specifies an input pin for a button (to ground) that will
// make any/all eyes blink. If set to -1 or if not defined, the eyes will
// only blink if AUTOBLINK is defined, or if the eyeInfo[] table above
// includes wink button settings for each eye.
//#define JOYSTICK_X_PIN A0 // Analog pin for eye horiz pos (else auto)
//#define JOYSTICK_Y_PIN A1 // Analog pin for eye vert position (")
//#define JOYSTICK_X_FLIP // If defined, reverse stick X axis
//#define JOYSTICK_Y_FLIP // If defined, reverse stick Y axis
#define TRACKING // If defined, eyelid tracks pupil
#define AUTOBLINK // If defined, eyes also blink autonomously
//#define BLINK_PIN 1 // Pin for manual blink button (BOTH eyes)
//#define LIGHT_PIN A2 // Photocell or potentiometer (else auto iris)
//#define LIGHT_PIN_FLIP // If defined, reverse reading from dial/photocell
//#define LIGHT_MIN 0 // Lower reading from sensor
//#define LIGHT_MAX 1023 // Upper reading from sensor
#define IRIS_SMOOTH // If enabled, filter input from IRIS_PIN
#if !defined(IRIS_MIN) // Each eye might have its own MIN/MAX
#define IRIS_MIN 120 // Iris size (0-1023) in brightest light
#endif
#if !defined(IRIS_MAX)
#define IRIS_MAX 720 // Iris size (0-1023) in darkest light
#endif
@MichaelMeissner
The 7789's I have are the ones with CS pins, not without. Let me hook mine up and see what happens - I do have them without the CS pin, just have to solder the other one up. You may have to download Kurt's latest changes to the 7735_t3 library:
@MichaelMeissner
Try this for the eye.info setup in config.h as to opposed whats there. I am getting it to display the eye but for some reason its smaller than the display - let me know how it works out:
EDIT: Also don't use asynch updates. It should work with or without the initRgreentab specifiedEDIT: Also don't use asynch updates. It should work with or without the initRgreentab specifiedCode:{-1, 9, 11, 13, 8, -1, 0 }
Last edited by mjs513; 09-23-2019 at 04:46 PM.
@KurtE
With your latest DMA changes getting some very strange results with the 7789 with no CS pin - it's only displaying to a smaller area on the screen, like it's set for 160x160? Then if I do a rotation test it rotates that block but leaves the previous blocks.
EDIT: I am using tft.init(240,240, SPI_MODE2)
@MichaelMeissner
Did some testing at least with 1 display on SPI and it seems to be working now with @KurtE's sketch in post #1-3 BUT you will need to download his latest library updates.
Testing was done with a 240x240 7789 without a CS pin. Think I need to delete the multiple copies of IDEs I have installed
@MichaelMeissner @mjs513 ...
Sorry I was out of town this weekend and have not kept up with all that is going on... Just got back, now to play with dogs, ...
I am close to seeing if it works. I got the ST7735_t3_multiple.ino coded to work on my SPI0 / SPI1 system with NO_CS 7789's:
Had it working on another sketch already - tracked the pin #'s - and wires to setup the eyeInfo[] …
Code:
ST7789_t3 tft = ST7789_t3(-1, 3, 26, 27, 4);
ST7789_t3 tft1 = ST7789_t3(-1, 10, 12);
// …
void setup()
// ...
tft.init(240,240, SPI_MODE2) ; // use for ST7789 without CS)
tft1.init(240,240, SPI_MODE2) ; // use for ST7789 without CS)
Do I put SPI_MODE2 in for INITR_144GREENTAB?
Left eye working - had to go rotate 2 as it was downside up
Right eye blank until I did Rotate 2
>> Seems there is a rectangle around the iris when the pupil is closed - when pupil shows the iris square goes away. Both eyes.
Both working in SYNC - Adafruit logo splits across matching edges:
Crosspost update :: indeed it works the same with :: INITR_144GREENTAB
Code:
eyeInfo_t eyeInfo[] = {
  //CS DC MOSI SCK RST WINK ROT INIT
  {-1, 10, 11, 13, 12, -1, 2, SPI_MODE2 }, // LEFT EYE display-select and wink pins, no rotation
  {-1, 3, 26, 27, 4, -1, 2, SPI_MODE2 }, // RIGHT EYE display-select and wink pins, no rotation
Just rotated right eye and it still works - it seems after first moving to this the one eye was offline until ??? I think I may have just seen the last of what was an intermittent display. I was moving it around and now backlight only
So this worked a bit as well... PJRC DIY break with SPI and SPI1 using 26/27 wired from POGO's as above::
Odd no pupil open doesn't have the iris box clipped. Not sure where that is in code after a quick glance.Odd no pupil open doesn't have the iris box clipped. Not sure where that is in code after a quick glance.Code:{-1, 10, 11, 13, 12, -1, 0, INITR_144GREENTAB }, // LEFT EYE display-select and wink pins, no rotation {-1, 3, 26, 27, 4, -1, 0, INITR_144GREENTAB }, // RIGHT EYE display-select and wink pins, no rotation
Cool - yes and glad I got to see that before a display went away
… they were cheap… they were cheap
Yeah that box is odd - only when no or maybe minimal pupil open - hope that helps find it. Of course looks odd fully closed.
Well looks like my second 7789(noCS) is dead but was playing around with Iris settings to see where the box would disappear (yes it was annoying me as well) - these seem to work
Code:#define IRIS_SMOOTH // If enabled, filter input from IRIS_PIN #if !defined(IRIS_MIN) // Each eye might have its own MIN/MAX #define IRIS_MIN 500 // Iris size (0-1023) in brightest light #endif #if !defined(IRIS_MAX) #define IRIS_MAX 700// Iris size (0-1023) in darkest light #endif
EDIT: don't bother trying with the smaller eyes - the scaling is all messed up on the display. Unless its just me | https://forum.pjrc.com/threads/57015-ST7789_t3-(part-of-ST7735-library)-support-for-displays-without-CS-pin?s=ae039c251cc264ec5c6b4ceba041b267&p=216624&viewfull=1 | CC-MAIN-2019-47 | refinedweb | 1,912 | 69.11 |
Just a little example that uses the Object Model to create a null and a ObjectToCluster constraint for each selected component (point, edge, polygon, …).
Note line 13. I can use CollectionItem.SubElements to get the indices of the selected components.
from win32com.client import constants as C si = Application log = si.LogMessage if si.Selection.Count > 0 and si.ClassName(si.Selection(0)) == 'CollectionItem' and si.Selection(0).SubComponent is not None: # pnt, poly, edge, ... clusterType = si.Selection(0).Type.replace( 'SubComponent','' ) o = si.Selection(0).SubComponent.Parent3DObject for i in si.Selection(0).SubElements: c = o.ActivePrimitive.Geometry.AddCluster( clusterType, "", [i] ) n = si.ActiveSceneRoot.AddNull() n.Kinematics.AddConstraint( "ObjectToCluster", c )
Is it possible to constrain a null to point in a point cloud that is simulated? I have no results with Object to Cluster constrain, ICE Kinematics is not an option because mainly I deal with a large number of positions. Thanks!
Ok, i found a way out (), but rotations is still under question…
how ’bout ?
It uses ICE kinematics. I have a basic script that creates a null for every point and adds that kinematic compound to it. For 2000 points it executes about an hour. And even after that I can’t plot result because it massively slows down SI
Oh, ok, I got it, I’ve created nulls first and then applied Transform Objects by Particles command to all of them at once. In this case I got a faster result! Thanks!
My relatives every time say that I am killing my time here at web, however I know I am
getting know-how every day by reading thes nice content.
Any reason you dispatched the Application object instead of just using “Application”? (I know you have to for things like imported modules in plugins, and I think relational views too, but otherwise not really, right?)
While we’re at it, dispatching XSI.Collection vs XSIFactory.CreateObject(“XSI.Collection”)… Which one is better? Is there a difference?
Is there a reason why in this example you dispatch Application instead of using it directly as is?
Also, while we’re at it, is it better to dispatch XSI.Collection or use XSIFactory.CreateObject()? Is there a difference, downside or overhead at all to using either?
RE: dispatching Application
I don’t remember why I didn’t use the siutils shortcuts, but when I removed those shortcuts I was too literal and just pasted in what was in the module. You’re right, I don’t need to dispatch Application.
RE: dispatching XSI.Collection vs using XSIFactory.CreateObject()?
I don’t know. The Syntax Help in the script ed uses dispatch, so that suggests dispatch is the preferred way to do it.
Many thanks for this. Speeds up some work I’m doing now 🙂 | https://xsisupport.com/2012/03/17/python-example-constraining-nulls-to-components/ | CC-MAIN-2018-51 | refinedweb | 466 | 60.92 |
In this tutorial, we will cover how we can leverage Appwrite’s Cloud functions feature to execute certain tasks when certain events take place in the server. You can find a complete list of available system events here.
In this example, we will demonstrate how we can integrate with a third-party storage provider like Dropbox to create backups of files uploaded to Appwrite. For the sake of this example, we will be using Dropbox’s Python SDK. A similar concept applies to other API providers like Box or Google Drive. So let’s get started.
Creating your Cloud Function
The first step is to create a Dropbox Developer account and obtain the Access Token. Now it's time to create the Cloud Function in the Appwrite Console. Head over to the Functions section of your console and select Add Function. You can give your function a funky new name and select the preferred environment. We will be using Python for this example.
Let's Write Some Code
The next step is to write the code that will be executed and upload it to the Appwrite Console. Create a directory to hold your Cloud Function. Then create your main code file and a
requirements.txt.
$ mkdir cloud-functions-demo $ cd cloud-functions-demo $ touch main.py $ touch requirements.txt
We will be using two dependencies for this example
- dropbox
- appwrite
Add these to your
requirements.txt. We would typically perform a
pip install at this stage but that would install the libraries in the shared libraries path. We need the libraries to be installed in the same directory so that they can be packaged easily. Run the following command to install the libraries inside the local
.appwrite directory. Appwrite’s Python environment will know how to autoload a file from that directory without any special configuration.
$ PIP_TARGET=./.appwrite pip install -r ./requirements.txt --upgrade --ignore-installed
Great. It’s time to start editing the
main.py file. We start by importing the relevant libraries.
import sys import json import os # Drop box SDK import dropbox from dropbox.files import WriteMode from dropbox.exceptions import ApiError, AuthError # Appwrite SDK from appwrite.client import Client from appwrite.services.storage import Storage
Appwrite's API returns a binary file as output whereas the Dropbox SDK expects a file path. So we will make use of a temporary file during the function execution. So let's define those variables.
TOKEN = os.environ['DROPBOX_KEY'] FILENAME = 'my-file.txt' BACKUPPATH = '/my-file.txt'
Now it’s time to set up the Appwrite SDK
# Setup the Appwrite SDK client = Client() client.set_endpoint('') # Your API Endpoint client.set_project('5fca866c65afc') # Your project ID client.set_key(os.environ["APPWRITE_KEY"]) # Your secret API key
Note: Within the Cloud Function, you cannot use localhost to refer to your Appwrite server, because localhost refers to your own runtime environment. You will have to find the private IP of your default network interface using ifconfig (usually eth0 in Linux or en0 in macOS).
When a function is triggered by an event, we can obtain a lot of metadata about the event from some special environment variables that are exposed by Appwrite. A complete list is available here. In our case, we need the ID of the file that was uploaded, in order to fetch it. Appwrite conveniently exposes this information as an environment variable named
APPWRITE_FUNCTION_EVENT_PAYLOAD. Let’s parse this JSON string to retrieve the file ID.
# Triggered by the storage.files.create event payload = json.loads(os.environ["APPWRITE_FUNCTION_EVENT_PAYLOAD"]) fileID = payload["$id"]
Using the SDK, lets fetch the file and save it:
# Create an instance of Appwrite's Storage API storage = Storage(client) result = storage.get_file_download(fileID) # Save the file with open(FILENAME, "wb") as newFile: newFile.write(result)
We’re almost done. Now we will set up the Dropbox SDK and upload the file.
# Check if we have the access token if (len(TOKEN) == 0): sys.exit("ERROR: Looks like you didn't add your access token. " "Open up backup-and-restore-example.py in a text editor and " "paste in your token in line 14.") # Create an instance of a Dropbox class, which can make requests to the API. print("Creating a Dropbox object...") with dropbox.Dropbox(TOKEN) as dbx: # Check that the access token is valid try: dbx.users_get_current_account() except AuthError: sys.exit("ERROR: Invalid access token; try re-generating an " "access token from the app console on the web.") # Create a backup of the current settings file backup() print("Done!")
Let’s now take a look at the
backup() function. Here we use Dropbox’s
files_upload() function to upload our file and watch for some specific errors.
# Uploads contents of FILENAME to Dropbox def backup(): with open(FILENAME, 'rb') as f: # We use WriteMode=overwrite to make sure that the contents in the file # are changed on upload print("Uploading " + FILENAME + " to Dropbox as " + BACKUPPATH + "...") try: dbx.files_upload(f.read(), BACKUPPATH, mode=WriteMode('overwrite')) except ApiError as err: # This checks for the specific error where a user doesn't have # enough Dropbox space quota to upload this file if (err.error.is_path() and err.error.get_path().reason.is_insufficient_space()): sys.exit("ERROR: Cannot back up; insufficient space.") elif err.user_message_text: print(err.user_message_text) sys.exit() else: print(err) sys.exit()
Deploying the Cloud Function
Before we can deploy our cloud function, we need to ensure that our directory has the following structure.
. ├── .appwrite/ ├── main.py └── requirements.txt
There are two ways to deploy your function. Using the Appwrite CLI and using the Appwrite Console.
Deploy using Appwrite CLI (recommended)
You can easily deploy your functions using Appwrite CLI. If you have not already installed Appwrite CLI, please go through these instructions to install Appwrite CLI. Once installed, you can run the following command from the directory containing your cloud function to deploy your tag.
appwrite functions createTag \ --functionId=<id> \ --command='python main.py' \ --code='.'
The function ID can be found on the right side of the overview section of your function.
Deploy using the Console (Manual)
If deploying manually, we need to first package the function by creating a tar file out of our folder.
$ cd .. $ tar -zcvf code.tar.gz cloud-functions-demo
We can now upload this tarfile to our function’s dashboard by selecting the Deploy Tag > Manual option. Our entry point command, in this case, would be:
$ python main.py
Once created, we need to define a trigger for the function. In our case, we wish to trigger it whenever a new file is uploaded to the Appwrite server. So we would be interested in the
storage.files.create event. The trigger can be enabled under the Settings tab of the function.
Once the triggers are enabled, it’s time for our final step, Function Variables. Appwrite allows you to securely store secret keys using Appwrite Function Variables which will be available as environment variables to your program. The best part is that these keys are encrypted and stored securely on Appwrite’s internal DB. In this example, we have used two environment variables namely DROPBOX_KEY (Dropbox’s API Key) and APPWRITE_KEY (Appwrite API Key) so let’s add them to the Function Variables. Don’t forget to click the Update option once you’re happy with your settings.
Great! We’re done with all the setup. All that’s left now is to test the Cloud Function.
Testing
Now it’s time to test your shiny new Cloud Function! Head over to the Storage section of Appwrite and create a new file by clicking on the ‘+’ button at the bottom right. Choose a text file ( Or any other file. But be sure to rename the files in the code example appropriately. ) and click Create.
Your Cloud Function would now have been triggered. You can check it out by heading over to
Functions > Your Function Name > Logs
Once the execution is complete, you can check the response from the API.
And in a few simple steps, we successfully deployed our first Cloud Function. The possibilities with Cloud Functions are endless! Stay tuned for more Cloud Function ideas from the Appwrite Team.
Learn More
- You can find the complete code sample and lots of other demos in our Cloud Functions Demo Repo.
- Our Discord Server is the place to be if you ever get stuck.
- You can find all our Documentation here.
Credits
Photo by Markus Spiske on Unsplash
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/appwrite/create-an-appwrite-file-backup-function-using-the-dropbox-api-2pfo | CC-MAIN-2021-43 | refinedweb | 1,404 | 58.99 |
Hi, My name is Rose and I work in an IT dept of a school. I am trying to set up a Java class for a teacher. I am very beginner to this programming. I have a program for a GUI-based temperature conversion program which I am trying to run in JCreator. I have checked the syntax and all seems ok. But I keep getting the same error:
C:\mywork\ConvertWithGUI.java:4: cannot access GBFrame
bad class file: C:\jdk1.3.1_07\BreezySwing\GBFrame.class
class file contains wrong class: BreezySwing.GBFrame
Please remove or make sure it appears in the correct subdirectory of the classpath.
public class ConvertWithGUI extends GBFrame{
I am using the SDK 1.3.1_07 from Sun. Any suggestions on what I am doing wrong? I am going crazy and have been working on this for several days. I am sure it is something simple or at least for you it is simple!
Thanks, Rose
Can you attach the code to your post. It'd help.
ArchAngel.
O:-)
did you use the import statement to import GBFrame?
I wonder how poisoned Java tastes like....
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center | http://forums.devx.com/showthread.php?138363-well-a-very-wierd-problem&goto=nextnewest | CC-MAIN-2014-10 | refinedweb | 208 | 69.07 |
menu_driver(3x) menu_driver(3x)
menu_driver - command-processing loop of the menu system
#include <menu.h> int menu_driver(MENU *menu, int c);
Once a menu has been posted (displayed), you should funnel input events to it through menu_driver. This routine has three major input cases: o The input is a form navigation request. Navigation request codes are constants defined in <form.h>, which are distinct from the key- and character codes returned by wgetch. o The input is a printable character. Printable charac- ters (which must be positive, less than 256) are checked according to the program's locale settings. o buf- fer. decora- tion window) are handled. If you click above the display region of the menu: o a REQ_SCR_ULINE is generated for a single click, o a REQ_SCR_UPAGE is generated for a double-click and o a REQ_FIRST_ITEM is generated for a triple-click. If you click below the display region of the menu: o a REQ_SCR_DLINE is generated for a single click, o a REQ_SCR_DPAGE is generated for a double-click and o a REQ_LAST_ITEM is generated for a triple-click. If you click at an item inside the display area of the menu: o the menu cursor is positioned to that item. o If you double-click an item a REQ_TOGGLE_ITEM is gen- erated and E_UNKNOWN_COMMAND is returned. This return value makes sense, because a double click usually means that an item-specific action should be returned. It is exactly the purpose of this return value to sig- nal that an application specific command should be executed. o If a translation into a request was done, menu_driver returns the result of this request. If you clicked outside the user window or the mouse event could not be translated into a menu request an E_REQUEST_DENIED is returned. APPLICATION-DEFINED COMMANDS.
menu_driver return one of the following error codes: E_OK The routine succeeded. E_SYSTEM_ERROR System error occurred (see errno). E_BAD_ARGUMENT Routine detected an incorrect or out-of-range argu- ment.x), menu(3x), getch(3x).
The header file <menu.h> automatically includes the header files <curses.h>.
These routines emulate the System V menu library. They were not supported on Version 7 or BSD versions. The sup- port for mouse events is ncurses specific.
Juergen Pfeifer. Manual pages and adaptation for new curses by Eric S. Raymond. menu_driver(3x) | https://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob_plain;f=doc/html/man/menu_driver.3x.html;hb=cef50b3afcd58166f3541b701c97bce538844c76 | CC-MAIN-2021-21 | refinedweb | 390 | 57.67 |
Benchmarking calling Oracle Machine Learning using REST
Over the past year I’ve been presenting, blogging and sharing my experiences of using REST to expose Oracle Machine Learning models to developers in other languages, for example Python.
One of the questions I’ve been asked is, Does it scale?
Although I’ve used it in several projects to great success, there are no figures I can report publicly on how many REST API calls can be serviced 😦
But this can be easily done, and the results below are based on using and Oracle Autonomous Data Warehouse (ADW) on the Oracle Always Free.
The machine learning model is built on a Wine reviews data set, using Oracle Machine Learning Notebook as my tool to write some SQL and PL/SQL to build out a model to predict Good or Bad wines, based on the Prices and other characteristics of the wine. A REST API was built using this model to allow for a developer to pass in wine descriptors and returns two values to indicate if it would be a Good or Bad wine and the probability of this prediction.
No data is stored in the database. I only use the machine learning model to make the prediction
I built out the REST API using APEX, and here is a screenshot of the GET API setup.
Here is an example of some Python code to call the machine learning model to make a prediction.
import json import requests country = 'Portugal' province = 'Douro' variety = 'Portuguese Red' price = '30' resp = requests.get(''+country+'/'+province+'/'+'variety'+'/'+price) json_data = resp.json() print (json.dumps(json_data, indent=2))
—–
{ "pred_wine": "LT_90_POINTS", "prob_wine": 0.6844716987704507 }
But does this scale, as in how many concurrent users and REST API calls can it handle at the same time.
To test this I multi-threaded processes in Python to call a Python function to call the API, while ensuring a range of values are used for the input parameters. Some additional information for my tests.
- Each function call included two REST API calls
- Test effect of creating X processes, at same time
- Test effect of creating X processes in batches of Y agents
- Then, the above, with function having one REST API call and also having two REST API calls, to compare timings
- Test in range of parallel process from 10 to 1,000 (generating up to 2,000 REST API calls at a time)
Some of the results. The table shows the time(*) in seconds to complete the number of processes grouped into batches (agents). My laptop was the limiting factor in these tests. It wasn’t able to test when the number of parallel processes when above 500. That is why I broke them into batches consisting of X agents
* this is the total time to run all the Python code, including the time taken to create each process.
Some observations:
- Time taken to complete each function/process was between 0.45 seconds and 1.65 seconds, for two API calls.
- When only one API call, time to complete each function/process was between 0.32 seconds and 1.21 seconds
- Average time for each function/process was 0.64 seconds for one API functions/processes, and 0.86 for two API calls in function/process
- Table above illustrates the overhead associated with setting up, calling, and managing these processes
As you can see, even with the limitations of my laptop, using an Oracle Database, in-database machine learning and REST can be used to create a Micro-Service type machine learning scoring engine. Based on these numbers, this machine learning micro-service would be able to handle and process a large number of machine learning scoring in Real-Time, and these numbers would be well within the maximum number of such calls in most applications. I’m sure I could process more parallel processes if I deployed on a different machine to my laptop and maybe used a number of different machines at the same time
How many applications within you enterprise needs to process move than 6,000 real-time machine learning scoring per minute? This shows us the Oracle Always Free offering is capable and suitable for most applications.
Now, if you are processing more than those numbers per minutes then perhaps you need to move onto the paid options.
What next? I’ll spin up two VMs on Oracle Always Free, install Python, copy code into these VMs and have then run in parallel 🙂
2 thoughts on “Benchmarking calling Oracle Machine Learning using REST”
April 14, 2020 at 1:43 pm
[…] 5. Benchmarking calling Oracle Machine Learning using REST […]
March 11, 2020 at 8:46 pm
[…] 5. Benchmarking calling Oracle Machine Learning using REST […] | https://oralytics.com/2020/03/02/bench-marking-calling-oracle-machine-learning-using-rest/ | CC-MAIN-2021-17 | refinedweb | 789 | 58.21 |
PDF and Java
HTML continues to be the leading format for creating Web content for various reasons including its relative simplicity. For many types of content, HTML offers a sufficient set of tags for an effective presentation. There are, however, document types, that are too rich for HTML. Documents where positioning of various text and non-text elements is important are usually not good candidates for HTML. For example, it would be rather difficult to create a typical IRS Tax form via HTML. The Portable Document Format (PDF) is often used for creating and displaying rich content. The Acrobat Reader plug-in software from Adobe, allows browsers to effectively display PDF files.
Java servlets are an effective mechanism for creating Web applications. Such applications often require manipulation of HTML documents before serving them to the browser. Such manipulations are quite common for servlets, CGI and other server-side technologies and often require data extraction using HTML tags as delimiters. Text-processing algorithms and utility programs (e.g., AWK, scripting languages, and regular expressions) can be used to complement the capabilities of servlets and CGI programs. But what about PDF? This article is an overview of using Java to interact with PDF files.
What is PDF?
PDF uses objects to describe a page. Everything you see (and some things that you don't see) in a PDF page is an object. The objects making up a document are expressed in a sequential manner. At the end, there is a cross-reference table that lists the byte offset of each object within the file. The trailing piece of a PDF document also indicates which object is the "root" object. The trailer also contains a byte offset, which points to the beginning of the cross-reference table. The structure, once mapped out, is somewhat similar to an XML document with a "containment" hierarchy; that is, the document is composed of "page" objects, the page objects are composed of other objects like fonts, streams of text, etc.
If you have not done so, use a text editor to take a look at a PDF file (for simplicity, try a document that contains no images). You'll see that the instructions are expressed in plain text. The PDF language specification describes the syntax of all the instructions and can be found along with other documents from the Adobe site. The specification is a fairly large document, which is testimony to the relative complexity of PDF.
PDF documents typically use a compression algorithm (such as LZW) to reduce the size of text and binary streams in the document. That's why you will most likely see unreadable characters instead of the text contained in the document. One way to extract information from a PDF file is by simply reading the "text-based" instructions and extracting the appropriate data. This requires a fair amount of understanding of the PDF language specification and given the format of PDF, I doubt that was the intended mechanism for manipulating PDF files.
Adobe provides a variety of tools for creating and reading PDF documents. It also provides a Development Kit with an API to programmatically interact with PDF documents. I looked and searched the Adobe site hoping to find a Java API, but could not find any mention of it. They do provide a Java API for Form Documentation Format (FDF) but not for PDF. I suppose you could use JNI to use the C++ API from within a Java program, but that would certainly be too complex and cumbersome.
A Java-based PDF API
I discovered a Java library for PDF from Etymon Consulting. Although it does not cover the full specification, it does provide a convenient approach for reading, changing and writing PDF files from within Java programs. As with any Java library, the API is organized into packages. The main package is . Here, you'll find an object representation of all PDF core objects, which are arrays, boolean, dictionary, name, null, number, reference, stream, and string. Where the Java language provides an equivalent object, it is used but with a wrapper around it for consistency purposes. So, for example, the string object is represented by PjString.
When you read a PDF file, the Java equivalents of the PDF objects are created. You can then manipulate the objects using their methods and write the result back to the PDF file. You do need knowledge of PDF language to effectively do some of the manipulations. The following lines, for example, create a Font object:
PjFontType1 font = new PjFontType1(); font.setBaseFont(new PjName("Helvetica-Bold")); font.setEncoding(new PjName("PDFDocEncoding")); int fontId = pdf.registerObject(font);
where is the object pointer to a PDF file.
One thing, I wanted to do was to change parts of the text in the PDF file to create "customized" PDF. While I have access to the PjStream object, the bytearray containing the text is compressed and the current library does not support decompression of LZW. It does support decompression of Flate algorithm.
Despite some limitations, you can still do many useful things. If you need to append a number of PDF documents programmatically, you can create a page and then append the page to the existing PDF documents, all from Java. The API also provide you with information about the document like number of pages, author, keyword, and title. This would allow for a Java servlet to dynamically create a page containing the document information with a link to the actual PDF files. As new PDF files are added and old ones deleted, the servlet would update the page to reflect the latest collection.
Listing 1 shows a simple program that uses the pj library to extract information from a PDF file and print that information to the console.
Listing 1.
import com.etymon.pj.*;import com.etymon.pj.object.*;public class GetPDFInfo { public static void main (String args[]) { try { Pdf pdf = new Pdf(args[0]); System.out.println("# of pages is " + pdf.getPageCount()); int y = pdf.getMaxObjectNumber(); for (int x=1; x <= y; x++) { PjObject obj = pdf.getObject(x); if (obj instanceof PjInfo) { System.out.println("Author: " + ((PjInfo) obj).getAuthor()); System.out.println("Creator: " + ((PjInfo) obj).getCreator()); System.out.println("Subject: " + ((PjInfo) obj).getSubject()); System.out.println("Keywords: " + ((PjInfo) obj).getKeywords()); } } } catch (java.io.IOException ex) { System.out.println(ex); } catch (com.etymon.pj.exception.PjException ex) { System.out.println(ex); } }}
Before you compile the above program, you need to download the pj library, which includes the pj.jar file. Make sure your CLASSPATH includes the pj.jar file.
The program reads the PDF file specified at the command-line and parses it using the following line:
Pdf pdf = new Pdf(args[0]);
It then goes through all the objects that were created as a result of parsing the PDF file and searches for a object. That object encapsulates information such as the author, subject, and keywords, which are extracted using the appropriate methods. You can also "set" those values, which saves them permanently in the PDF file.
There are a number of sample programs that ship with the pj library, along with the standard javadoc-style documentation. The library is distributed under GNU General Public License.
Conclusion
Despite additions and advancements of HTML, PDF continues to be the most popular mean for sharing rich documents. As a programming language, Java needs to be able to interact with data. The pj library shown here, is a preview of how PDF objects can be modeled in Java and then use Java's familiar constructs to manipulate the seemingly complex PDF documents. With this type of interaction, applications that need to serve rich documents can actually "personalize" the content before sending out the document. This scenario can be applied, for example, to many legal forms where a hand signature is still required and the form is too complex to be drawn entirely in HTML. Java and PDF provide a nice solution for these types of applications.
About the AuthorPiroz Mohseni is president of Bita Technologies focusing on business improvement through effective usage of technology. His area of interest include enterprise Java, XML, and e-commerce applications.
| http://www.developer.com/tech/article.php/626501/PDF-and-Java.htm | CC-MAIN-2013-20 | refinedweb | 1,359 | 56.35 |
Re: [soaplite] Multiple return values to Java
Expand Messages
- Hi, stevetrans140!
--- stevetrans140 <stevie__k@...> wrote:
> I have written my server in SOAP::Lite and client in Java. Howsimply return list
> would
> I be able to return multiple values to my Java client, and/or a 2-d
> array?
return (1,2,3);
for multiple values and
return [[1,2], [3,4]];
for 2-d array (strictly speaking Perl doesn't have multi-dim arrays,
so it'll return array of arrays, which is not the same, but that's
the best we can do).
Best wishes, Paul.
__________________________________________________
Do You Yahoo!?
Yahoo! - Official partner of 2002 FIFA World Cup
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/1604?o=0&var=1&p=1 | CC-MAIN-2017-39 | refinedweb | 121 | 64.1 |
Good Day all,
I am struggling to launch my fabfile within my Python script. I have looked at similar posts on Stack Overflow regarding this but they don't solve my problem... Or maybe they do but I am not understanding them...Not sure.
My script writes to the fab file depending on what the user wants to run on the remote host. Here is an example of the fabfile:
[root@ip-50-50-50-50 bakery]# cat fabfile.py
from fabric.api import run
def deploy():
run('wget -P /tmp')
run('sudo yum localinstall /tmp/httpd-2.2.26-1.1.amzn1.x86_64.rpm')
fab -f fabfile.py -u ec2-user -i id_rsa -H 10.10.15.150 deploy
Try with subprocess :
import subprocess subprocess.call(['fab', '-f', 'fabfile.py', '-u ec2-user', '-i', 'id_rsa', '-H', bakery_internalip, 'deploy'])
should do the trick | https://codedump.io/share/PfHN58k0YbGw/1/how-to-call-my-fabfile-within-my-python-script | CC-MAIN-2018-26 | refinedweb | 143 | 70.29 |
Different Ways of Creating a List of Objects in C#
Different Ways of Creating a List of Objects in C#
In this post, we look at all the different approaches available to create a list of objects in C#. Do you know of any more?
Join the DZone community and get the full member experience.Join For Free
Jumpstart your Angular applications with Indigo.Design, a unified platform for visual design, UX prototyping, code generation, and app development.
It has always been fun to play with C#. In this post, we will see that how we can create a list of objects with a different approach. So the scenario is, for one of my MVC applications I need to bind the 5 empty rows (a list of 5 objects) to the kendo grid for a bulk insert of the records. So whenever I open that page, kendo grid renders 5 empty rows in editable mode.
In this post, for a better illustration, I have used the example of "Book." Let's say I want to add multiple books to one library management software. First, let's create one basic POCO class - Book - with some properties, which looks like the following:
public class Book
{
    public string BookName { get; set; } = string.Empty;
    public string Author { get; set; } = string.Empty;
    public string ISBN { get; set; } = string.Empty;
}
So, let's say I want a list that holds three Book objects. The traditional way is to create the list and then add the objects one at a time:

var bookList = new List<Book>();
bookList.Add(new Book());
bookList.Add(new Book());
bookList.Add(new Book());
And then C# 3.0 came with a lot of enhancements. One of them was Collection Initializers. It is a shortened syntax to create a collection.
// using collection initializer var bookList = new List<Book>() { new Book(), new Book(), new Book() };
In the .NET framework, there is one class - Enumerable - which resides under the "System.Linq" namespace. This class contains some static methods, using which we can create a list of objects. So, for example, using the Enumerable.Repeat() method:
// using Enumerable.Repeat (note: this puts the same Book reference in the list three times)
var bookList = Enumerable.Repeat(new Book(), 3).ToList();
Well, the Range() method generates a collection within a specified range. Kindly note there are many use cases for this method.
All right, but what if I want each object in the list to be initialized differently? Combining Enumerable.Range() with Select() does the trick:

var bookList = Enumerable.Range(1, 3)
    .Select(i => new Book() { BookName = "Book " + i })
    .ToList();
Well, that's it! These are the ways of creating a list of objects in C# that I know. If you know of any more, please share them in the comments.
There is no direct way to reverse a string in Java: a reverse() method is not included in the String class, as String is immutable. Instead, we use the reverse() method of the StringBuffer class. In the following code, a string is checked for being a palindrome by comparing it with its reverse.
public class Demo
{
  public static void main(String args[])
  {
    String str1 = "radar";
    StringBuffer sb1 = new StringBuffer(str1);
    sb1.reverse();                  // see string reverse done here
    String str2 = sb1.toString();
    if(str1.equals(str2))
    {
      System.out.println(str1 + " is palindrome");      // this prints
    }
    else
    {
      System.out.println(str1 + " is not palindrome");
    }
    // all the above code can be converted into a single step using anonymous StringBuffer class.
    System.out.println(str1.equals(new StringBuffer(str1).reverse().toString()));  // prints true
  } // String Reverse on anonymous StringBuffer object
}
Output screen on String Reverse Java
Looking at all the code, it seems like a roundabout process.
StringBuffer sb1 = new StringBuffer(str1);
sb1.reverse();
String str2 = sb1.toString();
As the reverse() method does not exist in the String class but does exist in the StringBuffer class, the string str1 is passed as a parameter to the StringBuffer constructor. The reverse() method is applied on the StringBuffer object sb1, and the whole string stored in sb1 gets reversed. As the String equals() method cannot compare against a StringBuffer object directly, sb1 is converted back to a string using the toString() method.
An experienced programmer does not go through this whole process. He uses a single statement as follows.
System.out.println(str1.equals(new StringBuffer(str1).reverse().toString()));
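For comparison, here is a small self-contained sketch that puts the StringBuffer one-liner next to a hand-rolled character swap (reverseManually is my own addition, not from the article):

```java
// Compares StringBuffer.reverse() with a manual char-array reversal.
public class ReverseDemo {
    static String reverseManually(String s) {
        char[] chars = s.toCharArray();
        for (int i = 0, j = chars.length - 1; i < j; i++, j--) {
            char tmp = chars[i];   // swap the outermost pair, then move inward
            chars[i] = chars[j];
            chars[j] = tmp;
        }
        return new String(chars);
    }

    public static void main(String[] args) {
        System.out.println(new StringBuffer("radar").reverse().toString()); // radar
        System.out.println(reverseManually("hello"));                       // olleh
    }
}
```

The manual version makes explicit what StringBuffer.reverse() does for you in one call.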
All the String and StringBuffer methods are discussed very elaborately with examples. | https://way2java.com/java-general/java-string-reverse/ | CC-MAIN-2022-33 | refinedweb | 251 | 67.55 |
Regular expression - capture a character when "escaped", split otherwise
Similar Content
- By nend
This is a program that I made to help my self learn better regular expressions.
There are a lot of other programs/website with the similar functions.
- By FrancescoDiMuro
Good morning
I'm playing with SRE and trying to obtain some information from a test file.
I was testing the pattern on regex101, but when I bring it to AutoIt, it doesn't return the same result as on regex101.
I am surely (?:missing some important notes about PCRE engine|the pattern is not correct at all).
Script:
#include <Array.au3>
#include <StringConstants.au3>

Test()

Func Test()
    Local $strFileName = @ScriptDir & "\TestFile.txt", _
          $strFileContent, _
          $arrResult
    $strFileContent = FileRead($strFileName)
    If @error Then Return ConsoleWrite("FileRead ERR: " & @error & @CRLF)
    $arrResult = StringRegExp($strFileContent, '(?sx)User:\h([^\n]+)\n' & _
                                               'Login\-name:\h([^\n]+)\n' & _
                                               '(?:CaseSensitive:\h([^\n]+)\n)?' & _
                                               'NTSecurity:\h([^\n]+)\n' & _
                                               '(?:NO\n)?' & _
                                               '(?:Domain:\h([^\n]+)\n)?' & _
                                               'Timeout:\h([^\n]+)\n' & _
                                               '.*?' & _
                                               'Member:\h([^\n]+)\n', $STR_REGEXPARRAYGLOBALMATCH)
    If IsArray($arrResult) Then _ArrayDisplay($arrResult)
EndFunc

Test file:
User: AMMINISTRATORE
Login-name: ADM
CaseSensitive: YES
NTSecurity: NO
NO
Timeout: 00:05:00
Member: AMMINISTRATORI
User: Test_User
Login-name: Test_User
NTSecurity: YES
Domain: DNEU
Timeout: 00:00:00
Member: OPERATORS
Member: OPERATORS

Any help (even from cats) is highly appreciated.
Cheers
Label
Label is a themed version of the native label element that can be paired with other form components like Input, Textarea, Radio and Checkbox.
Form inputs must always have a label. Ideally, this label is visual, but in all cases, the label must be accessible to technology like screen readers. The easiest way to label an input field is by using this component and providing an
htmlFor property that matches the
id of the input field being labelled.
If the label absolutely cannot be visible, you can visually hide it or use an
aria-label on the input field itself.
import { Label } from '@sproutsocial/racine'
Properties
Recipes
Label with icon | https://seeds.sproutsocial.com/components/label/ | CC-MAIN-2020-45 | refinedweb | 112 | 52.9 |
Content playing in full screen mode
When Stage.fullScreenSourceRect is defined
If Stage.fullScreenSourceRect is defined, the width and height of Stage are fixed. Thus, when the screen orientation changes, the interchanged width and height ratio of the screen become mismatched with that of the Stage.
The four figures below show Flash content in full screen mode before and after screen orientation changes.
Figure 1: Full screen landscape view with Stage.fullScreenSourceRect set to landscape ratio
Figure 1 and Figure 3 show Flash content that is initially made full-screen in landscape view and portrait view respectively.
Figure 2: After screen orientation change from Figure 1
In Figure 2, the control bar and video occupy only the middle portion of the screen. Figure 2 also shows landscape full-screen content after the screen orientation change to portrait mode and the Stage is scaled to fit into full screen portrait view.
A comparison of Figure 2 and Figure 3 shows a difference whereby Flash occupies a part of the screen in Figure 2 but occupies the whole screen in Figure 3. In detail, in Figure 3 the control bar is placed at the bottom of the screen and video at the middle of the screen, thus occupying the whole screen.
Figure 4: After screen orientation change from Figure 3
Figure 4 demonstrates a more drastic mismatch between the content Stage and the device screen, which occurs when a portrait full-screen content is shown in a landscape view.
When Stage.fullScreenSourceRect is not defined
If Stage.fullScreenSourceRect is not defined and the content uses NO_SCALE scale mode, the width and height of the Stage change when screen orientation changes and the Stage.resize event is dispatched to the content. However, if the content does not re-layout the content in response to the Stage.resize event, portions of the content could be clipped.
Figure 5: Full screen video in landscape mode, Stage.fullScreenSourceRect is not defined.
Figure 6: After screen orientation change from Figure 5
Figure 7 Overlapped view of Figure 5 and 6
Figure 8 Overlapped view of full screen portrait video after screen orientation change to landscape mode
If content uses scale mode other than no scale mode (i.e., exact fit, no border or show all), the width and height of Stage does not change when the screen orientation changes. Thus, in this scenario, unintended content distortion does not occur necessarily but is still possible. Appropriate testing and modification may be needed for such content.
Content is playing in normal mode (embedded mode)
In normal mode, Flash content occupies the same width and height of Stage and screen even when screen orientation changes. Clipping or other distortions of FP content do not happen in normal mode. A notification of screen orientation change is not provided when the content is playing in normal mode.
The following examples demonstrate the modifications that are needed for full-screen FP contents to function correctly on devices that support multiple screen orientations.
If content defines Stage.fullScreenSourceRect
public class StageExample extends Sprite
{
    public function StageExample()
    {
        stage.addEventListener(Event.RESIZE, resizeHandler);
    }

    public function goToFullscreen()
    {
        stage.fullScreenSourceRect = new Rectangle(0, 0, myWidth, myHeight);
        stage.displayState = StageDisplayState.FULL_SCREEN;
    }

    private function resizeHandler(event:Event):void
    {
        switch(stage.displayState)
        {
            case StageDisplayState.NORMAL:
                // Check stage.width and stage.height and layout objects
                break;
            case StageDisplayState.FULL_SCREEN:
                // Check stage.fullScreenWidth and stage.fullScreenHeight
                // Set stage.fullScreenSourceRect to new width and new height
                // It is recommended that the ratio of new width and new
                // height is much the same with that of stage.fullScreenWidth
                // and stage.fullScreenHeight for the best utilizing of
                // full screen
                stage.fullScreenSourceRect = new Rectangle(0, 0, newWidth, newHeight);
                // place objects within new width and new height
                break;
        }
    }
}
If content does not define Stage.fullScreenSourceRect
public class StageExample2 extends Sprite
{
    public function StageExample2()
    {
        stage.addEventListener(Event.RESIZE, resizeHandler);
    }

    public function goToFullscreen()
    {
        stage.displayState = StageDisplayState.FULL_SCREEN;
    }

    private function resizeHandler(event:Event):void
    {
        switch(stage.displayState)
        {
            case StageDisplayState.NORMAL:
                // Check stage.width and stage.height
                // layout objects within stage.width and stage.height
                break;
            case StageDisplayState.FULL_SCREEN:
                // Check stage.fullScreenWidth and stage.fullScreenHeight
                // layout objects within stage.width and stage.height
                // re-layout is more important if scale mode is no scale
                break;
        }
    }
}
A base class reference-counting view of some image data.
#include <vcl_iosfwd.h>
#include <vcl_string.h>
#include <vcl_cassert.h>
#include <vcl_cstddef.h>
#include <vil/vil_pixel_format.h>
#include <vil/vil_smart_ptr.h>
A base class reference-counting view of some image data.
Modifications: 10 Sep. 2004, Peter Vanroose: Inlined all 1-line methods in class decl
Definition in file vil3d_image_view_base.h.
An interface between vil3d_image_views and vil3d_image_resources.
This object is used internally by vil to provide a type-independent transient storage for a view as it is being assigned to a vil3d_image_view<T> from a vil3d_image_resource::get_view(), vil3d_load() or vil3d_convert_..() function call. If you want a type independent image container, you are recommended to use a vil3d_image_resource_sptr
Definition at line 103 of file vil3d_image_view_base.h.
Print a 1-line summary of contents.
Definition at line 107 of file vil3d_image_view_base.h. | http://public.kitware.com/vxl/doc/release/contrib/mul/vil3d/html/vil3d__image__view__base_8h.html | crawl-003 | refinedweb | 147 | 54.29 |
My institute offers students results, library services, and data server services on their LAN by providing the IP addresses of the machines holding the respective data. However as time passes these IP addresses change.
So I want to establish a mechanism where students could access these services through names instead of using their IP addresses directly. In the event that the IP changes for these machines, the name could be updated behind the scenes with the new address.
I tried to search for the solution over the Internet, and I found bind9.
Would that be the right solution? If not then what else? And if so then guide me through the process.
I think using bind9 would require me to make some changes not only on the server side, but also on the client side, am I right or wrong?
The precise solution will depend on whether you use Windows or Linux based severs. In either case, however you will probably want to set up
- a DNS server to map domain names to IP addresses
- a DHCP server to hand out IP addresses and details of the DNS server
You can then either get your clients to register their addresses with the DNS server or get the DHCP server register on their behalf.
If you provide more details on the servers you are running, a more detailed answer can be given
Yes, setting up a bind (or other DNS) server is the right solution. Then your important services will have static names no matter where in the IP namespace they get assigned. Your network will need to be adapted a little: the DHCP servers will need to hand out your DNS server in the nameservers field, so that clients that connect will use your new server to resolve names to numbers. You should have at least two nameservers set up, a primary and a secondary.
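As a rough sketch of what the DNS side could look like: a minimal forward zone file for bind9. Every name and address here is invented for illustration; substitute your own domain and the real IPs of your services.

```
; /etc/bind/db.example.lan -- hypothetical internal zone
$TTL 1h
@   IN SOA ns1.example.lan. admin.example.lan. (
        2011042101 ; serial (bump on every change)
        1h         ; refresh
        15m        ; retry
        1w         ; expire
        1h )       ; negative-answer TTL
    IN NS  ns1.example.lan.

ns1        IN A 10.0.0.2
results    IN A 10.0.0.10
library    IN A 10.0.0.11
dataserver IN A 10.0.0.12
```

When a service moves to a new machine, you change only its A record (and the serial); students keep using the same name.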
Tim Wilson wrote:
> I'm still a little intimidated by the OOP features that Python offers.
> I should probably just bite the bullet and dig in and learn it.

For starts, you might try thinking of a class instance as a Python dictionary. For instance the dictionary:

# make a dictionary
aFoo = {}
# fill dictionary with data
aFoo['one'] = 1
aFoo['two'] = 2
aFoo['three'] = 3

is quite similar to the class:

# make a class (and also say somehow how the data inside is structured)
class Foo:
    def __init__(self, one, two, three):
        self.one = one
        self.two = two
        self.three = three

# fill an instance of Foo (aFoo) with data
aFoo = Foo(1, 2, 3)

Of course, initialization in __init__ isn't necessary, you could do:

class Foo:
    pass # empty class

aFoo = Foo()
aFoo.one = 1
aFoo.two = 2
aFoo.three = 3

But the nice thing about a class (as compared to a dictionary) is that it's easy to add functions to a class that actually do something with the data stored inside (for instance, do some calculation). By initializing (lots of) the data in an __init__ function, you're sure these data members are there during the class's lifetime, as __init__ is always executed when the class is created:

class Foo:
    def __init__(self, one, two, three):
        self.one = one
        self.two = two
        self.three = three

    def add_all(self):
        return self.one + self.two + self.three

aFoo = Foo(1, 2, 3)
print aFoo.add_all() # prints 6

aBar = Foo(3, 3, 3)
print aBar.add_all() # prints 9

Of course, it's possible to write functions that do this with dictionaries, but classes add some nice extra structure that is helpful. They can be useful to bundle data and the functions that operate on that data together, without the outside world needing to know what's going on exactly inside the data and the functions (data hiding). Classes of course also offer inheritance, but that's for later.

> I think I understand how a list would be useful to store
> the atoms until the total mass can be calculated.
> I don't see where you
> parse the user input here.
> I'll be more specific:
> How will it know the difference between CO (carbon monoxide)
> and Co (a cobalt atom)?

Hmm. You have this dictionary with as the keys the various element names (and as value their mass), let's call it 'elements', and a string describing a molecule called 'molecule'. An approach may be:

# untested code! Doesn't do boundary checking! is probably slow!
def getWeightForNextElement(molecule, elements):
    # assumption: all elements are a maximum of two characters, where the first
    # is a capital, and the second is lowercase
    # if 'molecule' doesn't start with an atom identifier at all we return 'None'
    if molecule[0] not in string.uppercase:
        return None
    if molecule[1] not in string.lowercase:
        # okay, we're dealing with a single char element, look it up:
        if elements.has_key(molecule[0]):
            return (elements[molecule[0]], 1) # return weight and length of what we read
        else:
            return None # not a known element
    else:
        # okay, we're dealing with a two char element:
        if elements.has_key(molecule[0:2]):
            return (elements[molecule[0:2]], 2) # return weight and length of str we read
        else:
            return None # not a known element

This function, if it works at all :), could be fed with a molecule string. If the function doesn't return None and thus recognizes the weight, it'll return the weight value, and the length (1 or 2) of the characters it read. You can then strip those characters from the front of the string, feed in the string again, and get the weight of the next character, until you have read the whole string. Of course it doesn't work with '()' or numbers yet.
> How will the program be able to figure out how many atoms of each type
> are in a molecule like (NH4)3PO4?

Numbers first. You can adapt the previous function (better rewrite it anyway, it was just a bad first attempt :) so that it recognizes if digits are involved (string.digits). What it should do is that as soon as it encounters a digit, it scans if any digits follow. It'll compose a string of all digits read. It then should convert (with int(), let's say) this string to an actual amount. It should then somehow notify the calling function that it read a number, and the value of this number. Since the function doesn't support this I suggested rewriting. Better perhaps to do any weight calculations later anyway, and just return the elements read (not their weights), too.

Parenthesis. You could do something like this:

* If you read a '(' parenthesis, create a new empty list of elements, and add this to the list of elements read.
* Do this whenever you see another '(' (a nested '('). So you might get a list nested in a list nested in a list, if you have a lot of ((())) stuff.
* Until you read a ')' parenthesis, add any read elements to the current list (or their mass). If you read numbers, of course do the right multiplications, or simply add as many elements to the current list as the number indicates.

When you've read the string (and the string makes sense syntactically; doesn't contain unknown elements or weird parenthesis such as '(()'), you'll end up with a big master list of elements (or mass) that may contain sublists of a similar structure. Now you want to add the weight of it all:

# untested!
def add_everything(master_list, elements):
    sum = 0.0
    for el in master_list:
        if type(el) is types.ListType:
            # recursion; add everything in the sublist and add it to the master sum
            sum = sum + add_everything(el, elements)
        else:
            sum = sum + elements[el]
    return sum

A whole long post. I hope I'm making sense somewhat and that it helps. :) Please let us all know!

Regards,

Martijn
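A present-day aside, not part of the original mail: the scanning step discussed above (telling CO apart from Co, and splitting out digits and parentheses) can be done compactly in modern Python with a single regular expression.

```python
import re

# One capture group: a one- or two-letter element symbol, a run of digits,
# or a parenthesis. findall then yields the token stream in order.
TOKEN = re.compile(r'([A-Z][a-z]?|\d+|[()])')

def tokens(formula):
    """Split a chemical formula string into tokens."""
    return TOKEN.findall(formula)

print(tokens("(NH4)3PO4"))
# -> ['(', 'N', 'H', '4', ')', '3', 'P', 'O', '4']
```

Because `[A-Z][a-z]?` is greedy, "Co" comes back as one token (cobalt) while "CO" comes back as two (carbon, oxygen) - exactly the distinction the question asks about.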
Computer Science Archive: Questions from August 25, 2008
- Anonymous asked: Given a 16-bit parallel output port attached to the FALCON-A CPU as shown in the figure below. The port is mapped onto address DEh of the FALCON-A's I/O space.
Sixteen LED branches are used to display the data being received from the FALCON-A's data bus. Every LED branch is wired in such a way that when a 1 appears on the particular data bus bit, it turns the LED on; a 0 turns it off.
(a) Which LEDs will be ON when the instruction
out r1, 123
executes on the CPU? Assume r1 contains A2C9h. Briefly explain your answer.
(b) Identify the changes needed to map the above output port at address D0h and D1h of the FALCON-A's I/O space (instead of DEh and DFh).
0 answers
- Anonymous asked: How would you write a function such that it removes an existing structure from the linked list of structures created by program 10-11?
While running, you should be able to delete one of the user inputs.
the result should be like:
the list of record is
bob white (843)123-233
john doe (976)123-564
which record do you wish to remove : 2
the list is now
bob white (843)123-233
please help thank you
//program 10-11
#include <iostream.h>
#include <iomanip.h>
const int MAXNAME = 30; // maximum no. of characters in aname
const int MAXTEL = 16; // maximum no. of characters ina telephone number
const int MAXRECS = 3; // maximum no. of records
struct TeleType
{
char name[MAXNAME];
char phoneNo[MAXTEL];
TeleType *nextaddr;
};
void populate(TeleType *); // function prototype needed by main()
void display(TeleType *); // function prototype needed by main()
int main()
{
int i;
TeleType *list, *current; // two pointers to structures of
// type TeleType
// get a pointer to the first structure in the list
list = new TeleType;
current = list;
cout << endl;
// populate the current structure and create the remaining structures
for(i = 0; i < MAXRECS - 1; i++)
{
populate(current);
current->nextaddr = new TeleType;
current = current->nextaddr;
}
populate(current); // populate the last structure
current->nextaddr = NULL; // set the last address to a NULL address
cout << "The list consists of the followingrecords:\n";
display(list); // display the structures
cout << endl;
return 0;
}
// input a name and phone number
void populate(TeleType *record) // record is a pointer to a
{ // structure of type TeleType
cout << "Enter a name: ";
cin.getline(record->name,MAXNAME);
cout << "Enter the phone number: ";
cin.getline(record->phoneNo,MAXTEL);
return;
}
void display(TeleType *contents) // contents is a pointer to a
{ // structure of type TeleType
while (contents != NULL) // display till end of linked list
{
cout << endl << setiosflags(ios::left)
<< setw(30) << contents->name
<< setw(20) << contents->phoneNo;
contents = contents->nextaddr;
}
cout << endl;
return;
}
1 answer
- Anonymous askedPlease cansomeone help me fix the Error problem on the program below (blue ). Anytime i run the prog... Show morePlease cansomeone help me fix the Error problem on the program below (blue ). Anytime i run the program ikeep getting Error. PLease helpQurestion....>Write a program calledFourRectanglePrinter that constructs a Rectangle object, prints itslocation by calling System.out.println(box), and then translatesand prints it three more times, if the rectangles were drawn, theywould form one large retangle:ANSwer
import java.applet.*;
import java.awt.*;
import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JComponent;
public class FourRectanglePrinter
extends Applet
{
public static Rectangle box;
public void init()
{
box = new Rectangle(20,20,40,40);
System.out.println(box);
box.translate(40,0);
System.out.println(box);
box.translate(0,40);
System.out.println(box);
box.translate(-40,0);
System.out.println(box);
}
public voidpaint(Graphics g)
{
g.drawRect((int)box.x,(int)box.y,(int)box.getWidth(),(int)box.getHeight());
g.drawRect((int)box.x+40,(int)box.y,(int)box.getWidth(),(int)box.getHeight());
g.drawRect((int)box.x+40,(int)box.y+40,(int)box.getWidth(),(int)box.getHeight());
g.drawRect((int)box.x,(int)box.y+40,(int)box.getWidth(),(int)box.getHeight());
}
}1 answer
- Anonymous asked (1 answer)
- Anonymous asked (1 answer)
- Anonymous asked: Implement a combination lock class. A combination lock has a dial with 26 positions labeled A...Z. The dial needs to be set three times. If it is set to the correct combination, the lock can be opened. When the lock is closed again, the combination can be entered again. If a user sets the dial more than three times, the last three settings determine whether the lock can be opened. An important part of this exercise is to implement a suitable interface for the CombinationLock class. 0 answers
- Anonymous asked: The CashRegister class has an unfortunate limitation: it is closely tied to the coin system in the United States and Canada. Research the system used in most of Europe. Your goal is to produce a cash register that works with euros and cents. Rather than designing another limited CashRegister implementation for the European market, you should design a separate Coin class and a cash register that can work with coins of all types. 0 answers
- Anonymous asked (1 answer)
- Anonymous asked: Write a program that takes the length and width of a rectangular yard and the length and width of a ... 1 answer
- Anonymous asked (0 answers)
- Anonymous asked: Implement a class Product. A product has a name and a price, for example new Product("Toaster", 29.95). Supply methods getName, getPrice, and reducePrice. Supply a program ProductPrinter that makes two products, prints the name and price, reduces their prices by $5.00, and then prints the prices again. Below is my program but I am getting an error. Please help me fix it.
import java.lang.*;
class Product
{
String name;
double price;
Product (String myName, double myPrice)
{
name = myName;
price = myPrice;
} // end constructor
public String getName ()
{
return name;
} // end method getName
public double getPrice ()
{
return price;
} // end method getPrice
public void reducePrice ( double discount)
{
price -= discount;
} // end method reducePrice
} /// end class product
class Test
{
public static void man (String args[])
{
// instantiate two products
Product p1 = new Product ("Toaster", 29.99);
Product p2 = new Product ("Oven", 49.99);
// print the name and price of the product
System.out.println("\n\n\tThe product1 details are asfollows"
+ "\n\tName : " + p1.getName() + "\n\tPrice : " +p1.getPrice());
// print the name and price of the product
System.out.println("\n\n\tThe product2 details are asfollows"
+ "\n\tName : " + p2.getName() + "\n\tPrice : " +p2.getPrice());
// reduce the price of the products
p1.reducePrice( 5.0 );
p2.reducePrice( 5.0 );
// print the reduced price of the products
System.out.println("\n\n\tReduced price of product1 is" + p1.getPrice());
System.out.println("\n\n\tReduced price of product2 is" + p2.getPrice());
} // end the method
} // end class Test
1 answer
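A guess at the error, based only on reading the code above: the entry point is spelled "man" instead of "main", so the JVM cannot find a main method to run. A minimal corrected version (Product trimmed to the essentials, ProductPrinter named as the exercise asks) might look like:

```java
// Minimal corrected sketch; field access tightened to private as a side note.
class Product {
    private String name;
    private double price;

    Product(String name, double price) {
        this.name = name;
        this.price = price;
    }

    String getName() { return name; }

    double getPrice() { return price; }

    void reducePrice(double discount) { price -= discount; }
}

public class ProductPrinter {
    public static void main(String[] args) {   // must be spelled "main"
        Product p1 = new Product("Toaster", 29.99);
        Product p2 = new Product("Oven", 49.99);
        System.out.println(p1.getName() + ": " + p1.getPrice());
        System.out.println(p2.getName() + ": " + p2.getPrice());
        p1.reducePrice(5.0);
        p2.reducePrice(5.0);
        System.out.println(p1.getName() + ": " + p1.getPrice());
        System.out.println(p2.getName() + ": " + p2.getPrice());
    }
}
```

If the original file is kept, renaming man to main (and matching the file name to the public class) should be enough to get it running.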
#include <MemoryFree.h>

void setup() {
  Serial.begin(9600);
}

void loop() {
  int n = 0;
  while(n <= 100) {
    int u = 20*n;
    int v = 30*n;
    String str = String(n) + ", " + String(u) + ", " + String(v) + ", " + String(freeMemory());
    Serial.println(str);
    delay(500);
    n++;
  }
  Serial.println("Done");
}
1. Is there a library to download to replace the broken one?
2. If not, is there a workaround?
3. If nothing helps, do I have to surrender and stop using String?
If I use the patch, do I have to implement it then on every single file in which I use String or do I use the "working String" automatically?
Is it a community-made one or an official one?
Don't want to lose my warranty
Btw, is there any thank-you function?
#include <MemoryFree.h>

void setup() {
  Serial.begin(9600);
}

void loop() {
  for (int n = 0; n <= 100; n++) {
    int u = 20*n;
    int v = 30*n;
    Serial.print(n, DEC);
    Serial.print(", ");
    Serial.print(u, DEC);
    Serial.print(", ");
    Serial.print(v, DEC);
    Serial.print(", ");
    Serial.println(freeMemory(), DEC);
    delay(500);
  }
  Serial.println("Done");
}
but I'm working with serialEvent methods... that won't work... every print will call it another time.
I haven't found the proper patch yet but if there's one, I'll find it at all cost TY though!
Problem with building opencv_contrib. Cannot find file face.hpp.
I want to use FaceRecognizer class. As far as I know, it's not present in OpenCV 3.* versions, so I need to build opencv_contrib repository to be able to use it in my projects.
Unfortunately something went wrong, I've built it, but I still can't find
face.hpp file in libraries folder. Here is what I've done to build it:
This is where I installed OpenCV and how it looks like:
This is where I've copied
opecv_contrib and how it looks like on my drive:
This is how I configured cmake-gui to build the new libraries:
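(As an aside, a rough command-line equivalent of that cmake-gui setup; the paths are illustrative and should be replaced with your actual checkout locations:)

```
cmake -D OPENCV_EXTRA_MODULES_PATH=C:/OpenCV/opencv_contrib/modules ^
      -D BUILD_opencv_face=ON ^
      C:/OpenCV/sources
```

OPENCV_EXTRA_MODULES_PATH is the setting that pulls the contrib modules, including the face module, into the build.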
When I try to include face.hpp in my code:
#include "opencv2/face.hpp"
I see:
cannot open source file "opencv2/face.hpp" and
identifier "FaceRecognizer" is undefined.
Am I doing something wrong?
it works with #include "opencv2/face.hpp" or with #include <opencv2/face.hpp>
Have you check that BUILD_opencv_face is checked ? and BUILD_opencv_objdetect is checked too ?
@LBerger If you are talking about options in cmake-gui, then: yes, they are all checked, but BUILD_opencv_face is highlighted in red. Does it mean something?
Yes it is in CMake GUI. About red I don't know. Have you build opencv with samples? if yes check if facerec_demo.cpp samples is OK.
@LBerger no, it's not, because, like in my project, compiler can't find "opencv2/face.hpp"
To build opencv you have to clone it from github
Its simple, if something is red in cmake then it means something is wrong. You are lucky, for OpenCV 3 Blueprints we also used the face module, so we have a tutorial completely describing the configuration. Can you take a look there?
I'm not sure that if something is red something is wrong using CMake 3.7.1. When I leave cmake everything is white and when I reopen cmakelists everything is red
@StevenPuttemans: I've followed tutorial that you've linked. I deleted all opencv libs and I've built everything from scratch. Now, my Visual Studio doesn't seem to see any of opencv libraries. Not only face.hpp but all of them. As far as I know, all libraries should be present in include folder, they aren't...
BTW, do I need to build it myself? Why didn't the OpenCV team build it and share it, just like they did with the "base" OpenCV here? I'm not a power user, I just want to use OpenCV in one of my projects.
"This repository is intended for development of so-called "extra" modules, contributed functionality. New modules quite often do not have stable API, and they are not well-tested. Thus, they shouldn't be released as a part of official OpenCV distribution, since the library maintains binary compatibility, and tries to provide decent performance and stability." | https://answers.opencv.org/question/121310/problem-with-building-opencv_contrib-cannot-find-file-facehpp/ | CC-MAIN-2019-39 | refinedweb | 478 | 77.13 |
RPC::ExtDirect::Event - Asynchronous server-to-client events
use RPC::ExtDirect; use RPC::ExtDirect::Event; sub foo : ExtDirect(pollHandler) { my ($class) = @_; # Do something good, collect results in $good_data my $good_data = { ... }; # Do something bad, collect results in $bad_data my $bad_data = [ ... ]; # Return the data as a list (not arrayref!) return ( RPC::ExtDirect::Event->new('good', $good_data), RPC::ExtDirect::Event->new( name => 'bad', data => $bad_data, ), ); }
This module implements Event object that is used to send asynchronous events from server to client via periodic polling.
Data can be anything that is serializable to JSON. No checks are made and it is assumed that client side can understand the data format used with Events.
Note that by default JSON will blow up if you try to feed it a blessed object as data payload, and for very good reason: it is not obvious how to serialize a self-contained object. To avoid this, set a global Config option json_options to include
allow_blessed flag:
my $config = RPC::ExtDirect->get_api->config; $config->json_options({ allow_blessed => 1, });
new
Constructor. Creates a new Event object with event name and some data. Accepts arguments by position as
new($name, $data), as well as by name in a hash or hashref:
my $event1 = RPC::ExtDirect::Event->new( 'foo', 'bar' ); my $event2 = RPC::ExtDirect::Event->new({ name => 'foo', data => 'bar', }); my $event3 = RPC::ExtDirect::Event->new( name => 'foo', data => 'bar' );
This makes it easier to extend Event objects in a Moose(ish) environment, etc.
run
Instance method. Not intended to be called directly, provided for duck typing compatibility with Exception and Request objects.
result
Instance method. Returns an Event hashref in format required by Ext.Direct client stack. Not intended to be called directly. | http://search.cpan.org/~tokarev/RPC-ExtDirect-3.21/lib/RPC/ExtDirect/Event.pod | CC-MAIN-2015-32 | refinedweb | 282 | 55.64 |
Hi,
I need some help making a program that outputs change in the least amount of dollars and coins. Here is what I have so far but it doesnt work correctly and I don't know why. Thanks.
Code:#include <iostream> using namespace std; int main() { double charged, given, difference; cout << "Enter amount charged:\n"; cin >> charged; cout << "Enter amount given:\n"; cin >> given; difference = given - charged; double money[10] = {100,50,20,10,5,1,.25,.10,.05,.01}; double change[10] = {0,0,0,0,0,0,0,0,0,0}; for(int i =0; i<10; i++) { while(money[i] < difference) { change[i]++; difference -= money[i]; } } for(int i = 0; i<10; i++) if(change[i] != 0) cout << "$" << money[i] << ": " << change[i] << "\n"; } | http://cboard.cprogramming.com/cplusplus-programming/99262-make-change-program-help.html | CC-MAIN-2013-48 | refinedweb | 126 | 78.59 |
You can subscribe to this list here.
Showing
3
results of 3
Hello,
I am using Player 3.0.2 and Stage 4.1.1. I am having problem in running the
bigbob code (.cc file) from jennifer owen's tutorial. I have made necessary
changes for the rangerproxy everywhere in cfg and world file according to
stage 4.1.1. My cfg file is running perfect. But when I am trying to run the
.cc file which has simple reading of GetElementCount then it is giving me
zero. Please help me out to sort this problem.
The .cc code is attached below:
******************************************************************************
#include <stdio.h>
#include <libplayerc++/playerc++.h>
#include<time.h>
#include<unistd.h>
#include<iostream>
using namespace PlayerCc;
using namespace std;
void Wander(double *, double * );
int main(int argc, char *argv[])
{
/*need to do this line in c++ only*/
//using namespace PlayerCc;
PlayerClient robot("localhost");
Position2dProxy p2dProxy(&robot,0);
*RangerProxy sonarProxy(&robot,0); //for sonar*
BlobfinderProxy blobProxy(&robot,0);
*RangerProxy laserProxy(&robot,1); // for laser*
p2dProxy.SetMotorEnable(1);
p2dProxy.RequestGeom();
sonarProxy.RequestGeom();
laserProxy.RequestGeom();
double forwardSpeed, turnSpeed;
uint32_t count;
//double range;
srand(time(NULL));
//some control code
robot.Read();
while(true)
{
// read from the proxies
robot.Read();
//wander
Wander(&forwardSpeed, &turnSpeed);
*count = sonarProxy.GetElementCount();*
//range = laserProxy.GetRange(1);
*printf("count = %d\n",count);*
p2dProxy.SetSpeed(forwardSpeed, dtor(turnSpeed));
sleep(1);
}
return 0;
}
void Wander(double *forwardSpeed, double *turnSpeed)
{
int maxSpeed = 1;
int maxTurn = 90;
double fspeed, tspeed;
//fspeed is between 0 and 10
fspeed = rand()%11;
//(fspeed/10) is between 0 and 1
fspeed = (fspeed/10)*maxSpeed;
tspeed = rand()%(2*maxTurn);
tspeed = tspeed-maxTurn;
//tspeed is between -maxTurn and +maxTurn
*forwardSpeed = fspeed;
*turnSpeed = tspeed;
}
*******************************************************************************
The following is the output which I get:
playerc warning : warning : [Player v.3.0.2] connected on [localhost:6665]
with sock 4
count = 0
count = 0
count = 0
count = 0
count = 0
count = 0
count = 0
count = 0
...
...
******************************************************************************
Thanks in advance.
--
View this message in context:
Sent from the playerstage-users mailing list archive at Nabble.com.
We all need the right to obtain the specifics of stryker total knee
replacement recalls .
My uncle appeared to be a pianist who might possibly sight-read the the
majority of famously complex masterpieces.I am so thankful to him to get his
continuous encouragement for the period of my child years.
All of us all have the right to see the facts of stryker total knee
replacement recalls .Many of us children enjoyed him due to the fact he
didnt at all times act his age.He shared with me mountain climbing can
certainly build up my muscles like Popeyes.He instructed me the body is the
capital of revolution.Having his guide, I have a technological physical
exercise of basic skills, to build a excellent habit.As the years pass, he
is never strong enough.As age has set in, his left leg has become
increasingly hard to move on.Today, the replacement of artifical joints
treatment severity arthropathy is standard operation of clinic, the health
care professional advised him got a total knee replacement.Unfortunately for
me, however he is definitely so solid that he prefer to follow the doctor’s
guidance to get the total knee replacement.He shortly gets over his illness,
I am very pleased for him.He is worried about his Stryker knee these days,
because it is said that there are recalls on the Stryker knee.
Most of us all need the right to get the facts of stryker total knee
replacement recalls . Next I build a website here
<>
to make valuable articles and other content of Stryker knee replacement to
help men and women that are interested. So I really desire my uncle enjoy
it, and helpful to him, and also to other folks who seem to like him.
Precisely why we require Stryker knee replacement?
Stryker is one of the world’s leading medical technology companies.It is
fortune 500 organization whose products are offered for sale in 89 countries
around the globe and that has 20000 staff members worldwide.These people
have been constantly nominated for ideal innovative business in their class
for numerous years. Their annual turnover is over 7.5 billion US.
The business is actually focused on help knee doctors and other health care
experts perform their work in most valuable method and increase patient
satisfaction.
The Company provides a large array of innovative health related systems, for
example reconstructive, medical and surgical including stryker knee
replacement, neurotechnology and spine solutions for helping folks lead even
more effective and more satisfying lives.
Okay thus I noticed a short while ago that my best OS is likely to be making
use of the Stryker Triathlon. Any time questioned exactly why he consider
this kind of device his response was first " It is the one It's my opinion
will probably be ideal for you", he mentioned that it would be a good option
as a result of my age (36) additionally, the energetic lifestyle that I
live. He began to explain that through clinical scientific studies of TKR's
he says Stryker Tri's would have been a great fit for longevity as well as
ROM. I have no doubt about it is not going to last my life span but if I
could possibly get 15-20 years I will be greater than happy. If anybody
would choose to talk about a few comments positive or negative it would be
really appreciated.
SO SORRY to point out this, although I really hope that these data were
thinking about the life span of the folks. If the average age of TKR is
55-65 and the common life time (in US) is 85, how many of the sufferers were
not around to determine how much time their knees could actually have
lasted? And similar to Jo said most Dr's retire as soon as they may wich
mathematically would be about 25-30 years thinking about they've been an OS
doing TR's their particular total practicing career. Did I merely confuse
most people? I hope I truly do not offend a person with this unique content!
--
View this message in context:
Sent from the playerstage-users mailing list archive at Nabble.com.
Dear Mr/Ms,
How can i execute three player server together? The first, at port
6665, has these drivers: mapfile, stage, vfh. The second, at port 6667,
has wavefront driver. The third, at port 6668, has amcl driver. I don't
know what is the terminal command for the problem.
With regards, E.S. | http://sourceforge.net/p/playerstage/mailman/playerstage-users/?viewmonth=201305&viewday=23 | CC-MAIN-2014-23 | refinedweb | 1,096 | 64 |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
On 13/02/2013 at 02:10, xxxxxxxx wrote:
Sorry for super-basic questions, but I´m stucked...
Learning Python without any programming background is giving me a hard time, although a Python for Children book was a great help already
Understanding many easy things now, I want to make a first plugin. Nothing of real use, just the same thing a simple user Script can do in plugin form.
I got so far, that my plugin loads, but no way I could get it to work by using it. Best I got were no errors messages
import c4d
from c4d import plugins
import os
PLUGIN_ID = 10000010
doc = c4d.documents.GetActiveDocument()
obj = doc.GetActiveObject()
class Starter(c4d.plugins.CommandData) :
def Execute(self, doc) :
obj.SetAbsPos(c4d.Vector(111.0, 22.0, 22.0))
c4d.EventAdd()())
And if anybody knows a good tutorial on plugin creation, or a very well documented small plugin, I´d be happy
Thanks
On 13/02/2013 at 04:06, xxxxxxxx wrote:
So what's your actual problem? There's nothing specific in your question, actually there isn't
even a question-mark.
PS: The two lines after PLUGIN_ID = 10000010 are senseless, the result will not be what you
expect it to be when your plugin executes. You probably want to use doc.GetActiveObject() from
withtin the Execute() method.
-N
On 13/02/2013 at 06:52, xxxxxxxx wrote:
Sorry next time I´ll be more specific...
Problem was that I had no real clue why it wasn´t working, not a problem with a user script.
But you pointing out the wrong position of the variables already helped.
Now I got it to work, and I learned lots, but I still don´t understand why I have to put the variables two times in there?
Probably missing some real basic concept here...
import c4d
import random
from c4d import plugins
import os
PLUGIN_ID = 10000010
class Starter(c4d.plugins.CommandData) :
doc = c4d.documents.GetActiveDocument()
obj = doc.GetActiveObject() #Why define Variables here and under Execute again?
rnd = random.random()
def Execute(self, doc) :
true_v = True
obj = doc.GetActiveObject()
rnd = random.random()
if obj != None:
obj.SetAbsPos(c4d.Vector((rnd*100), 0, 0))
else:
print ("Nothing")
c4d.EventAdd()
return true_v())
On 13/02/2013 at 07:13, xxxxxxxx wrote:
You can remove the 3 lines after class Starter(c4d.plugins.CommandData) :.
On 13/02/2013 at 07:36, xxxxxxxx wrote:
Ohoh I should stop for today
In this older version without an if I had to keep them both though:
class Starter(c4d.plugins.CommandData) :
true_v = True
# doc = c4d.documents.GetActiveDocument()
# obj = doc.GetActiveObject()
# rnd = random.random()
print obj
def Execute(self, doc) :
obj = doc.GetActiveObject()
rnd = random.random()
obj.SetAbsPos(c4d.Vector((rnd*100), 0, 0))
c4d.EventAdd()
return self.true_v
Anyway, thanks for the patience
On 13/02/2013 at 07:42, xxxxxxxx wrote:
Just out of curiousity: Why do you use the true_v variable, when it is never assigned another
value but True?
This should work fine as well:
class Starter(c4d.plugins.CommandData) :
def Execute(self, doc) :
obj = doc.GetActiveObject()
if not obj:
print "Nothing"
return True
rnd = random.random()
obj.SetAbsPos(c4d.Vector((rnd*100), 0, 0))
c4d.EventAdd()
return True
On 13/02/2013 at 08:15, xxxxxxxx wrote:
Google the word "scope". This will tell you why you had to have the same code in two different places.
-ScottA
On 13/02/2013 at 08:51, xxxxxxxx wrote:
is was not so much a scope problem, more a time of execution problem. obj should
have been always None for his first example, unless he started the c4d with an saved
document which already contained a selection. doc would also have been always the
startup document. or what do you mean with scope in that case ?
there are also some special methods in python which are defined by a double underline
pre- and postfix. you should take a look into them. __init__ can be used in a way you
would normally use a class constructor. it helps to make visually more clear which
variables are visible for all members of the class and that alle values defined here are
tied to the time of instantiation (which would be for a plugin class the start of c4d).
On 13/02/2013 at 23:53, xxxxxxxx wrote:
Thanks everybody, I still struggle understanding classes, I got why not to put variables before the class, and thanks for pointing out returning True instead of a senseless variable. Although I´m sure I tried that
Now I tried to make use of __init__ but doesn´t matter what, it doesn´t work:
For the understanding, when the class is called, __init__ assigns variables within this class?
I read up about it but still...
So what I did wrong here? Error message in red... Or is it complete nonsense?
class Starter(c4d.plugins.CommandData) :
def __init__ (self) :
self.doc_a = c4d.documents.GetActiveDocument()
self.obj =
On 14/02/2013 at 00:21, xxxxxxxx wrote:
Hi Schnups,
I'm wondering where you are reading about classes, it obviously is teaching you wrong. In Python,
you can assign any variable to your own class, not necessarily in the __init__() method which is
only called when the instance has just been created. But when you assign the instance the
variable, you also have to read it from it.
class Foo(object) :
def __init__(self, abc) :
self.abc = abc
def get_abc(self) :
return self.abc
def get_other(self) :
return self.other
f = Foo("hello schnupsi") # creates a new instance and calls __init__() on it
print f.abc # Prints "hello schnupsi"
print f.get_abc() # Prints "hello schnupsi"
print f.get_other() # This will NOT WORK, `f` does not have an attribute `other`
f.other = "other text"
print f.other # Prints "other text"
print f.get_other() # Prints "other text"
You should really lay down the book you're currently reading at take a look at the official Python
tutorial, which is imho the best place to start programming with Python.
See:
PS: I still don't understand why you want to use the document that is active at the time
an instance of your CommandData subclass is created, which is at registration-time. The document
you obtain might either not be valid at the time Execute() is called (which is when you click on your
plugin in Cinema's GUI) or simply not be the active document anymore.
-Niklas
On 14/02/2013 at 01:17, xxxxxxxx wrote:
"hello schnupsi"
On 14/02/2013 at 01:35, xxxxxxxx wrote:
Uups, had your name wrong in mind while writing that.
On 14/02/2013 at 01:47, xxxxxxxx wrote:
Ahh, thanks Niklasi that helped...
I didn´t want to get the variables at creation time. I thought before, the class is somehow created while executing...
So, if I´ve got it right: The class, as everything before and after, is being "executed" while plugin loading up. Putting a function, variable etc. here, you want to execute by using the plugin doesn´t make sense.
Only the execute triggers its defined code, but I can also use other pre-defined functions in here too, right?
I guess the jump from little user scripts to the interface plugins I plan is a bit high.
The Python Tutorial is on my scope now first...
On 14/02/2013 at 04:07, xxxxxxxx wrote:
What you wrote refers to a variable doc_a which is known for all funtcions, since you haven't declared that it comes from some owner, but such a variable doesn't exist. That's what the error message says "No global variable of that name"
Greetings
Nachtmahr
On 16/02/2013 at 02:44, xxxxxxxx wrote:
Ok, roger that
Is it possible give a plugin a background behaviour in a similar way the Script Log works?
Not in a list, just that it writes the string of the last command called to a variable, (like the entries which also shows up in the Script Log entry)?
If it´s very complicated don´t worry...
On 16/02/2013 at 03:36, xxxxxxxx wrote:
Unfortunately there is not (yet, from Python).
On 16/02/2013 at 04:04, xxxxxxxx wrote:
What a pity, I wanted to do a "Repeat last Command" plugin, like Max and Maya have. Which would check with a list of chosen c4d commands to repeat them easily.
Well, another question for future plugins:
It is possible to update a plugin-gui spinner, which for instance changes a position, in realtime, without the need of an apply button? Not in an object or tag plugin.
Don´t need to know how, just if...
Thanks for being such a big helper around here Niklas
On 16/02/2013 at 07:49, xxxxxxxx wrote:
what do you mean with Not in an object or tag plugin ? a dialog or a shader/tool ?
for dialogs, use the command/coremessage method to write/read data from/into
your dialog.for descriptions it is allso possible, but actually not the way things are
intended to be done. the message/getdenabling methods could be here a suiteable
place to implement such behaviour. for certain behaviours you would have to write a
message helper plugin to notify your plugins when they have to update themself.
On 16/02/2013 at 08:50, xxxxxxxx wrote:
A bit confusing written, sorry. I meant the python tag or generator with user data by not in a tag....
What Im planning once my skill is good enough, is to do a much improved coordinate manager, which first of all, has an auto update, so you see the changes while adjusting.
On 16/02/2013 at 08:57, xxxxxxxx wrote:
So, you'll want to make a dialog. This is perfectly possible: once a value is changed by the user, you will be
notified about this change and can react on it. | https://plugincafe.maxon.net/topic/6943/7807_basic-plugin-creation | CC-MAIN-2021-31 | refinedweb | 1,699 | 73.88 |
Parrameter server in C++ publisher?
How do i provide command line parameter for publisher written in c++. I tried similar to what i did with python publisher but it doesn't work. I used getparam in c++.
#include "ros/ros.h" #include "performance_tests/SuperAwesome.h" #include <sstream> int main(int argc, char **argv) { ros::init(argc, argv, "PublisherNode"); ros::NodeHandle nh; std::string ra; //I want to accept an Integer value if (nh.getParam("/ra", ra)) { ROS_INFO("Got param: %s", ra.c_str()); } else { ROS_INFO("Failed to get param "); } ros::Publisher msg_pub = nh.advertise<performance_tests::SuperAwesome>("publisher", 1000); // I want to change the rate value through the value I receive in through ra. ros::Rate rate(100); int count = 0; while (ros::ok()) { performance_tests::SuperAwesome msg; std::stringstream s; s <<"Hello World " << count; msg.data = s.str(); ROS_INFO("%s",msg.data.c_str()); msg_pub.publish(msg); ros::spinOnce(); rate.sleep(); count++; } return 0; }
I would like to change the publisher rate through command line parameter similar to what I do in python publisher. i.e,
rosrun publisher_package publisher.py _rate:=50
How can I do the same in c++ publisher
Can you please update your question so that it's a minimum working example? If I were to copy and paste your code and try to compile it (which I tried), it wouldn't work due to a custom class that's not defined. | https://answers.ros.org/question/311349/parrameter-server-in-c-publisher/ | CC-MAIN-2021-04 | refinedweb | 232 | 58.58 |
view raw
I feel like I'm way overthinking this problem, but here goes anyway...
I have a hash table with M slots in its internal array. I need to insert N elements into the hash table. Assuming that I have a hash function that randomly inserts am element into a slot with equal probability for each slot, what's the expected value of the total number of hash collisions?
(Sorry that this is more of a math question than a programming question).
Edit:
Here's some code I have to simulate it using Python. I'm getting numerical answers, but having trouble generalizing it to a formula and explaining it.
import random
import pdb
N = 5
M = 8
NUM_ITER = 100000
def get_collisions(table):
col = 0
for item in table:
if item > 1:
col += (item-1)
return col
def run():
table = [0 for x in range(M)]
for i in range(N):
table[int(random.random() * M)] += 1
#print table
return get_collisions(table)
# Main
total = 0
for i in range(NUM_ITER):
total += run()
print float(total)/NUM_ITER | https://codedump.io/share/M0d3DMvFUBKt/1/expected-number-of-hash-collisions | CC-MAIN-2017-22 | refinedweb | 177 | 61.67 |
This logging module has seven classes: CLog, CFuncLog,
IStoreLog, CWinLog, CFileLog,
CAutoCritic, CLogSimpleLock.
CLog
CFuncLog
IStoreLog
CWinLog
CFileLog
CAutoCritic
CLogSimpleLock
The main class of the Logging module is the CLog class and in
most cases it must be a singleton in the application. Being a Singleton is not a
requirement for it though.
The Second most useful class is CFuncLog. This class is used
to log functions when entering and leaving. Also this class gives the developer
an easy way to log any data. The Class has overloaded operators <<,
that is why adding something to log is very easy.
operators <<
Figure 1. UML Design Class Inheritance Diagram
As you saw from Figure 1 the module classes are divided on two
parts:
1. Storage classes
2. Logging classes
The declaration of IStoreLog:
//////////////////////////////////////////////////////////////////////////
// Abstract class set three default function which must support any child.
// Any child must support buffered and non-buffered store
class IStoreLog
{
public:
virtual ~IStoreLog( void ){}; // virtual destructor
virtual int FlushData() = 0;
virtual int WriteString( const std::string &Message ) = 0;
virtual int SetBufferLimit( long lSize ) = 0;
};
Storage classes have the functionality to buffer with data flushing. By
default the storage class must store the data in its own buffer and only on a FlushData function call
will it flush the buffer data to disk or
elsewhere. As you understand the buffer of storage class in mostly cases
are limited by system resources, that is why when the
buffer reaches it's limit it will flush the data automatically. To set the buffer
limit use the function SetBufferLimit. By default the Storage
class implementation must allocate a buffer and only on a SetBufferLimit
function call will it change its size.
FlushData
SetBufferLimit
To store string into storage use the function WriteString. The
Storage class must have no formatting and store RAW data as is.
WriteString
In this section we have two classes: CLog - our main
class and CFuncLog - helper class. Class CLog
declared in clog.h and it implementation is in clog.cpp
file. Class CFuncLog declared in cfuncLog.h,
cfunclog.cpp.
Logger classes have special functions which make logging
easier. While logging you can configure trace output: If you have no
need for time then simply set the flag of CLog class using the function SetLogTime to false and
the time will be not be added to the output. Also you can change the output format
of time by calling SetTimeFormat. By default the class
use Long time format. In the header can be found two defines of most
useful time formats. First is a long default to class format, second
is short one format without millisecond in output.
SetLogTime
false
SetTimeFormat
#define DEF_TIME_LONG_STR "%02u:%02u:%02u ms:%03u"
#define DEF_TIME_SHORT_STR "%02u:%02u:%02u"
WARNING: time formatting string must always
have 4 or less printf formatting templates, otherwise you will
have a stack error.
Also for CLog class can be set such properties as:
Message Output Format -
SetMessageFormat and GetMessageFormat
functions
SetMessageFormat
GetMessageFormat
AutoFlush - SetAutoFlush and GetAutoFlush
functions. true mean flushing of storage buffer after
each trace message. Very useful when application is in alpha testing
and have some GPF's in code. Second mode useful when logging needed
for controlling state of application and it's not very critical by
time - this is top perfomance mode of logging system.
SetAutoFlush
GetAutoFlush
true
For logging in CLog class there are three functions:
LogRawString - trace
raw string without formatting to storage class.
LogRawString
LogString - trace
message with special level and formatting. There are two functions
with the same name, only difference is a format of output string, in
first case is a std::string on second simple char
*.
LogString
std::string
char
*
LogFormatString - formatting function - wrapper
on printf function.
LogFormatString
printf
In many cases developer will need more then three categories of
messages that is why in CLog class is virtual function
LevelText. As input parameter it have number of required
LEVEL. Function will return string with Category Name. By default on
Category name class set limitation on 12 symbols, but this can be
changed by SetMessageFormat function template string.
LevelText
To use Logging in application include into project such headers:
#include "clog.h"
#include "cfunclog.h"
#include "cwinlog.h" // include it if you want logging into GUI window
#include "cfilelog.h" // include it if you want logging into files
...
CFuncLog class is not required it's only simplify
logging of most used features, like: entering and leaving of
function. Special Formatting and all needed to logging operations are
implemented in CLog file. That is why you can choose:
use it or not.
NOTE: CWinLog class store traced messages in
window and are not multiprocess safe. Otherwise for multiprocess
logging can be used only CFileLog class. By Using
CFileLog all traced messages are store in file. All file
operations are synchronized by OS.
CLog *m_pLog = new CLog( new CFileLog( "c:\\log.log" ), LOG_MAX_LEVEL, true );
First parameter of the CLog constructor must be pointer
of class which support IStoreLog interface. In module
are two implementations of IStoreLog virtual class:
CWinLog and CFileLog.
As you understand CWinLog class create GDI window in
which display traced messages and second log class store logging into
file.
As a output file can be used any OS device or named pipe or
something else, which use syntax of CreateFile API
function.
CreateFile
Second parameter is a upper limit of messages. It must be set to
needed upper value limit.
So if you set it to 0 then in log output will be only ERROR
messages. If you set it to 1 then in log will be ERROR and
WARNING messages... and so on.
Third param said to CLog class instance is it a
parent of IStoreLog class instance or not. By default
this value is true. So CLog Class on
destroy delete instance of IStoreLog class
implementation.
CRepTestApp::CRepTestApp()
{
CFuncLog log( m_pLog, "CRepTestApp::CRepTestApp" );
...
}
Such code will add into log entering and leaving of function code.
Here used feature of automatic variables. On Construct into log added
"enter ..." message and on destroy into log
will be added "leave ..." message. To add
something to log you can type such code:
"enter ..."
"leave ..."
int something = 100;
log << something;
...
such code will add 100 into log output.
WARNING: operator << stores log
values in RAW format.
operator <<
To add message to log correctly use LogString
function of the CFuncLog class.
If you want to store log into any other place then you must write
implementation of IStoreLog class and use it instance as
a constructor param. CAutoCritic and CLogSimpleLock
classes is a wrappers on Critical Section API of windows. Such
classes are implemented in stand alone file with namespace LOGGER
and can be freely used by other IStoreLog class
implementations.
LOGGER
First logger module was implemented in 2000 and after that was rewritten some times. I found that in many cases
the Logger system
was a requirement for commercial projects, that is why I spent some
time and wrote such an easy to use and extendible logger/trace. | http://www.codeproject.com/Articles/1877/Advanced-Logging-for-all-kind-of-applications?fid=3347&df=90&mpp=25&sort=Position&spc=Relaxed&tid=427382 | CC-MAIN-2015-27 | refinedweb | 1,168 | 65.62 |
Better.
Before proceeding too far into this process, you should download and test the CSS friendly adapter set; they provide a good reference while building your own adapters.
The main purpose of using control adapters is to alter how the built-in controls are rendered. Using these adapters requires 2 things: an adapter class and a browser definition file. The adapter class defines how the control will be rendered and the browser definition file determines which browsers or agents should use the adapters.
The adapter class is used to override the rendering functions of the original control. The class must inherit the appropriate adapter from the WebControls.Adapaters namespace (for example an adapter for menu controls would inherit WebControls.Adapters.MenuAdapter). The base adapter class contains the rendering functions that you will be overriding in your control adapter. You should also create a property that contains an instance of the WebControlAdapterExtender class. The extender class will save you from writing a lot of code and will help you make sure that your adapted controls still function correctly.
The main functions you will be working with are RenderBeginTag, RenderEndTag, and RenderContents. If you control will need access to any javascript, you can also use the RegisterScripts function to add them to the header.
The RenderBeginTag and RenderEndTag can be generated using the extender. The extender will create a div with an ID that your code-behinds and other controls can recognize. The RenderContents function is where you will actually be generating your markup. You can access any properties of the server control being rendered through the Control property and use the writer parameter to output the markup. The MenuAdapter class in the CSS friendly adapters is a good example of how to render the contents of a control. It reads through the Items property of the menu and uses a pair of functions to render either an unordered list or list item.
The RegisterScripts function is used to attach any javascript files you might need. The benefit of using RegisterScripts is that it will place the script tag for your javascript file in the page’s header not in the body. It will also prevent the same file from being referenced multiple times. If for some reason you need to have the javascript placed in the body, which is usually bad, you can render it in the RenderContents function.
Hopefully this post sheds a bit more light on how to use control adapters for your ASP.NET applications. These adapters will definitely help you make a more standards compliant site and will help keep your designers happy.
Panel
Description
This tool allows you to build all kinds of panel-like elements, typically for panel constructions like the WikiHouse project, but also for all kinds of objects that are based on a flat profile.
The above image shows a series of panel objects, simply made from imported 2D contours from a DXF file. They can then be rotated and assembled to create structures.
Since version 0.17 the Arch Panel can also be used to create corrugated or trapezoidal profiles:
Usage
- Optionally, select a 2D shape (Draft object, face or sketch).
- Press the Arch Panel button, or press the P then A keys.
- Adjust the desired properties.
Limitations
- There is currently no automatic system to produce 2D cutting sheets from panel objects, but such a feature is planned and will be added in the future.
Options
- Panels share the common properties and behaviours of all Arch Components.
- The thickness of a panel can be adjusted after creation.
- Press Esc or the Cancel button to abort the current command.
- Double-clicking on the panel in the tree view after it is created allows you to enter edit mode and access and modify its additions and subtractions.
- It is possible to automatically make panels composed of more than one sheet of a material, by raising its Sheets property.
- Panels can make use of Multi-Materials. When using a multi-material, the panel will become multi-layer, using the thicknesses specified by the multi-material. Any layer with a thickness of zero will have its thickness defined automatically by the remaining space defined by the Panel's own Thickness value, after subtracting the other layers.
Properties
- DataLength: The length of the panel
- DataWidth: The width of the panel
- DataThickness: The thickness of the panel
- DataArea: The area of the panel (automatic)
- DataSheets: The number of sheets of material the panel is made of
- DataWave Length: The length of the wave for corrugated panels
- DataWave Height: The height of the wave for corrugated panels
- DataWave Type: The type of the wave for corrugated panels: curved, trapezoidal or spiked
- DataWave Direction: The orientation of the waves for corrugated panels
- DataBottom Wave: Whether the bottom wave of the panel is flat or not
Scripting
See also: Arch API and FreeCAD Scripting Basics.
The Panel tool can be used in macros and from the Python console by using the following function:
Panel = makePanel(baseobj=None, length=0, width=0, thickness=0, placement=None, name="Panel")
- Creates a Panel object from the given baseobj, which is a closed profile, and the given extrusion thickness.
- If no baseobj is given, you can provide the numerical values for the length, width, and thickness to create a block panel.
- If a placement is given, it is used.
Example:
import FreeCAD, Draft, Arch
Rect = Draft.makeRectangle(1000, 400)
Panel = Arch.makePanel(Rect, thickness=36)
Tutorials
Buffer size for spark video playerrajdeeprath Apr 12, 2010 11:29 PM
Is there any way to control the buffer length in the Spark VideoPlayer component?
1. Re: Buffer size for spark video playerbringrags Apr 13, 2010 8:28 AM (in response to rajdeeprath)
No, there currently isn't.
2. Re: Buffer size for spark video playerrfrishbe Apr 13, 2010 1:18 PM (in response to bringrags)
I think you should be able to access the videoDisplay's videoPlayer property (which gives you access to the OSMF MediaPlayer object). It's mx_internal, meaning it's something that Flex provides access to for advanced users, but it's not a documented or officially support API. However, try something like:
import mx.core.mx_internal;
myFlexVideoPlayer.videoDisplay.mx_internal::videoPlayer.bufferTime = XXX;
Good luck,
Ryan
3. Re: Buffer size for spark video playerrajdeeprath Apr 15, 2010 2:19 AM (in response to rfrishbe)
Hi Ryan, I tried it but it throws a null object exception.
Can you confirm that
myFlexVideoPlayer.videoDisplay.mx_internal::videoPlayer.bufferTime = XXX;
will work?
4. Re: Buffer size for spark video playerrfrishbe Apr 15, 2010 2:54 PM (in response to rajdeeprath)
Accessing the variable and setting it works for me. Here's a simple example:
<s:VideoPlayer
One thing to note is that videoDisplay isn't always around as it's part of the skin. So I can access it at creationComplete time, but I can't access it at pre-initialize time. To debug your particular null pointer exception, I'd have to see code, but my guess is it's with videoDisplay. But you can hop in to the debugger and figure out what's null. It's also a great way to figure out what properties are around and available to you.
Good luck,
Ryan | https://forums.adobe.com/message/2739908 | CC-MAIN-2016-44 | refinedweb | 296 | 61.97 |
I was going through some code and I found a class declared like this

public class Tree<T> {

    private Node<T> root;

    public Tree(T rootData) {
        root = new Node<T>();
        root.data = rootData;
        root.children = new ArrayList<Node<T>>();
    }
}

What does <T> mean? Why is it declared like this?
It looks like a constructor for the Tree class, where we are creating a Node object and assigning some values to it. In Java, a class is declared with the following syntax:

class ClassName {
}

Unless we use the class keyword, it is not going to be a class. For more details on classes and their declaration please follow the link
Here the class Tree is created, inside which the constructor is defined.
A constructor is a special method which has the same name as the class and is called when an object of the class is created.
An object of class Node is created and assigned rootData. Also, an ArrayList for the children is created.
For more details refer to the following link:
java.meritcampus.com/b/1161/Tree-set-and-list?t=180&n=Treeset?tc=mm211 | http://programmersheaven.com/discussion/435047/class-declaration | CC-MAIN-2015-27 | refinedweb | 195 | 76.62 |
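To make the answers above concrete, here is a self-contained sketch of the generic Tree from the question. The getRootData getter and the main method are additions for demonstration; they are not part of the original snippet:

```java
import java.util.ArrayList;
import java.util.List;

// Node holds a value of type T plus a list of child nodes of the same type.
class Node<T> {
    T data;
    List<Node<T>> children;
}

// Tree<T> is generic: T is a type parameter chosen at instantiation time.
class Tree<T> {
    private Node<T> root;

    public Tree(T rootData) {
        root = new Node<T>();
        root.data = rootData;
        root.children = new ArrayList<Node<T>>();
    }

    // Getter added for illustration only.
    public T getRootData() {
        return root.data;
    }
}

public class TreeDemo {
    public static void main(String[] args) {
        // T is bound to String here; the same class works for any type.
        Tree<String> t = new Tree<String>("filesystem");
        System.out.println(t.getRootData()); // prints "filesystem"

        Tree<Integer> n = new Tree<Integer>(Integer.valueOf(42));
        System.out.println(n.getRootData()); // prints "42"
    }
}
```

So `<T>` declares a type parameter: the compiler substitutes the concrete type (String, Integer, ...) you supply at construction time, and every Node inside the tree is checked against that same type.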
Thanks –
Dean Hachamovitch
General Manager, Internet Explorer
P.S. If you’re a developer, or service provider, or IT professional, how do you prepare for the final release of new software? Leave a comment – we’d like to know.
Ok, I’m on board, glad to finally hear the plan. Thanks for the honesty.
Now, between IE8 Beta 2 and the RC you plan to release in Q1 2009, what bugs have been fixed? Has IE Feedback on Connect been updated to include all the internal fixes?
In particular I have 3 questions:
1.) Has the window.resize event firing been fixed?
2.) Has the z-index -ms-opacity issue been fixed? (broke in Beta 2)
3.) Has the XP theme regression bug on select lists been fixed?
Thanks
I kinda think that performance should be addressed (in addition to the memory leaks that are occasionally found).
The multiple-tab-different-processes gets messy sometimes, and the javascript engine is falling behind. (Seriously, please do something about this; Firefox and Chrome are having a drag race on this one !!)
Though, I must admit, bug fixes for the features that will push IE8 is necessary.
(But, personally, I’m a performance kinda person, so I think that should be worked on a lot.)
So, any timeframe? Minimum requirements? Anything on the *Canvas* element or Acid3 or HTML5 😉 ? (The smiley is for that I probably know that its not gonna happen.)
Javascript performance in beta2 is pretty good; much better than IE7. What really needs work performance wise is the DOM rendering speed.
In beta2, inserting a lot of content into an existing DOM with complex layout/CSS (aka the primary Ajax use case) can freeze the browser for 4-5 seconds. Firefox/Safari/Chrome are much better at this.
Before deploying a final product, we verify compatibility with current service offerings, notify staff about the upcoming change, and establish a date and notify staff of that date.
).
…And to those of you who are wondering, yes, we’ve filed a bug on the case where sometimes hitting Enter in a web form text box immediately after switching windows results in the "Submit" button being triggered (hence, the double entry above). 🙂
IE8 beta2 is still far from a stable product.
1. Switching tabs is extremely slow.
2. IE will crash very frequently, when I am typing Chinese in gmail.
3. The JS engine is extremely slow, especially when running Google applications.
– The spec is 10 year old
– All other browsers handle it
– IE does XML and HTML, why not xml+html?
And what about SVG?
What kind of time frame is between the RC, and the RTM (even presuming that the RC is absolutely perfect!) This is important because we are all waiting to know what fixes are going into the RC (that’s why we were expecting a Beta 3). I’m not making any code changes to handle IE8 until that RC/(Beta 3) comes out because of all the issues still broken in IE8 Beta 2.
I don’t want to spend any time making workarounds for something you plan to fix in IE8 RC (if you had an up to date public bug tracking database I would know, but we’ve seen that that path ends in big disappointment)
As for critical bugs that need fixes for the RC, add these to the list please.
1. Page Zoom performance in IE8 is an Epic Fail
2. SVG Loading in IE8 (w/Adobe SVG plugin) performance is an Epic Fail
3. HTMLSelect/HTMLOption element modification via CSS or JS is an Epic Fail
4. UI Customizability in IE8 is still an Epic Fail
I hope the time between RC and RTM is at LEAST 3 WEEKS so that we have time to sync up with whatever you actually plan to ship.
thanks
Quirk 1:
1) open IE8 to about:tabs
2) click the "Start InPrivate…" link to open an in private session
3) resize this window, so that only the "abo" of about:inprivate is displayed in the address bar
4) close the inprivate session (we just did this to set the default window size)
5) RE-click the "Start InPrivate…" link to open an in private session
6) Expand the window… note the address is no longer "about:inprivate", but is now just "InPrivate"
7) click in the address bar and scroll back to the left and the "about:" portion re-reveals itself
Quirk 2:
When you resize an inprivate window to very small, (reduce the addressbar to about 14px) and note that the favicon overlaps the addressbar, even though it should be "inside" the addressbar box
Quirk 3:
resizing the window frame is painfully slow if the page is zoomed in
Quirk 4:
Compatibility view icon and "go" arrow icon disappear when address bar is shrunk down to almost nothing
Quirk 5:
shrink an IE8 window to quite small, then hover over menu items. the statusbar text overflows the zoom text in the statusbar
@steve_web: Thanks for your help in filing bugs. The connect site will be updated with all of the bug fixes, and current status on the issues when the RC build is released. Kellie
NM, don’t you ever wonder why the HTML5 spec project was begun? XHTML is a failed spec. There’s no point in wasting time on it.
quirky, I really hope the IE team has better things to do than make the browser pretty when you shrink it to a tiny-weeny size.
concerned, feedback that contains the text "Epic Fail" is an "Epic fail." You need to provide reproducible test cases in order to add value. Blaming the IE team for the Adobe team’s unsupported plugin seems like a significant waste of breath.
stanleyxu, try running without plugins. Use the -extoff command line switch. Be sure you have the latest version of Flash installed too.
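For reference, the -extoff switch mentioned above is passed on the command line when launching the browser. Assuming the usual default install path (yours may differ), that looks like:

```shell
"C:\Program Files\Internet Explorer\iexplore.exe" -extoff
```

This starts IE in "No Add-ons" mode for that session only, which is a quick way to tell whether a crash or slowdown is caused by a plugin rather than the browser itself.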
"Reopen Last Browsing Session" does not work for me. It is always grayed out from within the menu selection and does not appear upon opening IE8 b2 as a selection from the first new tab that is opened. This makes it a non-starter for me. It is a shame as the feature worked fine (opening active tabs the next time IE is started- or whatever the wording) with IE7.
Yes but…
Why keep the status bar visible in kiosk mode?
Why reverting to "old tabs" in the Favorites panes?
Why resizing the (docked) favorites pane causes the entire pane to flicker badly?
Why the return of the menu and favorites bars by default?
Why remove IE7’s "Add to favorites" button (move, actually, but to the wrong place)? A showstopper as far as I’m concerned.
Why do you still waste vertical screen estate with a window caption? If you include the default toolbars, the whole "header" is more than 150 pixels high, you could just as well replace the whole thing with a ribbon UI perhaps?
Why so little contrast in the new address bar, the dimmed parts of the URL are barely readable – light gray on white, does this meet accessibility standards?
Why still a half-dozen unlabelled little areas on the status bar?
Besides those small quirks and those mentioned above about the performance in some areas, it looks good 🙂
@EricLaw [MSFT]
Hi, here’s a rather quirky way of benchmarking specific DOM performance:
WARNING: if you are in an IE browser, I would NOT click 'Full Render', as the browser will freeze up for a very long time and use an excessive amount of memory. Just do the Basic Render to compare it to other browsers.
I did this test on full render a couple of months back but the results should still hold true on my computer. My computer has an AMD 3800+ X2 with 2 GB DDR 400MHz RAM:
Chrome – 29.69 seconds
Opera 9.6RC – 31.609 seconds
Safari 3.1.2 – 38.734 seconds
Firefox Nightly- 537.907 seconds
IE8 Beta 2 – CRASH – Pass 84/120 – 2269.468 seconds
Also Memory had increased to over 900 MB by the 84th Pass for IE8 Beta 2, note that each pass was taking longer than the previous pass, so IE8 would have probably taken 2-3 hours to pass the whole thing had it not crashed.
I understand it’s very very specific and a performance gain here will hardly show up in the same magnitude on normal sites. But I still think it’s something worth a quick look at 🙂
I’ve been using beta 2 for some time now and here is what I have to say.
1. JS Engine needs performance improvement. IE 8’s JS Engine is much better than IE 7’s though.
2. Switching between tabs is sometimes very slow.
3. For some reason Vista’s DEP feature kills iexplore.exe when exiting the browser. I have no idea why, but because of this the performance monitor’s index has fallen to below 2.
4. Please let the favourites bar to expand vertically (that is to support multiple lines). Now I have to go through the small arrow button (>>) and look for feeds!
You guys have done a great job in beta 2. Hope to see much improved and bug fixed IE 8 soon!
Ted, please note that HTML5 is designed to be compatible with an XML syntax (i.e., XHTML) as well as a specific HTML syntax. While you’ll probably get less argument that XHTML2 is a failure, XHTML itself (HTML in an XML form) does have appeal in a number of applications. Most of them are not on the Web, though, for various reasons obvious and not.
@Damian,
Basic render in 2.543 seconds.
Full render in 37.768 seconds.
IE8 B2 on Vista x86 Business Ed. 4 GB Ram Intel DC 2.20 ghz, NVidia 8500 GT (512 MB)
@Ted, please use your full handle when posting. "Ted – The Epic Fail" is much easier to decipher than "Ted".
yes, XHTML is a failed spec. Wonder why? did like one of the major browser mfgr’s not get on board?… oh wait, never mind.
you can complain that quirky is being picky, fine, but he/she is highlighting that the UI is not polished or fully tested.
concerned mentioned things that were worrisome and quite frankly I agree with every statement they made.
You Love Microsoft, and that’s cool, but the rest of us want a reliable stable browser to code against, and ATM, that IS NOT IE8B2!
As for SVG, supporting the Adobe plugin wouldn’t be an issue if IE supported SVG right out of the box. IE7 supported the plugin just fine without issue. IE8 shouldn’t be a tenfold downgrade in performance when using the exact same plugin; in fact, it should run even better.
As for comments containing the phrase Epic Fail? yeah, you think they don’t belong, but according to the rules of this blog we can’t use the words that REALLY describe our feelings towards this non-standards browser.
Thus all we can do is indicate in plain language how utterly broken various aspects of IE really are.
Thus "Epic Fail" seems very appropriate.
Thanks again for your trolling Ted – The Epic Fail, you never fail to show your incredibly biased view of web browser development and a total lack of understanding standards and where the web is trying to progress to.
Please troll on a forum where users actually want to hear your FUD.
SVG would be a nice feature to get into IE8, thus i think that is already late for IE8 roadmap. Real shame…
Thanks for the timeline Dean.
We now know where we stand. Roughly, at least!
PLEASE PLEASE PLEASE add a download manager, this is something a lot of people have asked for, for years! And still it doesn’t reach you guys, why isn’t this feature in already? What is the reason, it would be nice to hear from Microsoft why they haven’t done this yet.
Now it is pretty annoying that you have to start multiple IE’s to download multiple large files or otherwise you can’t download them at once as 1 IE only supports 4 concurrent downloads.
I work for a fortune 50 company. With hundreds of internal-facing web applications and somewhat fewer external-facing web applications, we will probably only take immediate action to fix any incompatibilities in the external-facing web applications. For the internal apps, they would not be fixed until a project can be allocated in the next fiscal year. It could be 2010 before everything is compatibility-tested — there’s just far too much out there to do a full regression test without diverting resources on other mission-critical projects. For internal apps where we control the deployment of IE to desktops, it is hard to justify spending money on all the testing and fixing of all those applications until it’s absolutely necessary.
Also critical to consider is the testing on embedded browser controls in desktop and mobile-deployed fat clients.
In any case, while all new development is standards-compliant, there is a major issue with all the old legacy code out there. But that’s how software development goes…
Most wanted features:
– Download manager
– Customizable (skinnable?) GUI
– Support for CSS "opacity" so proprietary filters wouldn’t be needed
– More speed! Just compare a default IE with Firefox, Safari or Chrome. What will you see? #1 IE takes much longer to start up, even with Superfetch in Vista. #2 Open a new, empty tab. Again: much slower. #3-#99 Left out.
Nevertheless IE has improved hugely from IE6 over IE7 to IE8 – but that does not mean that there wasn’t much left to do to be on the same level as the other browsers.
I vote for *FULL* CSS 2 compliance. Make the web designer’s job easier PLEASE!
Also how about Microsoft paying for all IE6 users to be flogged in the town square…??!?
"We listen", "we are listening", "we’ve heard you", and other stupid marketing sentences…
You’ve just heard nobody. Where’s beta3 ? Beta2 was unusable and crashed all the time, so we can’t test it. Please give us a testable beta before a release candidate.
@Nick : forget the download manager, they haven’t listened to us. For IE7, they said they will consider the download manager for a future release, now they say the same for IE8. We’ll see in IE9… or IE10…
@Thales : about SVG, you can get the same answer as Nick…
I am not sure if this is specific to my network connection. IE 8 most of times gives "Bad Gateway Error" on my XP machine. On my home PC I have Vista installed and IE 8 works fine. I have no problems with it at all.
If pseudo-elements and generated content is fixed in RC, I’ll be happy; if reflow problems (reset all scrollings, PNG images disappear on DOM modification) are solved, I’ll be glad.
As for SVG support, I’ll simply put a link: "can’t see this image in high resolution? Use a modern browser." using the ‘object’ fallback method.
IE is slow? That’s what multi-Gigahertz quad cores are for: rendering web pages faster, so stop complaining.
We really really really need a download manager. Everybody wants it. It’s the most important missing feature. I’m sure Microsoft can implement it in 1 or 2 weeks… So why is it still missing?
@ Damian
Full run finished in 67,8 seconds on Lenovo Thinkpad T2300.
980Mhz, 1GB RAM…
So there might be something wrong with your computer / browser…
@sebastien
What’s so important about a download manager? If you need to resume downloads, just click on the same download link. The download will resume where you left off if possible.
Hey Now Dean,
I love IE8 beta 2. One thing that I would really like to see is a hotkey to open multiple tabs. If a user goes to the favorite center (alt+c) then arrows down to the folder, (ctrl+enter) should open the sites in the folder, but I’ve had no success in IE7 or IE8. When you are in the favorite center and hover over the folder with the mouse, on the right of the folder there is a little blue arrow. If this arrow is clicked (no hotkey, I mean with the mouse) then the group of tabs opens fine. That is a small thing but it is something I think of often.
thx 4 the info,
Catto
christophercatto@hotmailNOSPAM.com
Besides the usual feature requests, I would like to point out that I’ve noticed IE8B2 becoming unstable on quite a few occasions (hang/crash). It is very important that the stability is improved for the final product release. I’ll gladly accept a few feature cuts if this results in improved stability.
@Disk4mat
@ajo (also are you sure you did the Full Render for that test? And it did all 120 passes)
Are you sure that the result rendered correctly and computation finished? The basic render looks like a pixlelated version of the full render which is a purply sphere.
I ran the test again on this computer and after 15 mins of running it was clear I was getting the same kind of performance results.
So I went and installed a fresh copy of IE8 on a different computer (AMD 64 single core 2.4GHz, 2 GB DDR2 667MHz). As of pass 66 (still running I’ll post the final result later) it’s currently taken 448015ms
Both computers run other performance benchmarks as expected, they are kept very clean of bloatware, malware etc… IE is almost never used so it’s also kept quite clean. Other people have confirmed my similarly bad performance.
thanks for detailed timeline around IE8. we like the IE8 BETA so much so that we have created a little community around it at
we hope you like the community. we would be glad if you wanted to share any feedback around the same at admin@merawindows.com
@Disk4mat
@ajo
So yeah, the test on this other computer (AMD 64 single core 2.4GHz, 2 GB DDR2 667MHz) with a fresh install of IE8 Beta 2 crashed in the same place pass 84 and took 1912625 ms
The crash did not automatically recover; I had to terminate the iexplore.exe process.
Are you sure you both got up to pass 120 of the full render AND it looked like a purple shaded sphere? I find it odd that I can run two separate tests on different computers with different OSes (XP x64 vs. XP) and get the same result and it be wrong…
I normally don’t post to these things, but I’d like to make you guys aware of a bug in beta 2 that is driving me nuts…
For some reason, after using IE8 the system does not close open processes and just leaves them there, running… it’s aggravating because after a while there’ll be like 9 – 15 iexplore processes when IE8 will undoubtedly crash (it’s fine, we all understand it’s a beta)… the thing is that the application needs to be restarted… that involves me having to manually go to Task Manager and close the 15 processes, which is a bothersome and slow process, before I can open it again…
Also, if I need to install a program that needs me to close IE to continue, I also either have to restart previously or manually close those dozen processes… VERY aggravating…
@Chase Seibert
That’s like saying that a Ford Edsel is a pretty good car because it’s slightly faster than a Ford Model-T.
By todays standard the performance of both, IE7 and IE8 beta 2, is simply unacceptable.
The only reason why Google even bothered to create Google chrome is due to IE’s horrific Javascript and DOM performance which is holding back the web as a platform.
IE is so far behind in performance and standards compatibility that I don’t know why Microsoft even bothers any more.
I’m wondering if IE8 is going to fix the regression in dealing with offsetParent? If you load this page in IE7 and in IE8, IE8 will show that it failed while IE7 didn’t. IE6 passes as well. This bug in IE8 makes it pretty difficult to support IE8.
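The commenter's test page isn't reproduced here, but a reduced test for this kind of offsetParent regression typically looks something like the following hypothetical markup (not the original page). Per the CSSOM spec, a non-positioned child should report its nearest positioned ancestor as its offsetParent:

```html
<!-- Minimal sketch of an offsetParent check, not the commenter's page -->
<div id="outer" style="position: relative;">
  <div id="inner">child</div>
</div>
<script type="text/javascript">
  var inner = document.getElementById("inner");
  var outer = document.getElementById("outer");
  // Expected: the positioned ancestor #outer is the offsetParent of #inner.
  var result = (inner.offsetParent === outer) ? "PASS" : "FAIL";
  document.write(result); // per the comment, IE6/IE7 pass and IE8 beta 2 fails
</script>
```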
Hi there
I know it has been like this for ages and it’s probably too late to address this issue at this stage.
However, it is a pain in the … to have the "script debugging" option enabled, when you browse 3rd party websites (I know, it may come as a surprise that I use the browser both as a developer AND as an end user 🙂
I would though like to have the Script debugging option enabled when browsing/debugging my own websites – but I don’t really bother about other developers script errors.
So, what I am basically asking for is an option to restrict script debugging to only be enabled for specific websites (perhaps zone based). If this is somehow already possible, I apologize.
This has been said many times before, so I’ll make it simple…
We want a Beta 3! Beta 2 was nowhere near the quality we expected. Before getting to an RC, we want to get the last set of bug reports before you get to RC1.
Closing the door now would be a horrible mistake.
Please make sure IE8 ships with the following:
SVG + Canvas + CSS2.1 Full compliance. Pretty please?
My biggest complaints from IE8 Beta 2 are
-Lack of spell check
-Page zoom is ridiculously slow. This become more important if you have set Windows to a high DPI setting… IE8 automatically adjusts the page zoom to 125% which in turn is very very slow.
Also I sure hope you add the ability to drag tabs into new IE containers, ala Google Chrome.
How do I prepare for a new software release? I start by installing the beta or RC, but until it can sit alongside Visual Studio 2003 I can’t do this. Have you fixed that yet?
@Mike: Spellcheck is available from at least 2 add-ons (IESpell and IE7Pro). Performance issues with Beta-2’s Zoom feature are known and we’ve been working on performance of Zoom (and across the browser). I’m not sure what you mean by "new IE containers", but if you hit CTRL+N, the current tab is opened in a new browser instance.
@Jason Ashdown: As mentioned previously, our goals for IE8 include full CSS 2.1 compliance, which we aim to deliver. We do not plan to natively support SVG or the proposed Canvas tag in IE8.
@Daniel: We’ve received quite a bit of feedback about the script debugging prompts, although most of it is from non-developer users who simply want an easier mechanism to turn it off. You can, of course, reenable script debugging using the dev tools (F12). Stay tuned for the RC.
@Trevan: Thanks for the test case.
@Gabriel Golcher: Do you see the same problem when IE is run without add-ons? Beta-1 had some serious problems with process management, but most were ironed out before Beta-2. I haven’t seen any issues on current builds.
@AccessDenied: Fear not, reliability and performance both remain key areas of investment on the part of the IE team.
@Catto: You can also middle-click the group to open them all.
@sebastien: There are various 3rd party download manager addons available for IE if you haven’t tried them.
Unfortunately, integrating a proper download manager than handles all of the myriad ways that file downloads take place takes a lot longer than you might think. As syb notes, download resumption has been improved in IE.
@Jagannath: Are the Tools / Internet Options / Connections proxy settings the same on both computers?
@Dave: As noted in various places, our target is full CSS2.1 compliance.
@mynetx: You should take a look at what browser add-ons you have installed. The new Tools / Manage Add-ons UI will show the load time for each of your enabled addons.
On a few machines I’ve tested, both IE and Chrome start almost instantly, and Safari and Firefox take several additional seconds. Slowness in creating new tabs is caused almost exclusively by slow add-ons.
@Nick: IE8 supports 6 simultaneous connections per host. Adding more connections usually results in a slower overall experience. But, if you want more, it’s trivial to increase the limit. See in the "Speed tweaks" section.
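The "Speed tweaks" link above was stripped, but one long-standing way to raise the per-host connection limit is a pair of DWORD values under the current user's Internet Settings key (this is the classic WinINet tweak; the exact mechanism the stripped link describes may differ). A sketch of such a .reg file, raising the limit to 10:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"MaxConnectionsPerServer"=dword:0000000a
"MaxConnectionsPer1_0Server"=dword:0000000a
```

As the reply notes, raising this beyond the default often makes the overall experience slower, not faster, since servers throttle or queue the extra connections.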
@Hammad: If IE experiences a DEP crash when closing the browser, this is a signal that you have a buggy add-on that does not shutdown properly. When IE destroys the add-on, the add-on attempts to access already-freed memory and crashes. Remove (or update) the addon and this problem should go away.
@Damian Shaw: I hope you’ll agree that creating 59000 DIV elements isn’t really a good benchmark for overall DOM performance. 🙂 The problem with optimizing for contrived benchmarks is that the benchmarks often don’t map to real-world performance problems. As noted, we focus our performance investments on real-world sites (e.g., GMail) to isolate and remove bottlenecks.
@Howie: Have you configured IE to delete browsing history on exit? That’s one reason that "Reopen last session" could be unavailable.
I do not care how you do it. Just get this browser out and done well. Make it give me tools I need that I cannot get from Mozilla, Opera or any other browser. If you can not, another browser will. Your job is to make my life easier, not yours! Andale!
Wasn’t this product meant to ship end of year *this* year? Just checking…
Disappointed that there isn’t going to be a beta 3 soon. Beta 2 isn’t nice to use for more than a few minutes, and I don’t think you are going to get much more useful feedback on it now, just the same bugs repeating themselves. Then by the time we get the next public release, you will only be acting on a small number of critical issues as you said, so the opportunity to fix other issues will have been missed.
Not sure if someone has mentioned this or not (because I’m not familiar with the technical reasons for rendering issues), but I have noticed that on one website I help maintain, a logo in the header is consistently not showing until I refresh the pages. On another website I have dealt with, random items on the page do not show up and upon refreshing, they might show up, but others disappear. I have disabled all add-ons but no change. Has anyone else experienced this?
The MSDN blog announced yesterday that the final release of IE 8 will be delayed, since it will be preceded by an RC version during the first quarter of 2009.
I’ve kept using IE8B2, despite the many flaws. I am glad to see you recognize that it wasn’t close to done, but am disappointed that we won’t see another update before Q1. What about making something available via MSDN, since folks paying for that are likely all professional developers?
One thing I have is that the debugging just doesn’t work. I expect that it will be as easy to use or better than Firebug on Mozilla, *without* needing to buy or use some VS tool. Getting some update on this would be nice.
I’ve found session restore to be flaky and unreliable (yes I know it is beta, but this code is alpha at best). Please publish where the session files are kept and their format.
When there is some content blocked, I get an info bar, but it doesn’t tell me what the issue was. Please provide details so advanced users can see what the issue is.
I’d expect that MS’s QA team and employees will try the browser on the top 1000 web sites and will make sure it works on those sites. You have enough staff at the company to do this, and your user base deserves it.
B2 was a lot better than B1 and so I am looking forward to the RCs and final.
I guess they took my strident feedback about Beta 2 seriously: IE 8 pretty much breaks the Web at this
Javascript performance. It’s no good comparing yourself against IE7 and claiming everything is great and OK.
In our organization we are starting to move away from IE to other browser choices, specifically for standards compliance AND JavaScript performance.
I know you are capable of giving us killer Javascript performance but are reluctant to compete with other MS technologies.
–jw
I just ran the basic render test in 60.125 seconds. This is on a plain P4 2.6Ghz with 2mb of PC400 RAM and using the motherboard Intel graphics. Running XP SP3, IE 8 B2. The test ran to completion.
Try running in no add ons and retry your test.
Cookies. Some sites log me out, and I don’t know why. Also, when XP crashes for any reason, it seems that IE deletes cookies and Temp files, I suppose for security, but that’s very annoying as I preserve my cookies.
Images. Some sites are unable to save images in the IE cache:
"save Picture as" tries to save it as bmp
Tabs. MMB (middle mouse button) opens a new tab at the extreme right of the group; it should open beside the current page, still inside the group.
Refresh button. It sometimes does not work; using the address bar and pressing ENTER is usually better.
Favorites bar. Is it possible to make it like a real menu instead of buttons? And a shortcut to hide/show it easily.
Restore last session. It needs more depth, and should be independent of the IE cache, because deleting the history makes it obsolete, right?
Popup blocker. Uses Ctrl+Alt to override, but that interferes with the IE menu.
Address bar. Sometimes typing a single word attempts to load it as a URL instead of searching.
Scroll bars. When the page is loading, they lose focus while you are scrolling. I cannot wait for pages to load 100%, it’s a waste of time; can you make scrolling more independent?
Animated GIFs, etc. When they are running, IE slows down a lot, for example on links, etc.
Saving images. IE does not remember the last directory correctly; some tabs do, other tabs suddenly remember a different directory.
My Pictures. Always opens as thumbnails. Annoying!!
I suppose plugins are not independent, which makes tabs slow, but IE8 is much better than IE7. Still very beta, though; even Google needs to run in Compatibility mode.
Bill Veghte earlier this year said IE8 was coming out by 12/31/2008. So much for that promise. I guess Microsoft blew through that deadline like a bulldozer through a picket fence!
OK, so today Microsoft said IE 8 will come out in 2009, after the RC versions. What happens to the people that have Beta 2 installed on their PCs? Will we get an update from Windows Update?
IE 8 is so much better than IE 7: the loading times are much better, it’s faster, and I have no problems with it.
Wow, awesome!
IE8 Beta 2 is already a major improvement from IE8 Beta 1, can’t wait till the RC is out 🙂
Great Job!
Seems there’s a bug in the favorites menu: bookmarks (and subfolders) with ampersands don’t display correctly.
Back in July, Microsoft indicated that there would be one more beta of Internet Explorer 8 and that the final version would ship before the end of 2008. Beta 2 was duly released in August, but yesterday, Microsoft’s Dean Hachamovitch revealed tha..
@sialivi: I can’t repro the ampersand issue in the current build. Can you provide exact repro steps?
@Lori: What’s the repro URL?
@PCause: Can you please be more specific about what problems you’ve encountered when debugging?
Simple list of needed fixes.
1) Fix JavaScript
2) Make sure Automated Crash Recovery works all the time. Also, you could show an icon showing that IE is trying to recover.
3) More web standardization. You guys need a higher score on the Acid3 test. If you could pass it, no one could doubt that Internet Explorer is awesome.
4) Just an idea: take a look at IE7 Pro. Those features make IE great. If Microsoft were to throw that site a few grand, IE would take on a whole slew of great features.
is there any hope for a more reliable and usable favorites panel?
problems:
– sometimes it simply rearranges the custom order alphabetically (after a crash or some IE update)
– after some time, wrong favicons are displayed (it simply mixes them up); the only workaround is to delete the temp files
– bookmark import also loses the custom order
– during rearranging with d&d, the scrolling is extremely slow (any other Windows app would speed up scrolling according to the cursor distance from the panel)
– positioning the links with d&d is very hard on Vista (on XP there is a thin line showing the current cursor position)
– there is no support for d&d from a tab to favorites (it is just handier than using the address bar icon)
more:
– the address bar should keep the newly typed (but not yet submitted) text during a tab switch
– there should be a close icon on the last tab too; it should change the tab to about:blank, about:Tabs, or the first homepage (an option for setting this would be nice)
I’ll wait for the final release and then compare IE 8 with my other browsers.
Side-by-side install of different versions. Having to run IE6 on a Virtual PC is really annoying.
@concerned
I agree that the lack of a customizable UI is a problem, but don’t hold your breath waiting for one. I was just browsing some of the old IE blog and found this:
># re: Security strategy for IE7: Beta 1 overview, Beta 2 preview
>Thursday, August 04, 2005 11:27 AM by redxii
>Is there any chance Beta 2 or final will allow total customization of every toolbar position? I myself prefer the File menu below the title bar.
… so MS has known about this for over 3 years, when IE7 was in Beta 1, and now they tell us it’s not happening in IE8 (well they are throwing us a bone by letting us move 2… count ’em 2… buttons).
Now let’s take a look back at 2005…
… Practically no one had ever heard of Barack Obama
… Seattle was about to start a Super Bowl season
… and IE had 87% market share
Times were really good
Since there will be no canvas support in IE8 I would like to see the IE team working with the ExCanvas guys to make sure ExCanvas will function properly in IE8 standards mode.
On another topic, it would be nice if ticket comments were not signed anonymously, "Best regards, The IE Team", but with the actual name of the team member writing the comment.
It would be nice if tickets were closed as they are resolved, not waiting until the date of the release itself.
I would like to see nightly releases.
As someone else mentioned above, I also think a beta3 would be a good idea. I am very much concerned about VML continuing to function properly until canvas is supported in a future version of IE.
@Trevan
That’s because there is an implicit <tbody> (you can see it with the Developer tools). Therefore, when applying the algorithms given at ,
then the nearest static-positioned ancestor of <tr id="t"> is the implicit <tbody> and not <table>.
Obviously the algorithm given at will have to be tweaked and cover the case of implicit and explicit declaration of <tbody>.
What should happen when
<tbody> <tr id="t">
is explicit?
Regards, Gérard
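To make the question above concrete, the two cases can be written out in markup; a minimal sketch (the ids are illustrative, following the `tr id="t"` example in the comment):

```html
<!-- Implicit case: the parser inserts a generated <tbody> around the rows,
     so the nearest static-positioned ancestor of tr#t is that implicit
     tbody, not the <table> itself. -->
<table>
  <tr id="t"><td>cell</td></tr>
</table>

<!-- Explicit case: the same ancestor chain, now visible in the source. -->
<table>
  <tbody>
    <tr id="t2"><td>cell</td></tr>
  </tbody>
</table>
```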
With multiple tabs open, the Task Manager gets loaded with "Internet Explorer" descriptions/processes. Unfortunately, since tabs in IE crash surprisingly often, it would be nice to know which tab I need to "end process" on to get rid of the non-responsive tab, since you can’t close it from the browser itself.
Knowing which tabbed process is which would help, rather than having to pick and guess and hope you get the right one.
Other than that I’m pretty happy. Stability seems to be the biggest issue for my IE8b2
In IE8 Beta 2 the default rendering mode for the WebBrowser control is IE7. To force IE8 rendering mode, HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION must be set to 8. Will this behavior stay the same in the final IE 8 release? Are you going to change this in the future?
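For other readers embedding the control, the opt-in described above is a per-executable value under the Feature Control key. A hypothetical .reg sketch (the host executable name "MyHostApp.exe" is illustrative, and the DWORD value follows the commenter’s claim; verify both against the shipping documentation):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
; Value name = the embedding host's executable name (illustrative here).
; The DWORD value 8 is the one cited in the comment above.
"MyHostApp.exe"=dword:00000008
```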
@MSFT – What is going to happen from this point forward to solve the lack of transparency with IE?
1.) The public bug tracking site sucks, and you’ll likely shut it down again when IE8 goes final, once again making the whole exercise pointless.
2.) The feature roadmap is non-existent for any release.
3.) The milestones are not identified up front.
4.) No information about bugs being fixed is released until the next public build goes out.
5.) Developers got 2 betas to test/develop against, but both have such major regression bugs that it is almost worthless even trying, and now you tell us that there won’t be a Beta 3 (the one we’ve all been waiting for)
6.) EricLaw is almost the only MSFT team member that replies to comments on this blog. Every other MSFT IE team member needs to follow his lead
7.) Monthly IE chats (although much appreciated) fail horribly because they always happen at the same day/time. In case you didn’t notice, the developer audience is GLOBAL. The question box doesn’t allow pasting, and the question size limit is too small for anything that would be useful. This would all be fine, but every question is responded to with a lame blanket statement to the effect of: "that’s an interesting idea, but at this point we have locked down… …thanks for your feedback"
8.) Commit now! Is SVG on the roadmap for IE9 or not?
9.) Commit now! Is CANVAS on the roadmap for IE9 or not?
10.) Commit now! Are proper EventListners going to be handled in IE9?
11.) Commit now! Will all the HTML form elements be fixed in IE9? (Checkboxes/radios firing onchange properly, not hanging the browser when clicking them, select elements supporting events on the options, or say innerHTML, or say styles? File uploads rendering a file-selection box from this decade, etc.)
12.) Commit now! Tell us if MSFT plans to have a proper public bug tracking database that is updated AS THE BUGS ARE FIXED
13.) Commit now! Is IE9 going to have a customizable UI? or are we in for a new pile of the same crud?
I am curious if the IE team has any changes to their plan on bug-fixing the rendering engine, if any. With IE6 and IE7, MSFT would not touch the rendering engine after release, leaving all security-unrelated rendering flaws and bugs as they were until the next major release to avoid so-called "breaking BC", which has already caused countless pain to web developers.
IE8 claims to commit to standards, AFAIK. I really think MSFT should evaluate the possibility of continuing to fix rendering bugs that do not comply with the standard even after the product goes gold.
Auto recovery could ask the user whether to recover a page before it tries to recover it.
A typical website: crashie.com, which contains some HTML that can corrupt/fully load IE (any version, on XP but not Vista).
The problem is that if I open the website and find IE not responding, I may close it through Task Manager (well, another problem: it may be difficult for me to identify which iexplore process belongs to the main UI (parent?) and which to a tab). If I correctly "End Process" on that malfunctioning tab, in general IE will recover it without asking me, so the problem loops back and IE hangs again.
I agree with the others that are asking for a beta 3. Because of the zoom issues (repainting takes multiple seconds when zoom isn’t 100%, and zoom is constantly being reset to 125% in 120 DPI mode when I want 100%), I cannot use IE8 as my default browser. I am pretty sure that anybody using high DPI mode feels the same way. Further, it seems like the developers and MS’s testers are not using IE8 much in high DPI mode either–if they were, these problems never would have made it to beta 2.
I applaud your attempt to reach full CSS 2.1 compliance. But it will be almost impossible to get there with the current plan. Are you committed to fixing all CSS 2.1 bugs that are reported after the RC? I doubt you will be able to, because some issues will likely need risky fixes. You really need a beta release that you believe is 100% CSS 2.1 compliant, is stable, and has usable performance *before* the RC in order to reach that goal.
Finally, I just reported the "what is the current tab" usability regression. Every time I use beta 2 I feel the productivity loss from this problem, and the issue has been reported repeatedly by others. All reports so far have been dismissed as "won’t fix". What is the logic in that? Why spend so much time improving usability and productivity in other areas without fixing the regression here?
I *am* very happy with beta 2’s CSS 2.1 capabilities. IE8’s CSS 2.1 support is notably better than Firefox’s CSS 2.1 support.
BTW, what happens when somebody finds a CSS 2.1 compliance bug after RTM? Will there be hotfixes or service pack updates that fix compliance issues?
Please add at least a "No to all" in addition to the "Yes" and "No" choices in the ActiveX / active content prompt dialog. Please also add information on the particular active content in question, such as its name and its certificate, etc. I am referring to the dialogs that are used when IE is set to ask for permission for every single control, such as when going to, say, YouTube to watch a video and IE asks if it’s okay to allow ActiveX to run.
Another issue is the same kind of prompts showing up on top of pages that they do not originate from. Say you’re browsing a news site and you open one of its articles in a new tab. Chances are that the ActiveX controls in the new tab will show up while you’re viewing the first page. This is confusing, even for pages that are grouped / related.
A third issue, also with ActiveX or active content, is when IE seems to want to run active content from, say, a news site when I have just navigated away from the news site and onto a pure HTML/CSS page on my hard drive that has no risky content whatsoever. I’m getting this information bar at the top of the page telling me my page wants to run active content but was stopped. It doesn’t make sense. This issue is particularly worrying if I want to sell similar minimalistic pages to customers; it would turn into a support problem, as this is too complicated for most customers to have to learn when it should not be necessary.
I have mentioned all three issues before. I thought at least the last one was fixed now, but it turned out it was not.
Other than that, I’d like to say that I really like IE 8 and the improved standardization. Keep up the good work!
IE 8 beta 2.
@Damian
For certain I get all 120 passes. It slows down after pass 90 but still completes. Just tried it again and finished full render in 41.46 seconds.
It may be that on your 2 systems you have 3rd party apps/security/filters that are affecting IE’s rendering.
Here is a screenshot I snagged at pass 115, showing open tabs and current progress.
My main issue is the lack of a simple integrated download manager, a spell checker and, if I am not mistaken, there is no way to view saved passwords.
All of the above are simple but effective features from Firefox.
@Sialivi: Issue confirmed.
@EricLaw: To reproduce, create a new folder or bookmark with a single & symbol in the name. Example: MS Windows & Vista
From the favorites menu the bookmark (or folder) will display an underscore. So MS Windows & Vista would display as: MS Windows _Vista
The work around is to use a double &&. Example: MS Windows && Vista
Screenshot:
Note: this bug is also present on the Favorites bar when selecting an item that has child items (a folder on the Favorites bar that contains bookmarks).
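A likely explanation (an assumption on my part, not confirmed by the IE team): Win32-style menus treat "&" in an item label as an accelerator marker that underlines the next character, which would produce exactly the "MS Windows _Vista" rendering described above. UI code normally escapes user text by doubling the ampersand, matching the "&&" workaround. A sketch of that transform in JavaScript (escapeMenuLabel is a hypothetical helper name, not an IE function):

```javascript
// Double each "&" so Win32-style menu rendering shows a literal
// ampersand instead of treating it as an accelerator prefix.
// Hypothetical helper for illustration; not an actual IE API.
function escapeMenuLabel(label) {
  return label.replace(/&/g, "&&");
}

console.log(escapeMenuLabel("MS Windows & Vista")); // "MS Windows && Vista"
```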
Make Launch Fast…
E.g., the Windows Media Player light version in the Windows 7 beta opens at blazing speed.
A normal web surfer has a minimum of 10-15 tabs open. Under this load, it should perform at its best. It is still sluggish, like Vista taking more time to open Windows Explorer.
Windows 7 did a great job in speed and responsiveness; users expect the same from IE 8.
Finally, keep it light, fast and responsive….
@EricLaw [MSFT]
I totally agree creating 59000 DIV elements isn’t a good general benchmark for DOM performance and I was very careful about never stating such :-).
Still, it does highlight bugs in the IE layout engine, and it also limits how creative web developers can be. No one will ever do anything similar for a web page, not necessarily because it’s not a good idea but simply because IE can’t handle it.
@Disk4mat
Thank you very much for the information; I simply can’t explain why it works well on some systems but not on others. The only particular difference I notice is the use of Intel CPUs vs. AMD CPUs. I’d find it odd that IE works so radically differently on them, though.
(P.S. I’m 100% sure I have no interfering security applications or anything explicitly plugged in to IE other than Flash and Java.)
Regarding XHTML being a failed spec: it really isn’t. Just because IE doesn’t parse the stuff doesn’t mean it is a failed spec. Many people are doing things like switching to Linux, which mostly helps Firefox’s case, or Mac, which helps Safari. In addition, even people who are sticking with Windows are switching browsers. I’m not saying it is a mass movement, but there are certainly fewer IE users than there were a year ago.
As for the status of XHTML 2.0, it appears to be picking up the pace again. They’re beginning to figure out what to do with the language by figuring out exactly how certain features should be decided. In my opinion, XHTML 2.0 would be quite beneficial to developers, but I’m only one person…
Even if XHTML 2.0 doesn’t get implemented by any user-agents, there will always be XSLT… ^_^
About the download manager: clicking on the same URL to resume a download is not always possible. Actually, most of the sites I use have automatic mirrors. And sometimes downloads are corrupted, so you may want to have a download manager which lists URLs you already downloaded.
I tried several 3rd party download managers, none works 100% for all the links.
Well, I will still use IE, but it’s weird to add gadgets while a basic piece of functionality is still several years late.
Full CSS 2.1 compliance is not enough to gain the respect of web developers and designers; it’s important to fix as many rendering bugs as possible before the final release. Otherwise we’ll be worse off than we are now.
Spend some time on this page [1], and fix every bug you can, much like you did with PiE.net for IE7 [2].
1.
2.
EricLaw: Thank you for the prompt reply, looking forward to see what you have planned for the RC.
Noticed a small bug where text is able to wrap inside an <input type="submit"> button.
Firefox:
IE8:
Just to let you know, if you weren’t aware of it 🙂
Suggestion: IE8’s drop-down menus are too cluttered, and some functions have too many launch surfaces (Accelerators can be accessed from 4 places).
I have uploaded my suggestion to Scribd
I hope you will consider it.
Internet Explorer 8 Release Candidate to appear in early 2009
Please stop fixing your terrible layout engine and just use WebKit or Gecko. You make web developers’ lives a nightmare on a regular basis.
Something that would be really helpful to me as a designer/developer would be to have a split view browser for tabs with two widescreen monitors.
In other words, like in Visual Studio, you can have File One next to File Two as a split screen.
With tabs already an option in IE, being able to see 2 or even more would be super slick, instead of having to open up two different IE browsers.
In Win7 you go halfway to the idea with the feature that quickly sets two apps to split the screen width; now just bring it inside the browser.
@EricLaw: I have a repro for the performance issue. Where would you like me to send it? I already submitted it to the "Email" link on this blog.
@Daniel: Can you provide your input type=submit testcase? Those look like checkboxes to me. Which spec specifies the wrap behavior for such tags, HTML4.01?
@W: A significant number of the "bugs" on the page you cited are not actually bugs. Unfortunately, corrections or feedback on that page have not been accepted. Bugs in IE should be reported through Connect for proper tracking and analysis.
@Damian: I don’t understand why you think there’s a "Bug" in the layout engine, as the page in question works perfectly.
@Disk4mat: Thanks for the clarification. Our test team reports that the ampersand issue was fixed between Beta-2 and current builds.
@Steinar: Very very few users have altered their Internet Explorer configuration to introduce prompting for all use of ActiveX and/or script. Such prompting does get tiresome, which is why it’s disabled by default.
Please keep in mind that "Active Content" includes any binary behaviors (such as Filters) in addition to Javascript, VBScript, ActiveX, and CSS Expressions.
@Brian: On the contrary, MS is testing high DPI extensively. Quite a few issues have been fixed so far. We encourage you to file appropriate bugs on the issues that you’ve noticed.
@wai: When IE recovers, it recovers each tab into its own process. It also stops attempting to recover a given tab after 2 failures in a row.
@Oliver: The IE team is hard at work delivering IE8. IE9 planning will begin in earnest AFTER IE8 is delivered; hence, no, no one from Microsoft is going to speculate on the final plan. For cases where you believe you’ve found a bug (e.g. #11), please be sure you’ve provided a verifiable test case, and preferably filed an issue in Connect.
@Florin: The default rendering mode for non-IE hosts of the Web Browser Object will continue to be controlled by the Feature Control key. We have no plans to change this.
@Zebb: Are you on Windows Vista or Windows XP? Changes in Windows Vista allow for better recovery from hangs.
Thanks for the answer. The first part of question was actually if the IE7 will continue to be the default rendering mode for the WebBrowser control as it is now in Beta 2 or this will change in the future? Thanks and good luck with the next releases.
Oh, first quarter of 2009? Surely that means March 31, like Beta 2 was released in "mid August" on the 28th.
@Florin: Yes, for non-IE users of the Web Browser control, CompatView mode remains the default unless they opt-in to IE8 Standards via the Feature Control.
@cseibert: I received your repro by mail, and I’ll take a look at it.
@Tomas: The IE team’s goal is to deliver a quality IE8 as soon as possible.
Providing exact dates is always problematic; no one wants to ship with nasty bugs just to meet a self-imposed deadline. Keep in mind that we support IE for up to 10 years after its release, and sometimes those extra few weeks save ~everyone~ a lot of pain later.
@EricLaw [MSFT]:
I’m just a bit critical. I mean, there are going to be 3 milestones between IE7 and IE8. Opera brings out a build every few weeks, Mozilla brings out an alpha or beta every few months, and also nightly builds, like Apple.
It’s clear to me that nightlies or weeklies aren’t possible here. But I think if more milestones would be released, like every 3 months, that’d be much more useful than to wait another six months.
That way, fewer bugs that are already fixed internally get filed over and over again. More milestone releases could result in more and better tester-driven QA.
At least, that’s what I think.
Mozilla also has open weekly status calls for Mozilla as a whole, the Platform, and for Firefox… Anyone can call in, read the agenda, or see the updates on the wiki.
IE8 should be out around the same time as Firefox 3.1. So much for IE8 and Firefox 3 coming out around the same time. 🙂
@Al Billings:
Mozilla is the only one who publishes roadmaps, and they never keep to them.
However, they release regularly, and I think that’s great.
Make it fast, really fast! (Rendering, DOM Modification, Javascript, …) The overall performance should be the same as Fx3 or even better. That’s a really critical thing for people who try to build complex web applications.
@Tomas, you expect that roadmaps are set in stone? They are guidelines but at least they are public. Just about every meeting and every discussion is open to the public if they choose to call in or read the wiki. 🙂
@EricLaw, @Dean Hachamovitch
Bugzilla is a software that is widely acknowledged as excellent for community feedback for lots of companies and groups. I would welcome replacing the current IE beta connect with Bugzilla. Over 800 companies, including NASA and W3C, so far have done so. If IE 7 was 3-4 years behind other browsers (Firefox 2, Opera 9), then the first version of IE beta connect was 10 years behind Bugzilla.
@EricLaw, @Daniel Møller
Chances are it was a bug in IE 7 and described here:
Regards, Gérard
The final version of Internet Explorer will not arrive until after the first quarter of next year. Last summer, Microsoft announced its intention to release the final version of Internet Explorer 8 by the end of this year, something that ultimately..
@Al Billings:
Actually I wanted to say that this is the problem. A roadmap given for final products, or one as vague as in IE’s case, is never to be trusted. But at least the other browser vendors have some regular releases. IE gets release pauses of 3 months to over a year. That’s counterproductive, IMHO.
@Gérard Talbot:
I personally doubt they’ll use Bugzilla in the future. I’ve seen wonders though.
What concerns me more is that the information flow is just so restricted. We are only intermittently informed about what things *won’t* make it into the IE8 final. But if it’s not implemented in a released milestone, we can only guess.
For example, we already know that text/css won’t be properly supported, because someone filed a bug which was closed later.
Do we know whether bugfix X or fix Y will make it? Even if they’re already fixed, the answer is no.
Tripe like "we already know that text/css won’t be properly supported" is worse than worthless. Obviously, IE supports stylesheets.
If you’re suggesting that IE doesn’t refuse stylesheet references that don’t contain the expected Content-Type, or something of that nature, why don’t you explicitly say so?
I just hope and pray that one day addEventListener will work in IE.
We’ve been developing for FF/Safari as our primary target. Usually stuff "just works" the first time in both. Strangely enough, Opera usually just works too (though it’s not a primary test target).
We then spend a few days hacking everything up for IE. It’s at the point now where we’re degrading the experience for IE because it’s too much work to deal with the standards-compliant crowd PLUS IE.
Our customers get the best experience in FF, but it works well enough in IE that they don’t complain.
@EricLaw
This is what you said about my IE 8 bugs webpage.
> A significant number of the "bugs" on the page you cited are not actually bugs.
Chris Wilson and, I suspect, a wide majority of IE team dev., PMs, members would say exactly the opposite, Eric. Exactly the opposite.
> Unfortunately, corrections or feedback on that page have not been accepted.
I am not sure I understand what you are saying. For sure, you are not being very precise about what or which bugs on my page you mean: my IE8 bugs? The specific IE 8 bug-collection sites? Individual testcases or webpages that fail in MSIE 8? Even there, I am sure there are more valid bugs in those than invalid ones.
I never got any kind of feedback or anything. But I do know that at least 6 bugs on my IE 8 bugs webpage have been closed at connect’s IE beta feedback, some of them even wontfix:
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=339307
was "Closed (By Design)" despite
"
You are correct in observing the current version of IE does not have support for this. We value your feedback and we will consider this for the future release of IE.
"
but it was Closed (By Design) anyway. Not postponed. Not futured.
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=348537
"Closed (Won’t Fix)"
even though the comments clearly suggest that this is a valid bug; fixing bug 348537 will be required if IE is going to pass acid3 test one day. Again, closed and resolved as won’t fix. Not postponed. Not futured. Not latered. Not assessed with a respective level of severity, gravity, importance and priority. Just won’t fix-ed.
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=348575
was "Closed (Won’t Fix)"
The comment clearly and utterly contradicts the "(Won’t Fix)" resolution.
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=334438
"Closed (By Design)". Valid bug and certainly worth fixing, as this equates to IE’s innerText and would elegantly replace cross-browser code. Not postponed. Not futured.
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=379310
Reported in March 2008; a bug clearly contradicting MSDN’s own documentation. Closed and postponed.
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=338580
Bug, filed in April 2008, clearly contradicting Microsoft’s own white papers on top of everything.
"Closed (Postponed)"
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=365833
A valid DOM 1 HTML bug, filed at Connect IE beta feedback and "Closed (Postponed)".
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=361953
"Closed (Postponed)"
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=338278
"Closed (By Design)" Not postponed. Not futured.
And here, I’m not even listing the bugs which had to be REOPENED and get properly fixed.
> Bugs in IE should be reported through Connect for proper tracking and analysis.
"
I have already submitted a number of bug reports to the Internet Explorer feedback system with simple and straight-forward test cases. I have been disappointed with the way the feedback system has been run, specifically how so many reports are marked as ‘By design’ or ‘Won’t fix’ when, according to the explanations in the comments, they should have been filed as ‘Postponed’ or simply switched from a ‘Bug’ to ‘Suggestion’. I’m trying to be as helpful as I can."
– David Hammond, August 10th 2006
"
blogs.msdn.com/cwilso/archive/2006/08/10/694584.aspx#696541
Between 1997 and 2006, there was no way for anyone to report bugs. And bug management at IE beta feedback is certainly improvable.
I have filed about 100 bugs at Connect IE beta feedback and, as far as I am concerned, they were all relevant, valid, confirmable, worth investigating, useful and helpful, and also had clear and reduced testcases.
Eric, ..<deep breath>.., I just don’t understand why you would say
{
"bugs" on the page you cited are not actually bugs.
}
Gérard Talbot
@Gerard, I too have tested well over 50% of the bugs listed on your page, as well as others at other sites. All were bugs, either in a really obvious way or, when compared to the specs, they became very clear.
IE has always had a public-image issue, and today is no different. IE is the AOL of web browsers… it works, it does get you on the Internet, but man oh man does the experience pale in comparison to any other browser.
As for me personally, I want to know what issues are confirmed as fixed and in for the RC.
If you tell us (or worse yet, say nothing) and just release the RC, we’re all going to have to make a mad scramble to figure out what we need to fix and what was actually fixed in IE.
Please verify the status of ANY BUGS that have been fixed since Beta 2.
============================================
THIS IS MORE IMPORTANT INFO THAN ANY OTHER POSSIBLE BLOG POST YOU CAN DREAM UP AT THE MOMENT.
============================================
thank you
Issues resolved "By design" are an indication that the feature team does not believe the issues cited represent valid bugs. In some cases, you list reasonable feature requests but misrepresent them as "bugs." By that metric, any software without an infinite number of features is infinitely buggy.
Issues #1,#2,#40 concern Favicons, a feature invented by the Internet Explorer team many years ago. The fact that subsequent "specifications" were authored by others that were incompatible with the existing design reflects a flaw in the authorship of those documents, not a flaw in IE.
#27: The ECMAScript 3 specification describes no implementation of the "const" keyword, noting that it is reserved only for use in a future specification.
#86: This issue isn’t present if the QuickTime player (which steals the img/png file association) is not installed.
My way of preparing for a new browser launch is to give the beta a quick test, check what it breaks, and report it in their bug database. When they fix those regressions I check the new nightly, verify that they do indeed work, and then I can check further that everything is working as expected; it might even replace the stable version as my current browser, since the improvements in every new browser release are very nice.
Unfortunately in IE case it means waiting several months and hoping that they fix all the reported problems.
Meanwhile I keep on developing with Firefox and then readjusting the pages for the non-standards loving browser.
We have the same mad scramble to address incompatibilities at RTM. For example, right now our tree view works fine in IE8/XP, but not with IE8/Vista. Diverting resources for beta compatibility has not been of value to us in the past. We need a long RC to RTM period.
@EricLaw
> "By design" are an indication that the feature team does not believe the issues cited represent valid bugs. In some cases, you list reasonable feature requests, but misrepresent these as "bugs."
A lot of people reporting bugs or going through the Connect IE beta feedback bug reports may disagree then. A spec violation or an unsupported property, attribute or method represents a valid bug in their mind. E.g. not being able to select text, or the viewport unexpectedly jumping/moving back to the top of the document view, is a bug (#197) IMO… but nowhere will you find an official spec stating something like that.
> In some cases, you list reasonable feature requests
In some cases, I list absence of support or otherwise clear spec violations (including UAAG 1.0). Sometimes backed up by MSDN’s own documentation. Sometimes I identify a serious accessibility, usability or annoyance problem too (#197).
> Issues #1,#2,#40 concern Favicons
Issue #1 is about requesting a file that has not been explicitly linked to begin with: my verdict is that it’s unjustified and not recommendable. And the bug I see has been sufficiently explained, documented and substantiated by others and elsewhere. The original feature invented by the Internet Explorer team in that issue should be upgraded and corrected for the better, for the future and for everyone involved.
Issue #2: The original feature invented by the Internet Explorer team in that issue should be upgraded, corrected/adjusted to meet, to be compliant with 1999’s HTML 4.01
HTML 4 specification, rel attribute value (link-types)
That’s what I am saying.
And
"
‘Shortcut icon’ instead of ‘icon’, now that was definitely a simple oversight.
"
blogs.msdn.com/jeffdav/archive/2007/03/01/why-doesn-t-the-favicon-for-my-site-appear-in-ie7.aspx#1832378
is actually exactly what *you* have been saying, Eric.
Issue #27: I have been told that
"
ECMAScript 3 (ECMA 262-3), which does not list such a construct. The upcoming version 3.1 will include const
"
bugs.kde.org/show_bug.cgi?id=170070
So this one changed (or will change) from reasonable enhancement request (for compatibility purposes) to absence of support for a standard (ECMAscript 3.1).
Issue #40: either IE7+ supports PNG natively or it does not, notwithstanding other browsers’ (Firefox, Opera, Safari, Konqueror, etc.) support for PNG favicons/webpage icons. Issue #40 is an enhancement request and it’s certainly a fair and reasonable one.
Issue #86: This issue has already been FIXED according to T. Leithead (in an email dated November 4th, 2008), so why mention it here and now?
If it was not a bug according to formal and strict definition, it certainly was an obstacle/annoyance/irritation issue.
Also, I did NOT have QuickTime installed at all when I got that yellow information bar prompt.
If all you can bring to substantiate your
"
> A significant number of the "bugs" on the page you cited are not actually bugs.
"
is 5 issues, then this will certainly look like quite a stretch.
Gérard Talbot
The Internet Explorer 8 team has announced that a release candidate (RC) will be available to the public
I hesitate to use even upgraded versions of Chrome, since my last experience using it (first version) left my computer compromised; have they fixed the security issues beyond all doubt?
1. It gobbles resources
2. The tool bar, links, and other in the same area does not display
3. It is slower to open than IE7
Since it is going to take so long to get to the next level, I am going to uninstall IE8. The aggravation is not worth it.
Please IE Team!
Improve/Get the following things:
Speed! Browsing speed.
Use a lot less RAM.
Have great performance.
Have features for everyone
A Download manager.
Better Add-On Support
XHTML Support
If you add all this and make it lightweight like Google Chrome, I would use IE 100 percent. But I can’t. It’s at 40 percent till it gets this stuff. I know people that would like the stuff that I listed.

XHTML won’t work in IE 8.0b2! For some reason, it chokes on xhtml11-model-1.mod for no reason that I can see. Turn on Compatibility mode, and it displays the document tree like normal. Could this be a bug in MSXML or is it an IE bug? ^_^
This "fix" comes at a cost – IE won’t style your documents when you use <link/> or <style>@import….</style> because it is just another XML tag. You’ll need to use the <?xml-stylesheet?> PI to get around this. Then you’ll be able to take full advantage of XHTML 1.1, though any styling you’re used to will need to be recreated. What you’ll actually be doing is designing around the W3C box model to re-create what browsers already create for you. The chances of this working cross-browser, however, aren’t exactly favorable. Another cost is the lack of scripting because XML itself has no concept of scripting.
The other option is to instead skip all of the hacks and use XSLT on the server side for IE only. Deliver the generated HTML or HTML-compatible XHTML document to IE, and deliver the original document to other browsers that can handle XHTML. Of course, then you wouldn’t really be using XHTML 1.1 in IE, would you? ^_^
*sigh* Too much work to code XHTML for IE when every other browser, including Lynx, a text-mode browser, can handle the stuff. XSLT all the way!
@Dave:
>>If you’re suggesting that IE doesn’t refuse stylesheet references that don’t contain the expected Content-Type, or something of that nature, why don’t you explicitly say so?<<
I definitely need more practice in the English language.
It looks like you got my point though.
This is a violation of CSS 2.1, the only clear goal for IE8. The report was closed as WONTFIX.
Just please do your best to start pecking away at Firefox and Google Chrome. Seriously. You guys are a group of extremely smart people… I’m sure guys can do something (and gals… no offense to any girls on the team).
Also, would be cool if you could do the "shift+enter" to .NET and "control+shift+enter" to go to .ORG. Doesn’t make sense why it wouldn’t be there + it would be a helpful addition.
Control+shift+enter is customisable from Internet Options > General > Language which is very handy for people to set their local suffix, or whatever one they use regularly.
OK I’m tired of beta testing and waiting I will go back to FF3.
All I care about anymore is that the rendering is on par or better with the browsers that currently render better than IE6 (not hard) and IE7. The fewer need for IE related css tricks, the better. I’m so disillusioned with the continued arrogance towards openly (and rationally) supported standards (xhtml, favicon implementation), whether they are a draft or finalized – or whether or not the IE team originally ‘invented’ them, coupled with the incredible amount of time IE8 development has taken, that I’ve moved on to using a different browser full time. At this point the frequency of updates and the incredible snail-like speed of IE development has convinced me that there’s no point in waiting for a superior browser product from Microsoft.
All I can personally hope for is that IE8 (when it finally gets released) will ease the burden for my daily development tasks. Even then, it will be a year before the majority of users switch to IE8 and possibly longer for corporate adoption of IE8. Hell, I’m dealing with a major insurance company who wont even update to IE7 internally – and how long has that been out?
Ditch the pride and learn to love humility.
"concern Favicons, a feature invented by the Internet Explorer team many years ago. The fact that subsequent "specifications" were authored by others that were incompatible with the existing design reflects a flaw in the authorship of those documents, not a flaw in IE."
The community (i.e. ‘others’) have left IE in the dust, moving on without it. That’s just fact now.
Talk is cheap.
Become agile – at least in some regard.
Take a page from your successful competitors;
RELEASE EARLY, RELEASE OFTEN
> Hell, I’m dealing with a major insurance company who
> wont even update to IE7 internally – and how long has
> that been out?
Too long for developers, not long enough for others. 😛
EricLaw: The button is styled using CSS, the checkbox is not really important here (maybe I shouldn’t have included it in the images).
The input tag looks like this:
<input type="submit" name="loginKnap" value="Log ind" class="navLoginButton" tabindex="4" />
CSS Class:
.navLoginButton {
margin-top: 1px;
margin-left: 5px;
width: 55px;
height: 19px;
font-family: Georgia, "Times New Roman", Times, serif;
background-color: transparent;
background-position: right;
color: white;
border: 0px;
cursor: hand;
cursor: pointer;
}
We were able to get around the text wrapping by using a non-breaking space instead of a regular space character.
Replacing value="Log ind" with value="Log&nbsp;ind"
Some of the CSS settings may not be optimal, but I don’t suppose text should be able to wrap inside a button at all?
I must say, having done some testing with the beta, I am definitely impressed with how much better IE8’s rendering of various things is than IE7’s, and I’m very much looking forward to seeing it out. Indeed, as a web developer I’d really like to see it hit automatic updates so I can start expecting users to have it.
On the other hand, though, I can sure understand the desire to not release it until it’s stable. (I’m a Debian user myself and still using iceweasel 2, so you _know_ I understand about stability requirements and the need for patience.) So in the long term the slower release cycle you’ve planned is probably for the best. Just don’t make it _too_ long 😉
NM: SVG would admittedly be really nice to have, but it’s also a big spec and would take a lot of time to implement. Firefox is still only partway there after working on it for a couple of years. So I’d say SVG is best left for IE9 (or 8.5, or whatever numbering goes on the next release after 8). Besides, you don’t normally do major feature work after putting out the betas. Focus on getting IE8 out the door first, and then worry about stuff like SVG for next release.
Actually, even the performance issue Stan was talking about, provided it’s not significantly worse than IE7, doesn’t need to be fixed for the first release of 8.0. If it’s going to take more time, do it in a point update after release.
Ted, HTML5 is mostly just to help keep the holdouts who aren’t ready for XML yet from falling *completely* out of step with the modern web. XHTML has significant practical advantages and is not going anywhere. (Among other things, you can’t mix XML namespaces like RDF or SVG into an HTML5 document, but with XHTML you can, and people have already been doing this for a couple of years. Also, wellformedness makes XML *much* easier to parse than SGML, which makes XHTML markup much more maintainable than traditional HTML, especially when it’s constructed dynamically from bits and pieces provided by different code in different places.) The people moving to HTML5 are moving mostly from HTML4, not from XHTML. And actually, IE *mostly* supports XHTML okay, but it needs to recognize the content-type and a couple of other details like that. Currently most XHTML content has to be served out as text/html because of this issue.
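A tiny example of the namespace mixing mentioned above: an XHTML document embedding inline SVG, which works in XML-capable browsers but has no equivalent in tag-soup HTML (this is just an illustrative sketch):

```xml
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Mixed namespaces</title></head>
  <body>
    <p>An inline vector graphic follows:</p>
    <!-- the svg element and its children live in the SVG namespace -->
    <svg xmlns="http://www.w3.org/2000/svg" width="40" height="40">
      <circle cx="20" cy="20" r="15" fill="navy"/>
    </svg>
  </body>
</html>
```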
I’ve been able to reproduce it here:
It first bugs when I add the XHTML 1.0 Transitional Doctype.
Works fine in IE7 Compatibility mode – but wraps in IE8 mode.
Regarding the script debugger it would be really nice if we could specify that it should always be enabled for specific domains, but disabled for all others – perhaps as a setting in the developer toolbar.
Found my first bug submitted to IE to finally have the status changed correctly !!!!!
Status: Resolved (Postponed)
As in, for now they are doing nothing to fix it, but they at least acknowledge that it still does need fixing, thus is on the list for IE9 fixes.
Only took MSFT 5 years, but now you seem to be getting it!
PS the bug was 336252.
Setting .innerHTML on a select element still fails in IE5.5, IE6, IE7, and now IE8.
Just for the record, I do think this would likely take only 5 minutes to fix. I can’t see what is so hard about this.
Maybe you could post the C++ code and we could help you figure out the bug?
Lori: First, check whether it happens in other browsers (e.g., Opera, Firefox, Safari). If it happens in every browser you try, then the problem is probably due to a mistake in the page’s markup or styling. If it only happens in IE, then you should try to create a minimal test case to demonstrate the problem. The programmers listen much more attentively when you have a good test case, as I discovered long ago on b.m.o.
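For what it’s worth, a reduced test case can be as small as this skeleton (hypothetical; replace the comments with just enough markup and CSS to trigger the problem):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Reduced test case: one-line summary of the bug</title>
  <style>
    /* only the rule(s) needed to reproduce the issue */
  </style>
</head>
<body>
  <!-- Expected result: ...
       Actual result in IE8 Beta 2: ... -->
  <p>Minimal content that shows the problem.</p>
</body>
</html>
```

Stating the expected and actual results right in the file saves the triager a round-trip.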
Techdribble: I agree, multiple browser versions on the same OS is highly useful for web developers. Konqueror has the same problem, because of the way it’s integrated into the desktop environment, which I think is somewhat similar, at least on the surface, to the way IE6 was integrated into Windows, e.g., Windows Explorer in some cases would launch IE embedded in the WE window with WE menus and toolbars, and the reverse situation was also possible; Konqueror, for reasons that are not clear to me, does something very similar with its file manager and web browser components, which can be confusing if you don’t realize what’s going on. Admittedly, some things (such as the home button) would not be quite as confusing on Windows (since the user’s personal documents folder isn’t usually called a home directory), but still, integrating the file manager with the browser never made any sense to me, and I hope Microsoft doesn’t go back in that direction. That’s a past better left behind.
@Jonadab the Unsightly One:
Just FYI, HTML5 is not SGML. It’s merely inspired by SGML, but actually its very own syntax and parsing is defined in the spec/draft.
I for one am amused reading comments here about how SVG is a must. What’s with that obsession, or are they the same people over and over :p
@Daniel Møller, EricLaw:
No specification supported by IE8 specifies in any detail how form elements are to be presented. For example, ."[1]
Hence, browsers are free to do as they wish with the presentation of form elements.
The historical non-wrapping, truncated overflow behaviour of (input-)buttons as implemented by most (all?) released browsers to date is horrible because these browsers also prevent overriding this behaviour via CSS.
IE8’s wrapping behaviour is welcome because it corresponds to web authors’ expectations; where else outside of <pre> does text not wrap by default? Moreover, if required, it /is/ possible to achieve non-wrapping by CSS (or by using no-break spaces as Daniel suggests).
[1] Section 3.2 of
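For instance (a sketch, assuming IE8 standards mode honours white-space on form controls; worth verifying against the RC), authors who want the old single-line behaviour back could simply write:

```css
/* opt back into non-wrapping button labels */
input[type="submit"],
input[type="button"],
button {
  white-space: nowrap;
}
```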
IMHO, IE8 Beta 2 is still not the killer IE that users are expecting. Lots of rough edges and missing features. Frankly, we want to surf the net, store bookmarks and be done. Who wants to muck around with 1000 IE settings and options? It’s a sad state of too many choices and legacy features. Feels like patchwork on top after all. Sorry guys.
> text/css won’t be properly supported. Because someone filed a bug which was closed later.
Just so that everyone can understand and follow previous comments from Tomas and Dave:
Bug 364028: External stylesheet not labeled text/css must be ignored.
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=364028
was Closed and resolved as Won’t Fix
Relevant reduced testcase on this:
HTTP response headers for that stylesheet:
web-sniffer.net/?url=http%3A%2F%2F
Relevant specification section:
indicates that stylesheets should be served as text/css and not as text/plain, nor with an incorrect MIME type such as "application/x-pointplus"
Also HTML 4.01, section 14.6 Linking to style sheets with HTTP headers:
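(For reference, serving the correct Content-Type is usually a one-line server change. E.g. on Apache httpd, assuming mod_mime is loaded, as it is in the stock configuration:

```apache
# map the .css extension to the correct MIME type
AddType text/css .css
```

Other servers have equivalent MIME-map settings.)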
Tomas, you’re right. They should have just futured/latered/postponed that bug. Even just leaving it with status Active, along with the typical "At this time we do not plan on fixing this issue. We will consider this in a future release of IE." comment, would have been more sensible and careful than closing and WONTFIX-ing it.
Regards, Gérard
No SVG, minimal PNG, and buggy GIF support? Puhleeze.
From the point of view of a web dev the conversation goes something like this:
look at all these cool charts and things that I can easily make from within our webapp framework. I can even embed them in a PDF and it’ll print beautifully. You don’t even need a crufty plugin for your browser! You’re using IE? Oh. Sorry, Microsoft wants to lock you into their proprietary technology that nobody will ever use instead. You’ll have to look at the ugly version of the page.
Political garbage aside, Internet Explorer will be further behind if you don’t support what everyone else does. You IE devs claim to listen to feedback, yet you ignore all the cries for standards compliance and feature parity with other browsers. All we lowly devs get back is drek that’s been filtered a few times through your PR department.
Every once in a while you’ll do something in the name of standards compliance. Something like rename a bunch of CSS properties, so now there are what? three? cases you’ve got to support to handle proprietary MS CSS extensions? Ugh.
@Steve,
They fixed 2 bugs regarding the add() method for adding options to a select:
See bugs #14 and #72 at my webpage. So there is a workaround, a web standards one (innerHTML is not anywhere in DOM 1 & 2) for adding options.
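A minimal sketch of that standards-based route (the element id here is hypothetical):

```html
<select id="mySelect"></select>
<script type="text/javascript">
  // DOM 1/2 way to (re)populate a select instead of setting innerHTML
  var sel = document.getElementById("mySelect");
  sel.options.length = 0;  // clear any existing options
  var opt = document.createElement("option");
  opt.value = "1";
  opt.appendChild(document.createTextNode("First choice"));
  // appendChild avoids both innerHTML and quirks in add()'s signature
  sel.appendChild(opt);
</script>
```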
Regards, Gérard
@Kellie [MSFT] re: "Thanks for your help in filing bugs. The connect site will be updated with all of the bug fixes, and current status on the issues when the RC build is released."
I understand that some things are still being worked on, however the connect site needs to be updated before the release. e.g. if window.onresize will not be fixed by the RC, then many of us need to redesign our sites to make a downgraded version for IE8. I would certainly hope that we wouldn’t need to do this, but there are still many fairly big regression bugs that are going to cause us a lot of grief if they are not fixed. Knowing up front, what things have been fixed will allow us to prioritize on what workarounds we need to put in to make our applications and web sites work in IE.
thanks
@Gérard Talbot: yes, glad to see the .add() method is fixed, but I find it surprising that the browser that added the .innerHTML property is the only one that doesn’t seem to support it on every element (there are at least 3 elements that IE fails on).
Since setting the .innerHTML renders much faster, and in the past IE had so many issues setting attributes, using .innerHTML became the status quo. Was just hoping that there would be a fix for this… and for Tables too.
I think it’s cool that MS is considering dropping IE (Trident?) and shifting to a better (open source) engine, like Webkit…
At least then you’d come close to achieving parity with the other browsers on the market, all of whom are giving your browser efforts a serious paddlin’ in the web standards, functionality, and performance stakes… and doing so on budgets at least an order of magnitude smaller than yours… wonder what the MSFT shareholders make of that…
I certainly recommend an "If you can’t beat ’em, join ’em" approach – something that open source encourages. Anyway, Microsoft: admitting defeat is probably the best way to get yourselves out of this rather embarrassing IE browser mess.
I’m struck, reading the above comments, by the fact that just about all of them are from developers/users requesting that IE8 support features that neither IE6 nor IE7 support, but all other browsers do. I thought MS were supposed to be innovators… I fail to see innovation. I just see catch-up with the leaders, and I see the failure of the MS development model.
Many of these XHTML, SVG, etc. issues are related to MSXML, which the IE team has no control over. If I’m thinking about it correctly, they only use the MSXML libraries to add some XML support. In many people’s opinions, there is more important work to do on the rendering engine and the scripting engine (CSS, DOM, script execution speed) than anything else anyway.
In addition, XHTML isn’t really the MSXML team’s responsibility since it is a "reformulation of HTML 4 in XML 1.0". The HTML bit means it is the IE team’s job. However, MSXML doesn’t support some of the things that the W3C does in its DTDs, resulting in IE’s current inability to do anything to support XHTML.
Unless the IE team created its own XML parsing engine or the MSXML team "fixes" theirs to support XHTML or at least the stuff that causes errors in IE currently, XHTML support in IE is only a fantasy.
Don’t waste time on XHTML, canvas or SVG, or a download manager.
Concentrate on standards compliance and the user experience of the browser.
I have seen on Gérard Talbot’s pages at least 30 or 40 still-open standards-related bugs, and even if the IE team does not consider all of them bugs, that would leave at least 20-30 standards bugs that should be fixed, especially those that are regressions from earlier versions.
Make sure that the browser feels faster and correct all issues to do with the user experience.
> Don’t waste time on XHTML, canvas or SVG, or a
> download manager.
> …
> Make sure that the browser feels faster and correct
> all issues to do with the user experience.
I agree with all but the last one. The team doesn’t just have us developers to think about; they need to make sure the user experience is good too, as noted in the last of the quoted lines above. A download manager would be a welcome addition for users. Even something as simple as Firefox 3’s DM would work. You can pause, resume and cancel unfinished downloads, and you can clear finished/cancelled downloads with the click of a button. That would be enough, and it shouldn’t take too long to add if it is that minimalist.
As for the standards-compliance with regards to XHTML and SVG, I agree. Those can wait. As for HTML 5 features such as the canvas element and the canvas API, those could wait too. After all, it’s just a draft right now, and it could change at any time.
SVG is a real must for the kind of corporate reporting web apps that my team has to work on. If Microsoft can’t implement this in time, please work with a third party to include at least some support as a plugin (license a plugin from a third party, and ship it). If IE8 doesn’t deliver at least SVG+XHTML, it will definitely be dead (completely gone) in my organisation by Q3 2009.
In order of importance, I would like to see the following: MathML, proper XHTML, and Canvas.
I have tried IE8 beta 1 and 2 and neither one would ever open. It is the only thing I know of that has problems on my computer. I have a dual-core AMD x86 Vista-ready laptop, an HP dv9008nr.
Any suggestions? Thanks, Miles
I would like to see interim release / "CTP" or whatever in the next month.
@jrsmith:
There’s no need to wait. It’s already clear that IE8 won’t include any XHTML/SVG/MathML support.
If that weren’t the case, these features would already be visible in Beta 1 and 2.
Personally, I’m happy that my company recently upgraded to IE7 (I hope they won’t wait as long to move to IE8). But we’re free to use another browser anyway.
There are some function in IE8 that I miss:
In Firefox:
When Firefox asks you if it should remember your password, it pops up a menu with different choices. I like this MUCH better than the one in IE. Also, in Firefox, it will log in to your account before you have answered whether it should remember or not. In IE you have to answer first. That’s bad… So get some inspiration from Firefox, guys!
Firefox has a lot of addons. I miss this in IE. In Firefox I have a YouTube downloader. I love it! It makes it much easier to save movies from the internet.
In Firefox 3.1 they have added a new Ctrl + Tab function. Now it looks much more like the Shift + Tab. That function is much better than their old one and yours. You should do the same!
Except for that, I love IE8! So good luck guys!
@Dustin
A download manager is not needed. Downloading and resuming of downloads works fine in IE8 and is more than enough for nearly all users. Users who need more can always download a download manager plugin.
@jrsmith
If SVG is a real must and you can’t use alternative methods, your tooling is inflexible at best. Also, I wonder how you accept that no browser has full SVG support yet.
@Dean
Who are you listening to, your boss? Because you haven’t been listening to the beta testers. The only thing Microsoft has been able to say to beta testers is "NO". So much for changing, listening and learning from your mistakes. What happened to "Life Without Walls"? As for IE9’s UI customization, I predict it will be a nice big fat wide ribbon; it will be Microsoft’s way. Ted is right about complaining to Apple, because Microsoft has become Apple. Just say NO.
I know what IE 8 needs to succeed, The "Mojave Experiment" of it’s very own.
Hello, I just found out that this new version is out… I will test it right away and see how it works.
Regards,
Martin.
Argentina.
Stop asking for SVG and Canvas, as well as proper DOM support and fast JavaScript engine. You will not see them even in IE10 because they may hurt Silverlight.
I’m very satisfied with Beta 2. But there is one thing I miss: a good and safe password manager that also works on sites that have turned autocomplete off.
Now I use the WebReplay password manager and I don’t understand why such a program can’t be built into IE. I mean, today passwords must be used everywhere and they must be complex.
Yes, there is Windows CardSpace, but this good solution (IMO) is hardly used.
In the final version, could you please make IE8 available for Greek x64? Thank you.
Al Billings, when you come over to troll on the IEBlog, you probably should use a signature that indicates you work for Mozilla.
Dave, you have no idea what the IE team’s budget is, nor, I suspect, do you have any idea how much Apple, Google, and others spend on Webkit. If you read what Ballmer *actually* said, he merely pointed out that Microsoft is *always* interested in what open source products are up to.
steve, I’m entirely confident that they’ll fix the resize event. The IE team isn’t going to break the web, that’s the whole point of taking so long to release.
Alex, when you say stupid, unsubstantiated things like "buggy GIF support", no one will take you seriously. If you have a complaint, build a test case and file a bug.
Gerard, I’d imagine they "won’t fixed" the bug because they have no intention of EVER fixing it. There’s little point in breaking the web just to comply with a poorly thought-out standard. Even HTML5 allows for content-type sniffing.
Sisl, I suspect you’re right, and most people complaining about SVG support don’t understand that other browsers are far from full support either. This is partly due to the SVG specification being *dramatically* overcomplicated. We don’t need yet another way to do form in the browser, for instance.
Jonadab, consider reading and the HTML5 working group charter, and what the WG had to say about XHTML. IE doesn’t support XHTML (no verification, case-sensitivity, etc), although if the content happens to look enough like regular HTML, the parser accepts it anyway.
I have been experiencing the same rendering issue Lori pointed out (comment)
Some page elements (images, text parts, etc.) are not fully displayed (or displayed at all) until I click on the page or refresh it. It is really, really annoying.
It happens with a lot of web sites, for example when searching on Google or with WordPress-based blog posts.
(This rendering issue does not happen when the compatibility view mode is turned on)
P.S.: I am running IE8 Beta 2 on Windows XP SP3.
Ted,
if Steve Ballmer merely pointed out that Microsoft is *always* interested in what open source products are up to, then Bugzilla as a global bug tracking and reporting system for *all* Microsoft products, not just IE, is definitely an excellent opportunity/possibility. Big corporations (eg NASA) have done so.
Bug 348537
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=348537
was Closed and WONTFIX-ed, while the comment from the IE team clearly indicates the IE team intends/wants to fix it. In any case, passing the Acid3 test implies fixing bug 348537.
Bug 348575
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=348575
was WONTFIX-ed. But the comment clearly indicates that the bug is valid and will need to be fixed. If Microsoft had Bugzilla installed and properly configured, then this bug 348575 would have been futured with a milestone/target like IE 9 (or 8.5) and with "highrisk" keyword added: the status would remain as it was: ACTIVE and not closed.
Fixing bugs should still remain a priority for *many* reasons. Remember that IE is widely in use (~=71% worldwide) and there never was any place to report bugs in IE for proper tracking and analysis during many years (1997-2005).
@fionbio
I think you’re absolutely right. Implementing SVG and canvas does not make sense alongside investing time, energy and resources of all kinds in Silverlight.
DOM 2 Events will be implemented in IE 9 (or IE 8.5) because the DOM 2 Events model is more versatile and much more powerful than the IE event model. Web authors will want and demand it more than SVG. SVG will hurt performance and memory footprint, and not everyone wants to see SVG animations. A more nuanced, flexible, less committal approach makes more sense: develop an SVG plugin first for IE 9 and then see how it goes/what happens. Anyway, Microsoft has other plans (Silverlight) besides SVG.
Regards, Gérard
XP SP3, on Thinkpad T60
Since mid-October, my Favorites Center button stopped working. (The "Favorites" on the menu bar works.) The flyout does not work at all. A quick internet search reveals that there are a few similar incidents. Are you aware of this problem? If so, will the fix be included in the next beta release?
Need for speed!
Obviously all known bugs must be fixed before release, and just as important, IE8 must be faster than all its competitors. That, by the way, goes for Windows 7 too.
Speed is the key to success, the key to Microsoft’s future as a leading competitor in this business.
"We will release one more public update of IE8 in the first quarter of 2009, and then follow that
The Internet Explorer 8 team has announced that a release candidate (RC) will become available to the
@EricLaw
I sent an email to the blog with the repro/website info.
@net
Thank you for confirming I’m not the only one seeing the problem.
@jonadab
I have tested these two sites in several browsers:
IE6, IE7, Firefox 2 & 3 (pc and mac), Safari (pc and mac), Opera, Chrome.
I only see the issue in IE8 – beta 2. I have Windows XP Pro, Service Pack 2.
Are there plans for a virtual pc download of IE 8?
@Jenn: See for links to all of the VirtualPC images, including the IE8B2 image.
@Sue-Jean: Do you see this problem if you start IE in no-addons mode? explains how.
@eghost: IE8 has millions of beta-testers, and as Dean has elaborated above, there are lots of ways in which their feedback has shaped the product. No product ever includes every feature from every individual’s wishlist (not even mine!), but we strive to delight the greatest number of people as often as possible. IE8 represents a significant leap over IE7, and we’ll continue our efforts into IE9.
To EricLaw: First, thanks for your interest. This morning EST, I found this fix:.
As of yesterday, I saw the no-flyout problem with Favorites Center with or without addons. Curiously enough, at work, I use XP Professional SP3 on Thinkpad T61 and have had no comparable problem. I doubt that the problem is related to certain Thinkpad models, though, because the original poster on the C-Net thread had an eMachine.
For Vista users, there appears to be a workaround:.
At we prepare by alerting the employees and making time available to answer questions and anticipate individual issues.
Been using IE8 B2 since it was released, and other than a few crashes and incompatibilities it was a good experience.
I would love to see a different Favorites bar.
The History area especially is not very user-friendly (even a browser tab with a grid and search/filter ability would be far better).
More speed is always welcome.
Good job so far otherwise.
The developer tools are a nice addition but it seems to me it is missing an obvious piece of functionality – the ability to inspect request/response headers. Any chance this could be added before IE8 ships? Or is this feature in there and I’m just not finding it?
I’d like to see the Command Bar share the same space with the Favorites Bar. I usually don’t have that many favorites on this bar but I usually have lots of tabs open.
Hey guys, just wanted to give ya the heads up that I am ready with my IE8.css style sheet for your wonderful browser.
@KeithH: Adding lightweight header inspection to the developer tools is a feature request we’ve gotten a few times. The devtools team concentrated on adding value in places where they could uniquely do so, and the result is the new script debugger and profiler.
For examining HTTP traffic, I’d suggest you take a look at Fiddler (). Fiddler offers much more than just header-viewing: You can also *modify* traffic (manually or automatically) as it flows across the network, enabling a much richer set of testing. You can collect HTTP/HTTPS traffic captures and archive it to files (see too) for later viewing or comparison. You can use Fiddler to modify the performance characteristics of traffic (e.g. what does my site look like on a modem) and can view timing charts to understand how your site is downloaded. There are a set of tutorial videos for Fiddler here:
Beyond Fiddler, there are a number of header inspector plugins that work directly within IE (e.g. HTTPWatch) if that’s how you prefer to work.
@Sue-Jean: Thanks for the notes. We’re looking into the Connect bug.
Is Compatibility View fixed yet? In beta 2, if you add to the compatibility list then it adds the entire govt.nz second-level domain instead of just the one site. This is the second time that I’ve mentioned the issue on this blog but nobody replied to the first one so I’d like to just make sure that you know about the problem.
I also posted the same problem to the NZ IE8 blog (blogs.msdn.com/nzie8) and my comment was deleted. That doesn’t bode well…
@Behodar: Thanks for the note; I’m not sure why your comment wasn’t posted.
Yes, the issues with the .NZ domain are known. The challenge with the .NZ domain is that it’s not set up in the typical ccTLD fashion, which means that it’s difficult to reliably know which part of the FQDN is the TLD. Stated another way: "There is no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain (the policies differ with each registry)." (publicsuffix.org)
We’ll have more to say on this topic in the RC1 timeframe.
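To make that challenge concrete, here is a hypothetical sketch (not IE's actual algorithm) of why a naive "last two labels" heuristic breaks for registries like .nz, producing exactly the over-broad Compatibility View entry reported above:

```javascript
// Hypothetical illustration only -- not IE's actual logic.
// A naive heuristic assumes the registrable domain is always
// the last two labels of the host name.
function naiveRegistrableDomain(host) {
  const labels = host.split(".");
  return labels.slice(-2).join(".");
}

// Works for typical gTLDs:
//   "www.example.com" -> "example.com"
// But .nz registers names at the third level, so the heuristic
// returns the registry suffix itself, covering the whole zone:
//   "www.mysite.govt.nz" -> "govt.nz"
```

This is why, per publicsuffix.org, a per-registry policy list rather than an algorithm is needed to find the registrable domain.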
Two questions.
1. Why is it that IE8 beta freezes on me a lot, but then when I start to press Ctrl+Alt+Delete to find which iexplore.exe is misbehaving, it suddenly releases itself from its frozen state and comes back to life? Was it scared of me? Was I threatening it?
2. Why should we the consumers, or users, or whatever, have to beta test a product from the world’s second most successful software company? (First being Google, and they’re doing the same thing with Chrome, which will perpetually be in "beta".)
stalepie: You don’t "have to" beta test anything. You get to beta test browsers because there’s no other way to test a product this complex. If you don’t understand why that is, you probably should not bother reading the IE Blog, just wait a few months and install the release version like the hundreds of millions of other "normal" users.
(By almost any metric (market cap, revenue per year, number of users, etc) Microsoft smashes Google.)
In order for Internet Explorer 8 to be finished it has to be beta tested. Beta testers are not getting paid. We are the guinea pigs. Don’t you at least find this rude that both Google and Microsoft indulge in this behavior?
I guess it doesn’t matter since we’re not paying for it. It’s not like a video game where you pay $40 dollars for it and expect it to work.
@stalepie:
I fail to see your point…
Why would beta testers get paid to test *free* products? We’re the guinea pigs because we’re the ones that care enough to make our pages work in a variety of browsers, including browsers that will be released in the coming months such as IE8. If you don’t dare to care to prepare, then why are you on IEBlog?
Care to enlighten us as to what your purpose for complaining is?
I hope I can create my own search provider just like in IE7. It is not officially supported, although it is still available.
As a user I have used Beta 2 since its release in August to try to provide the IE Team with usage statistics. After three months of using IE Beta 2 as my only browser I have had enough. I was hoping that there would be an update soon to fix some of the problems with Beta 2 (speed, compatibility), but now that I know it’s going to be a few months yet, I’m going back to Firefox.
In my opinion you would be better off releasing smaller incremental beta updates to satisfy users like myself and keep them testing. Being stuck with months-old beta code is frustrating.
After using Firefox again for the last few days, my only comment is make sure IE8 is at least as fast as Firefox 3. The increased responsiveness and page-rendering speed of Firefox over IE 8 Beta 2 has been a breath of fresh air. I miss the IE UI though, which I consider to be better than Firefox’s more traditional UI.
I’ll come back and use (i.e. test) the next release when it’s available.
Thanks Eric, good to hear that you know about the .nz issues 🙂
@KeithH:
Right click on the command bar area and uncheck "Lock the Toolbars", and then drag the command bar up to the favorites bar. I recommend locking the toolbars again after you do that.
I do exactly that for exactly the reasons you mentioned :).
Tab opening time is slow no matter what: several hundred ms on an absolutely clean XP system. I suspect this is an architectural bug, and very hard to fix, but please have a look into it. No program is usable when it has such big responsiveness problems.
There’s only one feature that interests me. I’d like to be able to open links in the same window and tab regardless of the target attribute. Currently we have the option of choosing to open in a new tab or window (from the settings), but I can already control this by holding Ctrl or Shift respectively whilst clicking (or by right-clicking and choosing the appropriate option).
If I knew that clicking on a link would always open in the same window, this would save me loads of hassle. Currently, if I know a site is linking to an outside site, I open the site in a new tab and then close the current tab to avoid any inconsistencies and confusion. Obviously this method is far from ideal and I lose all my history.
@Eric Lawrence [MSFT]
I have an web application that includes images (gif,jpg,png) as normal, but also some links to images in this format…
src="special/itemtype.xml"
where on the webserver, this is served up with the correct headers for a GIF image.
Content-Length: 809
Content-Type: image/gif
(as well as all regular headers… and no-cache, and a 1969 expiry)
What is odd is that these images, where the filename extension is not .gif, .jpg, or .png but .xml (possibly other extensions), work just fine in Firefox, Chrome, and Opera, but IE6, 7, and 8 all seem to randomly choke on some of the images.
e.g. If I have 10 images loading, ~4 of them won’t load, and I get those little red x’s. But if I reload the page, 1 or 2 of them might display just fine, and 1 or 2 more might vanish. Best of all, if I right click on any of the broken ones and choose "Show Picture" it loads just fine.
Using Fiddler I see that when they don’t load, they "claim" to get a 500 server error.
Does IE see the file extension, then peek at the first bit of content being returned, and determine that there is some sort of issue because (in my case) it isn’t XML, but binary?
In the short term, I’m reworking all images to have the real extension in the filename (even if a generated reference).
Thanks in advance,
Jesse
@Jesse: If you see a HTTP/500 server error in Fiddler, that means that your server is failing to generate the images properly. Is there text in the status line, headers, or body that explains what the failure is (a JSP/ASP/CFM exception, perhaps)?
Changing the URL to the file is unlikely to fix this server-side error.
If you provide me with a repro URL (or a Fiddler SAZ capture) I’m happy to take a look and see if there’s any further help I can provide.
@Tihiy: What’s your new tab homepage? Is it about:blank?
@EricLaw [MSFT]
There is definitely something messed up, because it works fine in all other browsers. More importantly, the "generated" bit is really just Apache rules that do mod_rewrite to point to GIF images.
Since I can load the image one second but not the next, and vice versa, IE is just getting confused somewhere.
If I can set up a log that shows this I will send it.
@Jesse, if your server is returning HTTP/500 error messages, it’s not IE getting confused, it’s the server.
Can we expect IE in Indian languages? Firefox already has it in Telugu (India).
@Jagannath, thanks for your question, and yes, IE8 will also be available in several Indian languages at RTW – stay tuned.
The term is RTM (Release To Market) not RTW.
Alpha – Beta – RC – RTM
In IE8’s case, thus far we only know…
Beta1, Beta2, RC1, ???, RTM
@harold: At Microsoft and some other companies, the acronym "RTW" means "Release to Web".
RTM (or "Release to Manufacturing") is usually used to refer to "traditional" products that ship on CD or DVD.
The terms are often used interchangeably by development teams.
Dean Hachamovitch, General Manager for Internet Explorer at Microsoft, announced a Release Candidate of Internet Explorer 8 for the first quarter of 2009 on the IEBlog last week. Typically this RC marks the end of the beta stage of the
I found that having the Java add-on enabled caused very slow new tab creation, over a second.
Disabling the Java add-on has made new tab creation almost instantaneous.
This was with Java 6 Update 10…
Could you have a look at your implementation of :hover? It seems to be very slow when used on non-link elements on complex pages.
This really hurts us and we need to find dirty workarounds to fix this issue.
A small suggestion to make the Favorites bar more awesome. When I click on the ‘Add to Favorites Bar’ button on, for example: I get an entry on the bar called ‘Welcome to Flickr – Photo Sharing’. It would be much nicer not to have overflowing text, but simply ‘Flickr’. So why not let the website address be the default title for a new favourite entry instead of some long descriptive title from the webpage? All these long entries clutter the Favourites bar.
I am really curious about the next releases and especially their standards compliance. I appreciate very much that you keep an eye on all the usability issues. Hopefully your work to review all the comments and suggestions will bear fruit. Good luck! And thanks for keeping us informed!
After installing Beta 2, it is not possible to see the headers of Outlook Express messages in the non-main identities.
@abc : the long name of the favorite is the name of the current page. IE can’t guess a name, so it takes the one in the title tag.
How do you adjust IE8 so the "What’s new in IE8" tab doesn’t appear each time you open the browser? I only expected to see that screen the first time and now it comes up every time I open IE8.
@JG: What’s the URL of the page you see? What are your homepages set to in Tools / Internet Options?
@abc: You can rename items on your Favorites bar by right-clicking them. The problem with just using the domain name is that it may not be specific enough (e.g. "Yahoo" would include their mail page, their groups page, their search page, etc, etc.)
@EricLaw: That is indeed a valid point. Maybe for the next version you could develop a smart algorithm to come up with a short and useful default name? The default favourite title should be right more often (right now almost every added favourite needs renaming).
Please allow the simple opacity:nn CSS style; otherwise your browser will appear to be broken. This is just a matter of your future browser understanding a CSS property that is sure to be standardized soon, and that has been essential to modern web development for many years now. So what if it doesn’t validate for CSS 2, but only for CSS 3? You should allow it anyway, and let web designers decide for themselves what standard, if any, they want to validate to. The important thing is that we can make web pages that work to current or future standards; who cares about stupid semantic rules anyway?
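For context, a hedged sketch of what authors commonly had to write circa 2008 to get the same effect across browsers; the proprietary filter lines are exactly the IE-specific workarounds this comment is asking to retire (class name is illustrative):

```css
/* Cross-browser 50% opacity. The first line is the CSS3 property;
   the rest are the IE-proprietary fallbacks. */
.faded {
  opacity: 0.5;                                                     /* CSS3 / other browsers */
  -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=50)"; /* IE8 syntax */
  filter: alpha(opacity=50);                                        /* IE6/IE7 syntax */
}
```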
1. Review your tests. Usually we do prepare tests for important functionality but we leave the details to be tested later. Make a list of all your features and test them.
2. Because your product is quite big, do a Release Candidate and collect information from your "real world" testers.
3. Prepare for the next steps. While developers and engineers test each other’s software and finish up, you should plan for "the after".
3.1. Is the product going to require frequent updates?
3.2. What parts did you leave out for an 8.1 version?
3.3. Prepare to analyze the market impact of your new product and its competitors. Did you do a great job?
As far as I’ve seen and using the history of IE6 and IE7, I don’t know if your product is gonna be good for the web industry or just another weight to carry.
Consider that people will have to "change", and when you have to change from one browser to another, some people will ask themselves whether IE8 is the alternative, or perhaps another browser is.
So take all the information you can from the RC, because your team will either make thousands of developers happy or just confirm what IE6 and IE7 were.
On my office PC I have windows XP installed. I have the IE 8 Beta 2, Firefox 3.0.4, Chrome installed. I am able to use Windows Live Mesh in the other two browsers. But, in IE I am not able to sign into the Live Mesh. The Connection settings are same for all the browsers.
My apologies in advance if these things have been mentioned.
1. I found IE8 beta 2 to be largely unusable. There were so many pages that didn’t work that I switched compatibility mode on permanently, to be able to use IE8 on a day-to-day basis. In essence I gave up on IE8 beta 2 and now run it in compatibility mode. The rest of this information is with IE8 running in compatibility mode.
2. As the creator and maintainer of the Australian English dictionary files used by hundreds of thousands of people, I’d really like the ability to have spellcheck without having to install IE7Pro. IE7Pro adds many features I don’t need and I find it is cumbersome for people to set up. The IE7Pro download manager has been a cause of problems for a number of people who have contacted me. A simple spellcheck add-on would be preferred, or for the feature to be built-in. I personally wouldn’t recommend a browser that can’t spellcheck with all the online work we do now.
3. I found the custom search engine facility from Google causes an infinite loop in IE8. I receive a pop-up blocked message, and allowing pop-ups then causes the loop. This works fine in all other browsers. You can use the site to do a search and check what happens.
4. I just went to print the following PDF (). The dialogue was malformed and thus didn’t work. I had to resort to using Firefox to print the PDF. The printer was an Epson C1100.
5. Would prefer new tab with nothing in it.
6. I find the speed lower than desired.
7. I have no need for a download manager and find the built-in features to be sufficient for my needs. This isn’t to go against what others want, just to let you know it isn’t important to me. It does appear important to others.
I hope the feedback helps.
Will this really be such a good browser? I don’t believe it! How realistic is that?
@Kelvin Eldridge:
In Tools->Options’ general tab, go to tab settings.
Under "When a new tab is opened, open:" select "A Blank Page".
Then new tabs will open blank. You can also set your homepage to about:blank if you like.
Ens, thanks for your help. I was looking for a way to make that work. I found it today! Thanks!
When I save a web page (complete) in IE8 Beta 2, why does it alter my CSS files?
I have things like:
#foo{
border:1px solid #000;
}
which then gets converted to:
*#foo {
BORDER: 1px solid #000000;
}
I don’t care about the case change or the spacing, but why does the star (*) get added?
I noticed it gets added to every CLASS or ID based CSS declaration.
If I overlay a semi-transparent div (e.g. a lightbox) over my page, I shouldn’t be able to select the text underneath it (since the div blocks my clicks to the content below).
This BREAKS in IE8 due to the Activities feature. I can now select any content that is under a semi-transparent div to then copy/paste/print or do whatever with.
Since IE does not yet support the user-select CSS property (or have their own -ms- implementation) just how does one now deal with this regression issue?
Or will this be fixed in IE8 RC?
PS The above is in ADDITION to the opacity/z-index regression bug in IE8 Beta 2.
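For reference, the property being requested looks like this in browsers that do support some form of it; note this is an illustrative sketch only (IE8 Beta 2 implements none of these, and the `-ms-` line is purely hypothetical):

```css
/* Prevent text selection beneath an overlay, where supported.
   The -ms- line is speculative, shown only to illustrate the request. */
.lightbox-overlay {
  -moz-user-select: none;     /* Firefox */
  -webkit-user-select: none;  /* Safari / Chrome */
  user-select: none;          /* CSS3 Basic UI draft */
  -ms-user-select: none;      /* hypothetical IE equivalent */
}
```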
Is it possible to set IE8 to render ALL sites in IE7 mode for now? How would I do this?
@cwilso / @EricLaw / @anyone at MSFT:
Can I get confirmation that MS is aware of, and looking into the z-index bug with:
-ms-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=25)";
I really don’t want to have to submit a test case and file a bug in IE Connect if this has already been fixed.
thanks,
steve_web
@321: On the Tools menu, click the "Compatibility View settings" option.
@steve_web: I’ve seen various bugs related to the opacity filter and the Z-Index. It would of course be helpful to see your repro case to ensure that the issue you’re seeing is the same one we know of.
@Kelvin: Issue #3 does not repro in current builds. Please check back for the RC build when it’s available. For issue #4, was the dialog an IE dialog or an Acrobat Reader dialog? Either way, it would be helpful to know specifically which dialog it was.
@Jagannath: What specifically happens when you attempt to log into the Live Mesh?
@Ens: Thank you for the tip with new tab.
@EricLaw: Will check RC build when available for issue #3. Thanks for confirming it is probably no longer an issue. For issue #4 if I open the PDF from my Desktop using Acrobat reader and click on the printer icon the dialog opens fine.
If I open the document in IE8 using and then click on the Acrobat Reader printer icon within IE8 the dialog which displays isn’t correct. The following is what the dialog looks like. ()
I hope that helps.
I wish they’d get the thing going so I can see if it is going to work with my McAfee security stuff.
@EricLaw(MSFT) In IE the progress bar goes on forever, but I never get the login page. In the other browsers, I get the login page immediately. At home, I don’t have any problems with Live Mesh in IE.
Agreed. I really hope the IE team has better things to do than make the browser pretty when you shrink it to a tiny-weeny size.
I saw, as you said, that #1: IE takes much longer to start up, even with SuperFetch in Vista. #2: Opening a new, empty tab is, again, much slower. #3-#99 left out.
Simple summary: Lack of SVG and/or canvas support SEVERELY limits the applicability of IE8. Here’s why: YOU CANNOT DRAW A CURVE OR A CIRCLE IN HTML!!! Get it? It really is that simple. To do any kind of meaningful graphical UI you end up being required to use plug-ins (Flash, Silverlight, Java, etc.) or a proprietary markup language (VML). SVG and VML are close enough that it is RIDICULOUS that Microsoft could not have invested *at most* two developer-years to add SVG Tiny or SVG Basic support (the SVG spec does not require full implementation of all of SVG to achieve compliance). Alternatively, the browser could support the canvas object, but I prefer the declarative nature of SVG plus the scriptability that can be achieved, and it is consistent with Microsoft’s support for XML-based markup models.
The problem with plug-ins (e.g. Silverlight) is that they take a LOOOOOONG time to find their way to the corporate desktop, for a variety of practical and organizational reasons. For whatever reason, it is easier to get a new browser onto the desktop as long as there are security or other good reasons to do so.
In any case, I think IE8 was a blown opportunity…
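To illustrate the "curves and circles" point above, here is the kind of minimal declarative markup that SVG-capable browsers (Firefox, Opera, Safari) render natively; this is an illustrative sketch, not something IE8 will parse:

```xml
<!-- A circle and a curve: trivial in SVG, impossible in plain HTML. -->
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">
  <circle cx="60" cy="60" r="40" fill="none" stroke="black" stroke-width="2"/>
  <path d="M 110 100 Q 150 10 190 100" fill="none" stroke="blue" stroke-width="2"/>
</svg>
```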
I have found a problem regarding the tab-grouping feature in IE8. Say I have two tab groups, A and B, in yellow and blue respectively. If I close one tab in group B, group B will disappear, but when I open another tab from the page, the colour of group B is sometimes the same as group A’s (yellow). This problem does not occur every time, because when I reproduce the steps, group B sometimes becomes another colour. Shouldn’t the colour of each tab group be different?
Is this a wishlist? I will switch back to IE if a free adblock plugin is available. Thanks in advance! 🙂
@EricLaw [MSFT]
I’ve filed a new bug in Connect for the z-index issue:
It is actually a smaller bug than the title suggests, only affecting certain elements (TABLE is the only one I have tested thus far).
A test case is attached.
Thanks,
steve_web
Why is SVG important?…
It creates dynamic imagery, seamlessly from XML data. The XML DOM and SVG DOM can be easily linked to literally change data to images.
IE7 uses the Adobe SVG Viewer plugin, within an embed, to nicely create the above dynamic imagery.
Adobe has planned to discontinue support for its SVG Viewer in January. IE8 should be finalized sometime shortly thereafter.
MS can probably license this robust ActiveX component, and easily bundle it in the finished IE8 package.
This has presented a unique opportunity for MS, providing a seamless means of initiating SVG support in the IE8 browser. Although not ‘native’, it would surely send a supremely positive message to SVG developers who want to employ SVG in the IE environment.
How do I completely uninstall Internet Explorer? I love Firefox only, and I don’t want to see any traces of Internet Explorer. I have Googled this problem with no luck, and I thought the people who made it might help me. Any help available?
How are you guys going to receive data on which websites aren’t rendering correctly in IE8?
Are you exclusively taking reports from the Report a Webpage Problem?
I thought it would be nice if you did that, and then also, for those who opt in to a Customer Experience Improvement Program, sent the sites where users click the button to go into Compatibility View.
It would be a lot easier and would give you more feedback. 🙂 You would probably run the risk of false positives when people go into Compatibility View even when it’s not needed. However, big trouble sites would probably bubble up over time.
Just a suggestion! I can’t wait for the next beta. At this stage, I’m hoping for more perf improvements and some other tweaks to IE8’s rendering engine so I won’t have to go into compatibility view as much.
One bug I would love to see fixed is on a forum I go to that uses vBulletin: avatars and some other data do not show up on the left side.
URL:
Look at the site with/without compatibility view and you will see a difference (at least in B2).
@Stefan
The free plugin IE7Pro also includes adblock capabilities.
Ted:
Watch what you assume, eh? I don’t care about full SVG support. Rudimentary SVG would be a huge step in the right direction. Firefox, Opera, and Safari do it. SVG support is hardly a slippery slope. Aim for feature parity.
With those non-IE browsers I can easily generate and use scalable graphics (think charts and stuff that I’d want to display on a 42" TV or a laptop) with free, non-proprietary tools that integrate with the webapp framework I’m using. The resulting graphics won’t even require plugins.
Maybe I want to reuse those charts in a PDF. No problem because the tools I’m using to generate PDFs will handily parse SVG. Yay for code reuse.
Unfortunately, the customer has mandated IE be supported… so, as usual, a different solution must be found for the IE users.
And if you haven’t seen the "IE fires an onload event for every GIF frame" bugs or the "IE only animates GIFs in some modes"… yeah.
Dustin: Who cares? If the XML parser is holding IE back, it should be fixed. I can guarantee you that end-users don’t care one whit about the why, they care about the end result.
Ditto with developers. Certainly I don’t care why so much time has to be spent to implement IE6 only, then IE7 only, and now a bunch of IE8 only fixes. I don’t care about why IE6, FF2/3, and Safari do one thing with regard to content negotiation, while IE7 does something entirely different. I care that IE7 does something different, and that I have to spend time working around it.
All of the hand waving about how horrible XHTML, the canvas tag, or SVG are is besides the point. The competition has come to some manner of consensus, IE should too.
SVG wouldn’t be bloat at all. Take a look at Google Maps on IE. To implement the various overlays (lines and polygons) on Firefox/Safari/Opera they use SVG. On IE, Google falls back to VML (ugh) and performance suffers.
But hey, we get pretty colored tab bars. That totally makes IE8 a must use!
Are there any plans to bring back inline AutoComplete for the address bar?
Instead of being improved in the next version, it has been completely removed (replaced). Selecting an address from the suggestions listed in the drop-down list requires many more key presses than the previous behavior.
I’ll add my vote for SVG support (even though it obviously won’t be there for IE8). It’s downright insulting that the IE team still hasn’t officially stated its position on the matter, despite the fact that thousands of people have been asking for it for years.
So, some people are worried about "bloat", heh? Then feel free to take out VML, glowing fonts, and all other worthless proprietary stuff. It’s not like developers use them anyway.
"If you look around at browsers, you’ll find that most of them support scalable vector graphics," Berners-Lee said. "I’ll let you figure out which one has been slow in supporting SVG."
In case you don’t know who Tim Berners-Lee is or what his credentials are.
MS, forever holding back the web with non-standard, closed, proprietary vendor lock-in BS like Silvercrap that no one wants.
Good old EEE? Embrace, Extend, Extinguish. We all know that playbook.
Or Microsoft has forgotten their own Interoperability Principles and pledge? Here’s a quick reminder.
Of course, if non-compliance with open web standards supported by the other three major browser engines means unbundling IE from Windows by EU Commission ruling, that is quite acceptable too.
@steve_web
Can you visit
connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=381071
Regards, Gérard
The slow loading of new tabs in Vista seems to be related to Protected Mode.
I’ve an XP machine that runs IE8 with the same add-ons and it opens tabs almost instantly, whereas on Vista, it took 2 to 3 seconds.
However, if I turn off IE8’s protected mode, the tabs open instantly.
Is there something in Protected Mode that slows the loading of add-ons?
@Gerard, thanks I’ll look into your test cases, and refine one of my own (and upload/send you a link).
@EricLaw: I read the "@Mike: Spellcheck is available from at least 2 add-ons (IESpell and IE7Pro)." in one of your earlier posts and feel you need to offer something better.
IESpell doesn’t offer in-line spellchecking and I found IE7Pro to give problems so I stopped using it.
IMHO that leaves IE8/7 severely lacking in spellcheck area.
Has anyone got any profiling tools that will tell me what is happening during the 10 seconds it takes for IE to load up a new tab?
I have my settings set to open "about:tabs" in the new tab (not because I want it, but because it is the only entry point into private browsing… typing "about:inprivate" doesn’t work)
In any other browser, I can type [CTRL] + ‘T’ and then say ‘IE Blog’ which will either take me straight to this site, or to a Google search with the IE Blog as the first link.
However in IE7 and more so in IE8 I can’t.
I have to wait the full 10 seconds for IE to open the new tab and FULLY render it, or when I start typing I get some mish-mash URL like ‘ie blogabout:tabs’.
I don’t have any addons installed other than Flash/Acrobat which shouldn’t need to do anything when opening a new tab (without any web content)
I know there has been many bugs reported on how slow this new tab is but I’d like to narrow down the biggest culprit so that I can file a bug report that might get this fixed.
thanks
lenard.
We’d like to know the date of the release candidate (if known) for our development testing. It looks like first quarter, or possibly early second quarter, but do you have a date yet?
Found an odd issue. I normally develop in Firefox but now that IE8 is closer to being on the standards track I’ve started testing stuff out in it.
I use bookmarklets a lot because they are just so darn powerful/helpful but IE8 makes them very hard to use.
If I load a local page that has no JavaScript on it then click my bookmarklet I get the yellow security bar warning me about the dangers of ActiveX and all that mess which I then dismiss because it is just harmless JavaScript.
However IE doesn’t actually run my script… I have to re-click the same bookmarklet to execute the code.
Are all these extra steps really necessary?
Especially since to add the bookmarklet in the first place I had to jump through hoops to add it (I can’t drag a link to my toolbar, I have to right click the link, then choose bookmark, then confirm that the bookmarklet isn’t an evil script).
Is there an option to turn off these annoying warnings?
jerry
@mhinkle: As noted in the above blog post, the Release Candidate will come out in 1Q 2009. The exact date has not yet been set.
@lenard: You can enter Private Browsing in many different ways (most of which are available in Beta-2, I believe): 1> CTRL+SHIFT+P, 2> Tools Menu > InPrivate Browsing, 3> Safety toolbar button > InPrivate Browsing, 4> Tools toolbar button > InPrivate Browsing, 5> Start > Run > iexplore.exe -private
As for the slowness in starting new tabs, can you please run IE with add-ons disabled () and verify you still see the problem? I’ve never seen the new tab page take more than 500 ms with addons disabled.
Note that you could alternatively type the address in the ~current~ tab or search bar, then hit ALT+ENTER to open the new page in a new tab.
@jerry: The feature you’re having problems with is called "Local Machine Lockdown" and there are good security reasons that it exists; however, a full explanation would take a lot more space than this comment box. You can disable the feature if you’d like by clicking Tools / Internet Options / Advanced, and check the box near the bottom labeled "Allow active content to run from my computer." However, I encourage you to reset that checkbox before browsing untrusted Internet sites.
The workaround that does not require a configuration change is to either view pages on a HTTP server, or modify your locally stored HTML pages such that they contain a "Mark of the Web" for about:internet; see //msdn.microsoft.com/en-us/library/ms537628%28VS.85%29.aspx to learn more.
can someone direct me to the posting for bugs found.
Whew finally got it to work. I must say Norton internet security does not like beta2 on my laptop. I tried 4 times lastnight to install but only to find that norton had shut down everything . I had to uninstall Norton Internet security 2007. Does anyone have info or a link where I can understand what happened. This did not happen on my home PC and I have the same Norton version on my PC as I have on my laptop
EricLaw – I’ve been using IE8B2 for quite a while and have been very happy with it. However, there is one troubling thing I’ve noticed about some of your comments. When people asked for a download manager and spell checking you stated that there were several good add-ons that accomplished both. However, every time people talk about an IE issue, you tell them to run with add-ons disabled. So, what is the position of the team, fill our gaps with add-ons or stop using add-ons? I realize you comment about disabling add-ons is to troubleshoot issues, but if people fill gaps with add-ons it sounds like they are destined to face perf issues eventually.
@Tim: Let me clarify that my suggestion is that users encountering problems try running without add-ons in order to ensure that add-ons are not the source of the problems. Very very often, add-ons are the source of problems.
However, you seem to have jumped to the incorrect conclusion that this means that somehow ALL add-ons are bad. The reality is that there are many great add-ons available that are both stable and performant. My favorites are here: //. I run with the Mouse Gestures add-on, for instance, and cannot imagine using a browser without it.
In many cases, users who are encountering problems are running with unwanted add-ons that offer no real value (e.g. CD burning plugins), or running older versions for which newer, more stable versions are available. If they remove the unwanted add-ons, they find that they have a faster, more stable browser.
Note that this isn’t unique to IE or even browsers in general: You can download all sorts of utilities to your system (Windows, Mac, Unix, etc) that can either be really useful, or a source of nearly endless misery.
I’m always amazed at reading just the pure hatred of web developers towards IE on this blog.
I’d be interested to see how so many of the "experts" on here would handle the problems that the IE team face in making a forward progressing browser, while still supporting countless, poorly written custom web applications for corporations all over the world. Some people seem to forget that the flexability of IE has been abused by self proclaimed "web programmers" (and still is if you were to see the application that a vendor is trying to push onto us now).
The IE Team is in a lose-lose battle on all fronts, but they are working hard to move the web forward all while not bringing internal corporate applications to a halt. Not bad if you ask me…
… now, while they just didn’t make IE6 a separate download, I’ll never know… (I know there was technology claims, but I’m sure it could have been made to work somehow)
@HB : Just try to code a Web 2.0 application or anything more complicated than HTML4 and you will see why people hate IE so much.
As many people have pointed out (including yourself), the best solution would be to create a standalone version of IE for all the intranets and bank websites that will just not update, and then either create a new standards compliant renderer, or use an existing one. It would actually be better security-wise too.
Trying to incorporate the needs of ancient intranets and new websites seems to be causing a lot more problems than it is solving. At the end of the day, nobody will be happy. Corporate users will stick to IE6 like their business depends on it and anyone wanting a modern web browser will go for any of the other alternatives. Web developers will be even more unhappy as each version of IE introduces new incompatibilities. We just do not have this with a new release of Firefox or Safari.
EricLaw – I wasn’t trying to conclude that all add-ons are bad. In fact, I would agree that add-ons are an essential part of a healthy web browser. I was merely trying to make the point that as you depend on more and more add-ons, your browsing performance may begin to decline. In particular, using add-ons to fill core/frequently used browser capabilities make this issue spring to light even sooner.
In addition, you cite that some add-ons offer no real value. While the ability to Manage Add-ons has gotten easier, I’d argue that your average user (say, my Father-in-law) hasn’t really been helped since they don’t understand the impact of add-ons let alone know how to find the dialog.
Wow that did speed things up dramatically. It still takes a second or two but it is much faster.
Ok so as Tim said what now? There’s no way that I’m going to run a browser without addons, so where can I tweak the performance?
If the issue is directly related to certain extensions, has MSFT contacted the developers to get them to tweak their code?
AFAIK if I open a tab to about:blank or about:tabs I shouldn’t be loading up anything other than the tiniest of hooks for extensions. There is no DOM to deal with etc.
I’d be curious to see a list of performance for each of the top 20 extensions.
@billybob
I understand that IE6 doesn’t represent the web in it’s current state. I’ve written plenty of applications that have had a feature or two dropped because of backwards compatibility.
Thus why I brought up, and still think, a stand-alone IE6 browser would be in the best interests of the web.
Microsoft has many talented, professional developers that could make that happen. I think they’ve done well to take IE as far as it’s gone, but I don’t think they can win this battle. If they had the chance to start from scratch, they could develop something incredible.
The phrase ‘IE’ musters up too much nerd rage. It’s time, IMO, to move on and start from scratch.
And when I say ‘nerd rage’, I mean us as in developers…
My father-in-law doesn’t give a crap what his browsers name is, it is "the internet" to him…
@EricLaw [MSFT]
It does not make difference if new tab page is about:blank.
I narrowed down slowdown to Sun Java BHO. It makes noticeable difference when enabled (although absolutely clean system is still slow for me).
Is it possible for IE to check which BHO is affecting performance? Like, record initialization / event catch time and mark add-on as ‘slow’. It won’t help me, but can help in bad performance situations, like Google Chrome does.
"I’m always amazed at reading just the pure hatred of web developers towards IE on this blog."
yet….
"I understand that IE6 doesn’t represent the web in it’s current state…. If they had the chance to start from scratch, they could develop something incredible"
So, you do understand why there is hatred. It costs us time, money and features, not to mention sheer frustration and annoyance. Yet – if they stopped trying to dominate the web (or whatever it is they are trying to do), they COULD write something which would make our lives easier and the web a better place.
INSTEAD, internal politics trumps better web and more productive developers, and I think thats what people get angry about. Our time and money is directly sacrificed for theirs.
Why would people not be angry? Especially after being told that we should add all manner of headers just to make our sites render, then being told "Actually don’t bother". Then to be told, "we love the community, and really value you" followed by bugs being closed or just plain ignored.
What about features that developers care about? Just look at any discussion (or lack of) about embedded fonts, the new headers, SVG or Canvas. Developers are normally told their opinion is wrong and feature X will go ahead as planned.
I think we have come to the state where most developers do not think their opinion will even be heard so they just use this site to vent frustration at whatever IE bug they had to workaround today, rather than try to be productive.
Hey guys, I dont know how all to put this, but I’m a beta tester for Win_Server_2008_Enterprise, and one of the biggest things that I find hard to cope with is IE in Server 08. I tried to go to any website, and its all blocked. I try to download something, and its all blocked, and it gets really annoying to the point that I almost gave up on using IE! XD But.. I think that you need to give the user more control over there security levels, especially just being able to turn everything off for a moment, and browse the internet freely, w/o having to worry or fret over IE being in your face! 🙂 Thanks guys, and hope that this IE8 is gonna be way better than IE7. Keep up the good work! 😉
@EricLaw [MSFT],
When IE8 gets RTW will we get a list of the current bugs within it and workarounds or are you going to let us figure out the work for you?
Cause honestly everyone on here knows and I bet you sure as hell do that there will be just as many bugs within IE8 as there were in IE7 and IE6.
Ooo Ooo but you fixed [insert bug here] and that I guess deserves a job well done, ha.
Quit being smug. You have become a hypocrite with your comments.
@billybob
You’re an example of what I’m talking about. You view their efforts in improving IE as "web domination". It angers you that they don’t do enough all while it angers you that they do anything to prolong the life of IE. Lose-lose.
I’m not going to engage in a blog argument with you about the appropriate reaction to IE8. That is entirely up to the people that use it. I’m just giving my +1 to the IE team for keeping all these worthless, vendor applications intact while moving their web compliance efforts forward, even if it is just inches at a time.
I agree with you that IE is a lost cause. You can’t be 100% backwards compatible with IE6, because many of IE’s forgiving "features" are really issues that should be repaired. As I said before, time to move on.
@lenard, @Tihiy: You can see how long each add-on takes to start in the "Load Time" column of Manage Addons. While we’ve previously written about how to build faster add-ons () the unfortunate reality is that many ISVs don’t follow best-practices.
Microsoft has been in touch with the makers of popular add-ons about their performance, but certainly there’s nothing stopping users from contacting add-on vendors to request that they make faster versions. (When Microsoft says "Hey, ISV, make a faster add-on" the ISV can just yawn and say "yeah, maybe later." When ~users~ contact the ISV and say "Hey, ISV, I’m uninstalling your add-on until you make a faster version" the ISV is more likely to listen.)
@StarBase Computer Tech: If you’re using Win2k8 as a workstation, then you’ll probably want to disable the "Enhanced Security Configuration" feature that is the source of the prompts you see (normally, we don’t advise browsing on Servers, but you’re just using a server OS as a workstation). You can disable the ESC feature on the Win2k8 Server Configuration Manager dialog.
@EricLaw [MSFT] – you are right, users moaning is much more effective than a gentle corporate request. Thus;
Dear Microsoft;
Based on the (incomplete) load time column for Add-ons in IE8 it has become painfully obvious that the addon that is ruining my performance the most is "Microsoft Research" – I personally have no idea what this addon does (AFAIK) I have never used it. Since it is created by the same MFGR that creates the browser it lives in, and is installed by default, I think that it would be highly advantageous for MSFT to either (a) disable/uninstall this addon, and or fix the performance of it so that it loads at least 10x faster.
EricLaw [MSFT] on the IE Blog posted a link to help you fix any performance issues with your addon. Please follow this link for information.
Thank you
Concerned User
@SoftEngi: We are implementing some address bar improvements which will decrease the time it takes to select a URL. Stay tuned for the RC release to see these changes.
Microsoft research add on is a MS Office add-on. When activated you get a references pane for looking up information.
Fairly useless.
It would be relavant to know which Office versions you use to determine which version of that reasearc addon software causes the slow opening of the new tabs.
certainty IE cant replace Firefox or Opera for Professional users. whenever we have to use IE we will open Maxthon !
certainty IE cant sub of Firefox or Opera for Professional users. whenever we have to use IE we will open Maxthon !
I guess the vitriol reached quite a high level here; not all of it is unwarranted, and I’m the first to curse IE for lost productivity.
Still.
IE 8 adds some incredibly useful features, such as (at last) powerful debugging tools (which is quite a leap from no debugging tool at all).
Now, however, several features could be implemented as add-ons, which could be developed outside of the IE team, but which should be kickstarted and easily loaded from IE.
For example, Firefox doesn’t come with any extension by default; it doesn’t even come with Adobe Flash. However, loading Flash is easy (there’s a prompt for it as soon as Flash content is found on page), and the extension browser takes care of the rest.
Why not do the same thing for SVG and canvas? Why not start, say, an open source project for SVG support, based on rsvg, with some proof-of-concept code to allow easy integration with IE, and set as default downloading location for the plugin when an SVG tag is found?
This way, IE can still be distributed using the MS EULA, the plugin can be developed independently, and it reduces the amount of SVG renderers on the market.
That’s already being done somewhat for canvas (but the IE team didn’t go out of its way to allow easy downloading and installation) and would work better with active participation. Making it an independent extension also reduces risks of license cross-contamination – after all, if it works for Office and the OOXML – ODF converter plugin, it should work for IE too.
@hAl: I have 3 WinXP boxes. 2 running IE7, 1 running IE8 Beta2.
On the 2 running IE7, it is Office 2003, on the 1 running IE8, it is Office 2007.
In all 3 cases – disabling the Microsoft Research Toolbar caused a massive performance gain when opening new tabs.
I would HIGHLY recommend to ANYONE running IE to immediately, without hesitation, disable this addon.
If someone can tell me what "good" purpose it serves – then great. Otherwise this is just pure Bloatware [TM] and should be removed.
PS Does anyone know if it can be uninstalled? (safely?). I’m fine if it just lives there disabled, but it would be much cleaner if I just removed it from my system.
Disabling add-ons advice is useless, you can see it’s still loaded in the Manage Add-on window eventhough it says Disabled and IE still slow as hell when launching or opening a new tab.
Run Regedit, go to HKEY_LOCAL_MACHINESOFTWAREMicrosoftInternet ExplorerExtensions and carefully check all the CLSIDs there. Delete only the stuff installed by Office 2003/2007 like Research, Send to OneNote, Groove blah blah.
Another place where Browser Helper Objects(BHO) hides is in HKEY_LOCAL_MACHINESOFTWAREMicrosoftWindowsCurrentVersionExplorerBrowser Helper Objects, go through the CLSIDs there also and delete anything installed by Office 2003/2007 like Research, Send to OneNote, Groove blah blah.
YOU WILL LOSE ALL THE FUNCTIONALITY provided by the crappy Office BHOs, but at least now IE will load faster. Gotta love the Office team and their useless BHO slowing down your load times.
A better solution? Dump IE and switch to Firefox or Chrome.
@EricLaw [MSFT]:
Oh right, i was so dumb so i didn’t scrolled right. Java adds a whopping 0.6 s to tab open time!
Thank you for clarifying this.
@OfficeBHO: I think you misunderstand: Add-ons marked "Disabled" are not loaded by IE and should have no performance impact.
Add-ons like "Send to OneNote" plug into extensibility points where there is no performance impact until you actually use them.
The Groove add-ons aren’t actually intended for use inside IE (they’re meant for the Windows Explorer Shell) and can be safely disabled for a small perf win.
@jason: With the Research sidebar, you can click View > Explorer Bars > Research to get the Office research sidebar. In most cases, it’s not very useful, but if your corporation has customized it with additional search engines, it might have some value.
You can uninstall the Research sidebar using Office Setup if you’d like.
@Mitch74: I think you might be missing the point. No one needs the IE team’s blessing, permission, or investment to make an SVG or Canvas add-on (open-sourced or not) that runs inside IE. The Flash guys don’t have some sort of special deal with IE that lets them install on first-use– The IE extensibility architecture provides all of that, and the Adobe Flash guys simply plug in using the methods described on MSDN.
Statements like "the IE team didn’t go out of its way to allow easy downloading and installation" don’t really make a lot of sense. It’s would be just as true to say "the IE team didn’t go out of its way to allow easy downloading and installation ***of Adobe Flash***" but you’ll notice that the Flash team has no trouble at all getting installed when used. The only reason it’s easier to install Flash than the canvas plugin is that the Flash guys actually follow the instructions on MSDN and signed their code.
reorganizing website in favorite bar and tab is just awkward. Can it slide automatically Just like the chrome tabs or Windows 7 reorganizing taskbar it’s much smoother and easier that way.
In Beta 2, you are unable to right click a favorite (in the undocked favorites menu) and delete the bookmark.
@EricLaw [MSFT] / @Mitch74: I think the trick with the SVG bit, is that yes, any of us could write an extension, but none of us know if Microsoft does intend to write their own native one and if so by when.
If Microsoft had a roadmap that indicated "yes, for IE9 MS plans to ship IE with native SVG support" there would be little point in writing our own plugin (the old Adobe one works fine, for now)
What is frustrating is that we don’t know when IE plans to natively support SVG, or if it will EVER support SVG natively.
Which in turn holds developers back from using SVG because a lot of their end users might not have it.
I dream of the day that I visit the IE Blog and the post topic is:
The ROADMAP for release [X] of Internet Explorer.
That would be the day that developers would be sooooooooooooooooooooo happy.
Because then, and only then, would we have any clue as to Microsoft’s direction with IE.
We know IE8 doesn’t have SVG. If the IE team stays on their stated roadmap of a new IE version every 18-24 months, that means that the earliest that IE would support SVG is Q1 2011, which isn’t exactly right around the corner. Even if IE9 does include SVG, it will be ~2012 before it has >50% marketshare, if history is any guide.
I think the amusing thing here is that if you look closely at the other browsers’ built-in SVG support, it’s actually not very good (). SVG is an incredibly complicated spec, which some of the fanboys here conveniently ignore when they imply that adding SVG would be "easy" or suggest that Microsoft is just "lazy."
I think the interesting question in all of this mess is why Adobe is pulling support for their SVG add-on. THAT is a far more interesting move than the fact that IE still doesn’t support it.
Stephen –
The answer to why Adobe is pulling their SVG plugin (and haven’t actually worked on it in years) is quite simple:
They blew $3B or so a few years back on a pig-in-a-poke named "Flash", and they’ve been struggling ever since to justify that purchase. In a 180 degree turn, it is now absolutely *not* in their best interest to support a graphics format that is:
– Standardized under the W3C patent policy
– A non-binary format that can easily be generated by other vendors and tools that Adobe doesn’t control and can charge obscene amounts of money for.
– Is increasingly getting support in browsers, though we’re about 5 years late here – even amongst non-MS browsers.
– Plays well with other XML tool chains, such as XSLT
All of this makes it easier to have powerful 2D graphics that are not tied to any particular company, whether that be Adobe with Flash or Microsoft with XAML.
Adobe is much too busy trying to shoehorn Flash into application areas it was never intended (‘rich applications’) to spend any energy, money, etc. on anything that could be construed as competition to Flash. Too much financial (and, more importantly, political) capital expended by Adobe execs to do anything else.
Cheers,
– Bill
@Stephen – thats an awesome chart and breakdown much appreciated!
For me my SVGs are fairly basic… vectors, fills, text, and links… the animation stuff is cool, but I don’t need it – Thus the current Firefox/Safari support is plenty. The Elephant is my biggest concern.
Its too bad the Adobe plugin wasn’t open source, that would be a great project to jump on in the absence of native IE support.
Any chance to get dynamic vml creation support back?
This is a major backward compatibility issue, isn’t it ?
Irbabe –
I and others have been poking MS folks about the fact that, if they’re not gonna support SVG, they could at least bug fix VML. I’ll be watching for this in the IE8 RC due in Q1.
In addition to not being able to dynamic creation of VML elements (in IE8 "strict standards" mode), there were other bugs that actually got introduced in VML support for *IE7*. It’d sure be nice to get those fixed as well.
Eric, if you or other MS folks need specifics about these bugs, I can go dig them up. Unfortunately, I no longer have ‘write’ access to the Connect database since it got locked down to ‘beta testers’ or whatever.
Cheers,
– Bill
Obviously the final release for IE8 will still have *some* issues outstanding – many will indeed be discovered after release.
What I’d *really* love to see, in addition to MS fixing the security bugs that are discovered, they also fix rendering bugs with any weekly updates.
The number of outstanding rendering bugs for IE7 is colossal (172 listed on), so PLEASE Microsoft don’t make us wait until yet another version is written before any IE8 rendering bugs are addressed!
@EricLaw: sorry, you obviously took my comment as being reproachful – it actually wasn’t, it was a suggestion.
I was thinking something could be made inside IE that would cumulate Mozilla’s method of downloading, installing and loading Netscape plugins (like Flash is being dealt with currently) which means: prompt, one-click download, license display, install (or update): there, it’s active on the page.
In short, if canvas, SVG, whatever can’t be supported right inside IE’s source code trunk, it could be done in independent code repositories, under a different license (which would allow code reuse, as there are enough free SVG parsers and renderers in existence); I’m just not sure embedding these in ActiveX controls is the best solution, although IE8’s better thread, memory and authorization management may actually make it work.
Helping it along inside IE would be adding the installation source’s security signature part of the IE list of pre-approved certificates (that would remove quite a few security prompts: "IE blocked some content from this page" => allow, page reloads, "Is this plugin’s origin valid?" => yes, "Are you sure you want to install?" => Yes, "the plugin is signed by ImtheMaker" => Allow, then license, install, refresh – note the extras); the IE team would need to negotiate with existing projects to start said IE branches…
This may be the way to provide much asked for functionality, without having to write everything from scratch and without forcing users to jump through hoops.
I would guess that for now SVG support is waiting for proper XML support within IE (SVG being XML, I bet you guys are thinking about using msxml6 or later as the parser, and then extend the VML renderer to make it cover SVG too; you could do that, but then you’d need to fix VML first, judging from other comments).
Which would create a different SVG engine, which you (the IE team) would have to maintain, and get flak on because:
– (external) it wouldn’t match other existing engines;
– (external) bugs wouldn’t be fixed for years;
– (internal) it could be used by MS Office if it ever supported SVG instead of (explicitly deprecated by MS Office team in writing the ECMA OXML proposal) VML (say, in ODF 1.2), but the Silverlight team would have a problem with it.
What say you?
Heck, you could simply have a contract with the Mozilla Foundation, or with Opera Software, or with Apple (easier as it’s already the case for Flash) to develop an IE version of their existing browser’s SVG Tiny modules, with the same terms the MS Office team gave CleverAge – for business peace of mind.
There’s this pesky Acid3 thingy, too.
Dave: rendering bug fixes in weekly updates would be a bad thing, IMO. No other browser vendor does that, either.
It’d be bad because not all users update their system, and you’d then have a wide variety of IE8. It’d be nearly impossible to serve them all working pages…
IE6 and 7 are quite buggy, indeed, but at least they’re all the same everywhere, and you know how they behave.
@Dave
Why would anyone look at IE7 rendering issues fir the IE8 betas ?
Most of those are already solved in IE8.
@hAl re:"Most are already solved in IE8"
You must have a special version of IE8Beta2 that we don’t have. The version I have is still very beta and is no where near ready for release based on lack of fixed for old IE6/IE7 issues as well as IE8 regressions)
I don’t care how long IE8 takes to ship, I just hope that the Q1 – 2009 RC1 is considerably fixed up in terms of rendering, JS bug fixes, and UI improvments/fixes.
If the RC1 is only mildly better than Beta 2 then we are in for lots of trouble.
We have a status for our clients regarding supported browsers and their stability. Currently IE8Beta2 is reading 61% stable and has a 3% recommendation for users to upgrade. We will be updating when the RC1 comes out but until the "recommended" flag hits 85%, none of our users will be upgrading.
we might update before the release if information about fixes to Beta2 are released before the RC.
The next public update of Internet Explorer 8 includes improvements to Compatibility View that help end-users
Paul Cutsinger,Principal Lead Program Manager has posted about IE8 in Windows 7 Beta in the IE Team's
Back in October , Sunava described changes that we made to the XDomainRequest (XDR) object in IE8 between
Durante la presentazione che feci ai MS Days 08 su IE8, ho brevemente accennato ad alcune API che possono
A few weeks back, we announced Compatibility View improvements available in the Release Candidate build | https://blogs.msdn.microsoft.com/ie/2008/11/19/ie8-whats-after-beta-2/ | CC-MAIN-2017-43 | refinedweb | 27,526 | 71.24 |
Authors: Yury Kashnitskiy and Maxim Keremet. Translated and edited by Artem Trunov and Aditya Soni. This material is subject to the terms and conditions of the Creative Commons CC BY-NC-SA 4.0 license. Free use is permitted for any non-commercial purpose.
Prior to working on the assignment, you'd better check out the corresponding course material:
You'll get up to 12 credits for this assignment - the web-form score will be halved and rounded.
Here's how you reply in a thread (press this dialog icon to drill down into a thread):
Please stick to special threads for your questions:
Help each other without sharing correct code and answers. Our TA Maxim @maximkeremet is there to help (only in the mentioned threads, do not write to him directly).
Lastly, you can save useful messages by pinning them, further you can find pinned items on the top, just below the channel name:
import numpy as np
import pandas as pd
# pip install seaborn
import seaborn as sns
import matplotlib.pyplot as plt
Consider the following terms we use:
Reading data into memory and creating a Pandas DataFrame object
(This may take a while, be patient)
We are not going to read in the whole dataset. In order to reduce memory footprint, we instead load only needed columns and cast them to suitable data types.
dtype = {'DayOfWeek': np.uint8, 'DayofMonth': np.uint8, 'Month': np.uint8,
         'Cancelled': np.uint8, 'Year': np.uint16, 'FlightNum': np.uint16,
         'Distance': np.uint16, 'UniqueCarrier': str, 'CancellationCode': str,
         'Origin': str, 'Dest': str, 'ArrDelay': np.float16,
         'DepDelay': np.float16, 'CarrierDelay': np.float16,
         'WeatherDelay': np.float16, 'NASDelay': np.float16,
         'SecurityDelay': np.float16, 'LateAircraftDelay': np.float16,
         'DepTime': np.float16}
%%time
# change the path if needed
path = '../../data/2008.csv.bz2'
flights_df = pd.read_csv(path, usecols=dtype.keys(), dtype=dtype)
CPU times: user 34.6 s, sys: 309 ms, total: 34.9 s Wall time: 33.6 s
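Why bother with the dtype dict at all? A minimal sketch of the memory saved by down-casting (a toy frame, not the flights data; the exact numbers will vary by platform):

```python
import numpy as np
import pandas as pd

# Toy comparison of default vs. down-cast dtypes
n = 100_000
default = pd.DataFrame({'Month': np.arange(n) % 12 + 1})  # int64 on most platforms
compact = default.astype({'Month': np.uint8})

print(default.memory_usage(deep=True).sum(),
      compact.memory_usage(deep=True).sum())
```

Each uint8 value takes 1 byte instead of 8, so the compact frame is several times smaller — the same idea, applied to 7 million rows, is what keeps the flights dataset manageable.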
Check the number of rows and columns and print column names.
print(flights_df.shape)
print(flights_df.columns)
(7009728, 19) Index(['Year', 'Month', 'DayofMonth', 'DayOfWeek', 'DepTime', 'UniqueCarrier', 'FlightNum', 'ArrDelay', 'DepDelay', 'Origin', 'Dest', 'Distance', 'Cancelled', 'CancellationCode', 'CarrierDelay', 'WeatherDelay', 'NASDelay', 'SecurityDelay', 'LateAircraftDelay'], dtype='object')
Print first 5 rows of the dataset.
flights_df.head()
Transpose the frame to see all features at once.
flights_df.head().T
Examine data types of all features and total dataframe size in memory.
flights_df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 7009728 entries, 0 to 7009727 Data columns (total 19 columns): Year uint16 Month uint8 DayofMonth uint8 DayOfWeek uint8 DepTime float16 UniqueCarrier object FlightNum uint16 ArrDelay float16 DepDelay float16 Origin object Dest object Distance uint16 Cancelled uint8 CancellationCode object CarrierDelay float16 WeatherDelay float16 NASDelay float16 SecurityDelay float16 LateAircraftDelay float16 dtypes: float16(8), object(4), uint16(3), uint8(4) memory usage: 387.7+ MB
Get basic statistics of each feature.
flights_df.describe().T
Count unique Carriers and plot their relative share of flights:
flights_df['UniqueCarrier'].nunique()
20
flights_df.groupby('UniqueCarrier').size().plot(kind='bar');
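The bar chart above shows absolute counts; to get the relative share mentioned in the text, value_counts(normalize=True) does the trick. A minimal sketch on made-up carrier codes:

```python
import pandas as pd

# Stand-in for flights_df['UniqueCarrier'] (carrier codes invented)
carriers = pd.Series(['WN', 'WN', 'AA', 'DL', 'WN', 'AA'])

# normalize=True turns counts into fractions that sum to 1
shares = carriers.value_counts(normalize=True)
print(shares)
# shares.plot(kind='bar');  # same picture, but as relative shares
```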
We can also group by category/categories in order to calculate different aggregated statistics.
For example, finding the top-3 flight codes that have the largest total distance traveled in 2008.
flights_df.groupby(['UniqueCarrier', 'FlightNum'])['Distance'].sum()\
    .sort_values(ascending=False).iloc[:3]
UniqueCarrier FlightNum CO 15 1796244.0 14 1796244.0 UA 52 1789722.0 Name: Distance, dtype: float64
Another way:
flights_df.groupby(['UniqueCarrier', 'FlightNum'])\
    .agg({'Distance': [np.mean, np.sum, 'count'], 'Cancelled': np.sum})\
    .sort_values(('Distance', 'sum'), ascending=False)\
    .iloc[0:3]
Number of flights by days of week and months:
pd.crosstab(flights_df.Month, flights_df.DayOfWeek)
It can also be handy to color such tables in order to easily notice outliers:
plt.imshow(pd.crosstab(flights_df.Month, flights_df.DayOfWeek), cmap='seismic', interpolation='none');
Flight distance histogram:
flights_df.hist('Distance', bins=20);
Making a histogram of flight frequency by date.
flights_df['Date'] = pd.to_datetime(flights_df.rename( columns={'DayofMonth': 'Day'})[['Year', 'Month', 'Day']])
num_flights_by_date = flights_df.groupby('Date').size()
num_flights_by_date.plot();
Do you see a weekly pattern above? And below?
num_flights_by_date.rolling(window=7).mean().plot();
We'll need a new column in our dataset - departure hour, let's create it.
As we see,
DepTime is distributed from 1 to 2400 (it is given in the
hhmm format, check the column description again). We'll treat departure hour as
DepTime // 100 (divide by 100 and apply the
floor function). However, now we'll have both hour 0 and hour 24. Hour 24 sounds strange, we'll set it to be 0 instead (a typical imperfectness of real data, however, you can check that it affects only 521 rows, which is sort of not a big deal). So now values of a new column
DepHour will be distributed from 0 to 23. There are some missing values, for now we won't fill in them, just ignore them.
flights_df['DepHour'] = flights_df['DepTime'] // 100 flights_df['DepHour'].replace(to_replace=24, value=0, inplace=True)
flights_df['DepHour'].describe()
count 6873482.0 mean NaN std 0.0 min 0.0 25% 9.0 50% 13.0 75% 17.0 max 23.0 Name: DepHour, dtype: float64
1. How many unique carriers are there in our dataset?
# You code here
2. We have both cancelled and completed flights in the dataset. Check if there are more completed or cancelled flights. What is the difference?
# You code here
3. Find a flight with the longest departure delay and a flight with the longest arrival delay. Do they have the same destination airport, and if yes, what is its code?
# You code here
4. Find the carrier that has the greatest number of cancelled flights.
# You code here
5. Let's examine departure time and consider distribution by hour (column
DepHour that we've created earlier). Which hour has the highest percentage of flights?
# You code here
6. OK, now let's examine cancelled flight distribution by time. Which hour has the least percentage of cancelled flights?
# You code here
7. Is there any hour that didn't have any cancelled flights at all? Check all that apply.
# You code here
8. Find the busiest hour, or in other words, the hour when the number of departed flights reaches its maximum.
Hint: Consider only completed flights.
# You code here
9. Since we know the departure hour, it might be interesting to examine the average delay for corresponding hour. Are there any cases, when the planes on average departed earlier than they should have done? And if yes, at what departure hours did it happen?
Hint: Consider only completed flights.
# You code here
10. Considering only the completed flights by the carrier, that you have found in Question 4, find the distribution of these flights by hour. At what time does the greatest number of its planes depart?
# You code here
11. Find top-10 carriers in terms of the number of completed flights (UniqueCarrier column)?
Which of the listed below is not in your top-10 list?
# You code here
# You code here
13. Which route is the most frequent, in terms of the number of flights?
(Take a look at 'Origin' and 'Dest' features. Consider A->B and B->A directions as different routes)
# You code here
14. Find top-5 delayed routes (count how many times they were delayed on departure). From all flights on these 5 routes, count all flights with weather conditions contributing to a delay.
Hint: consider only positive delays
# You code here
15. Examine the hourly distribution of departure times. Choose all correct statements:
# You code here
16. Show how the number of flights changes through time (on the daily/weekly/monthly basis) and interpret the findings.
Choose all correct statements:
Hint: Look for official meteorological winter months for the Northern Hemisphere.
# You code here
17. Examine the distribution of cancellation reasons with time. Make a bar plot of cancellation reasons aggregated by months.
Choose all correct statements:
# You code here
18. Which month has the greatest number of cancellations due to Carrier?
# You code here
19. Identify the carrier with the greatest number of cancellations due to carrier in the corresponding month from the previous question.
# You code here
20. Examine median arrival and departure delays (in time) by carrier. Which carrier has the lowest median delay time for both arrivals and departures? Leave only non-negative values of delay times ('ArrDelay', 'DepDelay'). (Boxplots can be helpful in this exercise, as well as it might be a good idea to remove outliers in order to build nice graphs. You can exclude delay time values higher than a corresponding .95 percentile).
# You code here
That's it! Now go and do 30 push-ups! :) | https://nbviewer.jupyter.org/github/Yorko/mlcourse.ai/blob/master/jupyter_english/assignments_spring2019/assignment1_USA_flights_EDA.ipynb?flush_cache=true | CC-MAIN-2019-13 | refinedweb | 1,434 | 61.43 |
Hosein Ahmadi
- Total activity 10
- Last activity
- Member since
- Following 0 users
- Followed by 0 users
- Votes 0
- Subscriptions 5
Hosein Ahmadi created a post,
Cannot recognize view after change default view engine!Hi,I replace default razor view engine in my asp.net mvc 4.5 project. ViewLocationFormats is dynamic, means views created based on controller namespace. bute resharper cannot recognize views locati...
Hosein Ahmadi created a post,
Bundles support in ASP.NET MVC 4?Hi,do you have any plan to support autocompletion for bundles in asp.net mvc?Thank you...
Hosein Ahmadi created a post,
Cannot Resolve Area Symbol in Resharper 6.1 RTMHello,i'm using resharper 6.1 RTM, but areas in my application is not recognized, for example, in Url.Action("Index","Home",new{area="Admin"), Admin area name is not recognized.Cannot Resolve Area ...
Hosein Ahmadi created a post,
Type completion not worked properly.Hello resharper team,There is a problem with Intellisense Type Completion, when nothing typed and pressing Ctrl+Alt+Space, No Suggestion apeared and i think this is a bug in this version, because t...
Hosein Ahmadi created a post,
Some bugs in Razor filesHi,I have some problems with razor files and this is about auto suggestions.For example, when i want to use @Url.Action, suggestions in action and controllers not worked properly.@Url.Action("HomAf... | https://resharper-support.jetbrains.com/hc/en-us/profiles/2122983125-Hosein-Ahmadi | CC-MAIN-2020-24 | refinedweb | 228 | 51.85 |
In our last tutorial on REST API Best Practices, we designed and implemented a very simple RESTful mailing list API. However, our API (and the data) was open to the public: anyone could read / add / delete subscribers from our mailing list. In serious projects, we definitely do not want that to happen. In this post, we will discuss how we can use HTTP basic authentication to authenticate our users and secure our APIs.
PS: If you are new to REST APIs, please check out REST APIs: Concepts and Applications to understand the fundamentals.
Set Up the API and a Private Resource
Before we can move on to authentication, we first need to create some resources that we want to secure. For demonstration purposes, we will keep things simple. We will have a very simple endpoint like the one below:
```python
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app, prefix="/api/v1")


class PrivateResource(Resource):
    def get(self):
        return {"meaning_of_life": 42}


api.add_resource(PrivateResource, '/private')

if __name__ == '__main__':
    app.run(debug=True)
```
If we launch the server and access the endpoint, we will get the expected output:
```
$ curl -X GET http://localhost:5000/api/v1/private
{
    "meaning_of_life": 42
}
```
Our API is public for now. Anyone can access it. Let's secure it so it's no longer publicly accessible.
Basic HTTP Authentication
The idea of Basic HTTP Authentication is pretty simple. When we request a protected resource, the server sends back a header that looks something like this: WWW-Authenticate: Basic realm="Authentication Required". When we try to access such resources from a browser, the browser shows us a prompt to enter a username and password. The browser then base64-encodes the credentials and sends them back in an Authorization header. The server parses the data and verifies the user. If the user is legit, the resource is accessible; otherwise, we are not granted permission to access it.
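To make that exchange concrete, here is a small stdlib-only sketch (not part of the original tutorial) of what the client and the server each do with the Authorization header:

```python
import base64


def make_basic_auth_header(username, password):
    # Client side: join credentials as "username:password", then base64-encode
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"


def parse_basic_auth_header(header):
    # Server side: strip the "Basic " scheme prefix and decode the credentials
    scheme, _, token = header.partition(" ")
    if scheme != "Basic":
        raise ValueError("not a Basic auth header")
    username, _, password = base64.b64decode(token).decode("utf-8").partition(":")
    return username, password


header = make_basic_auth_header("admin", "SuperSecretPwd")
print(header)                            # → Basic YWRtaW46U3VwZXJTZWNyZXRQd2Q=
print(parse_basic_auth_header(header))   # → ('admin', 'SuperSecretPwd')
```

Real libraries (browsers, curl, Flask-HTTPAuth) do exactly this behind the scenes, plus some extra validation.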
While using a REST client, we often need to pass the credentials beforehand, when we make the request. For example, if we're using curl, we need to pass the --user option while running the command.
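The same request can also be prepared with Python's standard library. The URL and credentials below are placeholders assuming the Flask dev server from this tutorial is running locally:

```python
import base64
import urllib.request

url = "http://localhost:5000/api/v1/private"  # assumes the local dev server above
token = base64.b64encode(b"admin:SuperSecretPwd").decode("ascii")
req = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

# urllib.request.urlopen(req) would perform the call once the server is running
print(req.get_header("Authorization"))  # → Basic YWRtaW46U3VwZXJTZWNyZXRQd2Q=
```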
Basic HTTP Authentication is a very old method but quite easy to setup. Flask HTTPAuth is a nice extension that would help us with that.
Install Dependencies
Before we can start writing code, we need to have the necessary packages installed. We can install the package using pip:
```
pip install Flask-HTTPAuth
```
Once the package is installed, we can use it to add authentication to our API endpoints.
Require Login
We will import the HTTPBasicAuth class and create a new instance named auth. It's important to note the name because we will be using methods on this auth instance as decorators for various purposes. For example, we will use the @auth.login_required decorator to make sure only logged-in users can access the resource.
In our resource, we added the above-mentioned decorator to our get method. So if anyone wants to GET that resource, they need to log in first. The code looks like this:
```python
from flask import Flask
from flask_restful import Resource, Api
from flask_httpauth import HTTPBasicAuth

app = Flask(__name__)
api = Api(app, prefix="/api/v1")
auth = HTTPBasicAuth()


class PrivateResource(Resource):
    @auth.login_required
    def get(self):
        return {"meaning_of_life": 42}


api.add_resource(PrivateResource, '/private')

if __name__ == '__main__':
    app.run(debug=True)
```
If we try to access the resource without logging in, we will get an error telling us we're not authorized. Let's send a quick request using curl:
```
$ curl -X GET http://localhost:5000/api/v1/private
Unauthorized Access
```
So it worked: our API endpoint is no longer public. We need to log in before we can access it. And from the API developer's perspective, we need to let the users log in before they can access our API. How do we do that?
Handling User Logins
We would generally store our users in a database (a properly secured one, of course). And we would never store user passwords in plain text. But for this tutorial, we will store the user credentials in a dictionary, with the password in plain text.
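For reference, here is a minimal stdlib-only sketch (not from the original post) of how a real application might store a salted hash instead of the plain password and verify it on login:

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None, iterations=100_000):
    # Store the salt and iteration count alongside the derived digest
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest


def check_password(password, salt, digest, iterations=100_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(candidate, digest)


salt, digest = hash_password("SuperSecretPwd")
print(check_password("SuperSecretPwd", salt, digest))  # → True
print(check_password("WrongPwd", salt, digest))        # → False
```

In a Flask project you would more likely reach for werkzeug's password helpers or a dedicated library, but the idea is the same.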
Flask-HTTPAuth will handle the authentication process for us. We just need to tell it how to verify the user with their username and password. The @auth.verify_password decorator can be used to register a function that will receive the username and password. This function will verify whether the credentials are correct and, based on its return value, the HTTP Auth extension will handle the user auth.
In the following code snippet, we register the verify function as the callback for verifying user credentials. When the user passes the credentials, this function will be called. If the function returns True, the user will be accepted as authorized. If it returns False, the user will be rejected. We have kept our data in the USER_DATA dictionary.
```python
USER_DATA = {
    "admin": "SuperSecretPwd"
}


@auth.verify_password
def verify(username, password):
    if not (username and password):
        return False
    return USER_DATA.get(username) == password
```
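Stripped of the decorator, the callback is just a dictionary lookup. The plain-Python version below behaves the same way and shows the three interesting cases:

```python
USER_DATA = {"admin": "SuperSecretPwd"}


def verify(username, password):
    # Reject missing/empty credentials outright
    if not (username and password):
        return False
    # .get() returns None for unknown users, so the comparison fails safely
    return USER_DATA.get(username) == password


print(verify("admin", "SuperSecretPwd"))  # → True
print(verify("admin", "wrong"))           # → False
print(verify("", ""))                     # → False
```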
Once we have added the above code, we can now test if the auth works.
```
$ curl -X GET --user admin:SuperSecretPwd http://localhost:5000/api/v1/private
{
    "meaning_of_life": 42
}
```
But if we omit the auth credentials, does it work?
```
$ curl -X GET http://localhost:5000/api/v1/private
Unauthorized Access
```
It doesn't work without the login. Perfect! We now have a secured API endpoint that uses basic HTTP auth. But in all seriousness, it's not recommended. That's right, do not use it on the public internet. It's perhaps okay to use inside a private network. Why? Please read this thread.
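One reason for that warning: base64 is an encoding, not encryption. Anyone who can see the traffic can recover the credentials instantly, so over plain HTTP the password is effectively sent in the clear:

```python
import base64

# What travels over the wire in the Authorization header
token = base64.b64encode(b"admin:SuperSecretPwd").decode("ascii")
print(token)                             # → YWRtaW46U3VwZXJTZWNyZXRQd2Q=

# Anyone intercepting it can reverse it with a single call
print(base64.b64decode(token).decode())  # → admin:SuperSecretPwd
```

This is why basic auth is only acceptable over HTTPS or inside a trusted network.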
Wrapping Up
With the changes made, here’s the full code for this tutorial:
```python
from flask import Flask
from flask_restful import Resource, Api
from flask_httpauth import HTTPBasicAuth

app = Flask(__name__)
api = Api(app, prefix="/api/v1")
auth = HTTPBasicAuth()

USER_DATA = {
    "admin": "SuperSecretPwd"
}


@auth.verify_password
def verify(username, password):
    if not (username and password):
        return False
    return USER_DATA.get(username) == password


class PrivateResource(Resource):
    @auth.login_required
    def get(self):
        return {"meaning_of_life": 42}


api.add_resource(PrivateResource, '/private')

if __name__ == '__main__':
    app.run(debug=True)
```
As discussed in the last section, it's not recommended to use basic HTTP authentication in open / public systems. However, it is good to know how HTTP basic auth works, and its simplicity makes it easy for beginners to grasp the concept of authentication / API security.
You might be wondering: "If we don't use HTTP auth, then what do we use instead to secure our REST APIs?" In our next tutorial on REST APIs, we will demonstrate how we can use JSON Web Tokens (aka JWT) to secure our APIs. Can't wait that long? Go ahead and read the introduction.
And don’t forget to subscribe to the mailing list so when I write the next post, you get a notification!