In this post I want to talk a bit about Java exceptions in combination with generic type parameters. The post is divided into two sections. The first one is about generic exceptions; the second one is about passing and throwing exceptions as generic type parameters.
Generic classes are not allowed to subclass Throwable
Let's look at the following code:
public class MyException<T> extends RuntimeException { // won't compile
}
This code won't compile because subclasses of Throwable cannot have generic type parameters. RuntimeException subclasses Exception, which in turn is a subclass of Throwable, so the compiler rejects this code.
If you think about it for a bit, this makes sense: I cannot think of a useful way of catching a generic exception.
Let's assume for the moment the exception definition from above would be valid Java code. How would you throw and catch this exception?
public void catchIt() {
    try {
        throwIt();
    } catch (MyException<String> e) {
        ...
    } catch (MyException<Integer> e) {
        ...
    } catch (MyException<?> e) {
        ...
    }
}
In order to make the generic type on the exception class useful, it should be possible to create different catch blocks for different generic types. This, however, is not possible because of type erasure. At runtime all the generic type information is lost, and MyException<String> will just be MyException. So there is no way for the Java runtime to decide which catch block should be executed.
Additionally, it would not be possible to generalize the type parameter in the catch block because generics in Java are invariant. This means that MyException<Integer> cannot be assigned to MyException<Number>, although Integer is a subclass of Number. Therefore, a MyException<Integer> could not be caught by defining a catch block with MyException<Number>.
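The erasure argument can be made concrete at runtime. Since a generic Throwable subclass won't compile, the sketch below uses a hypothetical plain generic class Box as a stand-in: two differently parameterized instances share a single runtime class, so there would be nothing left for a catch block to dispatch on.

```java
// Box is a hypothetical stand-in for the non-compilable MyException<T>.
class Box<T> {
    private final T value;

    Box(T value) {
        this.value = value;
    }

    T get() {
        return value;
    }
}

class ErasureDemo {
    public static void main(String[] args) {
        Box<String> s = new Box<>("hello");
        Box<Integer> i = new Box<>(42);

        // After erasure both objects report the same runtime class, so the
        // type arguments could never be used to select a catch block.
        if (s.getClass() != i.getClass()) {
            throw new IllegalStateException("type arguments survived erasure?");
        }
        System.out.println("Same runtime class: " + s.getClass().getName());
    }
}
```

Running the sketch prints the shared class name; the check in main would throw if the claim were ever false.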
Be aware that this limitation also affects inner classes. The following code will produce a compile error:
public class MyClass<T> {
    private class MyInnerException extends Exception { // won't compile
        ...
    }
}
It is not possible to define a (non-generic) exception class inside a generic class. This is a bit strange because the exception itself has no generic type parameter. I cannot think of a reason why this should cause problems (if you can, please tell me!). Making the inner class static resolves the compile error.
Generic type parameters can be thrown
While exceptions are not allowed to contain generic type parameters, it is perfectly fine to throw generic types (as long as they extend Throwable). The following code compiles fine:
public <T extends Exception> void throwIt(T t) throws T {
    throw t;
}

public void catchIt() {
    try {
        throwIt(new Exception());
    } catch (Exception e) {
        ...
    }
}
However, it is not possible to use generic type parameters in catch blocks. So the next snippet won't compile:
public <T extends Exception> void throwIt(T t) throws T {
    throw t;
}

public <T extends Exception> void catchIt(T t) {
    try {
        throwIt(t); // fine
    } catch (T e) { // compile error
        ...
    }
}
The reason for this is again type erasure.
We are all aware of the fact that one of the biggest advantages brought about by the Internet is the ability of instant communication, no matter where people may be located. On this front there have certainly been many developments. Starting with instant messaging and video calling down to one of the oldest forms, i.e. email, we have seen a great amount of progress being made. However, it is still true that the most formal of all electronic forms of communication happens to be email. Now, while there are plenty of email services, both paid and free, one can always make use of Java, which is a powerful programming language, to build email applications that are platform- as well as protocol-independent. This is exactly what we shall learn in the coming sections of this article.
What is the JavaMail API?
An API, or Application Programming Interface, is basically a set of tools and rules meant for the development of software applications.
In a similar way, the JavaMail API is one that offers email sending and fetching abilities in Java-based applications. The framework it provides is platform and protocol independent for building messaging and mail applications. It basically consists of abstract classes, which define objects for a mail system. The API, as such, happens to be an optional package, allowing for composing, sending and reading emails.
What you need to know
JavaMail API, being based on the Java programming language, is easy to use if you are competent with the basics of Java. This will also make it easier for you to learn how the API framework can be used fully.
Setting up the environment
When you are looking to send emails with a Java application, the process is fairly simple. However, almost all applications require some kind of environment to run on, and so is the case here. You will need to install the JavaMail API and the Java Activation Framework on your machine first. However, if you are using Java SE 6 or higher, the Java Activation Framework is not required.
Both of these, though, can be downloaded from Java’s official website. After downloading them, you will need to unzip them. The top-level directories will have several jar files belonging to these applications. You must add the activation.jar and mail.jar files to your CLASSPATH.
The SMTP server
In order to be able to send emails, we make use of the SMTP (Simple Mail Transfer Protocol) server. This can be done in a few ways.
- You can install and use a mail server such as Apache James or Postfix.
- You can also choose to make use of the SMTP server provided by the host for free.
- Another option is to make use of the SMTP server provided by companies like Yahoo or Gmail.
Here, we have chosen to go with the second option. We make use of the JangoSMTP server to send emails. Visit the site, create an account and then configure your email address.
Figure 1. Jango SMTP settings
Now that we are aware of the few basic prerequisites, we shall learn how to send a simple email using JavaMail API.
Sending simple emails
The sending process of a simple email comprises a few basic steps, which are mentioned below.
- The first step is to get a session.
- Then, go on to create a default MimeMessage object, and set the details, like Subject, From, and To.
- Then, the actual message is set, like – message.setText(“write here”);
- The final step is to use the Transport object to send the message.
Creating the Java Class
We shall create a Java class file named Emailsend, with the contents given below. Let us have a look at different parts of the code.
Listing 1. Showing Emailsend.java
The code below imports all the various packages that we shall require:
package javamaildemo;

import java.util.Properties;

import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
Now we have the main class to execute. Here we set the properties, authentication details, session object, etc.
public class Emailsend {

    public static void main(String[] args) {

        // The email ID of the recipient is to be mentioned here.
        String to = "emailrecipient@yahoomail.com";

        // The email ID of the sender is to be mentioned here.
        String from = "emailsender@outlook.com";

        final String username = "keithsoloman"; // change as per your setting
        final String password = "**********";   // change as per your setting

        // Taking into consideration that you are sending the email through relay.jangosmtp.net
        String host = "relay.jangosmtp.net";

        Properties props = new Properties();
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.starttls.enable", "true");
        props.put("mail.smtp.host", host);
        props.put("mail.smtp.port", "25");

        // Getting the Session object.
        Session sesion = Session.getInstance(props,
                new javax.mail.Authenticator() {
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication(username, password);
                    }
                });

        try {
            // Creation of a default MimeMessage object.
            Message msge = new MimeMessage(sesion);

            // Setting the From: header field.
            msge.setFrom(new InternetAddress(from));

            // Setting the To: header field.
            msge.setRecipients(Message.RecipientType.TO, InternetAddress.parse(to));

            // Setting the Subject: header field.
            msge.setSubject("Testing of Subject for JavaMail");

            // Now, we set the actual message to be sent.
            msge.setText("Hello! This is a sample testing message to check "
                    + "email sending using JavaMail API");

            // Sending of the message.
            Transport.send(msge);

            System.out.println("The message has been successfully sent....");

        } catch (MessagingException excp) {
            throw new RuntimeException(excp);
        }
    }
}
Since we are making use of the SMTP server offered by JangoSMTP, which is of course the host here, the username and password need to be authenticated. Authentication of the password is done by the javax.mail.PasswordAuthentication class.
Compilation and running
Just like any other program, once the code is ready, we should look to compile and run it to see where it stands. Here, we have saved the file Emailsend.java in the directory - /home/keithsolomon/JavaMailAPITEST. Since the jar files, javax.mail.jar and activation.jar are already in the CLASSPATH, we do not need to include them again. Now, we use the following command for the compilation process.
javac -cp /home/keithsolomon/activation.jar:/home/keithsolomon/javax.mail.jar: Emailsend.java
After the compilation process, we then run the code.
java -cp /home/keithsolomon/activation.jar:/home/keithsolomon/javax.mail.jar: Emailsend
On executing the above command, you will see a message on your console, like
The message has been successfully sent....
Verifying the output
Now, since we have sent the email message to the destination email, we can log into it and then check for the message in the inbox.
Receiving emails
We have just learnt how to send emails using the JavaMail API. Now, we shall delve into how you can receive emails via this application framework.
Here, we shall write a Java class file named Emailfetch, which is capable of reading several kinds of emails, like –
- Basic simple emails.
- And, emails that have an inline image.
The steps that we take to implement the receiving of emails are –
- Getting the Session object is the very first step, just like while sending.
- Creation of a POP3 store and then a connection to it.
- Creation of a Folder object and opening of the same in the mailbox.
- Retrieval of messages.
- Closing the folder and the store objects.
Creating the Java Class
We basically create a Java class file by the name of Emailfetch, with the contents given below.
Listing 2. Example showing Emailfetch.java
package javamaildemo;

import java.io.BufferedOutputStream;
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.Date;
import java.util.Properties;

import javax.mail.Address;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Multipart;
import javax.mail.NoSuchProviderException;
import javax.mail.Part;
import javax.mail.Session;
import javax.mail.Store;
Again here, we are importing a large number of packages and classes that we shall need in this part of the program.
public class Emailfetch {

    public static void fetch(String hostpop3, String storagetype, String usr,
            String passwrd) {
        try {
            // creating the properties field
            Properties propts = new Properties();

            propts.put("mail.store.protocol", "pop3");
            propts.put("mail.pop3.host", hostpop3);
            propts.put("mail.pop3.port", "995");
            propts.put("mail.pop3.starttls.enable", "true");

            Session emailsesion = Session.getDefaultInstance(propts);
            emailsesion.setDebug(true);

            Store storage = emailsesion.getStore("pop3s");
            storage.connect(hostpop3, usr, passwrd);

            Folder mailfolder = storage.getFolder("INBOX");
            mailfolder.open(Folder.READ_ONLY);

            BufferedReader brread = new BufferedReader(new InputStreamReader(System.in));

            Message[] msges = mailfolder.getMessages();
            System.out.println("msges.length is - " + msges.length);

            for (int a = 0; a < msges.length; a++) {
                Message msge = msges[a];
                System.out.println("*******************************");
                writeportion(msge);
                String lines = brread.readLine();
                if ("YES".equals(lines)) {
                    msge.writeTo(System.out);
                } else if ("QUIT".equals(lines)) {
                    break;
                }
            }

            mailfolder.close(false);
            storage.close();
        }
Now, we just close the storage and the folder objects after the retrieval process.
        catch (NoSuchProviderException excp) {
            excp.printStackTrace();
        } catch (MessagingException excp) {
            excp.printStackTrace();
        } catch (IOException excp) {
            excp.printStackTrace();
        } catch (Exception excp) {
            excp.printStackTrace();
        }
    }

    public static void main(String[] args) {
        String hosting = "pop.outlook.com";          // change as per need
        String mailStorage = "pop3";
        String usrname = "yournamehere@outlook.com"; // change as per need
        String passwrd = "***********";              // change as per need
After we have mentioned the host, email address and password, we call the fetch method, which retrieves the messages. The writeportion method then checks the content type of each message and processes and fetches its content.
        fetch(hosting, mailStorage, usrname, passwrd);
    }

    public static void writeportion(Part p) throws Exception {
        if (p instanceof Message) {
            // Calling the method writeenvelope
            writeenvelope((Message) p);
        }

        System.out.println("******************************");
        System.out.println("TYPE of CONTENT: " + p.getContentType());
Here, we are checking if the content is in plain text.
        if (p.isMimeType("text/plain")) {
            System.out.println("This is in simple plain text");
            System.out.println("*************************");
            System.out.println((String) p.getContent());
        }
Here, we are checking if the email has any attachments (a multipart message) and then fetching them.
        else if (p.isMimeType("multipart/*")) {
            System.out.println("This has a multipart");
            System.out.println("*************************");
            Multipart mpart = (Multipart) p.getContent();
            int counter = mpart.getCount();
            for (int b = 0; b < counter; b++)
                writeportion(mpart.getBodyPart(b));
        }
In this part, we check if the content happens to be a nested message.
        else if (p.isMimeType("message/rfc822")) {
            System.out.println("This is a nested message!");
            System.out.println("*************************");
            writeportion((Part) p.getContent());
        }
We also check for any inline image in the content.
        else if (p.isMimeType("image/jpeg")) {
            System.out.println("*************************> image/jpeg");
            Object obj = p.getContent();
            InputStream xyz = (InputStream) obj;

            // Construction of the byte array required
            System.out.println("xyz.length = " + xyz.available());
            byte[] barray = new byte[xyz.available()];
            xyz.read(barray);

            FileOutputStream fil = new FileOutputStream("/tmp/image.jpg");
            fil.write(barray);
            fil.close();
        } else if (p.getContentType().contains("image/")) {
            System.out.println("Type of content " + p.getContentType());
            File fi = new File("image" + new Date().getTime() + ".jpg");
            DataOutputStream out = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream(fi)));
            com.sun.mail.util.BASE64DecoderStream test =
                    (com.sun.mail.util.BASE64DecoderStream) p.getContent();
            byte[] buffr = new byte[1024];
            int bytesread;
            while ((bytesread = test.read(buffr)) != -1) {
                out.write(buffr, 0, bytesread);
            }
            out.close();
        } else {
            Object obj = p.getContent();
            if (obj instanceof String) {
                System.out.println("This is a string.");
                System.out.println("****************************");
                System.out.println((String) obj);
            } else if (obj instanceof InputStream) {
                System.out.println("This is just an input stream.");
                System.out.println("****************************");
                InputStream is = (InputStream) obj;
                int c1;
                while ((c1 = is.read()) != -1)
                    System.out.write(c1);
            } else {
                System.out.println("This is an unknown type.");
                System.out.println("****************************");
                System.out.println(obj.toString());
            }
        }
    }
This method will print FROM, TO and SUBJECT of the email messages.
    public static void writeenvelope(Message m1) throws Exception {
        System.out.println("This is the message envelope");
        System.out.println("***************************");
        Address[] add;

        // FROM
        if ((add = m1.getFrom()) != null) {
            for (int i = 0; i < add.length; i++)
                System.out.println("FROM: " + add[i].toString());
        }

        // TO
        if ((add = m1.getRecipients(Message.RecipientType.TO)) != null) {
            for (int i = 0; i < add.length; i++)
                System.out.println("TO: " + add[i].toString());
        }

        // SUBJECT
        if (m1.getSubject() != null)
            System.out.println("SUBJECT: " + m1.getSubject());
    }
}
Compiling and running
Once you compile and run the above code, using the same process as we did with the first example, you shall get the output.
Conclusion
The above examples show how to send and receive emails using the JavaMail API. The process can be adapted when you are looking to develop mail and messaging applications.
Welcome to part one of our series on Azure Storage. Stay tuned for the second part.
Microsoft Azure Storage is a cloud-based storage offering that provides multiple storage solutions for organizations. In addition to a massively scalable object store for data objects, Azure Storage also offers a cloud-based file-sharing solution, a messaging store, NoSQL store, and disk storage for virtual machines.
All storage offerings available through Azure Storage are designed to be highly available and redundant. The underlying hardware that supports Azure Storage provides redundancy in the event of transient hardware failures, while many different replication offerings provide protection against local and regional outages.
Understanding that organizations appreciate and require secure data, Microsoft has designed Azure Storage so that all data written to it is encrypted when at rest and while in transit. With fine-grained controls available, organizations can manage who has access to what data as well.
Because it’s a managed service, maintenance of Azure Storage services, regular updates to the service, and issue resolution are all handled by Microsoft. This ensures that organizations can rid themselves of day-to-day care and feeding of the underlying hardware and services that support data storage.
The flexibility of Azure Storage ensures data stored in Azure Storage is accessible from anywhere in the world, via several methods and languages. Data hosted in Azure Storage can be accessed via HTTP or HTTPS, as well as via .NET, Java, Node.js, Python, PHP, and more. Data is also accessible via a stable REST API. Azure Storage also supports scripted data access via Azure PowerShell and Azure CLI. Data is also accessible visually via the Azure Storage Explorer and the Azure Portal.
Azure Storage includes several storage services. They include:
- Azure Blobs: a massively scalable object store for text and binary data.
- Azure Files: managed file shares in the cloud.
- Azure Queues: a messaging store.
- Azure Tables: a NoSQL store.
- Azure Disks: block-level storage volumes for Azure VMs.
All services available from Azure Storage are accessed through a storage account.
The Azure Blob Storage offering is built for massive object storage in the cloud. It is optimized for storing large amounts of unstructured data, which by definition does not adhere to any specific data model. Such data might include text data and binary data.
Typical uses for blob storage might include things like image serving or audio/video streaming, as well as log file storage. Other uses might include storing files for distributed access and storing backup data, archive data, and storing data for analysis later.
Blob storage consists of three key resources: storage account, containers within the storage account, and blobs that are hosted within the containers.
Storage Accounts
Each storage account provides a unique namespace within Azure for hosting data. All objects that are stored in Azure Storage feature an address that includes the unique storage account name.
Containers
Containers within an Azure storage account are used to organize blobs in much the same way that directories organize files within a traditional file system. Storage accounts can contain an unlimited number of containers, which in turn can store an unlimited number of blobs.
Blobs
Blobs come in a few different types. They include:
Block blobs consist of blocks of data that can be individually managed, and are used to store up to about 4.7TB of text and binary data. Append blobs are similar to block blobs since they are made up of blocks of data. However, append blobs are optimized for append operations, making them perfect for uses such as data logging from virtual machines. Page blobs can be used to store random access files that are up to 8TB in size. Virtual hard drives (VHDs) that serve as disks for virtual machines are stored in page blobs.
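The storage account / container / blob hierarchy described above maps directly onto a blob's address. A minimal sketch (the account, container, and blob names here are hypothetical):

```python
# Build the address of a blob from the three-level hierarchy:
# storage account -> container -> blob.
account = "mystorageacct"  # hypothetical, globally unique account name
container = "images"       # hypothetical container within the account
blob = "logo.png"          # hypothetical blob within the container

# Blob service endpoints follow the <account>.blob.core.windows.net pattern.
url = f"https://{account}.blob.core.windows.net/{container}/{blob}"
print(url)
```

The printed address is what HTTP/HTTPS clients, or the SDKs listed earlier, would use to reach the object.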
Azure VMs, just like any other computer, use disks as a place to store things like the OS, data, applications, and more. Every Azure VM has at least two disks attached, which include an OS disk and a temporary disk. Both disks are virtual hard disks, or VHDs, that are stored in an Azure storage account. In addition to an OS disk and a temporary disk, a virtual machine can also have one or more data disks attached as well. Data disks are also stored as VHDs.
Operating System Disk
The OS disk on every Azure VM is created from either a marketplace image or a custom image. It’s labeled as the “C: drive” by default and is registered as a SATA drive. The maximum size of the OS drive is 2TB.
Temporary Disk
The temporary disk that’s attached to a VM is used for short-term storage for apps and processes. It’s intended for storing things like page files and swap files. Temporary disks should not be used to store data that must be kept because data stored on temporary disks may be lost during maintenance events and whenever a VM is redeployed.
The temporary disk is labeled as “D: drive” by default.
Data Disk
Data disks are VHDs that are attached to virtual machines. They are used to store application data and other data that needs to be kept. Unlike OS disks, which are registered as SATA disks, data disks are registered as SCSI drives and labeled with a drive letter that you choose. Data disks have a maximum capacity of 4095 GB (or 4TB), while managed disks support a maximum capacity of 32,767 GB (32TB). The chosen size of a VM determines how many data disks can be attached to it. The size of the VM also determines the type of storage that can be used to host the disks.
Whether it’s a VHD that’s been uploaded or an empty VHD created in Azure, a data disk can be added to a VM at any time, by attaching it to the VM. When a disk is attached to a VM, the VM places a “lease” on the associated VHD file so that the VHD can’t be deleted while it’s attached to the VM.
A Note About VHDs:
VHDs that are used in Azure are .vhd files that are stored as page blobs in either a standard or premium storage account in Azure. It’s also important to note that Azure only supports the fixed disk VHD format.
When creating a disk in Azure, you have three performance tiers to choose from: Premium SSD Disks, Standard SSD, and Standard HDD Disks. In addition, there are two different types of disks that are offered, unmanaged and managed.
Performance Tiers
Standard HDD disks, as the name implies, are backed by mechanical HDDs. This tier offers cost-effective storage that can either be replicated locally within a single data center, or it can be geo-redundant across primary and secondary data centers.
Standard SSD disks are offered to support similar workloads as Standard HDD disks. However, Standard SSD disks provide consistent performance and better reliability than HDD. Standard SSD disks feature elements of both Premium SSD disks and of Standard HDD disks in order to provide an affordable storage solution that’s suitable for applications that do not require high disk IOPS (e.g., web servers). Microsoft recommends Standard SSD disks for most workloads.
Premium SSD disks are backed by SSDs. As such, they are a high-performance, low-latency disk option for virtual machines that run heavy I/O workloads (e.g., databases).
Disk Types
The “older” or “traditional” type of disk used by VMs in Azure is the unmanaged Disk. When using unmanaged disks, you’ll need to create and manage your own storage account, which will host your unmanaged disks.
If you choose the “managed disk” option when deploying a virtual machine, the creation and management of the storage account that hosts the managed disks is handled by Azure. All you need to do is specify the size of the managed disk and the performance tier (Standard or Premium), and Azure will create and manage the disk for you.
Microsoft recommends that managed disks be used for all new virtual machines and that any existing unmanaged disks be converted to managed disks. This should tell you all you need to know about the future of unmanaged disks.
Azure Files is a fully-managed file share offering hosted in the cloud. It provides hosting of file shares in Azure Storage that are accessible via the industry standard Server Message Block (SMB) protocol. As with traditional file shares, Azure file shares are concurrently mountable by cloud and on-premises machines, including Windows, Linux, and macOS. Azure file shares can also be used with Azure File Sync and cached on Windows Servers to provide quick access to data.
Azure file shares can be used to replace or supplement traditional on-premises file servers or even NAS devices. Because Azure file shares can be replicated to on-premises and cloud-based Windows servers via Azure File Sync, they are great for providing a distributed data cache for remote offices. When moving applications to the cloud, Azure file shares can facilitate the “lift and shift” approach because data that applications expect to reside on a file share can sit right in Azure files, in the cloud, close to the applications themselves.
Azure file shares are fully managed by Azure and they can be created, mounted, and managed via PowerShell and Azure CLI, meaning you can script solutions that access data stored in Azure file shares. Because Azure Files was built for resiliency, they are always available and you need not worry about downtime.
Stay tuned for the second part of this blog, where we explore other forms of Azure Storage. Learn more about designing and implementing an Azure storage strategy and leverage our multi-cloud learning platform to enhance your knowledge and practical experience in a cloud‑first environment.
Last edited: January 26th 2018
Splines are a type of data interpolation: a method of (re)constructing a function between a given set of data points. Interpolation can be used to represent complicated and computationally demanding functions by simpler ones, e.g. polynomials. Then, using a table of a few function evaluations, one can easily approximate the true function with high accuracy.
In spline interpolation the data is interpolated by several low-degree polynomials. This differs from polynomial interpolation, in which the data is interpolated by a single polynomial of a high order. For a general discussion on polynomial interpolation we refer you to our notebook on polynomial interpolation. The simplest example of spline interpolation is linear splines, where the data points are simply connected by straight lines. We are going to discuss interpolation by cubic splines, which interpolate the data using cubic polynomials with continuous first and second derivatives. We will create an algorithm and some functions for computing a cubic spline.
We start by importing needed packages and setting common figure parameters.
import numpy as np
import matplotlib.pyplot as plt
import scipy.sparse as sp
import scipy.linalg as la
%matplotlib inline

# Set some figure parameters
newparams = {'figure.figsize': (15, 7), 'axes.grid': False,
             'lines.markersize': 10, 'lines.linewidth': 2,
             'font.size': 15, 'mathtext.fontset': 'stix',
             'font.family': 'STIXGeneral'}
plt.rcParams.update(newparams)
Assume that we are given four data points: $\{(0, 0), (1, -1), (2, 2), (3, 0)\}$. The cubic spline interpolating these points is $$ S(x) = \begin{cases} -\frac{12}{5}x + \frac{7}{5}x^3, & 0\leq x < 1,\\ -1 + \frac{9}{5}(x - 1) + \frac{21}{5}(x-1)^2 - 3(x-1)^3, & 1 \leq x < 2,\\ 2 + \frac{6}{5}(x - 2) -\frac{24}{5}(x-2)^2 + \frac{8}{5}(x-2)^3, & 2 \leq x < 3.\\ \end{cases} $$ Let's plot the data points, linear spline and cubic spline!
n = 200
x1 = np.linspace(0, 1, n)
x2 = np.linspace(1, 2, n)
x3 = np.linspace(2, 3, n)

# Cubic spline
S1 = -12/5*x1 + 7/5*x1**3
S2 = -1 + 9/5*(x2 - 1) + 21/5*(x2 - 1)**2 - 3*(x2 - 1)**3
S3 = 2 + 6/5*(x3 - 2) - 24/5*(x3 - 2)**2 + 8/5*(x3 - 2)**3
plt.plot(np.concatenate([x1, x2, x3]), np.concatenate([S1, S2, S3]), label="Cubic spline")

# Linear spline
plt.plot([0, 1, 2, 3], [0, -1, 2, 0], "--", label="Linear spline")

# Data points
plt.plot([0, 1, 2, 3], [0, -1, 2, 0], "o", label="Data points")

plt.legend()
plt.show()
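As a cross-check, we can compare the hand-written spline above with SciPy's CubicSpline. Assuming the endpoint condition used in the example is the natural one, $S''=0$ at both ends (one of the exercises below asks you to identify this condition), the coefficients agree exactly:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = [0, 1, 2, 3]
y = [0, -1, 2, 0]

# Natural boundary conditions: S''(x_1) = S''(x_n) = 0.
cs = CubicSpline(x, y, bc_type='natural')

# SciPy stores cs.c[k, i] as the coefficient of (x - x_i)**(3 - k),
# so each column i reads [d_i, c_i, b_i, y_i] in the notation used here.
expected = np.array([[  7/5,  -3,     8/5],   # d_i
                     [  0,    21/5, -24/5],   # c_i
                     [-12/5,   9/5,   6/5],   # b_i
                     [  0,    -1,     2  ]])  # y_i
assert np.allclose(cs.c, expected)
print("SciPy's natural spline matches the hand-computed coefficients.")
```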
Exercise: Check that the cubic spline above has continuous first and second derivatives.
A general cubic spline $S(x)$ interpolating the $n$ data points $\{(x_1,y_1), (x_2, y_2),..., (x_n, y_n)\}$ can be written as\begin{equation} S(x) = \begin{cases} S_1(x) &= y_1 + b_1(x - x_1) + c_1(x-x_1)^2 + d_1(x-x_1)^3, & \text{for } x\in[x_1, x_2],\\ S_2(x) &= y_2 + b_2(x - x_2) + c_2(x-x_2)^2 + d_2(x-x_2)^3, & \text{for } x\in[x_2, x_3],\\ &\vdots&\\ S_{n-1} (x) &= y_{n-1} + b_{n-1}(x - x_{n-1}) + c_{n-1}(x-x_{n-1})^2 + d_{n-1}(x-x_{n-1})^3, & \text{for } x\in[x_{n-1}, x_n],\\ \end{cases} \label{eq:spline} \end{equation}
for some constants $b_i, c_i, d_i$, $i=1, ..., n-1$. As mentioned in the introduction, we demand that the spline is continuous and has continuous first and second derivatives. This gives the following properties: [1]
1. $S_i(x_i)=y_i$ and $S_i(x_{i+1})=y_{i+1}$ for $i=1,...,n-1$,
2. $S_{i-1}'(x_i)=S_{i}'(x_i)$ for $i=2,...,n-1$,
3. $S_{i-1}''(x_i)=S_{i}''(x_i)$ for $i=2,...,n-1$.
The three properties make sure that the spline is continuous and smooth.
Note that the total number of conditions imposed by the properties above is $3n-5$, while the total number of coefficients $b_i, c_i, d_i$ we need to compute is $3(n-1) = 3n-3$. Hence, we need two additional conditions to make the spline $S(x)$ unique. This is achieved through endpoint conditions.
There are several choices of endpoint conditions (see e.g. [1]). We will be considering natural cubic splines,
4a. $S''_1(x_1)= 0$ and $S''_{n-1}(x_n)=0$,
and not-a-knot cubic splines
4b. $S_1'''(x_2)=S_2'''(x_2), \; S_{n-2}'''(x_{n-1})=S_{n-1}'''(x_{n-1})$.
Exercise: Which endpoint condition is used in the example above?
From property 1, 2 and 3 we obtain\begin{equation} y_{i+1} = y_{i}+b_{i}(x_{i+1}-x_i) + c_i(x_{i+1}-x_i)^2 + d_i(x_{i+1}-x_i)^3, \quad i=1,...,n-1, \label{eq:prop1} \end{equation}\begin{equation} 0 = b_i + 2c_i(x_{i+1}-x_i) + 3d_i(x_{i+1}-x_i)^2-b_{i+1}, \quad i=1,...,n-2, \label{eq:prop2} \end{equation}
and\begin{equation} 0 = c_i+3d_i(x_{i+1}-x_i)-c_{i+1}, \quad i=1,...,n-2, \label{eq:prop3} \end{equation}
respectively. The derivation is straightforward, and is left as an exercise for the reader. If we solve these equations, we obtain the constants $b_i, c_i, d_i$ and thus the cubic spline. To simplify the notation, we define $\Delta x_i=x_{i+1}-x_i$ and $\Delta y_i = y_{i+1}-y_i$. By using equations \eqref{eq:prop1} and \eqref{eq:prop3} we obtain the following expressions for $b_i$ and $d_i$ in terms of the $c$-coefficients:\begin{align} d_i &= \frac{c_{i+1}-c_i}{3\Delta x_i}, \label{eq:d}\\ b_i &= \frac{\Delta y_i}{\Delta x_i}-\frac{1}{3}\Delta x_i (2 c_i + c_{i+1}).\label{eq:b} \end{align}
If we insert this into equation \eqref{eq:prop2} we obtain$$\Delta x_ic_i + 2(\Delta x_i + \Delta x_{i+1})c_{i+1}+\Delta x_{i+1}c_{i+2} = 3\left(\frac{\Delta y_{i+1}}{\Delta x_{i+1}}-\frac{\Delta y_{i}}{\Delta x_{i}}\right),$$
which is $n-2$ equations for $c_1,..., c_n$. The natural spline endpoint condition gives $c_1=c_n=0$. We can write this as the matrix equation$$ \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots & 0 & 0 \\ \Delta x_1 & 2\Delta x_1 + 2\Delta x_2 & \Delta x_2 & 0 & \cdots&0&0\\ 0 & \Delta x_2 & 2\Delta x_2 + 2\Delta x_3 & \Delta x_3 & \cdots&0&0\\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & 0 & \Delta x_{n-2} & 2\Delta x_{n-2} + 2\Delta x_{n-1} & \Delta x_{n-1}\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ \\ \vdots \\ \\ c_n \end{pmatrix} = \begin{pmatrix} 0 \\ 3\left(\frac{\Delta y_{2}}{\Delta x_2}-\frac{\Delta y_1}{\Delta x_1}\right) \\ \vdots \\ 3\left(\frac{\Delta y_{n-1}}{\Delta x_{n-1}}-\frac{\Delta y_{n-2}}{\Delta x_{n-2}}\right) \\ 0 \end{pmatrix}. $$
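To make this concrete, here is the $4\times 4$ system for the four data points $\{(0, 0), (1, -1), (2, 2), (3, 0)\}$ from the example at the top, solved with a dense solver for clarity. The resulting coefficients recover exactly the spline we started with:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, -1.0, 2.0, 0.0])
dx, dy = np.diff(x), np.diff(y)

# Natural-spline system: first and last rows pin c_1 = c_n = 0
A = np.array([
    [1.0,   0.0,               0.0,               0.0],
    [dx[0], 2*(dx[0] + dx[1]), dx[1],             0.0],
    [0.0,   dx[1],             2*(dx[1] + dx[2]), dx[2]],
    [0.0,   0.0,               0.0,               1.0],
])
rhs = np.array([0.0,
                3*(dy[1]/dx[1] - dy[0]/dx[0]),
                3*(dy[2]/dx[2] - dy[1]/dx[1]),
                0.0])

c = np.linalg.solve(A, rhs)           # [0, 21/5, -24/5, 0]
d = np.diff(c)/(3*dx)                 # [7/5, -3, 8/5]
b = dy/dx - dx*(2*c[:-1] + c[1:])/3   # [-12/5, 9/5, 6/5]
print(b, c, d)
```

Comparing with the spline in the example: $b_1=-\tfrac{12}{5}$, $d_1=\tfrac{7}{5}$, $c_2=\tfrac{21}{5}$, and so on.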
The algorithm for finding the spline is now quite apparent. We start by constructing the matrix equation, then solve it to find $c_i$ and in turn compute $b_i$ and $d_i$ via the equations \eqref{eq:d} and \eqref{eq:b}.
The first and last rows of the matrix are altered by the not-a-knot endpoint conditions. Note that property 4b implies $d_1=d_2$ and $d_{n-2}=d_{n-1}$. If we insert this into equation \eqref{eq:d} for $d_i$, we obtain $$\Delta x_2 c_1 -(\Delta x_1 + \Delta x_2) c_2 + \Delta x_1 c_3 = 0,$$ $$\Delta x_{n-1} c_{n-2} -(\Delta x_{n-2} + \Delta x_{n-1})c_{n-1}+\Delta x_{n-2}c_n = 0.$$ The first row with the not-a-knot end conditions becomes $$(\Delta x_2\;\; -(\Delta x_1 + \Delta x_2)\;\; \Delta x_1\;\; 0\;\; 0\;\; ...).$$ Likewise, the last row becomes $$(0\;\; ... \;\; 0 \;\; \Delta x_{n-1}\;\; -(\Delta x_{n-2} + \Delta x_{n-1})\;\; \Delta x_{n-2}).$$
We now proceed to create a function that computes the cubic spline that interpolates the points $\{(x_1,y_1), (x_2, y_2),..., (x_n, y_n)\}$. Note that the matrix equation above is tridiagonal, and it can therefore be stored as a $3\times n$ array and solved efficiently by using scipy.linalg.solve_banded. In the not-a-knot case the matrix becomes a banded matrix with two upper and two lower diagonals.
Exercise: Derive equations \eqref{eq:prop1}, \eqref{eq:prop2} and \eqref{eq:prop3} from the properties 1-3.
def cubic_spline_coeffs(x, y, endpoint="natural"):
    """
    Computes the coefficients of the cubic spline that interpolates the
    points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n).

    Parameters:
        x: array_like, shape (n>2,). x-values of the points being interpolated.
           Values must be real and in strictly increasing order.
        y: array_like, shape (n>2,). y-values of the points being interpolated.
           Values must be real.
        endpoint: "natural" or "not-a-knot". Endpoint condition.

    Returns:
        b, c, d: arrays of shape (n-1,), (n,) and (n-1,) holding the
        coefficients of the spline.
    """
    x = np.asarray(x)
    y = np.asarray(y)
    n = len(x)
    dx = np.diff(x)
    dy = np.diff(y)

    # Find the vector for the right hand side
    rhs = np.zeros(n)
    rhs[1:-1] = 3*(dy[1:]/dx[1:] - dy[:-1]/dx[:-1])

    # Compute the matrix and store it in diagonal ordered (banded) form
    if endpoint == "natural":
        matrix = np.zeros((3, n))
        bands = (1, 1)
        matrix[1, 1:-1] = 2*(dx[:-1] + dx[1:])  # Diagonal
        matrix[1, 0] = matrix[1, -1] = 1
        matrix[0, 2:] = dx[1:]    # Upper diagonal
        matrix[2, :-2] = dx[:-1]  # Lower diagonal
    elif endpoint == "not-a-knot":
        matrix = np.zeros((5, n))
        bands = (2, 2)
        matrix[2, 1:-1] = 2*(dx[:-1] + dx[1:])  # Diagonal
        matrix[1, 2:] = dx[1:]    # Upper diagonal
        matrix[3, :-2] = dx[:-1]  # Lower diagonal
        # First row
        matrix[2, 0] = dx[1]
        matrix[1, 1] = -dx[0] - dx[1]
        matrix[0, 2] = dx[0]
        # Last row
        matrix[2, -1] = dx[-2]
        matrix[3, -2] = -dx[-1] - dx[-2]
        matrix[4, -3] = dx[-1]
    else:
        raise ValueError("endpoint must be 'natural' or 'not-a-knot'")

    # Call a solver for a banded matrix
    c = la.solve_banded(bands, matrix, rhs, overwrite_ab=True,
                        overwrite_b=True, check_finite=False)

    # Find the remaining coefficients
    d = np.diff(c)/(3*dx)
    b = dy/dx - dx*(2*c[:-1] + c[1:])/3
    return b, c, d
We also need a function that can evaluate the spline given the coefficients.
def cubic_spline_eval(x, xdata, ydata, b, c, d):
    """
    Evaluates the cubic spline that interpolates {(xdata, ydata)} at x,
    with coefficients b, c and d.

    Parameters:
        x: array_like, shape (m,). x-values (axis) at which the spline
           is evaluated.
        xdata, ydata: array_like, shape (n,). The interpolated data points.
        b, c, d: array_like, shapes (n-1,), (n,) and (n-1,).
           Coefficients of the spline.

    Returns:
        array, shape (m,). Function evaluation of the spline.
    """
    x = np.asarray(x)
    y = np.zeros(len(x))
    m = 0
    for i in range(len(xdata) - 1):
        n = np.sum(x < xdata[i + 1]) - m
        xx = x[m:m + n] - xdata[i]
        y[m:m + n] = ydata[i] + b[i]*xx + c[i]*xx**2 + d[i]*xx**3
        m = m + n
    # Points beyond the last knot: extrapolate with the last polynomial
    xx = x[m:] - xdata[-2]
    y[m:] = ydata[-2] + b[-1]*xx + c[-2]*xx**2 + d[-1]*xx**3
    return y
def func(x):
    return np.sin(x)

xdata = np.asarray([0, 2, 4, 6, 8, 10])*np.pi/5
ydata = func(xdata)

x = np.linspace(xdata[0] - 2, xdata[-1] + 2, 200)
y = func(x)

b, c, d = cubic_spline_coeffs(xdata, ydata, "natural")
ya = cubic_spline_eval(x, xdata, ydata, b, c, d)

b, c, d = cubic_spline_coeffs(xdata, ydata, "not-a-knot")
yb = cubic_spline_eval(x, xdata, ydata, b, c, d)

plt.figure()
plt.plot(x, y, "--", label=r"$\sin(x)$")
plt.plot(x, ya, label="Natural cubic spline")
plt.plot(x, yb, label="Not-a-knot cubic spline")
plt.plot(xdata, ydata, 'o', label="Data points")
plt.xlim(x[0], x[-1])
plt.ylim(np.min(y)*1.1, np.max(y)*1.1)
plt.legend()
plt.show()
Note that we have defined the cubic splines outside the domain of the data points. In the not-a-knot cubic spline, the outer polynomials are extended. That is, the polynomial for $x<x_1$ is the same as for $x_1<x<x_2$. The natural cubic spline, on the other hand, defines a polynomial with "opposite curvature" outside the data points. That is, if the spline curves away from the axis for $x_1<x<x_2$, it will curve towards the axis for $x<x_1$.
# Define some data points
xdata = [-0.5, -1.0, -0.5, 0.2,  1.5, 2.0, 1.0]
ydata = [ 5.0,  3.7,  1.0, 1.0, -0.5, 1.5, 4.0]
n = len(xdata)

# Parameter values
t = np.linspace(0, 1, 100)

# Curve interpolation using uniformly distributed parameter nodes
ti = np.linspace(0, 1, n)

# x-axis
b, c, d = cubic_spline_coeffs(ti, xdata, "not-a-knot")
Px = cubic_spline_eval(t, ti, xdata, b, c, d)

# y-axis
b, c, d = cubic_spline_coeffs(ti, ydata, "not-a-knot")
Py = cubic_spline_eval(t, ti, ydata, b, c, d)

plt.figure()
plt.plot(Px, Py, 'g', label='Uniformly distributed nodes.')
plt.plot(xdata, ydata, 'r*', label='Data points.')
plt.title('Polynomial curve interpolation of a set of data points.')
plt.legend(), plt.xlabel('x'), plt.ylabel('y')
plt.show()
Exercise: Use the Chebyshev nodes instead of the uniformly distributed parameter nodes. Hint: see the polynomial interpolation notebook.
There are several benefits to using cubic splines as opposed to, e.g., a high-order polynomial interpolation or higher-order splines. In our notebook on polynomial interpolation we showed that large oscillations (and thus large errors) may occur when approximating a function using a high-order polynomial. This is called Runge's phenomenon. We showed that the error due to this oscillation is minimized when using so-called Chebyshev nodes as the basis for the interpolation. Another workaround is to use low-order polynomial splines.
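A small self-contained illustration of Runge's phenomenon (this sketch assumes nothing beyond numpy): interpolate $f(x) = 1/(1+25x^2)$ on $[-1, 1]$ through 11 equispaced nodes, once with a single degree-10 polynomial and once with a piecewise-linear spline through the same nodes:

```python
import numpy as np

f = lambda x: 1/(1 + 25*x**2)   # Runge's function
nodes = np.linspace(-1, 1, 11)  # 11 equispaced nodes
xs = np.linspace(-1, 1, 1001)   # fine evaluation grid

# Degree-10 interpolating polynomial through all 11 points
poly = np.polyfit(nodes, f(nodes), 10)
poly_err = np.max(np.abs(np.polyval(poly, xs) - f(xs)))

# Piecewise-linear spline through the same nodes
lin_err = np.max(np.abs(np.interp(xs, nodes, f(nodes)) - f(xs)))

print(f"max error, degree-10 polynomial: {poly_err:.3f}")  # ~1.9, near x = ±1
print(f"max error, linear spline:        {lin_err:.3f}")   # far smaller
```

The single high-order polynomial oscillates badly near the endpoints, while even the crudest spline stays close to $f$; a cubic spline does better still.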
You may be wondering why we don't use higher-order splines instead of cubic splines. Approximating a function using more function evaluations and higher-order polynomials must always be better, right? Wrong! We may of course argue that higher-order polynomial splines give smoother curves, since we are demanding that higher-order derivatives are continuous. This may, however, lead to larger errors due to Runge's phenomenon. In addition, higher-order polynomial pieces require more data points, so the global change in the curve due to a change in a single data point is increased for higher-order polynomial splines.
[1] Sauer, T.: Numerical Analysis international edition, second edition, Pearson 2014
[2] Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes, the Art of Scientific Computing, 3rd edition, Cambridge University Press 2007
scipy.interpolate has several functions for polynomial and spline interpolation, such as CubicSpline and interp1d. Check them out!
GSoC2013 Ideas/OWASP ZAP Exploring Advanced reporting using BIRT
- 1 Abstract
- 2 Work breakdown structure with Timeline and expected results
- 3 Progress First phase(June 22, 2013)
- 4 Progress First phase(June 27, 2013)
- 5 Progress 2nd Phase: Integration with OWASP ZAP(27 June - 10th July, 2013)
- 6 Progress 2nd Phase (10th July - 18th July 2013)
- 7 Progress final midterm phase
Abstract
OWASP ZAP (Zed Attack Proxy) is an open source penetration testing tool for finding vulnerabilities in web applications. The ZAP application’s current reporting capability is limited to generating a few types of reports on ZAP testing results, in formats such as HTML and XML. This project explores advanced reporting with BIRT so that users can analyse the testing results in a productive way. Objectives:
- Install and configure the BIRT environment to be used in the Eclipse OWASP ZAP project.
- Be able to generate reports from the application using the BIRT report engine API.
- Creation of prototype reports regarding the results output of the Sessions & attacks such as: Alerts, History, Search etc.
- A new user interface for generating reports which is easy to use and provides the user with a wide range of options.
- Analysis report of the pros and cons of using BIRT within OWASP ZAP as a reporting tool.
Work breakdown structure with Timeline and expected results
Introduction
The current reporting module in ZAP can generate only limited types of reports on the results produced by ZAP, e.g. in HTML and XML formats. The proposed BIRT-based module shall allow users to better analyze the testing results. The report structure shall be designed using the BIRT RCP Report Designer.
BIRT (The Business Intelligence and Reporting Tools) project is an open source software project that provides reporting and business intelligence capabilities for rich client and web applications.
In relation to this project, there are two main components of BIRT:
- A report designer within the Eclipse IDE for creating BIRT Report prototypes.
- A runtime component (BIRT Report Engine API) for generating reports that can be deployed to OWASP ZAP.
- Proposed Solution and Implementation
The proposed solution consists of the following three stages:
1. Create a Reporting Module Develop a reporting module using BIRT Report Engine API. This module shall be able to generate reports within OWASP ZAP. The Report Engine API is a part of the package "org.eclipse.birt.report.engine.api". This API shall provide the most commonly used functionality for the proposed module. The module shall use the "ReportEngine" class of the API for generating reports.
2. Design Report structure Several report prototypes for various ZAP result outputs shall be designed using the BIRT RCP Report Designer application. It is a standalone tool that is used to build a BIRT report design and preview a report. The prototypes, created by the tool, will be used by the proposed Reporting module to display the reports for ZAP output results.
3. Create a Data source OWASP ZAP shall produce XML results which will be fed into the proposed BIRT reporting module. The reporting module shall read it as a data source to generate reports. XML output is generated by the ZAP package “org.parosproxy.paros.extension.report”
System Context Diagram The following diagram shows the high level system context diagram within ZAP.
Figure: System context diagram.
Progress First phase(June 22, 2013)
Prototype project using the Reporting Engine API
Rauf has created a prototype. This prototype contains the reporting engine API with a sample report. The next challenge with the prototype consists of using an XML output data source generated from OWASP ZAP, and using this XML output to render the first report.
The Actual OWASP ZAP xml output comes from data on the alerts tab.
Figure: Alert Tab from owaspzap
Figure: XML output generated by OWASP ZAP
UNIT tests prototype
Rauf will be working on unit tests to make sure the prototype has proper error handling.
Extending OWASP ZAP with new reporting module
We are researching the best way to integrate this new module into OWASP ZAP. The first part of this is by creating a new extension as explained in
Once the prototype is working properly, the code will be integrated in the new extension module as shown the following figure.
Source Code repository
Once the prototype code is integrated into OWASP ZAP, the code will be set up in a ZAP dev environment:
Creating new reporting module as an ADD-ON
We will consult with Simon Bennetts (project leader) on whether to create a new menu, or keep the Reports one and create a new sub-menu.
Progress First phase(June 27, 2013)
First Draft Report using OWASP xml generated output
Using a ready-to-run instance of Eclipse Juno with the BIRT plugins installed in it, Rauf was able to create a BIRT draft report using the XML output generated from OWASP ZAP as the XML data source. This report still needs improvements in layout and design; CSS can be used to enhance the look and feel of the reports. Next week we will concentrate on creating a nice CSS for the reports.
Prototype running the BIRT report API with the created report
Implementation of the Report API is the coolest part of the project. Indeed, running the report from a prototype project gave us the possibility to create reports in multiple formats. By using the HTML and PDF render options, we can create two reports at once, as shown in the code:
Progress 2nd Phase: Integration with OWASP ZAP(27 June - 10th July, 2013)
During this phase we have focused on integrating the code with OWASP ZAP. The challenges in this phase are:
- Understand how extensions work within OWASP ZAP
- Library structure
- Flow and interaction with the user
- New Design Report Alerts
Understand how extensions work within OWASP ZAP
For this part, the extension examples were of great help. Rauf practiced with both examples (TopMenu & RightClickMenu) and was able to complete this part. By implementing these examples, Rauf came to understand how extensions work and created an extension for the BIRT module.
Library Structure
The BIRT Report API contains many JAR files. One of them, js.jar, conflicts with the existing one in the OWASP ZAP library. For this part we replaced the old one with the one from the BIRT engine, and the OWASP ZAP code was able to build and run without issues. We asked Simon about this particular issue. It seems that this JAR is not being used by OWASP ZAP; however, the way extensions and add-ons work should allow us to set the library in the extension of the package we have created for the BIRT project.
Integrating the rough prototype
For the purpose of testing the integration, a new package “org.zapproxy.extension.birtreports” was created, and two classes were added, as seen here, including the Message.properties file.
On the ReportLastScan.java (which is a ripoff of the same ReportLastScan from paros.extension) we added a new method
On the BirtTopMenu.java class we call the method:
Then we run OWASP ZAP
Report is generated and saved on the hard-coded location in the code
Work-flow and UML classes - Interaction with the user
One of the upcoming tasks for Rauf consists of creating a better flow for interacting with the user. The workflow must answer questions such as:
- Will a user be allowed to define a report? (He could pass it as a parameter; in the future a user could create his own reports to be generated from the XML data source or the TEMP HSQL database.)
- The report is using an XML data source generated from OWASP ZAP. The source path must be defined and must be a relative path when OWASP ZAP is installed. Propose a clear method to do this
- The user should have the option to define the output path in his drive to save the generated PDF/HTML report
- BIRT engine supports multiple formats :HTML, Paginated HTML, PDF, WORD, XLS, and PostScript . Do we create Menu items for each one?
- Implement Exceptions and messages to interact with the user once reports are generated
- Create Unit tests
New Design
For this section a new style needs to be defined to be used with the reports. We will propose 2 designs and users can vote for selection.
Right now this is the first version. Charts are also generated by BIRT, creating a summary of the alerts xml output.
Progress 2nd Phase (10th July - 18th July 2013)
Redefine prototype workflow
Rauf(student) created a workflow which was discussed with Johanna(mentor). The original workflow was missing a clear integration based on how the code works and how OWASP ZAP generates an XML file. The following represents the actual flow built in the code after Rauf did the correct representation and understood how OWASP ZAP generates the Alert data and later on generates an XML file .
We presented the prototype to Simon, code was made available to him through DropBox. Simon was able to generate a report. Code will be soon available through Github
Rauf is working on unit tests for the methods created, and on fixing some bugs in the prototype. He must also research whether we can generate a report on the fly, so that no predefined report file is used, but instead the report is created on the fly. This seems possible using the BIRT Design API jars, which are already included in the JAR library files. This has lower priority, but we see the advantage: basically, everything could be generated code-wise.
Unit tests and enhacements
In the birtreports package, Rauf did some bug fixing and made the function openPDF functional, so it opens the PDF after saving the file.
- The testbirt package contains a file that has 4 test cases for the LastReportScan file. We can add more test cases as we proceed; that is why Rauf created a separate package (testbirt). Rauf used JUnit for the test cases.
- The four test cases are:
i. To test if the XML file is missing
ii. To test if the rptdesign file is missing
iii. To test the generation of the XML file
iv. To test openPDF() functionality
Jar file JS.JAR issue
A while ago we reported some issues with this jar. What we found out is that the jar from BIRT library contains digital signatures that must be removed. After doing that, the code is able to run from the Build source.
Enhancing Alerts report data
One piece of feedback we received from Simon was to generate the data in the following format:
- Each type of alert should only be shown once
- Where multiple instances occur it would be good to see the first X details shown (where the user can specify X?), eg:
Type of Alert: X-Content-Type-Options header missing Low (Warning) The Anti-MIME-Sniffing header X-Content-Type-Options was not set to 'nosniff' First 10 examples: URL: URL: URL: Solution: This check is specific to Internet Explorer 8 and Google Chrome. Ensure each page sets a Content-Type header and the X-CONTENT-TYPE-OPTIONS if the ContentType header is
For this part, we can think of different solutions such as:
- Transform the XML to another XML with the right regrouping using XSLT
- Create an interface which generates the data we need and use BIRT Scripted data source to generate the reports
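As a sketch of the first idea in plain Python rather than XSLT (note: the XML below is a simplified stand-in for the real ZAP alert report schema, not its exact structure), we can group repeated alert items by type and keep only the first X example URIs:

```python
import xml.etree.ElementTree as ET
from collections import OrderedDict

# Simplified stand-in for a ZAP alerts XML export (not the exact ZAP schema).
SAMPLE = """
<report>
  <alertitem><alert>X-Content-Type-Options header missing</alert><uri>http://example.com/a</uri></alertitem>
  <alertitem><alert>X-Content-Type-Options header missing</alert><uri>http://example.com/b</uri></alertitem>
  <alertitem><alert>Cookie set without HttpOnly flag</alert><uri>http://example.com/login</uri></alertitem>
</report>
"""

def group_alerts(xml_text, max_examples=10):
    """Return {alert name: [first max_examples URIs]} in document order."""
    grouped = OrderedDict()
    for item in ET.fromstring(xml_text).iter("alertitem"):
        name = item.findtext("alert")
        uris = grouped.setdefault(name, [])
        if len(uris) < max_examples:
            uris.append(item.findtext("uri"))
    return grouped

for name, uris in group_alerts(SAMPLE).items():
    print(name, "->", uris)
```

The regrouped structure could then be written back out as XML, or handed to BIRT as a scripted data source.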
Make the extension an Add-On
A lot of information can be found on the OWASP ZAP developer wiki on how to create an add-on, but for new developers working with OWASP ZAP this can be confusing. For this purpose, Johanna (mentor) created a step-by-step guide on how to create extensions and add-ons. This is especially handy for developers working for the first time in OWASP ZAP. The guideline is available here (feel free to comment and give feedback!):
File:GuidelineZAPExtensionsAddOns1.0.pdf
Generating BIRT reports in ZAP using scripted data source & Enhancing data in the Alert reports
So far we have been able to create a working prototype as originally planned; however, this prototype uses predefined BIRT reports and an XML file as the data source. After talking to Simon, he pointed out that there is a more efficient way to generate reports using BIRT: using a scripted data source.
Also Simon provided us a very good link about this subject here:
As explained in the book “Integrating and Extending BIRT, 3rd edition”: “BIRT supports accessing data using JavaScript code. This type of data source is called a scripted data source. Using a scripted data source, you can access objects other than the built-in data source types. Because the JavaScript code for accessing and managing a scripted data source can wrap Java objects, a scripted data source can access an EJB, an XML stream, a Hibernate object, or any other Java object that retrieves data. A scripted data source must return data in tabular format, so that BIRT can perform sorting, aggregation, and grouping.” (Weathersby et al., 2011)
The challenge for the next phase consists in understanding ZAP code more deeply, especially because there is already an API.
We need to define a plan for this phase after Rauf has completed refining the code and submitting it for final revision to Simon Bennetts, project leader of OWASP ZAP.
Progress final midterm phase
By the midterm, Rauf was able to finish refining the code, enhancing error handling, and adding some extra features, such as being able to set a custom logo and to open the PDF file after the report has been generated.
Right now, an important part is understanding the code to be able to create the interface to generate BIRT reports using scripted data source. For this part, Rauf(Student) and Johanna(mentor) will be discussing the actual flow and feedback with Simon about this flow and propose how to build this interface that will work with an scripted data source | https://www.owasp.org/index.php?title=GSoC2013_Ideas/OWASP_ZAP_Exploring_Advanced_reporting_using_BIRT&redirect=no | CC-MAIN-2015-22 | refinedweb | 2,306 | 58.32 |
Dec 26, 2015 08:57 PM | Arvind2015Asp.net
Hi Team,
I am facing the below listed issue while trying to make use of the serializable object in C#. I have used the below code snippet
Session Used
Session["YYY"] = dt;
The session variable was losing its value after the execution of the method in which the declaration was made. Hence had to go with the below listed config for setting the timeout to 940.
Web config
<sessionState mode="StateServer" cookieless="false" timeout="940" />
Received the below error after making the above change
Had marked the class as serializable as shown below
[Serializable()]
public class xyz
{
}
Dec 27, 2015 08:52 PM | Arvind2015Asp.net
Hi Patrice,
I was able to find the root cause of my issue: the web config entry I added for session state is what caused the error message. I had added it because the value stored in my session variable loses its content when I hit the finally block. So is there any option to deal with this? I need the value again in another method downstream.
Dec 27, 2015 10:02 PM | PatriceSc
Good point. It is always better to solve your actual problem rather than to just try something else without even understanding your initial issue.
Showing some code could help. Could it be that you are disposing the object you are trying to store in the session? Or could it be that the web app is restarting?
Dec 29, 2015 08:51 AM | Candice Zhou
Hi Arvind,
It seems like session state is disabled in your web config on the web server where you are deploying.
Check the difference in the session settings between your local machine and the deployment server.
Best Regards,
Candice Zhou
5 replies
Last post Dec 29, 2015 08:51 AM by Candice Zhou | https://forums.asp.net/t/2081047.aspx?Unable+to+serialize+the+session+state+In+StateServer+and+SQLServer+mode+ASP+NET+will+serialize+the+session+state+objects+and+as+a+result+non+serializable+objects+or+MarshalByRef+objects+are+not+permitted+ | CC-MAIN-2019-18 | refinedweb | 328 | 68.91 |
Does Pythonista have the ability to show function prototypes as you enter a script?
I would like to be able to see what the prototype is for the function as I am creating my script. I am new to Python, NumPy and MatPlotLibl and it would be helpful.
It takes awhile if you have to look up the syntax for the function arguments.
Thanks
If you long-press an identifier, it will pop up an option to open the help browser.
You can also search for methods in the built-in doc viewer (question mark).
Of course, you can also use the python built in help from the console.
import os
help(os.path.relpath)
ok, so the general assignment is to create a class called SearchMyString and its driver that allows the user to enter any word or words that they want, and then choose to find the number of vowels, find the number of words, and search their entry for any word (which will either tell them the word is found in their original entry, or not). i know, very simple, but i am only in a high school computer programming class; give me a break. what i am having trouble with is creating the third method (searching their entry). the teacher told us that we need to use boolean for it, obviously, but i'm not quite sure how to make it work in my code because every time i compile, something is wrong (i am using blue j, by the way). i have been working on this for hours, and i just really need some help on this! if you could explain to me how to do it (not just do it for me, please), that would be amazing! thank you!
THIS IS MY ORIGINAL CLASS CODE:
Code :
import java.util.Scanner; public class SearchMyString { public String userStr; Scanner in = new Scanner(System.in); public SearchMyString(String userStr) { this.userStr = userStr; }
here are the choices the user has:
Code :
public void displayMenu() { System.out.print("What would you like to do with your entry?\n"); System.out.print("(1) Find Vowels\n"); System.out.print("(2) Find Number of Words\n"); System.out.print("(3) Search Word\n"); }
method to find vowels (im finished with this already):
Code :
public int findVowels() { int counter = 0; System.out.print("==============================\n\n"); for(int i = 0; i < userStr.length(); i++) { if(userStr.charAt(i) == 'a') { System.out.print("a is at position " + i + "\n"); counter++; }else if(userStr.charAt(i) == 'e') { System.out.print("e is at position " + i + "\n"); counter++; }else if(userStr.charAt(i) == 'i') { System.out.print("i is at position " + i + "\n"); counter++; }else if(userStr.charAt(i) == 'o') { System.out.print("o is at position " + i + "\n"); counter++; }else if(userStr.charAt(i) == 'u') { System.out.print("u is at position " + i + "\n"); counter++; }else if(userStr.charAt(i) == 'y') { System.out.print("y is at position " + i + "\n"); counter++; } }//looks for only vowels. if found, adds +1 to counter System.out.print("Number of vowels = " + counter + "\n\n"); System.out.print("==============================\n\n"); return(counter); }
the method to find the number of words (finished this, too):
Code :
public int findNumOfWords() { int numWords = 0; System.out.print("==============================\n\n"); for(int i = 0; i < userStr.length(); i++) { if(userStr.charAt(i) == ',') {//if the character is "," numWords++;//+1 to numWords (if true) }else if(userStr.charAt(i) == '.') {//if the character is "." numWords++;//+1 to numWords (if true) } } System.out.print("Number of words = " + numWords + "\n\n"); System.out.print("==============================\n\n"); return(numWords); }
AAAAAND here's where the trouble starts... i am so lost:
Code :
public boolean searchWord(String srchTerm) { for(int i = 0; i < userStr.length(); i++) { if(srchTerm.charAt(i) == userStr.charAt(i)) { boolean srch = true; break; }else { boolean srch = false; break; } } if(true) { System.out.print("That word was found!\n\n"); }else if(false) { System.out.print("That word was not found.\n\n"); } System.out.print("==============================\n\n"); return(srch); } }
THIS IS MY DRIVER CLASS CODE (in case any of this matters...):
Code :
import java.util.Scanner; public class SearchMyStringTest { public static void main(String args[]){ while(true){ Scanner in = new Scanner(System.in); System.out.print("Enter Your Words! (Separate with commas (,) End with period (.))\n"); String userInput = in.nextLine();//receives the user input SearchMyString str = new SearchMyString(userInput);//holds the user input str.displayMenu(); int ans = in.nextInt(); if(ans == 1){ int counter = str.findVowels(); }else if(ans == 2){ int numWords = str.findNumOfWords(); }else if(ans == 3){ System.out.print("==============================\n\n"); System.out.print("Please enter the word you would like to search for:\n\n"); String word = in.nextLine(); SearchMyString newSearch = new SearchMyString(word); int srch = str.searchWord(word); } System.out.print("Would you like to play again?\n"); System.out.print("1. Yes\n"); System.out.print("2. No\n"); int c = in.nextInt(); if(c == 1){ System.out.print("==============================\n\n"); }else if(c == 2){ System.out.print("==============================\n\n"); System.out.print("Thanks for playing!\n\n"); System.out.print("==============================\n\n"); break; } } } }
thank you so much!!! | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/18939-boolean-string-trouble-printingthethread.html | CC-MAIN-2014-15 | refinedweb | 739 | 62.64 |
You might already have noticed that, as part of vSphere 6.5, VMware introduced vCenter Server REST APIs. I really enjoyed playing around with them using the vCenter apiexplorer as well as the Postman REST client. Recently, I wanted to code against these APIs using one of the programming languages, and I am happy that I was able to do it using Python. I thought it was worth sharing with you. In this blog post, I will take you through all the steps required to get started with the vCenter REST API using Python. Here we go.
Step 1. The first important thing is to get familiar with the vCenter Server REST API documentation. Similar documentation is available from the vCenter apiexplorer as well. I would recommend playing with the apiexplorer, which will not only make you familiar with the documentation but also enable you to quickly invoke these APIs against your vCenter server.

Step 2. Install the "requests" Python module as follows:
$ pip install requests
Step 3. Now let us take a look at below python module developed to simplify REST API usage.
[python]
# Author: Vikas Shitole
# Website:
# Product: vCenter server
# Description: Python module for vCenter server REST APIs
# Reference:
# How to setup vCenter REST API environment?: Just have VM with python and install "requests" python library using pip

import requests
import json
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

s = requests.Session()
s.verify = False

# Function to get the vCenter server session
def get_vc_session(vcip, username, password):
    s.post('https://' + vcip + '/rest/com/vmware/cis/session', auth=(username, password))
    return s

# Function to get all the VMs from vCenter inventory
def get_vms(vcip):
    vms = s.get('https://' + vcip + '/rest/vcenter/vm')
    return vms

# Function to power on particular VM
def poweron_vm(vmmoid, vcip):
    s.post('https://' + vcip + '/rest/vcenter/vm/' + vmmoid + '/power/start')

# Function to power off particular VM
def poweroff_vm(vmmoid, vcip):
    s.post('https://' + vcip + '/rest/vcenter/vm/' + vmmoid + '/power/stop')
[/python]
The above vcrest.py module is available on my GitHub repo.
Let us understand the above code.
Line 8: Imports the powerful "requests" Python library required to make API calls.
Line 9: Imports the "json" library required to parse the JSON responses we get from the REST APIs.
Line 10/11: Here we are disabling warnings related to the SSL connection. In production, we should not disable them.
Line 13/14: Here we are creating a Session object so the session is persisted across requests. Notice that "s.verify" is set to False, which means we are skipping SSL certificate verification. If you want to set it to True, please take a look at the SSL Cert Verification section of the requests documentation.
Line 16 to 32: I have added 4 methods, i.e. get_vc_session(), get_vms(), poweron_vm() & poweroff_vm(). We will be calling these methods from the sample script below. In all of them, I have followed the REST API documentation and called the APIs using the "requests" library.
Step 4. Now that we understand the above "vcrest.py" module, let us import it into a script to demonstrate its usage.
[python]
# Description: Python sample to get VMs and its moid using vCenter server REST API.
# Reference:
# Make sure you have "vcrest.py" file in your python directory.

import vcrest
import json

vcip = "10.192.23.143" # vCenter server ip address/FQDN
# Get vCenter server session and can be used as needed. pass vcenter username & password
vcsession = vcrest.get_vc_session(vcip, "Administrator@vsphere.local", "VMware1!")

# Get all the VMs from inventory using below method from "vcrest" module.
vms = vcrest.get_vms(vcip)

# Parsing the JSON response we got from above function call (it has all the VMs present in inventory)
vm_response = json.loads(vms.text)
json_data = vm_response["value"]

print "VM names and its unique MOID"
print "============================"
for vm in json_data:
    print vm.get("name") + " :: " + vm.get("vm")
    # We are powering on all the VMs those are in powered off state
    if vm.get("power_state") == "POWERED_OFF":
        vcrest.poweron_vm(vm.get("vm"), vcip)
[/python]
The above script, i.e. vcrestsample.py, is available on my GitHub repo as well.
Output :
vmware@localhost:~$ python vcrestsample.py
VM names and its unique MOID
============================
NTP-India-1 :: vm-42
NTP-PA-2 :: vm-43
WebApp-1 :: vm-44
vThinkBeyondVM :: vm-45
vmware@localhost:~$
Let us understand the above script.
Line 5: Imports the "vcrest" module we just discussed above.
Line 10: We get a vCenter server session by calling the function defined in the "vcrest" module. We can use this session object as needed.
Line 13: We get all the VMs from the inventory using the get_vms() function defined in the "vcrest" module. Note that this call returns the JSON response shown below, which we need to parse to fetch the useful information.
[python]
{
"value": [
{
"memory_size_MiB": 512,
"vm": "vm-42",
"name": "NTP-India-1",
"power_state": "POWERED_OFF",
"cpu_count": 1
},
{
"memory_size_MiB": 512,
"vm": "vm-43",
"name": "NTP-PA-2",
"power_state": "POWERED_OFF",
"cpu_count": 1
},
{
"memory_size_MiB": 512,
"vm": "vm-44",
"name": "WebApp-1",
"power_state": "POWERED_ON",
"cpu_count": 1
},
{
"memory_size_MiB": 512,
"vm": "vm-45",
"name": "vThinkBeyondVM",
"power_state": "POWERED_ON",
"cpu_count": 1
}
]
}
[/python]
Line 16/17: Since we got the JSON response shown above, here we parse it so that we can easily access it as a Python dictionary.
Line 21 to 25: Iterating through the dictionary and printing each VM name and its moid (managed object id), and finally powering on the VMs that are off.
That is all; isn't it cool? Since we have REST APIs available for the vCenter VM life cycle, VCSA, content library, tagging etc., there is a lot to learn and play around with. I will keep adding more methods to the vcrest.py module. If you are interested in contributing to this module, let me know; it would be really great. In case you would like to explore the vCenter SOAP-based APIs, please refer to my last post.
(Source: https://vthinkbeyondvm.com/tag/vcenter-server/)
14 May 2013 23:22 [Source: ICIS news]
Seyed Majid Modirzadeh urged for investors to step forward, calling the site “a concealed treasure” during his presentation at the 10th Iran Petrochemical Forum (IPF).
The site has access to feedstock from the South Pars gas field and Khuzestan reserves, rich ethane content from the country’s gas reserves, stable natural gas prices as compared to other oil-related feedstock prices and a strategic location as the only port which has direct access to open seas, he said.
At Chabahar in southern
A gas-to-polypropylene (PP) project is also being planned, with an estimated cost of €780m, he said. The project is expected to produce 1.65m tonnes/year of methanol as an intermediate; 514,000 tonnes/year of propylene and 500,000 tonnes/year of PP, he added.
A gas-to-dimethyl ether (DME) project, which requires a €515m investment, is also being mulled. The project is expected to produce 1.65m tonnes/year of methanol, which will be an intermediate to yield 1.2m tonnes/year of DME, Modirzadeh said.

(Source: http://www.icis.com/Articles/2013/05/14/9668503/iran-identifies-chabahar-as-new-petrochemical-site.html)
java.lang.Object
  org.netlib.blas.Strsv
public class Strsv
Following is the description from the original Fortran source. For each array argument, the Java version will include an integer offset parameter, so the arguments may not match the description exactly. Contact seymour@cs.utk.edu with any questions.
*  Purpose
*  =======
*
*  STRSV  solves one of the systems of equations
*
*     A*x = b,   or   A'*x = b,
*
*  where b and x are n element vectors and A is an n by n unit, or
*  non-unit, upper or lower triangular matrix.
*
*  No test for singularity or near-singularity is included in this
*  routine. Such tests must be performed before calling this routine.
*
*  Parameters
*  ==========
*
*  UPLO   - CHARACTER*1.
*           On entry, UPLO specifies whether the matrix is an upper or
*           lower triangular matrix as follows:
*
*              UPLO = 'U' or 'u'   A is an upper triangular matrix.
*              UPLO = 'L' or 'l'   A is a lower triangular matrix.
*
*           Unchanged on exit.
*
*  TRANS  - CHARACTER*1.
*           On entry, TRANS specifies the equations to be solved as
*           follows:
*
*              TRANS = 'N' or 'n'   A*x = b.
*              TRANS = 'T' or 't'   A'*x = b.
*              TRANS = 'C' or 'c'   A'*x = b.
*
*           Unchanged on exit.
*
*  DIAG   - CHARACTER*1.
*           On entry, DIAG specifies whether or not A is unit
*           triangular as follows:
*
*              DIAG = 'U' or 'u'   A is assumed to be unit triangular.
*              DIAG = 'N' or 'n'   A is not assumed to be unit
*                                  triangular.
*
*           Unchanged on exit.
*
*  N      - INTEGER.
*           On entry, N specifies the order of the matrix A.
*           N must be at least zero.
*           Unchanged on exit.
*
*  A      - REAL array of DIMENSION ( LDA, n ).
*           Before entry with  UPLO = 'U' or 'u', the leading n by n
*           upper triangular part of the array A must contain the upper
*           triangular matrix and the strictly lower triangular part of
*           A is not referenced.
*           Before entry with UPLO = 'L' or 'l', the leading n by n
*           lower triangular part of the array A must contain the lower
*           triangular matrix and the strictly upper triangular part of
*           A is not referenced.
*           Note that when  DIAG = 'U' or 'u', the diagonal elements of
*           A are not referenced either, but are assumed to be unity.
*           Unchanged on exit.
*
*  LDA    - INTEGER.
*           On entry, LDA specifies the first dimension of A as declared
*           in the calling (sub) program. LDA must be at least
*           max( 1, n ).
*           Unchanged on exit.
*
*  X      - REAL array of dimension at least
*           ( 1 + ( n - 1 )*abs( INCX ) ).
*           Before entry, the incremented array X must contain the n
*           element right-hand side vector b. On exit, X is overwritten
*           with the solution vector x.
*
*  INCX   - INTEGER.
*           On entry, INCX specifies the increment for the elements of
*           X. INCX must not be zero.
*           Unchanged on exit.
*
*
*  .. Parameters ..
public Strsv()
public static void strsv(java.lang.String uplo, java.lang.String trans, java.lang.String diag, int n, float[] a, int _a_offset, int lda, float[] x, int _x_offset, int incx)

(Source: http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/blas/Strsv.html)
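To make the routine's semantics concrete, here is a small plain-Java sketch (independent of the netlib classes, which are not assumed here) of what STRSV computes for the UPLO = 'U', TRANS = 'N', DIAG = 'N' case with unit stride and column-major storage:

```java
public class TrsvDemo {
    // Back-substitution: solve A*x = b for a non-unit upper-triangular A
    // stored column-major with leading dimension lda; b is overwritten by x.
    static void strsvUpper(int n, float[] a, int lda, float[] x) {
        for (int j = n - 1; j >= 0; j--) {
            x[j] /= a[j + j * lda];            // divide by the diagonal entry
            for (int i = 0; i < j; i++) {
                x[i] -= a[i + j * lda] * x[j]; // eliminate column j from earlier rows
            }
        }
    }

    public static void main(String[] args) {
        // A = [[2, 1], [0, 4]] in column-major order; b = (4, 8)
        float[] a = {2f, 0f, 1f, 4f};
        float[] x = {4f, 8f};
        strsvUpper(2, a, 2, x);
        System.out.println(x[0] + " " + x[1]); // solution x = (1.0, 2.0)
    }
}
```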
Thomas Phinney said:
Mark’s suggestion works for FontLab Studio 5, but this feature has not (as yet) been implemented in FontLab VI.
Claudio Piccinini said: Thanks for the folder location, but the folder is empty and I can’t seem able to open older .enc files which were in the Fontlab Studio 5 application support folder.
Claudio Piccinini said: Also, if you want to export an .enc file in Fontlab VI how can you do?
#FLM: Save encoding
import os
import fontlab

f = fontlab.CurrentFont()
if f == None:
    exit

import sys
sys.stdout = open(os.path.expanduser('~') + "/Desktop/my_new_enc.enc", "w")

glyphs = f.glyphs
print "%%FONTLAB ENCODING: 21000888; My New Encoding"
for index in range(len(f.glyphs)):
    print f.glyphs[index].name, index
Just to clarify, having a custom encoding (.enc file) just changes the view order in FontLab, when that encoding is invoked. If you want that glyph order baked into the font (changing the Glyph IDs), go to the menu and choose Glyph > Sort Glyphs > By Encoding.
@Thomas Phinney: Do you know from where you can open .enc files in the Fontlab VI interface? I can’t seem to find it. :-(
The default location of the FontLab VI encodings folder is:
You can change the location in Preferences > General in FontLab VI.
For more info on this and other custom data files for FontLab VI, see(Cribbed from my own writing at:)
Thanks for the folder location, but the folder is empty and I can’t seem able to open older .enc files which were in the Fontlab Studio 5 application support folder.
Also, if you want to export an .enc file in Fontlab VI how can you do?
They do not need to be “opened” to work. Just to be in a location FontLab VI knows about.
If you do want to change the contents, they are just text files. You can use any plain text editor to open and edit them, such as NotePad on Windows, BBEdit or TextWrangler on Mac.
But if you want to use an older .enc file, either:
I see Jameson Spires shared a script above. I *think* his "FLS 5?" comment means it is an FLS 5 script, though.
- just copy it to the new Fontlab VI location as discussed above:
- macOS: "Macintosh HD/Users/Your_Name/Library/Application Support/FontLab/FontLab VI/Encoding/"
- Windows: "C:\Users\Your_Name\AppData\Roaming\Fontlab\FontLab VI\Encoding"
- Or go to Preferences > General and change the location of the "User data folder" to wherever you like ... which could be the old FontLab Studio 5 location if you like.
I expect it could still be scripted in VI, to export the current view order, and/or the current GID order, as an encoding. Somebody like @Vassil Kateliev could probably do so.
If you want to change the name of the encoding, locate and edit the .enc file in Notepad etc.; you'll see the name on the first line. Save, then restart FontLab.
- Switch to index mode and arrange the glyphs however you like them
- Glyph/Glyph Names/Save Encoding
- Switch to Names mode
- Click the dropdown beside the names mode button and go all the way to the bottom. You'll see the name of your font there
- Glyphs/Sort Glyphs/By Encoding
- Switch to index mode to confirm that sorting worked correctly
If By Encoding is grayed out, you're probably in the wrong mode. This only works in Names mode.
After the export, you will only need to change the encoding ID (21000888) and the encoding file name.

(Source: https://typedrawers.com/discussion/3382/sort-multiple-mapping-sequentially-in-fontlab)
What's coming in Xtend 2.3 New & Noteworthy
The next release of Xtend will be available in June. It will come with many, many bug fixes, performance improvements and the following new features:
Several Language Enhancements
On the language level the next release has a couple of nice improvements and new features as well. Some of them are small but neat changes, e.g. it's no longer mandatory to write empty parentheses in constructor calls. Others deserve an entry on their own:
Properties (aka Value Objects) Since M7
With Operator Since M7.
Multiple Classes per File Since M7
You can now have any number of classes in a single file. Also the name of the first class must no longer match the file's name. But still if it matches it will be renamed in a rename refactoring.
Number Literals Since M6
Xtend now has comprehensive support for number literals. This includes most of the things you can do in Java, and in addition it has first class support for BigDecimal and BigInteger (including overloaded arithmetic operators).
new BigDecimal("6.1").subtract(
    new BigDecimal("0.755").multiply(BigDecimal.valueOf(3L)))

In Xtend you can use literals and operators:
6.1bd - 0.755bd * 3bd
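Both spellings evaluate to the same value; a quick check of the arithmetic in plain Java:

```java
import java.math.BigDecimal;

public class LiteralCheck {
    public static void main(String[] args) {
        // Same computation as the verbose form above:
        // 0.755 * 3 = 2.265, and 6.1 - 2.265 = 3.835
        BigDecimal result = new BigDecimal("6.1").subtract(
            new BigDecimal("0.755").multiply(BigDecimal.valueOf(3L)));
        System.out.println(result); // 3.835
    }
}
```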
Var-Args Since M6
Varargs can now be declared in Xtend methods as well. The syntax and semantics are just like in Java. Also calling varargs works as expected.
def String concat(String... strings) {
    ....
}

def void useConcat() {
    concat('can', 'pass', 'any', 'number', 'of', 'strings')
}
Abstract Classes Since M7
Long overdue but finally you can declare abstract classes in Xtend. The syntax is much like in Java although you don't have to use an abstract keyword for abstract methods. Flagging the class abstract is sufficient. Here's an example:
abstract class MyAbstractClass {
    def String toBeImplemented(String myArg)

    def doStuff(String otherArg) {
        return toBeImplemented(otherArg)
    }
}
Enhanced Field Decaration Since M7
Also overdue but finally here: Fields can be declared as 'val's which make them final. Also type inference is now supported on fields as well.
class MyClass { val finalField = 'hello world' var nonFinalInitializedField = myField.toUpperCase String nonFinalNonInitializedField }
Debugging since M6
Debugging through Xtend and Java is now fully supported. Setting breakpoints in Xtend works just as in Java and you can even switch to the Java level whenever wished by a context menu action.
Also all the cool features known from the Java debugger, like ‘Display’, ‘Inspect’, or ‘Exception Breakpoints’ are available.
Seamless JDT Integration since M6
Not only the debugger but all of JDT's navigation functionality now works seamlessly with Xtend. No matter if you click on a stack frame in an exception, a failing unit tests, some node in the call hierarchy or the type hierarchy: if the target element is implemented in Xtend, you'll end up in the Xtend editor at the original source location.
Generated Code View: Feel the Trace
But what if you want to see the generated Java source? You can of course open the Java file in an editor, but there's now an even better tool: the Generated Code view. In that view you see detailed information about which parts of the Java source were derived from which parts of the Xtend code, simply by marking the corresponding sections.
Improved Hovers since M6
The hover comes with a completely new HTML rendering for documentation comments. It supports all Javadoc tags like @param, @link or @see and even provides the resolved type information for generified types and methods. It is now easily possible to have a look at the inferred type of local variables and expressions. Also, the hover shows an expanded version of sugared expressions.
Call-Hierarchy since M6
In Xtend the open call hierarchy action is now available using the same shortcuts as in Java. It shows all calls to the selected method, field or constructor, no matter whether they are called from Xtend or Java.
Type-Hierarchy since M6
Also the Type Hierarchy view can be opened and populated from within an Xtend class. Just press ‘F4’ when your cursor sits on a type's name. It shows the type hierarchy of the selected classes. You implemented parts of the hierarchy in Java and others in Xtend? Don't worry. They will work perfectly together.
Inherited members in quick outline since M6
When pressing CTRL+O in an Xtend editor the quick outline pops up. It shows all the members of the currently edited class. Pressing CTRL+O once more will add the members inherited from super classes as well. A search field lets you simply filter by name in order to navigate to any member quickly.
Create Method Quickfix since M7
For unresolvable method invocations there is now a quick fix to create the missing method.
Compiler Improvements since M6
To make the generated Java code even more readable, the compiler can now inline certain operations. As a result the generated Java code is more idiomatic and therefore more readable. Also some superfluous statements have been removed.

(Source: http://www.eclipse.org/xtend/new_and_noteworthy/index.html)
I’ve been waiting to start this blog series for a couple of months. It’s nice to finally get cracking.
Hopefully some of you have already read some of my thoughts around C# 5’s async feature, mostly written last year. Since that initial flurry of posts, I’ve been pretty quiet, but I’m still really excited about it. Really, really excited. I’ve given a few talks on it, and I have a few more still to give – and this blog series will partly be complementary to those talks. In particular, there’s a DevExpress webcast which covers most of the same ground, with similar code. (It was before the CTP refresh, and also before my laptop was stolen in a burglary, so the code here is a rewrite.)
Async from a compiler’s point of view
Most of this blog series (at least the bits I anticipate at the moment) will deal with what the compiler does with async methods. (I haven’t used async delegates much at all, but I can’t imagine that the machinery is particularly different.)
As far as I’ve seen, most of the coverage on the web so far has dealt with using async. That’s natural, logical and entirely proper. Oh, and a bit boring after a while. I like knowing how a feature works before I go too far using it. This is a personal idiosyncrasy, and if you’re happy just using async with no “under the hood” details, that’s absolutely fine. It’s probably worth unsubscribing from my blog for a little while, that’s all.
This can all be seen as pretty similar to my Edulinq series of posts, which is why I’ve called it Eduasync this time.
My plan is to walk you through what the C# compiler relies on – the types which are currently part of AsyncCtpLibrary.dll, and how it interacts with Task / Task from .NET 4. We’ll then look at the code generated by the compiler – essentially a state machine – and some of the less obvious aspects of it. I’ll give examples of any bugs I’ve found in the CTP, just for the heck of it – and as a way of checking whether they’re fixed in later versions. (Obviously I’ve let the C#/VB team know about these as I’ve come across them.)
I’ll assume that you know the basics of using async – so if you don’t, now would be a good time to look at the numerous resources on the Visual Studio Async home page. There are loads of videos, specs (including the C# spec changes, most importantly from my point of view)
Get the source now
There’s already quite a bit of source code (everything I’m currently planning on writing about, which is almost inevitably less than I’ll actually end up writing about) on the Google Code Eduasync project. This takes a different approach from Edulinq – instead of just a couple of projects (production and tests, basically) I’ve got a separate project for each topic I want to talk about, with pretty minimal code for that topic. The reason for this is to show the evolution of the code – starting off with almost nothing, and progressing until we’ve got an implementation which achieves at least the bare bones important bits of an async system.
I’ve numbered the projects within the solution, although the assemblies themselves don’t have the same numbers. They all use a default namespace of just Eduasync, and they don’t refer to each other. Each is meant to be self-contained – oh, and there are no references to AsyncCtpLibrary.dll. The whole point is to reimplement that library :) Of course, you’ll still need the CTP installed to get the compiler changes.
The Google Code repository will also contain the blog posts eventually, including any diagrams I need to create (such as the one in a minute).
The three blocks and two boundaries
One of the things I’ve found important to think about in async is the various parts involved. I’ve ended up with a mental model like this:
The bits in blue and red are the ones we’re focusing on here: the contents of the async method, and the boundaries between that and the code that calls it, and the tasks (or other awaitable types) that it awaits.
For most of this series we’re not really going to care much about what the caller does with the result, or how the awaitable object behaves other than in terms of the methods and properties used by the C# 5 compiler. I’ll discuss the flexibility afforded though – and how it doesn’t extend to the “caller/async” boundary, only the “async/awaitable” boundary.
Just to give an explicit example of all of this, here’s a simple little program to asynchronously determine the size of the Stack Overflow home page:
using System;
using System.Net;
using System.Threading.Tasks;

class Program
{
    // Caller (block 1)
    static void Main()
    {
        Task<int> sizeTask = DownloadSizeAsync("");

        Console.WriteLine("In Main, after async method call…");
        Console.WriteLine("Size: {0}", sizeTask.Result);
    }

    // Async method (block 2)
    static async Task<int> DownloadSizeAsync(string url)
    {
        var client = new WebClient();

        // Awaitable (block 3)
        var awaitable = client.DownloadDataTaskAsync(url);

        Console.WriteLine("Starting await…");
        byte[] data = await awaitable;
        Console.WriteLine("Finished awaiting…");

        return data.Length;
    }
}
The comments should make it reasonably clear what the blocks in the diagram mean. It’s not ideal in that the first two blocks are basically methods, whereas the third block is an object – but I’ve found that it still makes sense when we’re thinking about the interactions involved at the boundaries. Notably:
- How does the async method create an appropriate value to return to the caller?
- How does the async method interact with the awaitable when it hits an “await” expression?
We can (and we’re going to) look at these boundaries very separately. We’ll start off with the first bullet, in part two, which will hopefully follow in the next few days.
17 thoughts on “Eduasync part 1: introduction”
Looking forward to the rest of this series. I was thinking about asking if you could do some of the new c#5 async stuff on your c#4 tekpub series. No need to now.
common typo/mistake :)
…and this blog series will partly be ***complimentary*** to those talks…
While I’m sure many of us will be complimentary to your talks, I’m guess you meant the blog series will be complementary instead :)
@James: Doh! Of course – fixed, thanks :)
How about including an implementation of DownloadDataTaskAsync() method using the TaskCompletionSource type?
Even though mainstream asynch lib adoption will mostly be consumption side, the facelift at production side should not go under the radar.
Thanks for this. Looking forward to the rest of the series.
How is this different from Reactive Framework?
I’m starting on this series a little late, but I’m interested to see how all this compares to TPL.
@Jim: It’s a *companion* to TPL, rather than an alternative. It makes TPL easier to work with.
There is a broken imagem after “The three blocks and two boundaries” topic.
Thanks for this posts.
That’s odd – it looks fine to me. The URL is – what happens if you open that in a new tab?
If I open that link in a new tab, I get this text: “Error 0004. Unable to load the image.”
Hmmm, it does for me now too – will have to fix it when I get time, although I don’t know when that’ll be…
Informative article. Thank you. The await expression is where the asynchronous operations begins, yet this forms the continuation. The code split at this point returns an incomplete Task to the calling method, yet the integer return type is returned by a stub of generated code that sets the Result property of the Task. Yet the Task is still in progress, and promises a future result. Yet it can be queried, set up with a continuation, or synchronized into its initial context.Yet if the IsCompleted property is true, then the next continuation can go on. So what am I saying? This functionality derives from compiler inference, and not from a library. Yet one would wonder if the calling method loops until the Task is complete. If it attempts to find out on its own, it would obviously block the code flow. Your articles indicate that you must be very good at what you do. No runs, drips, callbacks, and no rebooting. Blocked threads enable allocated thread resources to sit idle. CPU clock cycles must exceed a safe number to clean up what sat during a blocking time.
Dear Jon, the dead link to the image in this post still remains dead !
Dear Jon, I just noticed that the missing image and the code for eduasync are available on github. Thanks !
Yup – I’ve edited the post to refer to the github repo :) | https://codeblog.jonskeet.uk/2011/05/08/eduasync-part-1-introduction/?like_comment=27949&_wpnonce=2e9a87ace4 | CC-MAIN-2021-49 | refinedweb | 1,516 | 71.04 |
NAME | SYNOPSIS | DESCRIPTION | USAGE | ATTRIBUTES | SEE ALSO | NOTES
#include <stdio.h>

char *tmpnam(char *s);

char *tmpnam_r(char *s);

char *tempnam(const char *dir, const char *pfx);
These functions generate file names that can safely be used for a temporary file. If s is NULL, tmpnam() leaves its result in an internal static area and returns a pointer to that area. The next call to tmpnam() will destroy the contents of the area. If s is not NULL, it is assumed to be the address of an array of at least L_tmpnam bytes, where L_tmpnam is a constant defined in <stdio.h>; tmpnam() places its result in that array and returns s.
The tmpnam_r() function has the same functionality as tmpnam() except that if s is a null pointer, the function returns NULL. This interface is as proposed in the POSIX.4a Draft #6 document, and is subject to change to be compliant with the standard when it is accepted.
The tempnam() function uses malloc(3C) to allocate space for the constructed file name, and returns a pointer to this area. Any pointer value returned from tempnam() may serve as an argument to free(3C) (see malloc(3C)). If tempnam() cannot return the expected result for any reason (for example, malloc(3C) failed), or if none of the above-mentioned attempts to find an appropriate directory was successful, a null pointer is returned. This function fails if there is not enough space.
See attributes(5) for descriptions of the following attributes:
creat(2) , unlink(2) , fopen(3C) , free(3C) , malloc(3C) , mktemp(3C) , tmpfile(3C) , attributes(5)
The tmpnam() function is unsafe in multithreaded applications. The tempnam() function is safe in multithreaded applications and should be used instead.
When compiling multithreaded applications, the _REENTRANT flag must be defined on the compile line. This flag should be used only with multithreaded applications.
(Source: https://docs.oracle.com/cd/E19455-01/806-0627/6j9vhfn96/index.html)
24 June 2011 19:49 [Source: ICB]
US, FELIZA MIRASOL profile last published JANUARY 14, 2008
Correction: The ICIS Chemical Business story headlined
USES
Bisphenol A (BPA) is mainly used in the production of polycarbonate (PC). Its second-biggest use is in epoxy resins. Other applications include flame retardants (mainly tetrabromobisphenol-A), unsaturated polyester resins and polyacrylate, polyetherimide and polysulphone resins.
SUPPLY/DEMAND
Global BPA market growth is expected to be stable at 5%/year. The Asian markets are predicted to grow at 11%/year, with China growing by over 13%/year. In contrast, the US market is expected to be flat in the 2010-2012 period, with an expected growth rate of 1-2% in 2012-2015, according to estimates by German trader Mitsui & Co. Deutschland.
BPA growth over the last few years has been driven primarily by increasing demand for PC resins used in the manufacture of optical media, but growth in this sector has slowed significantly. Growth is also expected to decline as the use of optical media declines and gives way to the downloading of music and films directly from the internet, as well as competing with other increasingly popular technologies.
In comparison, automotive glazing offers potentially strong growth opportunities for BPA/PC producers. PC resins are used in place of traditional materials such as metal and glass in automotive components. Glazing and sheet products can also be used in architectural, security and transportation applications.
On March 28, Bayer MaterialScience lifted the force majeure on its North American PC and BPA production. The Germany-headquartered producer had declared force majeure on February 2, when production issues caused by freezing weather arose at its 260,000 tonne/year facility in Baytown, Texas.
PRICES
US BPA prices are expected to fall on improved supply because phenol-to-BPA producers have been running at high rates after being low on feedstocks during the first quarter. Asia prices have already tumbled.
The lifting of Bayer's force majeure is expected to weaken momentum for several price increase announcements in the US PC market, where the company sought a price increase of 25 cents/lb ($551/tonne, €391/tonne) effective March 31, or as contracts allow. In addition, US-based Styron sought a PC price increase of 22 cents/lb, effective April 1, or as contracts allow, while US-headquartered SABIC Innovative Plastics sought a price increase of 22 cents/lb, effective April 4, or as contracts allow.
On the downstream epoxy resin side, there were several price increase nominations set for June 1 that pushed US epoxy resin prices to or above the $1.90/lb ($4,189/tonne, €2,890/tonne) delivered (DEL) North America level, as assessed by ICIS.
US epoxy resins buyers and importers used weaker feedstock prices initially to resist June price increases of 10 cents/lb. But US epoxy resin producers were said to be confident that buyer resistance would weaken as inventories get used up.
TECHNOLOGY
BPA is produced by the condensation of phenol and acetone in the presence of an acid catalyst (hydrogen chloride) and usually a promoter such as methyl mercaptan. Cation exchange resins can replace the acid catalyst in newer plants.
After the reaction and recovery of acid and phenol, the BPA is washed with water, neutralized with calcium hydroxide and distilled under vacuum. Newer processes employ distillation and extractive crystallization under pressure to purify the BPA.
OUTLOOK
Long-term growth for the epoxy resin markets is expected to be 5%/year globally. However, concerns over health issues with BPA have attracted the attention of environmentalists.
In response, US industry trade group The American Chemistry Council (ACC) has claimed that fears about BPA are overblown, citing research that the levels ingested by most people are far too low to have adverse effects.
Despite this, however, some North American retailers have removed baby bottles and water bottles containing PC from the shelves, while Canada has announced plans to ban PC baby bottles. Yet the impact on BPA/PC demand is expected to be small, as packaging applications in total only account for around 3% of overall PC demand.

(Source: http://www.icis.com/Articles/2011/06/27/9472256/us-chemical-profile-bisphenol-a.html)
Reduce-and-Broadcast (RB) version of DistTsqr.
#include <Tsqr_DistTsqrRB.hpp>
Reduce-and-Broadcast (RB) version of DistTsqr.
Reduce-and-Broadcast (RB) version of DistTsqr, which factors a vertical stack of n by n R factors, one per MPI process. Only the final R factor is broadcast; the implicit Q factor data stay on the MPI process where they are computed.
Definition at line 29 of file Tsqr_DistTsqrRB.hpp.
Constructor
Definition at line 43 of file Tsqr_DistTsqrRB.hpp.
Fill in the timings vector with cumulative timings from factorExplicit(). The vector gets resized to fit all the timings.
Definition at line 56 of file Tsqr_DistTsqrRB.hpp.
Fill in the labels vector with the string labels for the timings from factorExplicit(). The vector gets resized to fit all the labels.
Definition at line 72 of file Tsqr_DistTsqrRB.hpp.
Whether or not all diagonal entries of the R factor computed by the QR factorization are guaranteed to be nonnegative.
Definition at line 86 of file Tsqr_DistTsqrRB.hpp.
Internode TSQR with explicit Q factor.
Definition at line 107 of file Tsqr_DistTsqrRB.hpp. | http://trilinos.sandia.gov/packages/docs/r10.6/packages/anasazi/doc/html/classTSQR_1_1DistTsqrRB.html | CC-MAIN-2014-35 | refinedweb | 178 | 52.56 |
Michael J Kellen writes:
> The following patch to lilo 21.5 using the 2.4.0-test7 kernel allows
> me to boot a machine using no non-lvm partitions in the lilo.conf. I'd
> appreciate some feedback from other testers ...

Forgive me for being ignorant, but I don't see how this works... It is something that I've been wanting for a long time, but I always thought it would be much more complicated...

> + #ifdef HAVE_LVM
> + case MAJOR_LVM:
> +     geo->device = 0x80+(MINOR(device) >> 6)+(MAJOR(device)==MAJOR_LVM ?
> +         0 : last_dev(MAJOR_LVM,64));
> +     if (ioctl(fd,HDIO_GETGEO,&hdprm) < 0)
> +         die("geo_query_dev HDIO_GETGEO (dev 0x%04x): %s",device,
> +             strerror(errno));
> +     geo->heads = hdprm.heads;
> +     geo->cylinders = hdprm.cylinders;
> +     geo->sectors = hdprm.sectors;
> +     geo->start = hdprm.start;
> +     break;
> + #endif

This is the only part of the patch that really does anything, but I don't _think_ it is enough, as bmap() on the kernel file will only return LVM relative block numbers. Does LILO already do recursive bmaps on blocks for each layered device? Am I totally out-to-lunch in how LILO works?

Also, have you thought at all about restrictions for keeping the kernel all on the same physical device (definitely required for booting)? It may also be that LILO checks this, but it is currently unlikely. I'm not advocating that we put extensive checks on the boot LV (admins can suffer if they do it wrong), but it should at least warn you if the kernel is on two different physical devices, like it warns you when the kernel is past 1024 cylinders...

Don't get me wrong - I think this is GREAT, I just want to know a bit more about it... I will give it a test as soon as I can find the LILO source.

Cheers, Andreas
Ralayr.io - Getting Started With DragonBoard 410c
Introduction: Ralayr.io - Getting Started With DragonBoard 410c
This guide will explain you how to connect your devices to the relayr.io platform.
First, log in to the relayr platform, click on Devices in the left-hand tab, then click the Add Device button!
In the next screen choose the model of the device you want to add. You can pick a model by relayr, by community or by me, or leave your device without a model, but you can't send commands if your device doesn't have a model configured.
Step 1: Choose a Code Language and Run the Code
This screen will ask you what code language your device is using. To run the code on the DragonBoard, choose Python and test the firmware.
On the board, create a folder named relayr, and in this folder create a file named relayr.py and copy the code into this file.
In order to generate fake data, import the random package into the code and send it to the platform under the meaning someMeaning described in this model, like this:
import random

# json, time, client, credentials and publishing_period come from the
# generated template code.
while True:
    client.loop()
    temperature = random.randrange(0, 101, 2)
    # publish data
    message = { 'meaning': 'someMeaning', 'value': temperature }
    client.publish(credentials['topic'] + 'data', json.dumps(message))
    time.sleep(publishing_period / 1000.)
All the messages coming from the board are shown in the dialog box of the test screen.
PS: all code examples can be found in:
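The loop in Step 1 can also be tried without a live relayr connection. The sketch below is an assumption-laden variant: StubClient, the placeholder topic and the short publishing_period stand in for the objects the generated template code provides, purely so the published payload can be inspected:

```python
import json
import random
import time

publishing_period = 10  # ms here so the demo runs quickly; the template uses longer

class StubClient:
    """Stand-in for the MQTT client the relayr template creates."""
    def __init__(self):
        self.published = []
    def loop(self):
        pass
    def publish(self, topic, payload):
        self.published.append((topic, payload))

client = StubClient()
credentials = {'topic': '/v1/<device-id>/'}  # placeholder topic prefix

for _ in range(3):  # the template loops forever; three iterations suffice here
    client.loop()
    temperature = random.randrange(0, 101, 2)  # fake reading: even number 0-100
    message = {'meaning': 'someMeaning', 'value': temperature}
    client.publish(credentials['topic'] + 'data', json.dumps(message))
    time.sleep(publishing_period / 1000.)
```

Swapping StubClient for the template's real client should leave the rest of the loop unchanged.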
Step 2: Create New Device Model (optional)
You can create a new device model and configure variables to receive data and send commands.
Go to the left-hand tab and click on Model, select the By me tab and create your own model.
In the model creation screen, add the readings and commands that you want to send and receive from the board, and save!
Follow the previous steps again and connect your board!
Add sensors and actuators to the DragonBoard and connect them remotely.
Have fun! | http://www.instructables.com/id/Ralayrio-Getting-Started-With-DragonBoard-410c/ | CC-MAIN-2017-43 | refinedweb | 321 | 70.23 |
I want to add a new organization, or a new peer in an already existing organization, dynamically. I am not sure how to do this. Please help.
You first have to use the configtxlator tool to read the genesis block and modify its contents, then submit it as a new transaction that updates the network/channel configuration.
The configtxlator tool simplifies configuration tasks in Hyperledger Fabric blockchain networks. This tool easily converts between different equivalent data representations/formats.
Make sure you have version 1.1.0-preview of Hyperledger Fabric installed, since this version introduces the peer channel signconfigtx command for collecting multiple signatures before submitting configuration updates.
IBM has a step-by-step guide showing how to use this tool, as adding a new Org section in config JSON has additional steps involved.
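The org-addition step itself is mostly JSON surgery on the decoded config. Schematically, with the structure heavily simplified (the real decoded block nests much deeper, and the new org's material would come from configtxgen -printOrg):

```python
import json

# Heavily simplified stand-in for the JSON that configtxlator proto_decode emits.
config = {
    "channel_group": {
        "groups": {
            "Application": {
                "groups": {
                    "Org1MSP": {"values": {}},
                    "Org2MSP": {"values": {}},
                }
            }
        }
    }
}

# Placeholder for the new org's definition (MSP material, policies, ...).
org3 = {"values": {"MSP": {}}, "policies": {}}

updated = json.loads(json.dumps(config))  # deep copy via JSON round-trip
updated["channel_group"]["groups"]["Application"]["groups"]["Org3MSP"] = org3
# configtxlator compute_update would then diff `config` against `updated`
# to produce the config update that gets signed and submitted.
```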
I know it is a bit late to answer this question, but I think this would help. Go to this link and in the PDF go to Chapter 11 (page 59). There is a good explanation on how to do it.
Follow the below mentioned steps to add an organization:
Download the Hyperledger fabric samples.
$ git clone -b master
$ cd fabric-samples
Download platform specific binaries
$ curl -sSL | bash -s 1.1.0
Bring the network down:
$ cd fabric-samples/first-network
$ ./byfn.sh -m down
Generate artifacts required and then bring the network up:
$ ./byfn.sh -m generate
$ ./byfn.sh -m up
Bring the new network up:
$ ./eyfn.sh up
That's all
Please refer to the official documentation. There's a proper step-by-step guide to add an organization to the Fabric. Here's the link:
#include <rte_common.h>
#include <rte_cryptodev.h>
#include <rte_security.h>
Defines API to manage IPsec Security Association (SA) objects.
Definition in file rte_ipsec_sa.h.
Indicates that SA will(/will not) need an 'atomic' access to sequence number and replay window. 'atomic' here means: functions:
Definition at line 70 of file rte_ipsec_sa.h.
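To picture what the sequence-number/replay-window state is protecting, here is a toy sliding-window anti-replay check in the spirit of RFC 4303. This is only an illustration, not DPDK's implementation:

```python
class ReplayWindow:
    """Toy ESP anti-replay window: accept new/unseen sequence numbers,
    reject replays and packets older than the window."""
    def __init__(self, size=64):
        self.size = size
        self.top = 0      # highest sequence number accepted so far
        self.bitmap = 0   # bit i set => sequence number (top - i) was seen

    def check_and_update(self, seq):
        if seq == 0:
            return False                      # ESP sequence numbers start at 1
        if seq > self.top:                    # new highest: slide the window
            shift = seq - self.top
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
            self.top = seq
            return True
        offset = self.top - seq
        if offset >= self.size:
            return False                      # older than the window: reject
        if (self.bitmap >> offset) & 1:
            return False                      # already seen: replay
        self.bitmap |= 1 << offset
        return True
```

Updating `top` and `bitmap` is exactly the read-modify-write that needs to be made safe when multiple threads touch the same SA, which is what the flag above is about.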
SA type is a 64-bit value that contains the following information:
Definition at line 84 of file rte_ipsec_sa.h.
get type of given SA
Calculate required SA size based on provided input parameters.
initialise SA based on provided input parameters.
cleanup SA | https://doc.dpdk.org/api-20.11/rte__ipsec__sa_8h.html | CC-MAIN-2021-39 | refinedweb | 102 | 53.47 |
gsasl_stringprep_saslprep - API function
#include <gsasl.h> char * gsasl_stringprep_saslprep(const char * in, int * stringprep_rc);
const char * in
    input ASCII or UTF-8 string with data to prepare according to SASLprep.

int * stringprep_rc
    pointer to output variable with stringprep error code, or NULL to indicate that you don't care about it.
Process a Unicode string for comparison, according to the "SASLprep" stringprep profile. This function is intended to be used by Simple Authentication and Security Layer (SASL) mechanisms (such as PLAIN, CRAM-MD5, and DIGEST-MD5) as well as other protocols exchanging user names and/or passwords.
Return a newly allocated string that is the "SASLprep" processed form of the input string, or NULL on error, in which case stringprep_rc contain the stringprep library error code.
Use gsasl_saslprep(). | http://huge-man-linux.net/man3/gsasl_stringprep_saslprep.html | CC-MAIN-2017-17 | refinedweb | 127 | 51.18 |
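To get a feel for what the SASLprep profile does to a string, here is a grossly simplified illustration. It covers only a small map-to-nothing set, the non-ASCII space mapping and NFKC normalization, and omits the prohibited-character and bidi checks, so it is not a conformant implementation; use gsasl_saslprep() or another vetted library in practice:

```python
import unicodedata

# Tiny subsets of the RFC 3454 tables that RFC 4013 (SASLprep) references.
MAP_TO_NOTHING = {"\u00ad", "\u200b", "\ufeff"}    # e.g. soft hyphen, zero-width space
NON_ASCII_SPACES = {"\u00a0", "\u2002", "\u3000"}  # e.g. no-break space, en space

def toy_saslprep(s):
    s = "".join(ch for ch in s if ch not in MAP_TO_NOTHING)         # map to nothing
    s = "".join(" " if ch in NON_ASCII_SPACES else ch for ch in s)  # spaces -> U+0020
    return unicodedata.normalize("NFKC", s)                         # KC normalization

print(toy_saslprep("I\u00adX"))         # -> "IX"
print(toy_saslprep("user\u00a0name"))   # -> "user name"
```

The point of the profile is that two users typing visually identical names end up with byte-identical strings before comparison.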
Office plant monitoring
There is probably something wrong with my forks
They are (or at least should be) completely passive, so there is nothing that should be able to go wrong.
Could you post a photo of your connections?
If you disconnect the fork from the Arduino and measure the resistance between the two connectors using a multimeter (in both directions), what resistance do you get?
Did I mention that I upgraded the sketch to 2.0? Could that be a problem?
Not if you did it right
I have no experience with 2.0 unfortunately.
- Martin Tellblom
I'm at the office right now, I'll measure them when I get home.
- Martin Tellblom
@mfalkvidd said:
They are (or at least should be) completely passive, so there is nothing that should be able to go wrong.
Could you post a photo of your connections?
Here is my setup: using the Easy/Newbie board that @sundberg84 created with a 5V Arduino Pro Mini 16MHz, with A4 and A5 for the sensor.
Regarding the resistance: when it's out of the soil there is no connection at all. If I put it in the soil there's a strange thing happening.
The resistance starts around 10K and then slowly increases (no matter if I switch the direction): the longer I test, the higher the value.
If I switch the direction the resistance starts from where it just was and keeps rising. If I let the sensor pause for a while (5 minutes or so) it's back to around 10K.
Maybe I test it too often and don't let the moisture get back into the soil?
Strange. The setup looks good. The soil in my plants doesn't behave like that.
- Martin Tellblom
It seems like @sundberg84 nailed it
The battery metering I use with his PCB is this:
#if defined(__AVR_ATmega2560__)
analogReference(INTERNAL1V1);
#else
analogReference(INTERNAL);
#endif
And that reflects on all analog channels and breaks the function.
After I removed the battery sensing code it works great
You should check out this site (use Google site translation):
There's also the sensor from here:
Giesomat
So my Bonsai tree humidity node just celebrated 1 year on battery!
During the last year, the gateway has received 76,164 updates on humidity level (and an additional 13,996 updates on voltage level).
The battery level has gone from 3.187V to 3.108V, which means an average drop of 0.0066V per month. Assuming I let it go down to 2.34V (limit for 8MHz according to the datasheet) and that the voltage drop is linear, I should get (3.187-2.34)/0.0066 = 128 months = ~10.7 years. There are several error sources in this calculation, but it looks like battery life will be quite good.
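Spelled out, using the post's rounded 0.0066 V/month figure:

```python
# Back-of-the-envelope battery life estimate from the post above.
v_start = 3.187            # V at deployment
v_cutoff = 2.34            # V, 8 MHz ATmega limit per the datasheet
v_drop_per_month = 0.0066  # V/month, as estimated after one year

months = (v_start - v_cutoff) / v_drop_per_month  # ~128 months
years = months / 12                               # ~10.7 years
print(round(months), round(years, 1))
```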
Here are the voltage and humidity graphs for the last year
- Nicklas Starkel
@mfalkvidd that is really awesome!
Did you set any Brown Out Fuses?
From what I gather the arduino pro mini will stop working at 2.8V and you have to set other fuses for it to reach the lower voltages.
I did a quick test with one flowerpot and from what I see my power consumption will be around 0.2V per year. It's more than double of what you're getting. But still low enough as changing 2 AA over 1-2 years is OK in my book if I'm not changing the fuses
EDIT: Actually, calculating with your numbers of updates I get around 0.6V consumption per year for my setup and that is far from your low power consumption.
I think I will have to go through this thread once more to try to find why the difference is so big and what you have done. I see 2 potential culprits. One is the arduino mini pro (china clone without power LED and voltage regulator) or maybe it could be my mysensors 2.0 version vs your 1.6.
@Nicklas-Starkel if I remember correctly I disabled bod completely.
@mfalkvidd, @Nicklas-Starkel
similar problem here. I have a bare ATmega328P, running @ 8 MHz internal oscillator, no LED, BOD disabled (if enabled, the ADC is also running during sleep, which means additional power consumption), nothing else connected that could draw additional power.
I use mfalkvidd's sketch (BTW, thanks a lot for it !), but converted to mysensors 2.0. I see a voltage drop way higher than mfalkvidd, although I don't use a china clone ;-).
So it seems, that the higher power consumption may be due to mysensors V2 ? I cannot imagine a reason for that, because why should relatively low level functions like power save routines be different in 2.0 ?
Perhaps hek can comment ?
@mfalkvidd
I think that in a previous post you mentioned that you are using mysensors V1.6, right? Where did you get it from? On the mysensors pages I only found references and links to V1.4 and V1.5, not V1.6.
I would like to use a setup as close as possible to yours to track down the problem. Your very low power consumption is really amazing and I would like to come as close as possibe to int in my case. I have a 'clone' of your hardware setup described in openhardware.io - minus the LED. So in my case, power consumption should be even lower than yours
- korttoma
@joshmosh I think that version 1.5.4 or 1.5.3 was called 1.6 while it was under development but no version 1.6 was ever released.
@korttoma
OK, thanks for the hint. I will try if I can get this version work with the Arduino version I am using.
@korttoma
OK, after some fiddling I was able to exchange mysensors V2.0 with V1.5.4 and to compile mfalkvidd's sketch. I will adapt it now to my hardware (removing references to the LED etc) and give it a twirl. Please be patient, since I need to run it at least a couple of days to see if there is a difference in power consumption.
Very interesting stuff
- Nicklas Starkel
@joshmosh, actually there are so many variables that it is impossible to check, and the term "mileage may vary" is spot on.
All batteries are not the same.
Temperature.
Arduino.
Time between readings (some batteries prefer small current over time and some handle bursts better).
etc
I've set it to report moisture every 30 seconds and obviously the voltage report from an Arduino is not 100% accurate. With that said, it is probably not 100% consistent either, as it could differ between readings as well.
The only real way to tell is know starting point via multimeter and then check after a month what has happened.
For me, doing a check, I've come down to 0.08V per 76000 readings.
I did the estimation based on about 8500 readings and extrapolated it to 76000 reports based on @mfalkvidd information.
For me, this is enough and I'm sure my sensor will survive for a long time and can now program it to take more reasonable moisture readings. Example, 1reading/h for the plots that dries the fastest (in direct sunlight) and less when they are in the shade.
In the spring the balcony will have several sensors with automatic watering
@Nicklas-Starkel
After some reading and thinking, I came to a very similar conclusion. There are tons of parameter which will influence the mesurement.
I am planning to use four or five moisture probes distributed at various places in my garden for irrigation automatisation. I guess my lawn will not suffer if I start watering at a reading of 41 % instead of 44 %
During the upcoming winter months there is enough time to gather empirical data about the behaviour of my probes.
Anyway, it's a fun project ...
Well I have some issues with battery-power.
When I use the USB cable to power the sensor up, everything is working fine. But when I use 2x1.5V batteries it does not show up in Domoticz. I'm not sure if I have connected the battery correctly. It's on the VIN and GND and nothing more. Is it correct?
@cattoo which Arduino are you using? I'm asking because the Pro Mini doesn't have any pin called VIN.
@mfalkvidd
It's an Arduino Nano (clone)
Why not using this sensor?
Giesomat
It didn't cost much and work like an angel.
And if the frequency is too high, or you need another logic level, you can use this one:
frequency divider and level shifter
Don't have any troubles with this.
I only count the pulses. That's all.
@NetRap You have already stated the same 3 times earlier in this thread. It looks like spam/advertising. What's your point?
@cattoo VIN on the Nano is used when powering with higher than 5V.
The Nano is not suitable for battery power. I recommend that you use a Pro Mini instead. See
I don't want to spam.
The point is, that the conducting based sensors are poison.
The electrolytic processes destroy the sensor and release ions into the soil!!!
Look at this site:
All Technologies
All possible technologies are compared there.
Giesomat wins.
@mfalkvidd said:
Ah okay, well then I'll use them with proper power and buy new Pro Minis instead. Thanks!
- TON RIJNAARD
Hello,
Is there a sketch for MySensors 2.0?
I get an error on the MySensors gateway;
Ton
Just to give some feedback on power consumption: I have switched back to mysensors V1.5.4. This was roughly one month ago. I take measurements every two hours. Battery voltage hasn't changed a bit since then. So my guess is that, for whatever reason, mysensors V2.0 seems to produce more power-hungry code.
Whatever ...
I am happy now and will stick with V 1.5.4
- Nicklas Starkel
@joshmosh I use MySensors V2 and took a sample every 30 seconds over a few days simulating almost 28000 transmits.
I've come down to roughly a 0.08V decrease for all these transmits, which is very close to what @mfalkvidd has.
@TON-RIJNAARD , here is my sketch! I think I use signing as well, so if you don't use it just remove
// Enable debug prints to serial monitor
#define MY_DEBUG

// The node ID
#define MY_NODE_ID 7 // 250 is test

// Enable and select radio type attached and also set parent ID
#define MY_RADIO_NRF24
#define MY_PARENT_NODE_ID 0
#define MY_PARENT_NODE_IS_STATIC

// Signing, make sure the arduino is prepped for signing before!
#define MY_SIGNING_SOFT
#define MY_SIGNING_SOFT_RANDOMSEED_PIN 7
#define MY_SIGNING_REQUEST_SIGNATURES

#include <SPI.h>
#include <MySensors.h>

#define round(x) ((x)>=0?(long)((x)+0.5):(long)((x)-0.5))
#define N_ELEMENTS(array) (sizeof(array)/sizeof((array)[0]))

#define CHILD_ID_MOISTURE 0
#define CHILD_ID_BATTERY 1
#define SLEEP_TIME 1800000 // Sleep time between reads (in milliseconds)
#define STABILIZATION_TIME 1000 // Let the sensor stabilize before reading
#define BATTERY_FULL 3000 // 3,000 millivolts for 2xAA
#define BATTERY_ZERO 2800 // 1,900 millivolts (1.9V, limit for nrf24l01 without step-up. 2.8V limit for Atmega328 without BOD disabled)
const int SENSOR_ANALOG_PINS[] = {A0, A1}; // Sensor is connected to these two pins. Avoid A3 if using ATSHA204. A6 and A7 cannot be used because they don't have pullups.

MyMessage msg(CHILD_ID_MOISTURE, V_HUM);
MyMessage voltage_msg(CHILD_ID_BATTERY, V_VOLTAGE);
long oldvoltage = 0;
byte direction = 0;
int oldMoistureLevel = -1;

void setup()
{
  sendSketchInfo("Plant moisture w bat", "1.5");
  present(CHILD_ID_MOISTURE, S_HUM);
  delay(250);
  present(CHILD_ID_BATTERY, S_CUSTOM);
  for (int i = 0; i < N_ELEMENTS(SENSOR_ANALOG_PINS); i++) {
    pinMode(SENSOR_ANALOG_PINS[i], OUTPUT);
    digitalWrite(SENSOR_ANALOG_PINS[i], LOW);
  }
}

void loop()
{
  // measure moistureLevel from SENSOR_ANALOG_PINS here (measurement code omitted)
  send(msg.set((moistureLevel + oldMoistureLevel + 0.5) / 2 / 10.23, 1));
  oldMoistureLevel = moistureLevel;
  long voltage = readVcc();
  if (oldvoltage != voltage) { // Only send battery information if voltage has changed, to conserve battery.
    send(voltage_msg.set(voltage / 1000.0, 3)); // readVcc returns millivolts.
  }
}
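For reference, the percentage the sketch reports comes from a small bit of arithmetic: the last two raw ADC samples (0-1023) are averaged, which the thread suggests cancels the ripple from the two internal pull-ups, and the result is scaled onto 0-100 by dividing by 10.23:

```python
def to_percent(moisture_level, old_moisture_level):
    """Same formula as send(msg.set(...)) in the sketch above."""
    return (moisture_level + old_moisture_level + 0.5) / 2 / 10.23

print(round(to_percent(1023, 1023), 1))  # saturated readings -> ~100.0
print(round(to_percent(0, 0), 1))        # open circuit -> ~0.0
```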
@Nicklas-Starkel
Strange ...
But since I am not missing / using any of the advanced features offered by V 2.0, I don't see a problem (at least for now) to stick with V 1.5.4
In any case it is amazing what you can do with low-power battery-operated sensors.
Hi,
in the latest MySensors lib version 2.1.0, I see a new sensor type: S_MOISTURE.
Is it more logical to use this new type instead of S_HUM,
and so use V_LEVEL instead of V_HUM?
Thanks for any response
If I use more than one sensor for this, it will still only monitor one plant, right? Or can I, say, monitor 3-4 plants with one node?
@Stric interesting (and strange) effect. I wonder what causes it. Maybe the wait time beween turning on the pin and doing the measurement is too short so the level doesn't settle completely?
@Tetnobic I tried this and ended up with a soil sensor that didn't display % but something called cb?
But yeah it works.
I made a node with 1 fork and it works great!
Any tips on how I can use more than one fork to monitor, say, 3-4 plants?
@meanmrgreen See my post a few up, I'm seeing some weird effects when connecting to more than one plant..
Does reverse polarity really help with corrosion then? Maybe it has something to do with that.
Small problem with my plant sensor. Used only the fork and the reverse polarity.
But the sensor is reporting around 70-80% all the time, although the plant is pretty dry.
When I remove the sensor from the pot it shows 0%?
Can you calibrate the fork somehow?
@meanmrgreen I calibrate by putting my finger in the soil. When the soil is too dry I note the current value and set a notification to trigger next time it reaches that level.
Ok
But is it normal to just drop by a few percent in a few days' time?
I'm going to try to do this with the slim node and a CR2032 battery. Which cap do you recommend using to help the battery?
@meanmrgreen said:
But is it normal to just drop by a few percent in a few days' time?
It depends on the soil type, the plant, the temperature, if the plant gets direct sunlight and probably some more factors.
I need to check my node. It constantly shows 80-90%. I have switched forks and plants, same number. If I pull it out of the dirt it goes to 0%.
Using only the fork, no board in the middle, with the alternating-current sketch.
- Dennis van der Wolf
Re: Office plant monitoring
Hello, I have built the sensor from the building page. I directly connected the fork to pins D6, D7. When I view the measurement I see strange values. Anyone have an idea what I did wrong?
09.02.2017 19:22:23 Multi Sensor (multi2) SoilMoistPercentageSensor 3 % 09.02.2017 19:16:51 Multi Sensor (multi2) SoilMoistPercentageSensor 0 % 09.02.2017 19:11:20 Multi Sensor (multi2) SoilMoistPercentageSensor 3 % 09.02.2017 19:05:48 Multi Sensor (multi2) SoilMoistPercentageSensor 5 % 09.02.2017 19:00:17 Multi Sensor (multi2) SoilMoistPercentageSensor -1 % 09.02.2017 18:54:46 Multi Sensor (multi2) SoilMoistPercentageSensor 8 % 09.02.2017 18:48:41 Multi Sensor (multi2) SoilMoistPercentageSensor 5 % 09.02.2017 18:43:10 Multi Sensor (multi2) SoilMoistPercentageSensor 6 % 09.02.2017 18:37:38 Multi Sensor (multi2) SoilMoistPercentageSensor 1 % 09.02.2017 18:32:07 Multi Sensor (multi2) SoilMoistPercentageSensor -1 % 09.02.2017 18:26:36 Multi Sensor (multi2) SoilMoistPercentageSensor -1 % 09.02.2017 18:21:04 Multi Sensor (multi2) SoilMoistPercentageSensor 4 % 09.02.2017 18:15:33 Multi Sensor (multi2) SoilMoistPercentageSensor 4 % 09.02.2017 18:10:02 Multi Sensor (multi2) SoilMoistPercentageSensor 6 % 09.02.2017 18:03:57 Multi Sensor (multi2) SoilMoistPercentageSensor 6 % 09.02.2017 17:58:26 Multi Sensor (multi2) SoilMoistPercentageSensor 0 % 09.02.2017 17:52:54 Multi Sensor (multi2) SoilMoistPercentageSensor 6 % 09.02.2017 17:46:50 Multi Sensor (multi2) SoilMoistPercentageSensor 5 % 09.02.2017 17:41:19 Multi Sensor (multi2) SoilMoistPercentageSensor -1 % 09.02.2017 17:34:41 Multi Sensor (multi2) SoilMoistPercentageSensor 6 % 09.02.2017 17:29:10 Multi Sensor (multi2) SoilMoistPercentageSensor -3 % 09.02.2017 17:23:39 Multi Sensor (multi2) SoilMoistPercentageSensor -11 %
- Jan Gatzke
Which sketch did you use? The one from this page needs the fork connected to analog input pins Ax.
- Dennis van der Wolf
@Jan-Gatzke I have connected the fork to A0 and A1 of my Arduino Nano. Now I have this result:
10.02.2017 12:17:35 Multi Sensor (multi2) SoilMoistPercentageSensor -182 % 10.02.2017 12:12:04 Multi Sensor (multi2) SoilMoistPercentageSensor -455 % 10.02.2017 12:06:33 Multi Sensor (multi2) SoilMoistPercentageSensor -317 % 10.02.2017 12:01:02 Multi Sensor (multi2) SoilMoistPercentageSensor 255 % 10.02.2017 11:55:31 Multi Sensor (multi2) SoilMoistPercentageSensor -547 % 10.02.2017 11:49:26 Multi Sensor (multi2) SoilMoistPercentageSensor 1169 % 10.02.2017 11:43:55 Multi Sensor (multi2) SoilMoistPercentageSensor 40 % 10.02.2017 11:38:24 Multi Sensor (multi2) SoilMoistPercentageSensor -250 %
This is the sketch measuring:

#define MY_RADIO_NRF24
//#define MY_RADIO_RFM69
#include <math.h> // Conversion equation from resistance to %
#include <MySensors.h>

// Setting up format for reading 3 soil sensors
#define NUM_READS 10 // Number of sensor reads for filtering
#define CHILD_ID 0

MyMessage msg(CHILD_ID, V_LEVEL);
unsigned long SLEEP_TIME = 30000; // Sleep time between reads (in milliseconds)

long buffer[NUM_READS];
int index;

void addReading(long resistance)
{
  buffer[index] = resistance;
  index++;
  if (index >= NUM_READS) {
    index = 0;
  }
}

long average()
{
  long sum = 0;
  for (int i = 0; i < NUM_READS; i++) {
    sum += buffer[i];
  }
  return (long)(sum / NUM_READS);
}
- ronnyandre
Thanks for this great solution @mfalkvidd! It works great when my Arduino Pro Mini is connected to the computer, but not when I try to run it off a battery pack.
I have a Pro Mini 3.3V connected to a 0.8-3.3V step-up from a battery pack (2xAA; 3V). And then I have connected the radio and sensor to VCC on the Pro Mini. When the Pro Mini is connected to my iMac, Domoticz receives everything as it should. However, when I disconnect it from the computer and connect the battery source, all LEDs light up as they should, indicating that they have power, but it won't connect to Domoticz over NRF24.
I have used a multimeter to check the voltage and if the radio receives enough power, and it does. All power/ground pins show around 3.3v. Any ideas to debug what's wrong?
@joshmosh said in Office plant monitoring:
mfalkvidd
Where do I find mfalkvidd's sketch?
@ronnyandre a multimeter is unfortunately not sensitive enough to display if there is enough power during the short bursts when the radio is active.
Most step-ups don't deliver power that is stable enough. You could try adding more/larger capacitors, but from what I have seen in the forum, people seldom get things working reliably with a step-up. I have never tried using a step-up myself, I use power directly from the batteries.
If you haven't checked already, see the troubleshooting chart at
Where do I find mfalkvidd's sketch?
At github, but be aware that this code is for MySensors 1.x. It does not work with MySensors 2.x.
- ronnyandre
@mfalkvidd Thanks for the quick answer! I also read on the page for battery powered sensors that the step-up generates a lot of noise that can interfere with the radio, and that a solution might be to add capacitors (which I already tried), but also powering the radio directly from batteries. I'll try that later today. Thanks!
And thank you for the link to the troubleshooting. It's now bookmarked!
Please, somebody help me add one relay for a water pump to the sketch. I use this in Domoticz. I do not want to build other hardware for this. Is it possible?
Just copy the relay code and add it to your sketch, then make the necessary changes to adapt it: like setting child_id in presentation
@gohan I made something like this. Please look if it is all right. My software writing skills are low.
Some small errors.
Because you are using old code, you need to convert it from the 1.5 libraries to 2.x: there is a guide for doing that
Can I use old libraries and gateway version 2.1?
You can downgrade libraries and use old examples, but I'd suggest to stick to the new version
@gohan said in Office plant monitoring:
but I'd suggest to stick to the new version
I tried, but I do not know how.
@gohan
Please help with the small errors. I am trying with library 2.x
It's a pity that there isn't at least one sketch for the analog reading of the moisture sensor that is v2 compatible and made available on the main page... I have no idea if making it v2 compatible would be a hard job... even if it's just a question of changing some library calls, I'm afraid I don't have the skill for that...
My Bonsai tree humidity node celebrates 2 years on battery today!
During these two years, the gateway has received 146,528 updates on humidity level (and an additional 30,870 updates on voltage level).
The battery level has gone from 3.187V to 3.049V, which means an average drop of 0.0058V per month. Assuming I let it go down to 2.34V (limit for 8MHz according to the datasheet) and that the voltage drop is linear, I should get (3.187-2.34)/0.0058 = 146 months = ~12 years. There are several error sources in this calculation, but it looks like battery life will be quite good.
Here are the voltage and humidity graphs for the last year.
As you can see, there was a problem in November. I was asked to verify the battery voltage reading by using a multimeter. When I opened the box to do that, I must have tripped something because the node got caught in some sort of loop, consuming battery. I restarted the node and the batteries recovered almost to the level they had before the problem occurred.
Last year's report:
@mfalkvidd said in Office plant monitoring:
and that the voltage drop is linear
You wish!!!
@gohan no actually I don't. The voltage drop is normally an S-shaped curve that is very flat in the middle. That means I am experiencing a higher drop at the beginning. That's likely the reason that the prediction after 2 years is more than 10% longer battery life than the prediction after 1 year was.
Yes, it depends when the voltage starts to drop significantly, but unless you have tested another battery before it is hard to know in advance
@gohan alkaline batteries in general have been tested quite extensively and don't deviate much from the S-curve characteristics. | https://forum.mysensors.org/topic/2147/office-plant-monitoring/208 | CC-MAIN-2019-18 | refinedweb | 3,864 | 66.94 |
Shared Library Dialog
I am new to Qt, ... so new I'm not sure this is the place to ask questions. :-)
I am evaluating Qt to see if it will work for our next project. A major requirement is to create "modules" for our application. In Windows we did this with .dll's, some of which contained resources.
I'm not sure how to do this in Qt, but assuming something similar I attempted to create a library and add a dialog. The code contains the following:
#include <QDialog>
But this generates the error: "C1083: Cannot open include file: 'QDialog': No such file or directory"
I'm not seeing any examples on how to do this or if this is even a reasonable approach in Qt. I would be happy to create "modules" in any manner Qt does it. In the end the goal is to share dialogs, resources, code, data,... etc. at runtime.
Does Qt have this feature and if so, is there an example or documentation on how to do this?
Thanks.
- kshegunov
@TigerBunny
Hello,
A major requirement is to create "modules" for our application. In Windows we did this with .dll's, some of which contained resources.
If you mean to load those "modules" at runtime you'd want to research the plugins.
But this generates the error: "C1083: Cannot open include file: 'QDialog': No such file or directory"
Probably you have forgotten to add the
QT += widgets line in your project file.
Kind regards.
Hi @kshegunov
I did add QT+= widgets, but it didn't change the error.
#-------------------------------------------------
# Project created by QtCreator 2016-03-03T10:18:32
#-------------------------------------------------

QT -= gui
QT += widgets

TARGET = SharedLibTest
TEMPLATE = lib

DEFINES += SHAREDLIBTEST_LIBRARY

SOURCES += sharedlibtest.cpp \
    dialog.cpp

HEADERS += sharedlibtest.h \
    sharedlibtest_global.h \
    dialog.h

unix {
    target.path = /usr/lib
    INSTALLS += target
}

FORMS += \
    dialog.ui
I have not researched "plugins". I heard the term and assumed it referred to adding functionality to the IDE. I'll take a look at that.
ty.
@kshegunov, I read the link. Are you sure "plugins" is what I want?
I simply want to start seeing if Qt has the ability to be modular by loading a dialog inside an application where the dialog is replaceable without recompiling the application. Much like a resource .dll is in Windows.
Do you know if there is an example of using "plugins" to load a dialog?
hi
please try/read this small sample
Plugins can BOTH be used to extend Qt itself, but the same method/functionality can also be used to extend any Qt app.
So it's like with DLLs, just more modern, using interfaces.
So it can easily be a module for sharing dialogs or whatever is needed.
The example doesn't seem to show a dialog (or any other resource) inside a module. It appears to simply echo text back from a module without any resources to an application.
Ideally I would like an example already done that demonstrates my goal. If there is none, the next choice would be a tutorial on how to do it.
Failing all of that, documentation that will give me confidence this will work. I can work from there to create my own example.
I would very much appreciate a full example.
Ok. I know no samples that pops a dialog.
The other examples are about expanding the host application.
So im afraid u must keep looking.
@TigerBunny
Hi
I was bored. so I modified echo sample to open dialog
dialog lives in plugin.
note the UI file is the resource.
@mrjj, I don't see a UI file. All I get is an error message. Also, there is only 1 project.
I would expect an example to have 2 projects. One that creates the plugin with the dialog, the other an application that uses it. Are my expectations not correct?
ok let me check zip. there should be 2 folders. did u unzip correctly?
@TigerBunny
hi
for me all is in zip ?
including the UI file. in the plugin folder
Make sure you unzip it all. do not click inside the zip file.
extract all first.
I must have used the wrong extraction procedure. I'm not up on Windows compression, my mistake.
I'll take a look. This looks much better. Thanks.
@TigerBunny
well windows native zip handling, does "help" one by sort of showing like a folder. :)
Note. to add dialog to sample was very little.
Just add a Dialog via File->New, then add the include to the plugin and change the
interface function to show (exec) the dialog.
This works, but I can't find the output file. I assume there is something like a .dll this produces. How do I find it and what would it be called?
yes its a DLL
for me its in
E:\build-echoplugin-Desktop_Qt_5_5_0_MinGW_32bit-Debug\plugins
The PRO file alters the destination.
Its so the main program can find it.
look for echoplugind.dll
it might also copy to
$$[QT_INSTALL_EXAMPLES]/widgets/tools/echoplugin/plugin
- kshegunov Qt Champions 2016
@TigerBunny
It seems @mrjj beat me to it. This thread, however, could also be useful for you. | https://forum.qt.io/topic/64794/shared-library-dialog | CC-MAIN-2017-51 | refinedweb | 852 | 77.64 |
In Java we can use the Collections.shuffle method to randomly reorder items in a list. Groovy 3.0.0 adds the shuffle and shuffled methods to a List or array directly. The implementation delegates to Collections.shuffle. The shuffle method reorders the original list, so there is a side effect when using this method. Alternatively, we can use the shuffled method, which returns a copy of the original list with the items in random order.
In the next example we use both methods to randomly order lists:
def signs = (2..10) + ['J', 'Q', 'K', 'A']
def symbols = ['♣', '♦', '♥', '♠']

// Create list as [♣2, ♦2, ♥2, ♠2, ..., ♣A, ♦A, ♥A, ♠A]
def cards = [symbols, signs].combinations().collect { it.join() }

// Store original cards list.
def deck = cards.asImmutable()

// We should have 52 cards.
assert cards.size() == 52

// Let's shuffle the cards.
// Notice this will change the cards list.
cards.shuffle()
assert cards.every { card -> deck.contains(card) }
println cards.take(5)
// Possible output: [♣6, ♠A, ♥Q, ♦Q, ♠5]

// We can use our own Random object for shuffling.
cards.shuffle(new Random(42))
assert cards.every { card -> deck.contains(card) }
println cards.take(5)
// Possible output: [♦5, ♦2, ♦3, ♣7, ♦J]

// Store first 5 cards.
def hand = cards.take(5)

// Using shuffled we get a new list
// with items in random order.
// The original list is not changed.
def shuffledCards = cards.shuffled()
assert shuffledCards.size() == cards.size()
assert shuffledCards.every { card -> cards.contains(card) }

// Original list has not changed.
assert hand == cards.take(5)
println shuffledCards.take(5)
// Possible output: [♣4, ♠2, ♠6, ♥Q, ♦4]

// We can pass our own Random object.
def randomizer = new Random(42)
def randomCards = cards.shuffled(randomizer)
assert randomCards.size() == cards.size()
assert randomCards.every { card -> cards.contains(card) }
println randomCards.take(5)
// Possible output: [♥5, ♠6, ♠8, ♣3, ♠4]
Written with Groovy 3.0.0. | https://mrhaki.blogspot.com/2020/02/groovy-goodness-shuffle-list-or-array.html | CC-MAIN-2020-16 | refinedweb | 308 | 64.78 |
Train your first Decision Transformer
In a previous post, we announced the launch of Decision Transformers in the transformers library. This new technique of using a Transformer as a Decision-making model is getting increasingly popular.
So today, you’ll learn to train your first Offline Decision Transformer model from scratch to make a half-cheetah run. We'll train it directly on a Google Colab that you can find here 👉
Sounds exciting? Let's get started!
- What are Decision Transformers?
- Training Decision Transformers
- Conclusion
- What’s next?
- References
What are Decision Transformers?
The Decision Transformer model was introduced by “Decision Transformer: Reinforcement Learning via Sequence Modeling” by Chen L. et al. It abstracts Reinforcement Learning as a conditional-sequence modeling problem.
The main idea is that instead of training a policy using RL methods, such as fitting a value function that will tell us what action to take to maximize the return (cumulative reward), we use a sequence modeling algorithm (Transformer) that, given the desired return, past states, and actions, will generate future actions to achieve this desired return. It’s an autoregressive model conditioned on the desired return, past states, and actions to generate future actions that achieve the desired return.
This is a complete shift in the Reinforcement Learning paradigm since we use generative trajectory modeling (modeling the joint distribution of the sequence of states, actions, and rewards) to replace conventional RL algorithms. It means that in Decision Transformers, we don’t maximize the return but rather generate a series of future actions that achieve the desired return.
The process goes this way:
- We feed the last K timesteps into the Decision Transformer with three inputs:
- Return-to-go
- State
- Action
- The tokens are embedded either with a linear layer if the state is a vector or a CNN encoder if it’s frames.
- The inputs are processed by a GPT-2 model, which predicts future actions via autoregressive modeling.
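A plain-Python sketch of how those three inputs interleave into one token sequence over the last K timesteps (the values are illustrative; the real model embeds each modality before it reaches GPT-2):

```python
K = 3  # context length, illustrative
returns_to_go = [3.0, 2.0, 1.0]  # desired return still to achieve at each step
states = ["s0", "s1", "s2"]      # observations
actions = ["a0", "a1", "a2"]     # actions taken

# One token per modality per timestep: (R_0, s_0, a_0, R_1, s_1, a_1, ...)
tokens = []
for r, s, a in zip(returns_to_go[-K:], states[-K:], actions[-K:]):
    tokens += [("return", r), ("state", s), ("action", a)]

print(len(tokens))  # 9 -> 3 tokens per timestep
print(tokens[:3])   # [('return', 3.0), ('state', 's0'), ('action', 'a0')]
```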
Decision Transformer architecture. States, actions, and returns are fed into modality-specific linear embeddings, and a positional episodic timestep encoding is added. Tokens are fed into a GPT architecture which predicts actions autoregressively using a causal self-attention mask. Figure from [1].
There are different types of Decision Transformers, but today, we’re going to train an offline Decision Transformer, meaning that we only use data collected from other agents or human demonstrations. The agent does not interact with the environment. If you want to know more about the difference between offline and online reinforcement learning, check this article.
Now that we understand the theory behind Offline Decision Transformers, let’s see how we’re going to train one in practice.
Training Decision Transformers
In the previous post, we demonstrated how to use a transformers Decision Transformer model and load pretrained weights from the 🤗 hub.
In this part we will use 🤗 Trainer and a custom Data Collator to train a Decision Transformer model from scratch, using an Offline RL Dataset hosted on the 🤗 hub. You can find code for this tutorial in this colab notebook
We will be performing offline RL to learn the following behavior in the mujoco halfcheetah environment.
We host a number of Offline RL Datasets on the hub. Today we will be training with the halfcheetah “expert” dataset, hosted here on hub.
First we need to import the load_dataset function from the 🤗 datasets package and download the dataset to our machine.

from datasets import load_dataset

dataset = load_dataset("edbeeching/decision_transformer_gym_replay", "halfcheetah-expert-v2")
While most datasets on the hub are ready to use out of the box, sometimes we wish to perform some additional processing or modification of the dataset. In this case we wish to match the author's implementation, that is we need to:
- Normalize each feature by subtracting the mean and dividing by the standard deviation.
- Pre-compute discounted returns for each trajectory.
- Scale the rewards and returns by a factor of 1000.
- Augment the dataset sampling distribution so it takes into account the length of the expert agent’s trajectories.
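The second step — pre-computing the discounted returns, often called returns-to-go — can be sketched in plain Python (the reward values below are illustrative):

```python
def discount_cumsum(rewards, gamma):
    """Return-to-go: out[t] = r[t] + gamma * out[t+1]."""
    out = [0.0] * len(rewards)
    out[-1] = rewards[-1]
    for t in reversed(range(len(rewards) - 1)):
        out[t] = rewards[t] + gamma * out[t + 1]
    return out

# With gamma = 1.0 (as used for Decision Transformers),
# this is just the suffix sum of the rewards.
print(discount_cumsum([1.0, 2.0, 3.0], gamma=1.0))  # [6.0, 5.0, 3.0]
```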
In order to perform this dataset preprocessing, we will use a custom 🤗 Data Collator.
Now let’s get started on the Custom Data Collator for Offline Reinforcement Learning.
class DecisionTransformerGymDataCollator:
    return_tensors: str = "pt"
    max_len: int = 20  # subsets of the episode we use for training
    state_dim: int = 17  # size of state space
    act_dim: int = 6  # size of action space
    max_ep_len: int = 1000  # max episode length in the dataset
    scale: float = 1000.0  # normalization of rewards/returns
    state_mean: np.array = None  # to store state means
    state_std: np.array = None  # to store state stds
    p_sample: np.array = None  # a distribution to take account trajectory lengths
    n_traj: int = 0  # to store the number of trajectories in the dataset

    def __init__(self, dataset) -> None:
        self.act_dim = len(dataset[0]["actions"][0])
        self.state_dim = len(dataset[0]["observations"][0])
        self.dataset = dataset
        # calculate dataset stats for normalization of states
        states = []
        traj_lens = []
        for obs in dataset["observations"]:
            states.extend(obs)
            traj_lens.append(len(obs))
        self.n_traj = len(traj_lens)
        states = np.vstack(states)
        self.state_mean, self.state_std = np.mean(states, axis=0), np.std(states, axis=0) + 1e-6
        traj_lens = np.array(traj_lens)
        self.p_sample = traj_lens / sum(traj_lens)

    def _discount_cumsum(self, x, gamma):
        discount_cumsum = np.zeros_like(x)
        discount_cumsum[-1] = x[-1]
        for t in reversed(range(x.shape[0] - 1)):
            discount_cumsum[t] = x[t] + gamma * discount_cumsum[t + 1]
        return discount_cumsum

    def __call__(self, features):
        batch_size = len(features)
        # this is a bit of a hack to be able to sample of a non-uniform distribution
        batch_inds = np.random.choice(
            np.arange(self.n_traj),
            size=batch_size,
            replace=True,
            p=self.p_sample,  # reweights so we sample according to timesteps
        )
        # a batch of dataset features
        s, a, r, d, rtg, timesteps, mask = [], [], [], [], [], [], []
        for ind in batch_inds:
            # for feature in features:
            feature = self.dataset[int(ind)]
            si = random.randint(0, len(feature["rewards"]) - 1)

            # get sequences from dataset
            s.append(np.array(feature["observations"][si : si + self.max_len]).reshape(1, -1, self.state_dim))
            a.append(np.array(feature["actions"][si : si + self.max_len]).reshape(1, -1, self.act_dim))
            r.append(np.array(feature["rewards"][si : si + self.max_len]).reshape(1, -1, 1))
            d.append(np.array(feature["dones"][si : si + self.max_len]).reshape(1, -1))
            timesteps.append(np.arange(si, si + s[-1].shape[1]).reshape(1, -1))
            timesteps[-1][timesteps[-1] >= self.max_ep_len] = self.max_ep_len - 1  # padding cutoff
            rtg.append(
                self._discount_cumsum(np.array(feature["rewards"][si:]), gamma=1.0)[
                    : s[-1].shape[1]  # TODO check the +1 removed here
                ].reshape(1, -1, 1)
            )
            if rtg[-1].shape[1] < s[-1].shape[1]:
                print("if true")
                rtg[-1] = np.concatenate([rtg[-1], np.zeros((1, 1, 1))], axis=1)

            # padding and state + reward normalization
            tlen = s[-1].shape[1]
            s[-1] = np.concatenate([np.zeros((1, self.max_len - tlen, self.state_dim)), s[-1]], axis=1)
            s[-1] = (s[-1] - self.state_mean) / self.state_std
            a[-1] = np.concatenate(
                [np.ones((1, self.max_len - tlen, self.act_dim)) * -10.0, a[-1]],
                axis=1,
            )
            r[-1] = np.concatenate([np.zeros((1, self.max_len - tlen, 1)), r[-1]], axis=1)
            d[-1] = np.concatenate([np.ones((1, self.max_len - tlen)) * 2, d[-1]], axis=1)
            rtg[-1] = np.concatenate([np.zeros((1, self.max_len - tlen, 1)), rtg[-1]], axis=1) / self.scale
            timesteps[-1] = np.concatenate([np.zeros((1, self.max_len - tlen)), timesteps[-1]], axis=1)
            mask.append(np.concatenate([np.zeros((1, self.max_len - tlen)), np.ones((1, tlen))], axis=1))

        s = torch.from_numpy(np.concatenate(s, axis=0)).float()
        a = torch.from_numpy(np.concatenate(a, axis=0)).float()
        r = torch.from_numpy(np.concatenate(r, axis=0)).float()
        d = torch.from_numpy(np.concatenate(d, axis=0))
        rtg = torch.from_numpy(np.concatenate(rtg, axis=0)).float()
        timesteps = torch.from_numpy(np.concatenate(timesteps, axis=0)).long()
        mask = torch.from_numpy(np.concatenate(mask, axis=0)).float()

        return {
            "states": s,
            "actions": a,
            "rewards": r,
            "returns_to_go": rtg,
            "timesteps": timesteps,
            "attention_mask": mask,
        }
That was a lot of code. The TL;DR is that we defined a class that takes our dataset, performs the required preprocessing, and returns batches of states, actions, rewards, returns, timesteps, and masks. These batches can be directly used to train a Decision Transformer model with a 🤗 transformers Trainer.
Training the Decision Transformer model with a 🤗 transformers Trainer.
In order to train the model with the 🤗 Trainer class, we first need to ensure the dictionary it returns contains a loss, in this case the L2 norm between the model's action predictions and the targets. We achieve this by making a TrainableDT class, which inherits from the Decision Transformer model.
class TrainableDT(DecisionTransformerModel):
    def __init__(self, config):
        super().__init__(config)

    def forward(self, **kwargs):
        output = super().forward(**kwargs)
        # add the DT loss
        action_preds = output[1]
        action_targets = kwargs["actions"]
        attention_mask = kwargs["attention_mask"]
        act_dim = action_preds.shape[2]
        action_preds = action_preds.reshape(-1, act_dim)[attention_mask.reshape(-1) > 0]
        action_targets = action_targets.reshape(-1, act_dim)[attention_mask.reshape(-1) > 0]

        loss = torch.mean((action_preds - action_targets) ** 2)

        return {"loss": loss}

    def original_forward(self, **kwargs):
        return super().forward(**kwargs)
The transformers Trainer class requires a number of arguments, defined in the TrainingArguments class. We use the same hyperparameters as in the authors' original implementation, but train for fewer iterations. This takes around 40 minutes to train in a colab notebook, so grab a coffee or read the 🤗 Annotated Diffusion blogpost while you wait. The authors train for around 3 hours, so the results we get here will not be quite as good as theirs.
training_args = TrainingArguments(
    output_dir="output/",
    remove_unused_columns=False,
    num_train_epochs=120,
    per_device_train_batch_size=64,
    learning_rate=1e-4,
    weight_decay=1e-4,
    warmup_ratio=0.1,
    optim="adamw_torch",
    max_grad_norm=0.25,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    data_collator=collator,
)

trainer.train()
Now that we have explained the theory behind the Decision Transformer, the Trainer, and how to train it, you're ready to train your first offline Decision Transformer model from scratch to make a half-cheetah run 👉 The colab includes visualizations of the trained model, as well as how to save your model on the 🤗 hub.
Conclusion
This post has demonstrated how to train the Decision Transformer on an offline RL dataset, hosted on 🤗 datasets. We have used a 🤗 transformers Trainer and a custom data collator.
In addition to Decision Transformers, we want to support more use cases and tools from the Deep Reinforcement Learning community. Therefore, it would be great to hear your feedback on the Decision Transformer model, and more generally anything we can build with you that would be useful for RL. Feel free to reach out to us.
What’s next?
In the coming weeks and months, we plan on supporting other tools from the ecosystem:
- Expanding our repository of Decision Transformer models with models trained or finetuned in an online setting [2]
- Integrating sample-factory version 2.0
The best way to keep in touch is to join our discord server to exchange with us and with the community.
References
[1] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." Advances in neural information processing systems 34 (2021).
[2] Zheng, Qinqing and Zhang, Amy and Grover, Aditya “Online Decision Transformer” (arXiv preprint, 2022) | https://huggingface.co/blog/train-decision-transformers | CC-MAIN-2022-40 | refinedweb | 1,842 | 50.43 |
According to announcements from its creator Michael Neumann, Wee is "a framework for very dynamic, component-oriented, stateful web applications, largely inspired by Seaside." The name comes from the claim that Wee makes Web Engineering Easy.
The simplest way to install Wee is to use rubygems (gem install wee). The version at the time of writing was 0.10.0. The Wee documents explain that although the code is pretty stable, there is a chance of some issues when using continuations, and that, overall, you may not want to use the framework for mission-critical applications.
However, even with those caveats, Wee is well worth exploring for its component model, and because continuations are an interesting but underexplored area in mainstream web development. The creator says that he was influenced by ideas presented in Seaside, a continuations-based web framework written in Smalltalk by Avi Bryant.
The Wee gem installation includes a number of varied examples. One is a web-based browser into ObjectSpace; another shows some basic Ajax using the Prototype JavaScript library. There's also an example showing how to use Wee with Nitro.
At the heart of Wee is the idea of components. These are like widgets in a GUI. Wee components are thoroughly reusable, encapsulating state, presentation, and behavior, though you may prefer to have them delegate to external templates or models.
Installing Wee creates a simple application generator script called, naturally, wee. Running wee create my-demo will create a directory named my-demo off the current path and populate it with a simple WEBrick-based application.
The created app does little more than track the number of times a link has been clicked. The server file, run.rb, sets up the application components and main class and starts the application under WEBrick.
require 'wee'
require 'wee/utils'
require 'wee/adaptors/webrick'

# Your components
require 'components/main'

app = Wee::Utils.app_for do
  Main.new.add_decoration(Wee::PageDecoration.new('Wee'))
end

Wee::Utils::autoreload_glob('components/**/*.rb')
Wee::WEBrickAdaptor.register('/app' => app).start
The class Main will be called as the main application component. Components need to implement a render method to emit their markup. The call to add_decoration(Wee::PageDecoration.new('Wee')) alters the rendering pipeline such that the results of Main#render will be wrapped in an HTML header and footer.
Next, automatic file reloading is set up, so you can change code and retry the application without restarting WEBrick. Finally, an instance of WEBrick is started to serve the application from the URL path '/app'. The default port is 2000; you can pass a different port number as a parameter to start:
Wee::WEBrickAdaptor.register('/app' => app).start(:Port => 8787 )
The Main component defines the render method to produce the markup.
class Main < Wee::Component
  def initialize
    super()
    # Put your own initialization code below...
  end

  def render
    r.anchor.callback(:click).with { r.h1("Welcome to Wee!") }
    r.text "#{ @clicks || 'No' } clicks"
  end

  def click
    @clicks = (@clicks || 0) + 1
  end
end
Wee allows you to use Ruby syntax to define the HTML to emit in a manner similar to Jim Weirich's XML Builder library and the XML generator in Nitro. However, in Wee this syntax also allows you to connect a link with an action (in this case, the click method). When a user clicks the link generated by Wee, the application knows that it should invoke click.
This example, as is, tracks the current value of @click but does not tie it to a URL. If you run the program, you'll see that Wee is generating a fairly lengthy URL that is, essentially, a GUID (globally unique identifier). The URL stays the same except for a trailing slash and an integer. Each time you click the Welcome to Wee link that integer increases.
If you manually edit the URL in the browser, you'll get the same page; the displayed click count does not change. There is no association between URLs and server state. (Make sure that your browser is not caching pages when you try this.)
We can change this, though, with a simple addition to main.rb. Add the following method to Main:
def backtrack_state(snap)
  super
  snap.add(self)
end
Then restart the application. After clicking the link a few times, manually edit the URL in the browser to reload a previous page. The click count should reflect the value of @click at the time that URL was rendered.
To try this using Wee's continuation code, add the following after the calls to require in run.rb:
require 'wee/continuation'
There's much more to Wee than can be covered here. For more information, consult these references:
Wee project page ()
Nemo project page ()
Seaside ()
One interesting feature is the capability to nest components and chain the behavior, allowing you to assemble websites from reusable UI widgets. You should also take a look at Nemo, an implementation of Mewa (Meta-level Architecture for Web Applications) in Wee. | https://flylib.com/books/en/2.491.1.215/1/ | CC-MAIN-2019-22 | refinedweb | 829 | 56.15 |
Big Picture Machine Learning: Classifying Text with Neural Networks and TensorFlow
In this article, the author discusses the main six topics about creating a machine learning model to classify texts into categories:
1. How TensorFlow works
2. What is a machine learning model
3. What is a Neural Network
4. How the Neural Network learns
5. How to manipulate data and pass it to the Neural Network inputs
6. How to run the model and get the prediction results
The author also provided the code that can be run in Jupyter notebook. I’ll review these six topics and combine them with my own experience.
1. Overview of TensorFlow:
TensorFlow is one of the most popular open source AI libraries. It is highly efficient in computation, and its rich development resources make it widely adopted by companies and individual developers. In my mind, the best way to learn TensorFlow is through its official website:. On this website, you can go through the "getting started" tutorial and the list of all symbols in TensorFlow.
I will first give you the fundamental definition and the main characteristics of TensorFlow. A tensor is a kind of data structure that shapes primitive values into an array of any number of dimensions[1]. The rank of a tensor is its number of dimensions. Here, I recommend reading the Python API for TensorFlow, because it is very friendly to TensorFlow beginners. You should install TensorFlow and configure the environment; just follow the instructions on the official website. The way to test whether you have correctly installed TensorFlow is by importing the TensorFlow library. In TensorFlow, the computational graph is a core component. The dataflow graph is used to represent the computation process. Within the graph, an Operation is a unit of computation and a Tensor represents a unit of data. To run the code, we should initialize a Session. Here is the complete code to execute sum operations:
You can see that writing in TensorFlow follows a pattern, and it is easy to remember. You will import the library, create constant tensors, and build the graph. Then we should define which graph will be used in the Session, and define the operation unit. Finally you can use the run() method in Session and evaluate every Tensor, which is passed in the argument fetches.
2. The Predictive Model:
Predictive model can be very simple. It combines machine learning algorithm and the data-sets. The process of constructing a model is shown in the figure below:
We should first find the correct data as the input, and use data processing functions to manipulate it. This data is then combined with machine learning algorithms to build the model. After you get the model, you can treat it as a predictor and feed in the data you want to predict on, which will yield the result. The process is depicted in the figure below:
For this article, the input is text and the result is the category. This type of machine learning method is called supervised learning, where the training dataset has the texts labeled to which category it belongs. It’s also a classification task and Neural Networks are used to create the model.
3. Neural Networks:
The main characteristic of Neural Networks is self-learning, rather than being explicitly programmed. It is inspired by human’s central nervous system. The first neural network algorithm is Perceptron.
To understand the mechanism of how neural network works, the author built a neural network architecture with TensorFlow.
Architecture of Neural Network:
Here, the author uses 2 hidden layers, and the job of each hidden layer is to transform the inputs into something the output layer can use[1]. The number of nodes in the first hidden layer should be defined. These nodes, called neurons, are multiplied by weights. The training phase adjusts these values in order to produce a correct output. The network also introduces biases, which allow you to shift the activation function to the left or right and help the prediction fit better[2]. The data also passes through an activation function, which defines the final output of each neuron. Here, the author uses the rectified linear unit (ReLU) activation, which can improve non-linearity. This function is defined as:
f(x) = max(0,x) (the output is x or 0 (zero), whichever is larger)
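As a plain-Python sketch of this activation (not the TensorFlow op itself):

```python
def relu(x):
    # f(x) = max(0, x): positive inputs pass through, negatives become 0.
    return max(0.0, x)

print(relu(2.5))   # 2.5
print(relu(-1.3))  # 0.0
```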
For the 2nd hidden layer, the input is the 1st layer, and the function is the same as the 1st hidden layer.
For the output layer, the author uses one-hot encoding to get the results. In one-hot encoding, all bits gets a 0 value except for one bit that has a value 1. Here, the author uses three categories as an example, shown in the following figure:
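A minimal plain-Python sketch of one-hot vectors for the three example categories (the index-to-category mapping here is illustrative):

```python
def one_hot(index, num_classes):
    # All bits are 0 except the one at `index`, which is 1.
    vec = [0] * num_classes
    vec[index] = 1
    return vec

print(one_hot(0, 3))  # [1, 0, 0]  e.g. comp.graphics
print(one_hot(1, 3))  # [0, 1, 0]  e.g. sci.space
print(one_hot(2, 3))  # [0, 0, 1]  e.g. rec.sport.baseball
```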
We can find that the number of output nodes is the number of classes. If we want to classify the different categories, we use the Softmax function which transforms the output of each unit to a value between 0 and 1, and makes the sum of all units equals 1. It will tell us the probability of each category.
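The Softmax function itself can be sketched in a few lines of plain Python (the logits are illustrative):

```python
import math

def softmax(logits):
    # Map each value to (0, 1) so that the whole vector sums to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 2) for p in probs])  # [0.66, 0.24, 0.1]
print(round(sum(probs), 10))         # 1.0
```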
The above can be shown as the code:
Here it calls the matmul() function to realize the multiply function between matrices, and calls the add() function to add the biases into the function.
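A plain-Python sketch of what one such layer computes — a stand-in for tf.matmul and tf.add on real tensors, with illustrative weights and biases:

```python
def layer(inputs, weights, biases):
    # out[j] = relu(sum_i inputs[i] * weights[i][j] + biases[j])
    n_out = len(biases)
    out = []
    for j in range(n_out):
        z = sum(inputs[i] * weights[i][j] for i in range(len(inputs))) + biases[j]
        out.append(max(0.0, z))  # ReLU activation
    return out

print(layer([1.0, 2.0], [[0.5, -1.0], [0.25, 0.75]], [0.1, 0.0]))
# [1.1, 0.5]
```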
4. How the neural network gets trained:
We can see that the main point of the above section is to construct a reasonable structure and make the weights of the network optimal enough to make the prediction. So the following introduces how to train the neural network in TensorFlow. In TensorFlow, we use Variable to store the weights and biases. Here, we should compare the output values with the expected values and guide the functions to get minimum loss results. There are plenty of methods to calculate the loss function. Since this is a classification task, we should use the cross-entropy error. Prior work by James D. McCaffrey[3] analyzed and concluded that the reason to use cross-entropy is to avoid the training stalling out. So we use the cross-entropy error by calling the function tf.nn.softmax_cross_entropy_with_logits(), and we calculate the mean error by calling the function tf.reduce_mean().
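A plain-Python sketch of the cross-entropy error for a single example (the probabilities are illustrative; tf.nn.softmax_cross_entropy_with_logits fuses the softmax with this step for numerical stability):

```python
import math

def cross_entropy(target_one_hot, predicted_probs):
    # -sum_i t_i * log(p_i); only the true class contributes.
    return -sum(t * math.log(p)
                for t, p in zip(target_one_hot, predicted_probs) if t > 0)

loss = cross_entropy([1, 0, 0], [0.7, 0.2, 0.1])
print(round(loss, 3))  # 0.357 -- small when the true class gets high probability
```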
We should find the best values to minimize the output error. Here we use stochastic gradient descent (SGD):
Through many iterations, we get the weights close to the global cost minimum. The learning rate should not be too large. The Adaptive Moment Estimation (Adam) optimizer is often used to compute the gradient descent. In this optimization algorithm, running averages of both the gradients and the second moments of the gradients are used[4].
The code is shown as follows, and in other projects, the learning rate can be dynamic which will make the training process faster.
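To make the descent idea concrete, here is a toy 1-D gradient descent in plain Python (the cost function and learning rate are illustrative; TensorFlow's optimizers apply this over all weights automatically):

```python
# Minimize f(w) = (w - 3)^2 by repeatedly following the negative gradient.
learning_rate = 0.1
w = 0.0
for _ in range(100):
    grad = 2 * (w - 3)            # df/dw
    w = w - learning_rate * grad  # step downhill

print(round(w, 4))  # 3.0 (converges to the minimum)
```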
5. Data manipulation:
This part is also very important for making classification successful. Machine learning developers should pay close attention to the data. This will save you a lot of time when you want to improve the accuracy of your experiment, because you won't need to change the configuration from the beginning. Here, the author points out two things that require attention. First, create an index for each word. Then, create a matrix for each text, in which the values are 1 if a word is in the text and 0 otherwise. You can see the code below; it will help you understand the process.
Counter() in Python is a hash table. When the input is "Hi from Brazil", the matrix is [1, 1, 1]. For a different input, such as "Hi", we get a different matrix:
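Putting the two steps together in plain Python, using the "Hi from Brazil" example (the tiny two-text corpus is illustrative):

```python
from collections import Counter

# Build the vocabulary from a tiny illustrative corpus.
vocab = Counter()
for text in ["Hi from Brazil", "Hi"]:
    for word in text.split(' '):
        vocab[word.lower()] += 1

# Step 1: an index for each word.
word2index = {word: i for i, word in enumerate(vocab)}

# Step 2: a bag-of-words vector for each text.
def text_to_vector(text):
    layer = [0] * len(vocab)
    for word in text.split(' '):
        layer[word2index[word.lower()]] += 1
    return layer

print(text_to_vector("Hi from Brazil"))  # [1, 1, 1]
print(text_to_vector("Hi"))              # [1, 0, 0]
```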
6. Running and getting the results:
In this part, we will use the 20 Newsgroups dataset. It consists of 18,000 posts about 20 topics. The scikit-learn library is used to load this dataset. Here, the author uses 3 categories: comp.graphics, sci.space and rec.sport.baseball. The dataset has two subsets, one for training and one for testing. Below is the way to load the dataset:
This follows a pattern, and it’s easy for the developers to use.
In this experiment, the number of epochs is set at 10, which means there are ten forward and backward passes over the whole dataset. In TensorFlow, the placeholder is defined to serve as the target of feeds, which is used to pass the data for each run step.
Here we should separate the training data into batches, because we will feed the dict with a larger batch when testing the model. We call the get_batch() function to get the texts for a batch of the given size. Then we can run the model.
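The batching itself is just list slicing; here is a simplified stand-in for the article's batching function, which also vectorizes each text:

```python
def get_batch(data, i, batch_size):
    # The slice of the dataset used at training step i.
    return data[i * batch_size : i * batch_size + batch_size]

data = list(range(10))
print(get_batch(data, 0, 4))  # [0, 1, 2, 3]
print(get_batch(data, 2, 4))  # [8, 9] -- the final batch may be smaller
```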
Here we should also build the test model and calculate the accuracy.
Then we can get the results.
Conclusion:
This article gives us an introduction to classifying texts with neural networks and TensorFlow. It introduces the basic information related to this experiment. The results from running my own version are not as good as the author's. We can make this architecture deeper and use dropout in the hidden layers. This will undoubtedly improve the accuracy.
Also, when you run the code, you should make sure you have the latest version of TensorFlow installed. Sometimes you'll fail to import the twenty_newsgroups datasets. When this happens, you can use the following code to make it work.
The complete code is shown below:
import pandas as pd
import numpy as np
import tensorflow as tf
from collections import Counter
from sklearn.datasets import fetch_20newsgroups
# if you didn't download the twenty_newsgroups datasets, it will run with error
# this logging can help to solve the error
import logging
logging.basicConfig()
categories = ["comp.graphics","sci.space","rec.sport.baseball"]
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories)
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories)
print('total texts in train:',len(newsgroups_train.data))
print('total texts in test:',len(newsgroups_test.data))
vocab = Counter()
for text in newsgroups_train.data:
    for word in text.split(' '):
        vocab[word.lower()] += 1
for text in newsgroups_test.data:
    for word in text.split(' '):
        vocab[word.lower()] += 1
total_words = len(vocab)
def get_word_2_index(vocab):
    word2index = {}
    for i, word in enumerate(vocab):
        word2index[word.lower()] = i
    return word2index
word2index = get_word_2_index(vocab)
def get_batch(df, i, batch_size):
    batches = []
    results = []
    texts = df.data[i*batch_size:i*batch_size+batch_size]
    categories = df.target[i*batch_size:i*batch_size+batch_size]
    for text in texts:
        layer = np.zeros(total_words, dtype=float)
        for word in text.split(' '):
            layer[word2index[word.lower()]] += 1
        batches.append(layer)
    for category in categories:
        y = np.zeros((3), dtype=float)
        if category == 0:
            y[0] = 1.
        elif category == 1:
            y[1] = 1.
        else:
            y[2] = 1.
        results.append(y)
    return np.array(batches), np.array(results)
# Parameters
learning_rate = 0.01
training_epochs = 10
batch_size = 150
display_step = 1
# Network Parameters
n_hidden_1 = 100 # 1st layer number of features
n_hidden_2 = 100 # 2nd layer number of features
n_input = total_words # Words in vocab
n_classes = 3 # Categories: graphics, sci.space and baseball
input_tensor = tf.placeholder(tf.float32,[None, n_input],name="input")
output_tensor = tf.placeholder(tf.float32,[None, n_classes],name="output")
def multilayer_perceptron(input_tensor, weights, biases):
    # Hidden layer with RELU activation
    layer_1_multiplication = tf.matmul(input_tensor, weights['h1'])
    layer_1_addition = tf.add(layer_1_multiplication, biases['b1'])
    layer_1 = tf.nn.relu(layer_1_addition)
    # Hidden layer with RELU activation
    layer_2_multiplication = tf.matmul(layer_1, weights['h2'])
    layer_2_addition = tf.add(layer_2_multiplication, biases['b2'])
    layer_2 = tf.nn.relu(layer_2_addition)
    # Output layer
    out_layer_multiplication = tf.matmul(layer_2, weights['out'])
    out_layer_addition = out_layer_multiplication + biases['out']
    return out_layer_addition
# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
prediction = multilayer_perceptron(input_tensor, weights, biases)
# Define loss and optimizer
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=output_tensor))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(len(newsgroups_train.data) / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = get_batch(newsgroups_train, i, batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            c, _ = sess.run([loss, optimizer], feed_dict={input_tensor: batch_x,
                                                          output_tensor: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "loss=", \
                "{:.9f}".format(avg_cost))
    print("Optimization Finished!")
    # Test model
    correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(output_tensor, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    total_test_data = len(newsgroups_test.target)
    batch_x_test, batch_y_test = get_batch(newsgroups_test, 0, total_test_data)
    print("Accuracy:", accuracy.eval({input_tensor: batch_x_test,
                                      output_tensor: batch_y_test}))
Author: Shixin Gu | Localized by Synced Global Team: Junpei Zhong | https://medium.com/@Synced/big-picture-machine-learning-classifying-text-with-neural-networks-and-tensorflow-da3358625601 | CC-MAIN-2018-13 | refinedweb | 2,067 | 50.12 |
I noticed that reading from a file with mmap sometimes returns wrong
data on 2.4 kernels.
This is a test program to reproduce the problem.
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/cachectl.h>	/* cacheflush() on Linux/MIPS */
int main(int argc, char **argv)
{
int fd;
struct stat st;
volatile unsigned char *buf;
unsigned char dat, dat2;
fd = open(argv[1], O_RDONLY);
fstat(fd, &st);
buf = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
dat = *buf;
cacheflush(0, 0, 0); // flush cache all
dat2 = *buf;
printf("dat %x dat2 %x\n", dat, dat2);
munmap(buf, st.st_size);
close(fd);
return 0;
}
'dat' and 'dat2' should be the same value, of course. But sometimes they
differ.
This problem often happens when I read a file on an IDE disk (using PIO)
just after it is mounted. I saw the same problem on an MTD JFFS2 partition a
while ago. I suppose it is not a filesystem/driver problem.
After calling cacheflush(), it returns correct data. And I checked
the virtual/physical addresses returned by the mmap and found they had
different 'color' when the problem happens. So it seems to be a
virtual aliasing problem.
But flush_dcache_page() was not called between the mmap() call and the
cacheflush() call.
Tracing the code path on the page fault, I noticed filemap_nopage()
uses the old flush_page_to_ram() interface. I suppose flush_dcache_page()
should be called in the same place. Is this a correct fix?
--- linux-2.4.25/mm/filemap.c Wed Feb 18 22:36:32 2004
+++ linux/mm/filemap.c Thu Mar 25 21:19:29 2004
@@ -2111,6 +2111,7 @@
* and possibly copy it over to another page..
*/
mark_page_accessed(page);
+ flush_dcache_page(page);
flush_page_to_ram(page);
return page;
---
Atsushi Nemoto | http://www.linux-mips.org/archives/linux-mips/2004-03/msg00185.html | CC-MAIN-2015-14 | refinedweb | 283 | 77.74 |
Question
A cognitive retraining clinic assists outpatient victims of head injury, anoxia, or other conditions that result in cognitive impairment. Each incoming patient is evaluated to establish an appropriate treatment program and estimated length of stay. To see if the evaluation teams are consistent, 12 randomly chosen patients are separately evaluated by two expert teams (A and B) as shown. At the .10 level of significance, are the evaluator teams consistent in their estimates? State your hypotheses and show all steps clearly.
Computed properties in Swift: A basic feature for safer and cleaner code
Computed properties are one of the basic features of the Swift language. But despite their simplicity, they are a great tool to keep your code clean and free of bugs. I rely heavily on computed properties for any code I write.
Contents
- Stored properties can get out of synch and contain inconsistent data
- Making the value of a property dependent on other information
- Assigning a new value to a computed property
- Adding computed properties to existing types using extensions
- Adding computed properties to types you don’t own
- Computed properties inside enumerations
Stored properties can get out of synch and contain inconsistent data
First of all, let’s see why computed properties exist alongside stored properties.
As an example, let’s suppose we are writing a time tracker. Time tracking apps are quite popular nowadays, and you find them in many shapes and forms.
For starters, we need a type to represent the time entries in our app.
import Foundation

struct TimeEntry {
    let title: String
    let start: Date
    var end: Date
}
This is a pretty simple structure. We can get the values for the
start and end stored properties when the user starts and stops a timer when working on some task.
Dates are not enough, though. A serious time-tracking app also needs to show the total duration of each time entry. For that, we can add a new property to our structure.
struct TimeEntry {
    let title: String
    let start: Date
    var end: Date
    var duration: DateComponents
}
When we create a new entry, we have to calculate the duration of the task and store it in the
duration property.
That’s not so hard. But that’s code we need to repeat every time we create a new entry. In a real app, it’s unlikely that that will happen in a single place.
That duplicates code, which makes it likelier to make mistakes somewhere and create entries where the
duration is inconsistent with the start and end properties.
Making the value of a property dependent on other information
The obvious solution is to move the code that calculates the duration for an entry to a separate function. Any code that creates a new time entry can then rely on it, avoiding repetitions.
But let’s dig deeper.
There is a reason why these properties need to be consistent: they represent the same piece of information. The
duration value is nothing more than a different way of looking at the data already stored in the other two properties.
Since we already have that information, it does not make sense to store it twice. It’s better to derive the duration of a time entry from that information.
struct TimeEntry {
    let title: String
    let start: Date
    var end: Date

    var duration: DateComponents {
        Calendar.current
            .dateComponents([.hour, .minute, .second], from: start, to: end)
    }
}
Now,
duration is a computed property. Instead of storing its content separately, it derives it from the start and end properties. We can't store inconsistent values anymore.
I didn’t use the
return keyword in the code above because, since Swift 5.1, that's not required in single-expression functions.
Computed properties contain code, so they are, in practice, methods with no parameters. Then, why does Swift offer this extra feature?
There are different benefits of using this notation. The first one is that, semantically, the duration is an intrinsic property of a time entry. So, syntactically, it makes sense to represent it as a property.
Functions represent calculations. While this is one indeed, that’s not information we want to convey to the caller of our code.
But computed properties also have other benefits.
Assigning a new value to a computed property
Time tracking apps also allow the user to edit existing time entries to fix mistakes.
Making the user enter the start and end dates would provide a bad experience. It’s better to allow the user to enter the duration of an entry directly. Once we have that value, we can calculate the value of the
end stored property.
We need, again, some code that ties the properties together.
Currently, our
duration computed property is read-only. But Swift also allows computed properties that accept new values.
struct TimeEntry {
    let title: String
    let start: Date
    var end: Date

    var duration: DateComponents {
        get {
            Calendar.current
                .dateComponents([.hour, .minute, .second], from: start, to: end)
        }
        set {
            end = Calendar.current
                .date(byAdding: newValue, to: start) ?? end
        }
    }
}
The
get and set keywords define a getter and a setter for a computed property. The setter takes a new value and calculates the end date.
Again, we could have done the same with a method. But computed properties are better because, from the outside, they look like any other property. Semantically, the caller can simply assign a new duration to a time entry, which is more intuitive.
Moreover, a computed property with both a getter and a setter provides a single point of entry. If we used methods, we would have to provide two separate ones, which would make the interface of TimeEntry less intuitive.
Adding computed properties to existing types using extensions
Computed properties in Swift are not limited to concrete types like structures and classes. You can also add them to extensions.
With extensions, you can add functionality to a type without changing its implementation. This is particularly useful for types you don’t own and is at the core of protocol-oriented programming.
But extensions are limited and cannot add stored properties to a type (the reason is that extra stored properties would change the size of a type in memory, creating problems for the Swift compiler).
But computed properties are functions, so we can use them in extensions as well.
Many time tracking apps also allow the user to create invoices for clients based on the time spent on a project. For that, we need to multiply the hours spent on a project by an hourly rate.
The
duration property of TimeEntry returns a DateComponents value, which is helpful when entering values but cumbersome for calculations.
We can fix it with a new computed property that returns the duration of an entry in hours.
extension TimeEntry {
    var hours: Double {
        DateInterval(start: start, end: end).duration / 3600
    }
}
I added the
hours property to TimeEntry using an extension as an example. In our particular case, it makes no difference, and we could have added it to the type instead.
There are cases, though, where using extensions for your types makes sense since you can:
- keep your code organized by functionality;
- add convenience that does not belong to the type itself, e.g., data formatting for the UI of your app;
- share types between apps without sharing code that is specific to a project.
Adding computed properties to types you don’t own
We can now create a new type representing a project, with a list of time entries and an hourly rate for the billing.
struct Project {
    let name: String
    let entries: [TimeEntry]
    let rate: Int

    var billableAmount: Int {
        let totalDuration = entries.reduce(0.0, { $0 + $1.hours })
        return Int(totalDuration * Double(rate))
    }
}
To calculate the
totalDuration, I used the reduce(_:_:) method, which comes from functional programming. I prefer it because it allows me to perform the calculation on a single line. But you can use a Swift for loop instead if you prefer.
Calculating the billable amount for a project is not enough. We usually want to display it in a readable format in the user interface of our app and invoices. Money amounts are typically formatted using thousands and decimal separators, and currency symbols.
Currency formatting often appears in many places, so that’s code we want to reuse.
But adding it to the
Project structure would be wrong.
In the MVC pattern, data types should not contain any code related to the UI. So, to keep formatting code out of model types, we can add it to the
Int type instead, using an extension.
extension Int {
    var currencyFormat: String {
        let formatter = NumberFormatter()
        formatter.numberStyle = .currency
        return formatter.string(from: NSNumber(value: self)) ?? ""
    }
}
Adding code to types you don’t own is not only useful to respect design patterns. You can also use it to make types you don’t own conform to some protocol.
Computed properties inside enumerations
There are times in which you need to make enumeration code reusable. Swift enumerations cannot have stored properties. But again, computed properties are functions, so we can use them also with enumerations.
Not all projects in a time tracking app are billed hourly. Some projects have a fixed fee. Personal or internal projects, instead, are not billable.
This is a good use case for an enumeration.
struct Project {
    enum Billing {
        case nonBillable
        case hourly(rate: Int)
        case fixed(fee: Int)
    }

    let name: String
    let entries: [TimeEntry]
    let billing: Billing

    var billableAmount: Int {
        switch billing {
        case .nonBillable:
            return 0
        case let .hourly(rate):
            let totalDuration = entries.reduce(0.0, { $0 + $1.hours })
            return Int(totalDuration * Double(rate))
        case let .fixed(fee):
            return fee
        }
    }
}
Again, we want to show the different billing options clearly in our user interface. But a single number alone is confusing. The user might not understand that $0 means non-billable. Moreover, an amount of $500 might be a fixed fee for a small project, or the hourly rate of a well-paid freelancer.
(yes, that’s possible)
We can format each case of the
Billing enumeration inside a computed property.
extension Project.Billing {
    var formatted: String {
        switch self {
        case .nonBillable:
            return "Non billable"
        case let .hourly(rate):
            return "Hourly rate: " + rate.currencyFormat
        case let .fixed(fee):
            return "Fixed fee: " + fee.currencyFormat
        }
    }
}
Again, I used an extension instead of adding the computed property to
Billing to keep formatting code separate from the type itself.
Conclusions
While computed properties are a form of syntactic sugar, they have several benefits.
- You can make a property dependent on other information, avoiding inconsistent states.
- You can hide functionality behind a simple interface. The caller can use a computed property as if it was a real one.
- With getters and setters, you can keep related code in a single place.
- Through extensions, you can add properties to any type. This is necessary to make a type you don’t own conform to a protocol.
The post Computed properties in Swift: A basic feature for safer and cleaner code appeared first on Matteo Manferdini. | https://laptrinhx.com/news/nGkQrO2/ | CC-MAIN-2021-31 | refinedweb | 1,780 | 57.16 |
Creating Twitter-esque Relative Dates in C#
Recently, I was asked if we could have “relative dates like on Twitter” for a project. With a great deal surprise (they knew what Twitter was??), I thought about it for a second and said, “Sure, why not”
It’s just a machine—how difficult can it be? Well, actually, not very difficult in C#.
Here’s what I came up with to handle everything from seconds to years. It’s not really pretty and, honestly, may not be that efficient (I haven’t ran it with tens of thouasnds of dates at once and benchmarked it), but it works well and is convenient as an extension method.
/// <summary>
/// Converts the specified DateTime to its relative date.
/// </summary>
/// <param name="dateTime">The DateTime to convert.</param>
/// <returns>A string value based on the relative date
/// of the datetime as compared to the current date.</returns>
public static string ToRelativeDate(this DateTime dateTime)
{
    var timeSpan = DateTime.Now - dateTime;

    // span is less than or equal to 60 seconds, measure in seconds.
    if (timeSpan <= TimeSpan.FromSeconds(60))
    {
        return timeSpan.Seconds + " seconds ago";
    }

    // span is less than or equal to 60 minutes, measure in minutes.
    if (timeSpan <= TimeSpan.FromMinutes(60))
    {
        return timeSpan.Minutes > 1
            ? "about " + timeSpan.Minutes + " minutes ago"
            : "about a minute ago";
    }

    // span is less than or equal to 24 hours, measure in hours.
    if (timeSpan <= TimeSpan.FromHours(24))
    {
        return timeSpan.Hours > 1
            ? "about " + timeSpan.Hours + " hours ago"
            : "about an hour ago";
    }

    // span is less than or equal to 30 days (1 month), measure in days.
    if (timeSpan <= TimeSpan.FromDays(30))
    {
        return timeSpan.Days > 1
            ? "about " + timeSpan.Days + " days ago"
            : "about a day ago";
    }

    // span is less than or equal to 365 days (1 year), measure in months.
    if (timeSpan <= TimeSpan.FromDays(365))
    {
        return timeSpan.Days > 30
            ? "about " + timeSpan.Days / 30 + " months ago"
            : "about a month ago";
    }

    // span is greater than 365 days (1 year), measure in years.
    return timeSpan.Days > 365
        ? "about " + timeSpan.Days / 365 + " years ago"
        : "about a year ago";
}
Output:
Friday, September 18, 1981 :: about 26 years ago
Monday, August 06, 2001 :: about 7 years ago
Known Gotchas:
Notice that I’m not accounting for leap years—every year is 365 instead of 366 every fourth year [year calculations are fun!]. Given a certain number of years, you’d eventually be a few days, then weeks, then months ahead. You could accomodate for this in the timeSpan, but for a ‘relative date’—does that add value (unless wanting the relative date of 10,000 years ago and it shows up as 10,001.
I’ve got an almost identical method that I use in a ton of projects. In most situations, people don’t care about the exact date, so I slip it in there by using the ACRONYM tag and have the tooltip show the exact date and time if they mouse over it.
string.Format("<acronym title=\"{0}\">{1} minutes ago</acronym>", dt, timeSpan.Minutes)
Then I’ll style the acronym with a dotted border and a help cursor so it’s a little more obvious that something’s there, but not so bold as to be distracting.
acronym { border-bottom: 1px dotted #ccc; cursor: help; }
@Bryan-
Ahh, brilliant idea. It’s coming up in alt tags at the moment, but that’d be less intrusive (and misused) as it won’t flood the page with fake link tags.
Thanks for this useful function
I noticed that it sometimes generates “about 1 months ago”
This can be fixed by changing the line
return timeSpan.Days > 30
to
return (timeSpan.Days / 30) > 1
Also how about “yesterday” instead of “about a day ago” ?
@Ben-
Could you provide a test case (if you have one) where it doesn't generate one month correctly? I haven't run into that yet and would love to add the edge case to my tests. 🙂
The “yesterday” vs. “about a day ago” is a great idea, though IMHO, would break the consistency of the labels and the “about..”. Also, it’d require some testing to make sure that it communicated ‘yesterday’ appropriately.
At 01:00, something posted at 23:00 the prior day would technically be “yesterday”, but with the current counts would read as “about 2 hours ago…” 🙂 | https://tiredblogger.wordpress.com/2008/08/21/creating-twitter-esque-relative-dates-in-c/ | CC-MAIN-2017-22 | refinedweb | 708 | 73.07 |
Color and moving object recognition system
1. Development tools
Python version: Anaconda's Python environment version 3.8
Development software: PyCharm Community Edition
Recognition model: deep learning model, general learning model
Related modules: opencv-python==3.4.8.29
2. Environmental construction
Install Anaconda and add its path to the environment variables, install PyCharm and add its path to the environment variables as well, and use pip to install the required modules.
3. Procedure flow
1, Color recognition system
1) Open PyCharm and create a folder and a py file
2) Import two libraries, cv2 and numpy
3) Set the high and low thresholds for green and red
4) Judge whether the video is opened normally
5) Read each frame and when the read frame is normal
6) Turn the picture gray
7) Filter red and green based on color range
8) Median filter processing
9) Process the two colors to find the green and red range
10) Draw a square in the Green area and display "Green"
11) Draw a square in the Red area and display "Red"
12) Display each frame, wait for playback, and press "q" to interrupt
13) If the video is played, it will automatically jump out of the loop and the window will close
14) Release the video and destroy all created windows
15) Runtime example screenshot
Source code display
import numpy as np  # Import the numpy library
import cv2  # Import the opencv-python library, i.e. the cv2 library

lower_green = np.array([35, 110, 106])  # Green range low threshold
upper_green = np.array([77, 255, 255])  # Green range high threshold
lower_red = np.array([0, 127, 128])     # Red range low threshold
upper_red = np.array([10, 255, 255])    # Red range high threshold
# Need more colors? You can search for HSV threshold tables (e.g. on Baidu).

cap = cv2.VideoCapture("2.mp4")  # Open the video file
num = 0
while cap.isOpened():  # Video opened normally
    ret, frame = cap.read()  # Read each frame
    if ret == True:  # The frame was read correctly
        hsv_img = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # Convert to HSV for later processing
        mask_green = cv2.inRange(hsv_img, lower_green, upper_green)  # Filter green by color range
        mask_red = cv2.inRange(hsv_img, lower_red, upper_red)  # Filter red by color range
        mask_green = cv2.medianBlur(mask_green, 7)  # Median filtering
        mask_red = cv2.medianBlur(mask_red, 7)  # Median filtering
        mask = cv2.bitwise_or(mask_green, mask_red)  # Combine the two colors
        mask_green, contours, hierarchy = cv2.findContours(mask_green, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # Find the green regions
        mask_red, contours2, hierarchy2 = cv2.findContours(mask_red, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # Find the red regions
        for cnt in contours:
            (x, y, w, h) = cv2.boundingRect(cnt)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)
            cv2.putText(frame, "Green", (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
            # Draw a rectangle around each green area and label it "Green"
        for cnt2 in contours2:
            (x2, y2, w2, h2) = cv2.boundingRect(cnt2)
            cv2.rectangle(frame, (x2, y2), (x2 + w2, y2 + h2), (0, 255, 255), 2)
            cv2.putText(frame, "Red", (x2, y2 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            # Draw a rectangle around each red area and label it "Red"
        cv2.imshow("detection", frame)  # Display each frame
        if cv2.waitKey(20) & 0xFF == ord("q"):  # Press q to jump out of the loop and stop playback
            break
    else:
        break
cap.release()  # Release the video
cv2.destroyAllWindows()  # Destroy all created windows
2, Moving object recognition system
1) Open PyCharm and create a folder and a py file
2) Import cv2 library. The system only needs one cv2 library
3) Path to read video
4) Find out the length and width of the video and output it
5) Assign a variable to the ellipse and a value to the background
6) Determine whether the video stream can be read correctly
7) Read the video and judge whether the video is over
8) Preprocess the frame: first convert it to grayscale, then apply Gaussian filtering
9) Sets the first frame as the background for the entire input
10) For each frame read from the background, the difference between it and the background is calculated and a difference graph is obtained
11) Apply the threshold to get a black-and-white image, and dilate the image to normalize the holes and defects
12) Displays a rectangular box for moving objects
13) Play the video and press "q" to exit the video
14) If the video playback ends. Jump out of the loop and close the window
15) Release the video and destroy all created windows
16) Screenshot of running example
Source code display
import cv2  # Import the opencv-python library; this system only needs cv2

camera = cv2.VideoCapture("5.mp4")  # Read the video path
size = (int(camera.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(camera.get(cv2.CAP_PROP_FRAME_HEIGHT)))  # Get the width and height of the video
print('size:' + repr(size))  # Output the frame size
es = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 4))  # Elliptical structuring element
background = None  # Give the background the initial value None

while camera.isOpened():  # The video stream can be read correctly
    grabbed, frame_lwpCV = camera.read()
    if grabbed == True:
        # Preprocess the frame: convert to grayscale, then apply Gaussian blur.
        gray_lwpCV = cv2.cvtColor(frame_lwpCV, cv2.COLOR_BGR2GRAY)
        gray_lwpCV = cv2.GaussianBlur(gray_lwpCV, (21, 21), 0)
        # Gaussian blur smooths the noise that every input video produces from
        # natural vibration, lighting changes, or the camera itself, so that
        # noise is not detected during motion tracking.

        # Set the first frame as the background for the entire input
        if background is None:
            background = gray_lwpCV
            continue

        # For each later frame, compute the difference from the background to get
        # a difference map. Then apply a threshold to get a black-and-white image,
        # and dilate the image to normalize holes and defects.
        diff = cv2.absdiff(background, gray_lwpCV)
        diff = cv2.threshold(diff, 148, 255, cv2.THRESH_BINARY)[1]  # Binarization threshold processing
        diff = cv2.dilate(diff, es, iterations=2)  # Morphological dilation
        image, contours, hierarchy = cv2.findContours(diff.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # Compute the contours of targets in the image

        # Show rectangles
        for c in contours:
            if cv2.contourArea(c) < 15:
                # Only display contours larger than the given threshold, so small
                # changes are ignored. For cameras with constant lighting and low
                # noise, this minimum contour size may not be needed.
                continue
            (x, y, w, h) = cv2.boundingRect(c)  # Compute the bounding box of the contour
            cv2.rectangle(frame_lwpCV, (x, y), (x + w, y + h), (0, 255, 0), 2)  # Draw a rectangle around the moving object

        cv2.imshow('contours', frame_lwpCV)  # Play the video
        key = cv2.waitKey(20) & 0xFF
        if key == ord('q'):  # Press the 'q' key to exit the loop
            break
    else:
        break
camera.release()  # Release the video
cv2.destroyAllWindows()  # Destroy all created windows
On 09/26/2012 08:44 AM, Daniel P. Berrange wrote:
> From: "Daniel P. Berrange" <berrange redhat com>
>
> In the cgroups APIs we have a virCgroupKillPainfully function
> which does the loop sending SIGTERM, then SIGKILL and waiting
> for the process to exit. There is similar functionality for
> simple processes in qemuProcessKill, but it is tangled with
> the QEMU code. Untangle it to provide a virProcessKillPainfuly
> function

It is also similar to virProcessAbort, although the two differ on how
long to wait between SIGTERM and an eventual SIGKILL. Maybe those
should be consolidated in a later patch?

> ---
>  src/libvirt_private.syms |  1 +
>  src/qemu/qemu_driver.c   |  8 ++---
>  src/qemu/qemu_process.c  | 79 ++++++++----------------------------------------
>  src/util/virprocess.c    | 57 ++++++++++++++++++++++++++++++++++
>  src/util/virprocess.h    |  2 ++
>  5 files changed, 76 insertions(+), 71 deletions(-)

> +++?

> +++ b/src/util/virprocess.c
> @@ -235,3 +235,60 @@ int virProcessKill(pid_t pid, int sig)
>      return kill(pid, sig);
> #endif
> }
> +
> +
> +/*
> + * Try to kill the process and verify it has exited
> + *
> + * Returns 0 if it was killed gracefully, 1 if it
> + * was killed forcably, -1 if it is still alive,

s/forcably/forcibly/

> + * or another error occurred.
> + */
> +int
> +virProcessKillPainfully(pid_t pid, bool force)
> +{
> +    int i, ret = -1;
> +    const char *signame = "TERM";
> +
> +    VIR_DEBUG("vpid=%d force=%d", pid, force);
> +
> +    /* This loop sends SIGTERM, then waits a few iterations (10 seconds)
> +     * to see if it dies. If the process still hasn't exited, and
> +     * @force is requested, a SIGKILL will be sent, and this will
> +     * wait upto 5 seconds more for the process to exit before

s/upto/up to/

> +     * returning.
> +     *
> +     * Note that setting @force could result in dataloss for the process.

s/dataloss/data loss/

The move looks okay if you can answer my question about the
ignore_value() change.
-- Eric Blake eblake redhat com +1-919-301-3266 Libvirt virtualization library
Attachment:
signature.asc
Description: OpenPGP digital signature | https://www.redhat.com/archives/libvir-list/2012-September/msg01816.html | CC-MAIN-2015-11 | refinedweb | 303 | 56.66 |
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- USAGE
- FUNCTIONS
- BUGS
- SEE ALSO
- AUTHOR
NAME
Test::MockRandom - Replaces random number generation with non-random number generation
VERSION
This documentation describes version 1.00.
SYNOPSIS
# intercept rand in another package use Test::MockRandom 'Some::Other::Package'; use Some::Other::Package; # exports sub foo { return rand } srand(0.13); foo(); # returns 0.13 # using a seed list and "oneish" srand(0.23, 0.34, oneish() ); foo(); # returns 0.23 foo(); # returns 0.34 foo(); # returns a number just barely less than one foo(); # returns 0, as the seed array is empty # object-oriented, for use in the current package use Test::MockRandom (); my $nrng = Test::MockRandom->new(0.42, 0.23); $nrng->rand(); # returns 0.42
DESCRIPTION
This. See "USAGE" for details.
Alternatively, this module can be used to generate objects, with each object maintaining its own distinct seed array.
USAGE
By default, Test::MockRandom does not export any functions. This still allows object-oriented use by calling
Test::MockRandom->new(@seeds). In order for Test::MockRandom to be more useful, arguments must be provided during the call to
use.
use Test::MockRandom 'Target::Package'
The simplest way to intercept
rand in another package is to provide the name(s) of the package(s) for interception as arguments in the
use statement. This will export
rand to the listed packages and will export
srand and
oneish to the current package to control the behavior of
rand. You must
use Test::MockRandom before you
use the target package. This is a typical case for testing a module that uses random numbers:
use Test::More 'no_plan'; use Test::MockRandom 'Some::Package'; BEGIN { use_ok( Some::Package ) } # assume sub foo { return rand } was imported from Some::Package srand(0.5) is( foo(), 0.5, "is foo() 0.5?") # test gives "ok"
If multiple package names are specified,
rand will be exported to all of them.
If you wish to export
rand to the current package, simply provide
__PACKAGE__ as the parameter for
use, or
main if importing to a script without a specified package. This can be part of a list provided to
use. All of the following idioms work:
use Test::MockRandom qw( main Some::Package ); # Assumes a script use Test::MockRandom __PACKAGE__, 'Some::Package'; # The following doesn't interpolate __PACKAGE__ as above, but # Test::MockRandom will still DWIM and handle it correctly use Test::MockRandom qw( __PACKAGE__ Some::Package );
use Test::MockRandom %customized
As an alternative to a package name as an argument to
use, Test::MockRandom will also accept a hash reference with a custom set of instructions for how to export functions:
use Test::MockRandom { rand => [ Some::Package, {Another::Package => 'random'} ], srand => { Another::Package => 'seed' }, oneish => __PACKAGE__ };
The keys of the hash may be any of
rand,
srand, and
oneish. The values of the hash give instructions for where to export the symbol corresponding to the key. These are interpreted as follows, depending on their type:
String: a package to which Test::MockRandom will export the symbol
Hash Reference: the key is the package to which Test::MockRandom will export the symbol and the value is the name under which it will be exported
Array Reference: a list of strings or hash references which will be handled as above
Test::MockRandom->export_rand_to()
In order to intercept the built-in
rand in another package, Test::MockRandom must export its own
rand function to the target package before the target package is compiled, thus overriding calls to the built-in. The simple approach (described above) of providing the target package name in the
use Test::MockRandom statement accomplishes this because
use is equivalent to a
require and
import within a
BEGIN block. To explicitly intercept
rand in another package, you can also call
export_rand_to, but it must be enclosed in a
BEGIN block of its own. The explicit form also support function aliasing just as with the custom approach with
use, described above:
use Test::MockRandom; BEGIN {Test::MockRandom->export_rand_to('AnotherPackage'=>'random')} use AnotherPackage;
This
BEGIN block must not include a
use statement for the package to be intercepted, or perl will compile the package to be intercepted before the
export_rand_to function has a chance to execute and intercept calls to the built-in
rand. This is very important in testing. The
export_rand_to call must be in a separate
BEGIN block from a
use or
use_ok test, which should be enclosed in a
BEGIN block of its own:
use Test::More tests => 1; use Test::MockRandom; BEGIN { Test::MockRandom->export_rand_to( 'AnotherPackage' ); } BEGIN { use_ok( 'AnotherPackage' ); }
Given these cautions, it's probably best to use either the simple or custom approach with
use, which does the right thing in most circumstances. Should additional explicit customization be necessary, Test::MockRandom also provides
export_srand_to and
export_oneish_to.
Overriding
rand globally: use Test::MockRandom 'CORE::GLOBAL'
This is just like intercepting
rand in a package, except that you do it globally by overriding the built-in function in
CORE::GLOBAL.
use Test::MockRandom 'CORE::GLOBAL'; # or BEGIN { Test::MockRandom->export_rand_to('CORE::GLOBAL') }
You can always access the real, built-in
rand by calling it explicitly as
CORE::rand.
Intercepting
rand in a package that also contains a
rand function
This is tricky as the order in which the symbol table is manipulated will lead to very different results. This can be done safely (maybe) if the module uses the same rand syntax/prototype as the system call but offers them up as method calls which resolve at run-time instead of compile time. In this case, you will need to do an explicit intercept (as above) but do it after importing the package. I.e.:
use Test::MockRandom 'SomeRandPackage'; use SomeRandPackage; BEGIN { Test::MockRandom->export_rand_to('SomeRandPackage');
The first line is necessary to get
srand and
oneish exported to the current package. The second line will define a
sub rand in
SomeRandPackage, overriding the results of the first line. The third line then re-overrides the
rand. You may see warnings about
rand being redefined.
Depending on how your
rand is written and used, there is a good likelihood that this isn't going to do what you're expecting, no matter what. If your package that defines
rand relies internally upon the system
CORE::GLOBAL::rand function, then you may be best off overriding that instead.
FUNCTIONS
new
$obj = new( LIST OF SEEDS );
Returns a new Test::MockRandom object with the specified list of seeds.
srand
srand( LIST OF SEEDS ); $obj->srand( LIST OF SEEDS);
If called as a bare function call or package method, sets the seed list for bare/package calls to
rand. If called as an object method, sets the seed list for that object only.
rand
$rv = rand(); $rv = $obj->rand(); $rv = rand(3);
If called as a bare or package function, returns the next value from the package seed list. If called as an object method, returns the next value from the object seed list.
If
rand is called with a numeric argument, it follows the same behavior as the built-in function -- it multiplies the argument with the next value from the seed array (resulting in a random fractional value between 0 and the argument, just like the built-in). If the argument is 0, undef, or non-numeric, it is treated as if the argument is 1.
Using this with an argument in testing may be complicated, as limits in floating point precision mean that direct numeric comparisons are not reliable. E.g.
srand(1/3); rand(3); # does this return 1.0 or .999999999 etc.
oneish
srand( oneish() ); if ( rand() == oneish() ) { print "It's almost one." };
A utility function to return a nearly-one value. Equal to ( 2^32 - 1 ) / 2^32. Useful in
srand and test functions.
export_rand_to
Test::MockRandom->export_rand_to( 'Some::Class' ); Test::MockRandom->export_rand_to( 'Some::Class' => 'random' );
This function exports
rand into the specified package namespace. It must be called as a class function. If a second argument is provided, it is taken as the symbol name used in the other package as the alias to
rand:
use Test::MockRandom; BEGIN { Test::MockRandom->export_rand_to( 'Some::Class' => 'random' ); } use Some::Class; srand (0.5); print Some::Class::random(); # prints 0.5
It can also be used to explicitly intercept
rand after Test::MockRandom has been loaded. The effect of this function is highly dependent on when it is called in the compile cycle and should usually called from within a BEGIN block. See "USAGE" for details.
Most users will not need this function.
export_srand_to
Test::MockRandom->export_srand_to( 'Some::Class' ); Test::MockRandom->export_srand_to( 'Some::Class' => 'seed' );
This function exports
srand into the specified package namespace. It must be called as a class function. If a second argument is provided, it is taken as the symbol name to use in the other package as the alias for
srand. This function may be useful if another package wraps
srand:
# In Some/Class.pm package Some::Class; sub seed { srand(shift) } sub foo { rand } # In a script use Test::MockRandom 'Some::Class'; BEGIN { Test::MockRandom->export_srand_to( 'Some::Class' ); } use Some::Class; seed(0.5); print foo(); # prints "0.5"
The effect of this function is highly dependent on when it is called in the compile cycle and should usually be called from within a BEGIN block. See "USAGE" for details.
Most users will not need this function.
export_oneish_to
Test::MockRandom->export_oneish_to( 'Some::Class' ); Test::MockRandom->export_oneish_to( 'Some::Class' => 'nearly_one' );
This function exports
oneish into the specified package namespace. It must be called as a class function. If a second argument is provided, it is taken as the symbol name to use in the other package as the alias for
oneish. Since
oneish is usually only used in a test script, this function is likely only necessary to alias
oneish to some other name in the current package:
use Test::MockRandom 'Some::Class'; BEGIN { Test::MockRandom->export_oneish_to( __PACKAGE__, "one" ); } use Some::Class; seed( one() ); print foo(); # prints a value very close to one
The effect of this function is highly dependent on when it is called in the compile cycle and should usually be called from within a BEGIN block. See "USAGE" for details.
Most users will not need this function.
BUGS
Please report any bugs or feature requests using the CPAN Request Tracker. Bugs can be submitted through the web interface at
When submitting a bug or request, please include a test-file or a patch to an existing test-file that illustrates the bug or desired feature.
SEE ALSO
Test::MockObject
Test::MockModule
AUTHOR. | https://metacpan.org/pod/release/DAGOLDEN/Test-MockRandom-1.00/lib/Test/MockRandom.pm | CC-MAIN-2016-30 | refinedweb | 1,757 | 51.78 |
Article Source:
I've been asked the same question a few times recently by a couple of BizTalk projects about how to map their reference data. When this question comes up we often get involved in a discussion about the pros and cons of caching the reference data and increasing memory usage versus hitting the database every time.
As a rule I tend to use the BizTalk Cross Referencing features for this data mapping unless there is a specific requirement which requires some custom approach. I've blogged about this kind of thing a few times before but I thought its worth a post with some thoughts on the different approaches I've seen used when people have wanted to use caching.
I mentioned in a previous post that the Value cross referencing features already implement a simple caching mechanism. In my opinion though the value cross referencing is aimed more at mapping data type values between types of systems rather than business reference data which would be held in instances of systems which is what I feel the ID cross referencing is aimed more at.
Anyway when it comes to this design decision the things people are usually trying to balance are as follows:
There are a number of possible ways to solve this problem and each have their own considerations which are discussed in the rest of this article.
This is probably the most common approach I've seen. In this approach I've normally seen a custom database implemented to manage the reference data. The developer would then implement a custom data access method and a singleton which would be used to control access to the reference data. This is a pretty standard use of the singleton pattern. In this approach I think some of the considerations which need to be made are:
Sometimes I've seen an approach where a custom database has been implemented then a web service façade has been implemented on top of it. The web service will access the data and return it. In consuming this from BizTalk a C# assembly has been developed which uses the web service to get the reference data which is then consumed by a map.
In this approach I've normally seen it implemented in the same way as the singleton approach above. The key difference is that the reference data is usually held locally in a static hash table in the singleton approach where as in this approach the HttpCache object from the System.Web namespace is used. This gives a couple of options around a sliding and absolute expiration which will remove unused data from the cache helping to control the memory usage. You can also add one of the .net cache dependency objects which would allow you a way to detect changed and refresh the cache.
Enterprise Library has a caching block which provides a number of features which could help you solve this problem. One of the key benefits of enterprise library is that it supports different types of stores for the cached data including:
If I remember right the cache supports the same features as the HTTPCache approach which allows you to have a dependency and also expirations. There is an article at the following location which discusses using Enterprise Library Caching in BizTalk.
Enterprise Library can also integrate with external backing stores to support out of process caching.
One approach I quite like involves caching the data outside of the BizTalk process. This provides the benefit that you can cache without having to worry about the impact on the BizTalk process memory usage. There are a number of caching tools which you can use to help here such as:
Alachisoft offer an express version of their caching product which is free and a version for a relatively small cost which comes with some management tools for their distributed caching system.
Memcached is an open source distributed caching system. I know of some guys who have used this very successfully on a .net project with a major UK company.
Velocity is an initiative at Microsoft at present to create a distributed in memory caching platform. I feel that as this evolves it is important to keep an eye on this as it will in the future be likely to become the best approach to this.
These distributed caching systems offer the benefit of taking the memory usage out of your process, but offer fast access to the data via their API. Most of these products also offer high availability and synchronisation across a group of caches when you distribute them across your server group. I have in particular looked at NCache for this example and it is setup as a windows service which you would deploy on each BizTalk box. These services would then be configured to work as a cluster meaning they would synchronise themselves when changes were made.
Hopefully this article has highlighted the many options available when you are considering a caching solution to support your BizTalk implementation. There are many considerations which can be made and there isn't always a one size fits all rule like in most design decisions. I think some of the things that stand out from this discussion are that most of the approaches above always end up using a custom database to manage the reference data. I think in a future post I will look at how to combine some of the approaches discussed here with the BizTalk Cross Referencing features to produce a fairly simple yet effective combination of all of the approaches.
Print | posted on Sunday, September 21, 2008 7:42 PM |
Filed Under [
BizTalk
] | http://geekswithblogs.net/michaelstephenson/archive/2008/09/21/125352.aspx | crawl-002 | refinedweb | 946 | 56.79 |
Create a Homepage Banner
Welcome to the second article in the getting started with Prismic and Gatsby tutorial series. We'll be walking through the steps required to convert the content from the top homepage banner from hard-coded to use content from Prismic.
🕙 Before Reading
If you haven't already gone through the first step, we recommend you start there to download the project, launch a Prismic repository, and install the required plugins and dependencies.
The content modeling should be the first step to consider when converting a project to use a headless CMS like Prismic.
Run npm start to re-launch your site and take a look at your homepage. There's a banner at the top of the page that we'll keep as a static top-level component. Let's review the fields used to build it:
Background
API ID: banner_background
An Image field.
Title
API ID: banner_title
A Rich Text field that only accepts <h1> elements for titles.
Description
API ID: banner_description
A Rich Text field that only accepts <p> elements for text.
Learn More Button
API ID:
banner_link
banner_link_label
A Link field for the clickthrough URL and a Rich Text field only accepts <p> elements for texts.
The model for this banner is already in your Prismic repository. Click on Custom Types, and select the Homepage type to see the content modeling of the banner.
Then, click the Documents button and select the Homepage to view the live version of the banner. It should look something like this: we have a good understanding of the model let's retrieve your content from Prismic.
Run the project with npm start. Open the GraphQL Playground at and paste this GraphQL query:
query Homepage { prismicHomepage { data { banner_title { text } banner_description { text } banner_link { url type uid } banner_link_label { text } banner_background { url } } } }
Run the query by pressing the "play" button ▷ at the top to see the query results on the right.
Let's now render the results to create the homepage banner.
Here we query the homepage document and pass the results to the <HomepageBanner /> component as props. Open the file at /src/pages/index.js and paste the following code.
// index.js file import * as React from 'react' import { graphql } from 'gatsby' import { Layout } from '../components/Layout' import { Seo } from '../components/Seo' import { HomepageBanner } from '../components/HomepageBanner' import { MainContent } from '../components/MainContent' const HomeTemplate = ({ data }) => { if (!data) return null const doc = data.prismicHomepage.data return ( <Layout isHomepage> <Seo title="Home" /> <HomepageBanner title={doc.banner_title.text} description={doc.banner_description.text} linkUrl={doc.banner_link.url} linkLabel={doc.banner_link_label.text} backgroundUrl={doc.banner_background.url} /> <MainContent /> </Layout> ) } export const query = graphql` query Homepage { prismicHomepage { data { banner_title { text } banner_description { text } banner_link { url type uid } banner_link_label { text } banner_background { url } } } } ` export default HomeTemplate
Open your src/components/HomepageBanner.js file. To retrieve the props that we passed in the index.js file, replace the static content with the following code:
// HomepageBanner.js file import * as React from 'react' import { PrismicLink } from '@prismicio/react' export const HomepageBanner = ({ title, description, linkUrl, linkLabel, backgroundUrl, }) => ( <section className="homepage-banner" style={{ backgroundImage: `linear-gradient(rgba(0, 0, 0, 0.4), rgba(0, 0, 0, 0.6)), url(${backgroundUrl})`, }} > <div className="banner-content container"> <h2 className="banner-title">{title}</h2> <p className="banner-description">{description}</p> <PrismicLink href={linkUrl} {linkLabel} </PrismicLink> </div> </section> )
We're now populating the homepage banner with our Prismic content.
Some of you might have noticed that we no longer use the hard-coded Link path: /about. Link fields are now processed using the PrismicLink component from @prismicio/react and the resolved url field provided from the query.
Now let's build a Link Resolver and add it to our plugin configuration to make our links work.
The Link Resolver is a function that takes in a Prismic document object or Link field and returns the corresponding URL for that page in your site. For example, If a document type is "page" with a UID of "about" it will generate the URL path: /about. If the document type is other than "page, " it will return the root "/" URL without the UID.
Create a file such as 〜/src/LinkResolver.js and add the following code:
// LinkResolver.js file exports.linkResolver = (doc) => { if (doc.type === 'page') { return `/${doc.uid}` } return '/' }
If you're curious to learn more, check out the Link Resolving article.
Now register it with the plugin. Open the gatsby-config.js file and update the file to this:
//, linkResolver: require('./src/LinkResolver').linkResolver,`], }, }, ], }
We need to configure <PrismicLink> to use Gatsby's <Link>. Let's add a PrismicProvider component for handling internal links.
Create two files at the root of your project: gatsby-browser.js and gatsby-ssr.js, and paste the following code in both.
import * as React from 'react' import { Link } from 'gatsby' import { PrismicProvider } from '@prismicio/react' import './src/styles/reset.css' import './src/styles/common.css' import './src/styles/style.css' export const wrapRootElement = ({ element }) => ( <PrismicProvider internalLinkComponent={({ href, ...props }) => ( <Link to={href} {...props} /> )} > {element} </PrismicProvider> )
Congrats, now the content for the homepage banner comes from Prismic!
To test that the content is coming from Prismic, do the following:
- Go to your Prismic repository and open the homepage document.
- Make a change to your banner content.
- Save and publish your changes.
- In your terminal, stop the current Gatsby server by pressing Ctrl + C.
- Relaunch your server by running npm start.
This will re-build your site and update the content from your Prismic repository. When the build is complete, you can refresh the homepage and should see your updated content.
Next up, we will be replacing the rest of the homepage with Prismic content using Slices.
Was this article helpful?
Can't find what you're looking for? Get in touch with us on our Community Forum. | https://prismic.io/docs/technologies/tutorial-2-create-homepage-banner-gatsby | CC-MAIN-2022-05 | refinedweb | 962 | 58.58 |
I'm working on c++ and I need to save this struct into a file:
struct myStruct{
int value;
int value2;
MyClass * myClass[10];
};
The way that I'm saving this struct is the following:
myStruct my_struct;
my_struct.value = 1;
my_struct.value2 = 2;
for ( int i = 0; i < 10; i++){
my_struct.myClass[i] = new MyClass();
}
FILE* f = fopen(path.c_str(), "wb");
if ( f != NULL){
fwrite(&my_struct, sizeof(myStruct), 1, f);
fclose(f);
}
But, when I want to read this file, my program crashes when try to access to the array of "MyClass":
FILE* f = fopen(path.c_str(), "rb");
if ( f != NULL){
fread(&my_struct2, sizeof(struct myStruct), 1, f);
fclose(f);
}
for ( int i = 0; i < 10; i ++ ){
if ( my_struct2.myClass[i] != NULL ){
//Here is the crash
}
}
I've been searching but I can't find a solution. I only find topics about arrays of structs. I know that maybe I'm not searching very well.
Thanks.
Your
MyStruct contains twenty pointers to other structures.
By
fwrite()ing the contents of your
MyStruct to a file, you have successfully written twenty raw memory addresses of your other structures into the file, together with the other members of the
MyStruct class.
Which, of course, is utterly meaningless when you try to read them back in another process. You've read back twenty raw memory addresses. Which mean nothing to a completely unrelated process. And, accessing those memory addresses, unsurprisingly, leads to a crash since those memory addresses, for all intents and purposes, are completely random values.
What your code needs to do is not write twenty raw pointer addresses to the file, but the contents of those pointers, and what they point to.
I want to add some things to Sam's answer, even if I know this is not code review, you are writing C in C++.
C++ is not meant to be coded in C, it doesn't want to... It fought its entire life to break its bound with its deprecated father, to surpass him, to explore new meanings and way to solve problems and build efficient code. Don't do this to him... (I love C by the way, deprecated was a joke obviously ;) )
Here's how I'd do it:
#include <fstream> #include <iostream> class MyClass { public: MyClass() : _type(-1) {} MyClass(int type) : _type(type) {} inline const int &type() const { return _type; } private: int _type; }; // -- overload of operator<< that permits me to write a MyClass* to a stream std::ostream &operator<<(std::ostream &stream, MyClass *myClass) { stream << "myClass::type: " << myClass->type(); return stream; } struct MyStruct { int value; int value2; MyClass *myClasses[10]; MyStruct() { value = -1; value2 = 1; for (std::size_t i = 0 ; i < 10 ; ++i) { myClasses[i] = new MyClass(-i); } } }; // -- overload of operator<< that permits me to write a MyStruct to a stream std::ostream &operator<<(std::ostream &stream, const MyStruct &myStruct) { stream << "myStruct::" << "\n\t value: " << myStruct.value << "\n\t value2: " << myStruct.value2 << "\n\t myClasses: "; for (std::size_t i = 0 ; i < 10 ; ++i) { stream << "\n\t\t " << myStruct.myClasses[i]; } return stream; } int main() { std::ofstream outputFile("output.txt"); if (outputFile.is_open() == false) { std::cerr << "Could not open file." << std::endl; return -1; } outputFile << MyStruct() << std::endl; // -- this is where the information is written into the file outputFile.close(); }
See simple way to write a struct, you could even get it back into the struct the same way with operator>> overload, bonus is you can use on any ostream, which means it will work with sstream, std::cout and everything!
Still this is not really c++-like as there is too much (unprotected) pointers and unchecked magical number sizes (
MyClass *myClasses[10]; this is a no-no for me, because it implies this thing:
for (std::size_t i = 0 ; i < 10 ; ++i), and this shit gives me shivers).
I would probably use an std::array here , but I wanted to keep MyStruct as you defined it so the example stay "close" to what you wrote. Another way would have been to use std::unique_ptr or std::shared_ptr.
This can seem as quite a bit of work or intimidating, but you may find that useful in the future. Same goes for using the std containers(array, set, vector, map, etc...), unique_ptr and shared_ptr. But I assure you it's worth giving some time to understand them and learn how to use them. It makes things simpler and safer.
What gave me shivers earlier would be written like this:
std::array<MyClass, 10> myClasses;
Loops would go like this:
for (std::size_t i = 0 ; i < myClasses.size() ; ++i) { myClasses[i].type(); } for (std::array<MyClass, 10>::iterator itC = myClasses.begin() ; itC != myClasses.end() ; ++itC) { itC->type(); }
Or even better, a c++11 way to write a loop, that I find easier to read and write:
for (auto myClass : myClasses) { myClass.type(); }
Note that if you want to modify myClass inside this one you need to write
auto& myClass : myClasses
Hope it helps you.
Using
fwrite(&my_struct, sizeof(myStruct), 1, f); is good if your struct
my_struct contains purely static data(i.e the data for which memory was allocated at compile time). If it contains dynamic data(i.e the data for which memory is allocated at runtime) then you need to manually store such dynamic data.
Overloading
operator<< as shown my @vianney is a good method of saving/serializing dynamic data. | http://www.dlxedu.com/askdetail/3/5ea6fcc1d94b782c91992e18bdbe5233.html | CC-MAIN-2019-26 | refinedweb | 895 | 62.98 |
>
Im attempting to add and minus from a text value using buttons. What my script currently does is - when I +++++ it shows 5, then I use the - button 3 times and it gives me -3, I then use + and I get 6. I hope this makes it clear.
public class PlusMinusScript : MonoBehaviour {
private int textNumber;
public Text TextObject = null;
public void addOne(){
if (TextObject != null) {
textNumber++;
TextObject.text = textNumber.ToString ();
}
}
public void minusOne(){
if (TextObject != null) {
--textNumber;
TextObject.text = textNumber.ToString();
}
}
}
Are you sure you have only one script PlusMinusScript to a gameobject ? You must not attach this script to your 2 buttons. Only one GameObject must hold this script, then, select your buttons, drag & drop the object holding this script and call the appropriate functions.
PlusMinusScript
Do you only use one instance of this component? If there are multiple instances (ie you have one for Plus and one for Minus), then this will happen as they both have their own values for textNumber.
@hellium Inside my plus minus script I should put two buttons? @TreyH $$anonymous$$e question to you :) Thanks for your speedy replies. Im hoping i can get this now
Well not quite, we're asking if you've maybe got two "PlusMinusScript" components sitting in your scene, with your two "Plus" and "Minus" buttons each looking at unique ones.
Ah, yes. Each button has a plusminus script on it. This is where I am getting confused
Then that is probably the issue. :-)
Remove one and have your buttons reference the same instance of that script.
edit: alternatively, you can just make the textNumber static if you only intend to have one number like this ever exist and be shared across other objects, but that might start you on bad habits with other situations.
static
I, unfortunately, have 5 values that need to be incremented and decremented.
YAYAYAYYY!!!! Thank you so much for your help and patience @TreyH
Answer by Hellium
·
Mar 16, 2018 at 07:09 PM
As expected, you have attached the PlusMinusScript script to two scripts. Since the value of textNumber is not shared between these two scripts, your problem occurs.
textNumber
Attach the PlusMinusScript to only one gameObject (to your TextObject for example)
Drag & Drop the gameObject holding the Text component to the public field of the PlusMinusScript
Text
Select your first button, drag & drop the gameObject in step #1, and select the addOne function
addOne
Select your second button, drag & drop the gameObject in step #1, and select the minusOne function
minus.
Terrain collider - add
2
Answers
Create a game objective.
2
Answers
Adding a life on collision, otherwise subtract
1
Answer
C# Adding Multiple Elements to a List on One Line
4
Answers
Adding tools with respect to points
1
Answer | https://answers.unity.com/questions/1481250/incrementing-and-decrementing-a-text-value-for-a-t.html?sort=oldest | CC-MAIN-2019-22 | refinedweb | 460 | 61.56 |
Domino Encyclopedia FAQ Center

1. Q: In the DOMCFG custom login form, the Login button appears at the very bottom of the page.
A: Add the missing closing </div> tag at the end of the form.
2. Q: A new server was built by copying the DOMINO directory from another server, but it still launches with the original server's data directory.
A: Modify the following registry key so that it points to the new server's own directories:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Lotus Domino Server (LotusDominoData)
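The answer above names the key only loosely; a hedged sketch of what the service entry typically looks like (the exact service name depends on the label chosen for the data directory at install time, and all paths here are placeholders, not values from this FAQ):

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Lotus Domino Server (LotusDominoData)
    ImagePath = "C:\Lotus\Domino\nserver.exe" =C:\Lotus\Domino\notes.ini
```

After copying the program directory from another machine, check that ImagePath — and the notes.ini it points to — reference the new server's own program and data directories rather than the original server's.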
3. Q: At startup, the error "Received the following error performing a update server's" appears.
A: If the error occurs when starting the server, check the fully qualified host name entered in the server document; if it occurs when starting the client, check the default user's reader settings on the NAMES database.
4, Q: Lotus Domino on AIX, the data can not be removed
A: because the AIX operating system character set is different transplant procedures,
under unix "\" is a normal character. all paths are "/" separated.
5, Q: In the web page, send a message to the user fill <A href="mailto:test@test.com"> test@test.com outlook prepared to receive when in use is displayed in the recipient address [email = test / test@test.com], how change <A href="mailto:test@test.com"> test@test.com
A: In the names.nsf in the user's e-mail address also written on
6, Q: R6 or more database corruption
A: Remove the index - rebuild - Compression - repair (to the log file reports all deal with the database \ Repair Service records database) - Update Index
7, Q: in the Domino server, LEI 6 or 7 failed to install, no error message
A: If the server's notes.ini in the following parameters may fail to install LEI. debug_threadid = 1, before the installation of LEI, the notes.ini file to debug_threadid = 1, delete or comment out this line, the installation is complete and then open it. Because the JVM installation program calls a program called NotesAccess with the Domino server. NotesAccess through the Notes API Toolkit commands to the server, when set debug_threadid, the return process and the thread number is treated as an error message, resulting LEI installation failed.
8. Q: Mail routing fails with "No route found to domain".
A: Correct the Notes network domain name.

9. Q: A newly registered user cannot log in yet.
A: Run `tell adminp process all` so that registered users can log in immediately.

10. Q: How do I restart the HTTP service by itself?
A: `tell http restart` restarts the HTTP task.

11. Q: The server reports that a database file is in use.
A: Run `dbcache flush` to clear the server's database cache.

12. Q: Problems caused by renaming a folder in the data directory.
A: load updall -r directory\*.nsf

13. Q: The entire Lotus directory was copied from one server to another; the HTTP service now misbehaves inexplicably. Pages sometimes cannot be displayed, and HTTP suddenly stops.
A: Re-run the installer over the copied directory, then delete the copy once the reinstall is complete.

14. Q: In Lotus, an HTML page sometimes renders as blank, but after re-opening and re-saving the page there is no problem.
A: This problem recurs intermittently, so beware of relying on pages for such content and prefer forms instead.
15. Q: I want to find the position of a space in a string, but both Instr and Instrbp have problems: 1. Instr("any string has space in it", " ") returns 0. 2. Instrbp on a string containing Chinese characters crashes Notes or Domino.
A: Use a hand-written search instead:

Function strInStr(str1 As String, str2 As String) As Integer
	Dim i As Integer, length As Integer
	strInStr = 0
	length = Len(str2)
	For i = 1 To Len(str1) - length + 1
		If Mid(str1, i, length) = str2 Then
			strInStr = i
			Exit Function
		End If
	Next
End Function
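For comparison, the same search is a one-liner in JavaScript (which this FAQ already uses elsewhere); indexOf handles spaces and multi-byte characters without the Instr/Instrbp problems described above. A minimal sketch:

```javascript
// 1-based substring search, mirroring the LotusScript strInStr above.
// Returns 0 when str2 does not occur in str1.
function strInStr(str1, str2) {
  return str1.indexOf(str2) + 1;
}

console.log(strInStr("any string has space in it", " ")); // 4
console.log(strInStr("some text", "missing"));            // 0
```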
16. Q: While developing a database with the Lotus iNotes offline feature, I found that after Lotus iNotes Sync runs, all code, formulas, view column formulas and agents in the offline database are hidden. Opening a form prompts "hidden formula"; the form design is still visible but the formulas are gone. Some agents have lost their code as well. iNotes also synchronizes same-named databases in different folders (even a differently named replica of the same database gets synchronized), yet the database cannot be found in the local Lotus iNotes Data directory. The Dolslog.nsf log confirms that iNotes synchronized a same-named database in a different directory, and several backups of the database under Lotus\domino\data now have their design completely hidden.
A: When testing Lotus iNotes, do not use a database that is still under development for offline tests, and back it up to another drive first.
17. Q: When one form is used to display a document created with another form (for example, a document created with form A is displayed with form B), and the displayed form contains a computed rich text field, the rich text becomes garbled in edit mode: extra <UL> tags appear out of nowhere, even though the field value contains no <UL> tag.
A: Change the computed rich text field to a computed text field; it will then display correctly.

18. Q: A try/catch statement placed in the JS Header of a form or page cannot be saved.
A: It can only be written inline in the form itself.

19. Q: If a JS file (e.g. test.js) is stored on a Lotus page and included from other pages, the content of test.js is often emptied by an unexplained bug: the file is blanked for no reason, and accessing test.js in IE also shows an empty page.
A: Re-save the page or refresh the design from the template. Storing test.js as an image resource in the database avoids the sudden blanking, but introduces a new problem: if the template's test.js is updated, refreshing the database design will not update the file.

20. Q: If a page has too many hide-when conditions, or they are too concentrated, some conditions or formulas stop working. This is especially common when table cells share linked hide-when settings: modifying the hide-when condition of one cell does not update the conditions of the other cells.
A: Modify one hide-when condition, close the form or page, re-open it, and check the hide-when conditions again; then set the remaining conditions. This usually resolves the problem. Also, if the same document contains more than one field with the same name (usually generated programmatically or by an agent), hide-when conditions based on that field work only for the first occurrence; the content of the second same-named field cannot be hidden.

21. Q: In browser mode, a form has a check box field. Select some values and submit: if the submit button runs @Command([FileSave]); @Command([FileCloseWindow]), there is no problem. If it instead uses JavaScript, document.forms[0].submit(), there is a problem: open the document again in edit mode, deselect all the items, and save; the next time the document is opened, the modification has not taken effect.
22. Q: When signing a database, the scheduled signing reports success with 0 errors and finishes in a flash, without actually signing anything.
A: Create a new replica, or set up a cross-certificate so that signing is no longer required. Another solution is to refresh the design from the template on a server that does not require signing.
23. Q: When you call Set doc = db.GetDocumentByUNID(uid) on a NotesDatabase and no document with that UNID exists, the method raises the error "not a valid document ID" instead of returning Nothing. So the following does not work by itself:

Set doc = db.GetDocumentByUNID(uid)
If Not doc Is Nothing Then
	...
End If

A: You cannot test the return value directly to find out whether the document was retrieved. The solution I finally used is to ignore the error:

On Error Resume Next
Set doc = db.GetDocumentByUNID(uid)
If Not doc Is Nothing Then
	...
End If

You can also jump to an error handler instead. The documentation does not describe any return value for the missing-document case; the call simply raises an error.
24. Q: Domino is set up to send mail to an SMTP server (receiving may not be configured). With the default settings, outgoing letters are always rejected because the address contains non-ASCII characters.
A: The non-ASCII characters usually come from Chinese user names. To set up the Domino server as an SMTP server, in the server document set: Routing tasks: "Mail Routing, SMTP Routing"; Fully qualified Internet host name: host name + Internet domain name (joined with "."); SMTP listener task: Enabled. First designate a separate server to send and receive Internet mail (running the SMTP mail task), then create three documents in the public address book: a Global Domain document, a Foreign SMTP Domain document, and an SMTP Connection document. Main Global Domain settings:
In the "Basics" section: a global domain name, "Global domain role" set to "SMTP MTA", and "default global domain".
In the "SMTP Address Conversion" section: set the "Internet domain suffix" and the Notes domain. Foreign SMTP Domain settings: Internet domain *.*, plus an arbitrary domain name.
Create a server Connection document with connection type SMTP; its destination domain must match the Internet domain set in the Foreign SMTP Domain document, and the schedule should route messages immediately. After completing the above configuration, mail can be sent.
In R5, SMTP can use the operating system's DNS. Only the "Router/SMTP" page of the Configuration document in the Domino Directory needs to be set; in the Basics section: "SMTP used when sending messages outside of the local Internet domain": Enabled; "Relay host for messages leaving the local Internet domain": if you connect to the Internet through a proxy server or firewall, enter its IP address; if you are directly connected, leave the field empty. After the change, restart the Domino server.

25. Q: How can the "document saved as a save conflict" message be suppressed, so that the system dialog is replaced by one of your own?
A: Write a program in the QuerySave event. Look up the document in the database by the current document's UNID; if the document has been modified by someone else, its $Revisions field value will differ from the one currently held.
26. Q: When Notes quits unexpectedly, the system usually prompts you to restart the computer. Is there a way to start Notes again immediately without rebooting?
A: Just manually kill the program that the abnormal exit leaves in memory, nhldaemn.exe; then Notes can be started again immediately without restarting the computer. If the machine is also running Domino, shut the server down first before restarting Notes.
27. Known Lotus database limits:
Maximum database size?
The OS file size limit (up to 64GB).
Maximum text field size?
15KB (storage).
Maximum size of a rich text field shown in a view column?
Limited only by available disk space, up to 1GB.
Maximum size of a single paragraph in a rich text field?
64KB.
How many response levels in a hierarchical view, and how many documents per level?
31 levels; 300,000 documents per level.
How many characters are allowed in names, such as form names?
Database title: 96 bytes. File name: at least 255 on Windows and Unix platforms, and/or limited by the operating system; 31 on a local Macintosh workstation.
Field name: 32.
View name: 64.
Form name: 32.
Agent name: 32.
How many fields can a database contain?
About 3,000 (the total length of all field names is limited to 64KB). Enabling the database property "Allow more fields in database" lifts the count so that only the 64KB of field names applies.
How many columns can a table contain?
64.
How many rows can a table contain?
255.
How many views can be added to a database?
There is no limit; however, as the number of views grows, the time needed to display other views also increases.
How many forms can be added to a database?
Limited only by the database size.
How many columns are allowed in a view?
289 ten-character columns; the exact number depends on the number of characters per column.
How many documents can be imported into a view?
Documents totaling at least 350K.
How many cascaded views does a database allow?
200.
Maximum margin (in inches)?
46.
Maximum page cropping (in inches)?
46.
Maximum selectable/printable font size?
250.
How many documents are allowed in a view?
A view index can be up to 130MB.
How many rows can a "tabbed table" hold?
Limited only by available disk space.
Maximum number of entries in an access control list?
About 950 names (the ACL cannot exceed 32,767 bytes).
Maximum number of characters in an ACL entry?
75 characters.
Maximum allowed password length for an ID?
63 characters.
For an ID with multiple passwords, how many users can be authorized?
8 users.
28. How to maximize the speed of Domino applications on the Web (formula tips):
1. Prefer @ClientType and @UserRoles (4.6 and later) in hide-when conditions.
2. When using @DbLookup and @DbColumn, refer to columns by number rather than by name: Domino has to compare field names at evaluation time, and column numbers are much faster.
3. When using @DbColumn, @DbCommand and @DbLookup, use "Cache" wherever possible, as it is faster than "NoCache".
4. Perform lookup formulas against small hidden views. Try to reduce the amount of data by combining values into a single string per column, or by keeping related data in the same column.
5. Combine useful lookup values in the same column of the lookup view; this improves search speed over wider ranges.
6. Hold return values in temporary variables to avoid redundant lookups. When a lookup result is used several times in a formula, a variable should replace the repeated lookup.
7. Use the LotusScript GetView, Search and FTSearch methods instead of formulas; this is at least 15% faster.
29. Q: A full-text search for the specific word "Topic" generates an error. In a database with a full-text index, searching for the word "Topic", whether through the view search bar or the LotusScript index methods, returns the error "Query is not understandable", or a garbled Chinese error.
A: "Topic" is one of the reserved words of the Notes full-text indexing engine. The other reserved words are: AND, NOT, OR, CONTAINS, NEAR, ACCRUE, EXACTCASE, TERMWEIGHT, PARAGRAPH, FIELD, SENTENCE. To get around the limitation, add a wildcard, for example search for "Topic*" instead of "Topic", or put the word in double quotes.
30. Q: I removed myself from the ACL as administrator. A difficult problem!
A: In Start/Run type "d:\lotus\domino\nlnotes.exe", enter the server ID password, then open the database in that session and modify the ACL directly.

31. Q: An administrator was fired and took all the IDs away. What can you do?
1. In the ADMIN certification configuration menu, change the user identifier attributes and the certifier password.
2. In the server document's "Security" section, enable "Compare Notes public keys against those stored in Directory", decide whether to allow anonymous connections, and use "Check passwords on Notes IDs".
3. In Person documents, enable "Check Notes ID password" for individuals.
4. In group settings, enable "Check Notes ID password" as well.
5. Create new public keys.
32. What should we do as administrators?
This list shows the server maintenance tasks a system administrator should complete daily, weekly or monthly to keep the server running efficiently:
Back up the server: daily, weekly, monthly.
Check mail routing: daily.
Run the Fixup task to repair all damaged databases*: daily.
Monitor the shared mail database (MAILOBJ.NSF), when enabled: daily.
Monitor the Administration Requests database (ADMIN4.NSF): weekly.
Monitor replication of the databases that must be maintained: weekly.
Monitor modem communications: daily.
Monitor memory and disk space: daily, weekly, monthly.
Monitor server load: monthly.
Monitor Web server performance: monthly.
Monitor server cluster requests: daily.
* If a database is in Domino R5 format and does not use transaction logging, you can use the Fixup task to repair it when damaged. If a database is in R5 format and uses transaction logging, you cannot run Fixup on it, because Fixup interferes with the transaction tracking that logging performs; a damaged database must instead be restored from backup. Fixup can still be run on databases in Domino R4.x and earlier formats.
33. To shut down the server at a scheduled time: in the server configuration, add a Program document that runs "nserver -q" (or "nserver -c quit") and set the execution time.

34. Recording client IDs: Domino has a billing service. Once enabled, it records the information of every client that accesses the server, including, of course, the IP address. No additional tools are needed.

35. How do you design a form that keeps track of the document's author?
Add a hidden shared field to the form whose value formula is:
@If(@IsNewDoc; @UserName; From)
so the field records the author of the document.

36. While debugging Lotus Domino programs, the qnc.exe program is often triggered and Notes exits. How can this be solved?
A: While designing and debugging Lotus Domino/Notes programs, we often trigger qnc.exe and get an error. In fact qnc.exe only implements Notes memory-protection measures; the program itself is not necessarily in error. To make debugging easier, type "qnc _u" in the Lotus Domino command window to suspend it; to re-enable it later, load it again with "qnc _i".
37. How to configure multiple Web sites on one Domino R5 Web server: the virtual server solution. You can split a Domino Web server into multiple virtual servers, so that one Domino Web server hosts several Web sites. Before configuring virtual servers you must set up the network connection for each one. In R5 each virtual server can have its own IP address, or multiple names can be mapped to the same IP address. Domino does not limit the number of virtual servers; the number is determined mainly by the operating system and hardware.
** Note: in R4.6 each virtual server must have its own separate IP address.
The following describes how to create a virtual server:
1. Start the Domino Administrator client and click the "Configuration" tab.
2. Select the view "Server" - "All Server Documents" and select the server document for which you want to create a virtual server.
3. Click the "Web" button above - "Create Virtual Server".
4. Select "Virtual Server" and click "OK".
5. On the "Basics" tab, complete the following fields:
1) IP address: the IP address used by the virtual server.
2) Host name: (optional) the virtual server's host name.
3) Default home page: (optional) the HTML file displayed when a user accesses the virtual server; this field applies only when the "Home URL" field is empty.
6. Click the "Mapping" tab and complete the following fields:
1) Home URL: the URL command executed when users access the virtual server; it can display a database or the server's database list. This field takes priority over "Default home page".
2) Fill in the remaining file-path fields as needed.
7. Click the "Security" tab and set the virtual server's security options.
8. Save the document.
9. At the server console, enter the command "tell http restart" to restart the HTTP service.
How to display the virtual server documents: start the Domino Administrator client, click the "Configuration" tab, and select the view "Web" - "Web Server Configurations". The virtual server documents appear there as response documents to the server document.
38. How to record when a user logs out.
A: Use the browser's unload event, which fires when the user leaves the current page or closes the browser, to notify the server by requesting a URL. This method records the user's logout time most accurately. Sessions are less accurate, because a session stays active for a while and is only recorded when it times out. For the login side, customize the domlog.nsf form (mainly the user's login information) and the domlog.nsf views to get at the records you need.
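A minimal sketch of the unload-based logout recording described above, in JavaScript. The agent name (LogLogout) and database path are hypothetical examples, not part of Domino itself:

```javascript
// Build the URL of a hypothetical logging agent; the unload handler would
// request it so the server can record the logout time.
function buildLogoutUrl(dbPath, userName) {
  return "/" + dbPath + "/LogLogout?OpenAgent&user=" + encodeURIComponent(userName);
}

// In the browser, the page would register something like:
//   window.onunload = function () {
//     new Image().src = buildLogoutUrl("log.nsf", currentUser);
//   };

console.log(buildLogoutUrl("log.nsf", "John Doe/ACME"));
```

The image-request trick is used because a normal asynchronous request may be cancelled when the page is torn down.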
39. An ordinary HTML form can be submitted to a Notes database by pointing the form's action at a Notes agent.
The HTML form reads:

<form name="form1" action="http://yourserver/yourdb.nsf/youragent" method="get">
<!-- the method must be GET! -->
<p><input name="OpenAgent"></p>
<!-- the name of the first input must be OpenAgent! -->
<p><input id="text2" name="text2"></p>
<!-- you can add any number of inputs -->
<p><input id="submit1" type="submit" value="Submit" name="submit1"></p>
</form>

The agent is written as follows:
1. A shared agent.
2. Runtime trigger: "Manually from agent list".
3. Target documents: all documents in the database.
4. LotusScript:

Dim s As NotesSession
Set s = New NotesSession
Dim str As String
str = s.DocumentContext.Query_String_Decoded(0)
' str now holds everything the HTML form submitted (text boxes etc.),
' including the field names and their contents.
' From str you can create a new document and add fields based on its content.
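On the agent side, Query_String_Decoded delivers the raw field=value pairs as one string. A sketch of the parsing the agent then has to do, written in JavaScript for illustration (the LotusScript version would split on "&" and "=" the same way):

```javascript
// Parse a decoded query string like the one the agent receives,
// e.g. "OpenAgent&text2=hello&submit1=Submit".
function parseQueryString(str) {
  const fields = {};
  for (const part of str.split("&")) {
    const eq = part.indexOf("=");
    if (eq === -1) continue;            // skips the leading "OpenAgent" token
    fields[part.slice(0, eq)] = part.slice(eq + 1);
  }
  return fields;
}

console.log(parseQueryString("OpenAgent&text2=hello&submit1=Submit"));
// { text2: 'hello', submit1: 'Submit' }
```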
40. The Notes limit on folders and views per database, and the workaround: the product itself has no preset limit, but is constrained by the number of available handles.
The maximum number of handles is 10,495. When the number of folders and views in a database approaches the limit, Notes reports out-of-memory errors similar to the following:
For Notes R5.x:
"Maximum number of memory segments that Notes can support has been exceeded"
For Notes R6.x:
"Insufficient memory - too many design elements (Desk Design Pool)."
When a database contains a great many folders, Notes allocates memory for each view or folder. Eventually Notes runs out of memory, because it can no longer track all the allocated locations (the handle maximum is the root cause of the restriction). The workaround is to keep the number of folders and views in the database within a reasonable range, or to shorten the names of the folders and views.
41. Backup scope: from the server's original Domino\Data directory, save cert.id, server.id, names.nsf, log.nsf, certlog.nsf, the Mail directory, and the other application databases; from the original Notes\Data directory, save user.id.
Back up the domino\notes.ini file, and back up these databases (under domino\data):
*.dsk, names.nsf, admin4.nsf, bookmark.nsf, busytime.nsf, catalog.nsf, certlog.nsf, certsrv.nsf, events4.nsf, log.nsf, mail*.box, mail\*.*, nntppost.nsf, statmail.nsf, statrep.nsf, webadmin.nsf, all IDs (*.id), and the \backup setup.nsf database.

42. Limits on attachments submitted over HTTP (in the server document's Internet Protocols section):
1) The HTTP protocol limit: "Maximum size of request content".
2) The Domino Web Engine limit: "Maximum POST data".
43. Refreshing database designs in bulk:
1. First create documents describing the databases in a configuration/management database.
2. Build a script library, "RefreshDesign":

Declare Function W32_NSFDbOpen Lib "nnotes.dll" Alias "NSFDbOpen" _
(Byval dbName As String, hdb As Long) As Integer
Declare Function W32_NSFDbClose Lib "nnotes.dll" Alias "NSFDbClose" _
(Byval hdb As Long) As Integer
Declare Function W32_DesignRefresh Lib "nnotes.dll" Alias "DesignRefresh" _
(Byval Server As String, Byval hdb As Long, Byval dwFlags As Long, _
Byval null0 As Long, Byval null1 As Long) As Integer
Dim hDB As Long
Dim rc As Integer

3. In the view of database-information documents, select the documents to refresh and run this view action:

Dim session As New NotesSession
Dim ws As New NotesUIWorkspace
Dim view As NotesView
Dim db As NotesDatabase
Dim dc As NotesDocumentCollection
Dim doc As NotesDocument
Dim fullname As String
Dim templateServer As String
Dim i As Integer
templateServer = "*********/OA"  ' name of the server holding the templates
Set db = session.CurrentDatabase
Set view = ws.CurrentView.View
Set dc = db.UnprocessedDocuments
If dc.Count <> 0 Then
	If Msgbox("OK to refresh the design of the selected databases from the template server?", 48 + 4, "Operation prompt:") = 7 Then
		Exit Sub
	End If
	For i = 1 To dc.Count
		Set doc = dc.GetNthDocument(i)
		fullname = doc.database_server(0) + "!!" + doc.database_dir(0) & "/" & doc.database_filename(0)
		If fullname <> "" Then
			rc = W32_NSFDbOpen(fullname, hDB)
			If rc = 0 Then
				'Print "Refreshing design of " & doc.database_name(0) & "......"
				Call W32_DesignRefresh(templateServer, hDB, 0, 0, 0)
				rc = W32_NSFDbClose(hDB)
				'Print "ok!"
			End If
		End If
	Next
End If
44. A portal's rejection of bulk mail, and the confirmation letter. The following is how the Sina mail service handled a rejection for my company:

Dear user pennykristy:
Hello!
We are very sorry for the inconvenience. You can solve the problem as follows:
If you are the mailbox administrator of another site and mail sent from your site's mail system to Sina is rejected, please use your site's mail system to send a message to our engineers with the subject "free e-mail delivery problem". In the message, please specify: name, outgoing mail server IP address, contact person, contact e-mail address, telephone number, and address, and send it to mailmaster@staff.sina.com.cn.
Our engineers will then contact you.
We hope our answer satisfies you!
If you have any questions, when you reply please be sure to attach the previous letter so we can resolve your problem.
For more frequently asked questions, see the FAQ link.
Thank you for supporting Sina.
webmaster@vip.sina.com
2005-05-18 17:22:42
Customer Service Commissioner No. 13, at your dedicated service. Sina national unified service hotline: 95105670 (no long-distance charges)
45. Updating mail file designs in bulk:
load convert mail\*.nsf * mail6.ntf

46. How to change the certifier ID (cert.id) password: on the Administrator's "Configuration" page, open "Certification" in the "Tools" menu on the right and select "ID Properties". Select and open the certifier ID (cert.id) and enter the current password; if it is correct, the certifier ID dialog opens. On the "Basics" page select "Set Password", enter the new password, confirm it, and finally click "OK".
47. Managing multi-language input in a single database: the charset=[MIME charset name] parameter can be added to any URL command to specify the character set used to return the form or page, regardless of the browser's preferred language setting. The server cannot generate the charset=[MIME charset name] parameter automatically; it must be built into the application.
Syntax: /FormName?OpenForm&charset=[MIME charset name]
Here:
FormName is the name of the form to open.
[MIME charset name] is the name of the character set the returned form will use.
Usage:
The charset=[MIME charset name] parameter overrides the form's $$HTMLContentLang field. For using the $$HTMLContentLang field to enable multiple character sets for database input, see "Managing multiple-language input in a single database" in the "Lotus Notes, Domino and Designer Release Notes - Release 5.0.2".
Example:
A company's sales staff are distributed in **, ** and Russia. Each staff member must submit a monthly performance summary to a single database. If the URL command that returns the summary form carries the charset=[MIME charset name] parameter, the sales staff can use the same database with the English, Japanese and Russian character sets. When the server receives the command, it returns the form in the specified target character set. A URL command that returns the form using the Japanese character set ends as follows:
...?OpenForm&charset=Shift_JIS
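Because the server never adds the charset parameter itself, the application has to construct the URL. A small sketch (the database path and form name are made-up examples):

```javascript
// Append the charset override to an ?OpenForm URL, per the syntax above.
function openFormUrl(dbPath, formName, charset) {
  let url = "/" + dbPath + "/" + formName + "?OpenForm";
  if (charset) url += "&charset=" + charset;
  return url;
}

console.log(openFormUrl("sales.nsf", "Summary", "Shift_JIS"));
// /sales.nsf/Summary?OpenForm&charset=Shift_JIS
```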
48. Ways to repair a damaged database. If you encounter database corruption, you can try any of the following methods. Corruption is much less of a problem for logged R5 databases; these methods mainly apply to R4 databases and to R5 databases that are not logged.
Run Fixup to fix corrupted views and documents.
Run Updall to fix corrupted views and full-text indexes; if a view is corrupted, run Updall before you run Fixup.
Run Compact to correct the defects Fixup did not fix; if the database is an R5 database, use the -C option.
Press SHIFT+F9 to rebuild one view; press CTRL+SHIFT+F9 to rebuild all views in the database.
49. Customizing the Web "Form processed" confirmation.
After a Web user submits a document, Domino responds with the default "Form processed" confirmation. To replace the default response, add a computed text field to the form, name it $$Return, and use HTML as the computed value to create the custom confirmation.
Showing a personalized response: the following $$Return formula returns the reply "Thank you" together with the user name:

who := @If(@Left(From; " ") = ""; From; @Left(From; " "));
@Return("<h2>Thank you, " + who + "</h2><br><h4><a href=/register.nsf/Main+View?OpenView>Main View</a></h4>");
50. Displaying custom error messages. To customize the error messages shown to Web users, add custom error-message forms to the database. If an error condition occurs and the matching custom form exists, Domino uses that form to display the error message; otherwise Domino uses the default error-message form. Messages added to the database override the server-wide messages set by the administrator.
To associate a form with an error condition, create a form with one of the following names, then create an editable text field named MessageString to hold the error message. Error messages are displayed together with the other text, links and objects on the form. Form names and their conditions:
$$ReturnAuthenticationFailure: the user name and password could not be verified.
$$ReturnAuthorizationFailure: the user does not have sufficient access to the database.
$$ReturnDocumentDeleted: a document was successfully deleted.
$$ReturnGeneralError: any other error condition.
Linking to another page based on a submitted field value: include a URL link to another page in the returned HTML. The following $$Return formula returns a reply based on the region the user selected. For example, if the user chooses Europe, the message "Visit our site in Italy" is displayed with a link to the Italian Web site (assuming the formulas "stdAnswer" and "stdFooter" are defined beforehand):

@If(Region = "Asia"; stdAnswer + "<h2>Visit our site in <a href=\"http://...\">Japan</a></h2>" + stdFooter;
Region = "Europe"; stdAnswer + "<h2>Visit our site in <a href=\"http://.../it_ciao/it_ciao.htm\">Italy</a></h2>" + stdFooter;
stdAnswer + stdFooter)
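The same branching can be expressed outside the formula language. Here is the logic of the region-based $$Return formula as a JavaScript sketch; stdAnswer, stdFooter, and the site URLs are placeholder assumptions, as in the formula:

```javascript
// Mirror of the region-based $$Return formula above.
function regionReply(region, stdAnswer, stdFooter) {
  if (region === "Asia") {
    return stdAnswer
      + '<h2>Visit our site in <a href="http://example.com/jp/">Japan</a></h2>'
      + stdFooter;
  }
  if (region === "Europe") {
    return stdAnswer
      + '<h2>Visit our site in <a href="http://example.com/it_ciao/it_ciao.htm">Italy</a></h2>'
      + stdFooter;
  }
  // Any other region gets the plain answer, like the formula's default branch.
  return stdAnswer + stdFooter;
}

console.log(regionReply("Europe", "<p>Thanks.</p>", "<hr>"));
```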
Returning a different page: to jump to a different Web page after submission, put that page's URL in square brackets as the $$Return value. When a user submits the document, the Web client displays the referenced page. For example, a $$Return formula that shows a company's home page is simply the URL in brackets:

"[http://...]"
51. To help users create and read documents quickly, follow these form-design guidelines:
Avoid using large bitmaps or graphics on a form.
Avoid the form property "Automatically refresh fields". Instead, use "Refresh fields on keyword change" on the selected fields, or write LotusScript field events that recalculate the document or update other fields when the user leaves a specific field.
Avoid long forms containing many computed fields.
Minimize the use of @DbLookup and @DbColumn formulas in field design, or replace them with faster, easier-to-debug LotusScript programs.
Use simple formulas in hide-when conditions.
If possible, avoid fields that recompute repeatedly. Otherwise, set these fields to "Computed when composed" so that they are calculated only when the document is created; if necessary they can still be updated later by a button, action or agent.
Use LotusScript form events rather than field formulas to set field values. For example, to reset a status field when the document is saved, you can create a QuerySave event script instead of writing a formula such as @If(@IsDocBeingSaved; "x"; "y").
Streamline the number of fields, particularly hidden fields: use form events rather than fields to perform logic, and avoid unnecessary recalculation. For example, if a form contains a hidden computed field State used to decide where a workflow document has been and where it must be sent next, replacing the field with LotusScript in the QuerySave event means the value is set only when the document is saved, not every time it is opened or refreshed.
52. Comparing agents, servlets, and CGI programs
Agents, servlets, and CGI programs all let you extend the functionality of Domino Web applications. Agents can be tightly integrated with a Web application through the form WebQueryOpen and WebQuerySave events. Servlets can use special features of the servlet API classes, such as session and cookie management. As Java grows in popularity, using servlets instead of CGI programs to develop new applications has become a trend; however, many existing CGI programs remain in use.
If you are writing your own application and need to program features on the server, you must choose among these program types. Each type has its own advantages and may be the best choice under certain circumstances. The following are typical uses for each type:
Agent: programs that read or post Domino documents. Programs that run at scheduled times or when a database event (such as the arrival of new mail) occurs.
Servlet: programs that use standard Java interfaces (such as JDBC). Programs that maintain HTTP sessions or use cookies. Complex or resource-intensive Java programs.
CGI program: programs that need low-level access to system resources. Programs that connect to other products through non-Java APIs.
Comparing the following properties of these program types will help you choose.
What languages can the program be written in?
Agent: Java, LotusScript, or the Notes formula language. The agent itself is cross-platform.
Servlet: Java. The servlet itself is cross-platform.
CGI program: a platform scripting language, any language that can be compiled into an executable file, or a cross-platform language (such as Java or Perl).
Where is the program stored?
Agent: stored in a Domino database, which means agents can take advantage of database replication and server clusters.
Servlet: stored in the file system, usually in the domino\servlet directory.
CGI program: stored in the file system, usually in the domino\cgi-bin directory.
How does a Web user invoke the program?
Agent: invoked automatically from a WebQueryOpen or WebQuerySave event, or invoked directly by an OpenAgent URL. Agents can also be triggered by server events (such as the arrival of new mail) or run on a schedule.
Servlet: invoked directly by a URL. Domino recognizes two types of servlet URLs: the first type specifies the servlet by name; the second type specifies a file extension that the Domino administrator has mapped to the servlet.
CGI program: invoked directly by a URL.
When does the server load and unload the program?
Agent: loaded on each invocation and unloaded after execution.
Servlet: loaded once; unloaded when the HTTP task shuts down or restarts. Compared with agents and CGI programs, this gives servlets a large performance advantage. However, it also means that multiple requests can access a servlet class simultaneously, so servlet code must be thread-safe.
CGI program: loaded on each invocation and unloaded after execution.
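Because a single servlet instance serves many simultaneous requests, any per-instance mutable state needs synchronization. The sketch below (a hypothetical hit counter, not part of any Domino API) shows one common approach, an AtomicInteger, exercised from several threads:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class HitCounter {
    // A plain int here could lose updates under concurrent access;
    // AtomicInteger makes the increment thread-safe.
    private final AtomicInteger hits = new AtomicInteger();

    public int record() {
        return hits.incrementAndGet();
    }

    public int total() {
        return hits.get();
    }

    public static void main(String[] args) throws InterruptedException {
        HitCounter counter = new HitCounter();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.record();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(counter.total()); // prints 4000
    }
}
```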
How does the program interact with Domino?
Agent: LotusScript and Java agents can use the Domino object classes. Most agents can also use formula @functions.
Servlet: accesses Domino through the CORBA (Common Object Request Broker Architecture) interface.
CGI program: accesses Domino through the CORBA interface, or through the Domino C or C++ API.
What security protection is available to the program?
Agent: to invoke an agent, a Web user must have "Depositor" or higher access to the database containing the agent. An agent runs under the identity of its creator or of the user, and all Domino security features apply to the operations the agent performs.
Servlet: access to the servlet is controlled by the File Protection documents in the Domino Directory. If the servlet accesses Domino through the CORBA interface, it can specify a Domino user name and Internet password; Domino security applies to all CORBA operations.
CGI program: access to the program is controlled by the File Protection documents in the Domino Directory. If the program accesses Domino through the C API, the server's identity is used. If the program uses the CORBA interface, it can specify a user name and Internet password. In both cases, Domino security applies.
53. Running servlets in Domino
Writing servlets
To write a servlet, you need a Java compiler and the servlet API. Both are available from Sun Microsystems' Web site. Download the Java Development Kit (JDK), which contains the compiler and other basic tools, and the Java Servlet Development Kit (JSDK), which contains the servlet API specification, the servlet .jar file (jsdk.jar), and a small sample servlet runner. The Sun Web site also provides links to other servlet resources.
You can also use any of the popular Java development environments to write servlets. For convenience, the Domino server and Designer installations include a copy of the jsdk.jar file identical to the one provided in Sun's JSDK.
Sun updates the JDK and JSDK regularly. Domino 5.0 supports JDK 1.1.6 and JSDK 2.0. Domino Quarterly Maintenance Releases (QMRs) often incorporate Sun's updates, so check the QMR release notes to verify the supported JDK and JSDK versions.
Enabling Domino servlet support
Servlets are loaded and invoked by the Domino Java Servlet Manager, which is part of the HTTP server task. The servlet run time is supported by the Java Virtual Machine (JVM) provided with Domino. When the HTTP task starts, it automatically starts the servlet manager and loads the JVM, and it writes status information about these operations to the server console and log file.
The servlet manager is controlled by settings in the Server document of the Domino Directory. The settings are located on the "Internet Protocols - Domino Web Engine" tab of the Server document, as follows:
Java servlet support
  None: (default) the HTTP task loads neither the servlet manager nor the JVM.
  Domino Servlet Manager: the HTTP task loads the JVM and the servlet manager.
  Third Party Servlet Support: the HTTP task loads the JVM but not the Domino servlet manager. This allows the use of a third-party servlet manager such as IBM WebSphere Application Server.
Servlet URL path: the path in a URL that tells Domino the URL refers to a servlet. The default path is /servlet.
Class path: a list of one or more paths that the servlet manager's class loader searches to find servlets and their supporting classes. This setting lets you add more paths. You can specify directories, JAR files, and ZIP files; a path can be absolute or relative to the Domino data directory. The default is domino\servlet.
Examples:
Relative directory path: domino\servlet
Absolute directory path: c:\apps\MyServlets
JAR file: c:\javamail\mail.jar
ZIP file: domino\servlet\sql.zip
Servlet URL file extensions: a list of file extensions indicating that a URL refers to a Domino servlet. Each extension in the list must be mapped to a single servlet by a directive in the servlets.properties file. The default is no extensions.
The following settings control the Domino Servlet Manager's run-time support for the HttpSession interface of the Java Servlet API. These settings do not apply to servlets that do not use this interface.
Note: HttpSession support is completely independent of Domino's "HTTP Session Authentication" feature.
Session state tracking
  Enabled: (default) the servlet manager periodically checks all HttpSession instances for user activity. A session that remains idle for a given period of time is automatically terminated. The servlet manager calls the instance's HttpSession.invalidate() member function to notify the servlet that the session has been terminated.
  Disabled: session activity is not checked.
Idle session timeout: the number of minutes a user may remain inactive before the session is terminated. The default is 30 minutes.
Maximum active sessions: the number of simultaneously active sessions allowed. The default is 1,000 sessions. When this limit is reached, the longest-idle session is terminated.
Session persistence
  Enabled: when the HTTP task exits, the servlet manager saves session data to a disk file named sessdata.ser in the Domino data directory. The session data is reloaded when the HTTP task starts again. An object that a servlet has bound to a session is saved if it implements the java.io.Serializable interface.
  Disabled: (default) all session data is discarded when the HTTP task exits.
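As a rough sketch of the idle-timeout behavior described above, the hypothetical class below tracks last-access times and terminates sessions that have been idle longer than the timeout. Domino's servlet manager performs the equivalent check internally against real HttpSession instances and calls invalidate() on them.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class SessionReaper {
    private final Map<String, Long> lastAccess = new HashMap<>();
    private final long timeoutMillis;

    public SessionReaper(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Record user activity for a session at the given time.
    public void touch(String sessionId, long now) {
        lastAccess.put(sessionId, now);
    }

    // Terminate sessions idle longer than the timeout; return how many were removed.
    public int reap(long now) {
        int reaped = 0;
        Iterator<Map.Entry<String, Long>> it = lastAccess.entrySet().iterator();
        while (it.hasNext()) {
            if (now - it.next().getValue() > timeoutMillis) {
                it.remove(); // where Domino would call HttpSession.invalidate()
                reaped++;
            }
        }
        return reaped;
    }
}
```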
Loading servlet classes with the JVM class loader
The servlet manager's class loader will not load classes that use native code, create custom class loaders, or perform certain other restricted operations. If a servlet class cannot be loaded by the servlet manager, you can try having the Domino JVM class loader load it. The JVM loader is normally used to load the classes installed with Domino's Java file system (especially the java.* and lotus.* packages). By moving a servlet from the servlet manager's class path to the JVM class path, you can force it to be loaded by the JVM loader instead of the servlet manager's loader. The JVM class path is specified by the JavaUserClasses variable in NOTES.INI.
Setting properties for individual servlets
Special properties for servlets can be specified in a text file named servlets.properties in the Domino data directory. You can specify the following properties:
- aliases
- initialization parameters
- URL extension mappings
- servlets the servlet manager loads at startup
These properties are specified by directives in the servlets.properties file, which is read when the servlet manager starts. The general directive syntax is:
servlet(s).<name>.<property> = <value(s)>
Directives are case-sensitive. The servlets.properties file can also contain blank lines and comment lines beginning with the "#" character. The servlets.properties file is not required. The default attributes for a servlet are: no aliases, no initialization parameters, no extension mappings, and loading only when needed.
Servlet aliases
The alias directive syntax is:
servlet.<alias-name>.code = <class-name>
For example:
servlet.SQLQuery.code = sql.database.query.Servlet
For security reasons, Domino does not allow package-qualified servlet class names to be used in servlet URLs. This prevents malicious users from loading arbitrary Java packages and classes through the servlet manager. If a servlet class has a package name, you must therefore specify an alias; the example above allows the sql.database.query.Servlet servlet to be invoked through a URL that names SQLQuery. Aliases are also useful for hiding a servlet's actual name from users.
A servlet can have several aliases. The servlet manager creates a new instance of the servlet when it receives a URL pointing to any of its aliases, and calls the servlet's init() member function when the new instance is created. The alias is used as the name in other directives in the properties file, so different instances can be given different properties; for example, separate initialization parameters can be specified for each alias. Even when multiple instances are created, the servlet class is loaded only once, so servlet instances can share data through static class variables.
Also for security reasons, once an alias is specified for a servlet, the servlet can be referenced in URLs only by the alias, not by its class name. This lets you hide the servlet's true name.
Initialization parameters
Initial data for a servlet can be specified in the properties file. The servlet can access the data with the ServletConfig.getInitParameter member function. The directive syntax is:
servlet.<alias or class name>.initArgs = <name1=value1>, <name2=value2>, ...
Multiple parameters are separated by commas. For example:
servlet.SQLQuery.initArgs = target=db2, user=Domino, cacheSize=30
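Inside the servlet, each value is retrieved individually, for example with ServletConfig.getInitParameter("target"). Purely as an illustration of the name=value format, the hypothetical helper below splits such a directive value into a map:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class InitArgs {
    // Split "target=db2, user=Domino, cacheSize=30" into name/value pairs.
    public static Map<String, String> parse(String initArgs) {
        Map<String, String> params = new LinkedHashMap<>();
        for (String pair : initArgs.split(",")) {
            String[] nv = pair.trim().split("=", 2);
            if (nv.length == 2) {
                params.put(nv[0].trim(), nv[1].trim());
            }
        }
        return params;
    }
}
```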
URL extension mapping
The extension mapping directive syntax is:
servlet.<alias or class name>.extension = <extension> <extension> ...
A servlet can have multiple extensions, separated by spaces. All extensions must also be included in the "Servlet URL file extensions" setting of the Server document. For example, to make Domino invoke the SQLQuery servlet whenever a URL specifies the extension "sql" or "sq", add "sql, sq" to the server setting and add this directive to the properties file:
servlet.SQLQuery.extension = sql sq
This allows users to invoke the servlet with any URL ending in one of those extensions.
Loading servlets at startup
By default, the servlet manager loads a servlet's class files into memory when it receives the first URL referring to that servlet. However, you can specify one or more servlets for the servlet manager to load when it starts. This prevents the first user request for a servlet from being delayed while the servlet loads.
The startup directive syntax is:
servlets.startup = <alias or class> <alias or class> ...
Note that "servlets" is plural here, and that servlet names must be separated by spaces.
If a servlet has one or more aliases, you can include the aliases in the startup directive. The servlet manager then loads the class and creates an instance for each alias.
Once the servlet manager loads a servlet class, the class remains in memory until the Domino HTTP task is terminated with the console command "tell http quit" or restarted with "tell http restart". Before unloading the servlets, the servlet manager calls each servlet instance's destroy() member function, giving the servlet an opportunity to clean up its resources.
Classes loaded by the JVM class loader remain loaded until the HTTP task terminates; the "tell http restart" command does not unload them.
The following is a sample servlets.properties file:
# Properties for the sql servlet
servlet.SQLQuery.code = sql.database.query.Servlet
servlet.SQLQuery.initArgs = cache=30
servlet.SQLQuery.extension = sql
# Properties for the mail servlet
servlet.MailServlet.initArgs = mime=enabled, smime=disabled
# Both servlets should be loaded at startup
servlets.startup = SQLQuery MailServlet
# End of file
Example: a Java servlet
This sample servlet returns to the browser an HTML page, shown below, containing all the HTTP request headers the browser sent.

import java.util.*;
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class ExampleServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/html");
        ServletOutputStream out = response.getOutputStream();
        out.println("<HTML><B>Headers sent with the request:</B><BR>");
        for (Enumeration headers = request.getHeaderNames();
                headers.hasMoreElements();) {
            String headerName = (String) headers.nextElement();
            out.println("<BR>" + headerName + ": " + request.getHeader(headerName));
        }
    } // end of method
} // end of class

After compiling this code, copy the ExampleServlet.class file to the server's domino\servlet directory. Because this servlet needs no special attributes, you do not have to create a servlets.properties file. To run the servlet, enter its URL (using your own server name) in your browser.
The page the servlet returns depends on the browser. The following is the HTML page returned for a Netscape browser:

<HTML><B>Headers received with the request:</B><BR>
<BR>ACCEPT-LANGUAGE:
<BR>CONNECTION: Keep-Alive
<BR>ACCEPT: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, image/png, */*
<BR>USER-AGENT: Mozilla/4.05 [en] (Win95; U ;Nav)
<BR>ACCEPT-CHARSET: iso-8859-1,*,utf-8
<BR>HOST: test1
54. Domino applications and XML
One of the most obvious advantages of XML is that, because this emerging technology builds on the HTML and SGML standards, it represents a data-sharing technology whose adoption requires no new hardware or software. XML and Domino applications integrate very well: XML provides the description of the data that applications share across the network, and Domino provides all the other tools needed to make that sharing secure, reliable, and effective. In addition to supplying the media for writing XML data and an XML parser for reading it, Domino Designer provides:
- a powerful development environment, including the programming tools needed to build collaborative e-business applications
- layers of security to protect data, from database access control to encryption of individual fields
- search capabilities that let users locate data efficiently
- workflow and message processing, such as order confirmations, e-mail notifications, and routing of documents to reviewers
In addition to the development tools in Domino Designer, you can also get connection services that let you link your applications to back-end systems such as:
- ERP systems, such as SAP, PeopleSoft, Oracle, and JD Edwards
- relational databases, such as DB2, Sybase, and Oracle
- transaction systems, such as IBM CICS, MQSeries, WebSphere, and BEA Tuxedo
Using XML in applications
To illustrate how a Designer application can integrate XML, consider a site that sells books online. The data describing each book is marked up with standard XML tags, such as <bookTitle> and <bookAuthor>. Any application written to use these standard data-description tags can process the book data, so the application can use this standard data format to interact with suppliers and buyers.
You can probably think of an XML use for almost any application. For example, imagine an auto-supply store that maintains an e-commerce site with an online catalog of auto parts. Using XML as the common language for describing parts, purchasing agents from different vendors feed price and availability information directly into a Domino database. Customers can access this database to find the latest information on the parts they can order online. Domino also provides all the tools needed for the secure online transaction processing required to complete the parts orders and manage inventory.
Another example is a human-resources "self-service" application that employees can use to access and manage their personal data. For example, a company can post benefits information on an intranet site, letting employees use a Notes client or a Web browser to choose their benefits online. When an employee makes a selection, the data is sent in XML format to a server (such as IBM WebSphere Application Server). The server uses a Java servlet to pass the data to the HR back-end system (for example, a PeopleSoft database) and notifies Domino when the transaction is complete. Because XML tags describe the data being passed, the data means the same thing in the PeopleSoft database as it does in the Domino application.
Ways to include XML in a Designer application
There are several ways to include XML in a Designer application and supply the XML data to a parser:
- Enter XML tags that describe data on a form or page. When the form or page content is treated as HTML, the tagged data can be supplied as XML to any parser that can interpret it. XML describes the data rather than displaying it; to define the format and style of the data on a form or page, you can use an Extensible Stylesheet Language (XSL) style sheet to transform the data into HTML, or you can use a cascading style sheet (CSS) to define styles for the XML directly in the client.
- Include XML tags in view column formulas to generate XML from view data. To serve the view, it must be embedded in a page or document so that the XML declaration and tags correctly enclose the entire view.
- Use an agent or a servlet to generate or store XML dynamically. Agents run in Domino applications as scheduled processes; servlets run on the server in response to requests from Web browsers.
For more information about the Domino DTD and about using Java to generate XML, see Chapter 3 of the "Programming Guide".
Including XML on a form or page in an application design
Forms are the best tool for XML. You can enter XML tags and include fields within the tags; the result is an XML document that passes meaningful, current data to an XML parser. XML can also be placed on pages. In Domino Designer, pages are design elements that display information in a database. A traditional application can use page content (for example, a home page), or it can use XML tags to describe data on the page. As you will see in the section on using XML in views, pages help you embed views and add the XML tags needed to process them. Pages are also helpful when you create extensible style sheets (XSL) or cascading style sheets (CSS) that tell the server or the browser how to format the data described by the XML tags.
Using XML elements to define data on a form
When you use XML elements on a form or page, you must follow certain rules to build valid XML, and the XML tags must be properly formed. XML tags are very similar to HTML tags, but there are some different rules for building XML tags and for the data the tags enclose; for example, the nesting requirements for XML are stricter than those for HTML. For more information about using XML tags to describe data, see the IBM XML Web site.
As a sample of using XML on a form, the entry in the online catalog for each book might look like this:

<?xml version="1.0" encoding="UTF-8"?>
<BOOK>
<bookTitle>Chess for the Master</bookTitle>
<bookCategory>Games</bookCategory>
<bookAuthor>Alice B. Charles</bookAuthor>
<bookPrice>10</bookPrice>
<bookListPrice>12</bookListPrice>
<bookISBN>0-980-38475-81</bookISBN>
<bookDatePublished>April 1997</bookDatePublished>
<bookAbstract>The authority on all the latest chess moves, including the entire Big Blue arsenal.</bookAbstract>
</BOOK>
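Any standard XML parser can read such a catalog entry. As a sketch (using the JDK's built-in DOM parser rather than the parser shipped with Domino), the helper below extracts the text of one element from a document:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class BookParser {
    // Return the text content of the first element with the given tag name.
    public static String elementText(String xml, String tag) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName(tag).item(0).getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String entry = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                + "<BOOK><bookTitle>Chess for the Master</bookTitle>"
                + "<bookAuthor>Alice B. Charles</bookAuthor></BOOK>";
        System.out.println(elementText(entry, "bookAuthor")); // prints Alice B. Charles
    }
}
```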
To create a document in XML format
1. Create a form or page.
2. Enter the document type declaration, as follows:
<?xml version="1.0"?>
You can optionally add the encoding, as follows:
<?xml version="1.0" encoding="UTF-8"?>
3. Enter the XML elements, generally a root element with child elements.
4. Where there is data, enter fields enclosed in XML tags.
5. Choose "Design - Form Properties."
6. On the "Advanced" tab, select "Treat document contents as HTML." Domino then passes all of the document's text to the HTTP requester as-is, without generating HTML tags.
7. Save and close the form.
8. To view documents created with the form, you can create a view that uses a form formula resolving to the XML form.
Using style sheets to format XML data
One property of XML is that it describes only the data, with no reference to the data's appearance. For computer-to-computer transactions, appearance does not matter; but when the data is shown to users (for example, posted on a Web site), appearance is very important. XML documents therefore often rely on a style sheet to determine the layout and appearance of the data. Some popular browsers provide simple default styles for elements such as <Para>, <List>, and <Item>, but usually you must use a style sheet to describe the data format. You can use two types of style sheets with XML:
- Extensible Stylesheet Language (XSL), which describes how the XML is converted to HTML or to another version of XML.
- Cascading style sheets (CSS), which define styles for the XML directly in browsers that support CSS.
To use a style sheet, insert a reference to it directly after the document type declaration and before the root element. For example:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/css" href="bookdisplay.css"?>
<BOOK>
If you create the style sheet on a page, set the page property "Treat page contents as HTML".
An XSL style sheet that converts the book information to HTML might look like the following (some attribute values are omitted here):

<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="...">
<xsl:template match="...">
<HTML>
<HEAD>
<TITLE><xsl:value-of select="..."/></TITLE>
</HEAD>
<BODY bgcolor="F0FFF8">
<B><xsl:value-of select="..."/></B>
</BODY>
</HTML>
</xsl:template>
</xsl:stylesheet>

A document references this style sheet with a declaration such as:

<?xml:stylesheet type="text/xsl" href="/roibooks.nsf/bookform.xsl"?>
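An XSL transformation like the one sketched above can also be driven from Java. The snippet below uses the JDK's javax.xml.transform API with a tiny XSLT 1.0 stylesheet; note that it uses the modern XSLT namespace, and the stylesheet and element names here are illustrative rather than taken from the ROI Books application.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class BookXsl {
    // A minimal stylesheet: render the book title in bold HTML.
    static final String XSL =
        "<?xml version=\"1.0\"?>"
        + "<xsl:stylesheet version=\"1.0\""
        + " xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
        + "<xsl:output method=\"html\"/>"
        + "<xsl:template match=\"/BOOK\">"
        + "<B><xsl:value-of select=\"bookTitle\"/></B>"
        + "</xsl:template>"
        + "</xsl:stylesheet>";

    public static String transform(String xml) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(XSL)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```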
A cascading style sheet (CSS) does not convert the XML to HTML; instead, it gives the browser direct instructions for formatting each XML element. A CSS for the book might look like this:

BOOK {
  display: block;
  border: 1px solid #cccccc;
}
BOOKTITLE {
  display: block;
  float: left;
  margin-right: 10px;
  padding: 5px;
}
BOOKAUTHOR {
  display: block;
  font-style: italic;
}
Related topics
Including XML in a Designer application
Domino applications and XML
XML terminology

Generating XML using views
A view lets you control which documents to convert to XML, and it can display the tracked information and convert it to XML for delivery over an intranet or the World Wide Web. The ROI Books application described in this section can be found on the Lotus Web site.
Mapping XML tags to a view
To use a view to generate XML, XML tags from the DTD must be mapped to the view columns. Once you create a view and map XML tags to it, you can embed the view in a page. A view embedded in a Web page keeps the same functions it has in a Notes client application, letting the user control the displayed size and appearance of the view. To display the view as XML, the page contains the XML declaration and the root element.
Sample
The ROI Books application uses column formulas to assign a document's fields to the XML elements in each column of the "XML View". The column formula for the first child element also contains the opening tag of the parent element, and the column formula for the last child element contains the closing tag of the parent element. For example, the formula for the first column is:
"<BOOK><BOOKTITLE>" + BookTitle + "</BOOKTITLE>"
The parent element is <BOOK>, the child element is <BOOKTITLE>, and bookTitle is the name of the field whose contents appear between the <BOOKTITLE> tags. More child elements of <BOOK> are included in this view, so <BOOK> is not closed until after the last child element. In the ROI Books application, the last child element is assigned to the last column of the view. The formula for the last column is:
"<BOOKPUBLISHDATE>" + BookDatePublished + "</BOOKPUBLISHDATE></BOOK>"
The "XML View" is embedded in a page that contains the XML declaration and the root element <BOOKCATALOG>.
To map XML to a view
1. Create a view and open it.
2. Choose "Edit - Properties" to open the View properties box.
3. Click the "Advanced" tab.
4. Under "Web Access", select "Treat view contents as HTML." If you do not select this property, Domino generates HTML for the view contents; in addition, the view content is not visible when the view is embedded in a page.
5. Click the "Objects" tab, select "View Selection", and add a selection formula that defines which documents are included in the view. For example, the online bookstore application contains a view of approved orders, which uses the following document selection formula:
status = "Approved"
6. Click the "Objects" tab, select "Form Formula", and enter a formula that selects the template form.
7. Create a column for each XML element.
8. Click the first column of the view.
9. In the Script area, enter the column formula using the following syntax:
"<PARENT><CHILD>" + fieldname + "</CHILD>"
If a column contains multiple elements, end the first line of the formula with a semicolon (;) and add the next elements on the following lines:
"<PARENT><CHILD>" + fieldname + "</CHILD>";
"<CHILD>" + fieldname + "</CHILD>";
"<CHILD>" + fieldname + "</CHILD>"
Tip: use the following syntax to turn a field into an attribute of an element:
"<CHILD attributeName=\"" + fieldname + "\">" + fieldname2 + "</CHILD>"
10. Click the second column and enter the column formula in the Script area using the following syntax:
"<CHILD>" + fieldname + "</CHILD>"
11. Repeat step 10 for each XML element except the last one.
12. For the last child element, use the following syntax:
"<LASTCHILD>" + fieldname + "</LASTCHILD></PARENT>"
To embed the view in a page
1. Open or create a page.
2. Choose "Design - Page Properties."
3. Click the "Page Info" tab.
4. Under "Web Access", select "Treat page contents as HTML", and close the Page properties box.
5. Type the XML declaration at the location where you want the embedded view.
6. Place the cursor where you want to display the embedded view.
7. Choose "Create - Embedded Element - View."
8. (Optional) If you do not want to show the same view in all environments, click "Choose a View based on a formula." Click "OK" to close the dialog box, then write a formula in the Programmer's pane to display the appropriate view.
9. (Optional) Click the embedded view and choose "Element - View Properties" to change the alignment or style, or to hide the element under certain conditions.
10. Type the closing root tag after the embedded view.
To delete an embedded view, click the embedded view in the work pane and choose "Edit - Clear."
Related topics
Including XML in a Designer application
Domino applications and XML
Including XML on a form or page
XML terminology
Generating XML using agents
One of the biggest advantages of generating XML with an agent is flexibility. Agents can be set to run in response to events, on a schedule, or on a URL command. This flexibility is essential for creating automated XML applications. For example, a Web bookstore site includes a database in which contributors write a monthly newsletter for customers. An editing and approval workflow moves manuscripts through the system once they are approved. An agent runs every hour to collect the approved manuscripts and convert the publishable articles to XML; the agent then places the manuscripts as static XML documents in another database, where subscribers can collect them.
The ROI Books application contains an agent named createXML, which generates XML for each document in the view and sends it out in response to browser-based requests to the server. To view this agent's output, open the ROI Books application in Microsoft Internet Explorer 5 and click the XML Agent link, or run the agent with an OpenAgent URL command.
An agent written this way can do more than print its output: it can also use the LS:DO or DECS connector APIs to store the XML output in a string variable and write it to a static XML document or to another database system.
Sample: Generating XML
The agent in this sample is written in LotusScript. It extracts each document in a view, creates XML from the document's content, and prints the output.
Dim s As New NotesSession
Dim db As NotesDatabase
Dim doc As NotesDocument
Dim view As NotesView
Set db = s.CurrentDatabase
Set view = db.GetView("XML")
Set doc = view.GetFirstDocument
Print "Content-type: text/xml"
'Prevents Domino from sending default headers.
Print "<BOOKCATALOG>"
'BOOKCATALOG is the root element of the XML document.
While Not (doc Is Nothing)
	'Loop as long as there are document objects available.
	Print "<BOOK>"
	'Send the parent element for each book document.
	Print "<bookTitle>" + doc.bookTitle(0) + "</bookTitle>"
	Print "<bookAuthor>" + doc.bookAuthor(0) + "</bookAuthor>"
	Print "<bookPrice>" + doc.bookDiscountPrice(0) + "</bookPrice>"
	Print "<bookCategory>" + doc.bookCategory(0) + "</bookCategory>"
	Print "</BOOK>"
	'Close the book element tag.
	Set doc = view.GetNextDocument(doc)
	'Get the next document in the view.
Wend
Print "</BOOKCATALOG>"
'Closes the root element.
Related topics in the Designer
XML in applications
Domino applications and XML
Generating XML with Java servlets
A servlet is a Java program that runs on a Web server in response to requests from Web browsers. Unlike agents, servlets are loaded when the Web server starts and remain resident on the server. They are often used to generate and update Web pages dynamically and to exchange data between different applications. Servlet functionality can be extended by using XML as a common language to bridge applications. A Java servlet can not only produce XML tags and pass them to the server for processing; it can also interact with the LotusXSL processor to format the data described by the XML tags. Used together, XML and XSL give you a powerful tool for customizing data.
Sample XML servlet applications
As an example of how a servlet can transmit customized data packages, consider an organization in which field sales representatives use different devices (such as a Notes client, a browser, or a PDA) to download information from a Domino database. A sales representative might ask the Domino application for all information about a specific customer. The servlet gathers data from a collection of different data sources and packages it with the appropriate XML tags. The servlet can then use the LotusXSL processor to apply a style sheet to the XML-tagged data and transmit the data in the format best suited to the connecting device. In this case, a sales representative connected with a PDA over a narrowband telephone line receives less information than a sales representative connected with a Notes client over a broadband network. The following graphic shows the relationships among the connecting devices, the Domino server running the servlet, and the back-end database applications:
As another example of how a servlet can provide customized data, consider a real estate application whose housing-sales data is stored in a Domino database. A real estate agent or a potential buyer uses a Web browser to request housing information from the Domino application. The user specifies search criteria, such as the desired number of bedrooms. A servlet running in the Domino application finds and assembles all documents that match the specified criteria. The servlet dynamically wraps the found data in the correct XML tags (such as <HOUSE> and <HOUSETYPE>), and then uses the LotusXSL processor to apply an XSL style sheet.
We’ve all been there, looking through a year old repo’s commits and wondering what you were feeling in that moment. It looks like frustration, going from “Set-up base components and boilerplate unit tests,” to “Fix tests,” until we arrive at a familiar place in commit message history: “asfdasdf.” Okay, maybe by a year I mean yesterday. It happens to every developer. But, it doesn’t have to! Here’s how I try to keep things 💯 in my commit history.
Always work in a branch, always squash and merge*. Then delete the source branch with your “pls pls pls” commits. Easy.
*…
Being highly involved in the hiring and recruitment process is something that I enjoy and encourage my teammates to do. Over time I have screened thousands of resumes, interviewed hundreds of candidates, and actively try to pass on what I'm looking for to interns and experienced developers alike. While each person has a different experience and need, these are the common themes of what I consider to be a successful resume. A good amount of these may apply to any resume, but my experience is in Software Development.
Before you start on the arduous journey of writing about yourself, first…
Update Feb. 6, 2019: I’ve updated this now that Hooks are officially out! Let me know how you like them on Twitter.
React Hooks are an exciting new feature that let you do things in function components instead of using classes, like fetch data. There’s a lot of discussion around them, but you’re here to fetch data!
Fire up a new create-react-app with npx create-react-app hooks-test. Now that Hooks have been officially released, this is the only step you need!

Create a component in /src/ called Photos.js and give it a basic list:
// Photos.js
import React from "react";
import { useFetch….
There are many free APIs for you to query for example projects, or you may have your own that you want to consume in Gatsby. You can find a list of them on Todd Motto’s…
The JAMstack has inspired some of the greatest web development tools we’ve ever seen. Publishing amazingly fast, secure, and accessible websites has never been easier, or so free. I’m still having a hard time believing my own personal website now runs for free instead of on a $15/month VPS. If you’re able to convert an older architecture to what we will discuss today, you can invest those savings in your personal development with a course or educational membership.
Here’s the list of things you may learn from this article:
Do you find yourself using some of the same components over and over in multiple projects? This is super common, and with a bit of time spent compiling these into one reusable project, you can save time and boost your efficiency in future projects.
In this post I will create a boilerplate step by step that you can use to start your own component library. I will also develop the requirements and make pull requests along the way, as I mentioned in my last post.
Before we get into coding, my first rule of new projects is always come up…
Having a few personal projects available for the public to see can add to any resume, from students looking for their first internship to experienced developers. Even a small utility can impress potential interviewers if you go above and beyond with your process. Personally, a well-executed GitHub project is more valuable to see than thousands of lines of modern code. You will always have to adapt to a new codebase at a job, but the skills demonstrated by using branches, pull requests, issues and more can leave a lasting impression.
Coming up with an idea is always the hardest…
If you are looking for alternative approaches to Android application development, you should consider giving Google's Flutter, a framework based on the Dart programming language, a try.
Apps built with Flutter are largely indistinguishable from those built using the Android SDK, both in terms of looks and performance. What's more, with minor tweaks, they can be run on iOS devices as well.
In this tutorial, I'll introduce you to the basics of Flutter by showing you how to build a simple tip calculator app for Android.
Prerequisites
To be able to follow this tutorial, you'll need:
- the latest version of IntelliJ IDEA
- Android Studio 2.2 or higher
- a device or emulator running Android 4.4 or higher
- a computer running Mac or Linux
1. Why Use Flutter?
Running at 60 fps, user interfaces created with Flutter perform far better than those created with other cross-platform development frameworks such as React Native and Ionic. If that doesn't excite you, here are a few more reasons why you might want to use Flutter:
- Flutter uses Dart, a fast, object-oriented language with several useful features such as mixins, generics, isolates, and optional static types.
- Flutter has its own UI components, along with an engine to render them on both the Android and iOS platforms. Most of those UI components, right out of the box, conform to the guidelines of Material Design.
- Flutter apps can be developed using IntelliJ IDEA, an IDE that is very similar to Android Studio.
2. Installing Flutter
You can get the latest version of Flutter by cloning its GitHub repository.
git clone
Flutter has several dependencies, such as the Dart SDK and Material Design fonts. Fortunately, the first time you run Flutter's diagnostic tool, all of them are installed automatically.
cd flutter/bin
./flutter doctor
To be able to build Android apps, you must also point Flutter to the directory where you installed Android Studio.
./flutter config --android-studio-dir ~/android-studio
3. Configuring IntelliJ IDEA
Although you can directly use Flutter's CLI to create and run new apps, you're likely to have a far better development experience if you use an IDE. The recommended IDE for Flutter is IntelliJ IDEA.
Before you start developing Flutter apps with it, however, you must install plugins for both Dart and Flutter. To do so, start by selecting Configure > Plugins in the IntelliJ welcome screen.
In the dialog that pops up, press the Browse repositories... button and search for the Dart plugin. Once you find it, press the Install button to install it.
Similarly, search for and install the Flutter plugin.
Once both the plugins are installed, restart IntelliJ IDEA.
You must now point the Flutter plugin to the directory in which you installed Flutter. To do so, select Configure > Settings in the welcome screen and, in the dialog that pops up, navigate to Languages & Frameworks > Flutter. In the Flutter SDK path field, type in the absolute path of the directory.
Press OK to complete the configuration.
4. Creating a New Project
To create a new Flutter project, press the Create New Project button in the welcome screen. In the New Project dialog, choose Flutter and press Next.
You can now give a meaningful name to your project and press Finish.
Once the project has been generated, I suggest you press the Run button to make sure that the Dart SDK, the plugins, and the Flutter framework are all configured correctly. If they are, after several seconds, you should see the following screen on your device or emulator:
Note that, from this point on, you don't have to press the Run button again even after making code changes. Flutter supports hot reload, a feature that allows you to instantly push updates to the app without restarting it.
5. Creating Widgets
In this tutorial, we'll be creating a tip calculator app with the following widgets:
- a TextField to accept a bill amount
- a TextField to accept a tip percentage
- a RaisedButton the user can press to calculate the tip
Each Flutter widget can either be a StatelessWidget or a StatefulWidget. As its name suggests, a StatefulWidget has a State object associated with it, which allows it not only to store data, but also to react to changes in the data.

A StatelessWidget, on the other hand, is a simpler object, not designed to persistently store any data. To keep this tutorial short, we'll be creating our tip calculator as a StatelessWidget. Therefore, open main.dart, remove all its contents, and add the following code to it:
import 'package:flutter/material.dart';

class TipCalculator extends StatelessWidget {
}
In the above code, the import line is important because material.dart is the library that contains all the Material Design widgets we'll be using in this app.
To store the bill amount and the tip percentage, add two member variables to the class.
double billAmount = 0.0;
double tipPercentage = 0.0;
To start creating the user interface of the app, override the build() method.
@override
Widget build(BuildContext context) {
  // More code goes here
}
Let us now create the two TextField widgets. While doing so, we can specify details such as the labels we want to associate with the widgets and the types of the virtual keyboards that must be displayed when they are in focus.
Because we can't directly retrieve the contents of a TextField widget, we must also associate an onChanged event handler with it. Inside the handler, which receives an InputValue object, we can update the contents of our class's member variables.
Accordingly, add the following code inside the build() method:
// Create first input field
TextField billAmountField = new TextField(
  labelText: "Bill amount(\$)",
  keyboardType: TextInputType.number,
  onChanged: (InputValue value) {
    try {
      billAmount = double.parse(value.text);
    } catch (exception) {
      billAmount = 0.0;
    }
  }
);

// Create another input field
TextField tipPercentageField = new TextField(
  labelText: "Tip %",
  keyboardType: TextInputType.number,
  hintText: "15",
  onChanged: (InputValue value) {
    try {
      tipPercentage = double.parse(value.text);
    } catch (exception) {
      tipPercentage = 0.0;
    }
  }
);
Even if you have never worked with Dart before, the above code should be fairly intuitive, so long as you are familiar with Java. For instance, you can see that we are using the parse() method to convert each TextField widget's text content to a double object. Because the parse() method can throw a FormatException, it is also surrounded by a try...catch block.
Creating a RaisedButton widget is much like creating a TextField widget. However, to assign a label to it, you must create a new Text widget and add it as its child.
// Create button
RaisedButton calculateButton = new RaisedButton(
  child: new Text("Calculate"),
  onPressed: () {
    // More code goes here
  }
);
Inside the onPressed event handler of the button, we'll calculate the tip and the total amount to be paid, and display both inside a modal dialog. To create the dialog, we can use the AlertDialog class. Once created, the dialog can be displayed by passing it as an argument to the showDialog() method.
Accordingly, add the following code inside the onPressed event handler:
// Calculate tip and total
double calculatedTip = billAmount * tipPercentage / 100.0;
double total = billAmount + calculatedTip;

// Generate dialog
AlertDialog dialog = new AlertDialog(
  content: new Text("Tip: \$$calculatedTip \n"
      "Total: \$$total")
);

// Show dialog
showDialog(context: context, child: dialog);
In the above code, note that we've used Dart's string interpolation feature to embed variables inside the content of the dialog. Also, you can see that string literals in Dart can be concatenated just by placing them beside one another, though you can use the + operator too, if you like.
6. Creating a Widget Tree
A Flutter app is usually nothing but a tree of widgets. In other words, you create a Flutter app by simply creating multiple widgets and establishing parent-child relationships between them.
Currently, there are no relationships between the widgets we created in the previous step. As you might have guessed, they are all going to be siblings, so let's now create a parent widget for them.
A widget that can have multiple children is usually referred to as a layout widget. Flutter offers several layout widgets to choose from. For our app, the Column widget is most appropriate because it positions all its children one below the other.

Additionally, in order to conform to the Material Design spec, we must add a padding of 16 dp to the Column widget. We can do so by making it a child of a Container widget.
Container container = new Container(
  padding: const EdgeInsets.all(16.0),
  child: new Column(
    children: [
      billAmountField,
      tipPercentageField,
      calculateButton
    ]
  )
);
A Material Design user interface is not complete without an app bar. Therefore, create one now using the AppBar widget.
AppBar appBar = new AppBar(title: new Text("Tip Calculator"));
Layouts containing app bars and containers are so common that Flutter offers a Scaffold widget to help you quickly establish a relationship between them.
Scaffold scaffold = new Scaffold(appBar: appBar, body: container);
With the Scaffold widget at its root, our widget tree is now ready. You can go ahead and use the Scaffold widget as the return value of the build() method.
return scaffold;
If you are finding it hard to visualize the tree, the following diagram should help:
7. Creating an Entry Point
Our Dart file needs a main() function as its entry point. Inside it, we must call the runApp() function to actually inflate and render the widget tree we created in the previous step.

Additionally, our TipCalculator widget must be placed inside a MaterialApp widget so that a Material Design theme and color scheme can be applied to it. Therefore, add the following code to main.dart:
void main() {
  runApp(new MaterialApp(
    title: 'Tip Calculator',
    home: new TipCalculator()
  ));
}
You can now press the Hot Reload App button to start using the app on your device.
Conclusion
In this tutorial, you learned how to use Flutter and Dart, along with IntelliJ IDEA, to create a simple app for Android.
In my opinion, Flutter has almost everything a developer might look for in a cross-platform mobile app development framework. Before you decide to start building your next big app with it, however, be aware that it is still a very new and rapidly evolving framework.
To learn more about Flutter, you can refer to its official documentation.
Consider the following program that computes the area of a disc whose radius is 10:
3.14159 * 10 * 10
To make complex expressions more readable we can give meaningful names to intermediate expressions:
val radius = 10
val pi = 3.14159
pi * radius * radius
Besides making the last expression more readable it also allows us to not repeat the actual value of the radius.
A name is evaluated by replacing it with the right hand side of its definition
Here are the evaluation steps of the above expression:
pi * radius * radius
3.14159 * radius * radius
3.14159 * 10 * radius
31.4159 * radius
31.4159 * 10
314.159
Definitions can have parameters. For instance:
def square(x: Double) = x * x

square(3.0) shouldBe res0
Let’s define a method that computes the area of a disc, given its radius:
def square(x: Double) = x * x
def area(radius: Double): Double = 3.14159 * square(radius)

area(10) shouldBe res0
Separate several parameters with commas:
def sumOfSquares(x: Double, y: Double) = square(x) + square(y)
Function parameters come with their type, which is given after a colon:
def power(x: Double, y: Int): Double = ...
If a return type is given, it follows the parameter list.
The right hand side of a def definition is evaluated on each use.

The right hand side of a val definition is evaluated at the point of the definition itself. Afterwards, the name refers to the value.
val x = 2
val y = square(x)
For instance, y above refers to 4, not square(2).
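The same timing difference can be sketched outside Scala. The Python snippet below is an illustration, not Scala semantics: a zero-argument function stands in for a def (re-evaluated on each use), while an ordinary assignment stands in for a val (evaluated once, at the point of definition).

```python
# Sketch (Python, not Scala): def-like vs val-like evaluation timing.
# A counter records how many times the right-hand side actually runs.

evaluations = 0

def square(x):
    global evaluations
    evaluations += 1
    return x * x

# "def y = square(2)": the body is re-evaluated on each use.
def y_def():
    return square(2)

y_def()
y_def()
assert evaluations == 2  # evaluated once per use

# "val y = square(2)": the right-hand side is evaluated exactly once, here.
evaluations = 0
y_val = square(2)
assert evaluations == 1
print(y_val, y_val)  # using the name again does not re-evaluate square(2)
assert evaluations == 1
```

This is why a val of a non-terminating expression hangs immediately, while the corresponding def only hangs when it is used.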
Applications of parametrized functions are evaluated in a similar way as operators:
sumOfSquares(3, 2 + 2)
sumOfSquares(3, 4)
square(3) + square(4)
3 * 3 + square(4)
9 + square(4)
9 + 4 * 4
9 + 16
25
This scheme of expression evaluation is called the substitution model.
The idea underlying this model is that all evaluation does is reduce an expression to a value.
It can be applied to all expressions, as long as they have no side effects.
The substitution model is formalized in the λ-calculus, which gives a foundation for functional programming.
Does every expression reduce to a value (in a finite number of steps)?
No. Here is a counter-example:
def loop: Int = loop
loop
The difference between val and def becomes apparent when the right hand side does not terminate. Given

def loop: Int = loop

a definition

def x = loop

is OK, but a definition

val x = loop

will lead to an infinite loop.
The interpreter reduces function arguments to values before rewriting the function application.
One could alternatively apply the function to unreduced arguments.
For instance:
sumOfSquares(3, 2 + 2)
square(3) + square(2 + 2)
3 * 3 + square(2 + 2)
9 + square(2 + 2)
9 + (2 + 2) * (2 + 2)
9 + 4 * (2 + 2)
9 + 4 * 4
9 + 16
25
The first evaluation strategy is known as call-by-value, the second is is known as call-by-name.
Both strategies reduce to the same final values as long as the reduced expression consists of pure functions and both evaluations terminate.
Call-by-value has the advantage that it evaluates every function argument only once.
Call-by-name has the advantage that a function argument is not evaluated if the corresponding parameter is unused in the evaluation of the function body.
Scala normally uses call-by-value.
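The trade-off between the two strategies can also be demonstrated concretely. In the Python sketch below (an illustration only; names like `first_cbv` are invented for this example), a call-by-name argument is simulated by passing a zero-argument lambda, a "thunk," which the callee may evaluate zero or more times:

```python
# Sketch: simulating call-by-value vs call-by-name with thunks.
calls = 0

def expensive():
    """Stands in for an argument expression such as 2 + 2."""
    global calls
    calls += 1
    return 2 + 2

def first_cbv(x, y):
    # Call-by-value: both arguments were already reduced to values.
    return x

def first_cbn(x_thunk, y_thunk):
    # Call-by-name: arguments arrive unevaluated; y_thunk is never forced.
    return x_thunk()

# Call-by-value evaluates every argument exactly once, even unused ones.
assert first_cbv(1, expensive()) == 1
assert calls == 1

# Call-by-name skips the unused argument entirely...
calls = 0
assert first_cbn(lambda: 1, lambda: expensive()) == 1
assert calls == 0

# ...but may evaluate a used argument more than once.
calls = 0
def twice(x_thunk):
    return x_thunk() + x_thunk()
assert twice(lambda: expensive()) == 8
assert calls == 2
```

Scala exposes exactly this choice per parameter: `def f(x: Int)` is call-by-value, while `def f(x: => Int)` is call-by-name.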
Complete the following definition of the triangleArea function, which takes a triangle base and height as parameters and returns its area:
def triangleArea(base: Double, height: Double): Double = base * height / res0

triangleArea(3, 4) shouldBe 6
triangleArea(5, 6) shouldBe res1
HOW TO LIVE WITH PAIN AND THOSE WHO HAVE IT
By
Robert Sprackland, Ph.D.
Dedicated to the biggest pains in my life, which reside in the greatest love of my life.
I
[Note: links to special sites with detailed follow-up information about specific subjects are provided in the text. -Ed.]
Chronic pain is everywhere, perhaps the most frequently discussed ailment of our time. The arthritis, rheumatism, gout, and sore backs that have long been familiar have been joined by tingling, "pins-and-needles," carpal tunnel syndrome, stressed joint pain, migraines and so many others that the whole field of pain research cannot keep up with demand for effective treatments. As of right now, six out of every ten Americans have, or have had, intense recurring or chronic pain.
To get a little perspective, let me give you some statistics. According to a 2005 survey by ABC News, USA Today and Stanford University Medical Center, nearly forty percent of Americans suffer pain on a regular basis. That's roughly 80 million people! Sixty percent of the American population (120 million) says their pain is moderate or worse, and for twenty percent (40 million) the pain is severe. About twenty percent also claim their pain is chronic, lasting three months or more.
Here's an interesting point: there are no medical tests available to prove that a person is in pain. Doctors must therefore explore the patient's history and the specific features associated with the pain.
This is why many pain sufferers find it unbelievable that their family and closest friends-and even their doctors-might not think that the pain is real, that the sufferer is a hypochondriac, and that the problem is "all in your head." Well, guess what? They are partially correct: depending on the type of pain you have, it is either largely or totally in your head. And it is also quite real!
In this series of reports, I shall explain precisely what causes pain, how pain affects your body, and the types of treatments that really work to reduce or eliminate pain. Too many people visit the medical folks and are told they have an ailment, but they fail to understand all the information that the doctor and nurses tell them. We have probably all met people who told us that, yes they have a pain and the doctor explained it, but they didn't understand any of it except the part about taking these pills. It is especially for those folks that I write these reports. The topics will be: What is Pain? - Types of Pain - What Makes Pain Chronic - Properly Diagnosing Your Pain - Treating Pain Effectively - Living With People in Chronic Pain.
WHAT IS PAIN?
Your body contains the most intricate wiring system known, far more complex than the wires used to operate an airliner, your computer, or the entire Pentagon. There have been a lot of descriptions about how many brain cells we have, how much data the brain can store, and how quickly we can coordinate different messages to sign our name with a pen. Most of those accounts are guesswork. The nervous system is, so far, too complicated to be discussed in reductionist terms. But for all we do not yet know, we have learned a lot. In fact, between 1996 and 2006, scientific understanding of the brain increased by more than the entire history of brain research before 1996! Remember those TV shows and movies where some expert (or alien) claims that we use only ten percent of our brains? That was what we thought back during World War II days. Even by 1960 we knew that most of the brain did something, and we now have evidence that we do use 100 percent of the brain. That doesn't mean we know what all of it does, but we know all of it does something!
Extending out from the brain is an extraordinarily intricate series of thin living wires, which are our nerve cells. The majority of those nerve cells provide support and protection to other nerve cells, the ones that actually carry messages to and from the brain. Those active nerve cells are called neurons. Some neurons are involved with making a muscle contract, and others tell us when it is cold around us. Neurons are responsible for all of our sensory input, including seeing, smelling, tasting, hearing, maintaining balance, and touch. It is among that last group that we find the neurons that specifically tell us when something potentially serious is wrong. Those are our pain receptors.
Let us imagine that you are sewing and you accidentally prick your finger with the needle. You feel pain, right? What happens is this: the pain receptor neurons (called nociceptors) around the spot where the pin pierced your skin begin to fire off messages that travel up one particular wire of nerves that joins other wires at the spinal cord, and travel to the brain. The brain has a "receptionist" called the thalamus that routes incoming nerve messages to the appropriate part or parts of the rest of the brain. The pain signal is then sent primarily to the middle outer layer of the large cerebral cortex-the part most of us imagine when we think about the brain at all-and the right centers there tell us we have been stabbed by a small sharp object. That message usually gets translated as "Ouch!!" Only an astronomically tiny percent of humans are born and live without pain receptor nerves. In part, the intensity of pain is determined by where you are injured-there are up to 1,300 nociceptors per square inch of skin in sensitive areas-and how deeply the cause of injury penetrates your body. How you feel the pain depends on the type or types of pain receptors that are activated. You have pain receptors for pressure, heat, cold, sound, and other senses.
Now get this: ALL pain is felt in the brain, only in certain parts of the brain, and always in those parts of the brain. Your finger nerve can send a nerve signal, but only the brain "feels" that signal as pain. In this sense, then, all your pain IS literally in your head! (Unless you are one of those people who keeps their brain closer to the seat of one's chair...) That means if you do any one of three things, your finger will no longer feel pain:
- 1) Remove the nociceptor cells from your finger-this is not really possible unless, as a result of extensive injury or surgery, parts of the skin do not regrow properly.
- 2) Remove the pain receptor sites in your brain-very "science fiction" and impractical. Besides, not feeling pain could be dangerous.
- 3) Stop the message sent by nociceptors from reaching the brain-Bingo! This is the only reasonable and, so far, possible way to handle pain, and those treatments come in several varieties.
Types of Pain
Pain may manifest itself in a great many ways, but it is always felt and processed by your nervous system in the one way, as described earlier. Where the types of pain can be confusing is when trying to understand the associated symptoms, such as dizziness, nausea, or tenseness. To understand the types of pain, you must understand the two main ways in which pain can be caused. Pain is classified in the medical profession as either acute or chronic.
If there is a reason why the body needs a pain signal to alert it to or help it stay aware of a problem, the pain is called acute. Acute pain disappears shortly after the cause of the pain has been treated, or the body has healed. Though acute pain may be related to other nervous conditions, such as anxiety, depression, and stress, its causes can usually be identified and, to some degree, treated.
Chronic pain is long-lasting pain, exceeding the time needed to recover after an injury or surgery, and where the cause may not be treatable. Doctors call pains that last for more than three months chronic, though this diagnosis is not always correct. For example, a woman who was involved in a minor car accident reported pain in the rotator cuff region of her right shoulder. Her doctor refused to take x-rays because there was no external or symptomatic sign of a break or dislocation, and because his medical system bosses would not approve the costly technique. It took three more doctors and two years before she strong-armed a pain specialist to x-ray her shoulder, whereupon it was discovered that she had no break, but had shattered part of her collar bone into tiny, sharp shards that were sawing her muscles and nerves!
What, then, causes the pain? If the pain affects a specific physical part of the body, it is the result of a physical injury. The injury may be a needle prick or knife wound; a sprained ankle or a Charlie-horse; a nauseated gut or your mouth after a tooth extraction. In each case of somatogenic pain (pain originating in body tissue), there is physical injury to the area sending pain signals to the brain. Ever stub your little toe? Feels like it's been sliced with a knife, doesn't it? You have lots of receptors in your toes, so even the little toe can send strong messages to the brain's pain recognition centers. Nausea is how the nociceptors in your gut are interpreted in the brain. Your brain is informed that something nasty is whittling away on your stomach or intestines, so your brain sends signals to other parts of the brain the cause gut muscles to spastically contract so you vomit and, hopefully, remove the pain-causing source.
A special type of somatogenic pain has been described as neuropathic pain, which specifically refers to pain caused by damage to nerve cells themselves. Nerve cells are damaged as a result of some diseases, including multiple sclerosis, diabetes, cancer, and Parkinson's disease.
If pain is not somatogenic, not caused by a wound or illness to specific body tissues, it is termed psychogenic pain. The older term was "psychosomatic," which indicated that the patient's pain was "all in their head" and they were making themselves sick. Today we recognize that the mind is still poorly understood, perhaps the area of our own biology we least know about. But if the mind can have an illness, the result could include pain. Thus, psychogenic pain is largely a product of psychological factors. The condition and pain are definitely real. The patient is genuinely experiencing pain, but the pain has either no organic explanation or else a weak one. Psychogenic pain often manifests itself as chronic headaches, lower back pain, or generic pains that the patient says are difficult to explain.
It is not unlikely that a patient will suffer more than one type of pain at the same time. Let me give an example of how several pain modes can come into play for a fictional patient named Gina:
Gina, aged 35, was brought to the hospital because of acute abdominal (somatogenic) pain. By the time she was brought in, her digestive tract had begun to hurt and go into spasms of nausea (somatogenic). After each vomiting episode, she briefly felt better. After being tested, the attending physician informed Gina that she had contracted a severe case of food poisoning. Gina had seen in the news that a local food poisoning epidemic had killed four people, so she began to worry and developed a severe headache (psychogenic). She also realized that she would miss an important meeting at work the next day, which caused her added stomach pain (psychogenic). To treat her symptoms, Gina was given an IV tube, and she was aware of it being very painful (somatogenic). The next morning, she awoke feeling ill, but much better than the day before. Convinced now that she wasn't going to die, she lost her headache and nausea. It was only after the nurse checked on her during morning rounds that Gina remembered that she'd only started her new job two weeks ago, and wouldn't be covered by medical insurance until she'd worked for sixty days. The thought of how much her medical expenses would be caused the headache and stomach cramps to return (psychogenic) and made her more sensitive to the presence of the IV tube.
To sum up: virtually all pain is real, and it is all in our heads. If the brain doesn't get a signal from a pain-receiving nerve wire, we cannot feel that pain. Acute pain is an alarm that tells us something is wrong in our body. Chronic pain is telling us that something may be wrong in our body, but more likely is the result of a psychological cause. Finally, pain, whether somatogenic or psychogenic, is real and should be treated as such by the patient, health-care givers, family, and friends.
Because chronic pain is the biggest mystery and cause of so much grief to so many people, the next section will specifically address the question of just what makes pain chronic....
I will need to read this; more. I am in pain so it is very hard to concentrate. 1999 chemical therapy(unneeded) jolted my brain into chronic pain & fatigue. It has stolen my life & meds don't help much; so i want to GET IT OUT OF MY BRAIN.
Just have NO CLUE how !!! "A special type of somatogenic pain has been described as neuropathic pain, which specifically refers to pain caused by damage to nerves cells themselves." I think chemical therapy can cause this ? But there has to be a way to undo the problem.
my son is one of those "astronomical" numbered people who was born with out receptors..he regularly sees a neurologist down at cooks childrens in dallas texas. As a mother of two other children who can feel pain...i went through the teething and not feeling well and i still do of course, but i used to be one of those mothers who would state "god! how i wish my kids couldnt feel pain" but when i had seth i realized how feeling pain was such a blessing. he is only two years old and has done countless things to himself. serious things to. when Dr Martz down at Cooks Childrens had first seen him he had told me that he had NEVER seen anyone who couldnt feel pain in their entire body..just in a hand or foot, etc. he actually had to do research for himself to help better protect my son. if anyone is in my position they know how terrifying it really is.
Chronic pain drives people mad and drives their doctors crazy. Sometimes it gets better. Often it doesn't. | http://hubpages.com/politics/HOW-TO-LIVE-WITH-PAIN | CC-MAIN-2016-50 | refinedweb | 2,506 | 69.31 |
hello everyone,
Ive been trying to alter strings and i am having a very difficult time altering the value after i split them. i understand what i am doing when i split the input (String[] word = phrase.split(" ");), thanks to this site.. and how i am then able to alter the characters and positions such as "Character.toString(word[0].charAt(0)).toUpperCase()". However, anything further than this i am totally confused. For example, my homework requires me to retrieve the first letter of the users 3 words and create an acronym. i have successfully done this but wanted to display an error message if user enters less than or more than 3 words. i tried setting up some if and else statements but compiler slapped me and called me stupid. Also, once i split the words, is it possible to capture the first letter for all the users input whether its 3 or 30 words?
import javax.swing.JOptionPane;

public class ThreeLetterAcronym {
    public static void main(String[] args) {
        String phrase = JOptionPane.showInputDialog(null,
                "Please enter 3 words for me to program.");
        String[] word = phrase.split(" ");
        String acronym = Character.toString(word[0].charAt(0)).toUpperCase()
                + Character.toString(word[1].charAt(0)).toUpperCase()
                + Character.toString(word[2].charAt(0)).toUpperCase();
        JOptionPane.showMessageDialog(null, "You chose: " + phrase
                + ". The acronym for the following is " + acronym);
    }
    // checked on oracle's tutorials for splitting strings. this way seems way
    // easier but having difficulty setting error for anything above 3 words.
}
our class has not yet tackled splitting phrases (which is why i am struggling, but this way seems much more clean and simple), but if you could point me in the right direction to research it would be greatly appreciated.
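Since a sketch might help: split() already gives you everything needed, because the length of the returned array is the word count, and a loop over the array grabs the first letters no matter how many words there are. The class and method names below are made up for illustration:

```java
public class AcronymHelper {

    // Builds an acronym from the first letter of every word in the phrase,
    // so it works for 3 words or 30.
    static String buildAcronym(String phrase) {
        String[] words = phrase.trim().split("\\s+"); // "\\s+" = any run of whitespace
        StringBuilder acronym = new StringBuilder();
        for (String w : words) {
            acronym.append(Character.toUpperCase(w.charAt(0)));
        }
        return acronym.toString();
    }

    // True when the user typed exactly the required number of words --
    // the check your if/else needs before building the acronym.
    static boolean hasWordCount(String phrase, int required) {
        return phrase.trim().split("\\s+").length == required;
    }

    public static void main(String[] args) {
        System.out.println(buildAcronym("three letter acronym")); // TLA
        System.out.println(hasWordCount("only two", 3));          // false
    }
}
```

In your program, the hasWordCount check would wrap the JOptionPane.showMessageDialog call: show the acronym when it returns true, and an error message otherwise.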
- Angular vs JSF
- Angular vs GWT
- Angular vs jQuery
- The Angular take on MVC (or MVW)
- The M in MVC – Scopes
- The V in MVC – Directives
- The C in MVC – Controllers
The Origins of Angular JS
Angular is becoming a framework of choice for developing web applications in enterprise settings, where traditionally the backend is built in Java and the frontend is built in a Java/XML based framework such as JSF or GWT.
As Java developers often living in the Spring/Hibernate world, we might wonder how a dependency-injection, dirty checking based MVC framework ever managed to jump from the server and into our browsers, and find that to be an interesting coincidence.
The Story Behind Angular
It turns out that the similarities are likely not a coincidence, because at its roots Angular was built by Java developers at Google, who felt that they were not being productive building frontend applications using Java, specifically GWT.
These are some important quotes from the Angular developers about the origins of Angular, recently on the Javascript Jabber Podcast (transcript link here):
we were building something in GWT and was getting really frustrated just how unproductive I was being.
we could build the application (Google Feedback) much faster than we could build it in GWT.
So this means Angular was effectively created by full-time Java GWT developers, as a response to how they felt that Java frameworks limited their frontend development productivity.
Is JSF or GWT still the way to go?
Although with two very different approaches, one of the main goals of both JSF and GWT is to abstract at least part of the web away, by allowing web development to be done in the Java/XML world.
But it seems that in this day and age of HTML5 browsers, frameworks like JSF/GWT are much more complex than the underlying platform that they are trying to abstract away in the first place.
Although they can be made to work fine, the question is: at what cost?
Often the underlying browser technologies leak through to the developer, which ends up having to know HTML, CSS and Javascript anyway in order to be able to implement many real world requirements.
This leaves the developer wondering why can’t browser technologies be used directly without so many constraints and intermediate layers of abstraction, because in the end there is really no escape from them.
Browser technologies are actually simpler, more widespread and far better documented than any Java framework could ever be.
Historical context of JSF and GWT
It's important to realize how JSF/GWT came to be in the first place: they were created to be used in scenarios where an enterprise backend built in Java/XML already existed, and a need existed to reuse that same team of enterprise developers to build the frontend as well.
From a project management point of view, at first glance and still today, this makes a lot of sense.
Also from a historical point of view, JSF/GWT were created in a context where the browser was a much quirkier platform than it is today, with far fewer developer tools available.
So the goal of the framework was to abstract at least some of the browser technologies away, enabling it to be used by a wider developer base.
Angular vs JSF
JSF came more or less at the same time as Ajax exploded in the web development scene a decade ago. The initial version of JSF was not designed with Ajax in mind, but was instead meant as a full page request/response model.
In this model, a DOM-like tree of components representing the user interface exists in memory, but this tree exists only on the server side.
The server View then gets converted back and forth to HTML, CSS and Javascript, treating the browser mostly as a rendering platform with no state and limited control over what is going on.
Pages are generated by converting the server View representation to HTML, CSS and Javascript via a set of special classes called Renderers, before sending the page to the user.
How does JSF work?
The user will then interact with the page and send back an action, typically via an HTTP POST. A server side lifecycle is then triggered via the JSF Controller, which restores the view tree, applies the new values to the view and validates them, updates the domain model, invokes the business logic and renders back a new view.
The framework was then evolved in JSF 2 for native Ajax support and stateless web development, but the main approach of generating the HTML in the browser from a server side model remained.
How does Angular compare to JSF
The main design difference is that in Angular the Model, the View and the Controller were moved from the server and into the browser itself.
In Angular, the browser technologies are not seen as something to be avoided or hidden, but as something to be used to the full extent of their capabilities, to build something much more similar to a Swing fat client than to a web page.
Angular does not mandate this, but the server typically has very little to no state and serves mostly JSON via REST services.
How important is Javascript in JSF?
The take of JSF towards Javascript seems to be that the language is something that JSF library developers need to know, but usually not the application developers.
The most widespread JSF library, Primefaces, contains internally thousands of lines of Javascript code for its jQuery based frontend widgets, but Primefaces based projects often have very little to no Javascript in the application code base itself.
Still, in order to do custom component development in Primefaces, it’s important to know Javascript and jQuery, but usually only a small part of the application team needs to know it.
Angular vs GWT
A second generation take on Java web development in the browser came with the arrival of GWT. In the GWT take, the Model, View and Controller are also moved to the browser, just like in Angular.
The main difference is the the way that Javascript is handled: GWT provides a Java to Javascript compiler that treats Javascript as a client side bytecode execution engine.
In this model the development is made entirely in Java, and with a build process the code gets compiled down to Javascript and executed in the browser.
The GWT take on HTML and CSS
In GWT, HTML and CSS are not meant to be completely hidden from the developer, although XML namespaces are provided to the user to layout at least some of the page major blocks.
When getting to the level of forms, an HtmlPanel is provided to allow building pages in HTML and CSS directly. This is, by the way, also possible in JSF, although in the case of both frameworks developers typically try to avoid HTML and CSS as much as possible, by using the XML namespaces to their maximum possible extent.
Why the Javascript transpilation approach?
GWT is not so different from Angular in a certain way: it’s MVC in the browser, with Javascript being a transpilation target rather than the application development language.
The main goal of that transpilation is again reusing the same developer team that builds the backend as well, and abstracting away browser quirks.
Does the GWT object oriented approach help?
The GWT programming model means that the web page is viewed from an object oriented point of view: the page is seen in the program as a network of interconnected objects instead of a document.
The notions of document and elements are hidden away by the framework, but it turns out that this extra level of indirection, although familiar, ends up not being that helpful and often gets in the way of the developer more than anything else.
Is the extra layer of abstraction needed?
The fact is that the notions of page and elements are already simple and powerful enough that they don't need an extra layer of abstraction around them.
With the object oriented abstraction of the page, the developer often ends up having to debug their way through a myriad of classes for simple things like finding where to add or remove a simple CSS class, or where to wrap an element in a div.
Super Dev Mode helps, but it feels like the whole GWT hierarchy of objects, the Java to Javascript compiler, and the ecosystem of debug modes and browser and IDE plugins are all together far more complex than what they are trying to hide away in the first place: the web.
Angular vs jQuery
Meanwhile and in parallel in the Javascript world, a new approach came along for tackling browser differences: the idea that a Javascript library can be created that provides a common API that works well in all browsers.
The library would detect the browser at runtime and adapt internally the code used so that the same results occur in all browsers.
Such library would be much simpler to use as it did not require browser quirk knowledge, and could appeal to a wider development base.
The most successful of those libraries is jQuery, which is mostly a page manipulation library, but it’s not meant to be an MVC framework.
jQuery in the Java World
Still jQuery is the client side basis of the most popular JSF framework: Primefaces. The main difference between Angular and jQuery is that in jQuery there is no notion of Model or Controller, the document is instead directly manipulated.
A lot of code like this is written when using jQuery (example from the Primefaces Javascript autocomplete widget):
this.itemtip = $('<div id="' + this.id + '_itemtip" class="ui-autocomplete-itemtip ui-state-highlight ui-widget ui-corner-all ui-shadow"></div>') .appendTo(document.body);
As we can see, the Primefaces developers themselves need to know about HTML, CSS and Javascript, although many of the application developers use the provided XML tags that wrap the frontend widgets, and treat them as a black box.
This type of code reminds of the code initially written in Java development, when the Servlet API came along but there weren’t yet any JSP’s:
out.println("<h1>" + message + "</h1>");
What Angular allows is to decouple the Model from the View, and loosely glue the two together with a Controller.
The Angular JS take on MVC (or MVW)
Angular positions itself as MVW framework – Model, View, Whatever. This means that it acknowledges the clear separation of a Model, that can be a View specific model and not necessarily a domain model.
In Angular the Model is just a POJO – Plain Old Javascript Object.
Angular acknowledges also the existence of a View, which is bound declaratively to the Model. The view is just HTML with a special expression language for Model and user interaction binding, and a reusable component building mechanism known as Directives.
It also acknowledges the need for something to glue the Model and the View together, but it does not name this element, hence the "Whatever". In MVC this element is the Controller, in MVP it's the Presenter, etc.
Minimal Angular Example
Let's go over the three elements of MVC and see what they correspond to in Angular, using a minimal interactive multiplication example; here it is working in a jsFiddle.
As you can see, the result is updated immediately once the two factors change. Doing this in something like JSF or GWT would be a far larger amount of work.
What would this look like in JSF and GWT?
In JSF, for example in Primefaces this would mean having to write a small jQuery plugin or routine to add the interactive multiplication feature, create a facelet template, declare a facelet tag and add it to the tag library, etc.
In GWT this would mean bootstrapping a sample app, creating a UI binder template, adding listeners to the two fields or setting up the editor framework, etc.
Enhanced Developer Productivity
We can see what the Angular JS developers meant with enhanced productivity, as the complete Angular version is the following, written in a few minutes:
<div ng-app="Calculator" ng-controller="CalculatorCtrl">
    <input type="text" ng-model="model.left"> *
    <input type="text" ng-model="model.right"> =
    <span>{{multiply()}}</span>
</div>

angular.module('Calculator', [])
    .controller('CalculatorCtrl', function($scope) {
        $scope.model = {
            left: 10,
            right: 10
        };
        $scope.multiply = function() {
            return $scope.model.left * $scope.model.right;
        }
    });
So let’s go over the MVC setup of this sample code, starting with the M.
The M in MVC – Angular Scopes
The Model in Angular is just a simple Javascript object. This is the model object, being injected into the scope:
$scope.model = { left: 10, right: 10 };
Injecting the model into the scope makes it dirty checked, so that any changes in the model are reflected immediately back to the view. In the case of the example above, editing the factor input boxes triggers dirty checking which triggers the recalculation of the multiplication, which gets instantly reflected in the result.
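The dirty checking idea can be sketched in a few lines of plain Javascript. This is a toy illustration only, not Angular's actual $digest implementation; all names here are made up:

```javascript
// A minimal "scope": watchers re-run on every digest and fire their
// callback only when the watched value actually changed since last time.
function Scope() {
  this.watchers = [];
}

Scope.prototype.watch = function (getter, onChange) {
  // last starts out undefined so the first digest always fires the callback
  this.watchers.push({ getter: getter, last: undefined, onChange: onChange });
};

Scope.prototype.digest = function () {
  var dirty;
  do { // keep looping until a full pass sees no changes
    dirty = false;
    for (var i = 0; i < this.watchers.length; i++) {
      var w = this.watchers[i];
      var value = w.getter(this);
      if (value !== w.last) {
        w.last = value;
        w.onChange(value);
        dirty = true;
      }
    }
  } while (dirty);
};

// Mirror the multiplication example: watch the product of the two factors.
var scope = new Scope();
scope.model = { left: 10, right: 10 };
var result;
scope.watch(
  function (s) { return s.model.left * s.model.right; },
  function (value) { result = value; }
);

scope.digest();       // initial run
console.log(result);  // 100
scope.model.left = 3; // like typing into the first input box
scope.digest();       // Angular triggers this for you after DOM events
console.log(result);  // 30
```

Changing a model field and then running a digest is what happens behind the scenes when a bound input changes: the model mutation is followed by a digest, and every watcher whose value changed pushes the update into the view.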
The V in MVC – Enhanced HTML
The view in Angular is just HTML annotated with a special expression language, such as the {{multiply()}} expression. The HTML is really acting in this case as a client side template, which could be split into reusable HTML components called Directives.
The C in MVC – Angular Controllers
The CalculatorCtrl is the controller of the example application. It initializes the model before the view gets rendered, and acts as the glue between the view and the model by defining the multiply function.
The controller typically defines observers on the model that trigger event driven code.
Conclusions
It seems that polyglot development in both Java and Javascript is a viable option for the future of enterprise development, and that Angular is a major part of that vision of how to build enterprise apps.
The simplicity and speed of development that it brings is attractive to frontend Java developers, which to one degree or another already need to deal with HTML, CSS and often Javascript anyway.
So an attractive option seems to be that a portion of enterprise application code will start being written in Javascript using Angular instead of Java, but only the next few years will tell.
An alternative way of using Angular
Another possibility is that Angular is used internally by frameworks such as JSF as an internal implementation mechanism.
See for example this post from the lead of the Primefaces project:
I have plans to add built-in js mvc framework support, probably it will be angular.
So it's possible that Angular will be used as an implementation mechanism for technologies that follow the approach of keeping the application developer experience Java and XML based as much as possible.
One thing seems sure: Angular, either as an application MVC framework or as an internal detail of a Java/XML based framework, seems to be slowly but surely making its way into the enterprise Java world.
Related Links:
A great online resource for Angular: The egghead.io Angular lessons, a series of minimal 5 minutes video lectures by John Lindquist (@johnlindquist). | http://www.javacodegeeks.com/2014/07/the-java-origins-of-angular-js-angular-vs-jsf-vs-gwt.html | CC-MAIN-2015-32 | refinedweb | 2,538 | 54.05 |
Requirements: Mac OS X 10.8 or later and the iOS 7+ SDK.
Instructions:
There are three build targets:
If everything goes fine, you should see a build/ios directory, inside there's
FIXME: This needs to be updated for the latest methods
Here is the easiest method:
Here is a more manual method:
Window and display mode sizes in SDL are in “screen coordinates” (or “points”, in Apple's terminology) rather than in pixels. On iOS this means that a window created on an iPhone 6 will have a size in screen coordinates of 375 x 667, rather than a size in pixels of 750 x 1334. All iOS apps are expected to size their content based on screen coordinates / points rather than pixels, as this allows different iOS devices to have different pixel densities (Retina versus non-Retina screens, etc.) without apps caring too much.
By default SDL will not use the full pixel density of the screen on Retina/high-dpi capable devices. Use the SDL_WINDOW_ALLOW_HIGHDPI flag when creating your window to enable high-dpi support.
When high-dpi support is enabled, SDL_GetWindowSize() and display mode sizes will still be in “screen coordinates” rather than pixels, but the window will have a much greater pixel density when the device supports it, and the SDL_GL_GetDrawableSize() or SDL_GetRendererOutputSize() functions (depending on whether raw OpenGL or the SDL_Render API is used) can be queried to determine the size in pixels of the drawable screen framebuffer.
Some OpenGL ES functions such as glViewport expect sizes in pixels rather than sizes in screen coordinates. When doing 2D rendering with OpenGL ES, an orthographic projection matrix using the size in screen coordinates (SDL_GetWindowSize()) can be used in order to display content at the same scale no matter whether a Retina device is used or not.
On iOS the application goes through a fixed life cycle and you will get notifications of state changes via application events. When these events are delivered you must handle them in an event callback because the OS may not give you any processing time after the events are delivered.
e.g.
int HandleAppEvents(void *userdata, SDL_Event *event)
{
    switch (event->type)
    {
    case SDL_APP_TERMINATING:
        /* Terminate the app.
           Shut everything down before returning from this function.
        */
        return 0;
    case SDL_APP_LOWMEMORY:
        /* You will get this when your app is paused and iOS wants more memory.
           Release as much memory as possible.
        */
        return 0;
    case SDL_APP_WILLENTERBACKGROUND:
        /* Prepare your app to go into the background.  Stop loops, etc.
           This gets called when the user hits the home button, or gets a call.
        */
        return 0;
    case SDL_APP_DIDENTERBACKGROUND:
        /* This will get called if the user accepted whatever sent your app to the background.
           If the user got a phone call and canceled it, you'll instead get an
           SDL_APP_DIDENTERFOREGROUND event and restart your loops.
           When you get this, you have 5 seconds to save all your state or the app will be terminated.
           Your app is NOT active at this point.
        */
        return 0;
    case SDL_APP_WILLENTERFOREGROUND:
        /* This call happens when your app is coming back to the foreground.
           Restore all your state here.
        */
        return 0;
    case SDL_APP_DIDENTERFOREGROUND:
        /* Restart your loops here.
           Your app is interactive and getting CPU again.
        */
        return 0;
    default:
        /* No special processing, add it to the event queue */
        return 1;
    }
}

int main(int argc, char *argv[])
{
    SDL_SetEventFilter(HandleAppEvents, NULL);

    ... run your main loop

    return 0;
}
Your SDL application for iOS uses OpenGL ES for video by default.
If your application doesn‘t use OpenGL’s depth buffer, you may find significant performance improvement by setting SDL_GL_DEPTH_SIZE to 0.
Finally, if your application completely redraws the screen each frame, you may find significant performance improvement by setting the attribute SDL_GL_RETAINED_BACKING to 0.
OpenGL ES on iOS doesn't use the traditional system-framebuffer setup provided in other operating systems. Special care must be taken because of this:
The above objects can be obtained via SDL_GetWindowWMInfo() (in SDL_syswm.h).
The SDL keyboard API has been extended to support on-screen keyboards:
void SDL_StartTextInput() -- enables text events and reveals the onscreen keyboard.
void SDL_StopTextInput() -- disables text events and hides the onscreen keyboard.
SDL_bool SDL_IsTextInputActive() -- returns whether or not text events are enabled (and the onscreen keyboard is visible).
Textures: The optimal texture formats on iOS are the SDL_PIXELFORMAT_ABGR8888, SDL_PIXELFORMAT_BGR888, and SDL_PIXELFORMAT_RGB24 pixel formats.
Loading Shared Objects: This is disabled by default since it seems to break the terms of the iOS SDK agreement for iOS versions prior to iOS 8. It can be re-enabled in SDL_config_iphoneos.h.
Game Center integration might require that you break up your main loop in order to yield control back to the system. In other words, instead of running an endless main loop, you run each frame in a callback function, using:
int SDL_iPhoneSetAnimationCallback(SDL_Window * window, int interval, void (*callback)(void*), void *callbackParam);
This will set up the given function to be called back on the animation callback, and then you have to return from main() to let the Cocoa event loop run.
e.g.
extern "C"
void ShowFrame(void*)
{
    ... do event handling, frame logic and rendering ...
}

int main(int argc, char *argv[])
{
    ... initialize game ...

#if __IPHONEOS__
    // Initialize the Game Center for scoring and matchmaking
    InitGameCenter();

    // Set up the game to run in the window animation callback on iOS
    // so that Game Center and so forth works correctly.
    SDL_iPhoneSetAnimationCallback(window, 1, ShowFrame, NULL);
#else
    while ( running ) {
        ShowFrame(0);
        DelayFrame();
    }
#endif
    return 0;
}
We start by building off the previous lesson and creating a security handler object for AGOL. If you missed the deeper discussion on how to handle security, please see this post.
One of the most common tasks to perform when working with your site, is to query it to see what content you have. There are two ways to query a site:
- With Credentials - this means a token will be appended to the end of the search query, and users can find both public and non-public items if they have the proper permissions to do so.
- Without Credentials - this means only public items shared with everyone will be found.
from arcrest.security import AGOLTokenSecurityHandler
from arcrest.manageorg import Administration

if __name__ == "__main__":
    username = "username"
    password = "password"
    proxy_port = None
    proxy_url = None
    securityHandler = AGOLTokenSecurityHandler(username,
                                               password,
                                               proxy_url=proxy_url,
                                               proxy_port=proxy_port)
    siteObject = Administration(securityHandler=securityHandler,
                                proxy_url=proxy_url,
                                proxy_port=proxy_port)
    results = siteObject.query(q="< some query string >")
This returns a Python dictionary object as shown below:
{
    "query" : "type:feature class",
    "total" : 12345,
    "start" : 1,
    "num" : 10,
    "nextStart" : 11,
    "results" : [ ...list of items... ]
}
Let us examine the results object. It's a dictionary with many valuable key/value pairs. Since this query was performed using the default values, the query will begin with the first item and go to the 10th item (num). The key nextStart tells you what the start value needs to be if you want to page through the results. So the next time you pass the query, in order to get the next set of results you need to set start = 11. Manually this seems like quite a daunting task, but luckily for us, we have loops. There are many ways to create loops to perform this task, but in this example a while loop will be used to walk the results and put them in a single list object.
def findContentByDate(admin, q):
    """ finds items based on a query (q)
        Inputs:
           admin - manageorg.Administration object
           q - query string
    """
    start = 1
    count = 0
    nextStart = 0
    num = 100
    results = []
    while nextStart != -1:
        query = admin.query(q=q,
                            sortOrder="desc",
                            sortField="modified",
                            t=None,
                            start=start + (count * num),
                            num=100)
        results = results + query['results']
        nextStart = query['nextStart']
        count += 1
    return results

Now we have all the items based on a query in a single Python list. This is very helpful for cataloging a site/org, and for finding publicly shared items.
AGOL has its own query language, much like Google or other search engines. This documentation can be found here (). It is a must-read for anyone who wants to query AGOL or Portal effectively.
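Since q is just a string, it can be assembled in Python before being passed to query(). The field names below (owner, type) follow the ArcGIS search syntax referenced above, but the helper and its values are made-up examples, not part of ArcREST:

```python
def build_query(owner=None, item_type=None, keywords=None):
    """Assemble an ArcGIS Online/Portal search string from optional filters."""
    parts = []
    if owner:
        parts.append("owner:%s" % owner)
    if item_type:
        # multi-word types such as Feature Service need quoting
        parts.append('type:"%s"' % item_type)
    if keywords:
        parts.append(keywords)
    return " AND ".join(parts)

q = build_query(owner="gis_admin", item_type="Feature Service", keywords="parcels")
print(q)  # owner:gis_admin AND type:"Feature Service" AND parcels

# items = findContentByDate(admin, q=q)  # feed it to the paging helper above
```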
Though the samples shown are created to work with AGOL, the same example can be used to query a Portal site.
If you want to learn more, please visit the ArcREST Github Page. Help us make it better and post comments, suggestions and improvements in the Issues section. Or better yet, fork the repo and submit a pull request.
Happy Coding and Enjoy! | http://anothergisblog.blogspot.com/2015/04/arcrest-basics-arcgis-online-and-query.html | CC-MAIN-2017-22 | refinedweb | 498 | 63.49 |
Vue.js 3: Future-Oriented Programming
If you are interested in Vue.js, you probably know about the 3rd version of this framework, which will be released shortly (if you are reading this article from the future, I hope it’s still relevant 😉). The new version is under active development for now, but all possible features can be found in separate RFC (request for comments) repository:. One of them, function-api, can dramatically change the style of developing Vue apps.
What’s wrong with the current API? 👀
The best way is to show everything in an example. So, let’s imagine that we need to implement a component that should fetch some user’s data, show loading state and topbar depending on scroll offset. Here is the final result:
You can check the live example here.
It is good practice to extract some logic to reuse across multiple components. With Vue 2.x’s current API, there are a number of common patterns, most well known are:
- Mixins (via the mixins option) 🍹
- Higher-order components (HOCs) 🎢
So, let's move the scroll tracking logic into a mixin, and the fetching logic into a higher-order component. A typical implementation with Vue is shown below.
Scroll mixin:
const scrollMixin = { data() { return { pageOffset: 0 } }, mounted() { window.addEventListener('scroll', this.update) }, destroyed() { window.removeEventListener('scroll', this.update) }, methods: { update() { this.pageOffset = window.pageYOffset } } }
scrollMixin.js
Here we add
scroll event listener, track page offset and save it in
pageOffset property.
The higher-order component will look like this:
import { fetchUserPosts } from '@/api' const withPostsHOC = WrappedComponent => ({ props: WrappedComponent.props, data() { return { postsIsLoading: false, fetchedPosts: [] } }, watch: { id: { handler: 'fetchPosts', immediate: true } }, methods: { async fetchPosts() { this.postsIsLoading = true this.fetchedPosts = await fetchUserPosts(this.id) this.postsIsLoading = false } }, computed: { postsCount() { return this.fetchedPosts.length } }, render(h) { return h(WrappedComponent, { props: { ...this.$props, isLoading: this.postsIsLoading, posts: this.fetchedPosts, count: this.postsCount } }) } })
fetchHOC.js
Here
isLoading,
posts properties initialized for loading state and posts data respectively. The
fetchPosts method will be invoked after creating an instance and every time
props.id changes, in order to fetch data for new
id.
It’s not a complete implementation of HOC, but for this example, it will be enough. Here we just wrap the target component and pass original props alongside fetch-related props.
Target component looks like this:
// ... <script> export default { name: 'PostsPage', mixins: [scrollMixin], props: { id: Number, isLoading: Boolean, posts: Array, count: Number } } </script> // ...
decomposedOptionsComponent.vue
To get specified props it should be wrapped in created HOC:
const PostsPage = withPostsHOC(PostsPage)
Full component with template and styles can be found here.
Great! 🥳 We just implemented our task using mixin and HOC, so they can be used by other components. But not everything is so rosy, there are several problems with these approaches.
1. Namespace clashing ⚔️
Imagine that we need to add
update method to our component:
// ... <script> export default { name: 'PostsPage', mixins: [scrollMixin], props: { id: Number, isLoading: Boolean, posts: Array, count: Number }, methods: { update() { console.log('some update logic here') } } } </script> // ...
methodsOptionsComponent.vue
If you open the page again and scroll it, the topbar will not be shown anymore. This is due to the overwriting of mixin’s method
update. The same works for HOCs. If you change the data field
fetchedPosts to
const withPostsHOC = WrappedComponent => ({ props: WrappedComponent.props, // ['posts', ...] data() { return { postsIsLoading: false, posts: [] // fetchedPosts -> posts } }, // ...
fetchHOCPosts.js
…you will get errors like this:
The reason for this is that wrapped component already specified property with the name
2. Unclear sources 📦
What if after some time you decided to use another mixin in your component:
// ... <script> export default { name: 'PostsPage', mixins: [scrollMixin, mouseMixin], // ...
optionsComponentMixins.vue
Can you tell exactly which mixin a
pageOffset property was injected from? Or in another scenario, both mixins can have, for example,
yOffset property, so the last mixin will override property from the previous one. That’s not good and can cause a lot of unexpected bugs. 😕
3. Performance ⏱
Another problem with HOCs is that we need separate component instances created just for logic reuse purposes that come at a performance cost.
Let’s “setup” 🏗
Let’s see what alternative can offer the next Vue.js release and how we can solve the same problem using function-based API.
Since Vue 3 is not released yet, the helper plugin was created — vue-function-api. It provides function api from
Vue3.x to
Vue2.x for developing next-generation Vue applications.
Firstly, you need to install it:
$ npm install vue-function-api
and explicitly install via
Vue.use():
import Vue from 'vue' import { plugin } from 'vue-function-api' Vue.use(plugin)
The main addition function-based API provides is a new component option -
setup(). As the name suggests, this is the place where we use the new API’s functions to setup the logic of our component. So, let’s implement a feature to show topbar depending on scroll offset. Basic component example:
// ... <script> export default { setup(props) { const pageOffset = 0 return { pageOffset } } } </script> // ...
baseSetupComponent.vue
Note that the
setup function receives the resolved props object as its first argument and this
props object is reactive. We also return an object containing
pageOffset property to be exposed to the template’s render context. This property becomes reactive too, but on the render context only. We can use it in the template as usual:
<div class="topbar" :...</div>
But this property should be mutated on every scroll event. To implement this, we need to add scroll event listener when the component will be mounted and remove the listener — when unmounted. For these purposes
value,
onMounted,
onUnmounted API functions exist:
// ... <script> import { value, onMounted, onUnmounted } from 'vue-function-api' export default { setup(props) { const pageOffset = value(0) const update = () => { pageOffset.value = window.pageYOffset } onMounted(() => window.addEventListener('scroll', update)) onUnmounted(() => window.removeEventListener('scroll', update)) return { pageOffset } } } </script> // ...
scrollSetupComponent.vue
Note that all lifecycle hooks in 2.x version of Vue have an equivalent
onXXX function that can be used inside
setup().
You probably also noticed that
pageOffset variable contains a single reactive property:
.value. We need to use this wrapped property because primitive values in JavaScript like numbers and strings are not passed by reference. Value wrappers provide a way to pass around mutable and reactive references for arbitrary value types.
Here’s how the
pageOffset object looks like:
The next step is to implement the user’s data fetching. As well as when using option-based API, you can declare computed values and watchers using function-based API:
// ... <script> import { value, watch, computed, onMounted, onUnmounted } from 'vue-function-api' import { fetchUserPosts } from '@/api' export default { setup(props) { const pageOffset = value(0) const isLoading = value(false) const posts = value([]) const count = computed(() => posts.value.length) const update = () => { pageOffset.value = window.pageYOffset } onMounted(() => window.addEventListener('scroll', update)) onUnmounted(() => window.removeEventListener('scroll', update)) watch( () => props.id, async id => { isLoading.value = true posts.value = await fetchUserPosts(id) isLoading.value = false } ) return { isLoading, pageOffset, posts, count } } } </script> // ...
scrollFetchSetupComponent.vue
A computed value behaves just like a 2.x computed property: it tracks its dependencies and only re-evaluates when dependencies have changed. The first argument passed to
watch is called a “source”, which can be one of the following:
- a getter function
- a value wrapper
- an array containing the two above types
The second argument is a callback that will only get called when the value returned from the getter or the value wrapper has changed.
We just implemented the target component using function-based API. 🎉 The next step is to make all this logic reusable.
Decomposition 🎻 ✂️
This is the most interesting part, to reuse code related to a piece of logic we just can extract it into what called a “composition function” and return reactive state:
// ... <script> import { value, watch, computed, onMounted, onUnmounted } from 'vue-function-api' import { fetchUserPosts } from '@/api' function useScroll() { const pageOffset = value(0) const update = () => { pageOffset.value = window.pageYOffset } onMounted(() => window.addEventListener('scroll', update)) onUnmounted(() => window.removeEventListener('scroll', update)) return { pageOffset } } function useFetchPosts(props) { const isLoading = value(false) const posts = value([]) watch( () => props.id, async id => { isLoading.value = true posts.value = await fetchUserPosts(id) isLoading.value = false } ) return { isLoading, posts } } export default { props: { id: Number }, setup(props) { const { isLoading, posts } = useFetchPosts(props) const count = computed(() => posts.value.length) return { ...useScroll(), isLoading, posts, count } } } </script> // ...
decomposedSetupComponent.vue
Note how we used
useFetchPosts and
useScroll functions to return reactive properties. These functions can be stored in separate files and used in any other component. Compared to the option-based solution:
- Properties exposed to the template have clear sources since they are values returned from composition functions;
- Returned values from composition functions arbitrarily named so there is no namespace collision;
- There are no unnecessary component instances created just for logic reuse purposes.
There are a lot of other benefits that can be found on the official RFC page.
All code examples used in this article you can find here.
Live example of the component you can check here.
Conclusion
As you can see Vue’s function-based API presents a clean and flexible way to compose logic inside and between components without any of option-based API drawbacks. Just imagine how powerful composition functions could be for any type of project — from small to big, complex web apps. 🚀
I hope this post was useful 🎓. If you have any thoughts or questions, please feel free to respond and comment below! I will be glad to answer 🙂. Thanks.
Suggest: | https://school.geekwall.in/p/9-xUW5Gf/vue-js-3-future-oriented-programming | CC-MAIN-2020-40 | refinedweb | 1,588 | 50.63 |
Talk:United States Numbered Highway Relations
Contents
US:US Redundancy?
US:US? That seems a little redundant. I know we're using
US: as a quasi-namespace for state routes, and (apparently now) Interstates, but I don't know of a technical reason we need to call it
US:US. The numbered routes are the original U.S. route network. – Minh Nguyễn (talk, contribs) 21:18, 26 May 2009 (UTC)
- I'd say the US:US is for consistency. Considering it's the middle tier between the Interstates and the state routes, having it as simply US would seem to imply that the original US route network is somehow more significant than the Interstate network and all the states' individual networks. You could argue that it's more significant because it's the oldest, but many states already had state route networks before the US route network came into existence. Vid the Kid 21:44, 30 June 2009 (UTC)
Historic routes
There should be a section for significant historic routes. Yes, some of the significant historic routes are already covered, as they are National Scenic Byways, but not all of them. The Lincoln Highway is only a National Scenic Byway in Illinois, but that historic road is definitely much longer. The Dixie Highway is not in the NSB program at all. So I would like to expand the National Scenic Byways section to also include other significant historic routes. Vid the Kid 21:48, 30 June 2009 (UTC)
Splitting relations
Why are there now multiple relations entries for several states? I see the recent edits by NE2...but I don’t see the point. Did someone say somewhere that relations should be split at state borders or something? --Hawke 18:28, 7 January 2010 (UTC)
- Some mappers expressed concern that editing some of these highways was taking forever, due to the massive number of members in some of these relations. As a Potlatch user, I'm not thrilled to be seeing super-relations all over, but I guess it'll make things easier to manage in the long run. Splitting relations at state lines is pretty much an arbitrary decision, by the way. – Minh Nguyễn (talk, contribs) 07:08, 8 January 2010 (UTC)
- Well, I knew that splitting at state borders was arbitrary, but if we’re going to split it that’s the sensible place. My only concern was that I only see comments along the lines of “we might want to consider splitting some relations at some point,” and nothing saying we were definitely going ahead and doing that, or under what circumstances the ways should be split. It also didn’t help that I misread NE2’s changes slightly. --Hawke 16:44, 8 January 2010 (UTC)
- It might be useful to consider whether relations over a certain number of members would be useful to split. For instance, might consider relation chunks of 500 members — if 500 is handled reasonably well by all users with respect to performance. I think that a performance-based split would be better than a state-by-state split. However, in a performance-based split, the split itself should happen at a state border. For instance, might have a segment that crosses Indiana, Illinois and Iowa, with the split points at the Iowa and Indiana borders. If the performance issues are only for a small subset of folks, I think the issue could be handled another way, such as only downloading (to JOSM) members mapped to a certain state or group of states. --Ceyockey 00:21, 9 January 2010 (UTC)
- I prefer spliting at state borders except in occasional cases where the sections in another state are so short it would be silly. For example, when i did the I-88 relation for NY, i included the short PA segments. i'm generally finding that JOSM starts to seem a bit sluggish when i am dealing with the larger relations in NY. -- nfgusedautoparts 00:04 10 January 2010 (UTC)
For the record, the only one I split was US 1 in Florida, which is longer than Washington to Boston. The other changes in the table were to make it sortable (specifically so you can sort by progress). --NE2 02:44, 9 January 2010 (UTC)
Sorting
What use is sorting on this page?
:^) – Minh Nguyễn (talk, contribs) 08:14, 13 January 2010 (UTC)
- No you can't – clicking the sorting button gives me a JavaScript error in Firefox. – Minh Nguyễn (talk, contribs) 06:12, 31 January 2010 (UTC)
Looks like there will be a gap on US 24
Colorado signs US 24 on the old surface road to the Kansas state line, but Kansas signs it on I-70. It's probably best to leave this as a gap. --NE2 23:36, 30 January 2010 (UTC)
Gap in US 61 too?
On US 61 south at Turrell, Arkansas, it's supposed to enter I-55 south. But the recent reconstruction eliminated this ramp, and as best as I can tell there is no signage in this area pointing to I-55 south. The Arkansas Highway and Transportation Department doesn't care how US 61 goes, as they don't deal with overlaps. So I've left a gap in the US 61 relation. --NE2 09:19, 5 February 2010 (UTC)
Detoured, former and decommissioned alignments
If there are a limited amount of signs for a route that has been decommissioned for at least ten years or has never been legislated as such, should a relation or ref tags be used to mark that route along its former alignment nevertheless?
A user and I had an argument recently over how to classify US 1 Business in Trenton. He insists on having the relation follow the route of US 1 Alternate, which was decommissioned in 1978; I, however, had it along the route legislated by the New Jersey Department of Transportation before his revisions to US 206. The relation is currently set to end at the New Jersey-Pennsylvania line, and the route, except where it runs along US 206 (along which, except for one sign at the Brunswick Circle, there exist no signs for any bannered spur of US 1). CrystalWalrein 02:28, 17 March 2010 (UTC)
- See our page on disputes, which describes the "on the ground rule". There it's talking specifically about languages, but in my experience it's the general rule for anything else. This certainly keeps us from arguing between differing definitions (in some cases the state disagrees with itself). Here, there certainly are signs south of the Brunswick Circle, for instance at the CR 653 split. If and when the signs (which, at least at the circle, were presumably posted by NJDOT) disappear, then we can change the relation and ref tags. According to [1], NJDOT signs US 1 Business "as a service to the public".
- By the way, NJDOT does not "legislate" any routes. The state legislature at one point defined the routes by law, but the 1953 renumbering ended this. They keep track of routes with the straight line diagrams, but in this case the signage department chose not to follow the SLD. --NE2 19:09, 17 March 2010 (UTC)
Redoing the 2d/3d US Highways sections
I think we should completely redo the entire setup for the 2d/3d US Highways into the same format that the Interstate Highway relations page is using with separate sections for SuperRelations and the normal Relations. Because right now IMO, it doesn't look good at all & the sorting is useless imo.
With my recommendation to convert it to the same format as the Interstates page, we'd separate the 2d US Highways (including US-101) into ten digit blocks. So, we would have 0x, 1x, 2x, 3x, 4x, 5x, 6x, 7x, 8x, and 9x. US-101 would be placed in the 9x section.
As for the US Highways higher that 101 that are currently in the 102-730 section, we should separate them into 100's sections. Example: 1xx, 2xx, 3xx, 4xx, 5xx, 6xx, 7xx
Opinions? --Rickmastfan67 08:26, 29 May 2010 (UTC)
- Would would be the point? (I do agree that, now that everything is complete, sorting is useless.) --NE2 11:55, 29 May 2010 (UTC)
- Well, it would make it easier to read for one thing. Especially if you can see a SuperRelation section right in-front of the normal relations for each section/state. This will allow people to easily be able to identify the SuperRelation that they need to add a new relation to IF they need to do more splitting up in the future. Because, who knows if somebody will need to split up another relation in the future once they clean up the entire route after adding any state route relations that multiplex. That and bridges and other separated segments of the routes. --Rickmastfan67 02:11, 30 May 2010 (UTC)
- So, nobody else has an opinion on this (besides NE2)? Because tomorrow I should have some time to redo it if nobody has any major objections with it. --Rickmastfan67 02:30, 13 June 2010 (UTC)
- Alright, I'm going to get at least the 1-101 part overhauled right now so it's easier to read. :) --Rickmastfan67 21:15, 21 June 2010 (UTC)
- All done now. --Rickmastfan67 05:44, 22 June 2010 (UTC)
- I'm working on removing the State Wiki links. I consider it a failed experiment. :( --Rickmastfan67 23:59, 13 December 2010 (UTC) | http://wiki.openstreetmap.org/wiki/Talk:United_States_Numbered_Highway_Relations | CC-MAIN-2017-09 | refinedweb | 1,582 | 68.3 |
CODEX
Easy real-time messaging system with Python and Nodejs (Redis Pub/Sub Tutorial)
You have an application in Python and another in Nodejs and you want them to exchange data in real-time. The obvious way would be to set up Websockets or a RestfulAPI and have them communicate through endpoints. This can be a very tricky and tedious task. So, what if I told you, you can do that with ease without setting up Websockets or a RestfulAPI?
For the code on github, please scroll to the end.
Introducing the Publish/Subscribe model
This is a pattern that allows many different applications to communicate with each other with ease despite of what programming language you are using. It is highly scalable and very easy to set up.
What is a Publish/Subscribe model?
The general idea is, there is a “channel” to which an application will send data, in other words publish data and another application will be subscribed to it, to receive data. Multiple applications can publish data to one channel and multiple applications can subscribe to one channel to receive data.
Getting Started
In this tutorial, I will be showing you how we can use Redis to utilize this pub/sub model. For this tutorial, I am assuming you already have a python and nodejs environment set up. All you now need is to
- Download redis and install it —
Alright then. Let’s get into it!
Run redis-cli.exe
First thing’s first. After we install redis, go to the installed folder and run redis-cli.exe. It should look something like this.
As you can see, it starts the redis server at port 6379.
Python Publisher
We are gonna set up our python publisher to make sure that we can send messages to a channel using python. First, we need to make sure we have redis for python installed.
pip install redis
Run the above command to install it. Now create a file called publisher.py and code in the following.
import redispublisher = redis.Redis(host = 'localhost', port = 6379)
channel = "test"
while(message!="exit"):
message = input("")
send_message = "Python : " + message
publisher.publish(channel, send_message)
That’s all! Our python publisher is set up successfully. What we are doing here is keeping the publisher running until we type in “exit”, so we can keep on sending messages to the channel. The channel the message will be sent to is called “test”. The channel can be anything you want.
Python Subscriber
Now we are going to set up our python subscriber. Create a file subscriber.py and code in the following.
import redissubscriber = redis.Redis(host = 'localhost', port = 6379)
channel = 'test'
p = subscriber.pubsub()
p.subscribe(channel)
while True:
message = p.get_message()
if message and not message['data'] == 1:
message = message['data'].decode('utf-8')
print(message)
We are subscribing to the channel ‘test’ , so that as soon as any message is sent to ‘test’, it will be received by our program here.
Now if you run the following codes on two separate terminals at the same time, we get something like this.
As soon as I input “hello” and “how are you?” in publisher.py, it gets picked up by subscriber.py in real time.
Now let’s get deeper and connect another program to this channel. This is where things get even more interesting!
Nodejs Publisher
We first need to set up a project in nodejs and install redis. Running these two commands does the trick!
npm init
npm install redis prompt-sync
We also installed prompt-sync as it is required by nodejs to ensure we can take user-input from the command-line.
After that’s set up, create a file called publisher.js and code the following
const redis = require("redis");
const publisher = redis.createClient();
const prompt = require("prompt-sync")({ sigint: true });
var channel = "test"
var message = prompt();
message = "Nodejs : " + message;
publisher.publish(channel, message, () => {
publisher.quit();
});
As you can see the channel we will publish our data to is called ‘test’.
Nodejs Subscriber
Now we are going to set up our nodejs subscriber. Create a file subscriber.py and code in the following.
var redis = require("redis");
var subscriber = redis.createClient();var channel = "test";
subscriber.on("message", function (channel, message) {
console.log(message);
});
subscriber.subscribe(channel);
That’s all! We now have our nodejs subscriber ready to go.
Final Output
After we run our nodejs programs on separate terminals, our whole project looks like this.
That’s all for this tutorial. Do follow me and “clap” this writing if you guys learnt something cool and fun.
Here’s the whole code —
Please star it if you find it helpful.
Connect with me
Github — | https://medium.com/codex/easy-real-time-messaging-system-with-python-and-nodejs-redis-pub-sub-tutorial-6d43f5f4c75a?source=post_internal_links---------1---------------------------- | CC-MAIN-2021-17 | refinedweb | 782 | 68.87 |
GameFromScratch.com
Today saw the release of version 3.0 of the Xenko game engine. The Xenko game engine was made by Silicon Studios in Japan, previously known as the Paradox 3D engine. It was obviously having some issues as a product, with a few announced changes to the licensing structure and then in March rumours that it would be open sourced. Today that exact thing happened, Xenko 3.0 was released under the MIT license and is now available on GitHub.
As part of this release, Silicon Studios are no longer going to be supporting Xenko development. Fortunately though, this is not the end for Xenko, as one of the engine developers is currently going to be supporting the engine full time, at least in the short term. He has started a Patreon account in an attempt to raise the funding required to continue supporting the game engine going forward.
Details from the announcement:.
While the majority of the 3.0 release was targeting at moving to open source, there were a few additional features including video playback support and hair rendering. Additionally the SiliconStudio namespaces were removed, so if you are an existing Xenko developer, you will have to do some refactoring.
If you are interested in learning more about the Xenko game engine, be sure to check out our Closer Look review, as well as our much older Tutorial Series. You can see hands-on with the engine in this video and see what it is capable of in the 2017 demo reel.
GameDev News
Paradox C# Engine
In beta for a couple of years now, Silicon Studios have just released Xenko Game Engine. If you are interested in learning more, we did a complete tutorial series back when it was known as Paradox 3D. As part of the release pricing information has finally been announced.
Until July 31st, the Pro version will be available for free. Xenko is a cross platform 2D/3D game engine with an editor and full Visual Studio integration. The primary language is C#. The personal release requires a splashscreen and has a $200K USD revenue limit.
We did a hands on video detailing the new release below and embedded below.
GameDev News
Paradox now implements IIdentifiable and has an Id property. Audio Add SetRange support.
Xenko, previously known as Paradox Engine, just released version 1.8 of their C# powered cross platform 2D/3D game engine. If you want to learn more about the Xenko game engine, I previously featured it in the Closer Look series, as well as this tutorial series.
Perhaps the biggest new feature of this release is a new UI Editor.
This new editor enables WYSIWYG UI editing directly in the editor and comes with several controls and gives you the ability to edit properties such as color, layout, etc.
The UI editor isn’t the only new feature of this release. Other features include:
There are also several fixes and changes from the release notes:
DebugConsoleSystem
FindChild
FindRoot
Entity
AnimationBlend
SoundInstance
Position
PlayAndForget
AudioEmitterComponent
Input.SetGamePadVibration
IsLooped
IsLooping
GetSoundController
AttachSound
AttachSounds
DetachSound
DetachSounds
Collision.Contacts
HashSet
foreach
Xenko is available as a free download here. | http://www.gamefromscratch.com/?tag=/Paradox | CC-MAIN-2018-43 | refinedweb | 529 | 56.05 |
02 April 2009 01:23 [Source: ICIS news]
TORONTO (ICIS news)--German fertilizer major K+S is poised to buy the Morton Salt business from Dow Chemical in the US, according to news reports.
Financial Times Deutschland, citing unnamed banking sources, reported in its online editions that K+S was about to pay $1.5bn (€1.1bn) for Morton. The Wall Street Journal reported Dow was close to an agreement to pay $1.68bn.
Dow, which earlier on Wednesday closed its acquisition of Rohm and Haas, was trying to finance that deal by selling off some of the combined company’s business units, the paper said.
Dow CEO Andrew Liveris said last month it had been approached by several parties interested in buying Morton.
K+S was due to present at an investor conference in ?xml:namespace>
K+S offices in
Dow Chemical did not immediately return a call requesting comment.
K+S was earlier this year seen as interested in acquiring US salt firm Compass Mineral but reportedly shelved those plans after Compass’ share price shot up in the wake of takeover rumours.
In 2006, K+S strengthened its global salt business with the acquisition of Chilean producer SPL.
( | http://www.icis.com/Articles/2009/04/02/9205252/germanys-k-s-poised-to-buy-morton-salt-from-dow-report.html | CC-MAIN-2014-15 | refinedweb | 200 | 60.95 |
I have two UITableViews. They contain basically the same rows, but each one has different information. I detect left and right swipes and then animate the views so that one appears to "slide off" the screen while the other "slides on". They can go in either direction.
There could be up to 20 rows in them. If they scroll in one, I'd like the other one to scroll with it so that when they swipe left or right, they stay aligned. Hopefully that makes sense.
Is there an event that will detect when the UITableView has been scrolled so that I could then "force" the other UITableView to scroll to the same exact spot?
@JeffRush UITableView does have a "Scrolled" event handler and then you can use the "ScrollToRow" method to force your second UITableView to scroll to the same row. Hope this helps.
Thanks Danny. I actually tried that before my post and it worked as long as the user had scrolled exactly so that the top of a row aligned with the top of the table. In other words, if at the top of the first UITableView you could only see half of the top row, the second UITableView would not quite be exactly aligned.
Have you tried to synchronize on the
ContentOffsetproperty?
UITableViewis a subclass of
UIScrollView, so if the tables have the same dimensions, my hunch is that would be worth exploring.
I ended up using ScrollToRow on the first table as suggested so that it will always show the entire top row before I keep the other table in sync.
using Foundation;
using System;
using UIKit;
using CoreGraphics;
using System.Collections.Generic;
namespace ScrollTesting
{
public partial class ViewController : UIViewController,IUITableViewDelegate, IUITableViewDataSource
{
}
This works perfect!! Enjoy!!
//ContentSize = new CGSize(View.Bounds.Width, View.Bounds.Height+h),
Don't forget to uncomment this line.
| https://forums.xamarin.com/discussion/comment/381658/ | CC-MAIN-2021-10 | refinedweb | 308 | 72.56 |
Provided by: manpages_5.05-1_all
NAME
socket - Linux socket interface
SYNOPSIS
#include <sys/socket.h> sockfd = such as AF_INET, AF_IPX, and AF_PACKET, and socket types such as). ┌────────────────────────────────────────────────────────────────────┐ │ I/O events │ ├───────────┬───────────┬────────────────────────────────────────────┤ │Event │ Poll flag │ Occurrence │ ├───────────┼───────────┼────────────────────────────────────────────┤ │Read │ POLLIN │ New data arrived. │ ├───────────┼───────────┼────────────────────────────────────────────┤ │Read │ POLLIN │ A connection setup has been completed (for │ │ │ │ connection-oriented sockets) │ ├───────────┼───────────┼────────────────────────────────────────────┤ │Read │ POLLHUP │ A disconnection request has been initiated │ │ │ │ by the other end. │ ├───────────┼───────────┼────────────────────────────────────────────┤ │Read │ POLLHUP │ A connection is broken (only for │ │ │ │ connection-oriented protocols). When the │ │ │ │ socket is written SIGPIPE is also sent. │ ├───────────┼───────────┼────────────────────────────────────────────┤ │Write │ POLLOUT │ Socket has enough send buffer space for │ │ │ │ writing new data. │ ├───────────┼───────────┼────────────────────────────────────────────┤ │Read/Write │ POLLIN | │ An outgoing connect(2) finished. │ │ │ POLLOUT │ │ ├───────────┼───────────┼────────────────────────────────────────────┤ │Read/Write │ POLLERR │ An asynchronous error occurred. │ ├───────────┼───────────┼────────────────────────────────────────────┤ │Read/Write │ POLLHUP │ The other end has shut down one direction. │ ├───────────┼───────────┼────────────────────────────────────────────┤ │Exception │ POLLPRI │ Urgent data arrived. SIGURG is sent then. │ └───────────┴───────────┴────────────────────────────────────────────┘ program. Both classic and extended BPF are explained in the kernel source file Documentation/networking/filter.txt SO_ATTACH_REUSEPORT_CBPF, SO_ATTACH_REUSEPORT_EBPF For use with the SO_REUSEPORT option, these options allow the user to set a classic BPF .. 
Before Linux 2.6.28 select(2), poll(2), and epoll(7) did not respect the SO_RCVLOWAT setting on Linux, and indicated a socket as readable when even a single byte of data was available. A subsequent read from the socket would then . that is to receive SIGIO or SIGURG signals when I/O becomes possible or urgent data is available. The argument is a pointer to a pid_t. For further details, see the description of F_SETOWN in fcntl(2). /proc interfaces were introduced values in the corresponding /proc files are twice what can be observed on the wire. Linux will allow port reuse only.
SEE ALSO
wireshark(1), bpf(2), connect(2), getsockopt(2), setsockopt(2), socket(2), pcap(3), address_families(7), capabilities(7), ddp(7), ip(7), packet(7), tcp(7), udp(7), unix(7), tcpdump(8)
COLOPHON
This page is part of release 5.05 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | http://manpages.ubuntu.com/manpages/focal/man7/socket.7.html | CC-MAIN-2020-40 | refinedweb | 340 | 59.7 |
Fast Subnet Matching
Determining if a subnet contains a given IP is a fundamental operation in networking. Router dataplanes spend all of their time looking up prefix matches to make forwarding decisions, but even higher layers of application code need to perform this operation - for example, looking up a client IP address in a geographical database or checking a client IP against an abuse blocklist.
Routers have extremely optimized implementations, but since these other uses may be one-off codepaths in a higher-level language (eg. some random Go microservice), they’re not written with the same level of care and optimization. Sometimes they’re written with no care or optimization at all and quickly become bottlenecks.
Here’s a list of basic techniques and tradeoffs to reference next time you need to implement this form of lookup; I hope it’s useful in determining a good implementation for the level of optimization you need.
Multiple Subnets
If you have multiple subnets and want to determine which of them match a given IP (eg. longest prefix match), you should be reaching for something in the trie family. I won’t cover the fundamentals here, but do recommend The Art of Computer Programming, Vol. 3 for an overview.
Be extremely skeptical of any off-the-shelf radix libraries:
- Many do not do prefix compression
- Many support N edges instead of two, which may lead to unnecessary memory overhead
- Many will operate on some form of string type to be as generic as possible, again contributing to memory overhead
- All will be difficult to adapt to different stride lengths
I would highly recommend writing your own implementation if performance is a concern at all. Most common implementations are either too generic or are optimized for exact instead of prefix match.
unibit to multibit to compressed
A radix 2 trie that does bit-by-bit comparison with compression for empty nodes is a good starting point. To further speed it up, you’ll want to compare more than one bit at a time - this is typically referred to as a multibit stride.
Multibit strides will get you significantly faster lookup time at the cost of some memory - in order to align all comparisons on the stride size, you’ll need to expand some prefixes.
As an example, let’s say you’re building a trie that contains three prefixes:
- Prefix 1: 01*
- Prefix 2: 110*
- Prefix 3: 10*
A unibit trie would look like this:
If instead we want to use a multibit trie with a stride of two bits, then prefix 2 needs to be expanded into its two sub-prefixes, 1101* and 1100*. Our multibit trie would look like this:
Note how this trie has increased our memory usage by duplicating prefix 2, but has reduced our memory accesses and improved locality (there are far fewer pointers chased in this diagram), thus trading memory usage for lookup performance.
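The expansion rule above can be sketched in a few lines. This is my own illustrative C++ (the names and the left-aligned 32-bit representation are assumptions, not from the article): a prefix is widened to the next multiple of the stride by enumerating every completion of its missing low bits.

```cpp
#include <cstdint>
#include <vector>

struct Prefix {
    uint32_t bits; // prefix bits, left-aligned in the top of the word
    int      len;  // prefix length in bits
};

// Expand `p` so its length is a multiple of `stride` by producing
// all 2^(target - len) sub-prefixes that complete the missing bits.
std::vector<Prefix> expand(Prefix p, int stride) {
    int target = ((p.len + stride - 1) / stride) * stride; // round len up
    int extra = target - p.len;
    std::vector<Prefix> out;
    for (uint32_t fill = 0; fill < (1u << extra); ++fill) {
        Prefix q;
        q.len = target;
        q.bits = p.bits | (fill << (32 - target)); // splice fill below prefix
        out.push_back(q);
    }
    return out;
}
```

With the article's prefix 2 (110*) and a stride of two, this yields exactly the two sub-prefixes 1100* and 1101*.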
Most of the time a multibit trie is where you can stop. If you need to optimize further, especially if you need to start reducing memory usage, then you’ll want to explore the literature on compressed tries. The general idea with many of these is to use a longer or adaptive stride, but find clever ways to remove some of the redundancy it introduces. Starting points include LC-tries, Luleå tries, and tree bitmaps.
Modified traversals
There are some common, related problems that can be solved by small modifications to the traversal algorithm:
- If instead of finding the longest prefix match you need to find all containing subnets, simply keep track of the list of all matching nodes instead of the single most recent node as you traverse and return the full set at the end.
- If you need to match a containing subnet on some criteria other than most specific match, for example declaration order from a config file, express this as a numerical priority and persist it alongside the node. As you traverse, keep track of the most recently visited node and only replace it if the currently visited is a higher priority.
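As a sketch of the first variation (collecting every containing subnet rather than stopping at the most specific match), here is a minimal unibit trie in C++; it is my own illustrative code, not taken from any particular library:

```cpp
#include <cstdint>
#include <vector>

struct Node {
    Node* child[2] = {nullptr, nullptr};
    bool  terminal = false; // a stored prefix ends at this node
    int   len = 0;          // its length, valid when terminal is true
};

// Insert a prefix (left-aligned bits, length in bits) into the trie.
// Nodes are leaked here for brevity; a real implementation would own them.
void insert(Node* root, uint32_t bits, int len) {
    Node* n = root;
    for (int i = 0; i < len; ++i) {
        int b = (bits >> (31 - i)) & 1;
        if (!n->child[b]) n->child[b] = new Node();
        n = n->child[b];
    }
    n->terminal = true;
    n->len = len;
}

// Return the lengths of every stored prefix containing `addr`,
// instead of only the single longest match.
std::vector<int> all_matches(Node* root, uint32_t addr) {
    std::vector<int> out;
    Node* n = root;
    for (int i = 0; n; ++i) {
        if (n->terminal) out.push_back(n->len);
        if (i == 32) break;
        n = n->child[(addr >> (31 - i)) & 1];
    }
    return out;
}
```

The priority variant from the second bullet is the same walk, except you keep a single best node and compare priorities instead of appending to a vector.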
Sidenote on PATRICIA tries
PATRICIA tries are a radix 2 trie that saves a count of bits skipped instead of the full substring when doing compression. You don’t want this! They’re great for exact match lookup, like what you’d want in a trie of filenames, but saving only the skip count causes prefix matches to backtrack, resulting in significantly worse performance. It’s unfortunate that they’re so often associated with networking; in some cases the name is misused and people say PATRICIA when they simply mean radix 2.
Single Subnet
If you have a large number of IPs and want to check if a single subnet contains them, spend a little time looking at your assembler output to choose a good implementation. If available, you’re best off using 128-bit literals to support IPv6. C, C++, Rust, and many systems languages will support this. Unfortunately Go and Java do not, so you’ll have to piece it together with two 64-bit integers - slightly cumbersome, and slightly more overhead as we’ll see.
In IPv4, subnet contains checking is easy since everything fits in a word, roughly:
// checking if 1.2.3.0/8 contains 1.2.3.4
uint32_t prefix = 0x01020300; // prefix address, packed big endian
uint32_t client = 0x01020304; // client address, packed big endian
uint8_t mask = 8;             // netmask, range 0-32

uint32_t bitmask = 0xFFFFFFFF << (32 - mask); // invert the mask to get a count of number of zeros
if ( (prefix & bitmask) == (client & bitmask) ) {
    // subnet contains client
}
IPv6 is when things get interesting. 128-bit long IPv6 addresses mean juggling two machine words. In computing the bitmask we need a mask for both the upper and the lower portion of the address.
uint64_t upper_prefix, lower_prefix, upper_client, lower_client = ; // assume these are initialized
uint8_t mask = ; // netmask, range 0-128
uint8_t shift = 128 - mask;

uint64_t upper_bitmask = UINT64_MAX;
uint64_t lower_bitmask = UINT64_MAX;
if (shift < 64) {
    lower_bitmask <<= shift;
} else {
    upper_bitmask = lower_bitmask << (shift - 64);
    lower_bitmask = 0;
}

if ((upper_prefix & upper_bitmask) == (upper_client & upper_bitmask) &&
    (lower_prefix & lower_bitmask) == (lower_client & lower_bitmask)) {
    // subnet contains client
}
Rewriting with gcc/clang’s int128 emulated type:
__uint128_t prefix, client = ; // assume these are initialized
uint8_t mask = ; // netmask, range 0-128

__uint128_t bitmask = std::numeric_limits<__uint128_t>::max() << (128 - mask);
if ( (prefix & bitmask) == (client & bitmask) ) {
    // subnet contains client
}
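Before looking at the generated code, it helps to sanity-check that the two-word formulation agrees with a naive bit-by-bit comparison. This is my own hedged sketch (function names are mine; `contains_pair` mirrors the shift-based masking described above, with the shift amount being 128 minus the netmask):

```cpp
#include <cstdint>

// Naive reference: compare the first `mask` bits of the two addresses
// one bit at a time, walking from the most significant bit down.
bool contains_ref(uint64_t pu, uint64_t pl, uint64_t cu, uint64_t cl, int mask) {
    for (int i = 0; i < mask; ++i) {
        int pb = i < 64 ? (pu >> (63 - i)) & 1 : (pl >> (127 - i)) & 1;
        int cb = i < 64 ? (cu >> (63 - i)) & 1 : (cl >> (127 - i)) & 1;
        if (pb != cb) return false;
    }
    return true;
}

// Two-word version for mask in 1..127, using the shift-by-(128 - mask)
// trick to build the upper/lower bitmasks.
bool contains_pair(uint64_t pu, uint64_t pl, uint64_t cu, uint64_t cl, int mask) {
    int shift = 128 - mask;
    uint64_t ubm = UINT64_MAX, lbm = UINT64_MAX;
    if (shift < 64) {
        lbm <<= shift;
    } else {
        ubm = lbm << (shift - 64);
        lbm = 0;
    }
    return (pu & ubm) == (cu & ubm) && (pl & lbm) == (cl & lbm);
}
```

Exhaustively comparing the two over every mask for a handful of address pairs is a cheap way to catch off-by-one errors in the masking logic.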
The emulated int128s are much easier to read and work with, but how does performance compare?
Here is the source code and Godbolt link for a small test, isolating just the shift portion:
#include <cstdint>

__int128 shift128(uint8_t shift) {
    __int128 t = -1;
    t <<= shift;
    return t;
}

struct Pair {
    uint64_t first, second;
};

Pair shift64(uint8_t shift) {
    uint64_t upper = -1;
    uint64_t lower = -1;
    if (shift < 64) {
        lower <<= shift;
    } else {
        upper = lower << (shift - 64);
        lower = 0;
    }
    return Pair{upper, lower};
}
And here is the compiler’s optimized x86 assembly with comments added:
shift128(unsigned char):
        mov     ecx, edi     ; load mask into ecx
        mov     rax, -1      ; initialize lower word
        xor     esi, esi     ; zero this register for use in cmov
        mov     rdx, -1      ; initialize upper word
        sal     rax, cl      ; shift lower word by mask
        and     ecx, 64      ; and our mask with 64
        cmovne  rdx, rax     ; move lower word into upper
        cmovne  rax, rsi     ; zero lower word
        ret

shift64(unsigned char):
        movzx   ecx, dil     ; load mask into ecx
        cmp     dil, 63
        ja      .L4          ; jump if mask is >= 64
        mov     rdx, -1      ; initialize lower word
        mov     rax, -1      ; initialize upper word
        sal     rdx, cl      ; shift lower word by mask
        ret
.L4:
        sub     ecx, 64      ; find out how much we need to shift the upper word by
        mov     rax, -1      ; initialize upper word
        xor     edx, edx     ; mask was >64, so just zero the lower word
        sal     rax, cl      ; shift upper word
        ret
There are a few interesting things to note:
- sal will automatically mask its shift operand to the appropriate range, so while it's undefined behavior in C to shift by more than the size of the target, this is fine at the asm level
- and with 64 is using knowledge of undefined behavior - our shift is only well-defined within the range of 1-127, so we assume UB is impossible and ignore the range outside.
- cmov is used instead of a jump. On modern hardware this should be strictly better, though it is most noticeable when jumps are unpredictable. Our jumps should be very predictable here.
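That second point is worth internalizing: in C and C++, shifting a 64-bit value by 64 or more is undefined behavior even though the hardware instruction would just mask the count. A tiny helper (my own, not from the article) makes the boundary explicit when you cannot rule out the full range:

```cpp
#include <cstdint>

// Shifting a uint64_t by 64 or more is undefined behavior in C/C++,
// even though x86's sal would mask the count; handle it explicitly.
uint64_t shl64_safe(uint64_t v, unsigned n) {
    return n >= 64 ? 0 : v << n;
}
```

The branch typically compiles to a compare plus cmov or a well-predicted jump, so the cost is negligible next to the correctness win.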
If we wanted, we could rewrite the int64 version in a way that would more closely match the int128 assembly:
Pair shift64_v2(uint8_t shift) {
    uint64_t upper = -1;
    uint64_t lower = -1;
    lower <<= (shift & 0x3F);
    if (shift > 0x3F) {
        upper = lower;
        lower = 0;
    }
    return Pair{upper, lower};
}
shift64_v2(unsigned char):
        mov     ecx, edi
        mov     rdx, -1
        mov     rax, -1
        sal     rdx, cl
        cmp     dil, 63
        jbe     .L4
        mov     rax, rdx
        xor     edx, edx
.L4:
        ret
Note how the assembly does not contain any explicit and with 0x3F; we've merely communicated to the compiler that we want the sal instruction's default mask behavior. Our cmov has also been converted to a jmp.
Previously I’d hoped that I could use the 128-bit SSE registers and mm intrinsics to operate on IPv6 addresses natively. However, operations to use SSE registers as a single 128-bit value (as opposed to 2 64-bit values, 4 32-bit values, etc.) are quite limited. In particular, _mm_slli_si128 shifts by bytes instead of bits so won’t work for our use case (though SIMD instructions would be useful for performing matches against multiple client IPs at once).
On Thu, Nov 12, 2009 at 2:19 PM, Vern Ceder <vceder at canterburyschool.org> wrote:
> Thanks a lot, Kirby. That's the sort of boost we need. I hope some of the
> Portland folks will participate, virtually if they can't make the journey
> across the country.
>
> Cheers,
> Vern

Thanks to you as well Vern, for keeping this ball rolling. I dropped something into the Chipy list as well; I expect a few subscribers will reminisce about last year's action (promotion). Speaking of which, I've found occasion to promote Python as a community to groups who might benefit from lessons learned. Two examples:

(i) the Lightning Talk meme is worth sharing. A group, say a bunch of anarchists, hears how Pythonistas sometimes get kinda fascist about that time limit, which gives them respect for the discipline, but maybe without thinking they have to really think like a computer scientist (which tends to mean "strict and precise" in the popular mindset). Here's a way to do Show & Tell without surrendering the floor to some guy who knows how to avoid being interrupted, plays King of the Hill with that microphone. Of course Karaoke has already helped reinforce this "sharing the limelight" ethic, as have other brands of open mic, but it's good to know the Python people have their own ethic and aesthetic; it proves engineers are likewise human.

(ii) the structure of the Python community contains many worthwhile ideas. For example, the WikiEducator group is trying to juggle the puzzle pieces of having the Wiki itself, a governing structure, a Google group. What about a communal blog, do we need one? -- a recent question. Well, why not check out how the Python community does it with that PSF blog etc. There's something to be said for having those blogospheric links, as the Wiki concept is still not that familiar to those not getting open source through their schooling, let alone hearing anything about specific implementations, e.g. MoinMoin (believe it or not).
Wikipedia is understood more as a read-only encyclopedia than a co-authoring system, i.e. many who consult it for homework assignments don't understand what a Wiki is or what that has to do with how Wikipedia pages come to be. This means they might come to WikiEducator with similar preconceptions, more as consumers than as producers of content (which is fine, so far as it goes, but represents a loss of freedom, promotes passivity where more activity is probably what's needed). We have a long way to go when it comes to passing on free and open source concepts.

I continue to mine the Python infrastructure for good ideas (PEPs, a clear notion of namespaces, a willingness to adapt but not gratuitously, e.g. Django's "no astronauts" meme). It's a two way street of course, e.g. this idea of a "poster session" as a part of Pycon is already very much a part of many successful conferences (including GIS in Action, where I spoke right after Pycon this year and mentioned our wanting to copy this **). I still remember Chalmers, the NanoTech conference going on in tandem with EuroPython. For some of our keynotes, we had to wander through a poster session on buckyballs and nanotubes. It's around then that I launched something called HP4E, echoes of CP4E, trying to promote more geometric awareness (HP = hexapents &&).

Kirby

** (with pictures)
&& (see links)
...
31 comments:
I am trying to use ipython in PyDeV without success so far. What do you mean by "make sure that the IPython paths are properly added in your interpreter configuration inside of PyDev".
Here's my setup:
- Mac OS X 10.7 with python 2.7 and ipython 0.11 installed via MacPorts
- Eclipse Indigo with PyDev version 2.2.1.2011073123
When I start a python console I don't get an ipython shell, although import IPython works:
import sys; print('%s %s' % (sys.executable or sys.platform, sys.version))
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python 2.7.2 (default, Jul 23 2011, 13:25:29)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)]
import IPython
IPython.__file__
Out[3]: '/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/__init__.pyc'
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages is added to the "system libs" in the pydev config.
What else is needed for pydev to find ipython?
Actually, it seems you have an IPython-based shell (it's just that IPython 0.11 no longer prints a welcome as it did on 0.10, but it's being used as your backend -- is there some IPython command you're trying that's not working as it should?)
p.s.: The fact that it's printing "Out[3]:" makes me 100% sure that you're really using the IPython backend.
Mac users who encounter issues with line scrolling when using IPython should reinstall readlines using easy_install and not PIP. See this answer.
Another interesting alternative to the standard python shell is bpython.
I get an error.
System running OSX 10.6.8
Python Enthought EPD 7.1 (python2.7, iPython 0.11)
Error getting info on interpreter.
Common reasons include
- Using an unsupported version
- Specifying an invalid interpreter.
The details:
See error log for details.
Unable to recreate the Interpreter info (Its format changed. Please, re-create your Interpreter information).Contents found:ipython [options] [files]
...there was a bunch more about iPython that was too long to post.
I'd really like to highlight some code and execute it in the iPython console.
I think you're trying to configure ipython.exe. What you should do is configure python.exe (as regular following the steps at: ), and start the shell and it'll automatically pick IPython (PyDev imports IPython and uses it -- the ipython executable is not used)
On Linux it's not working with indigo and helios. I still have the standard pydev console but no warning at all. Do I have to remove or modify something in the "interactive console" configuration? (Actually the initial interpreter command is: import sys; print('%s %s' % (sys.executable or sys.platform, sys.version))
)
Message when I open a console:
>>> import sys; print('%s %s' % (sys.executable or sys.platform, sys.version))
/usr/bin/python2.7 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53)
[GCC 4.5.2]
So it's not ipython.
pydev version:
2.2.1.2011073123
Actually, if you have no warning saying that it's NOT IPython, then it is IPython (in 0.11 they're not printing the standard welcome message).
My normal shell can detect ipython. But django shell cannot
How are you starting the shell? (for me, when starting the shell for Django it always gets it with IPython configured).
I think that you are trapping the stack traces somehow when ipython dies on an exception.
Usually after:
ValueError: value does not have the same key
you can type %debug and get into the the stack but in pydev:
>>> %debug
ERROR: No traceback has been produced, nothing to debug.
Please report that as a bug in the pydev sf tracker. See:
Hi,
Some more feedback. I have been trying to start ipython as well but with no luck so far.
Actually I cannot get the python interpreter to be configured properly, as when I add the absolute path to the ipython executable, this one seems to be replaced with some message, and what appears in the Location field of the Python Interpreter is a long string like
/User/homedir/.ipythonInitializing from configuration: /some/path/Ipython/UserConfigSuccessful Upgrade!All files in your directory /Users/homedir/.ipythonwhich would have been overwritten by the upgrade bla bla bla...
My config:
Mac Os X 10.7, python 2.6 and ipython 0.10.1 via macports
eclipse indigo
PyDev for Eclipse 2.2.1.2011081113
The normal python and Jython console start without a problem.
Guillaume.
Please create a bug report with the full error..
Matt
Hi,
I'm also having no luck with PyDev and IPython. I have installed Eclipse and PyDev and everything works.
I'm using Python(x,y) 2.7.2.0 which is a Python distribution for scientists.
Python(x,y) installed IPython in C:\Python27\Scripts.
Under PyDev preferences, I went to the Python Interpreter preferences and add C:\Python27\Scripts to the PYTHONPATH and clicked Apply. PyDev did some processing.
But when I go to create a new PyDev console I get following message:
PyDev console: using default backend (IPython not available).
Any ideas what could be going wrong?
I'm using Eclipse Helios Service Release 2 and PyDev 2.2.3.
Thanks.
Michael
I also am having problems getting IPython to work in PyDev:
>>> import sys; print('%s %s' % (sys.executable or sys.platform, sys.version))
/usr/bin/python2.6 2.6.6 (r266:84292, Dec 26 2010, 22:31:48)
[GCC 4.4.5]
PyDev console: using default backend (IPython not available).
>>> import IPython
>>> IPython.__file__
'/usr/lib/pymodules/python2.6/IPython/__init__.pyc'
If anyone can help with more specifics on what configuration is necessary for PyDev to use IPython I'd be very grateful...!
@Unknown:
Please create a bug report specifying the IPython version you're using.
Also, please add the result of executing:
from IPython.frontend.terminal.interactiveshell import TerminalInteractiveShell
and
from IPython.frontend.prefilterfrontend import PrefilterFrontEnd
to the bug report
@Michael P
Please follow the same instructions I passed for @Unknown...
hi fabio
I have the same problem as michael p.
with pydev 2.3 I add my ipython path c:/python26/scripts (the default) but it is not picked up.
I have ipython 0.11
the output to the two commands you requestsed are
from IPython.frontend.terminal.interactiveshell import TerminalInteractiveShell
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named terminal.interactiveshell
from IPython.frontend.prefilterfrontend import PrefilterFrontEnd
Traceback (most recent call last):
File "", line 1, in
File "C:\Python26\lib\site-packages\ipython-0.10.1-py2.6.egg\IPython\frontend\prefilterfrontend.py", line 32, in
from IPython.kernel.core.redirector_output_trap import RedirectorOutputTrap
File "C:\Python26\lib\site-packages\ipython-0.10.1-py2.6.egg\IPython\kernel\__init__.py", line 25, in
from IPython.kernel.error import TaskRejectError
File "C:\Python26\lib\site-packages\ipython-0.10.1-py2.6.egg\IPython\kernel\error.py", line 20, in
from twisted.python import failure
ImportError: No module named twisted.python
I would file a bug but I'm not sure where.
thanks
Henry
You said you had ipython 0.11, yet, your traceback maps to ipython 0.10 (maybe that's part of your problem?)
If you're not able to find it out, please ask about it in stackoverflow with a pydev tag giving details on your configuration, etc. (this blog isn't the proper place for these questions).
If you think it's a bug (and not a misconfiguration on your side), please report it at the pydev bugs tracker (in sourceforge).
I am having trouble adding the ipython interpreter in the pydev interpreter preference window. I get the following error:
Error getting info on interpreter.
Common reasons include:
-using an unsupported version
-specifying an invalid interpreter.
The path to the interpreter I am trying to specify is /Library/Frameworks/Python.framework/Versions/Current/bin/ipython.
Is this not the proper interpreter file? What is the proper interpreter file?
I am running on Mac OS 10.7.2, and pydev 2.3
@anonymous:
Please ask at stackoverflow with a 'pydev' tag (this blog is not the proper place for support questions).
Hi,
great post, thanks for the explanation.
Everything works fine, except I am not able to use tab-completion in my IPython through Pydev in Eclipse (Debian 6 64bit VM guest on a 64bit Windows 7 Pro host).
Any Ideas why this might be?
@Anonymous: can you ask related on tab-complete on StackOverflow (with a PyDev tag)?
Hi,
Apparently Ipython 0.12 associated. I solved this problem by commenting out all lines containing @testdec.skip_doctest in file C:\Python27\Lib\site-packages\IPython\Magic.py
Paul
I'm on Aptana Studio 3, with PyDev 2.7.0. I've got the pythonxy distribution, and when I do a pip freeze | grep ipython, I am told that my ipython version is 0.13.1.
However, when I pull up the PyDev console, I get this:
C:\Python27\python.exe 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)]
PyDev console: using IPython 0.11
IPython versions do not match! Is this expected or something wrong with my setup?
also, is there any chance of integrating the ipython QTconsole with PyDev? Is it even possible? I like the fact that plot are not blocking.
Hi T,
Probably your PYTHONPATH is not correct (you can double-check it printint sys.path).
It's possible to make the console non-blocking in PyDev (I made some experiments in the past for that), but it's not high in my priority list right now...
Cheers,
Fabio :-)
What doesn't work and looks like a bug is the prompt is still >>> instead of the "In[x]" prompt, and Out is always Out[1], i.e. the run number never increments..
Note1: of course, you need IPython installed on your system for this to work
Note2: the interpreter will still complain with something like "In [1]: PyDev console: using default backend (IPython not available)" but it is because it prints it automatically before you have the chance to import IPython.
Hope this helps!! Bye!!.
Thanks!
Hi Andrew,
There's no such option in the UI, but you can manually edit plugins\org.python.pydev\pysrc\pydevconsole.py and change the line:
from pydev_ipython_console import InterpreterInterface
to a "pass" to achieve that.
Cheers,
Fabio
import "github.com/pachyderm/pachyderm/src/server/pfs/s3"
auth.go bucket.go controller.go error.go multipart.go object.go s3.go service.go util.go
Server runs an HTTP server with an S3-like API for PFS. This allows you to use s3 clients to access PFS contents.
This returns an `http.Server` instance. It is the responsibility of the caller to start the returned server. It's possible for the caller to gracefully shutdown the server if desired; see the `http` package for details.
Note: server errors are redirected to logrus' standard log writer. The log writer is never closed. This should not be a problem with logrus' default configuration, which just writes to stdio. But if the standard logger is overwritten (e.g. to write to a socket), it's possible for this to cause problems.
Note: In `s3cmd`, you must set the access key and secret key, even though this API will ignore them - otherwise, you'll get an opaque config error:
Package s3 imports 25 packages (graph) and is imported by 2 packages. Updated 2019-09-14.
This patch implements a couple of functions that were described in my RFC from last week that’s available at. This patch adds the following pieces: apply function, requiredSelection function, and the selection::SourceSelectionRange constraint.
This code will be used as a base to start moving the functionality and tests for clang-rename over to clang-refactor.
+ Introduce refactoring diagnostics.
Ping.
This is great work and definitely a lot to digest! ;) Some high-level comments for the first round.
In general, I really appreciate the high-level interfaces; I am just a bit concerned about the implementation which is a bit hard to follow at this point, especially with all the template magics. I left some suggestions in the comments; hopefully they would help.
I would also appreciate more detailed high-level comments on the major classes and their relationships - I found myself constantly going back to the RFC to understand their intentions.
Could you please split changes related to DiagnosticOr into a separate patch and have owners of clang/Basic/ directory review it? ;)
Code:
Why is this called apply? I feel something like createAction or generateAction would be more intuitive.
What are all the possible states, for example?
Maybe I am missing too much context, but I found it hard to understand the comment and couldn't relate the comment to the public interfaces. Could you elaborate more?
It is also unclear to me if this is specific to the source selection.
It's unclear to me what the intentions of these members are. Could you add some comments for these members?
Do we expect the result changes to be modified? Why?
nit: redundant be used.
It might worth explaining the relationship between this and the RequirementBase.
explicit ?
but probably it would be a bit cleaner to enable SFINAE via a template parameter ( , option #4) rather than via extra argument.
Extracted DiagnosticOr to a separate patch at.
I will update this patch tomorrow.
Bad comment. I think inputs is more descriptive.
Nah, it's more convenient to be able to return a single AtomicChanges without an explicit initializer I think.
No, that was a workaround for a non-const member use. I'll use a const cast instead.
This class is not related to RequirementBase though. Were you talking about another class? :)
Could you help me understand this class?
This seems to be a selection-specific requirement and should live in selection. It is also used in BaseSpecializedRule which seems to be a higher level of abstraction.
I'm a bit unsure about the abstraction of the refactoring result. I would expected refactoring results to be source changes always. Do you have any refactoring tool that outputs occurrences in mind?
+1 to explicit which could prevent unintentional conversion. RefactoringResult(Change); isn't too bad IMO.
Does detail mean internal implementation? Maybe use internal which is more commonly used for this?
In Xcode we require rename to return symbol occurrences because the IDE is responsible for figuring out:.
In D36075#851278, @ioeric wrote:.
It's just a container class that stores all information about the requiredSelection requirement. I agree about BaseSpecializedRule, that connection should be chopped. I will move the evaluation code into the requirement itself when I update the patch.
I would tend to disagree about moving it though, as SourceSelectionRequirement is a requirement first and I think that's why it should live with other requirements. Yes, it's related to selections, but it uses them to implement the requirement. I think it's better to keep requirements together, as opposed to having option requirements close to options, selection requirements close to selection, and so on. WDYT?
Thanks!
Makes sense. We might want to put individual requirements into their own headers so that this doesn't grow into a huge file when more requirements are supported.
It might worth having a comment explaining why and how Expected<Optional> is wrapped and unwrapped during the evaluation.
We might want to document supported requirements somewhere else so that we don't need to update this file every time a new requirement is added.
s/RefactoringOperationController.h/RefactoringOperation.h/ :)
I found the name a bit confusing - RefactoringOperation sounds a bit like RefactoringAction.
Would it make sense to call this RefactoringContext or RefactoringRuleContext, if this stores states of a refactoring rule?
I feel occurrences are more of an intermediate state of a refactoring action than a result. I'm wondering if it makes sense to introduce a separate class to represent such intermediate states? I am a bit nervous to fuse multiple classes into one; the interfaces can get pretty ugly when more result kinds are added.
We are tempted to avoid using namespace if possible.
Good point. I agree.
I think it would be better to differentiate between RefactoringActionRules then. Ordinary rules return a set of AtomicChanges instead of RefactoringResult. But then we could also have "interactive" rules that return "partial" results like symbol occurrences.
I think I'll try the following approach:
class RefactoringActionRule {
virtual ~RefactoringActionRule() {}
};
class RefactoringActionSourceChangeRule: public RefactoringActionRule {
public:
virtual Expected<Optional<AtomicChanges>>
createSourceReplacements(RefactoringOperation &Operation) = 0;
};
class RefactoringActionSymbolOccurrencesRule: public RefactoringActionRule {
public:
virtual Expected<Optional<SymbolOccurrences>>
findSymbolOccurrences(RefactoringOperation &Operation) = 0;
};
Why? It's not in a header. using namespace clang is the common practice across all of Clang's sources.
@arphaman.
that's great, i'm interested in this too and would be happy to see clang-reorder-fields moving to clang-refactor (pls, let me know if i can help make this happen)
Do you think it should be in Clang's documentation? I can start on a new document there but I'd prefer to do it in a separate patch. WDYT?
Thanks for the changes! Lgtm with a few nits.
Sure, this is fine for now. It would be nice to have proper documentation in the future when pieces get into places.
Why isn't this a interface in SpecificRefactoringRuleAdapter with return type Expected<Optional<T>>?
It would be nice to also rename the variable from Operation to Context.
Thanks,
I'll start working on the documentation patch and will split follow-up clang-refactor patch into one or two parts today.
A method declaration in SpecificRefactoringRuleAdapter won't work since it won't be available in either the template specialisation or the deriving class as the classes won't be directly related. I could use a separate parent class independent of SpecificRefactoringRuleAdapter that declares a generic interface though.
Sorry for being late, was out on vacation.
Generally, why do we need this tag-based abstraction here instead of using the more typical OO double-dispatch where necessary?
(We do this in the AST a lot, but the AST is special, because there we want to implement a lot of different algorithms that rely on the node type, while I don't see how that applies here)
Generally the clients will have to somehow distinguish between the types of results that are produced by rules to figure out what to do (e.g. AtomicChanges -> apply, SymbolOccurrences -> ask user, Continuation -> look for more ASTs). So far I've thought that the LLVM based dynamic casts will work well for this, e.g.
if (auto *Action = dyn_cast<SourceChangeRefactoringRule>()) {
Expected<Optional<AtomicChanges>> Changes = Action->createSourceReplacements();
applyChanges(Changes);
} else if (...) {
...
} else (...) {
...
}
But you're probably right, there might be a better way to do this rather than the tag based approach. Something like a consumer class that clients can implement that provides consumer functions that take in the specific results. I reckon a single consumer will actually work better in the long-run when we might Continuations that both return changes in the first TU and information for searches in other TUs. I'll see if I can get a patch out that removes this tag and uses the consumer approach.
Cool, looking forward to it :) | https://reviews.llvm.org/D36075 | CC-MAIN-2021-39 | refinedweb | 1,307 | 56.66 |
Writing JSON APIs : Part I – Creating a secure JSON API with Grails and Spring Security in 3 easy steps
We had a requirement in a recent project to expose some of the functionality we had via a JSON API. The functionality needed to be secure, as was the initial web interface which exposed the functionality. We were using Spring Security for the security aspect of our application.
The spring security plugin, together with a secured controller and a custom JSON marshaller(which overrides the default functionality of render as JSON method) gave us a very simple, yet elegant and powerful JSON API which was secure. Here are the three steps that we followed
1. Setting up Spring Security Plugin:
The first step is to set up Spring Security Plugin to expose our JSON based controllers to use Basic Authentication, instead of the standard web-based authentication. This is done by adding the lines given below in Config.groovy
[java]
//Enable Basic Auth Filter
grails.plugins.springsecurity.useBasicAuth = true
grails.plugins.springsecurity.basic.realmName = "JSON API Example"
//Exclude normal controllers from basic auth filter. Just the JSON API is included
grails.plugins.springsecurity.filterChain.chainMap = [
‘/json/**’: ‘JOINED_FILTERS,-exceptionTranslationFilter’,
‘/**’: ‘JOINED_FILTERS,-basicAuthenticationFilter,-basicExceptionTranslationFilter’
]
[/java]
More details about the authentication mechanism can be found here.
2. Adding a Controller with required actions:
Naturally, this is the next step. We added a controller which would expose the functionalities required(another reason why most of our logic should be in our services instead of controllers). A sample controller would look like this.
[java]
package jsonapi
import grails.plugins.springsecurity.Secured
import grails.converters.JSON
import com.intelligrape.example.json.Book
@Secured(["ROLE_USER"])
class JsonController {
def getBooks = {
render Book.list() as JSON
}
}
[/java]
3. Customizing the Marshaller to change the way some properties like enums are rendered:
We had to change the way some of the properties like enums were going to be rendered. We just had to render the property name and the id of the enum. So instead of a JSON map like
[java]
"genre":{"enumType":"com.intelligrape.example.json.Book$Genre","name":"FICTION"}
[/java]
we needed
[java]
"genre":"FICTION"
[/java]
I sought help from David Bower’s post and created my own custom Domain Class Marshaller for JSON with a modification to just use the Enum value if the property happened to be an enum. This is a very powerful feature because it allows us to customize the way in which we want to render the values when creating a JSON or XML document from our classes. We can even customize it to have an excludes list in our domain class where we can specify the properties to be excluded while constructing our JSON or XML
With this, we had a JSON API ready in very little time. I have extracted the functionality into a small example application, which has been shared on Github.
Yet another example of how simple grails has made it easy for developers.
You are right. REST was probably the wrong word to use here. I should probably have used “JSON API” instead.
I had a look at the plugin and it’s awesome. We had to incorporate a few more things like having separate property groups, which I’ll be touching upon in part II.
Updating the title. 🙂
Well… there’s nothing REST in the way you described your API. REST is all about resources and not actions so “getBooks” is no in the RESTful spirit. I know it first-hand that getting your mind set to working with resources instead of actions for someone coming from the WS-* world is hard. But hey, learning and twisting your mind are the two things we programmers like to do best, right?
If you’d like to expose an entity in a fully restful manner you can use the json-rest-api plugin. | https://www.tothenew.com/blog/writing-json-apis-part-i-creating-a-secure-rest-json-api-with-grails-and-spring-security-in-3-easy-steps/?replytocom=42424 | CC-MAIN-2021-39 | refinedweb | 642 | 55.95 |
PyX — Example: drawing2/ellipse.py
Applying transformations on a path or canvas: Drawing an ellipse
from pyx import * c = canvas.canvas() circ = path.circle(0, 0, 1) # variant 1: use trafo as a deformer c.stroke(circ, [style.linewidth.THIck, trafo.scale(sx=2, sy=0.9), trafo.rotate(45), trafo.translate(1, 0)]) # variant 2: transform a subcanvas sc = canvas.canvas() sc.stroke(circ, [style.linewidth.THIck]) c.insert(sc, [trafo.scale(sx=2, sy=0.9), trafo.rotate(45), trafo.translate(5, 0)]) c.writeEPSfile("ellipse") c.writePDFfile("ellipse")
Description
PyX does not directly provide a path corresponding to an ellipse. This example shows two ways how to draw an ellipse using affine transformations.
In order to create an ellipse, we best start from a unit circle centered around the point of origin of the coordinate system (here:
circ). In variant 1, we tell PyX to apply a couple of affine transformations before stroking this circle on the canvas
c. These affine transformations are contained in the
trafo module. We first use
trafo.scale to apply a non-uniform scaling, namely by a factor of 2 in x-direction and a factor of 0.9 in y-direction. Doing so, we define the two principle axes of the ellipse. In a next step, we rotate with
trafo.rotate the ellipse by an angle of 45 degrees in the mathematical positive direction, i.e. counter-clockwise. Last, we shift the origin of the ellipse to the desired point by applying a
trafo.translate operation.
Note that the order of the transformations matters. If you, for instance, would first translate the ellipse, the later scaling would also affect the distance by which you have shifted the ellipse. PyX applies the transformations one after the other, from left to right, so the example shown above does the correct thing.
You can also treat transformations as mathematical objects (they are represented by two-dimensional matrices together with an offset vector) and multiply them using the
* operator. Note, however, that mathematically, transformations are applied from right to left, such that variant 1 would need to be written as
c.stroke(circ, [trafo.translate(1,0) * trafo.rotate(45) * trafo.scale(sx=2, sy=1.5)])
PyX also provides some convenience methods for applying certain transformations with a given point as the origin. These allow one to write variant 1 in yet another form
c.stroke(circ, [trafo.scale(sx=2, sy=1.5, x=1, y=0), trafo.rotate(45, x=1, y=0)])
where we have started already from a circle centered around the desired point 1,0.
When telling the stroke method to apply a number of transformations, we use that a transformation is a so-called deformer. Deformers take an original path, do some operation on it and return the modified version. PyX thus internally converts the circle into a path representing the ellipse. Alternatively, we can also let the PostScript or PDF interpreter do the same transformation. This is what is shown in variant 2. There, we first stroke the circle on a new canvas
sc. When inserting this canvas in the original canvas
c, we again pass a set of transformations as arguments. Since PyX cannot deform an entire canvas, it just writes these transformations into the output file. If you compare the resulting output (the right ellipse) with the one of variant 1, you will notice a difference, though: when transforming a whole canvas, the lineshape is transformed as well. Often, this is not the intended result, so you better transform individual objects when stroking or filling them.
When you look at the EPS or PDF output generated by the example, you will notice that the bounding box is too large. The reason for this artefact lies in the way PyX calculates the bounding box for a transformed canvas: It simply applies the transformation to the bounding box of the canvas and takes the bounding box of this new object. While this certainly yields a bounding box of the canvas, it does not necessarily yield a minimal one. To see this, you just have to consider the two extreme cases of a circle, which is rotationally invariant, and a square, which only posseses a discrete rotational symmetry. Whereas the minimal bounding box of the circle does not change under rotations around its center, the same is not true for the square. When you rotate a circle by applying a deformer (as in variant 1), PyX will thus calculate the correct bounding box. On the other hand, when you insert the circle into a canvas and afterwards transform this canvas (as in variant 2), PyX cannot distinguish between a circle and a square anymore and calculates a too large bounding box. | http://pyx.sourceforge.net/examples/drawing2/ellipse.html | CC-MAIN-2014-10 | refinedweb | 793 | 54.93 |
Package jdk.jshell
JShell is the central class. An instance of
JShell holds the evaluation state, which is both the current
set of source snippets and the execution state they have produced.
Each source snippet is represented by an instance of a subclass of
Snippet. For example, a statement is represented by an
instance of
StatementSnippet, and a method declaration is
represented by an instance of
MethodSnippet.
Snippets are created when
JShell.eval(String)
is invoked with an input which includes one or more snippets of code.
Any change to the compilation status of a snippet is reported with a
SnippetEvent. There are three major kinds of
changes to the status of a snippet: it can created with
eval,
it can be dropped from the active source state with
JShell.drop(jdk.jshell.Snippet), and it can have
its status updated as a result of a status change in another snippet.
For
example: given
js, an instance of
JShell, executing
js.eval("int x = 5;") will add the variable
x to
the source state and will generate an event describing the creation of a
VarSnippet for
x. Then executing
js.eval("int timesx(int val) { return val * x; }") will add
a method to the source state and will generate an event
describing the creation of a
MethodSnippet for
timesx.
Assume that
varx holds the snippet created by the first
call to
eval, executing
js.drop(varx) will
generate two events: one for changing the status of the
variable snippet to
DROPPED and one for
updating the method snippet (which now has an unresolved reference to
x).
Of course, for any general application of the API, the input would not be fixed strings, but would come from the user. Below is a very simplified example of how the API might be used to implement a REPL.
import java.io.ByteArrayInputStream; import java.io.Console; import java.util.List; import jdk.jshell.*; import jdk.jshell.Snippet.Status; class ExampleJShell { public static void main(String[] args) { Console console = System.console(); try (JShell js = JShell.create()) { do { System.out.print("Enter some Java code: "); String input = console.readLine(); if (input == null) { break; } List<SnippetEvent> events = js.eval(input); for (SnippetEvent e : events) { StringBuilder sb = new StringBuilder(); if (e.causeSnippet == null) { // We have a snippet creation event switch (e.status) { case VALID: sb.append("Successful "); break; case RECOVERABLE_DEFINED: sb.append("With unresolved references "); break; case RECOVERABLE_NOT_DEFINED: sb.append("Possibly reparable, failed "); break; case REJECTED: sb.append("Failed "); break; } if (e.previousStatus == Status.NONEXISTENT) { sb.append("addition"); } else { sb.append("modification"); } sb.append(" of "); sb.append(e.snippet.source()); System.out.println(sb); if (e.value != null) { System.out.printf("Value is: %s\n", e.value); } System.out.flush(); } } } while (true); } System.out.println("\nGoodbye"); } }
To register for status change events use
JShell.onSnippetEvent(java.util.function.Consumer).
These events are only generated by
eval and
drop,
the return values of these methods are the list of events generated by that
call. So, as in the example above, events can be used without registering
to receive events.
If you experiment with this example, you will see that failing to terminate
a statement or variable declaration with a semi-colon will simply fail.
An unfinished entry (for example a desired multi-line method) will also just
fail after one line. The utilities in
SourceCodeAnalysis
provide source boundary and completeness analysis to address cases like
those.
SourceCodeAnalysis also provides suggested completions
of input, as might be used in tab-completion.
- Since:
- 9 | https://docs.oracle.com/javase/10/docs/api/jdk/jshell/package-summary.html | CC-MAIN-2019-43 | refinedweb | 584 | 51.34 |
Many few years with the arrival of HTML5, JQuery, NodeJS, WebRTC, Google API, and others, JavaScript has become a mature language for developing a robust business applications via server-side coding, NoSQL Databases, JSON format, REST for communication and many other worthy methods.
In this article, we will prove how possible (and even easy) it is to develop a mobile web application that retrieves the phone contacts on disconnected mode and retrieves contacts from a remote server in connected mode using HTML5, JavaScript, and CSS3. We’ll package it using PhoneGap to build to a native mobile app that will work online and offline.
Background
Before starting using this guide, you should have some basics on HTML5, JavaScript, and the mobile development world. In this article, I’ll also use the Wakanda DataStore as a NoSQL database that will be remotely added by our native app to get data using REST/HTTP and JSON format, so having some basics on Wakanda could be very helpful.
Using The Code
Application Architecture
The PhoneGap build input for this mobile application is freely available for download. It’s a zip file developed using the Wakanda Studio smartphone part. The bundle contains numerous noteworthy files.
First and most importantly, it contains the index.html file, which is the main page of our application. This page has three views: one for the homepage that contains two buttons to choose between connected or disconnected mode, a view containing a grid that will load data from the remote Wakanda Server, and a third view that contains a richText widget, which will load the list of mobile contacts using the PhoneGap contact API.
The second file worth mentioning is the config.xml file. This XML file provides some necessary settings to the PhoneGap build service. which to package the app and prepare it for mobile devices. Below is the config.xml data.
<!--?xml version="1.0" encoding="UTF-8"?--> GET CONTACTS An application that gets contacts from DataStore and also from phone Saad Mousliki
(You could take a look to the official PhoneGap documentation to learn more about how to write this file.)
Two additional noteworthy files are splash.png and icon.png. These two files serve as the splash screen and the icon that will be displayed when installing the packaged app on your mobile device.
Next is the walib folder, which contains the WAF (Wakanda Application Framework) API—a set of JavaScript libraries used to manipulate UI, HTTP/REST communication, mapping, etc.
And, of course, the widely-used scripts and styles folders, which contain the JavaScript and CSS files used by the mobile app.
(Note: To learn more about how to create this application using Wakanda Studio and pre-package it to be accepted by PhoneGap, take a look to this guide and this blog.)
The HTML5 Interface
The HTML5 page is generated using the Wakanda Studio GUI Designer with the dragging and dropping of widgets and careful styling using the properties tabs. Below is an example of a Wakanda HTML5 interface in progress; lets go through the surprisingly-easy process of designing the interface for our mobile app.
Step 1
After installing the Wakanda Studio version 3, open the studio by double clicking its icon. Click on the “Create New Solution” button.
Step 2
Give a name to your solution e.g. “CreateHTML5Page.” Check the “Add a blank project to the solution” checkbox and click the “OK” button.
Step 3
Now, click on the “WebFolder” folder and double-click on the index.html file.
Step 4
Go to the right side of the studio; you will find an arrow to choose between platforms: desktop, tablet, or smartphone. Choose the smartphone page.
Step 5
At this step, the page is empty and should be designed using the widget tab on the left and the properties tab on the right. First, we will add a navigation widget to the page by dragging and dropping the desired widget onto the page.
Step 6
After that, we must add some views (navigation options) to the navigation views widget using the properties tab of this widget.
Step 7
Now, we will add a pair of buttons to the first view.
You could modify the button titles and other properties (width, height, color, etc.) using the properties tab on the right side.
Step 8
To specify an event to the button, we should click on the events button on the right side tab and choose the “onClick” event.
Step 9
For example, we could add code that allows switching between views. We will use the navigation view method “goToNextView”, so when we click on this button we will go to the View number 2. Look below for clarity.
button1.click = function button1_click (event) { $$("navigationView3").goToNextView(); }
(Note: To learn more on how to create a mobile web app using Wakanda Studio, take a look to this video for a step-by-step description of how to create and test mobile apps on an iPod.)
After creating a page, we should pre-package (prepare it to be packaged by PhoneGap build) it using the step-by-step described in this blog.
Retrieving Contacts Using the PhoneGap Contact API
The following JavaScript code will be executed when the “Get contacts from phone” button is clicked.
var options = new ContactFindOptions(), name, phoneNumber; options.filter = ""; options.multiple = true; var fields = ["name", "phoneNumbers"]; function onSuccess(contacts) { var res = ""; for(var index = 0, len = contacts.length; index < len; index++) { name = contacts[index].name; name = name != null ? name.formatted : ""; phoneNumber = contacts[index].phoneNumbers; phoneNumber = phoneNumber != null && phoneNumber.length > 0 ? phoneNumber[0].value : ""; res += name + " : " + phoneNumber + "n"; } $$('textField2').setValue(res); navView.goToView(3); } function onError() { alert('onError!'); } navigator.contacts.find(fields, onSuccess, onError, options);
For more details about this code, take a look to the PhoneGap contact API.
Package the App Using PhoneGap Build
This video will show you how to package the web app, starting by uploading the .zip file to getting the .ipa file. For the complete description, we’ll start by downloading the .zip file offered at the beginning of this article and unzipping it.
Take the “Client_Side” folder within the .zip file and zip it to get the desired input file for the PhoneGap build service.
You should have an account on PhoneGap build to upload and build the application, so if you don’t have an account yet, you could create one.
After creating an account, go to the apps page, upload the zip file (Client_Side.zip), and build it! Look below for clarity.
Setting Up Server-Side Elements
For the server-side application, you should take the “Server_Side” folder and run it on Wakanda 3 Server, using a command line or Wakanda Studio. The command line is:
“C:Wakanda Server.exe” “C:Download the applicationServer_SidegetPhoneContact SolutiongetPhoneContact.waSolution”
After that, you should go to the “Client_Side” folder, and within the “scripts” folder, open the index-smartphone.js and add the IP address of the machine where you have hosted your Wakanda application on the following lines:
WAF.config.baseURL = ""; WAF.core.restConnect.defaultService = 'cors'; WAF.core.restConnect.baseURL = ""; WAF.onAfterInit = function onAfterInit() { // @lock // @region namespaceDeclaration// @startlock var documentEvent = {}; // @document var button2 = {}; // @button var button1 = {}; // @button
Now, you could access the Wakanda DataStore remotely using the WAF API.
(Note: To build the .ipa for your iPhone, you should provide a provisioning key and the password of your Apple store account.)
Conclusion
This application is a simple demonstration of Wakanda Studio, as well as a much greater testament to the capabilities of HTML5, JavaScript, and CSS3 for the creation of powerful mobile apps. Using the process of development used in this example (from HTML5, JavaScript, and CSS to native app), we could deliver a cross-platform native mobile application in very little time, so by using PhoneGap, a developer with little background in Java or Objective-C can start developing native mobile apps for many mobile platforms simultaneously. Developing this application, testing it, and packaging it using PhoneGap build took be less than a day’s work.
- miguel
- miguel
- Saad Mousliki | http://www.sitepoint.com/build-contacts-app-with-html5-css-javascript-wakanda-studio/ | CC-MAIN-2015-35 | refinedweb | 1,344 | 63.19 |
#include <stdio.h>
int fputc(int c, FILE *stream); st_ctime and st_mtime fields of the file shall be marked for update between the successful execution of fputc() and the next successful completion of a call to fflush() or fclose() on the same stream or a call to exit() or abort().
Upon successful completion, fputc() shall return the value it has written. Otherwise, it shall return EOF, the error indicator for the stream shall be set, and errno shall be set to indicate the error.
The fputc() function shall fail if either the stream is unbuffered or the stream's buffer needs to be flushed, and:
The fputc() function may fail if:
The following sections are informative.
None.
None.
None.
None.
ferror() , fopen() , getrlimit() , putc() , puts() , setbuf() , ulimit() , the Base Definitions volume of IEEE Std 1003.1-2001, <stdio.h> | http://www.makelinux.net/man/3posix/F/fputc | CC-MAIN-2014-49 | refinedweb | 139 | 61.4 |
ROS.
The more ambitious your robot design becomes, the more ROS will be able to help you. For example, with ROS you can take a robot beyond manual control with a joystick and tell the robot to make its own way into the kitchen. The difference in complexity from the former to a robot that can create and use maps and avoid obstacles along the way is quite substantial. For example, joystick control of a robot can be set up fairly quickly just using an Arduino. For autonomous movement, ROS has map creation, depth map handling, and robot localization already available so you can use higher level “go to this place” commands.
A high-level overview
ROS provides support for a publish and subscribe message model using a namespace like a filesystem. A program can register one or more ROS nodes and these nodes can publish and subscribe to topics that are interesting to them. For example, you might have a ROS node that reads a USB camera and publishes the images to the “/camera” topic for the rest of your robot to enjoy. A small Arduino might subscribe to messages on “/clawpincer” and adjust the position of your robot claw based on messages that are sent to it. This separation of processing into nodes which send and receive messages on topics allows you to connect together specialized nodes to form an entire robot. The message passing helps to keep your nodes separate. A node might just display information on an LED screen without needing to know anything about the rest of your robot (Figure 1).
Messages sent to topics can use basic types like integers, floating point numbers, times, durations, strings, and multidimensional arrays as well as some robotics specific types for example setting the desired drive speeds(s) and direction(s). You can also define your own custom message types.
A complex robot is likely to run many nodes, and starting things up in the right order can be a complex task in itself. ROS uses launch XML files to describe how and what needs to be started. A launch file can also include other launch files, so you can create a single command that will start your motor controller, cameras, navigation and mapping stack, displays, custom radio control software, etc.
The ROS MoveIt! software lets your robot use one or more arms to manipulate objects. MoveIt! integrates with ROS, detecting objects which might be temporarily blocking the most direct path that an arm might have otherwise taken to move to a given location.
A ROS node can be written in either C++ or Python. A partial example of publishing a message to a topic in ROS is shown below. The NodeHandle can be reused to send multiple messages; in this case, we are sending a single string to a topic that is specified using the template parameter to advertise(). Instead of passing a std::string to publish(), the ROS std::msgs type is passed.
ros::NodeHandle n; ros::Publisher chatter_pub = n.advertise("chatter", 1000); ... std_msgs::String msg; msg.data = "hello world"; chatter_pub.publish(msg);
Part of a Python program that listens on the chatter topic is shown below. As you can see, the basic type is accessed through the “.data” element much as in the C++ publisher shown above.
def callback(data): rospy.loginfo(rospy.get_caller_id() + "I heard %s", data.data) def listener(): rospy.init_node('listener', anonymous=True) rospy.Subscriber("chatter", String, callback)
It is very useful for your robot to present a web interface offering both information and remote control. By starting the rosbridge_websocket package, you can send and receive ROS messages from JavaScript in the browser.
The following fragments set up a “ros” object for communication and, when a bootstrap form is completed, will send a message to the “/screen/textbig” topic so that the robot shows a given string to you. Although this example is simply showing text on the robot, you can also use sliders to alter the position of your robot arm or set waypoints in the web interface to have the robot move around.
var ros = new ROSLIB.Ros({ url : 'ws://192.168.2.3:9090' }); var topic_screen_text_big = new ROSLIB.Topic({ ros : ros, name : '/screen/textbig', messageType : 'std_msgs/String' }); var screen_showBigText = function() { var txt = $('#screen-textbig').val(); topic_screen_text_big.publish( new ROSLIB.Message({ data: txt }) ); } // ... <form class="form-inline" onsubmit="screen_showBigText()" action="#"> <div class="row"> <div class="col-md-2"><label>BIG Text</label></div> <div class="col-md-4"><input type="text" class="form-control" placeholder="" id="screen-textbig" /></div> <div class="col-md-1"><button type="submit" class="btn btn-default">Submit</button></div> </div> </form>
When starting out in robotics, it might be tempting to dismiss robot simulators. Simulators are great for folks who don’t have the real robot; but if you have the robot, why would you bother simulating it? Some things might be seen as a cross-over between simulation and reality. For example, when building a map, you are taking data from a camera or lidar device telling you how far things are away from your real robot in the real world. You can then mark that in your map and move your real robot around a bit and take another reading of how far things are away in the real world. You might think of the map that you are building as a model or “simulation” of the real world, which is affected by data that is acquired from the real world (your camera or lidar). Another example might be that you want to see how an arm movement will look on screen before performing it in the real world. So, the line between robotic simulation and the real robot can become a grey area.
ROS has support for simulation using Gazebo and a robot visualization tool called rviz, which lets to see your robot, its map, where the robot thinks it is located, and other data that is sent around through ROS topics.
You will often want to know exactly where something on your robot is relative to the real world. Is the camera located at ground level or 2 feet above the ground? You’ll need to know if the arm is at the front or the back of the robot to work out how far you extend the arm to pick something up. ROS provides the TF framework so you can describe in XML the layout of your robot and then easily find out where things are located without having to perform complex calculations in your own code.
Moving a robot is done by publishing a Twist message to the “/cmd_vel” topic. The Twist message is rather generic and allows a speed and heading to be given for up to three axes. For a robot that operates by turning two wheels, you will only need to set a single speed and a single angle or heading. To provide feedback about movement, a robot base will publish Odometry information, which contains information about the current twist the robot is following and the pose of the robot. The pose allows a robot to show what direction it is facing as it is moving — handy for robots that can move sideways as well as backward and forward. It is also very useful to know if the robot is facing the door or has just entered through it.
Driving with no hands
For a robot to move to a desired destination by itself, many things are likely to be needed. A map of the walls and obstacles in the environment are needed, for example. Other requirements include knowledge of where the robot is on that map, some method to detect objects that block the path but that are not always on the map, a way to generate a plan to get to the destination from the current location, and a means to monitor exactly where the robot is as it moves towards the goal position. Being able to send messages to the robot base telling it what speed and heading to follow and then to monitor the odometry information as the robot moves allows control of the robot to be abstracted from how motion is achieved.
One fairly affordable method to build maps is using an “RGBD” camera, such as the Kinect, which offers both color and depth information in each image. Another way to work out depth information is by using two cameras that are a known distance apart, such as with a PlayStation camera or creating a similar setup using two normal web cameras in a fixed location. The Kinect is designed for indoor use in gaming and does not work well outside where there is a lot of background infrared light. Using two cameras can work both inside and outside but also requires light in order to see objects.
ROS has support for depth information from both the Kinect and PS4 eye cameras. For the latter, you will also need to resolder the PS4 eye cable to obtain a USB3 connection to it. Although I have seen successful modifications like this, you should be prepared to possibly damage or destroy some of your hardware if you undertake them.
Although cameras can provide information about how far objects are away in three dimensions, you might like to start navigating around by converting the information from the camera into a 2D representation. This is much less computationally intense, and ROS has good support for converting information from a Kinect to a “laser scan,” where the depth information is converted into a 2Dl representation of how far away objects are from the robot. The laser scan is then used by the gmapping package to generate a map of the environment. The Adaptive Monte Carlo Localization (AMCL) package can use the current laser scan and a rough idea of where the robot started to determine where the robot currently is located on a map. As the robot moves around a little bit, the initial location estimate is improved because more depth information from the real world helps work out the position of the robot relative to the map.
Final words
ROS is a very powerful robotics platform. That said, it does have a fairly steep learning curve. Some key tutorials would help ease new users into creating fully functional robots. For example, detailed instructions for the creation of an extremely cheap robot arm complete with a ROS package to drive it would provide a great base for customization for the robot arm you might have on your desk. It is often much simpler to customize the arm segment lengths in your robot arm model from an existing software package than to start from scratch.
On the other hand, ROS does allow a determined hobbyist to create a robot with mapping and navigation and be able to talk from JavaScript through to Arduino code running on one of many specific hardware controller boards on a robot.
On Wed, 4 Nov 2009 18:17:21 +0100, Hugo Arts <hugo.yoshi at gmail.com> wrote:

> Now, the code. If you write __iter__ as a generator, you won't have to
> write a next method at all. It simplifies the thing a whole lot:
>
> def __iter__(self):
>     for range in self.ranges:
>         for item in range:
>             yield item
>
> That's it. Alternatively, you could turn the whole thing into a
> one-liner and just return a generator expression from __iter__:
>
> def __iter__(self):
>     return (item for r in self.ranges for item in r)
>
> It's not as clear though, and it doesn't save that much space. I like
> the first one slightly better.

Thank you very much! That's exactly what I expected. I was sure my code was uselessly heavy. Actually, when reading the doc about iteration, I had wrongly understood that next() is required, too. Now, I agree with you on your last comment... except that (.. for .. in ..) is precisely the syntax for a generator expression (which is much more familiar to me than the use of generator funcs). Still, I will use the first idiom for clarity.

Two additional questions (relative to things manually implemented in my original code):

* What about memorization of "emptyness", meaning the last item is already reached, and following calls will all fail? This is automatic for generators, but...
* Then how do you restart it? With a decoupling of __iter__() and next(), it's possible to have both failure when empty for the same iterator (= call to next()), and a new iterator returned by __iter__(), typically for a new "for" statement.

Below, after a bug correction (attributes needing initialisation):

=======================
def testPolyRange():
    .......
    for i in pr1: print i,
    try:
        print pr1.next()
    except StopIteration, e:
        print "StopIteration"
    for i in pr1: print i,

==>
1 2 5 6 3 4 5 6 7 8 StopIteration
1 2 5 6 3 4 5 6 7 8
=======================

PS: Just checked and works as expected with generator.

Thank you again,
Denis.
------ la vita e estrany
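The restart behaviour the poster asks about can be checked directly. Here is a small Python 3 sketch (the class name and ranges are stand-ins for the poster's code): each call to `__iter__` builds a brand-new generator, so a second `for` loop restarts from the beginning, while any single exhausted iterator keeps raising `StopIteration`:

```python
class PolyRange:
    def __init__(self, *ranges):
        self.ranges = ranges

    def __iter__(self):
        # a generator function: each call to __iter__ returns a fresh iterator
        for r in self.ranges:
            for item in r:
                yield item

pr = PolyRange(range(1, 3), range(5, 7))
print(list(pr))  # → [1, 2, 5, 6]
print(list(pr))  # → [1, 2, 5, 6]  (a new iterator each time, so it "restarts")

it = iter(pr)
list(it)                          # exhaust this particular iterator
print(next(it, "StopIteration"))  # → StopIteration (this iterator stays empty)
```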
The Q3ComboBox widget is a combined button and popup list. More...
#include <Q3ComboBox>
This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.
This class was introduced in Qt 4.1.
The Q3ComboBox widget is a combined button and popup list.
This property holds whether auto-completion is enabled.
This property can only be set for editable comboboxes, for non-editable comboboxes it has no effect. It is false by default.
This property holds the number of items in the combobox.
This property holds the index of the current item in the combobox.
Note that the activated() and highlighted() signals are only emitted when the user changes the current item, not when it is changed programmatically.
This property holds the text of the combobox's current item.
This property holds whether duplicates are allowed.
This property holds whether the combobox is editable. The combobox changes its appearance to a 2.0 style Motif combobox if it is set to be editable.
This property holds the position of the items inserted by the user.
The default insertion policy is AtBottom. See Policy.
This property holds the maximum number of items allowed in the combobox.
Destroys the combobox.
This signal is emitted when a new item has been activated (selected). The index is the position of the item in the combobox.
This signal is not emitted if the item is changed programmatically, e.g. using setCurrentItem().
This is an overloaded function.
This signal is emitted when a new item has been activated (selected). string is the selected string.
You can also use the activated(int) signal, but be aware that its argument is meaningful only for selected strings, not for user entered strings.
Returns true if auto-resize is enabled; otherwise returns false.
See also setAutoResize() and autoResize.
Replaces the item at position index with the text t.
This is an overloaded function.
Replaces the item at position index with the pixmap im, unless the combobox is editable.
See also insertItem().
This is an overloaded function.
Replaces the item at position index with the pixmap im and the text t.
See also insertItem().
Removes all combobox items.
This slot is equivalent to setValidator( 0 ).
Reimplemented from QWidget::focusInEvent().
Reimplemented from QWidget::focusOutEvent().
Hides the combobox.
See also QWidget::hide().
This signal is emitted when a new item has been set to be the current item. The index is the position of the item in the combobox.
This signal is not emitted if the item is changed programmatically, e.g. using setCurrentItem().
This is an overloaded function.
This signal is emitted when a new item has been set to be the current item. string is the item's text.
You can also use the highlighted(int) signal.
Inserts a text item with text t, at position index. The item will be appended if index is negative.
This is an overloaded function.
Inserts a pixmap item at position index. The item will be appended if index is negative.
This is an overloaded function.
Inserts a pixmap item with additional text text at position index. The item will be appended if index is negative.
See also insertStringList().
This is an overloaded function.
Inserts the list of strings at position index in the combobox.
This is only for compatibility since it does not support Unicode strings. See insertStringList().
This is an overloaded function.
Inserts the list of strings at position index in the combobox.
This is only for compatibility since it does not support Unicode strings. See insertStringList().
Inserts the list of strings at position index in the combobox.
Reimplemented from QWidget::keyPressEvent().
Returns the line edit, or 0 if there is no line edit.
Only editable listboxes have a line editor.
See also setLineEdit().
Returns the current list box, or 0 if there is no list box. (Q3ComboBox can use QPopupMenu instead of QListBox.) Provided to match setListBox().
See also setListBox().
Returns the pixmap item at position index, or 0 if the item is not a pixmap.
Pops up the combobox popup list.
If the list is empty, no items appear.
Removes the item at position index.
Reimplemented from QWidget::resizeEvent().
If enable is true, enable auto-resize; disable it otherwise.
See also autoResize.
Enables the combobox if enable is true; otherwise disables it.
See also QWidget::enabled.
Sets the font for both the combobox button and the combobox popup list to font.
Sets the line edit to use edit instead of the current line edit.
Sets the palette for both the combobox button and the combobox popup list to palette.
Applies the validator v to the combobox so that only text which is valid according to v is accepted.
This function does nothing if the combobox is not editable.
See also validator(), clearValidator(), and QValidator.
Reimplemented from QWidget::sizeHint().
This implementation caches the size hint to avoid resizing when the contents change dynamically. To invalidate the cached value call setFont().
Returns the text item at position index, or QString::null if the item is not a string.
See also currentText().
This signal is used for editable comboboxes. It is emitted whenever the contents of the text entry field changes. string contains the new text.
Updates the widget mask.
See also QWidget::setMask().
Returns the validator which constrains editing for this combobox if there is one; otherwise returns 0.
See also setValidator(), clearValidator(), and QValidator.
Reimplemented from QWidget::wheelEvent(). | http://doc.trolltech.com/main-snapshot/q3combobox.html | crawl-003 | refinedweb | 934 | 62.54 |
Inheritance
In Building an Application, Part 1, you learned about application objects, and that the plans for objects are written into files called classes. In addition, you learned to use predefined classes from the Java API, and to manipulate objects by calling methods, either your own or predefined.
So far, you constructed the first class of the Dive Log application and the place holder classes that DiveLog.java initializes.
Part 2 reinforces these concepts and introduces the Welcome class.
In Part 1, you created the DiveLog class, which contained a constructor that builds the frame for the Dive Log application and a JMenu, and initializes a JTabbedPane object with six titled tabs. Each tab creates an object from a placeholder class that, for now, does nothing.
For this part of the tutorial, you need images and the Welcome.java placeholder class. You can use different images than those provided here, but to prevent problems with layout make them the same size as the images provided. As you work through the tutorials, you'll discover more ways to customize your application and its layout.
Note: It's assumed you installed the Java TM 2 Platform, Standard Edition on your system, and that you have completed Part 1 of the Building an Application tutorial.
Applications can consist of a single class, but most are built with many classes. The classes that make an application often communicate through a reference to an object and methods, using the dot operator. You've seen examples of this in the DiveLog class:
dlframe.setSize(765, 690);
In this example, dlframe is the reference to the instance of a JFrame object you created, but the JFrame class doesn't define a method called setSize. So, where does this method come from? How can you call a method on a JFrame object, when the JFrame class doesn't define that method? This process works similar to the way your hair color is passed down to you through your mother or father--through inheritance. But Java class inheritance gives a developer much more control over the child object than human inheritance does.
Because you instantiated an object of type JFrame, the DiveLog class inherited all the methods that JFrame contains. In addition, the DiveLog class inherited the methods that JFrame inherited. The JFrame class inherits methods and fields from several classes up the hierarchy tree:
All classes inherit from class Object automatically. In addition, when you create an object of the JFrame type, this new object also inherits from the Frame, Window, Container, and Component classes. To call a method from one of these inherited classes, you generally use the dot operator with your reference variable. The setSize method was inherited from the Component class.
Not all inherited methods and fields are accessible, yet they are part of the make-up for that object. Later, you'll learn more about accessing certain types of data from parent classes.
There is a more direct way to inherit from specific classes: use the extends keyword in the class declaration. By using extends, your child class (also called a subclass or derived class) inherits from the parent or super class(es) and frees you from having to instantiate the parent class just to reach its methods.
In other words, the extends keyword allows your class to inherit from a class of your choosing, unlike human inheritance in which you have no choice of who your parents are or what traits you inherit from them.
To make the DiveLog child class a child of the JFrame parent class, you type:
public class DiveLog extends JFrame
Now you have specified a class you want your subclass to inherit from, and it becomes that type of class. Using the above statement, you make the DiveLog class a type of JFrame object, just like a Labrador puppy is a Labrador type of dog.
By extending the JFrame class, there is no need to instantiate JFrame in your class to get to its methods, as shown in the previous lesson:

dlframe.addWindowListener(new WindowAdapter() { ... });
dlframe.getContentPane().add(tabbedPane);
dlframe.setJMenuBar(mb);
Instead, you can write the method call without the variable dlframe. You call the inherited methods by their names:

addWindowListener(new WindowAdapter() { ... });
getContentPane().add(tabbedPane);
setJMenuBar(mb);
The parent class, JFrame, has a constructor you can call by using the super keyword. DiveLog can call the parent JFrame constructor and supply the String to appear at the top of the frame window as follows:
super("A Java Technology Dive Log");
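To see extends and super outside of Swing, here is a minimal, hypothetical sketch. It is not the tutorial's actual DiveLog code — the class names Parent, Child, and Main are invented for illustration — but it shows the same pattern: the child passes a value up to the parent's constructor with super, and then calls a method it only has through inheritance:

```java
// Invented names for illustration; mirrors DiveLog extends JFrame plus
// super("A Java Technology Dive Log").
class Parent {
    private final String title;

    Parent(String title) {
        this.title = title;
    }

    String getTitle() {          // Child will inherit this method
        return title;
    }
}

class Child extends Parent {
    Child(String title) {
        super(title);            // call the Parent constructor
    }
}

public class Main {
    public static void main(String[] args) {
        Child c = new Child("A Java Technology Dive Log");
        // getTitle() is not defined in Child; it is inherited from Parent
        System.out.println(c.getTitle());
    }
}
```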
In human inheritance, you get some genes from your mother and some from your father. With class inheritance, by using the extends keyword, the DiveLog object is a JFrame object with additional features that you added. In other words, the DiveLog object has everything a JFrame object has and more. In addition to having a frame with a title (derived from the JFrame object), the DiveLog object has:

- a JTabbedPane object
- a menu bar
- a window listener
To improve this class, you can move the setSize(765, 690), setBackground(Color.white), and setVisible(true) methods into the main method. Since DiveLog is a type of frame object, it makes sense to set size and background on the newly instantiated DiveLog object once it's built, rather than when it's being constructed. But either way works.
You don't need to rewrite your DiveLog.java to continue with this tutorial, but it is a good exercise in inheritance. The next Dive Log class extends a class to gain the benefits of inheritance. The Welcome class holds the content for the first tab in the Dive Log application.
- Sagi Grimberg authored
On an ordered target shutdown, the target can send an AEN on a namespace removal; this will trigger the host to queue an ns-list query. The shutdown will trigger error recovery, which will attempt periodic reconnect. We can hit a race where the ns rescanning fails (error recovery kicked in and we're not connected), causing removal of all the namespaces, and when we reconnect we won't see any namespaces for this controller. So, queue a namespace rescan after we successfully reconnect to the target.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Find All Duplicates in an Array
Introduction
In this blog, we will discuss an array problem that has been asked frequently in Interviews.
The problem is to Find All Duplicates in an Array.
We are given an array of N elements, and each integer appears once or twice. We have to return an array of all the integers that appear twice.
Example: nums = [4,3,2,7,8,2,3,1]
Here, 2 and 3 appear twice.
∴The output will be [2,3]
Naive Approach
The naive approach to Find All Duplicates in an Array will be to take each element and to find whether any second occurrence of that element is present in the entire array or not. If any second occurrence is present, then we need to display it in the answer array. But this approach would take O(n²) time.
To have an optimised approach, we can also do this by using a HashSet.
This would reduce our Time Complexity and space complexity to O(n).
Declare a HashSet
⬇
If an element is not present in the HashSet on traversing the array, add the element to HashSet.
⬇
If that element is present in the HashSet, add the element to the result array.
⬇
Return the result array.
Optimized Approach
As the given constraint is 1 <= nums[i] <= n, no number in the array is greater than the length of the array, so we can reduce our space complexity to O(1).
get the index; the element corresponds to
⬇
Flip the number to negative
⬇
If the number is already negative, we have encountered it twice, so store it in the answer array.
⬇
Return the answer, i.e., all duplicates in the array.
So in the most optimal approach, the time complexity for this problem will reduce to O(n) and space complexity to O(1).
Till now, I assume you must have got the basic idea of what has been asked in the problem statement. So, I strongly recommend you first give it a try.
Find All Duplicates In An Array
If you were not able to solve it, don’t be sad. It’s a part of the learning process.
Please have a look at the algorithm, and then again, you must give it a try.
PseudoCode
Algorithm
___________________________________________________________________
procedure findDuplicates( ):
___________________________________________________________________
1. Declare a vector to store the answer.
2. Run a for loop from 0 till the size of nums.
3. index=abs(nums[i])-1
4. if(nums[index]<0)
5. Store the number in the answer array.
6. Else nums[index]=nums[index]*-1;
7. Return answer.
end procedure
___________________________________________________________________
CODE IN C++
#include <iostream>
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    int nums[n];
    for (int i = 0; i < n; i++) {
        cin >> nums[i];
    }
    // vector to store answer array
    vector<int> resultSet;
    for (int i = 0; i < n; i++) {
        // get the index the element corresponds to
        int index = abs(nums[i]) - 1;
        // if the number there is already negative, we are encountering the
        // element a second time, so store it in the answer array
        if (nums[index] < 0) {
            resultSet.push_back(index + 1);
        }
        // flip the number to negative
        nums[index] = nums[index] * -1;
    }
    // print the answer
    for (int i = 0; i < resultSet.size(); i++) {
        cout << resultSet[i] << " ";
    }
}
Output
Sample Input: n = 8, nums = [4,3,2,7,8,2,3,1]
Sample Output: [2,3]
Complexity Analysis
Time Complexity: O(n).
Analysing time complexity: the loop traverses the array only once, so the work done is proportional to n.
∴ O(n).
Space complexity: O(1) since no extra variable is used.
Frequently Asked Questions
- How to approach a problem like this?
Answer) First, understand the problem. The best way is to use diagrams. Then write your algorithm. Then start with the Brute Force method. Lastly, try to optimise your code.
- What is HashSet?
Answer) An unordered collection of unique elements is referred to as a HashSet.
- What is the difference between Array and Vector?
Answer) The array is static in size, whereas the vector is dynamic in size.
Key Takeaways
This article taught us how to Find All Duplicates in an Array by approaching the problem using a hash set and then without extra space. We discussed its implementation using illustrations, pseudocode, and then proper code.
We hope you could take away critical techniques like analyzing problems by walking over the execution of the examples and finding out the pattern followed in most array problems.
Now, we recommend you practice problem sets based on arrays to master your fundamentals. You can get a wide range of questions similar to this on CodeStudio.
This isn't the end — make sure to solve more problems of similar types.
Happy Coding. | https://www.codingninjas.com/codestudio/library/find-all-duplicates-in-an-array | CC-MAIN-2022-27 | refinedweb | 784 | 65.52 |
01 March 2012 14:34 [Source: ICIS news]
(updates throughout, adds comments from VCI general manager)
LONDON (ICIS)--Germany's chemical producers’ trade group has revised its 2012 chemical production output forecast for the country to zero after reporting a decline in production in the fourth quarter of 2011, it said on Thursday.
Frankfurt-based VCI had previously forecast 1.0% year-on-year growth in Germany.
The revision came after VCI reported that German chemical production fell in the fourth quarter.
Compared with the fourth quarter of 2010, production was down by 4.3% year on year; excluding pharmaceuticals, the year-on-year decrease was 6.0%.
However, the trade group said that the longer-term prospects for the country’s chemical industry remain promising.
Production growth should resume in 2013, rising by about 2.0–3.0% from 2012, it said. Through 2020, VCI expects
VCI general manager Utz Tillmann said that headwinds from the eurozone debt crisis had hit the chemical industry in the fourth quarter. The industry now seemed to be in a trough, he added.
However, Tillmann said there were signs that the situation was improving, and he pointed to recent economic sentiment indicators and industry surveys.
“We are working on the assumption that dynamic forces will prevail in coming months,” said Tillmann.
As for the fourth quarter of 2011, chemical industry sales fell by 2.3% from the third quarter, to €41.7bn ($55.6bn). Compared with the 2010 fourth quarter, sales were up by 1.8% year on year.
Chemicals prices stayed at a high level in the fourth quarter, but they did not continue to rise further after strong price hikes in the earlier part of 2011, VCI said.
Despite the weak quarter, firms continued to hire workers. The industry now employs 427,000 workers, 3% more than in 2010.
For the full year of 2011, VCI reported sales of €184.2bn, up by 7.7% from 2010. Producer prices rose by 5.2%, while full-year capacity utilisation averaged 84.7%, compared with 84.6% in 2010.
“Those numbers, however, cannot disguise the fact that the economic recovery has run out of breath, even in Germany.”
In fact, Germany's chemical production had been falling – on a sequential month-to-month basis – since May 2011, the group said.
($1 = €0.75) | http://www.icis.com/Articles/2012/03/01/9537494/germany-chem-trade-group-cuts-2012-forecast-after-weak-q4.html | CC-MAIN-2015-06 | refinedweb | 386 | 66.74 |
Have you always wanted to learn to code but didn’t know where to start? Learn how to control Minecraft on the Raspberry Pi using Python and some simple electronics. Here’s the end result:
You will need a Pi 2 or newer for this project, and whilst you could complete most of these tasks via command line over Secure Shell (SSH), this tutorial will focus on coding directly on the Pi.
New to Minecraft? Don’t worry – here’s our Minecraft Beginner’s Guide.
Introduction to Minecraft Pi
Minecraft for the Raspberry Pi has been developed for learning and tinkering (and it’s free). It comes with an Application Programming Interface (API) which provides a way for code to easily talk to Minecraft. It’s brilliant for learning how to code in Python, as well as getting started with electronics.
What is Python?
Python is a programming language. It is interpreted, which means when you run a Python file or program, the computer has to do a tiny bit of work to the file first. The downsides are that it can be considered slow when compared to compiled languages.
The benefits of interpreted languages are the speed of coding and their friendliness. You do not need to tell the computer what data you want to store, just that you want to store something and the computer will figure out what to do. There are exceptions, of course, and this is a somewhat simplified view, however programming should be fun! If you start digging into the complex technical details it can become a bit laborious.
Python is case sensitive. This is important to know, as Python will not recognise objects even if they are spelt correctly if the case is wrong. “Dosomething()” will not work if the method is actually called “DoSomething()”. Python also uses indentation. Other programming languages may not care how many indents your code has, whereas Python does care. Indents are used to tell Python where code belongs. Other languages may use “Curly Braces” ({}) to group code — Python does not use these. Python uses a hash (#) for comments, and comments are used to tell other developers or people looking at the code what a particular part does, or why it is needed. Python ignores anything after a hash.
Finally, there are two main versions of Python — Python 2.7.x and Python 3.x. There are some differences between the two (what are the differences?). This tutorial will use Python 3.
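Two of the most visible differences can be seen in a quick sketch (this is not an exhaustive list): print is a function in Python 3, and dividing two integers gives a float.

```python
print("Hello")   # Python 3 syntax; in Python 2 this was written: print "Hello"
print(7 / 2)     # → 3.5 in Python 3 (Python 2 printed 3)
print(7 // 2)    # → 3, floor division behaves the same in both versions
```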
Initial Setup
Providing your Pi is already set up and running, there’s not a lot of initial setup needed.
Open Terminal (Menu > Accessories > Terminal) and run this command. It’s always good practise to keep the repository list up to date, and this will download the latest list of programs (it will not download the programs themselves, this helps the Pi know what programs are called and where to find them).
sudo apt-get update
Now update the Pi (this may take a while):
sudo apt-get upgrade
Python and Minecraft Pi are installed already; however, if Minecraft Pi is not installed for any reason, it’s simple to install:
sudo apt-get install minecraft-pi
Navigate to documents and create a new folder called “Minecraft”:
cd Documents/
mkdir Minecraft
You can view the contents of this new folder:
ls
Here’s a tip – if you start typing and hit the TAB key, the command line will attempt to autocomplete the statement for you.
You can examine the path to the current directory using pwd, which stands for Print Working Directory:
pwd
Start Minecraft by going to Menu > Games > Minecraft Pi. You will need this running, but will come back to it later.
Open Python 3 from Menu > Programming > Python 3 (IDLE). This program provides a way for you to run Python commands and to write programs.
Now you could type your Python commands here, but that’s not very practical. Go to File > New File and then File > Save and save this in the folder you created earlier. (Documents > Minecraft). Let’s call it “hello_world.py“. You do not have to use the .py extension, this will be added automatically, but it’s good practise.
If you switch back to the terminal, and navigate into the Minecraft folder you should see the file you just created:
cd Minecraft/
ls
You can run this file like this:
python hello_world.py
Notice how “python” is all lower-case. This has to be before the file name, as it tells the Pi that the following file is Python, so it should be executed as such.
Switch back to the Python editor and type:
print("Hello, World!")
Save this file and run it again – you should now see “Hello, World!” appear in the command line — neat! The print command simply tells Python to output the following text in double quotes. This is good, but not terribly useful for Minecraft, let’s link it up:
from mcpi.minecraft import Minecraft
mc = Minecraft.create()
mc.postToChat("Hello, World!")
Now if you save and run this file, you should see “Hello, World!” appear in the Minecraft game. Let’s breakdown the code:
from mcpi.minecraft import Minecraft
This line tells Python that you want to use code from another file. This mcpi.minecraft file was developed to allow easy control of Minecraft.
mc = Minecraft.create()
This line creates an object called “mc” (Minecraft). You have to create this to allow communication to the Minecraft game — it is not enough just to include the file.
mc.postToChat("Hello, World!")
Finally, this line tells Minecraft to write some text to the chat. Try changing “Hello, World!” to something else and see what happens, but remember to include both the double-quotes. If you are having software problems, these are some common Python and Minecraft Pi errors:
- AttributeError — this is a typo, such as pint or prnt instead of print
- NameError: name ‘Minecraft’ is not defined — remember to import the modules you need
- NameError: name ‘true’ is not defined — Python is case sensitive, change to “True”
- socket.error: [Errno 111] Connection refused — Make sure Minecraft is running
Projects
Now that you know the basics of Python and Minecraft, let’s make some cool projects. All of the code can be downloaded from GitHub.
Automated Bridge Builder
This program will effectively build a bridge over water. When the player gets near to a body of water, the program will convert several blocks to stone. As Minecraft uses a coordinate system, it is very easy to get the location of the player, along with the type of blocks around the player. Minecraft Pi is slightly limited, so it’s not possible to update multiple different blocks in bulk. You can easily code this behavior yourself, however.
Create a new file (File > New File) and save it as “bridge_builder.py“.
from mcpi.minecraft import Minecraft

mc = Minecraft.create()  # create Minecraft object

while True:
    x, y, z = mc.player.getPos()  # store player position
    # store the surrounding blocks
    a = mc.getBlock(x, y - 1, z + 1)
    b = mc.getBlock(x, y - 1, z - 1)
    c = mc.getBlock(x - 1, y - 1, z)
    d = mc.getBlock(x + 1, y - 1, z)
    if a == 8 or a == 9 or b == 8 or b == 9 or c == 8 or c == 9 or d == 8 or d == 9:
        # 8 or 9 is water. Set surrounding blocks on floor to a solid (stone) if water is found
        mc.setBlocks(x, y - 1, z, x + 1, y - 1, z + 1, 1)
        mc.setBlocks(x, y - 1, z, x - 1, y - 1, z - 1, 1)
        mc.setBlocks(x, y - 1, z, x - 1, y - 1, z + 1, 1)
        mc.setBlocks(x, y - 1, z, x + 1, y - 1, z - 1, 1)
Notice how the y value is actually looking at y - 1. This is the floor level. If the value of y was used, the script would look for blocks at about knee level — it would not work very well! mc.getBlock() returns the id of a block for the given coordinates. As x, y, and z are the coordinates of the player, you can add or subtract from them to get positions around the player. You do not have to use the x, y, and z values, you could use any number, however you may not know how that particular block relates to the player — it’s better to use values relative to the player. Run this file from the command line and see what happens.
You should see that a small area of ground turns into stone once the player reaches a body of water. It’s not great — you are able to walk fast enough to cause a problem. You could solve this by converting a larger volume of water to land. The final part of the mc.setBlocks() method is the block id. One is the block id for stone. You could change this to wood, grass, or anything. If you wanted to, you could quite easily convert this to a complex design — maybe a suspension bridge!
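The relative-coordinate arithmetic used above can be made explicit with a tiny helper. This is not part of the tutorial's code — the function name is ours — but it returns exactly the four floor-level (y - 1) positions that bridge_builder.py inspects around the player:

```python
# The four neighbouring floor blocks checked by the bridge builder,
# expressed relative to the player's position.
def floor_neighbours(x, y, z):
    return [
        (x,     y - 1, z + 1),
        (x,     y - 1, z - 1),
        (x - 1, y - 1, z),
        (x + 1, y - 1, z),
    ]

print(floor_neighbours(10, 5, 10))
# → [(10, 4, 11), (10, 4, 9), (9, 4, 10), (11, 4, 10)]
```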
Super Mining Button
This example will make short work of mining. It consists of a physical button that, when pressed, will mine 10 blocks cubed. Let’s start with the button. Similar to buttons on the Arduino, you will need a small amount of electronics, all of which should be found in a basic electronics starter kit:
- 1 x Breadboard
- 1 x momentary switch
- 1 x 220 ohm resistor
- Female > male jump cables
- Male > Male jump cables
Here’s the circuit:
This resistor is called a “pull down” resistor. It helps to ensure that what the Pi thinks is the button being pressed, really is the button being pressed. You do not have to use this, however it is recommended, as you may find lots of noise and false readings without it.
The button is connected to General Purpose Input Output (GPIO) pin 14. You can use any GPIO pin, however look at the pinout first, as they are not all controllable from the Pi, and vary slightly between models.
Now that the button is connected, it’s time to test it. Create a new file and save it as “button_test.py“. Add this code, save it then run it in Terminal.
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM) # tell the Pi what headers to use
GPIO.setup(14, GPIO.IN) # tell the Pi this pin is an input

while True:
    if GPIO.input(14) == True: # look for button press
        print "BUTTON WORKS!" # log result
        time.sleep(0.5) # wait 0.5 seconds
Press Control + C to stop the script. If everything is working correctly you should see “BUTTON WORKS!” in the Terminal. Notice how, like the Minecraft module, this test is using the RPi.GPIO and time modules. These allow the Pi to access the hardware pins and provide useful timing functions.
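The pull-down resistor deals with electrical noise in hardware; you can also filter in software by requiring two consecutive high readings a short interval apart before accepting a press. Here is a minimal sketch of that idea (my own addition, not from the original article), using a stand-in read_input function so it can run without GPIO hardware:

```python
import time

def debounced_press(read_input, interval=0.01):
    # Only report a press if the input reads high twice, a short interval apart
    if read_input():
        time.sleep(interval)
        return read_input()
    return False

# Simulate a noisy spike: the input reads high once, then low
readings = iter([True, False])
print(debounced_press(lambda: next(readings)))  # prints False
```

On the Pi you would pass something like lambda: GPIO.input(14) as read_input.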
Now let’s finish the rest of the code. Create a new file called “super_mine.py“. Here’s the code:
import RPi.GPIO as GPIO
import time
from mcpi.minecraft import Minecraft

mc = Minecraft.create() # create Minecraft Object
GPIO.setmode(GPIO.BCM) # tell the Pi what headers to use
GPIO.setup(14, GPIO.IN) # tell the Pi this pin is an input

while True:
    if GPIO.input(14) == True: # look for button press
        x, y, z = mc.player.getPos() # read the player position
        mc.setBlocks(x, y, z, x + 10, y + 10, z + 10, 0) # mine 10 blocks
        mc.setBlocks(x, y, z, x - 10, y + 10, z - 10, 0) # mine 10 blocks
        time.sleep(0.5) # wait 0.5 seconds
mc.player.getPos() returns the player’s current coordinates, which are then stored in x, y, and z. The setBlocks() method tells Minecraft to fill all blocks between the start and end coordinates with the given block; 0 is the block id for air. You could change this to another block id to solid-fill an area. You could also change the coordinates to +100 or even +1000 blocks, although the Pi may start to struggle if you get too ambitious. Notice how y + 10 is the same for both lines. You could change this to y - 10 if you wanted to remove blocks underground.
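To see why large coordinates strain the Pi, count the blocks a single setBlocks() call touches. Assuming both corners are inclusive, the volume grows with the cube of the span (a quick back-of-the-envelope helper, not from the article):

```python
def cuboid_volume(x1, y1, z1, x2, y2, z2):
    # blocks touched by one setBlocks call, assuming both corners are inclusive
    return (abs(x2 - x1) + 1) * (abs(y2 - y1) + 1) * (abs(z2 - z1) + 1)

print(cuboid_volume(0, 0, 0, 10, 10, 10))     # prints 1331
print(cuboid_volume(0, 0, 0, 100, 100, 100))  # prints 1030301
```

Going from a +10 span to a +100 span is roughly a 750x increase in work, which is why the Pi struggles.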
Teleporting
Another simple use for this button could be to “teleport”. The Minecraft Pi API provides a way to set the player position. The following code will “teleport” the player to a preset location:
mc.player.setPos(0, 0, 0)
Note that this method accepts three parameters: x, y, and z. You could set these to anything to instantly teleport the player to that location.
Create a copy of the super_mine file (File > Save Copy As) and modify it by replacing the if with the following:
if GPIO.input(14) == True: # look for button press
    mc.player.setPos(0, 0, 0) # teleport player
    time.sleep(0.5) # wait 0.5 seconds
This file should now look like this:
import RPi.GPIO as GPIO
from mcpi.minecraft import Minecraft
import time

mc = Minecraft.create() # create Minecraft Object
GPIO.setmode(GPIO.BCM) # tell the Pi what headers to use
GPIO.setup(14, GPIO.IN) # tell the Pi this pin is an input

while True:
    if GPIO.input(14) == True: # look for button press
        mc.player.setPos(0, 0, 0) # teleport player
        time.sleep(0.5) # wait 0.5 seconds
Save it as “teleport.py” and run.
You may find the player gets stuck inside some blocks when using this, in which case you’ll need to adjust the coordinates to a known open space (the top left of the screen shows your current location).
Build a House
One last task for this button is to build a house. Much like the quick-mining example above, this will simply replace the blocks surrounding the player to make a house. Different block ids will be used for different materials (windows, walls, etc.). To make things easier to code, a solid block will be created and then the inside removed (set to air), which creates a hollow shell. You could add extras like a bed or door; however, the Minecraft Pi project is a little incomplete, and whilst these objects work when placed by the player, they are not brilliant when placed using Python.
from mcpi.minecraft import Minecraft
import RPi.GPIO as GPIO
import time

mc = Minecraft.create() # create Minecraft Object
GPIO.setmode(GPIO.BCM) # tell the Pi what headers to use
GPIO.setup(14, GPIO.IN) # tell the Pi this pin is an input

while True:
    if GPIO.input(14) == True:
        x, y, z = mc.player.getPos()
        mc.setBlocks(x + 2, y - 1, z + 2, x + 7, y + 3, z + 8, 5) # make shell
        mc.setBlocks(x + 3, y, z + 3, x + 6, y + 2, z + 7, 0) # remove inside
        mc.setBlocks(x + 2, y, z + 5, x + 2, y + 1, z + 5, 0) # make doorway
        mc.setBlocks(x + 4, y + 1, z + 8, x + 5, y + 1, z + 8, 102) # make window 1
        mc.setBlocks(x + 4, y + 1, z + 2, x + 5, y + 1, z + 2, 102) # make window 2
        mc.setBlocks(x + 7, y + 1, z + 4, x + 7, y + 1, z + 6, 102) # make window 3
Save this as “house.py” and run. All being well, you should see a small house appear (you may need to turn around to find it). It’s very simple: an opening and some windows. In theory, there is no limit to how large or complex a building you could construct.
Make a Mini Game
Next, let’s make a mini-game! This will be quite simple: when the player steps on a block of sand, it will turn into lava after a random amount of time. This is a good game to make, as you could design your own levels or modify it to make things harder. You will not need the button for this example.
Create a new file and save it as “mini_game.py“. Here’s the code:
from mcpi.minecraft import Minecraft
import random
import time

mc = Minecraft.create() # create Minecraft Object

while True:
    x, y, z = mc.player.getPos()
    block_under_player = mc.getBlock(x, y - 1, z)

    if block_under_player == 12: # player standing on sand, start the timer
        random_time = random.uniform(0.1, 2.5) # generate random number
        time.sleep(random_time) # wait
        mc.setBlock(x, y - 1, z, 11) # turn it into lava
This code is a good introduction to the random module: random.uniform(0.1, 2.5) will generate a random number between 0.1 (one tenth of a second) and 2.5 (two and a half seconds). Increasing these numbers will make the game easier.
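If you want to see what random.uniform() produces before wiring it into the game, you can experiment with it on its own:

```python
import random

# random.uniform(a, b) returns a float N such that a <= N <= b
for _ in range(5):
    t = random.uniform(0.1, 2.5)
    assert 0.1 <= t <= 2.5
    print(round(t, 2))
```

Each run prints five different delays; every value always falls inside the 0.1 to 2.5 second range.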
Try it out! Stand on a block of sand, and it will shortly turn into lava. This could be the basis of a more complex game.
Make Another Mini Game
The premise for this game is simple – don’t be standing on the wooden floor when the time runs out. The player gets teleported into an “arena”. They are forced to stand still until the game starts. Once started, the floor will turn to water once the timer runs out. The player must be standing in the safe zone (diamond blocks) to survive. Each level reduces the timer by one second. After each successful level the safe area gets larger. Check out the code below:
import time
import random
from mcpi.minecraft import Minecraft

mc = Minecraft.create() # create Minecraft Object

# clear area
mc.setBlocks(-10, 1, -10, 25, 5, 25, 0)

# create arena shell
mc.setBlocks(0, 0, 0, 25, 10, 25, 17)

# hollow out arena
mc.setBlocks(1, 1, 1, 24, 10, 24, 0)

# move player to arena
mc.player.setPos(14, 25, 20) # teleport player

# make them stay put
# teleport player to start position every 1/10th second.
# do this for 5 seconds then start the game
time.sleep(2)
total_wait = 0
mc.postToChat("Waiting to Start")

while total_wait < 5:
    mc.player.setPos(14, 1, 20) # teleport player
    time.sleep(0.1)
    total_wait += 0.1

mc.postToChat("BEGIN!")

# 10 levels
for level in range(10):
    x, y, z = mc.player.getPos()
    level_time = 10 - level # reduce time by 1 second for each level
    mc.postToChat("Level - " + str(level + 1) + " start")

    # build floor
    mc.setBlocks(0, 0, 0, 25, 0, 25, 17)

    # make safe area
    safe_area_start = random.uniform(0, 22)
    safe_area_end = random.uniform(0, 22)
    mc.setBlocks(safe_area_start, 0, safe_area_end, safe_area_start + level, 0, safe_area_end + level, 57)

    elapsed_time = 0
    while elapsed_time < 10:
        x, y, z = mc.player.getPos()
        time.sleep(0.25)
        elapsed_time += 0.25

        # check player is still on floor
        if y < 0.75:
            mc.postToChat("Game Over")
            break
    else:
        # remove floor
        mc.setBlocks(-10, 0, -10, 25, 0, 25, 8)

        # put safe area back
        mc.setBlocks(safe_area_start, 0, safe_area_end, safe_area_start + level, 0, safe_area_end + level, 57)
        time.sleep(2.5)
        continue
    break
Save this as “mini_game_2.py” and give it a run.
The Pi 2 has some performance issues whilst running Minecraft. The Central Processing Unit (CPU) usage graph (top right corner) never shows any heavy load, so this must be down to poor design and optimization by the developers. These issues are unrelated to running code (they continue when Python is not running), but they are compounded by this mini game. If your Pi is really struggling, you may want to reduce the size of the arena or overclock your Pi.
Diamond Detector
Let’s make another circuit. This will use a Light Emitting Diode (LED) to light up when there are diamonds underneath (within 15 blocks). Here’s what you need:
- 1 x Breadboard
- 1 x LED
- 1 x 220 ohm resistor
- Female > male jump cables
- Male > Male jump cables
Here’s the circuit:
Connect the anode (long leg) to GPIO pin 14; when set high, this pin acts like a +3.3v supply. Connect the cathode (short leg) to ground via the 220 ohm resistor.
I have used a cheap ore toy and modified it by removing the rear cover and electronics, then placed an LED underneath it. You could easily make this permanent with hot glue or something similar.
Save this code as “diamonds.py“:
import RPi.GPIO as GPIO
import time
from mcpi.minecraft import Minecraft

mc = Minecraft.create() # create Minecraft Object
led_pin = 14 # store the GPIO pin number

GPIO.setmode(GPIO.BCM) # tell the Pi what headers to use
GPIO.setup(led_pin, GPIO.OUT) # tell the Pi this pin is an output

while True: # repeat indefinitely
    x, y, z = mc.player.getPos()
    for i in range(15): # look at every block down to block 15
        if mc.getBlock(x, y - i, z) == 56:
            GPIO.output(led_pin, True) # turn LED on
            time.sleep(0.25) # wait
            GPIO.output(led_pin, False) # turn LED off
            time.sleep(0.25) # wait
When there is a diamond ore block underneath the player (within 15 blocks), the light will flash.
Have you made something cool with Minecraft Pi? Let me know in the comments what you made or how far you made it in the games.
"...over Secure Socket Shell (SSH)..." That's not what it means; it means Secure Shell (the word "socket" is not used or implied):
Thanks Howard, corrected. | http://www.makeuseof.com/tag/learn-python-electronics-minecraft-pi-edition/ | CC-MAIN-2017-04 | refinedweb | 3,635 | 75.3 |
I noticed a few weeks back that I have over 1,000 Google Hangout requests. While I go out of my way to answer every question I receive it would be impossible to personally talk to each and every one of you nice people 🙂
So, I thought maybe it would be a good idea to have a live Ask Me Anything type video where I can answer many of the questions that come up all of the time, or anything else you may be interested in. I’d answer all the questions live on YouTube, but to help me decide if this is a good idea or not, leave any questions you have below in the comments.
I’m willing to answer most any question you may have. In situations in which honesty may get me sued I will just omit specifics. Feel free to leave questions anonymously if you’d prefer to do that.
Tell me if you think this would be a fun idea?
Thank you
Derek
[googleplusone]
Derek, I think your videos are awesome! I’m currently trying to get some good practice with Javascript and PHP objects, without spending a fortune. I’m looking at Node.js after that. I want to segue to the server-side with some good programming language experience to try my luck with writing Android apps. I don’t feel ready for your Android videos yet. What do you suggest?
The problem with Android is basically the jargon. That is what confuses people. For example Intent sounds confusing. What is an Intent? It is just a way of referring to something you want to interact with. If you want to open a new Activity the Activity is referred to as an Intent. If you want to open the picture gallery to get a picture you refer to the picture gallery as an Intent.
I’m working on a video right now that has the one goal of making all of this jargon make sense. After you get past the jargon making Android apps is actually very easy and fun 🙂
Hello Derek,
Thank you so much for the learning environment. I have started trying to learn Java as a basis, as I am wanting to possibly go after a computer science degree, so I watched your basic setup video and followed it to the letter. However, I am having problems getting Apache Commons downloaded to get started, and wanted to know if something has changed since you made the tutorial?
Thank you again for your help.
Gene
You’re very welcome Gene 🙂 I have a tutorial on how to install Java libraries. I hope it helps.
Hi Derek
I watched your design pattern tutorials and enjoyed them to the fullest. They were comprehensive, exhaustive and precise at the same time! I managed to learn multiple design patterns in a short span of time. I am grateful to you for providing such learning. Thanks a lot! 🙂
I also have to get started with advanced C++ though I have basic understanding of C++. Do you have any existing tutorial on C++? If not, would it be possible for you to teach smart pointers, templates, STL etc?
Thank you!
Hi,
Thank you for the nice compliments 🙂 I’m very happy to hear that you enjoyed the design pattern tutorials. When I was making those videos nobody was watching them, but I felt it was an important topic to cover properly.
I haven’t covered C++ yet, but I definitely will soon. I will finish my C tutorial this month.
Derek
Hi Derek,
I have now seen almost all of your videos,
I was wondering if you were going to do one on netbeans?
That is a lot of watching 🙂 I will cover Java Enterprise stuff eventually, but I want to devote myself to it when I do. I’ll get to it as soon as possible.
Great idea! That would be very helpful! How can we ask? By leaving a comment here, on YouTube, or where? :p
Thank you very much for all of your tutorials, which are the BEST!
Feel free to leave questions here on my website because I can’t guarantee I’ll see comments on YouTube or Google+ anymore. Ask anything
first thanks for these great videos
i want to know the topics that you will do in c / c++
you are awesome
Thank you very much 🙂 I’ve already started covering C. I will finish that tutorial this month. As per C++ I plan on covering it with the same level of detail I used with Java. I’d love to transition into electronics tutorials with C, but there doesn’t seem to be any interest in that topic for some reason?
If you mean embedded systems programming, then I am interested in that. OpenGL would be great too. Btw thanks for your tutorials, they are great.
I will be spending a great deal of time with OpenGL next year. Thank you 🙂 I’m glad you are enjoying the videos.
Did you just say electronics???!!!!!
Awesome dude. I Would love to watch electronics tutorials videos too 🙂 and I am sure there would be many wanting to learn the same.
What kind of Electronics topics would you cover??
Also, how are you ALSO into electronics?? (well, other than CS/programming stuff)
Do u work on it frequently?
And yes, Embedded stuff would also be great..
In the real world I spend a lot of time making Android apps that communicate with electronics. I went to school to be an electrical engineer. When I make those tutorials I will start at electrons and conductors and work up from there. I want everyone to completely understand everything. I’d teach electronics from the mindset of a programmer because that is what made sense to me.
Hi Derek,
I’ve been following your Android Tutorial video series, and I must say that they are absolutely fantastic. They have helped me understand a great deal.
As for my question.. how many Android (or iOS) apps have you published and which one did you enjoy making the most?
Hi Praagya,
Thank you very much 🙂 I have made a bunch of Android apps, but I make private apps for business owners. This is an untapped market in my opinion, because I receive way more work than I can handle.
The most successful app I ever made was an app that allowed business owners to set up security cameras all over the country and then monitor them from their phone. I’ve set up that one app 6 times already. Aside from the convenience, the greatest thing about it is that it was very inexpensive to set up.
If I do a live AMA I will talk more about it and other apps I have made.
Hello, Derek. Thank you very much for your lessons. Can you make videos about WordPress theme development?
Thanks
Excuse me…Drupal…We need drupal)))
Hello, I have made over 80 videos on WordPress theme development. I hope you find them useful 🙂
Thanks. Sorry for my inattention
Are you going to make videos about drupal?
Sorry, but I use WordPress. I don’t think I could make Drupal tutorials that are any better than what is already out there.
And MVC PHP Framework please (ZF or Kohana or Yii or Symfony)
I will definitely cover the Zend framework. I have been using it for many years and a tutorial is well over due.
Already asked you that on Twitter and I haven’t got any reply from you, so I might as well ask you here: will you make LWJGL tutorials?
Also thanks for all of those useful tutorials 😉
Sorry for not replying. My Twitter account is full of spam. Yes I plan on covering LWJGL. Next year will be all about games and Android apps.
By next year you mean 16 days from now in 2014?
Yes 🙂
I am interested in making YouTube tutorial videos like you do, but wanted to know whether it pays well enough to live on, if you don’t mind me asking. Is there any way of knowing approximately how much I will make based on x amount of videos, x amount of views, and x amount of subscribers? Would you know how many subscribers, videos, and views I would need to start making $500 or $1000 a month?
Keep up your good work. I have learnt so much from your tutorials, and they have helped me out so much in my career and studies. Your work has actually made a difference in my life by helping me get a new job. Once again I thank you from the bottom of my heart.
Thank you for the kind message 🙂
It is kind of hard to say how long it would take to make $1,000 a month because everyone is different. Also making money through YouTube works different from most anything else. It is kind of similar to running a business.
I made a bunch of videos on what I’ve learned over the years about making YouTube videos. I also analyzed how the most successful people (Not Me) make YouTube videos. Here are a few of those:
How to Video Blog(Tricks used by the most successful Vloggers)
Get more views on YouTube
Become a YouTube Partner
I hope that helps and feel free to ask any other questions you have 🙂
Derek
Hi Derek,
I am a great fan of yours. I need your suggestion: how do I improve my problem-solving skills? E.g. how do I find problems so that I can practice OOP, Java, and design pattern concepts?
I know the theory, but how do I apply it to actual problems, and where do I find problems like that?
Thanks for your time.
Regards
Sohi
Hi Sohi,
Thank you for the nice compliments 🙂 I made a tutorial that shows how to turn a problem into finished code Object Oriented Design tutorial.
I have been working on a problem solving tutorial for a while, but it isn’t quite ready yet. I hope to get it out soon.
I hope that helps
Derek
Is there any plan to cover C# and ASP.NET, or Unity? Waiting for it 🙁
Sorry, but I’m not sure when I’ll cover those topics. I’m a bit backed up at the moment.
Derek, thank you very much for your lessons – you really help a lot of people. Good luck and don’t stop))
When I have got a chance, I will make donation. I promise
Thank you 🙂 I have many videos planned for the coming year. Please don’t donate. If you know someone who might like my website tell them about it. That is the only thanks I require.
Derek
Thank you as always for the help that you provide, Derek.
The question that I always have lingering in the back of my mind is, how does one go about getting a job in the web design industry? Thanks in large part to your tutorials as well as practicing on my own and my former schooling, I feel as though I have managed to obtain a decent skill set. What exactly are employers looking for and what could one do to stand out from the crowd?
I am setting up a new portfolio website for myself after finishing my friend’s e-commerce site and I was wondering if you had any advice, as I’m going to start applying for jobs after my site is finished.
Thank you again for your constant hard work and happy holidays!
Scott
You’re very welcome 🙂 From what I’ve personally seen getting a job in that industry comes down to a few things:
1. Making friends with someone who already works there
2. Meeting certain standards that the employer sees as valuable. Often just graduating from a specific school will get you the job
3. Proving that you can attract customers to the business
I decided a while back to just be a consultant and get my own business. If you create a proven way to solve business problems and you are good at sales it is pretty easy to start your own business. You’ll suffer for about 4 years, but if you stick with it that long and do good work you’ll be set after that.
I focused years ago on perfecting all aspects of selling online. After I had a few profitable shopping carts under my belt it was very easy to attract business. Now I turn away over 95% of the customers that approach me. If you get very good at most any skill you’ll find a job.
I wish you the best 🙂
Derek
Great job with the Samsung series, they provided a great set of samples and you’ve done an excellent job of describing them, Thx – On Android Dev. Is there a difference between playing a video and streaming it? How about a live feed?
Thank you 🙂 yes Samsung basically told me exactly what they wanted me to do. They wanted just 4 videos, but I decided to make many more so that I could properly explain everything.
The only difference for me between recording live and recording-then-editing is that I don’t have any experience with recording live. It is a bit odd because there are a few seconds of delay.
Hey Derek,
Your videos on Design Patterns are just awesome. I am planning to learn Android development too, from your tutorials.
I am really happy to see you’re planning this AMA session. 🙂
I assume this will be a live youtube video. Can you let us know when this session is going to be?
I’m very happy that you enjoyed the design pattern tutorials. It would help a lot if everyone would leave questions here for the AMA so that I have some questions to answer at the beginning of the video. I haven’t decided yet when I’ll do it. I’ll make everyone well aware days before it happens.
What is the best advice you can give for someone who is wanting to start making youtube tutorial videos?
I’ll get more into this topic in my live ask me anything, but the best advice is to make videos on topics that you love to talk about. Then make as many as possible so you can learn how to make good videos. It isn’t something you can learn from a book. I’ve made a few videos on what the really successful people do on youtube. Here is one that may help.
Hi Derek I appreciated a lot your video classes on UML.
I’m trying to learn Apache Solr and Apache Nutch by myself as I haven’t found any video class on the web.
Would you be interested in preparing video classes on Solr or Nutch?
I (and many others) would be extremely grateful.
Ciao
Andrea
Hi Andrea,
Two awesome technologies I’d love to cover and plan to cover when I get back into web stuff. Thank you for the request. I wasn’t aware that anyone was interested in seeing more on them.
Thank you
Derek
Hey, I want to make a website similar to yours. It would essentially exist to help with group (programming) projects by sharing information on where to begin learning to program, along with written articles. I can invite anyone who has an email to be a moderator, and I have a Facebook page for the website, but I don’t know where to find people who would be on the site.
The people will eventually find you. I have tried all of the social network tricks for attracting people and they haven’t worked for me. I think the most important thing to do is to decide what videos you’d enjoy making even if nobody watched them.
You need to make a ton of videos in the beginning just to get good at making videos. The first few hundred I made weren’t that good, but I kept at it and kept improving. It is a skill and it needs to be developed. Look for topics that aren’t properly covered and that you are interested in and make those videos.
I have a bunch of tips on what the really successful people do on YouTube here.
Also check out the tutorial people on YouTube that are really successful like thenewboston, khan academy, etc. They are way more popular then I am.
I hope that helps
Derek
I would like to build an app. Is it true that I must first learn jQuery, although I know HTML, CSS, and JavaScript?
It depends on the type of app. Are you referring to Android apps?
yes I am refering to android apps
You can make Android apps in a few ways. The most common is to use Java. You can however also use other tools that are easier to use, but are more limited. I’m going to cover one of those very soon.
ok thanks I will be waiting my brother.
What are your thought’s on this
I believe I have absolutely no control over my government. I didn’t watch the whole video, but I read the news every day and I know whats going on. I’m just powerless to change anything and so I dedicate myself to helping the world in what ever small way I can 🙂
It’s very sad what’s going on, but we must work until that day comes.
I am really thankful for people like you.
Thank you 🙂 you are very kind. Eventually someone great will come along and do the right thing.
I wanted to ask you: I am thinking of making tutorial videos on YouTube, and I will also create a website. Do you know any good free templates for WordPress sites that would be good for tutorials? Sorry to bother you.
Here are a ton of free wordpress themes
Sorry for all these questions, but my last question is: is there a particular reason why you don’t put your actual website URL in your YouTube description? I have seen that you put a shortened URL instead. Is the actual URL bad for SEO for your site?
Keep up the good work!!
I do that just to shorten the url and for no other reason. I don’t really do anything for SEO reasons anymore. I’ve decided that people will find me eventually if they look hard enough and I don’t want to try and manipulate Google and potentially get black listed.
Jagannath,
Thank you 🙂 I’m glad you enjoy the videos.
1. I’m not sure what you mean by a custom post type. I use categories to organize posts.
2. I have a bunch of slider tutorials for WordPress, but this is probably the best Edit the Coda Slider.
3. You can create sub categories in WordPress as it is. I’m not sure what you are struggling with here.
I’m sorry I could be of more help, but I have all my WordPress tutorials here to help. I have over 80 of them.
I hope that helps.
Derek
Hi Derek,
Watching your design pattern videos at the moment, really enjoying it, it is very easy to follow and love those visuals in the presentation.
I checked through your site and realised you have done so much and being very productive.
I wonder what’re your daily rituals that make you so motivated and productive?
Thank you again.
Seng
Hi Seng,
I’m very happy that you are enjoying the videos. I think I have some sort of issue where I go crazy if I’m not doing something. I barely sleep at night and if I ever find myself waiting for something, or if I’m on the phone with someone that isn’t keeping me stimulated I’ll work on things like these tutorials.
The tutorials actually take me very little time to make, but in between running my business and taking care of my kids it gets a bit hard to make them on a consistent basis.
So, basically I’m able to accomplish everything that I do because I’m a bit crazy 🙂
I hope that makes sense
Derek
What do you think about the Raspberry Pi? What kind of language do I need to learn to make my own Linux/Android-based OS? I know it’s a big leap, but it’s a goal to reach. Is it possible to do this on a Windows computer or virtual machine, and how do they write the OS for Android? And one more thing, it’s more of a request: please, sometime in the future, if you have time, explain how to write OpenGL code.
Operating systems are written with a combination of assembly and C normally. I plan on covering OpenGL soon.
hello mr Derek Banas. First I would like to appreciate your effort in sharing your knowledge, and to thank you for being an inspiration to people like us from africa.
That being said, I have a problem with the phonebook app i was following with your tutorial videos.
Please help look at this logcat error if you can identify what went wrong. My app throws this errors when i want to run it.
12-18 05:42:06.600: D/dalvikvm(1745): GC_FOR_ALLOC freed 64K, 4% free 3092K/3216K, paused 3ms, total 3ms
12-18 05:42:06.600: I/dalvikvm-heap(1745): Grow heap (frag case) to 4.147MB for 1127532-byte allocation
12-18 05:42:06.610: D/dalvikvm(1745): GC_FOR_ALLOC freed 2K, 3% free 4190K/4320K, paused 3ms, total 3ms
12-18 05:42:06.700: D/AndroidRuntime(1745): Shutting down VM
12-18 05:42:06.700: W/dalvikvm(1745): threadid=1: thread exiting with uncaught exception (group=0xb0d2ab08)
12-18 05:42:06.700: E/AndroidRuntime(1745): FATAL EXCEPTION: main
12-18 05:42:06.700: E/AndroidRuntime(1745): Process: com.DYCES.ccchymnbook, PID: 1745
12-18 05:42:06.700: E/AndroidRuntime(1745): 05:42:06.700: E/AndroidRuntime(1745): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2176)
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2226)
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.app.ActivityThread.access$700(ActivityThread.java:135)
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1397)
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.os.Handler.dispatchMessage(Handler.java:102)
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.os.Looper.loop(Looper.java:137)
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.app.ActivityThread.main(ActivityThread.java:4998)
12-18 05:42:06.700: E/AndroidRuntime(1745): at java.lang.reflect.Method.invokeNative(Native Method)
12-18 05:42:06.700: E/AndroidRuntime(1745): at java.lang.reflect.Method.invoke(Method.java:515)
12-18 05:42:06.700: E/AndroidRuntime(1745): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:777)
12-18 05:42:06.700: E/AndroidRuntime(1745): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:593)
12-18 05:42:06.700: E/AndroidRuntime(1745): at dalvik.system.NativeStart.main(Native Method)
12-18 05:42:06.700: E/AndroidRuntime(1745): Caused by: java.lang.RuntimeException: Your content must have a ListView whose id attribute is ‘android.R.id.list’
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.app.ListActivity.onContentChanged(ListActivity.java:243)
12-18 05:42:06.700: E/AndroidRuntime(1745): at com.android.internal.policy.impl.PhoneWindow.setContentView(PhoneWindow.java:293)
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.app.Activity.setContentView(Activity.java:1928)
12-18 05:42:06.700: E/AndroidRuntime(1745): at com.DYCES.ccchymnbook.MainActivity.onCreate(MainActivity.java:33)
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.app.Activity.performCreate(Activity.java:5243)
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1087)
12-18 05:42:06.700: E/AndroidRuntime(1745): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2140)
12-18 05:42:06.700: E/AndroidRuntime(1745): … 11 more
12-18 06:56:11.562: D/dalvikvm(1931): GC_FOR_ALLOC freed 43K, 4% free 3092K/3192K, paused 36ms, total 37ms
12-18 06:56:11.562: I/dalvikvm-heap(1931): Grow heap (frag case) to 4.147MB for 1127532-byte allocation
12-18 06:56:11.572: D/dalvikvm(1931): GC_FOR_ALLOC freed 0K, 3% free 4193K/4296K, paused 10ms, total 10ms
12-18 06:56:11.642: D/AndroidRuntime(1931): Shutting down VM
12-18 06:56:11.642: W/dalvikvm(1931): threadid=1: thread exiting with uncaught exception (group=0xb0d2ab08)
12-18 06:56:11.682: E/AndroidRuntime(1931): FATAL EXCEPTION: main
12-18 06:56:11.682: E/AndroidRuntime(1931): Process: com.DYCES.ccchymnbook, PID: 1931
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2176)
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2226)
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.app.ActivityThread.access$700(ActivityThread.java:135)
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1397)
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.os.Handler.dispatchMessage(Handler.java:102)
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.os.Looper.loop(Looper.java:137)
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.app.ActivityThread.main(ActivityThread.java:4998)
12-18 06:56:11.682: E/AndroidRuntime(1931): at java.lang.reflect.Method.invokeNative(Native Method)
12-18 06:56:11.682: E/AndroidRuntime(1931): at java.lang.reflect.Method.invoke(Method.java:515)
12-18 06:56:11.682: E/AndroidRuntime(1931): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:777)
12-18 06:56:11.682: E/AndroidRuntime(1931): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:593)
12-18 06:56:11.682: E/AndroidRuntime(1931): at dalvik.system.NativeStart.main(Native Method)
12-18 06:56:11.682: E/AndroidRuntime(1931): Caused by: java.lang.RuntimeException: Your content must have a ListView whose id attribute is 'android.R.id.list'
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.app.ListActivity.onContentChanged(ListActivity.java:243)
12-18 06:56:11.682: E/AndroidRuntime(1931): at com.android.internal.policy.impl.PhoneWindow.setContentView(PhoneWindow.java:293)
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.app.Activity.setContentView(Activity.java:1928)
12-18 06:56:11.682: E/AndroidRuntime(1931): at com.DYCES.ccchymnbook.MainActivity.onCreate(MainActivity.java:33)
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.app.Activity.performCreate(Activity.java:5243)
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1087)
12-18 06:56:11.682: E/AndroidRuntime(1931): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2140)
12-18 06:56:11.682: E/AndroidRuntime(1931): ... 11 more
It looks like heap and emulator problems. Take a look at this video, where I show how to install all of the updated Android / Eclipse software. That should fix it.
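For reference, the “Caused by” line in the trace points at the layout: a ListActivity requires the layout passed to setContentView to contain a ListView with the framework-defined id android.R.id.list. A minimal sketch of such a layout (the file name is just a placeholder):

```xml
<!-- res/layout/activity_main.xml (hypothetical file name) -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- ListActivity looks for exactly this id: @android:id/list -->
    <ListView
        android:id="@android:id/list"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</LinearLayout>
```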
Hi Derek,
Have you received any requests from viewers to receive your tutorial videos on DVD, saving them from streaming/downloading from YouTube, for a nominal fee of say $10 for Java, $10 for Android, etc.?
It goes without saying the content is far more valuable than the nominal $10 fee. I have been your faithful viewer on many videos and found that having the content on a local drive is very handy.
Part of the proceeds would cover the processing and handling, and the remaining portion could go to support and maintain your web hosting and other expenses.
Have a great holiday!
Ernie
Hi Ernie,
I have thought about doing that in the past, but it would go against my idea of providing a free education. You can download the videos if you want. I personally can’t allow it however because I wouldn’t be able to afford the hosting if I hosted my own videos.
There are some other personal decisions I have had to make to make hosting my videos worthwhile to YouTube. They are providing my massive videos for me and so I have found that it is in my interest for them to make some money off of them.
I hope that makes sense
Derek
Bro you are veeeeeeeeeeeeeeeeeeeeeeeeeeerrrrrrrrrrrrrrrrrrryyyyyyyyyyy SMART
Thank you very much 🙂 I’m not all that smart I just understand this stuff because I’ve been doing it forever.
Hi Derek,
First of all THANK U SO MUCH for doing such a great job.
“You are an open university having loads to crash courses for free!!” 🙂
I have seen your design pattern videos and found them really useful, though I have yet to finish all of them. One thing I felt was that each 20-25 minute video is a bit fast and packs in a lot for each pattern, or maybe I grasp at a slower pace 🙂
Would like to have videos on “How to crack programming interviews with data structures/algorithms”: topics/tips/experience.
Thanks again for enlightening!! 🙂
Regards,
Raj
Hi Raj,
Thank you 🙂 I’m glad you enjoyed the videos. I have some Java algorithm tutorials. I plan on covering more advanced data structures and algorithms soon. Thank you for the request.
Hello Derek, I really want to learn how to program the server side of Android apps, and since you are not going to cover networking for now, I decided I’ll learn this by myself. My objective is to develop more complex apps that use the web and databases, build web services, and so on… But I don’t know what I should learn or where to start! Should I learn PHP and MySQL? Or is it JSP that I should learn?
By the way, I’m a HUGE fan and very excited for the Android game series. Thanks for everything man, your videos are making me so much smarter!
I personally almost always do everything server side using php. It is the least expensive option. I plan on covering all of that stuff very soon.
But isn’t PHP only for websites since its code is embedded inside the HTML? Does it work with Android apps or other programs as well?
An Android app can communicate with a web server just like a browser. If you have a database for example on a web server it would be common to have a php program stand between the Android app and the database.
Dear Derek,
I really enjoyed all of your videos. These are some of the most informative and clear videos I have seen in a long time. Anyway, today I would like to ask if you will be continuing the Objective-C tutorials? I would really appreciate more videos on Objective-C since it is a very popular language at the moment.
-Thank You
Hi Matt
Thank you 🙂 I’m glad you enjoyed the videos. I’ll be covering Objective-C and iOS as soon as I finish up Android. Thank you for the request.
I’ve just discovered your YouTube channel and I can’t wait to start watching your tutorials! How often do you post new videos? Twice a week? Thank you and merry Christmas
Thank you. I’m glad you like them. I post whenever I have free time. Sometimes I’ll post 5 in a week and other times 1. Sorry but it is hard for me to follow a schedule. Merry Christmas 🙂
hi
I want to know if you will make some videos on image processing with C++.
thanks
Hi, I plan on covering C++ very soon. What exactly do you want to see in regards to image processing?
Things like object detection, changing an object’s center, filters, thresholding and pattern recognition.
Do you have any steps in mind for building a brand/online company, blogging, etc.?
I have a ton of marketing tutorials on this site : Marketing Tutorials, Vlogging Tutorials, and SEO tips.
I’m a big believer in finding information that you love studying and then providing that information through a blog or vlog. I personally still see many more opportunities through YouTube than from a traditional site.
In WordPress/Facebook, is it possible to have other users manage the account at a lower level than the original admin?
Yes, but when you give people even a minor amount of power it opens you up to potential security issues. For example if I just allowed people to post comments without verifying them my site would be successfully attacked in minutes.
Derek, you have a most lucid way of explaining complicated topics. You’re truly gifted.
I’m currently learning Java and watching your Java algorithms videos. I have a question which might seem a bit academic, but it bothers me. I’m coming from a PHP background which gives us a bunch of pre-built sorting functions that we can use easily without knowing what’s going on under the hood.
My question is, why didn’t the developers of Java make these types of pre-built functions for us the same way PHP has and save us a lot of time coding? I imagine that even a very experienced developer would benefit if, for instance, he didn’t have to code a merge sort manually.
Thank you for your amazing tutorials.
Hi Noam,
Thank you for the nice compliment. I do my best to constantly improve the videos.
As per your question Java does have all of the sort functions just like php. In my regular Java tutorial I cover those.
So if Java already has them, the logical next question is why do we learn algorithms? The answer is mainly to train our brains as programmers to be able to solve coding problems that Java doesn’t have a prebuilt solution for.
I hope that makes sense and sorry if I caused any confusion.
Derek
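For anyone wondering, the built-in sorts mentioned above live in java.util; a quick, minimal sketch (the class name is made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    public static void main(String[] args) {
        // Arrays.sort handles primitive and object arrays
        int[] nums = {5, 1, 4, 2};
        Arrays.sort(nums);
        System.out.println(Arrays.toString(nums));   // prints [1, 2, 4, 5]

        // Collections.sort handles Lists; a Comparator customizes the order
        List<String> names = new ArrayList<>(Arrays.asList("Carol", "Alice", "Bob"));
        Collections.sort(names);                     // natural (alphabetical) order
        names.sort(Comparator.reverseOrder());       // descending order (Java 8+)
        System.out.println(names);                   // prints [Carol, Bob, Alice]
    }
}
```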
Hey Derek, I just stumbled upon your tutorials on design patterns and loved them. I just have one question, and this question goes for all famous tutorial makers like TheNewBoston, KhanAcademy, you, etc. It seems you guys cover a wide array of topics. And then there are tutorials from guys who specialize in just, let’s say… Java or Photoshop. Would watching your tutorials be more beneficial vs. watching them from a person who only makes Java tutorials? I know it depends, but my question is how much depth and quality do you try to put into each topic. Is your knowledge any less than someone who only works on Java? Would I be wasting my time watching your tutorials when I could be watching one of theirs? I would appreciate it if you could answer these. Thanks.
First thank you for comparing me to New Boston and the Khan Academy. They are tutorial royalty and I’m but a peasant 🙂
As per your question, I think you should look at all the tutorial options available to you. It may seem like I cover many topics, but really everything is basically about Java, web development languages and tools that I use all of the time to make websites and such. In the real world I make websites using all of these tools on a daily basis. I also make Android apps. Actually I create custom electronics and I don’t even have any electronics tutorials yet.
I don’t think I’m some great expert, but I do use all of these tools on a daily basis. YouTube is just a hobby for me. I’d say my tutorials are for people that want to learn as much as possible in the least amount of time possible. They definitely aren’t for everyone, but I like to make original tutorials, so fast tutorials are what I make.
Try them all out and eventually you’ll find the best teacher for you. I wish you all the best on your journey 🙂
Derek
Are you kidding me by comparing Derek with TheNewBoston????? Derek’s channel is waaayyyyyyyyyyyyyyyyyyy better than TheNewBoston’s. Derek’s tutorials are the ones with the best quality I’ve ever seen, and he answers every single comment or question (which, for me, is one of the most wonderful things about Derek). Derek, thank you very much for sharing your knowledge with us, we are all really grateful! You’re the man!
Thank you very much 🙂 it is very kind of you to say that.
Thank you as always Derek and have a great 2014!
I have a question. You seem to be a big praiser of WordPress and all its functionality. Is there much advantage to being really strong in HTML/CSS/Javascript or should people starting in Web Design nowadays just get stuck into everything about WordPress sites?
Thanks!
I wish you a wonderful 2014 as well 🙂
Yes you definitely need to learn HTML, CSS and JavaScript even if you want to design WordPress sites. I personally use WordPress for security and for plugins. It just stands between your regular website designed with html and css and a database. You can do most anything with a WordPress back end that you can do with a regular site. Most people don’t know that. I have most every WordPress tutorial I ever did on this page. I hope you find them useful.
Peace be upon you
I am glad to see your YouTube Android development tutorial videos. There are 41 videos. I watched many of them and implemented them in my Android practice. I need your help. Please help me.
I have many PDF books, and I want to convert these PDF books into an Android app. I have a little knowledge of Android. Please help me and give me example code for how to create a book app. I am not a software developer and I don’t have knowledge of Java, but I am confident I can make the app.
For example, here is one app link. I need one just like this.
If you help me it will be very useful to me. Please help me, brother. I have more PDF books and I want to make an Android app. Please give me example res, layout, strings, manifest.xml and MainActivity.java files. Please, please. I hope for your reply.
thanks …
Thank you Sulthan 🙂 Peace be upon you as well. I will be making a new Android tutorial for non-programmers. I will start it in January. You’ll be able to easily make most any Android app including the one you mentioned with the tools I cover then. I’ll get it going as soon as possible.
That would be great because I’m not an app developer, but I would love to make the website I’m working on into an app. I hope you make a video on that. And when is the New Think Tank app coming out? It’s about to be 2014 man, hurry up lol
I actually plan to start making Android apps to go along with each video series. It will provide practice exercises that people have been requesting. I hope to start doing that this year. They will be free 🙂
Hi Derek, your videos are really nice, especially your explanation skills. Everything is very clear and presented in an easy manner. I saw the WordPress theme video tutorials. They guided me a lot. However, I am looking for guidance on the next steps, like how to make a theme dynamic by integrating the markup with existing WordPress functions (wp_header, wp_footer, etc.). I would appreciate your guidance.
I’m very happy that you found the WordPress tutorials useful. I cover everything you mentioned over the course of all my WordPress tutorials. I have most everything on this one page WordPress How To. I hope it helps 🙂
Thank you so much for the reference. I am new to WordPress and trying to get the hang of it. The reference URL covered almost everything. Just one confusion: how can I create my own home page? For example, I have my own custom structure (markup) and I want to use it as the index page on first load, and then clicking another link (i.e. portfolio or services etc.) should redirect to index.php (the WordPress file). Is it possible? Please suggest.
Thank you I’m glad it helped. A WordPress site normally has a consistent sidebar and header. If you want pages to have different headers for example you could import chain home based on page I guess. I’m not sure what your design is though.
Okay, let me give it another try. I want to use custom pages using the existing WordPress header and footer. For example, I have a middle container which holds different data; now, how can I load that middle container with a different page name?
If you want to only change the content in the middle of the page WordPress does that on its own. I may not be understanding the question. Do you have an example you can refer me to?
Hey bro, what is a good hosting account that you recommend? And is it a bad thing to have your domain name and hosting account signed up with one company, for example GoDaddy? Because I watched a video and the guy said they will own you.
I have worked with numerous hosting companies and the only ones I had problems with in regards to transferring domains were the small ones. I personally use Go Daddy because they have never tried to force me to buy a dedicated server even now since I get a lot of traffic.
I won’t say anything bad about other hosting companies, but many years ago a few popular sites had very serious security issues because they weren’t properly policing what was going on with their servers. I think that has been corrected, but I’m not sure.
Hi,
I wanted to ask, what are the best settings to record tutorial videos for YouTube using Camtasia, if you don’t mind telling? I tried to make a video but the text appears too small or too fuzzy on YouTube. Thanks.
Yes I had that problem as well. Open Camtasia and click preferences.
1. In the Canvas tab make sure you have Scale by percentage set to 100%.
2. In Recording tab set Screen Frame Rate to 30 fps
3. When exporting click Share -> Advanced Export and pick 1080p
I hope that helps
Derek
Hello Dear Coach,
Just want to ask: HOW DO YOU MANAGE YOUR TIME to do so much?
Do you plan on making tuts for Hibernate, Struts 2, Git, Maven?
Thanks and God Bless
123japanuser
Hello 🙂 it is nice to hear from you. I just don’t sleep that much. I also don’t have any other hobbies. Since I have been making videos for so long it is also pretty easy for me. I’ll cover j2ee tips eventually but I want to finish covering android first. I will cover it while covering android if that is possible without hurting the Android ones.
Hi there, I am enjoying the design pattern and code refactoring videos very much, and thank you for all the videos. I hope you are still going to do the J2EE videos; the last time you mentioned them you said soon. May I know how soon and exactly when? If not, any suggestions for a good website?
I’m not sure how soon I’ll get to J2EE because I have to cover a lot of Android stuff first. I think Java Brains on YouTube is supposed to be the J2EE expert from what I’ve heard. I’m sorry it is taking so long, but I don’t like to end tutorials unless I feel I’ve covered everything.
Hi Derek,
Happy New Year 2014 wishes for you and your Family! Keep up the good work. Recently I’ve fallen behind with your tutorials but I promise to make up for the lost time 🙂 I’m struggling with C# in Visual Studio now but I’m hoping to get back to some of your lessons soon. Good luck and take care.
Happy new year 🙂 Thank you for stopping by. I doubt there are many people that watch all my videos. I’m just happy if they help those that do watch them.
Hi,
Would you know what is the best way or tactic for the following senario.
I want to make YouTube videos and am using screen capturing software, but I am using my mobile to record sound. Do you know the best way to sync up the sound and video? Is there a way to know when to sync them together so the video and sound go together?
Happy new year
You can do that, but it is very messy if you plan on editing the videos, or sound in any way. If you don’t need to edit though you just import the sound and then drag it over the video and it syncs. I get the timing down by clapping at the beginning of the video. Then I match up the clapping with the video and audio.
I hope that helps 🙂
Hi Derek. I’m currently learning Java and I hope to start working on Android development in the near future. I’m wondering if you have any opinion or advice regarding Android vs. iOS. If I learn Objective-C in addition to Java, is it more challenging to develop for one platform over the other? Do you foresee any trends in employment opportunities that might place more value on an Android developer vs. an iOS developer? I appreciate your insights and advice tremendously.
Hi Noam,
I can only talk from personal experience, but I see very little work for iOS. I gave up on it altogether 2 years ago. I personally make personal apps for business owners, though. You can’t do that on iOS. iOS is easier to develop on than Android, but that is changing very quickly.
I personally love my iPad, but I don’t believe Apple is providing developers with the freedom they require to make the apps that are in demand.
Just my opinion
Derek
Hello Derek, I think your tutorials are just awesome. They are also kept up to date, which is why I like them most. I’m a junior programmer and I just finished the fundamentals of Java and other programming languages. Now I want to start data structures and algorithms in Java. Please give me a sequence of your tutorials that will help me to progress properly. Thank you.
Hello Tarango,
If you put your mouse on videos in the tool bar on my site you’ll find all the videos you are looking for. I have them all here Java Videos in the order in which I believe they should be watched.
I hope that helps and thank you for the nice compliment 🙂
Derek
Hello Sir,
Firstly, Happy New Year! Sir, I have seen your MVC tutorial and I am very thankful to you for that.
Sir, I want to ask whether I should choose Java or PHP for my future.
I have searched many times on the Internet but I haven’t found any satisfactory answer.
Please guide me, sir. I have no one in my contacts who gives answers as good as yours!
Hello,
You’re very welcome 🙂 I’m glad you enjoyed the tutorial.
As per your question I’d say it depends on what you want to do. Either way you’ll need a bit of PHP. If you want to focus on Android apps then go the java route.
If you want to develop on the internet then go the PHP route because a PHP site is much less expensive to run.
I hope that helps
Derek
Sir
Thanks for replying! I am not clear on what I should focus on for my future.
I am currently in the 3rd year of engineering and want to choose a stream.
Please suggest what I should give more focus to.
I also have an interest in OOA&D. I have downloaded your tutorial on it.
But as a programmer, which language, Java or PHP, should I focus on more to get a better job, as I have only 1 year until my professional life?
Hi,
If you’ll be working on the web, PHP; and if you’ll be developing for Android, Java. You could go the regular HTML route if you want to develop cross platform apps as well.
Hey, are you gonna make a Java tutorial about how to make a game engine?
I’ll be covering Android games first. I’m not sure yet what I’ll do for the desktop.
I have tried 3 Apple headphone/microphone headsets like you use, but my sound seems to be so low in all the videos. What are the microphone settings you use on your Mac? And any suggestions on how to get decent sound quality for the videos? Thanks for helping me and answering all these questions, much appreciated.
The new Mac mic headphones aren’t very good. I eventually switched to a Plantronics GameCom 780 headset that I got cheap on eBay. It is working very well for me.
Hi Derek,
Your site is really interesting! I’m planning to come back for more, not just for the programming part but for the other areas of knowledge as well. Do you know of resources to get started with HFT using the Java stack? Sorry, C++ is too much for me right now. Any suggestions for how a regular Java programmer can find some Wall Street programming jobs? Thanks!
Hi Jay,
Sorry, but I don’t think it is possible to compete with the big firms in HFT. They are pretty much front running from what I have seen. I know a girl that works in that field, and when I asked if what they are doing is basically legal front running she said yes, pretty much. That is a wild world.
Best of luck
Derek
How do you place an image div inside of a div to overlap it in HTML/CSS?
You could use position:absolute and then manipulate the z-index.
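A minimal sketch of that technique (the class names are made up): the parent gets position:relative so it becomes the positioning context, and the overlapping image is absolutely positioned inside it:

```css
.card {
    position: relative;     /* positioning context for absolute children */
}

.card img.overlay {
    position: absolute;     /* taken out of normal flow */
    top: 10px;
    left: 10px;
    z-index: 2;             /* higher value paints on top of siblings */
}
```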
I want to get into iPhone programming and am new to programming. I know that I have to learn Objective-C, Cocoa, etc. to develop iPhone apps, but I have been reading on the net that I should start learning C or Python first. Would you recommend learning C or Python first before going on to Objective-C etc.?
Thanks
If you don’t know how to program it is best to learn the basics using a language like Python, but not C in my opinion. You may want to give iPhone development a try first though, since you are motivated to learn it. It is pretty easy to develop iPhone apps. If you struggle and begin to lose motivation then try Python. Python is a very fun and powerful language.
Hi Derek,
I have been following your videos on Java tutorials, Java algorithms, design patterns and OOAD for a few days now. I have big job interviews for SDE-1 positions lined up in the coming week. Which videos would you recommend as a “must go through” for review purposes?
The only other thing would be refactoring. There are a bunch of great programming interview books out there that show common questions and answers. The big corps change them up constantly though. Often they ask you to make a rather simple program like how to make a url shortener and then ask a bunch of questions along the way. I wish you the best 🙂
Looking forward to videos on Backbone.js
I’ll do my best 🙂
I have to do an application for managing a workshop (repairing diesel pumps and injectors).
How can I do it with Java?
Like Swing…
It’s my final year project 🙂
I need more information on what you are trying to make.
Hi Derek,
I have studied the basic syntax of C++ so far, as I am in my second semester. I want to know the sequence in which to study your tutorials on Object Oriented Design, Design Patterns, UML, Refactoring and/or other tutorials that fall into this category.
I made them in this order: Design Patterns, UML, Object Oriented Design, Refactoring. They don’t need to go in that order, but if you watch them in that order they may make more sense 🙂 I hope they help
That’s awesome that you went to school for electrical engineering! Is there any way you could make videos on ARM assembly language?
By the way, your videos on YouTube are a crazy gift! You have donated so much of your time; these series have helped out tons of people!
I was also curious, because you know so much about oscillators and stuff, if you knew how to make funky fresh bass from sine waves. I imagine you do….
Thanks for the videos.
Thank you 🙂 I very much enjoy making the videos. I would love to make electronics tutorials and will start doing that some time this year. I plan on starting at the bare bones and working up towards making most anything. That would be more fun for me than most anything. It is coming and I can’t wait 🙂
Thank you very much Derek for making these awesome tutorials. I am 12 years old, and I would never even have looked into programming if it wasn’t for your awesome videos. I am currently looking at your Java and Android development tutorials and they are very interesting.
public class ThankYou
{
    public static void main(String[] args)
    {
        System.out.println("Thank You");
    }
}
That is very cool! You must be very intelligent. I’m honored that you would use my videos to learn from. Always feel free to ask questions 🙂
Hello Derek!!! I’m new to Android development and your videos helped me a lot 🙂 Thanks a ton for these videos. My question is, can you cover some of the networking stuff, like detecting devices on the same wifi? I’m not finding any kind of tutorials on networking.
Thank you 🙂 Yes I will definitely get back into regular Android Java programming after I finish with App Inventor. I’ll make sure I cover wifi, bluetooth, nfc, etc.
Ohh thats great!!! waiting for your videos…
Hi Derek,
I am in the process of setting up a WordPress site. I wanted to ask what plugins you use to avoid spam comments, and also, in your opinion, what is the best way to keep a WordPress site secure?
Akismet is pretty good about spam over time. For security I use the WordPress Firewall and the Sucuri Security plugin and services. My security is probably a bit too strict because I blacklist anyone if they do anything weird, but in this world I was forced to do that because of the constant attacks.
Hi Derek,
Firstly, I would like to thank you for sharing your knowledge with such high standard videos. YouTube has become a great place to study and learn a variety of subjects, but there are very few people presenting quality videos like yours.
Question for you: do you have any plans to make tutorials on virtualization topics, like VMware and Hyper-V?
Hi Golthier,
Thank you for the nice compliment 🙂 I mainly do tutorials on topics that I know very well. I don’t plan on covering VMware because I’m not familiar with it.
Over the course of this year I will slowly start making both video tutorials and free Android apps to go along with them. My hope is that those apps will help people better study the information. That is the big plan. I hope it goes well.
Thank you
Derek
I’m working on an independent film. What strategies would you use to get it to the public, or how would you market it online and drive traffic towards it? Any advice, brother?
I don’t think there is an easy way to market online anymore. There are tricks that will get you attention in the short run, but they will eventually get caught by Google. I have always been a big believer in building an audience slowly. That requires a lot more time, but those communities will also stick around. I have always preferred to have many videos with 10,000 views over one video with 1 million views.
Hi,
I’m loving your new tutorials and have learned so much from you. I wanted to ask: last year you did a competition with Samsung. How did that happen? Did you contact them to arrange a competition, or did they contact you, if you don’t mind sharing?
All the best.
Thank you 🙂 They contacted me. My Android tutorials have become pretty popular and so I guess they thought it would be a way to help draw attention to their coding competition. They were nice enough to give me free products to give away to you guys so it was very fun.
Hello sir!
First of all, I just want to say thanks because I cannot express my feelings for you in words. This is a good institute for me to learn from. Sometimes I listen to you instead of going to university because I learn more here than at uni.
I want you to point out the object oriented tutorials among your Java programming tutorials.
And are you going to start tutorials on C++? Thanks a lot
Thank you very much Gulzar 🙂 I’m happy that I have been able to help. Here is my Object Oriented Design tutorial. I hope it is useful. Yes I plan on covering C++. I just want to move a little further along with the App Inventor tutorials first.
How do you add a video slider like the one on this website>>>>>>>
Check this tutorial out.
Please give me the complete running code for a paint application in Java.
Here is all the code for my Java paint application.
I just started checking out your site and it feels awesome.
Thank you for such a great experience… I just need to know: is there any lecture video that explains how to do a keyword search on XML documents using Java? Please reply.
Thank you 🙂 I have a couple videos on parsing xml with Java, but this is the most popular Read and Write XML with Java. I hope it helps.
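For readers with the same question, here is a rough sketch of a keyword search over one XML document using the standard javax.xml XPath API (the sample XML and the keyword are made up):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class XmlKeywordSearch {
    public static void main(String[] args) throws Exception {
        String xml = "<books>"
                   + "<book><title>Java Basics</title></book>"
                   + "<book><title>PHP Basics</title></book>"
                   + "</books>";

        // Parse the XML into a DOM Document
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        // Find every <title> whose text contains the keyword
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList hits = (NodeList) xpath.evaluate(
                "//title[contains(text(), 'Java')]", doc, XPathConstants.NODESET);

        for (int i = 0; i < hits.getLength(); i++) {
            System.out.println(hits.item(i).getTextContent()); // prints Java Basics
        }
    }
}
```

Running the same search over many files would just mean looping (or threading) over this parse-and-evaluate step per file.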
Sir, the videos are extremely helpful… JDOM2 and XPath are both really good… Sir, I just want to know how to search more than one XML file at a time, because I am working on this project: “returning clustered results for a keyword search on XML documents”.
Please reply.
Thank you 🙂 You could search in the same way maybe using different threads for each xml page? If you are doing a ton of bulk searching like with a web spider I think I’d use PHP though.
@Derek Banas
Sir, you have already given this society a lot; no one can ask for more, but I just want to make a request. I am working on my final year project, “RETURNING CLUSTERED RESULTS FOR KEYWORD SEARCH ON XML DOCUMENTS”. I am watching your tutorials and they are really helpful, but I am not able to start my project work.
So please, if you have a little while, reply to me with steps for how to start it, because I do not have proper guidance.
Thank you 🙂 Sorry, but I don’t think I can cover that topic as soon as you need it. Here is an interesting article on clustering XML that should help. I hope that helps.
Hi Derek!!!
My question is
How do I stop or uninstall a process when clicking that particular process’s button?
I’m developing an Android device management app through which I can uninstall or force stop processes. I’m getting the list of running processes with their PIDs, and I also know the code to stop and uninstall processes, but can you guide me on how to stop a process based on its PID, and uninstall a process (app) based on the “package:com.example.appname” format, when I click that particular process’s button? If you want I can share that code as well.
Thanks in advance 🙂
This will kill a process android.os.Process.killProcess
How to uninstall
I hope that helps 🙂
Hi Derek!
Currently I have done something like that only but I want the package name should be dynamically go into that code it can be using some variable.So it can uninstall or force stop any random third party app. Is it possible ?? because I have tried a lot using different ways of doing it. May be you can help giving me better solution.
thanks for helping!!
Hi Derek. Thank you for everything you’re doing. You really have some very interesting stuff on here.
I really enjoyed you psychology videos and was wondering if you are planning on making any more of them.
You’re very welcome 🙂 I have been toying with the idea of making more psychology videos. I have been thinking about a hypnotic storytelling video for a long time. I’d go into how propaganda works, but I think that might turn political, which I have always tried to avoid.
Hi Derik i am new to programming not new new but i m two years now in the field and yeah i can program basic programs.Here is what eats me up, when now developing a system what are the basic things you start with. i found myself just starring on my screen not knowing what to do next.eg lets say its a Chat program in java with the Client side and Server side where do you real start from. Please help.
Hi Evans,
What you need is UML more then likely. Another thing that should help is an understanding of Object Oriented Design. For the OOD tutorial watch the first 2 videos and even if you don’t understand all of the code you will learn a lot about the thought process. Feel free to leave more questions.
ok thanks so much I am doing as such and if I face any challenges I will always ask. Is it possible for you to email me your personal contact details eg Skype details , mobile number? if its not its still fine. its just that I find you as an inspiration. I am from Zimbabwe
Always feel free to ask and I’ll help if I can. Sorry I can’t talk to people via mobile though. You wouldn’t find me that interesting I promise 🙂
ok that’s not a problem let me go through the tutorials you suggested so that I can be a good developer.
Hi Derek I have a small favour I would like you to help me with though I am kind of shy to say it out but will jus do anyway since I need help. I am a computer science student in my 3rd year now (which is internship )and next year (2015) will go back to college for my fourth year in which we do a final year project which has a weight contributing much to the degree class you look for a topic of your own propose it and if accepted you program it since they want a working prototype however I am lacking or I am not creative enough and have read from areas ranging from BIG DATA , BI(business Intelligence) , Internet of things among other areas but still cant come up with a feasible idea on what to do.if you come up with a topic we are required to then work on it using any language of your choice and I was asking you if you can personally help with ideas . I feel very embarrassed to ask but it doesn’t kill asking.Let me know maybe you might be having a bunch of crazy ideas.
Hi, Well it really depends on what interests you about CS. I’m not sure if the project must lean more towards research, or towards a commercial application. I personally created a virtual reality system for my senior project, but that was many years ago in 1996. Give me a little more information on the goal of the project and I’ll do my best to help 🙂
Hi, Derek.
Are you planning to make some video tutorials about ‘scala’? I think that would be great!^^
Hi Mike, I’ll try to fit in a learn scala in 30 minutes video
Hi Derek,
Nice videos of design patterns. You are saving so much time of us of reading & understanding these kind of topics. Thank you so much for that. I would like to know if you are going to cover SQL & NoSQL databases, reason to use, how do these work & some examples of these type of databases ?
May be a silly question, but do you use any database for your website ? If yes, which one & how ?
Hi,
I have a bunch of SQL tutorials. I use MySQL exclusively for everything. I have used other databases and I see no reason to use anything except for MySQL mainly because of price.
Hello Derek,
Do you have a tutorial for login authentication for android and connecting using mysql instead of sqlite?
thank you in advance.
I’m going to make one very soon. I just need to finish my web services tutorial first.
Okay thank you, I’m looking forward to it.
Do you have an sqlite tutorial for it? If none, well thank you again.
Here is my SQLite tutorial
Oh sorry i mean an sqlite tutorial for the login authentication.
You couldn’t really do that because the SQLite database on the device would need constant updates from a MySQL server you control. You could call for it to be updated, but a direct route to your MySQL database would be much quicker.
Hello Derek,
Greetings of the Day, I am one of the follower of your android tutorial. We are developing an android app and simultaneously looking for this app would be platform compatible. I came to know that Phonegap will do this, so can you make a video that describes making apps using android studio and Phonegap. Waiting for your reply eagerly.
Hats off for your effort of making videos on various technologies.
Thanks,
Phani Kumar.
Hi Phani, I’ll try to do my best in regards to a PhoneGap tutorial. I’m also planning cross platform apps using html5
Hello Derek,
do you have a tutorial or can you have tutorial about a search-engine like in android, Thank you.
Hello, Sorry I’ve never made a tutorial about building a search engine. I can’t imagine anything that large would do very well without thousands of servers.
I think I have a wrong word but it is like when searching a data in listview from the data base like a search filter something.
I’m not very well versed in search engine techniques. I’ll look into what goes into it.
Hi Derek,
First of all I would like to thank you for the time and your passion towards educating others through your channel. Honestly this is the first ever youtube channel which I have subscribed to. you are really really amazing. I have learnt a lot of stuff from java to android development which helped me to set my career path.
request:
I would like to request you to create a tutorial series in jee as for now , there is book, no good tutorial to start with to learn jee. I hope other will njjoy as well.
Thank you for the nice compliments 🙂 I try to do my best. I plan on covering Java enterprise after I finish with my Android tutorials. I’ll do my best to get into it soon. Sorry about the wait.
Hello, english is not my native language: that’s why I think it’s harder for me to understand rare words in programming. I’m thinking: is it possible to create android app with app inventor, which could speak caller’s, SMS writer’s ID-name from contacts. Also speak SMS message. And to repeat this for a determined times with also determined pauses. It seems that text to speech is not that loud: maybe it’s possible to change that?
Thank you.
Hello, Near the end of this tutorial I talk about a tool called AI Live Complete. It allows you to edit the sound like you want. I also show how to transition to Java Android which will also allow that.
Hi, Derek , I am a happy learner of your Android video tutorials , I am a life science graduate turned web developer . interested to learn mobile technologies . i would like to know how effectively we can make a HTML5 mobile apps . i haven’t tried it so far but i posses a stereotype that cross platform apps are dumb and slow . do you have any suggestions .
I have been in to development for 2 years . before that i have no knowledge about any of the programming languages .
I wouldn’t say html apps are dumb, but they don’t take advantage of the large majority of what iOS and Android offer. I have only ever made simple html apps. I’ll try to upload a html5 tutorial next week to help you along.
Hi Derek thank you for the tutorials for Python/RegExpressions. I watched many of your tutorials.
I need to extract certain lines of strings from a huge text file and save it to a separate file. I couldn’t do it. I would be grateful if you code refer any tutorial or code you have done.
Thanks
In this tutorial I cover how to grab most anything using Regex.
Hey Banas, I wud like to be an android application developer, my question is where do I start considering am 26 and don’t have any experience in programming…
Hey Paul, Look at my Android for beginners tutorial. By the end of it you’ll be able to make a large number of Android apps.
I am new to ANDROID development When ever i create a new project, the Fragment_main.xml file is added to my Layout folder and unlike in Eclipse it is this file that contains what is normally in the Activity_Main.xml file.Why is the Fragment_main.xml file always added to my projects in Eclipse and how is it different from the “regular” Activity_main.xml file? As I follow your tutorials which of the two between Activity_main.xml and Fragment_main.xml should I be editing and inserting my layout code. since in most tutorials I see there is only main.xml. I am using ECLIPSE KEPLER.
I show an easy way to fix that here Android fragment fix.
thanks so much you are a life saver.
I’m glad I could help 🙂
First of all, I would like to thank you so much for all the amazing tutorial!
I have a couple of questions:
a) Is is possible to install Android Studio and the SDK in Win 8.1?
I tried but I still get errors, I want to know if I should keep trying or am I trying an impossible quest?
b) It’s funny the way i found yout site and Youtube channel was via another Youtube channel that is uploading copies your videos. () is this you or at least with your consent? If not I guess you should at least know about this.
Again sir,
Thanks for all the Great work… Please keep it up
Johnny
Yes it is possible. I think the issue may be with the version of Java used. Use Java 7 instead of 8. I don’t really mind if people copy my videos as long as they don’t post copyright claims against me. Thank you for pointing that out though.
Hi Derek,
The new crash courses you have started is really great .
I was hoping whether you will start with node.js videos, I don’t know its importance , but seems to be a hot topic nowadays.
And a final request , please make a post regarding all the technologies(only the names) which are used nowadays in different sections of web, like bootstrap,angular.js,dart these are some which I know.
Thanks for all your effort.!!
Thank you 🙂 I’m glad you are enjoying them. I plan on making one for pretty much everything including your requests. I’ll upload them as soon as possible.
thanks for reply,
I don’t know whether you have mentioned anywhere about type of programming style like functional,imperative etc. ,I really wanted to know about their differences. Wiki page for these are really not simple to understand. I would appreciate if you come up with a short post.
Thanks in advance for everything.
This is a perfect example on the differences between functional and imperative programming.
thanks for the reply. I really do appreciate it a lot.i cant express my excitement and thank you again for replying. ok to kickstart abt what our projects are, they are commercial applications ,our curriculum now focuses on commercial applications so to be more specific I have been reading on IOT (internet of Things)and WOT(web of things) and Business Intelligence and micro controllers so I have interest in all the above fields and I do have all your android tutorials so if you can help me on how these technologies can be used or how I can use them to come up with a project topic and application I can develop using Android or Java or any web related language. another area I was reading along was Hadoop (Big Data) so yeah I think I have shed some light on a bit. I really hope you can help along those area I have highlighted above and I am following your tutorials step by step from design patterns to android to java and here in Zimbabwe I hope to make a change. I always look forward to you when I am stuck. hope to here from you and if you have other questions can ask and will clarify on the project issue
One thing you could try is to improve upon an app that you currently like. Everyone has a favorite app that they wish had a few extra capabilities. Recreate that app as a guide and then add in your improved features. A few years back I worked on a game. We decided to take a very popular game at the time named Field Runners and improve it. We came up with the idea of a reverse tower defense game that would allow users to attack others bases. We didn’t finish the game, but it looked a lot like a very popular game that came out months later named Clash of Clans! I wish we would have finished it 🙂
Derek! I was going through your Android development tutorial videos and have to say, they are wonderful. I have a question to ask : i am using adt on eclipse on Windows. Whenever i create a new Android project and select a blank activity, my src and layout folders are created blank. Even the values folder is created without a dimens.Xml file.
I’ve gone through numerous threads to find a solution for this, but haven’t got a fruitful answer. Please help me resolve this.
Thank you 🙂 The best course of action would probably be to use Android Studio. It is a lot less buggy then Eclipse at this point. All of the code is exactly the same.
Hello Derek,
I’m a huge fan of your Android tutorials. They are simply amazing. I request you to make a tutorial video on ServerSocket and Socket programming in Android. I’ve tried to do it on my own but I find it impossible to make the program running on my PC to connect to the server program running on the Android tablet(both are connected to the same network and are able to ping each other). Please, I beg you.
Hello Derek can you put up a tutorial about an online database in Android?
Putting and Retrieving data online…
I have completely no idea when it comes to online,
so I hope you can help me with that.
Thank you.
I made a tutorial on using a web service with Android. I’ll make another tutorial on interacting with a database soon as well.
Sir,I had made an android app that will check for new message in mysql database each sec and notification will appear if there is new message.It work fine in localhost but when i upload the code to a hosting server, my router will block the connection to the hosting server suddenly and after i restart my router,the app is working fine again.I use the http post method that you teach in the translation app u made to check for new message in the database. May i know that is the problem?
Are you trying to do this with an emulator? Emulators get confused some times when you try to use the internet rather then the localhost.
Hi Derek, How do you write subscripts and superscripts in strings.xml
ex: I want a natural display of x^2 where 2 should be a superscipt.
You would define that style using XSLT. This tutorial should help.
Hi Derek,
I am a Core Java developer and now want to get into the web development. Do you have any video tutorial covering these aspects (web server configuartion, web services (SOAP/REST) and development)
I have a web service tutorial using PHP and MySQL here, but I haven’t made one for Java yet.
Hey mate, firstly i am sure everyone says this but your tutorials are by far the best i have found on the internet for Android app development. So thanks for putting them up here.
Now coming to my question. I want to be able to use Beacons/bluetooth/GPS and location based services in my app. Just wanted to know if you had any tutorials created on using these hardware features? If not do you plan to put up any? Also if its on your agenda as such, can you please point me to some online resources where I can go and learn these?
Thanks in Advance.
Thank you 🙂 I did 2 videos on Google Maps starting here. I plan on making a bunch of large apps soon along with bluetooth. I’m getting close to the end of covering the basics and will make big apps soon.
Hi Derek i have a problem with JSON Service calling . I have watched your Android Studio Tutorial i have come to video 16 but so far i couldnt find the solution
I take the service URL and I enter the URL into my browser then I take the text(the result) and view it in JSON viewer than I take what I want(then parsing the JSON) , but in my problem when i enter the URL, a file is downloading and inside of this file the text exists(i know reading JSON file in text) but i am not allowed to download and read (i am not allowed to do it that way i am supposed to do it without downloading it), i asked it to many people but people are just saying me to change the Framework but is there a way to not downloading but reading inside of it ?
(Also in ıoS swift my coworker can do it with Serialization but i cant do it in Android (: )
Could you please help me ?
I’m sorry, but I don’t understand the question. What do you mean that you want to read the data without downloading it. You would have to retrieve the data to be able read it. Can you give me an example?
Hey Derek, You’re really good with Tutorials,
I’ve got to say that I’m enjoying to watch it and It’s really useful.
there is one thing that is annoying, I feel like It has been always hard for me to learn programming but I’m still giving it a try, So I’m really into jQuery right now but I jumped to that before learning Javascript, and I really like it, It’s awesome.
So I’ve learned the basic and how controlling the events and all the DOM stuff, but I want to learn how to create my own functions…
Now all the tutorials are really basic, set 2 vars with integer and sum it, it’s really not useful.. and it’s almost every tutorial of javascript function.
I wish it was something more useful like creating a really simple plugin with function.
Thank you 🙂 I have a bunch of tutorials in which I make real world things with JavaScript. If you are into JQuery I made a slider here with it. I’m actually going to make a NodeJS tutorial very soon which is like the JavaScript version of PHP. It is super awesome. You should like that as well. I hope to make it very soon. I’ll try to fit in some new JavaScript tutorials as well. | http://www.newthinktank.com/2013/12/ask-anything/?replytocom=52014 | CC-MAIN-2021-31 | refinedweb | 13,699 | 74.39 |
A CheckBox is a graphical user interface element that permits the user to make one or multiple selections from a number of options.
In this tutorial, we'll create a Switch checkbox inspired by the iPhone Graphical User Interface. Read on to find out how!
Step 1: Brief Overview
Using the Flash drawing tools we'll create a vector Switch that will be controled by classes. One class will take care of all the Switch behavior and another class will simply check the value of the Switch. Let's go!
Step 2: Starting
Open Flash and create a new Flash File (ActionScript 3).
Set the stage size to 600x300 and set the color to #EFEFF0.
We'll now create the Switch graphics.
Step 3: Border
Select the Primitive Rectangle Tool (R) and create a 280x80 px rectangle, filling it with this linear gradient: #505052, #ACADB1.
Use the Gradient Transform Tool to rotate the gradient horizontally and change the corner radius (Properties Panel) to 10.
Step 4: OFF Background
We'll draw two backgrounds for the Switch, the OFF background and the ON background.
Duplicate the previous shape and change its size to 276x76 px. Change the linear grandient to #9A9A9A, #F4F3F6 and move the last color selector (Color Panel) to halfway along the bar.
Select the Text Tool (T) and create a Static TextField. Write "OFF" and place it at the right side of the background.
I used Helvetica Neue Bold, 48 pt, #8C8C8C.
Step 5: Draggable Area
Now we'll add a button that can be dragged to modify the Switch value.
Use the Rectangle Tool to create a 120x80 px rectangle and fill it with #A19FA0, set the corner radius to 10.
Duplicate the shape and resize it to 116x76 px, fill it with #FCFCFE.
To give the final touch to the button, repeat the process and fill the shape with a #D7D7D7, #FCFCFE linear gradient. Use the Gradient Transform Tool to rotate the fill.
Step 6: ON Background
Duplicate the border and the OFF background, delete the text and change the border gradient to #0D4372, #6193D2.
Next, change the background gradient to #0C68B5, #479FF9, #6DB0F6.
Place the button border shape in the right side.
Break Apart (Cmd+B) the shapes to cut them.
Use the same Text Format to add the "ON" text to the background.
Step 7: Setting the MovieClips
Convert the Draggable Button to MovieClip and name it "area". As you can imagine this will be the area that will be dragged to change the Switch value.
Make sure the Registration point is positioned like the one in the images.
Select all shapes including the MovieClip and convert them again, name the result "slider".
Use any of the border shapes to create another MovieClip, this will be the Mask that will hide part of the graphics. Name it "msk".
Convert everything to MovieClip once again and double-click it.
Create a new Layer then cut and paste the mask clip on it. Right-click the mask layer and select the "Mask" option.
This will finish all the graphics. Now your Switch should look like this (note the Registration point):
Step 8: Linkage
Open the Library and right-click your Switch symbol. Select Properties, mark the "Export for ActionScript" box and write "Switch" as the class name.
Step 9: Switch.as
Create a new ActionScript document and save it as "Switch.as".
Step 10: Necessary Classes
Import the required classes. If you need specific help for any of these, please refer to the Flash Help (F1).
package { import fl.transitions.Tween; import fl.transitions.easing.Strong; import flash.display.Sprite; import flash.events.MouseEvent; import flash.geom.Rectangle;
Step 11: Variables
These are the variables we'll use, explained in the code commentary.
public class Switch extends Sprite { private var tween:Tween; //A Tween object for animation public var stat:Boolean = false; // This is a Public variable, it's used to know the Switch value outside this class
Step 12: Constructor Function
The Constructor function. This function adds the listeners.
public function Switch():void { slider.area.addEventListener(MouseEvent.MOUSE_DOWN, switchDrag); slider.area.addEventListener(MouseEvent.MOUSE_UP, checkPosition); }
Step 13: Drag function
This function handles the button dragging, based on its position.
private function switchDrag(e:MouseEvent):void { if (! stat) //If Switch is OFF, we can drag to the right { e.target.parent.startDrag(true, new Rectangle(0, 0, e.target.parent.parent.msk.width/1.75, 0)); } else { e.target.parent.startDrag(true, new Rectangle(e.target.parent.parent.msk.width/1.75, 0, -e.target.parent.parent.msk.width/1.75, 0)); } }
Step 14: Check Function
This code checks the position of the draggable button. Depending on its value it returns to the original position or stays in the new one.
private function checkPosition(e:MouseEvent):void { e.target.parent.stopDrag(); if (e.target.parent.x >= 140) { e.target.parent.x = 160; stat = true; } else if (!stat && e.target.parent.x < 140) { tween = new Tween(e.target.parent,"x",Strong.easeOut,e.target.parent.x,0,1,true); stat = false; } // OFF to ON if (e.target.parent.x <= 20) { e.target.parent.x = 0; stat = false; } else if (stat && e.target.parent.x > 20) { tween = new Tween(e.target.parent,"x",Strong.easeOut,e.target.parent.x,160,1,true); stat = true; } }
Step 15: Main Class
This is an example of how to use your new Switch.
Create a new ActionScript document and save it as "Main.as".
package { import Switch; //Import the class import flash.display.Sprite; import flash.events.MouseEvent; public class Main extends Sprite { public function Main():void { iSwitch.addEventListener(MouseEvent.MOUSE_UP, checkState);//iSwitch is an instance in the stage of the Switch class } private function checkState(e:MouseEvent):void { if(iSwitch.stat) { trace("Switch is ON!"); } else { trace("Switch is OFF!"); } } } }
Step 16: Document Class
Go back to the .Fla file and in the Properties Panel add "Main" in the Class field to make this the Document Class.
Conclusion
You have created a fully customizable Switch to use in your applications! Remember that you can create your own skins and add plenty more functionality to the ON and OFF states.
Thanks for reading!
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| http://code.tutsplus.com/tutorials/create-an-iphone-inspired-switch-checkbox-using-flash-and-actionscript-3-0--active-2382 | CC-MAIN-2016-30 | refinedweb | 1,051 | 69.07 |
02-08-2018 12:36 PM - edited 02-08-2018 12:39 PM
I'm on Spark 1.6.0, HBase 1.2.0 (cdh 5.7).
I can read an entire HBase table from the scala spark-shell just by doing:
user@node:~$ export SPARK_CLASSPATH=$(hbase classpath) user@node:~$ spark-shell --master local
val df = sqlContext.read.format("org.apache.hadoop.hbase.spark") .option("hbase.table","test_table") .option("hbase.columns.mapping", "rowkey STRING :key, anothercol STRING cf:anothercol") .load() df.show()
But whenever I filter the rows to retrieve based on a string, such as by doing:
df.where("rowkey >= \"1-2018\"").show()
I get java.lang.NoClassDefFoundError: scala/collection/immutable/StringOps
This actually works fine with a numeric type, for instance, or if I cache the entire table upon loading it. Also notice that if I simply do
import scala.collection.immutable.StringOps
the import runs fine. I suspect that because the hbase-spark library is part of HBase, this is just HBase not being able to find a scala-library.jar.
Is there any way I can make this class available to hbase-spark?
Cheers!
03-17-2018 02:42 AM
06-13-2018 05:38 AM
I am also facing the exact issue. I tried master as yarn too but still the issue persists. Were you able to resolve this? If so please let me know the solution
06-13-2018 07:10 AM | https://community.cloudera.com/t5/Storage-Random-Access-HDFS/How-to-make-HBase-find-scala-library/m-p/65516 | CC-MAIN-2019-26 | refinedweb | 238 | 60.01 |
How to wait for a request for service (RQS)
** Note. Cross-posting on the LabVIEW forums:
I'm trying to write a simple C # (.NET 4.0) program to control a Keithley 2400 SMU via VISA GPIB and I'm having trouble getting the program to wait for a service request that Keithley sends at the end of a sweep.
The swing is a simple linear voltage sweep, controlled internally by the Keithley device. I have a device configured to send a ServiceRequest at the end of a sweep or when a match is reached.
I can send commands to the SMU and read the data buffer, but only if I manually enter a timeout between the sweep start command and the data read command.
One problem I am running into is that I am fairly new to C # - I am using this project (porting parts of my LV code) to find out.
Here's what I've used so far for my C # code:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading; using NationalInstruments.VisaNS; private void OnServiceRequest(object sender, MessageBasedSessionEventArgs e) { Console.WriteLine("Service Request Received!"); } // The following code is in a class method, but public double[,] RunSweep() { // Create the session and message-based session MessageBasedSession mbSession = null; Session mySession = null; string responseString = null; // open the address Console.WriteLine("Sending Commands to Instrument"); instrAddr = "GPIB0::25::INSTR"; mySession = ResourceManager.GetLocalManager().Open(instrAddr); // Cast to message-based session mbSession = (MessageBasedSession)mySession; // Here where things get iffy for me... Enabling the event and whatnot mbSession.ServiceRequest += new MessageBasedSessionEventHandler(OnServiceRequest); MessageBasedSessionEventType srq = MessageBasedSessionEventType.ServiceRequest; mbSession.EnableEvent(srq, EventMechanism.Handler); // Start the sweep (SMU was set up earlier) Console.WriteLine("Starting Sweep"); mbSession.Write(":OUTP ON;:INIT"); int timeout = 10000; // milliseconds // Thread.Sleep(10000); // using this line works fine, but it means the test always takes 10s even if compliance is hit early // This raises error saying that the event is not enabled. mbSession.WaitOnEvent(srq, timeout); // Turn off the SMU. Console.WriteLine("I hope the sweep is done, cause I'm tired of waiting"); mbSession.Write(":OUTP OFF;:TRAC:FEED:CONT NEV"); // Get the data string data = mbSession.Query(":TRAC:DATA?"); // Close session mbSession.Dispose(); // For now, create a dummy array, 3x3, to return. The array after is the starting value. double[,] dummyArray = new double[3, 3] {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}; return dummyArray; }
All of the above should mimic this LabVIEW code:
So, any ideas on where I am going wrong?
Thank,
Edit:
After a little twisting, I found that the service request feature
OnServiceRequest
actually triggered at the right time (“Service request received!” Is printed on the console).
source to share
It turns out that I need to include the event as a Queue and not a handler. This line:
mbSession.EnableEvent(srq, EventMechanism.Handler);
In fact, it should be:
mbSession.EnableEvent(srq, EventMechanism.Queue);
Source: Documentation in the Notes section. Was a pain to find documents on it ... NI needs to be relieved: - (.
With this change, I also don't need to create
MessageBasedSessionEventHandler
.
The last working code looks like this:
rm = ResourceManager.GetLocalManager().Open("GPIB0::25::INSTR"); MessageBasedSession mbSession = (MessageBasedSession)rm; MessageBasedSessionEventType srq = MessageBasedSessionEventType.ServiceRequest; mbSession.EnableEvent(srq, EventMechanism.Queue); // Note QUEUE, not HANDLER int timeout = 10000; // Start the sweep mbSession.Write(":OUTP ON;:INIT"); // This waits for the Service Request mbSession.WaitOnEvent(srq, timeout); // After the Service Request, turn off the SMUs and get the data mbSession.Write(":OUTP OFF;:TRAC:FEED:CONT NEV"); string data = mbSession.Query(":TRAC:DATA?"); mbSession.Dispose();
source to share
What you are doing looks right to me, so it is possible that there is a problem with the NI library.
The only thing I can think of is to wait for "all events", not just "ServiceRequest". eg:
mbSession.WaitOnEvent(MessageBasedSessionEventType.AllEnabledEvents, timeout);
Note: it doesn't seem like you can "enable" all events (so don't change this part).
I also searched for some examples of other people doing Keithley sweeps and I found this and this (Matlab ex). As I suspected in both cases, they don't use events to determine when the sweep is complete, but rather a "while loop that continues polling Keithley" (the first link actually uses streams, but it's the same idea). This makes me think that this is perhaps the best choice. Therefore, you can simply do this:
int timeout = 10000; int cycleWait = 1000; for (int i = 0; i < timeout / cycleWait; i++) { try { string data = mbSession.Query(":TRAC:DATA?"); break; } catch { Thread.Sleep(cycleWait); } }
(You may also need to check if the data is null, but there must be some way to know when the sweep is complete).
source to share | https://daily-blog.netlify.app/questions/2216528/index.html | CC-MAIN-2021-49 | refinedweb | 789 | 50.23 |
Updated: October 2008
You: File Name, Build Action, Custom Tool, and Custom Tool Namespace.
The
Build Action, Custom Tool, and Custom Tool Namespace properties are provided for advanced scenarios. The default values are typically sufficient and do not have to be changed.
You can rename a file by clicking the File Name property in the Properties window and typing in the new name. Notice that if you change the file's name, Visual Studio will automatically rename any .vb or .resx files that are associated with it.
The Build Action property indicates what Visual Studio does with a file when a build is executed. Build Action
Build Action property is extensible. As a result, you may see additional options listed for this property that have been added by other products and features.
The default value for Build Action depends on the extension of the file that you add to the solution. For example, if you add a Visual Basic project to Solution Explorer, the default value for Build Action is Compile. This is. Therefore, by way of the strongly-typed class auto-generated for the .resx file. Therefore, you should not change this setting to Embedded Resource, because doing this would include the image two times in the assembly.
For more information about how to access resource files (compiled from .resx files) at run time, see ResourceManager Class. For more information about only. In rare circumstances, you might have to change the value of this property. The value of this property must be either blank or one of the built-in custom tools.
To set or change the custom tool, click the Custom Tool property in the Properties window and type the name of a custom tool.
If you have a custom tool assigned to your project, the Custom Tool Namespace property enables you to specify the namespace you want to assign to code generated by the custom tool. When you specify a value for the Custom Tool Namespace property, code generated by the tool is put in the specified namespace. If the property is empty, generated code is put in the default namespace for the folder in which the converted file is located. For Visual Basic, this is the project's root namespace, and for Visual C# this corresponds to the setting of the DefaultNamespace property for the folder.
Date
History
Reason
October 2008
Added note about the extensibility of the Build Action property.
Customer feedback. | http://msdn.microsoft.com/en-us/library/0c6xyb66.aspx | crawl-002 | refinedweb | 407 | 62.88 |
Details
Description
CommandFormat currently takes an array and offset for parsing and returns a list of arguments. It'd be much more convenient to have it process a list too. It would also be nice to differentiate between too few and too many args instead of the generic "Illegal number of arguments". Finally, CommandFormat is completely devoid of tests.
Issue Links
- is part of
HADOOP-7176 Redesign FsShell
- Resolved
Activity
- All
- Work Log
- History
- Activity
- Transitions
- Please use junit 4 (i.e. org.junit.Test and other classes org.junit.* instead of junit.framework.TestCase)
- All public classes and methods (except tests) must have javadoc.
- How about passing minPar/maxPar and psize to NotEnoughArgumentsException/TooManyArgumentsException and then shows the numbers in the error messages?
- Minor: how about passing pos to parse(List<String> args), so that we could just return parse(Arrays.asList(args), pos) in parse(String[] args, int pos)?
Converted to junit 4, added what I think are the missing javadocs
, added min/max & expected args to parameter exceptions.
I omitted the suggested change to add an index to the list version of parse. The index is present on the old method to skip over ARGV[0] (the command). With the forthcoming changes, the command will be consumed from the list before calling the list-based parse. If it was added, then the sublist/erase would still be required since the list is expected to be destructively modified. If you feel strongly about the index, please let me know.
+1 overall. Here are the results of testing the latest attachment
against trunk revision 10823 patch looks good.
I have committed this. Thanks, Daryn!
Integrated in Hadoop-Common-trunk-Commit #530 (See)
Commit the missing file TestCommandFormat.java for
HADOOP-7180.
HADOOP-7180. Better support on CommandFormat on the API and exceptions. Contributed by Daryn Sharp
Integrated in Hadoop-Common-trunk #634 (See)
Commit the missing file TestCommandFormat.java for
HADOOP-7180.
HADOOP-7180. Better support on CommandFormat on the API and exceptions. Contributed by Daryn Sharp
Add a slew of tests, add exceptions for too many/few, allowing parsing a list. Backwards compatibility. | https://issues.apache.org/jira/browse/HADOOP-7180 | CC-MAIN-2016-22 | refinedweb | 355 | 58.89 |
{-| Code for manipulation equivalence classes on index types. An 'Equivalence' is an equivalence relation. The empty equivalence relation is constructed over a ranges of values using 'emptyEquivalence'. Less discerning equivalence relations can be obtained with 'equate' and 'equateAll'. The relation can be tested with 'equiv' and 'equivalent', and canonical representatives can be chosen with 'repr'. An example follows: > import Data.Equivalence.Persistent > > rel = equateAll [1,3,5,7,9] > . equate 5 6 > . equate 2 4 > $ emptyEquivalence (1,10) > > test1 = equiv rel 3 5 -- This is True > test2 = equiv rel 1 6 -- This is True > test3 = equiv rel 4 6 -- This is False -} module Data.Equivalence.Persistent ( Equivalence, emptyEquivalence, repr, equiv, equivalent, equate, equateAll ) where import Control.Concurrent.MVar import Control.Monad import Data.Array.Diff import Data.IORef import Data.List import System.IO.Unsafe arrayFrom :: (IArray a e, Ix i) => (i,i) -> (i -> e) -> a i e arrayFrom rng f = array rng [ (x, f x) | x <- range rng ] {-| An 'Equivalence' is an equivalence relation on a range of values of some index type. -} data Equivalence i = Equivalence { ranks :: DiffArray i Int, parents :: IORef (DiffArray i i) } {-| 'emptyEquivalence' is an equivalence relation that equates two values only when they are equal to each other. It is the most discerning such relation possible. -} emptyEquivalence :: Ix i => (i, i) -> Equivalence i emptyEquivalence is = unsafePerformIO $ do v <- newIORef (arrayFrom is id) return $ Equivalence (arrayFrom is (const 0)) v reprHelper :: Ix i => DiffArray i i -> i -> (DiffArray i i, i) reprHelper ps i | pi == i = (ps, i) | otherwise = let (ps', r) = reprHelper ps pi in (ps' // [(i,r)], r) where pi = ps ! i {-| 'repr' gives a canonical representative of the equivalence class containing @x@. It is chosen arbitrarily, but is always the same for a given equivalence relation. This function is slightly unsafe. 
In particular, it's possible to build the same equivalence relation by equating values in two different orders, and the choice of canonical representatives will differ. You can either think of a value of type 'Equivalence' as an equivalence relation together with a choice of canonical representatives, or you can consider this not a pure function. Since 'Equivalence' is not an instance of @Eq@ and equality is not observable, both perspectives are valid. -} repr :: Ix i => Equivalence i -> i -> i repr (Equivalence rs vps) i = unsafePerformIO $ atomicModifyIORef vps f where f ps = reprHelper ps (ps ! i) {-| Determines if two values are equivalent under the given equivalence relation. -} equiv :: Ix i => Equivalence i -> i -> i -> Bool equiv eq x y = repr eq x == repr eq y {-| Determines if all of the given values are equivalent under the given equivalence relation. -} equivalent :: Ix i => Equivalence i -> [i] -> Bool equivalent eq [] = True equivalent eq (x:xs) = all (== repr eq x) (map (repr eq) xs) {-| Construct the equivalence relation obtained by equating the given two values. This combines equivalence classes. -} equate :: Ix i => i -> i -> Equivalence i -> Equivalence i equate x y (Equivalence rs vps) = unsafePerformIO $ do (px, py, ps) <- atomicModifyIORef vps $ \ ps -> let (ps', px) = reprHelper ps x (ps'', py) = reprHelper ps' y in (ps'', (px, py, ps'')) return (go px py ps) where go px py ps | px == py = Equivalence rs vps | rx > ry = let ps' = ps // [(py, px)] in Equivalence rs (unsafePerformIO (newIORef ps')) | rx < ry = let ps' = ps // [(px, py)] in Equivalence rs (unsafePerformIO (newIORef ps')) | otherwise = let ps' = ps // [(py, px)] rs' = rs // [(px, (rx + 1))] in Equivalence rs (unsafePerformIO (newIORef ps')) where rx = rs ! px ry = rs ! py {-| Construct the equivalence relation obtained by equating all of the given values. This combines equivalence classes. 
-} equateAll :: Ix i => [i] -> Equivalence i -> Equivalence i equateAll [] eq = eq equateAll (x:xs) eq = foldl' (flip (equate x)) eq xs | http://hackage.haskell.org/package/persistent-equivalence-0.1/docs/src/Data-Equivalence-Persistent.html | CC-MAIN-2015-18 | refinedweb | 616 | 54.32 |
Content-type: text/html
pthread_cond_signal_int_np - Wakes one thread that is waiting on the specified condition variable (called from interrupt level only).
DECthreads POSIX 1003.1c Library (libpthread.so)
#include <pthread.h>
int pthread_cond_signal_int_np(
pthread_cond_t *cond);
None
Condition variable to be signaled.
This routine wakes one thread waiting on the specified condition variable. It can only be called from a software interrupt handler routine (that is, from a Tru64 UNIX signal handler or OpenVMS AST). Calling this routine implies that it might be possible for a single waiting thread to proceed.
The scheduling policies of the waiting threads determine which thread is awakened. For policies SCHED_FIFO and SCHED_RR, a blocked thread is chosen in priority order, using first-in/first-out (FIFO) within priorities.
This routine does not cause a thread blocked on a condition variable to resume execution immediately. A thread resumes execution at some time after the interrupt handler routine returns.
You can call this routine regardless of whether the associated mutex is locked (by some other thread). Never lock a mutex from an interrupt handler routine.
This routine allows you to signal a thread from a software interrupt handler. Do not call this routine from noninterrupt code. To signal a thread from the normal noninterrupt level, use pthread_cond_signal(3).
If an error condition occurs, this routine returns an integer value indicating the type of error. Possible return values are as follows: Successful completion. The value specified by cond is invalid.
None
Functions: pthread_cond_broadcast(3), pthread_cond_signal(3), pthread_cond_timedwait(3), pthread_cond_wait(3)
Manuals: Guide to DECthreads and Programmer's Guide
delim off | http://backdrift.org/man/tru64/man3/pthread_cond_signal_int_np.3.html | CC-MAIN-2017-22 | refinedweb | 262 | 51.04 |
Hottest Forum Q&A on CodeGuru - November 3rd
Introduction:
Lots of hot topics are covered in the Discussion Forums on CodeGuru. If you missed the forums this week, you missed some interesting ways to solve a problem. Some of the hot topics this week include:
- What is the difference between calling by pointer and calling by reference?
- How can I cast params together and pass them to a thread?
- How do I determine what error code is 0040e864?
- How can I call a .dll with arguments?
- What is the maximum size of a structure that can be defined in VC++?
avi123 knows that there are three common ways to pass a parameter to a function.
- Calling by value
- Calling by pointer
- Calling by reference
But, he is not is not sure about the difference between the second and third options. Do you know? Actually, it's simple.
- When calling by pointer, the value can be 0.
- When calling by reference, the value must be valid.
But, if you are just passing an object, using a reference is better in terms of readability. As from the perspective of efficiency, both are the same. However, if you need to pass a pointer to an array of objects, there is no choice but to use a pointer. This is because the address stored by the reference cannot be modified after initialization.
Daviesrt needs to pass some parameters to a thread. Actually, it isn't a big problem. Right? But, besides that, he also wants to cast them and then pass them to the thread.
I am writing a thread to control a timer. I need to pass "theEvent" and "theLock" to this thread, where "theEvent" and "theLock" are
theEvent = new CEvent(FALSE,FALSE, "Something happened"); theLock = new CSingleLock(theEvent);
How can I cast these together and pass them to the thread?
UINT StartClock(LPVOID pParam)
Is it possible??
Andreas Masur answered the question with a very nice, simple, and practical example code. Here is his answer:
Usually, you allocate the structure you wish to pass on the heap (thus, dynamically). Then, before the thread terminates, you would release the previously allocated memory. However, this might resolve in memory leaks if the thread function terminates unexpectedly (as when a function throws an exception, and so forth) because your 'delete' statement will not be processed in this case. To avoid this, you can use an auto pointer instead...
#include <memory> struct MyStruct { int m_Int; MyStruct(int i) { m_Int = i; } }; // Create thread MyStruct *pData = new MyStruct(10); AfxBeginThread(MyThreadFunc, pData); UINT MyThreadFunc(void *pvParam) { std::auto_ptr<MyStruct> Struct(static_cast<MyStruct*>(pvParam)); { ..... } return 0; }
The above code will not leak memory irregardless of how the thread function terminates...
Ralf Schneider developed an application that works fine on his system. Besides that, he has also tested the application on different OSes, such as windows 95, OSR2, 98, and XP and it runs without any problems, but the application still crashes on the customer's system. Ralf is not able to reproduce the error. The only information he got is that his application causes the following exception.
- MyProg cause an exception 10H in module MyProg.exe at 0137: 0040e864
Several users complain it crashed immediately on their machines including windows 95, 95 OSR2, and 98. It says somthing like: MyProg cause an exception 10H in module MyProg.exe at 0137: 0040e864 Registers: .... Stack dump: ... I tested the software on my company's Windows 95, OSR2, 98, and XP machines and it works fine. I just cannot reproduce those crashes, and they are too far away for me to be on site. Does anyone have a clue on how to debug this problem?
Basically, you'll need a MAP file to track the error code. Without having a mapfile, it's nearly impossible to determine the error. According to MSDN, the error occurs when an unmasked floating-point exception has signaled a previous instruction. Here is the snippet from MSDN KB.
So, the problem is a uninitialized float variable. Ralf confirmed that; he has sent a new version to his customer and now it runs!
myth7676 needs to start a DLL with arguments. Huh, why would somebody need to do that? You have an EXE for that purpose.
I have a .dll, say test.dll and i want to call this .dll like:
test.dll /filename /start address.
The DLL should take the filename and start address as parameters and do something with them inside. Can this be done?
The only solution I know would be to export functions from an EXE and call them as if the EXE were a DLL.
Varadha has defined a structure in his application. But, when he starts the application, it crashes with an unhandled Exception. His first thought is that there may be a maximum size for a structure.
When i define the following structure.
typedef struct summa { int a; int b; char c[1062144]; }summa;
and run the application, I get the following error. Unhandled Exception: ..... Stack Overflow. Is there a constraint on the maximum size of the Structure that can be defined in VC++?
The exception appears because it's too big for the stack. You could allocate it with new and put it on the heap instead. Or, you could increase the size of your stack in the linker options. Or, you could use a std::string or std::vector<char> as a structure member that essentially puts that memory on the heap.
There are no comments yet. Be the first to comment! | http://www.codeguru.com/columns/forum_highlights/article.php/c6611/Hottest-Forum-QA-on-CodeGuru--November-3rd.htm | CC-MAIN-2014-42 | refinedweb | 923 | 67.15 |
From.
Here's an example:. The brake light ECU is really only waiting on the message from the brake system ECU. Also, the horn ECU doesn't react to the braking system ECU.
This broadcast system is broken down into different components; the two most important are message ID and message data.
For now, think of the message ID as an ECU address. The message data is the content. It is typically larger than the ID at around 8 bytes long.
Here's an example:
message ID: 620 data: 10 80 FF FF 80 20 00 80
The ECUs communicate with each other over a twisted wire pair holding CAN-high (CAN+) and CAN-low (CAN-). CAN-high and CAN-low are accessible through the OBD-II port under the steering wheel. This is how we'll get in!
Pro-tip: Use a wire tracer/tone generator to backtrace to other CAN Bus access points within your car.
Volkswagon has a good guide to how the CAN Bus network works:
Step 1: Components and Assembly
Components:
1- Arduino UNO R3
2- Sparkfun (or other) CAN Bus Shield:
Note: Also available at SK Pang: (SK Pang also supplies the needed CAN Bus library).
Note2: At the time of this writing, there were only 6 in stock at Sparkfun.
Note3: Sparkfun's CAN Bus shield also has a joystick (up, down, left, right, center), a micro SD slot, and support for GPS and LCD modules.
Note4: If you're feeling up to it, you can order the parts from Digikey and make your own using Sparkfun's provided EAGLE CAD drawing.
3- Wire pair or Sparkfun's OBD-II to DB9 cable:
Note: I found some old speaker wire that worked great.
4- breakable header pins - the CAN Bus shield doesn't include them:
Assembly:
1- Break headers into 2x8 pin, 2x6 pin, and (optional - 1x4 pin sections)
2- Solder the headers to the CAN Bus shield.
Step 2: Familiarizing Yourself With the CAN Bus Library
Once assembled, be sure to download the CAN Bus Library for use with your Arduino IDE.
Library and Example files are located here:...
Download link for Library and Examples:...
- Library in the src/ folder
- Sparkfun (and my) examples are in the examples/ folder
CAN Bus Shield Initialization:
#include <Canbus.h> // don't forget to include these #include <defaults.h> #include <global.h> #include <mcp2515.h> #include <mcp2515_defs.h> void setup() { Serial.begin(9600); //Initialise MCP2515 CAN controller at the specified speed if(Canbus.init(CANSPEED_500)) Serial.println("CAN Init ok"); else Serial.println("Can't Init CAN"); delay(1000); }
Shield initialization will be required for all tasks. Here, we define our CAN bitrate and import our library. Every vehicle might use different bitrate speeds. For our example, we use 500 kbps.
Available options are:
CANSPEED_125 //CAN speed at 125 kbps
CANSPEED_250 //CAN speed at 250 kbps
CANSPEED_500 //CAN speed at 500 kbps
If you're unsure of your vehicle's CAN bitrate, do some Googling...
Read CAN Bus Messages:
We are reading every message here. It can be a bit overwhelming as you see the traffic flow through.
- ALL Messages
void loop() { tCAN message; if (mcp2515_check_message()) { if (mcp2515_get_message(&message)) { Serial.print("ID: "); Serial.print(message.id,HEX); Serial.print(", "); Serial.print("Data: "); for(int i=0;i<message.header.length;i++) { Serial.print(message.data[i],HEX); Serial.print(" "); } Serial.println(""); }} }
Filtering will cut out a huge chunk of noise. (You'll see what I mean when you begin to sniff unfiltered.)
- Filter Messages
void loop() { tCAN message; if (mcp2515_check_message()) { if (mcp2515_get_message(&message)) { if(message.id == 0x631) //filtering based on CAN bus message ID. { Serial.print("ID: "); Serial.print(message.id,HEX); Serial.print(", "); Serial.print("Data: "); for(int i=0;i<message.header.length;i++) { Serial.print(message.data[i],HEX); Serial.print(" "); } Serial.println(""); }}} }
message.header.length is the size of the CAN message.
The above was filtered by message ID. We can also filter based on message data.
if(message.id==0x631 and message.data[3]==0x04 and message.data[4]==0x0F)
Notes:
1- Messages can be longer than 3 digits.
2- We are formatting incoming message IDs and message data as HEX.
Write CAN Bus Messages:
In order to write a CAN Bus message, we need to first assemble the message components: message ID, message size, and message data. The message is broken down by message.id, message.header.rtr, message.header.length, and message.data[].
void loop() { tCAN message; message.id = 0x631; //formatted in HEX message.header.rtr = 0; message.header.length = 8; //formatted in DEC message.data[0] = 0x40; message.data[1] = 0x05; message.data[2] = 0x30; message.data[3] = 0xFF; //formatted in HEX message.data[4] = 0x00; message.data[5] = 0x40; message.data[6] = 0x00; message.data[7] = 0x00; mcp2515_bit_modify(CANCTRL, (1<<REQOP2)|(1<<REQOP1)|(1<<REQOP0), 0); mcp2515_send_message(&message); delay(1000); }
The message ID and data are written in HEX (0xFF, for example), which is the same format we read with.
mcp2515_send_message(&message); sends the message.
Step 3: Connect and Read / Write
The attached file, CAN_read_sample, is for simply reading all messages. I commented out filtering, so you should be able to modify it easily to include filtering of message ID and data.
I also attached a file, CAN_write_sample, for writing a message.
You have two options for connecting the Arduino to vehicle's CAN-high and CAN-low lines:
1- Hack up some speaker wire (or any wire pair) and connect the CAN-H and CAN-L through-holes on the shield to the OBD-II port.
CAN-H (shield) <-----> CAN-high (OBD-II)
CAN-L (shield) <-----> CAN-low (OBD-II)
2- Buy Sparkfun's OBD-II to DB9 Cable:. This also powers the Arduino through the car's 12v line. I haven't used it, but let me know how it works out... YMMV
Connect the Arduino to your car and computer, load the code, open the serial monitor, and watch the magic.
Step 4: What Next?
As you begin to read CAN bus messages, start manipulating your car.
- Unlock and lock the vehicle
- Pop the trunk
- Roll up and down windows
- Sounding the alarm
- Blow your horn
- Turn on and off your flashers
- Turn on and off your signal lights
- Turn of and off your lights and high beams
- Etc.
Remember that filtering is your friend!
See if you can find messages related to the above. Once you do, write the same messages back out through your Arduino using Step 2. See if you can unlock or lock your vehicle, pop the trunk, or blow your horn!
I hope to share my findings in the future!
Thanks for reading!
3 People Made This Project!
FermentedOrder made it!
nasredinne made it!
renatoaloi made it!
81 Discussions
2 months ago
HI guys. I am starting to write a program about converting CAN data to USB. is there anyone who did that before? i would be wounder if you comment URL of some document about it.
All best
Reply 2 months ago
Could you explain what you mean exactly?
I've written code for seeing the can messages from the shield on the pc through the USB connection of the arduino. It also displays the messages in order.
Reply 2 months ago
Hello,
I'm trying also to do it; for the moment i have nothing except this :...
but it is about a CAN of motorcycle;
If you have also some links please share it with me
Best regards.
2 months ago
Cant we just use the arduino uno or nodemcu directly to read the can messages. Instead of using the canbus shield?
2 months ago
Hello evrybody,
I'm trying to hack a CANbus of KTM motorcycle and I don't knew if I can use the tutorial that you provided above.
Thankyou for any advices.
6 months ago
Hi, I tried this on my 2003 VW Golf mk4. I had to hookup my shield directly on the bus wires, located behind the dashboard to be able to read messages. Took me quite a while to find the right wires. You have to look for a pair of orange twisted cables connected to the green plug behind the dashboard. There are three orange pairs. Two of which are Orange/black and orange/brown. You have to use the light coloured pair. Orange/black is Can H and orange/brown is Can L (at 500kbps). Currently im collecting data and try to understand it. My goal is to implement cruise control and an active rev matching system, controlled by a windows 10 pc in the trunk, which is connected to a touchscreen in the front, replacing the radio and navigation system.
Question 7 months ago
Hi! I'm working on a project for my 1996 Saab 900 turbo. I wonder what message.id does. Is it an identifier that is sent to the ECU and the ECU then responds back with the same identifier so it's easier to know what it is responding to? Also I would like to know what message.header.rtr does. The last queston I have (for now) is if I need to do anything else to make it run at 615 kbps than to edit the Canbus.h file and change CANSPEED in the sketch?
Regards,
Christian
Question 8 months ago
I need to transmit and receive specific CAN messages for a project. I have to use a laptop with Busmaster on one end and the Arduino with the CAN shield on the other. When I send messages from the Busmaster to the Arduino I get an error. A CAN message isn't transmitted by busmaster. Is there anything else I need to add to the code?
The setup is a laptop with busmaster sending CAN signals to the shield-arduino through a USB-DB9 connector. The Arduino is connected via USB to another laptop on whose serial monitor I wish to read the messages being transmitted by the first laptop. Also do I need to worry about adding 120 ohm resistors anywhere in this setup? Please help.
Answer 7 months ago
Yes, you will need at least one 120ohm resistor, in fact, canbus networks should terminate with 120ohm registers at BOTH ends (so resitance between lines is about 60)
1 year ago
Hi. Great guide.
I need an Analog 0-5V to CAN converter at work and the cheapest one I can find is about £400. So I am thinking of going the Arduino route and developing our own.
I have a joystick that outputs in CAN but I cannot use it for reasons, and it is connected in a forklift truck (which uses CAN to communicate of course)
I have a joystick that outputs in 0-5V which I can use, but of course, I need to translate the 0-5V to CAN exactly as the OEM.
I am kind of struggling to figure out how to do this. Any help would be appreciated.
Reply 9 months ago
Microchip CAN tranciever MCP2561-E/SN cost 1dollar..
1 year ago
hello,
Basically, I wanted to establish can communication between two arduino boards.
For that, I have two sparkfun can-shields.
I wanted to know,
1.Can I connect them directly by using CAN_H and CAN_L pins provided on shield?(I tried direct connection, but I'm not able to receive can frame at the receiving arduino board)
2. whether do i need to connect terminating resistors of 10 ohm at both ends?
3.is there any API, where can we configure baudrate and can fram id etc?
Thanks in advance.
My mail id is: rkomeghadoot@gmail.com
Reply 9 months ago
You need to terminate the wiring... a CAN bus needs a terminator at 120ohm in each end...
Reply 1 year ago
I face same issues, do you had get the answer?Thankyou
Question 10 months ago
Hello,
i bought the "SparkFun CAN-BUS Shield" and
hooked up on a Arduino Uno. After that, I uploaded the sketch
"CAN_Read_Demo". I connected it to my car, but there was no data
displayed on the serial monitor. Then I uploaded the sketch "SparkFun_CAN_Demo"
to test the board.
The serial monitor shows:
CAN-Bus Demo
CAN Init ok
Please choose a menu option
1.Speed
2.RPM
3.Throttle
4.Coolant Temperature
5.O2 Voltage
6.MAF Sensor
But when I enter a option (e.g. 1 for Speed) it shows
following error:
Vehicle Speed:
Not a valid input.
Please enter a valid option.
Is the shield broken or did I something wrong?
Question 11 months ago on Introduction
Hey, I did everything as it says, I hooked up the Arduino Ono on the bus shield, and I plugged obd2 - db9 cable between the car and the shield and all the time I get only one message "cant init can" I also tried to connect without the cable, And can - low and gnd and it shows me the same message, what am i doing wrong, please help me out
Answer 10 months ago
I just went through this, your CAN Bus shield is most likely configured for the incorrect "CS", chip select pin compared to your sketch/library. More than likely your CAN Bus shield is an aftermarket one. Fastest way to rectify this is to either correct the CS pin (it'll either be 9 or 10) in the code/library or physically jump the gold soldering dot on the back of the CAN Bus shield shorting the center dot to pin 10 dot AND use a blade or knife to cut the PCB print from the center dot to the pin 9 dot.
I'm sure this is confusing as heck but flip JUST the CAN Bus shield onto its bottom and you'll see four different rows of soldering gold contacts labeled "CS, MOSI, MISO, SCK" from top row to bottom. The CS row is what you're interested in and you need to sever the TINNNNY wire going from the very left (9) gold contact to the center contact using a blade and physically solder the center contact to the very right contact dot (10). Note, none of this will apply to you if you have an original CAN Bus shield, its for aftermarket shields only.
Question 1 year ago on Step 2
Can you help me why I can't detect the message?thankyou
Tip 1 year ago
For all Volkswagen family, VW, Audi, Skoda
YOU NEED TO REQUEST DATA FROM THE PORT UNLESS YOU'RE DIRECTLY ON THE CANBUS WIRE
EX: Canbus.ecu_req(ENGINE_RPM, buffer);
Question 1 year ago
Hello!
I have also arrived at this point, but now I would like to decode the hexadecimal strings to get the real values but I do not know where to start! :(
for example I read from wikipedia which id correspond to the RPM and I filtered, but now I do not know how to convert the string in INT could you help me? thank you so much :) | https://www.instructables.com/id/CAN-Bus-Sniffing-and-Broadcasting-with-Arduino/ | CC-MAIN-2019-09 | refinedweb | 2,508 | 74.29 |
HDFS namenodes and datanodes
Hadoop includes two main pieces: a distributed framework for running MapReduce jobs, which are Java and other programs used to convert data from one format to another, and a distributed file system (HDFS) for storing data across a cluster. Here we discuss HDFS.
In a regular file system on a PC or server, the computer stores files in units of contiguous disk space called blocks. A central structure keeps track of which blocks are stored at which disk locations. On a PC or server that is the FAT (File Allocation Table) or NTFS (New Technology File System), plus other structures that keep track of what data is written where.
On a magnetic disk drive the data is written to the platters by a moving read/write head under the control of the disk controller. In HDFS, writes are pushed across the network using Hadoop network protocols to individual servers, where a datanode process uses the local file system (FAT, NTFS, or something similar) to write to the host machine's data blocks. This is usually locally attached storage rather than a storage array, to drive down the cost. (Low operating cost is one of the selling points of Hadoop.)
Hadoop removes the physical limitation of fixed disk sizes by storing file metadata (i.e., the location of blocks) in a namenode process on the master server and the data itself on an arbitrary number of datanodes.
Datanode processes run on each node in the cluster and write data to disk blocks there.
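To make the division of labor concrete, here is a minimal sketch of the bookkeeping a namenode performs. This is illustrative Python, not actual Hadoop code, and all names in it are hypothetical: the namenode maps each file path to an ordered list of block IDs, and each block ID to the datanodes holding a replica.

```python
# Toy model of namenode metadata (illustration only, not Hadoop code).
class ToyNameNode:
    def __init__(self):
        self.file_to_blocks = {}   # file path -> ordered list of block IDs
        self.block_to_nodes = {}   # block ID -> datanodes holding a replica
        self._next_block = 0

    def allocate_block(self, path, datanodes):
        """Record a new block for `path`, replicated on `datanodes`."""
        block_id = self._next_block
        self._next_block += 1
        self.file_to_blocks.setdefault(path, []).append(block_id)
        self.block_to_nodes[block_id] = list(datanodes)
        return block_id

    def locate(self, path):
        """Return (block_id, datanodes) pairs a client would read from."""
        return [(b, self.block_to_nodes[b]) for b in self.file_to_blocks[path]]

nn = ToyNameNode()
nn.allocate_block("/home/foo/data/log.txt", ["dn1", "dn2", "dn3"])
nn.allocate_block("/home/foo/data/log.txt", ["dn2", "dn4", "dn5"])
print(nn.locate("/home/foo/data/log.txt"))
```

Note that the datanodes never appear in the file path at all; the mapping from logical name to physical location lives entirely in the namenode, which is why the namespace can span any number of machines.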
Unlike a regular file system, HDFS can grow without limit, because the architecture lets the administrator add nodes at will. This abstraction of a single file system across multiple computers lets files grow without limits on their size.
But it also serves as a handy bucket where users can conveniently store files without giving any thought to which directory or disk they are mounted on. For example, they can copy any kind of file to hdfs://(server name):port and retrieve it from any computer on the network.
Many other big data products, like Spark, Storm, Cassandra, HBase, and Hive, use at least part of HDFS for their applications.
HDFS only writes data, does not update
In Hadoop you can only write and delete files. You cannot update them.
The system is made to be resilient and failure-tolerant: when a datanode writes a block to disk, the block is also written to other servers through replication. The number of replicas is set by the user, and datanodes can be made rack aware. Rack awareness is necessary because redundancy does not help when the copies sit on two disk drives in the same rack, sharing the same electric power and network connections.
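To illustrate why rack awareness matters, here is a hedged Python sketch of a placement policy in the spirit of HDFS's default (first replica on the writer's own node, second on a different rack, third on that second rack but a different node). It is a simplification for illustration, not the actual Hadoop implementation, and it assumes every rack has at least two datanodes.

```python
# Toy rack-aware replica placement (simplified; not the real HDFS code).
def place_replicas(writer_node, topology):
    """topology maps rack name -> list of datanodes. Returns 3 replica nodes."""
    # Find the rack the writing client's node belongs to.
    local_rack = next(r for r, nodes in topology.items() if writer_node in nodes)
    # 1st replica: the writer's own node (a cheap local write).
    first = writer_node
    # 2nd replica: a node on a *different* rack, to survive a whole-rack failure.
    other_rack = next(r for r in topology if r != local_rack)
    second = topology[other_rack][0]
    # 3rd replica: another node on that same remote rack (avoids crossing
    # racks a second time while still keeping the data rack-diverse).
    third = next(n for n in topology[other_rack] if n != second)
    return [first, second, third]

topology = {"rack1": ["dn1", "dn2"], "rack2": ["dn3", "dn4"]}
print(place_replicas("dn1", topology))  # -> ['dn1', 'dn3', 'dn4']
```

Putting all three copies on one rack would be faster to write but would lose all replicas at once if that rack's power or switch failed; spreading across two racks trades a little write latency for that protection.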
Disk blocks on a regular PC are something like 4 KB in size. In Hadoop, blocks are often 64 MB (or 128 MB in later versions).
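The block-size difference matters for metadata load: a 1 GB file needs only 16 blocks at 64 MB, versus hundreds of thousands of blocks at 4 KB. The arithmetic is just ceiling division (plain Java, nothing Hadoop-specific):

```java
// Number of HDFS blocks a file occupies: ceiling division by the block size.
public class BlockMath {
    public static long blocksFor(long fileBytes, long blockBytes) {
        return (fileBytes + blockBytes - 1) / blockBytes;  // ceil(fileBytes / blockBytes)
    }
}
```

For example, blocksFor(1 GB, 64 MB) is 16, so the namenode tracks 16 entries instead of 262,144.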
Hadoop is a batch system, so it is not designed for fast random reads. Instead it is designed to write large amounts of data and retrieve it with batch MapReduce jobs.
Below we show a diagram of the basic architecture.
Here we see a mount point /home/foo/data. This is called a namespace. The folder does not exist on any one disk; it is an abstraction across the cluster. The namespace spans all datanodes.
The namenode tells the datanodes where to write data. The datanodes report back to the namenode which block and drive they have written to, so that one central repository tracks it all. The namenode also copies that information to a secondary namenode.
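The bookkeeping just described (which blocks belong to which file, and which datanodes hold each block) can be pictured as two maps. A minimal sketch in plain Java, with hypothetical class and method names rather than Hadoop's actual internals:

```java
import java.util.*;

// Toy model of namenode metadata: file -> ordered blocks -> replica locations.
public class NameNodeModel {
    private final Map<String, List<String>> fileToBlocks = new HashMap<>();
    private final Map<String, Set<String>> blockToDataNodes = new HashMap<>();

    // Called as datanodes report a newly written block back to the namenode.
    public void addBlock(String file, String blockId, Set<String> dataNodes) {
        fileToBlocks.computeIfAbsent(file, f -> new ArrayList<>()).add(blockId);
        blockToDataNodes.put(blockId, dataNodes);
    }

    // Where can each block of this file be read from?
    public List<Set<String>> locate(String file) {
        List<Set<String>> result = new ArrayList<>();
        for (String blockId : fileToBlocks.getOrDefault(file, List.of())) {
            result.add(blockToDataNodes.get(blockId));
        }
        return result;
    }
}
```

A client reading a file asks the namenode for this mapping, then streams the blocks directly from the datanodes.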
The datanodes write data to local storage in block format, just like a regular PC, so when a disk drive fails the namenode can reconstruct the data on another datanode. (In a large data center, disk drives fail daily.) The namenode uses an edit log to keep track of all write operations; the edit log is kept on the namenode server's regular local disk. The metadata itself is stored in a file called the FSImage, and Hadoop truncates the edit log as its transactions are merged into the FSImage.
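The edit-log/FSImage relationship is a log-plus-checkpoint pattern: operations accumulate in a log until a checkpoint folds them into the image and truncates the log. A toy model (hypothetical names, far simpler than the real namenode):

```java
import java.util.*;

// Toy checkpoint pattern: a write-ahead edit log merged into an image.
public class NamespaceImage {
    private final Map<String, Long> image = new HashMap<>();   // path -> size
    private final List<String[]> editLog = new ArrayList<>();  // pending ops

    public void recordWrite(String path, long size) {
        editLog.add(new String[]{path, Long.toString(size)});  // log first
    }

    public void checkpoint() {             // merge log into image, then truncate
        for (String[] op : editLog) image.put(op[0], Long.parseLong(op[1]));
        editLog.clear();
    }

    public int pendingOps() { return editLog.size(); }
    public Long sizeOf(String path) { return image.get(path); }
}
```

Keeping the log small is why checkpoints matter: on restart, only the operations since the last checkpoint must be replayed.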
HDFS CLI
You work with Hadoop files either from a client program using the API, like the Java program shown below, or from the command line. The commands are almost exactly the same as regular Linux commands: ls, mkdir, chown, etc. So they are very easy to use. The only difference is you write "hadoop fs -(command)". For example, hadoop fs -ls lists the files in a directory (namespace).
The full list of CLI commands is:
appendToFile cat checksum chgrp chmod chown copyFromLocal copyToLocal count cp createSnapshot deleteSnapshot df du dus expunge find get getfacl getfattr getmerge help ls lsr mkdir moveFromLocal moveToLocal mv put renameSnapshot rm rmdir rmr setfacl setfattr setrep stat tail test text touchz truncate usage
The ls command output looks just like the regular Linux command:
hadoop fs -ls /data -rw-r--r-- 1 root supergroup 1081141 2017-03-31 15:37 /data/ssh.log
HDFS configuration
The HDFS parameter list is very long and is best checked in the Hadoop manual. In brief, only a few parameters need to be set to get Hadoop running in local or single-node mode. The most common ones related to storage are shown below:
hdfs-site.xml
<property>
<name>dfs.namenode.name.dir</name>
<value></value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value></value>
</property>
core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-master:9000/</value>
</property>
This property turns permissions checking on or off:
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
Java
Hadoop is written in Java. (Apache Pig, for example, turns Pig Latin scripts typed into its console into Java MapReduce programs.) So it is good to know Java if you want to work with HDFS.
Here is an example program Hdfs.java to copy a String to a file in HDFS.
First source the environment:
export HADOOP_CLASSPATH=$($HADOOP_HOME/bin/hadoop classpath)
Compile the program like this:
javac -classpath ${HADOOP_CLASSPATH} Hdfs.java
Then run it like this:
java -cp .:$HADOOP_CLASSPATH Hdfs
Here is the source of Hdfs.java.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Hdfs {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fileSystem = FileSystem.get(conf);
        Path path = new Path("/data/silly.txt");
        FSDataOutputStream out = fileSystem.create(path);
        String helloWorld = "Hello Hadoop World.";
        out.writeBytes(helloWorld);
        out.close();
        fileSystem.close();
    }
}
When we first run this we get this error:
Permission denied: user=walker, access=WRITE, inode="/data":root:supergroup:drwxr-xr-x
So change the folder permission just as you would with the Linux chown command:
hadoop fs -chown walker /data
Then cat the file's contents to show that it worked:
hadoop fs -cat /data/silly.txt Hello Hadoop World.
Hadoop file names
Hadoop files created by Map and Reduce are stored as a varying number of files in folders.
This is because such jobs are split into parallel processes and run across the cluster.
So when you run a MapReduce job and you set the output folder as:
FileOutputFormat.setOutputPath(job, new Path("/data.txt"));
The output is split into a _SUCCESS marker file and files named part-r-NNNNN in the folder /data.txt/, where the number runs from 0 up to however many partitions the step was divided into.
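The reducer output names follow a fixed, zero-padded pattern, which is easy to generate or glob for. A one-method sketch:

```java
// MapReduce reducer output files follow the pattern part-r-NNNNN,
// zero-padded to five digits and numbered from 0.
// (Map-only jobs write part-m-NNNNN instead.)
public class PartNames {
    public static String reducerFile(int partition) {
        return String.format("part-r-%05d", partition);
    }
}
```

So a job with three reducers writes part-r-00000, part-r-00001, and part-r-00002 alongside _SUCCESS.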
Hadoop file storage
Hadoop uses several file storage formats, including Avro, Parquet, Sequence, and Text.
Avro
Avro serialization for Hadoop writes data in a compact binary format described by a JSON schema, so it can be consumed by programs written in any language. Avro is built into Hadoop. (Serialization means writing data to storage, such as a field, a web session, or a file.) With Avro you can represent complex structures (arrays, custom classes) in addition to primitives (e.g., int, float, boolean).
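An Avro schema is itself written in JSON. A minimal record schema, as an illustration (the record and field names here are hypothetical):

```json
{
  "type": "record",
  "name": "Session",
  "fields": [
    {"name": "user", "type": "string"},
    {"name": "hits", "type": "int"},
    {"name": "tags", "type": {"type": "array", "items": "string"}}
  ]
}
```

Because the schema travels with the data, a reader in any language can decode the binary records.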
Parquet
Parquet is a column-oriented storage format added to Hadoop by Cloudera and Twitter. In that respect it resembles HBase, a storage mechanism on Hadoop that keeps columns of data close together for fast retrieval; HBase is the open source implementation of Google's BigTable database. Parquet is designed to serialize complex structures in bulk, like the classes you define in Java, Python, and other languages, together with their member fields. Parquet is supported by Apache Hive, Pig, and Spark as well.
Sequence Files
A sequence file stores key-value pairs, like a JSON object, but in binary (key-value) format. So (a=>b) would be stored as (01100001->01100010). There are three types of sequence files, differing in whether the keys and values are compressed. They use codecs such as BZip2Codec, DefaultCodec, and GzipCodec for compression. Without explaining how those work, observe that a string of six 1 bits could be written as something like "6 1's", taking up less space than six literal 1s.
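The "6 1's" idea is run-length encoding. As a toy illustration of the principle (not how BZip2Codec or GzipCodec actually work):

```java
// Toy run-length encoder: "111111" -> "6x1", "aaabb" -> "3xa2xb".
public class RunLength {
    public static String encode(String s) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            int j = i;
            while (j < s.length() && s.charAt(j) == s.charAt(i)) j++;  // end of run
            out.append(j - i).append('x').append(s.charAt(i));
            i = j;
        }
        return out.toString();
    }
}
```

Real codecs are far more sophisticated, but the payoff is the same: repetitive data shrinks.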
Plain Text (csv and txt)
It needs no explanation. In Java it is designated:
job.setOutputFormatClass(TextOutputFormat.class);
Hadoop in the cloud
Different vendors have implemented their own Hadoop abstractions, for example Windows Azure and Amazon EC2 and S3. S3, in fact, is already a URL-style file system, like HDFS, replacing directory paths like //server/directory/file with http:// URLs. Cloudera has made Hadoop the bulk of its business.
Wikiversity:Colloquium/archives/November 2007
Contents
- 1 Putting your picture on your user page (rights issues)
- 2 San Francisco / Bay Area Wikiversitans?
- 3 The philosophy of the basic filmmaking and film scoring courses
- 4 Email functionality for Watchlists on Wikiversity
- 5 Public/Private Key Infrastructure
- 6 Connexions
- 7 Proposed new hierarchical structure
- 8 why should we try to divertified our thoughts about humans as a jewel on earth
- 9 Recommented format
- 10 Diploma
- 11 Structured Semantic Wikis
- 12 Wikipedia article about Wikiversity (not notable)
- 13 Interactive quiz
- 14 Request Programming Assistance
Putting your picture on your user page (rights issues)
- In the next few months, I want to design a template for my students to put on their USER page. On this page, I will mark which assignments they have completed, etc.
- I want students to post their picture. But there is a rights issue. It is foolish to ask students (or anyone) to put their picture on their USER page with a GNU license which will allow anyone to use their likeness for any purpose. That is asking too much of people (as well as allowing identity theft.)
- Therefore, I need a license for person's likeness (photos, sketches or other kinds of portraits) that only that person has the rights to. Robert Elliott 02:10, 6 November 2007 (UTC)
- How can an image file be used for identity theft? User pages are supposed to be used to facilitate constructive participation at Wikiversity. How does non-free content (pictures that cannot be re-used) contribute to Wikiversity? --JWSchmidt 03:05, 6 November 2007 (UTC)
- Sorry, I am not sure I understand. What license do we have which is "non-free content"?
- A few of my students are (or will be) famous people. A GNU Open License will allow anyone to sell products (songs mostly) with my student's pictures on them.
- There is a "For non-commercial use license" but someone has changed the programming of Wikiversity to say that any image using the license will be removed in a few days:
- To me, it is not clear what alternative license can be used by students for their portraits.Robert Elliott 22:47, 6 November 2007 (UTC)
- Robert, you need to be realistic about this. Your students (like every other wikiversity participant) need to play by the rules, and one of the most fundamental rules is that content contributed here is free content. If you want to place restrictions on the use of content, you really should just pay the $x per month and host your own wiki.
- I don't mean to be hostile here, it's just that I get the feeling from your comments over the past year or so that you're just really uncomfortable with Wikiversity's copyrights. We couldn't change those if we wanted to, and to be frank most of us wouldn't if we could. --SB_Johnny | talk 23:10, 6 November 2007 (UTC)
- "There is a 'For non-commercial use license'" <-- Which license? Wikiversity allows fair use. Some companies explicitly allow educational use of screen shots of their software while explicitly forbidding commercial use of screen shots of their software. "It is foolish to ask students (or anyone) to put their picture on their USER page with a GNU license which will allow anyone to use their likeness for any purpose." <-- I'm not a lawyer, but the GFDL does not make illegal practices "allowed". I believe there is a body of law which prevents unauthorized marketing devices that imply someone famous endorses a product. The simple solution for people who want to own their own photographic image is to not upload an image to Wikiversity. I still do not understand how placing a picture of yourself on you user page promotes the educational mission of Wikiversity. --JWSchmidt 00:09, 7 November 2007 (UTC)
- While there have been some issues relating to this with the Creative Commons licenses, there have not been any (at least that I know of) with the GFDL, due to the arduous conditions attached to the use of that license (in the case of the linked example, the billboard would need to have 7 pages of license displayed on it). GFDL is good for publishing and selling things, but terrible for advertising.
- OTOH, if someone is going to use an image for e-vile purposes (like identity theft), they probably won't care what license is on it. --SB_Johnny | talk 09:44, 6 November 2007 (UTC)
- I don't see any reason why a person can't expect their own personal likenesses to be protected to a degree beyond that of the site content. The idea of this site is to provide free educational content, not free access to use and abuse the likenesses of the editors. Beyond that, images can be released under a non-GFDL license, as evidenced from all the CC-BY-SA and other licenses that are used for images already. User pages typically are exempted from the policies of the main-content pages, and there is no reason that this could not be extended to user-page-only images as well. People should be able to use images for their userpages under a restricted license, such as non-commercial, no derivs, all rights reserved, etc. So long as it is not being used in the content (which must be copyleft) and so long as it benefits Wikiversity (albeit indirectly, by helping to establish the identity of the contributor, and keeping a group of editors organized), I see no reason why such a thing should not be allowed. --Whiteknight (Versity) (Books) 23:22, 6 November 2007 (UTC)
- I disagree. All images must use a license which allows them to be freely used, modified, redistributed, etc. for both commercial and non-commercial uses, regardless of what page it's used on. However I think images, illustrations, etc. of the person can be used on their own user page under terms of fair use if they do not wish for it to be used freely. There should be nothing stopping someone from making an image of themselves or otherwise available under less free terms somewhere else and using it here as fair use. --darklama 23:35, 6 November 2007 (UTC)
- The only reason I say what I said is because I dont think this qualifies as a valid "fair use" situation. For instance, an image of my self, being used by me is not a fair use situation. Neither is the posting of a persons picture on a non-content userpage. Fair use is not a license, it is a defense for using a copyrighted work without express permission from the copyright holder in a few specific situations.
- From a different angle, what is the harm in having such images? People want them to be posted for various uses, but people also want to protect their likenesses from all sorts of abuse. If recent events have proven anything, it's that people can never be too careful about the use of their likeness. So long as the images are only used on the user pages (not anywhere in the content pages) they aren't harming anything and the licensing will help provide the protection that some editors would want. --Whiteknight (Versity) (Books) 00:07, 7 November 2007 (UTC)
- Robert, my proposal would be firstly to have uploading an image to your user space as optional. That way, you are not forcing anyone to upload an image - this goes beyond the issue of reuse to that of basic identity management (for want of a better word) - ie the right to be anonymous etc. Secondly, I would allow people to upload an image that represents themselves in some way - like an avatar, a cartoon character, a peaceful scene, whatever - which would allow the person to explore and share their identity beyond that of a photograph. With those basic provisos in place, I would still then encourage people to upload images of themselves, explaining that it can lead to a better learning community and environment. Finally, if they are nervous of reuse of their own images, the best thing to do would be to upload a low-resolution small-sized pic that cannot be used realistically for any marketing etc purposes. Overall, I agree with comments above that images on Wikiversity should be free content - but I agree that the issue of an image of oneself is a tricky one, beyond the mantra of free content. Cormaggio talk 12:39, 8 November 2007 (UTC)
- I know it is technically possible to have the Mediawiki software display images that are located on external servers. For example, at Wikia websites you can use <div style="float:right; padding:10px"></div> to show an image from the file "myimage.jpg" that is hosted at "members.some.website". We could request that this be allowed in the user namespace at Wikiversity. Wikiversity participants could then upload image files to other websites under non-commercial licenses and show those images on their user page at Wikiversity. --JWS 14:21, 8 November 2007 (UTC)
- Well, call me paranoid, but that seems to me to be a rather interesting loophole for certain kinds of vandalism. Would we have a blacklist? Would we need someone to maintain that blacklist all day? Much as I would love to have a "weatherunderground" banner on my page (so people can know how cold it's getting here at SB), I really think allowing external IMGs would be much more trouble than it's worth. If the learning project is about how to set up a facebook page, we can certainly host learning materials about that without compromising on copyrights. Besides, it doesn't matter where the image is posted... if it's on the net anywhere, anyone can use it anyway, assuming they don't care about the copyrights. --SB_Johnny | talk 17:14, 8 November 2007 (UTC)
- Any external images that are displayed on Wikiversity user pages would need to have some value for the project. I'm still not sure about the value of personal photographs on user pages. To some extent Wikiversity is a social networking site. We want Wikiversity participants to form learning communities and we support the idea that Wikiversity participants should make use of their user pages to share their learning goals and objectives with other people. I can understand that some people might feel uncomfortable trying to understand and collaborate with disembodied editors....being able to see a smiling face might be important for some people. I do not really understand the Wikimedia system of URL blacklists/whitelists, but I think we could probably manage any potential abuse of links to external images. I do not know how easy it would be to restrict the display of external images to user pages. --JWSchmidt 20:20, 8 November 2007 (UTC)
- If an image is of value to the project, we should host it here. I do know a bit about how those blacklists work, at least enough to know how much work it involved in maintaining them. We really don't want to go there, and unless we have a very good reason to do this (which we don't), I will absolutely oppose it. This is a dangerous thing, and I don't think we should take on the negative connotations without a very good positive (which, again, we don't have right now). --SB_Johnny | talk 22:38, 8 November 2007 (UTC)
San Francisco / Bay Area Wikiversitans?
Hi, I'm wondering if there is anyone in the Bay Area (San Francisco) who might be interested in facilitating discussion with an Open content-type group in Stanford? That's about as much as I know of this for now, but if you could indicate here (or on my user page or via email), I could put you in touch with a member of that group. Thanks. Cormaggio talk 19:59, 16 November 2007 (UTC)
The philosophy of the basic filmmaking and film scoring courses
Perhaps this will answer some of the questions from above:
- Facebook."
- "a totally new method of communication must be developed which offers extremely easy way to communicate both ideas and data" <-- We now have a place where we can experiment with computer tools that are not currently allowed within the Wikiversity wiki: Topic:Sandbox Server 0.5. For example, we can place QuickTime movies there. --JWSchmidt 15:30, 10 November 2007 (UTC)
- The info about Facebook is most interesting. Thanks Robert Elliott 20:20, 10 November 2007 (UTC)
- There's a large body of literature about online community, and how communities contribute to learning - eg. this paper. In fact this paper points out two sides of the same coin - that photographs and other indicators of 'who we are' serve to bond people to the community and motivate them to participate; but that thinking about how much to divulge about oneself can often lead us to hesitate to participate (ie inhibition can be reduced through anonymity). I think therefore, that people in Wikiversity should be allowed to give as much or as little information about themselves in order participate - but that requiring people to give a certain level of information (ie photo) could reduce participation. I think, Robert, it might be a good idea to outline some of the types of participation/communication you want to facilitate in this community, and to see where/how those can be supported. I would like to explore Wikiversity as a social learning space (it is one of the goals of the project - or certain people at least, including myself) - but would also like to see how Wikiversity could interface with other online spaces, such as Facebook and YouTube. However, I would also heed John's info above in that uploading content on places like Facebook could well be much more exploitable than on Wikiversity (where, at least, you, the uploader can specify how content can be reused). Cormaggio talk 18:30, 10 November 2007 (UTC)
- Three different levels
- As I mention above, there are three different levels. The basic lessons, advanced lessons, and actual projects.
- 1. For the first level, I need to simplify the current methods of uploading completed assignments for viewing by other students. Students need to be able to submit their completed assignments with just one click. That is, there needs to be a single button which takes the student to a special version of the UPLOAD file page which is designed specifically for that lesson. The student only needs to select the file from their hard drive and all the rest should be automatic. Since this is for a specific lesson, all the other information normally filled in on the UPLOAD page is already known and can be filled in automatically.
- 2. I am still trying to figure out what is needed for the second level. It needs to be as easy as Facebook.
- 3. For the third level, students will need to communicate just like at a normal film studio. Roger Corman's film studio is a good example since all of the staff and crew were film students doing their first jobs without pay to get credits on real films. Therefore, I think that this type of communication will be rather easy to set up.
- Again, this is a long way in the future but still it is worth thinking about how this can be done. And how it can be programmed. Robert Elliott 17:40, 17 November 2007 (UTC)
Email functionality for Watchlists on Wikiversity
I recently discovered that the emailing of changes to one's watchlist is not a feature that is turned on in Wikiversity. This seems to me like quite a shortcoming to the idea of creating active learning projects amongst participants because it requires users to have to login -> check their watchlist, to see if changes have been taking place on projects in which they may be participating. I also don't find the watchlist feature to be all that 'user friendly' and this sentiment has been echoed around a few users groups that I've worked with in other MediaWikis. Essentially Wikiversity is then all pull and no push .. which is a bit of a shame I think. I understand why this feature has been turned off in Wikipedia - way too much email would be generated; but it seems to me that the difference in our project is that it requires far greater two-way communication than does Wikipedia and that at the level it is now, not that much email traffic would be generated. What do others feel? Countrymike 20:51, 1 November 2007 (UTC)
- I find it useful to have the extra email preferences available as options at the Wikimedia meta-wiki (shown in the image). We could put in a request to have this activated for Wikiversity. --JWS 01:57, 2 November 2007 (UTC)
- Wikiversity is likely far too high traffic and has far too high an edit frequency to enable this option. While it's likely not higher than meta, meta is (AFAIK) the only Wikimedia wiki with the option enabled, and so enabling it here would potentially double the load on the mail servers. I also think that most contributors to Wikiversity are fairly frequent participators and would be able to check their watchlists once daily or weekly. On the contrary, on meta, most only visit the site once in a blue moon or when they get e-mail about a page on their watchlist or their talk page being changed, which is why the feature is enabled here. I don't really see the necessity for the feature on Wikiversity. AmiDaniel (talk) 02:01, 4 November 2007 (UTC)
- I think we should go ahead and ask for the feature to be activated, even if the response to our request is "no". If email load is a problem, I wonder if there is a way to throttle the system so that each person using it could only get a limited number of emails in a month. --JWS 14:48, 4 November 2007 (UTC)
- I wouldn't have thought that Wikiversity was that highly edited, but I could be wrong and we'd hope that this was increasing -- so I take your point there. Although, editing alone does not necessarily create emails from the system, only people who have the email functionality turned on in their watchlists get email sent to them. Is there a way to get some stats on the average number of edits that are taking place on Wikiversity, so that we may be able to get a better idea of how much email something like this might create? I still think that the model of having people visit their watchlists all the time is a barrier, particularly if we want greater use/uptake from the less technically inclined, who are less likely to investigate the complexity of the watchlist or the MediaWiki software (current watchlist functionality does, as I said before, confuse a lot of people). Almost all other technologies that have emerged around teaching/learning online have some kind of 'push' aspect (usually just email) built in to keep pace and interest going in a project. I fear that while Wikiversity has the 'brand' and the proximity of a large community, it may still languish technically which ultimately may favour another system, another wiki that can adapt. Countrymike 21:21, 4 November 2007 (UTC)
- I just discovered this: Countrymike 22:30, 4 November 2007 (UTC)
- I very much take Countrymike's point - that Wikiversity may end up suffering in terms of participation when other wikis will be offering such functionality. But how about other avenues - such as RSS? I know there has been some work on that in the past, but is it easy enough to set up? Cormaggio talk 12:24, 8 November 2007 (UTC)
- Every Wikiversity "history" page has a RSS and Atom feeds. I suspect more people use email than use a feed reader. --JWS 13:54, 8 November 2007 (UTC)
- The RSS/Atom feeds are pretty awful from what I remember; they tend to bring along all the cryptic diff stuff with them. It's really the 'push' aspect of email that I think WV is missing ... the kind of near instant notification that something in a learning project of which you're actively a part has changed and by whom. This is entirely different than what is required in Wikipedia, where for the most part users are consumers of information that they're either actively seeking or just browsing for -- shouldn't WV be actively encouraging participation, debate, comment rather than the consumption of texts? WV is about projects, which to me implies some kind of sustained participation and activity within a community of users. I feel that in the long run the struggle of WV may actually be to differentiate itself technically from its parents. Countrymike 22:54, 8 November 2007 (UTC)
It is possible to set up some public list, and every two or three hours send a robot to scan it and then, should there be a change, send a personal electronic mail to every watcher. The list should be kept concise. The reading part can be done right away with the pywikipedia framework. And Python does support Simple Mail Transfer Protocol (SMTP) and Internet Message Access Protocol (IMAP); I have not learnt it, but interested participants may read Martelli's book Python in a Nutshell, p.503 for details. Hillgentleman|Talk 22:27, 19 November 2007 (UTC)
And see also Hillgentleman|Talk 22:31, 19 November 2007 (UTC)
Public/Private Key Infrastructure
I wonder if it is time for us to consider participating in one of these public/private key projects[1] or establish our own web of trust (w:Public_key_infrastructure) using free/open software tools and standards so that people who wish to participate publicly and use their earned life expertise and credentials as part of the reputation/trust web can do so reliably? My understanding as of a couple of years ago is that the overlapping circles of peer certification, while not fool proof, do provide a substantial amount of confidence that the encryption keys are actually in use by known individuals with legal identities and responsibilities. Discussion at regarding PKI[2]. Should be fairly easy via Wikimania to propagate a dense web of trust verification around the globe. For example, groups of engineers can know and certify each others' encryption keys and then when someone receives confirmation a safety review has been passed it can be double checked with relatively secure communications. In other words, a kid cannot simply tell his guardian or mentor or teacher or sponsor that his model rocket passed while never sending it in for inspection. The enrollment process is set up with secure handshake requirements such that an uninspected model rocket will not be allowed on the firing range. Indeed, steps might be taken to shut down a child's project by confiscating all tools, data, computers, etc. if they attempt to forge documents or break contracts and launch uninspected models in violation of fire safety requirements. Perhaps interacting online with children responsibly is too big a chunk to worry about now ... it would help Lunar Boom Town to succeed as a project if anyone claiming to be an experienced engineer or astronaut had sufficient interaction with the public key infrastructure that we have a high (not perfect) level of confidence in their assessments and judgement.
Now this is an educational process for all involved and will encourage much mutual tutoring, but it can be a bit boring trying to get something fun, exciting, and motivational completed to maintain and accelerate appropriate momentum vectors. Still, we can live without it if necessary at this time; one nice thing about virtual components and tasks is that they can be managed by contracting or expanding appropriate black boxes to meet individual needs, with a fair amount of forethought, planning, and simple cut-and-paste tailoring of recycled materials. Mirwin 10:30, 25 November 2007 (UTC)
- We could set up a project for experiments with a "Web of Trust", but a fundamental reality is that Wikimedia Foundation wiki communities rely almost entirely on one measure of trust: a user's history of edits. Wikiversity wants to find ways to encourage participation by experts, but there is no proven way to extrapolate from expertise in non-wiki-based activities to the status of "trusted wiki editor". I think we can encourage wiki editors to keep "portfolios", summaries of their wiki-related activities that are guides to their edit histories. Wiki communities also find it useful to keep public records of reviews of the behavior of editors....reviews performed by other editors. It can take a long time to review someone's edit history. I think it would be great if we had a standard format for reviews of edit histories that could efficiently generate concise and trusted summaries of someone's edit history. Such a system of edit history reviews would provide a useful tool for expanding webs of trust in wiki communities. Many people with expertise "in the real world" may not like the idea that they cannot "automatically" convert that expertise into trust within a wiki-based community at a place like Wikiversity. Maybe we need a learning project that explains to experts the fact that they are welcome at Wikiversity, but they have to earn trust here by creating a history of good edits. I think the other side of the equation for getting experts to participate is that a wiki needs to have a workable system that prevents the good editing of experts from being destroyed. The "any one can edit and vandalize any page" approach developed at Wikipedia does not work and Wikiversity needs to find an alternative approach. --JWSchmidt 15:33, 25 November 2007 (UTC)
- Yes I agree substantially with most of your reasoning. I think as we expand and morph discrete yet interconnecting pieces of Lunar Boom Town we will start to get tentative schedules and configurations that we will protect at a specific version number after consensus of a formal design review or audit. That way other related data objects may be updated and we will get a real world configuration management process where different teams are using different versions and configuration and yet must coordinate work and agree how to merge at routine update milestones. Mirwin 17:54, 25 November 2007 (UTC)
Connexions
A similar project is run by a separate organization, Connexions. Their modules are licensed under CC. Has anyone seen that already? Are there ways to cooperate? Dedalus 14:27, 23 November 2007 (UTC)
- Nice find, Dedalus! I will try to look it over a bit. It seems to me that one of the easiest ways to cooperate is to link to their material whenever it is useful. For example: if you are trying to explain electron cloud theory in chemical bonds to a grade school level chemistry student, you probably want to use simple explanations and examples. However, for precocious students you may provide a link to the detailed information available there. This may also help the parents of a student who have forgotten their high school or college level chemistry and just want to help their child complete a one page report for the class on "Chemistry". Most web sites hate to provide links to alternatives, but we are not a commercial organization and do not really need to worry if a learner wanders off, as long as they eventually remember to come back. It can be a bit irritating for computer novices who have not discovered how to use the back button in their browser, but they will eventually either discover it, or memorize and learn to use the search function or directory structure to find the location from which they departed via the web link. Mirwin 02:37, 24 November 2007 (UTC)
- I've been in Rice University and met with the people at Connexions - it sparked a potential Wikiversity and Connexions collaboration but nothing has come of it to date. However, I still would very much like to get it going again. Cormaggio talk 16:47, 27 November 2007 (UTC)
Proposed new hierarchical structure
I propose a new hierarchical structure for editing personnel that will not only allow for a much more rapid addition of content, but will also maintain a high standard of page editing due to the invisible hand of human pride. As far as I am aware, there are two types of personnel currently - editors and administrators.
I suggest that, because this is a university, we should have structures in place to achieve the highest possible learning potential for visitors to this site. I propose that school structure can only be edited by those people with at least professor status. The professors will guide the writing of the editors, who will oversee and educate the editing of newcomers or "students"; everyone else should be considered visitors. The ladder should be very accessible to talented individuals, and the administrators should oversee school structure to ensure none are falling behind. Please read my Proposal and let me know what you think. DónalMcK 16:28, 23 November 2007 (UTC)
- I think people should have maximum freedom to use the facilities in any way they choose, as long as we play safe and repair any damage done to others. I counter-propose (without reading your detailed proposal) that you ask an admin to fork/duplicate the entire existing structure and then tweak the forked copy into your top-down hierarchy. That way we can have chaotic random jottings for the lazy creative types like me, as well as good top-down structured materials for people seeking guidance. I will attempt to look over your detailed proposal soon, but right now I am in the middle of a severe brainstorm and I need to collect my thoughts while it is raining. 71.161.21.39 13:13, 24 November 2007 (UTC)
Thank you for responding. Please do read it. You will see that it still provides for the random jottings that a lot of people enjoy; however, the only privilege you will lose is the ability to edit "school:" and "topic:" pages. I believe that the "trunk" of the knowledge tree should be structured by those people dedicated to a specific area of study. I agree that random jotters should enjoy the freedom of editing any article, but not school homepages. Also, admins are very busy people. If we know that someone is dedicated to a specific school, rather than monitoring the actions of editors in all areas, the top-down guidance system might even work a little better. Please come back to me. DónalMcK 13:28, 24 November 2007 (UTC)
- I personally think this is a bad idea... for one thing we'd need to go around verifying expertise (a lot of work), and second it's sort of missing the point of using the distributed expertise model of wikis.
- Administrators (called Custodians on Wikiversity) do tend to be generalists, but some can also be dedicated to specific areas of study as well. However, I can't really imagine a case where it would be appropriate for Custodians to be steering content using sysop tools. --SB_Johnny | talk 14:33, 24 November 2007 (UTC)
Verification is a problem, I admit. How is verification of admin expertise currently done? Could we not use the same system? I see that the proposal in its entirety is unfavourable. Can members display their feelings on the theory of providing protection for all "school:" and "topic:" pages? DónalMcK 14:54, 24 November 2007 (UTC)
- I am against protection of the above mentioned pages. There are different kinds of protection, with different reasons why something should be protected. But the question would be: why should school or topic pages be protected (besides e.g. protecting against v.nd.ls - and then only for a certain period of time)? Any of the above protections limits access to that page for certain groups. We are also a wiki - and one of our strengths lies in the freedom that everyone can contribute to enlarge the learning benefit. Everyone (see also here) should be given the chance to participate to increase the learning. If access to some pages is given only to some groups, they e.g. need to be asked when changes are needed. Imagine one of them is ill or is in a non-harmonizing timezone: the user wanting a change has to wait, cannot immediately continue her idea, and is interrupted in her flow of (creative) thinking/editing. Another problem may be that limited access also limits the direction a certain page takes. With input from different persons/fields, new impulses can be awakened - what might seem from one's perspective a bad idea, another person can use to improve a topic. I am not sure whether rebuilding hierarchical structures from the real world might not throw obstacles in the way of learning (by doing). I mean, we all probably come from a background where we got "normal" education from institutions. Who knows - in the future children may go to the virtual Wikiversity school (see also Wikiversity:Pre-tertiary portal)? If we start protecting certain pages now, then in the future we will find another thing to protect, and surely, step by step, the freedom might get lost?
- Custodian expertise: I am not sure what you mean by this, but custodians have mentors before being appointed. Also, their edits can be checked at any time. Custodians are - and this is important - just normal users with access to tools. They - as any human - can make mistakes - and hopefully they learn from them over time to increase their expertise.
- Wikiversity counts on trust - imagine we are guest friendly: we leave our doors open, even when we go to bed. ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 15:30, 24 November 2007 (UTC)
- Just to try to be clear: custodians are not required to have expertise in anything, so there is no need for a verification system. --darklama 17:19, 24 November 2007 (UTC)
Duly noted; proposal withdrawn. Thank you everyone for your thoughts, I've learned a lot about the culture here. DónalMcK 01:14, 25 November 2007 (UTC)
- That was quick - I didn't even have a chance to comment. :-) Dónal, while I agree with arguments above that the proposal is flawed, I hope that one of the things you've learned isn't that you can't submit proposals for comment. :-) It's often in these kinds of fresh-eyed proposals that we discover who we are - so keep the ideas coming... Cormaggio talk 17:11, 27 November 2007 (UTC)
Why should we try to diversify our thoughts about humans as a jewel on earth
I always thought about humans: why are they sent to earth, where are they from, until what time will they stay on earth, for what purpose are they there, to whom are they sent and whom do they have to obey? Are they really independent in their decisions (yes or no)? You are invited to discuss; I will show my thoughts later, when I think that someone is there to think about it.
Best regards, keep thinking
- I wonder if we should divert this discussion to some philosophy forum via a link? Personally, I believe in Jesus Christ, who, when asked for the secret of the universe by a hostile religious establishment in an attempt to trap him, told them to 1. love God (god<-->universe) and 2. love others as self. I believe if one unravels the levels and layers of interlinked systems of equations, it implies that if God exists then Lucifer is a reliable first officer, while if God is somehow absent we intend to create him appropriately so we win the ongoing evolutionary battles. In other words, our species has taken conscious control of our own evolution and we intend to rig the universe to our satisfaction and benefit. Mirwin 10:11, 25 November 2007 (UTC)
- We have a page which deals with free will from a philosophical perspective - we also have a theology school. It depends on what exactly you want to explore as to where you might want to start, but either of those two areas seem relevant for now. If you want to set up a learning project, you can think of a suitable name for it, create a page by that name and start editing it. (If you need help with this, see here or in more detail, here.) Cormaggio talk 17:28, 27 November 2007 (UTC)
Recommended format
Hi all, I am trying to put together a few lessons on ship strength, but the more I write the more I get the feeling I should put this in Wikibooks instead of here... Could someone point out an exemplary page and even put it on the front page, so that it is clear how it should look? Keep in mind that teaching someone how to use a piece of software is not the same as teaching something more abstract. A million thanks for any help. Dpservis 10:52, 24 November 2007 (UTC)
- It isn't clear how (i.e. the way) something should look in Wikiversity, because there are many different ways of producing and formatting educational materials. If you get the feeling that it should be in a book format, then perhaps it should be, as you say, on Wikibooks. Some materials on Wikiversity are more interactive or use graphics (e.g. the Filmmaking basics course), and some are much more text-oriented - there just aren't any hard and fast rules. We have Wikiversity:Featured, which gives some sense of materials here (and which is linked from the main page), but it does need updating (as does the main page), and any help/feedback would be appreciated. Cormaggio talk 17:03, 27 November 2007 (UTC)
Hi Cormaggio, and thanks for the reply. I really wonder how this should be structured in order to be more helpful and interesting for someone. Naturally there would be two ways to do it: one would be to keep all the textbook information along with the lessons and exercises, and the other to move textbooks to Wikibooks and keep here only essential information, links and exercises. So this boils down to: what's the purpose - give people links, info and exercises, or the whole package? I do not have an answer myself and wonder what would be interesting and intuitive. I tend to think that the two should be separated: there are people that prefer to first read a lot on the subject and then exercise, and people that first want to exercise and then tackle theoretical issues. But even in that case, should something like a textbook be here or on Wikibooks? Thanks a lot. Dpservis 22:49, 27 November 2007 (UTC)
- Thanks Dpservis, it's never been the intention of Wikiversity to overstep into Wikibooks' domain, so a textbook should clearly be on Wikibooks. However, content that might have been made into a textbook could just as easily be presented as a series of 'lessons' (for example), which is the format that the filmmaking course takes. And of course, there could be both a textbook and lessons-and-exercises (or whatever you want to call them). But the issue here is not necessarily for you to decide definitively where this all should be - content can easily be created in Wikiversity/Wikibooks, and moved afterwards when it is realised that some or all of it would actually be better off in the other project. Both projects work together pretty closely, so this needn't be an either/or question. However, Wikiversity is probably looser in structure than Wikibooks, and it might be easier to develop an intuitive structure here, which, if it turns out to resemble a textbook, can simply be moved. Cormaggio talk 13:09, 28 November 2007 (UTC)
- Yup. Import is enabled from wikiversity to wikibooks now, so no problem writing here, there, or both --user:SB_Johnny 14:27, 28 November 2007 (UTC) (logged in using alternate acct)
Ah, great, thanks. Good-to-know info. Therefore I assume that it's OK to go on with my deployment of the subject and arrange it as it goes, provided that people have interest in it and give some feedback. Dpservis 22:47, 28 November 2007 (UTC)
Diploma
Can we get a diploma at Wikiversity?
--Jonano 12:31, 25 November 2007 (UTC)
- No, please see Wikiversity:Scope#Earning_Degrees, Wikiversity:What Wikiversity is not, Wikiversity:Colloquium/archives/January_2007#High_School_Diplomas. What is the reason that you want to get diplomas? ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 12:37, 25 November 2007 (UTC)
That would be fun, to get a graduate diploma with exams from a university; I have only a high school diploma with some courses in college. --Jonano 13:01, 25 November 2007 (UTC)
- Fun is important at Wikiversity. A basic source of fun is participation in learning communities. If we can start a good collection of learning resources with fun "learn by doing" activities then it should be possible to grow communities of learners who are interested in exploring many different topics. --JWSchmidt 15:43, 25 November 2007 (UTC)
More recent discussion of this topic: User_talk:JWSchmidt#Diploma, User_talk:Robert_Elliott#diplomas --JWS 15:49, 30 November 2007 (UTC)
Structured Semantic Wikis
A query to get some thinking going. Namely, has anyone designed and constructed a structured semantic wiki? One that to some degree combines features of a structured and a semantic wiki. What features would you include? Which would be effectively redundant? How would those elements exclusive to structured wikis be changed or eliminated by the semantic aspects? How would those elements exclusive to semantic wikis be changed or eliminated by the structured elements? Would it be best to merge the two types together, or to design a unique structure using elements made for use in structured semantic wikis?
Additional questions regarding this subject are most welcome, as are proposed solutions. The goal is to provide a basic structure that is sturdy, easy to learn, and easy to implement, and that can also be expanded upon depending on how much the SSW (Structured Semantic Wiki) is being called upon to do. So ask, suggest, propose, brainstorm, confab, hypothesize, theorize, and/or speculate.
Mythusmage 12:46, 24 November 2007 (UTC)
- Can you provide any links to overviews of what makes up a "semantic wiki" or "semantic web"? Are you talking about meta-information (information about the information) of some kind? Is this some kind of adaptive interface that tracks the user and preemptively modifies the content or menu structure, like some advanced software packages do? Mirwin 19:05, 24 November 2007 (UTC)
- Do you mean ? Hillgentleman|Talk 21:18, 24 November 2007 (UTC)
- Yes, Semantic MediaWiki (at that link from Hillgentleman) is a great initiative, and one we might consider adopting for use in Wikiversity. There's a page for brainstorming a metadata system for Wikiversity at Wikiversity:Metadata - please contribute ideas there (or continue the discussion here). Cormaggio talk 17:17, 27 November 2007 (UTC)
- See also : mw:extension:data. Hillgentleman|Talk 05:33, 7 December 2007 (UTC)
Wikipedia article about Wikiversity (not notable)
Someone placed this tag on the w:wikiversity article.
The content of this article may not satisfy the notability guideline for web content; if notability cannot be established, it may be nominated for deletion, per Wikipedia:Guide to deletion.
--mikeu 20:05, 19 November 2007 (UTC)
- Thanks for the notice. I removed the template. There are many editors at Wikipedia who are not aware of the Wikimedia Foundation and its goals. --JWSchmidt 21:19, 19 November 2007 (UTC)
- I've never entirely been able to understand what does and doesn't pass as an acceptable reference to wikipedians. Would this news article pass muster? Or does it have to be a book or other non-internet source? --Luai lashire 21:45, 20 November 2007 (UTC)
- In a way the guy is right... the article is not notable, in fact it kind of sucks. I suggest that we use Edit Wikipedia Week as an excuse to rally the troops around improving this article! Countrymike 04:33, 22 November 2007 (UTC)
- We should start collecting references to Wikiversity in a page like w:Wikipedia:Wikipedia in the media. The current article could use some external references. --mikeu 15:26, 23 November 2007 (UTC)
- We have Wikiversity:Wikiversity in the media, as well as a confusingly similarly titled learning project Wikiversity in the media - both of which need work (particularly the latter). Cormaggio talk 17:35, 27 November 2007 (UTC)
The references section of the w:Wikiversity article has now been tagged with "This article or section needs sources or references that appear in reliable, third-party publications." and an edit summary of "Primary sources are not sufficient per w:WP:CITE" --mikeu 22:30, 8 December 2007 (UTC)
Interactive quiz
Is there a way to set up interactive quizzes for language study? That is to say, showing a word in a language and asking for the corresponding form in English, and the reverse. E.g. English form: To be | Breton form: ?? (Hidden answer: Bezañ)
Ideally, this would be a sort of table format with two columns and a third element, which would be a colour scheme red/green (or a green good tick symbol / red bad tick symbol) changed depending on whether the answer is good or bad. --Luzmael 08:38, 30 November 2007 (UTC)
- Take a look at Test and Quiz for some options. --JWS 15:46, 30 November 2007 (UTC)
- Thanks for the answer. It's a set of tools I will use!
- Now, I wonder if there could be an input box to enter the answer?
- More precisely, this would look as follows :
- Question (eg. "Enter the Breton equivalent of the English words")
- Challenges : A list of English words
- Answers : Input boxes for each challenge
- Result : Green = Right ; Red = Wrong
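- The green/red check described in this format boils down to a simple comparison; here is a plain-JavaScript sketch of just that logic (the function name and the returned shape are made up for illustration — this is not an existing wiki quiz extension):

```javascript
// Compare a learner's answer with the expected form, ignoring case and
// surrounding whitespace, and report the colour for the result cell.
function markAnswer(given, expected) {
    const ok = given.trim().toLowerCase() === expected.trim().toLowerCase();
    return { correct: ok, colour: ok ? "green" : "red" };
}

// English "to be" -> Breton "bezañ"
console.log(markAnswer("Bezañ ", "bezañ"));  // correct: true, colour: "green"
console.log(markAnswer("bout", "bezañ"));    // correct: false, colour: "red"
```

Wiring this to an input box would then be a matter of a change handler that sets the result cell's background to the returned colour.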
- This utility might be worth evaluating. Take note of their cautions regarding the Creative Commons license and others' work. Please report back any insight gained here. Thanks! Mirwin 18:54, 16 December 2007 (UTC)
Request Programming Assistance
The Lunar Boom Town project needs a chaotic budget to simulate the real world issues venture entrepreneurs and engineers face during budget, technical, and schedule planning and review processes. I would like to initialize the "budget" with NASA's actual current planned budget for the next 20 years, then have the participants at the policy discussions (Public_Policy_Debate) add or subtract chunks of money as they reach consensus on what NASA and U.S.G. space policy should be, or what they think it will be, etc. This policy discussion is a different project than Lunar Boom Town, so using the result of the political discussions as input to the planning cycle for the budding venture teams will hopefully simulate real world conditions and allow useful application of systems engineering tailoring techniques. If somebody knows a way to implement this in a web page via script or HTML, I would appreciate the insight or programming. Could this be done with a fancy HTML table, so we simply add a line to a log of changes each time a consensus of a person or subgroup or group is reached to change the projected budget profile? If so, could someone provide a link to a Wikipedia article where something similar is used, or a place in the editing manuals... never mind, I will check those. Anyone have thoughts on a good way to use the results of political discussions as the budget input for Lunar Boom Town to work against? Maybe it will be easier to designate indicators from aerospace behemoths and space agencies worldwide to forecast spending, then revise the data annually as it becomes available. What I am trying to trigger is a planning cycle and a revision cycle that is subject to external chaotic inputs, where everything sort of impacts everything else and effective entrepreneurs and engineers must simply revise as best they can, as fast as they can, before conditions change again. Anyone else's thoughts would be potentially highly stimulating. Thanks!
—The preceding unsigned comment was added by Mirwin (talk • contribs) 01:26, 25 November 2007 (UTC) | http://en.wikiversity.org/wiki/Wikiversity:Colloquium/archives/November_2007 | CC-MAIN-2013-48 | refinedweb | 8,050 | 57.3 |
On 18 October 2016 at 07:54, jorge - w <jwien...@gmail.com> wrote:
> My view is that XEP-0050 is fine as an admin tool, just like XEP-0133.
>
> But what is fine for admins is not always the same for regular users. That's why I think there should be a different interface for regular users, mostly aimed at external applications. Users might prefer :app_short_name to launch them without the need for extra menus.
I think it's somewhat amusing that you're trying to suggest that admins like a graphical form-based interface, whereas ordinary users would prefer a command line. My experience suggests the exact opposite, if anything.

I follow how ":app_short_name" might work. What I don't understand is how one discovers that "app_short_name" exists, what it does, and what parameters can be used, and how those parameters are passed.

Dave.

> El 17/10/2016 a las 16:33, Dave Cridland escribió:
>> On 17 October 2016 at 12:03, jorge - w <jwien...@gmail.com> wrote:
>>> I'd like to discuss the scope of XEP-0050. According to the Motivation section, the objective is to expand Jabber beyond instant messaging. However I see few XMPP clients feature command execution. I wonder if another approach could be considered.
>>
>> A number of clients do support remote command execution, but I agree it's something of a niche feature.
>>
>>> XEP-0245 introduces a different way of executing a command, just by using a sequence of characters (/me ). Why not take a similar approach for executing commands in general that will be addressed to the server? Gajim does it internally, but I mean a standard that does not depend on the client, since it would be implemented at the server side.
>>
>> "/me " is not a command; it's a presentation hint. We documented it mostly because it was in widespread usage already, and not because it was a particularly great design.
>>
>> But let's suppose we do commands entirely by fixed-prefix handling:
>>
>> * We need to have a way of unambiguously identifying commands. We cannot risk collisions, and our normal practise of using XML namespaces to avoid the need for a central registry won't really work here.
>> * This in turn means that - positing a command "example" - we don't know if your "/example " command means the same as mine.
>> * We also need a discovery mechanism for commands. We could of course use "/help ", but we'll need to format the text response carefully. Using a structured discovery mechanism needs support in the client, so that's out.
>> * We'll have no support for structured data. We could, arguably, use further formatting to inject parameters - perhaps a ":" prefix, since we seem to be badly copying IRC anyway at this point. Again, we'll need to have this support in the discovery mechanism.
>>
>> So we're looking at a mechanism whereby we reserve, and hope, that "/help " will respond with a semi-structured (but human readable) command listing which will provide enough syntax cues that we can identify what the command does and how to invoke it, plus - ideally - a standards-based identifier for it.
>>
>> I'm willing to reserve judgement on such a concept until I've seen a specification for it, but do you think that's practical?
>>
>>> I hope I'm not missing something...
>>>
>>> Regards

_______________________________________________
Standards mailing list
Info:
Unsubscribe: standards-unsubscr...@xmpp.org
_______________________________________________
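Worth noting for readers following the thread: XEP-0050 already answers the discovery question raised here through service discovery (XEP-0030) rather than a text prefix. A minimal sketch of the two stanzas involved (the JIDs and the 'list' node are placeholder examples, not taken from this thread):

```xml
<!-- 1. Discover which ad-hoc commands a responder offers -->
<iq type='get' from='requester@example.com/home' to='responder@example.com' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#items'
         node='http://jabber.org/protocol/commands'/>
</iq>

<!-- 2. Execute one of the advertised commands -->
<iq type='set' from='requester@example.com/home' to='responder@example.com' id='exec1'>
  <command xmlns='http://jabber.org/protocol/commands'
           node='list' action='execute'/>
</iq>
```

The responder answers the first IQ with one item per command (a human-readable name plus a node identifier), which is exactly the structured, collision-free listing that a "/help" convention would have to reinvent.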
Read from a file without moving the file pointer
#include <unistd.h>

ssize_t pread(int filedes, void *buff, size_t nbytes, off_t offset);

ssize_t pread64(int filedes, void *buff, size_t nbytes, off64_t offset);
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The pread() function performs the same action as read(), except that it reads from a given position in the file without changing the file pointer.
Returns: The number of bytes actually read, or -1 if an error occurred (errno is set).
pread() is POSIX 1003.1 XSI; pread64() is Large-file support
See also: close(), creat(), dup(), dup2(), errno, fcntl(), lseek(), open(), pipe(), pwrite(), read(), readblock(), readv(), select(), write(), writeblock(), writev()
Tuan Truong
Tuesday, 10 June 2014
Given the heavy use of the popular MakingWaves EPiImage gallery property, I decided to put in some effort to upgrade the property from its old-school jQuery implementation on EPiServer 6 to EPiServer 7. This was at the same time a good exercise to help me get started with Dojo development. Furthermore, since VPP is no longer used and has been replaced with media content in EPiServer 7.5, with help from others I also managed to upgrade the property so that it is compatible with the latest asset management system. Drag and drop of images from the asset management pane has also been added in this new release.
Following the philosophy of contributing back to the community, I hope this provides some benefit to other EPiServer developers.
Property values are also stored in the same format as the original values, so that they can be easily migrated. Here is the list of items within the project:
Here is the structure view of the project:
From top to bottom:
1. The Templates folder stores all the HTML templates for the EPiImage and EPiImage Gallery property controls. EPiImageInfoForm allows the user to edit the image title, description, and image link. Here is how the form looks:
On most projects that require an image gallery property, you usually need this information for the images, e.g. to be used in a slideshow.
2. EPiImage.js, EPiImageGallery.js
These files contain the client-side logic of the editing controls for the EPiImage & EPiImage Gallery properties. Digging into the code, you can see that they are based on a number of different Dojo and EPiServer Dojo controls.
3. EPiImageInfoForm
This handles the client-side logic for the image info editing form shown above.
4. ModuleInitializer & RequireModule
These files are responsible for initializing the connection to the REST store controller on the server side, so that the client control can send requests back to the server.
5. EPiImageGalleryProperty, EPiImageProperty, EPiImageEditorDescriptor, EPiImageGalleryEditorDescriptor.
These files are responsible for declaring the custom properties, serializing their data, and declaring the editor templates for the properties.
6. ImageFile on Media Folder
Since EPiServer 7.5, all assets must be based on a content data class, so we need an image content type for our images as well. Feel free to remove this file from your copy if you already have one in your project.
7. EPiImageStore
This class is responsible for getting the image info from the content data store. It inherits from RestControllerBase, which is available from EPiServer 7.
8. Module.config
You will need to copy the bits that needed from within this module.config file on to your project, to get the properties working:
<assemblies>
<add assembly="EPiImage" />
</assemblies>
<clientResources>
<add name="epiimage.editors.style" path="Styles/imagegallery.css" resourceType="Style" />
</clientResources>
<dojo>
<!-- Add a mapping from alloy to ~/ClientResources/Scripts to the dojo loader configuration -->
<paths>
<add name="app" path="Scripts" />
<add name="epiimage" path="Scripts/EPiImage/" />
</paths>
</dojo>
<clientModule initializer="app.ModuleInitializer">
<requiredResources>
<add name="epiimage.editors.style"/>
</requiredResources>
</clientModule>
For the above config:
8.1. First block: adds the assembly needed to generate the URL for EPiImageStore, which is declared as a REST store, so that the client can request file info from the server.
8.2. Second block: registers the client resources, i.e. the CSS that will be used for our properties (required by the module initializer).
8.3. Third block: the dojo module defines the namespace for the Scripts folder that will be used by the EPiImage/Gallery client editor.
Here is how the image gallery property looks in the back end:
When editing the image collection, the user can also drag and drop images from the media assets pane onto the add-image box.
Here is how the properties can be used on the front end:
@if (Model.ImageCollection != null)
{
foreach (EPiImageGalleryImage image in Model.ImageCollection)
{
<a class="image_view" href="@Url.ContentUrl(image.ImageUrl)" rel="lightbox"><img src="@Url.ContentUrl(image.ImageUrl)" /></a>
}
}
This can also be used in a WebForms project without any problems.
A screencast of how the property works can be found here:
The full source code of this project is available here. If you need the sample project or any help, send a request here.
Tuan
Thank you for publishing this and for the care you've put into it. I can see several uses for this in my projects and for ideas that could stand on its shoulders as well.
Good job!
Hi Tuan
I am following your code and the given steps to integrate it in an EPiServer 7.5 WebForms project, to migrate the old version of MakingWaves.EPiImage to the new one which you provided.
But in EPiImageProperty, when I edit an existing image and try to change its title, description and LinkUrl, it is not saving the new values. Any idea?
Also, in EPiImageGallery I can add only the first image, and not a second or more...
It would be great if you could help us figure out this issue. If required, I can share the sample code I wrote.
Regards
Yagnik
hi Yagnik,
Sorry, I only saw your comment now. If you still need help with this, please email me at tuan.truong at niteco.se. Thanks
Hi Tuan
We are still stuck on this: we migrated the whole site to EPiServer 7.5 and are unable to go live without making EPiImageGallery work in EPiServer 7.5. We have used it extensively in the project.
We are following the code below for EPiImage on EPiServer 7.5.
When I try to set an image on the EPiImage property for the first time, I can add the image, and its information like description, title etc. works.
But after coming back to edit it, I am not able to edit any information. Also, the CMS's publish or auto-save notification isn't triggered.
And with the image gallery, I cannot add a second image (and consequently its information). There is no action or response when clicking the Add Image button.
We tried the above on a clean EPiServer 7.5 installation.
Here is reference blog created by me
So it would be great if you could help us with this. I am available on Skype if you are free to talk.
Thanks in advance :)
Regards Yagnik
Skype: yagnik_v_jadav
hi Yagnik,
I have fixed the issue that you have seen and committed the latest change to GitHub; please update your copy of EPIImageGallery.js.
Thanks
Python day 28
The stock market is going up again today! So let's construct a simple Python program that generates stock price ranges for tech companies, banks, and pharma companies.
Last time we used an abstract class to print out multiple song lyrics; today we will raise the difficulty by assigning a different computation rule to each corresponding class.
Rules for approximating the stock price range:
Technology company: stock price varies between 0.8x and 1.25x.
Bank: stock price varies between 0.9x and 1.1x.
Pharmaceutical: stock price varies between 0.8x and 1.5x.
Python code:
from abc import ABCMeta, abstractmethod

class Company(object, metaclass=ABCMeta):
    def __init__(self, name):
        self._name = name

    @property
    def name(self):
        return self._name

    @abstractmethod
    def get_stock(self):
        pass

class Tech(Company):
    def __init__(self, name, p):
        super().__init__(name)
        self._p = p

    @property
    def p(self):
        return self._p

    @p.setter
    def p(self, p):
        self._p = p

    def get_stock(self):
        return (self._p * 0.8, 1.25 * self._p)

class Bank(Company):
    def __init__(self, name, loan):
        super().__init__(name)
        self._loan = loan

    @property
    def loan(self):
        return self._loan

    @loan.setter
    def loan(self, loan):
        self._loan = loan

    def get_stock(self):
        return (self._loan * 0.9, 1.1 * self._loan)

class Phar(Company):
    def __init__(self, name, sales):
        super().__init__(name)
        self._sales = sales

    @property
    def sales(self):
        return self._sales

    @sales.setter
    def sales(self, sales):
        self._sales = sales

    def get_stock(self):
        return (self._sales * 0.8, self._sales * 1.5)
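A quick aside on the @property / @x.setter pairs used in each subclass: they let you read and update the underlying attribute through a plain attribute syntax, after which get_stock() reflects the new value. Here is a minimal self-contained sketch using a trimmed-down Tech class that mirrors the one above (illustrative only):

```python
class Tech:
    """Trimmed-down stand-in for the Tech class above (sketch only)."""

    def __init__(self, name, p):
        self._name = name
        self._p = p

    @property
    def p(self):
        return self._p

    @p.setter
    def p(self, p):
        self._p = p

    def get_stock(self):
        return (self._p * 0.8, 1.25 * self._p)


fb = Tech('Facebook', 200)
print(fb.get_stock())  # (160.0, 250.0)

fb.p = 300             # the @p.setter updates the underlying _p
print(fb.get_stock())  # (240.0, 375.0)
```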
Call the Stock Price Code
def price():
    emps = [
        Tech('Facebook', 200), Tech('Apple', 370),
        Bank("JPMorgan", 99), Bank("BAC", 26),
        Phar("Pfizer", 30), Phar("Moderna", 70)
    ]
    for emp in emps:
        print('The range of %s stock price is %s dollars in 2020 撒花!⊙o⊙' % (emp.name, emp.get_stock()))

price()
Output:
The range of Facebook stock price is (160.0, 250.0) dollars in 2020 撒花!⊙o⊙
The range of Apple stock price is (296.0, 462.5) dollars in 2020 撒花!⊙o⊙
The range of JPMorgan stock price is (59.4, 108.9) dollars in 2020 撒花!⊙o⊙
The range of BAC stock price is (15.6, 28.6) dollars in 2020 撒花!⊙o⊙
The range of Pfizer stock price is (24.0, 45.0) dollars in 2020 撒花!⊙o⊙
The range of Moderna stock price is (56.0, 105.0) dollars in 2020 撒花!⊙o⊙
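One detail worth noting about the classes above: because get_stock is declared with @abstractmethod, the Company base class cannot be instantiated directly — only concrete subclasses that implement it can. A minimal self-contained check (the class names mirror the post, but this is an illustrative sketch, not the full code):

```python
from abc import ABCMeta, abstractmethod

class Company(object, metaclass=ABCMeta):
    """Minimal stand-in for the abstract base class in the post."""

    def __init__(self, name):
        self._name = name

    @abstractmethod
    def get_stock(self):
        pass

# The abstract base class itself refuses to be instantiated:
try:
    Company("Generic Corp")
except TypeError as err:
    print("refused:", err)

# A concrete subclass that implements get_stock() works fine:
class Shell(Company):
    def get_stock(self):
        return (0.0, 0.0)

print(Shell("Example").get_stock())  # (0.0, 0.0)
```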
Note: I checked the stock prices and selected reasonable multipliers to fit each market value; these are NOT accurate stock prices!
Happy Studying and be Rich!
!
Fourth COMMENT!!!!
*5TH...
As soon as I posted your message showed up -.- same thing happened to you didn't it...
Same here! Scatter is my favorite ;D
Who's arguing? @fmmmlee
Back to where?
And it's 5:23...
ChaChaspicy: It's not in classes...
Really?! :P I mean that discussion is still on sets, but it's not in classes(on the dashboard)...
9th. I think you should be able to speak the term (make it able to be turned on and off) if your computer has a microphone or on your phone. Quizlet is awesome! :)
I WANT THE CLASS DISCUSSION BACK NOW!!!
yeah me to!
i like the idea of a corkboard tht would b so cool!! maybe some more games would be helpful because u kinda get bored with the ones we already hav after a while. if u r doin a cumulative test learn mode really helps a lot!!!
30th comment
btw
Yeah the corkboard idea is cool.
Please put back the thing where we can do class discussions and stuff. That would be awesome, and I think many students would benefit from it.
I use scatter a lot it is very useful to me and my class has a competition to see who can get the best time and i usually win
I agree with bengals02. You should make it so that you can speak the term. It would make it
SO much easier than having to type everything!! PLEASE MAKE IT SO YOU CAN SPEAK THE TERM!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
I've found Quizlet most helpful in learning a language (for me, it's Latin). However, it also helps me memorize definitions for Science, Logic, and several other subjects. I love how easy, elegant, organized and fun it is - it saves me a lot of hassle.
As for tips... I've found that putting in pictures from Flickr can help me remember difficult words (images stick in your mind) and I've also found that using Speller is a great way to begin familiarizing yourself with words for the first time. It's easier and more fun than saying the words over and over again to yourself, and hearing the Quizlet lady say the words often helps me remember them.
Thanks so much for making Quizlet - it is such a wonderful study tool!
THIS ISN'T A CHAT ROOM!!!!!! you guys have no life
Yep I just hope quizlet reads it
This is basically the latest chat room until some type of discussion gets put back up.
My Long Suggestion and Stuff Post:
1. I like the cork board idea.
2. You guys should put some sort of discussion/chat back onto the main class page so that all of the class can communicate in some form. I have seen multiple scenarios where the teacher/admin is needing to communicate with the students and the students are unable to know what is going on. Maybe you can have a mature language filter? Or maybe like a more complex chat system where you can actually have multiple folders/rooms per a class. Just an idea.
3. Another Idea: You should add some sort of multiplayer-real-time game sort of thing to make studying more fun for younger kids that need to get used to studying if they have high school/college ahead of them. Or maybe like a game where you make up questions for like a study partner or friend and they answer them real time, that way studying is more of a cooperative thing to make it more interesting.
4. Possibly a Calendar/Bulletin tab for a class: This would be excellent for those classes who are actually school classes. This would be great for those teachers that give out hw assignments and also it would be great to tell announcements to the entire class.
5. Quizlet is huge. I use Quizlet a ton in my everyday life for Studying, working on hw, and communicating with my fellow classmates. If you guys can enhance the learning experience and possibly make it more interactive and maybe some different modes/games your visits per a day would go right up because you would be getting so many more people. Also people who are on Quizlet now, will be getting on Quizlet more often. It would only help your business and make the entire user experience more interesting.
6. MUST HAVE MULTIPLAYER GAME BACK! I noticed the update back in May of 2012 and unfortunately I was not on during the time of the testing of the game. I understand there were a few disadvantages to it such as Vulgar language and off topicality but there were so many advantages to those users that actually wanted to use it as a learning experience. I just absolutely wish that I could've tried it out because it looked so fun but I highly recommend that you guys work on it a little longer and then put it out there another month for some more testing.
7. I understand you guys work very hard and program probably most of the day. I know enough about web development though to know that what matters the most in a company or organization is the listening to the users. I am very impressed with the feedback option and find it very convenient to post your ideas and comments about certain things. Unfortunately I am sure that many people would agree that the last update that came out was a very big change. This was a somewhat large problem for those classes that relied on the chat feature everyday. I enjoy using the new GUI and find it somewhat easy to navigate. I would like to speak on the behalf of many people right now and say that the discussion feature was very good and probably needs to be put back on . I apologize if this comes across rude or a command but I would predict that it would make the entire Quizlet experience more functional and easier to use and help out your classmates. If you are actually working on the discussion box right now and refining it or something then I and probably most of us are very happy.
Thank You and Have A Great Day.
Thanks for sharing, scripturegirl!!
Thank You tjacks99 PARAGRAPH MAN FTW
Thanks for the great ideas, epicryan14. You've put a lot of thought into Quizlet---that's awesome. How about the cool ways you use Quizlet now? Are there any ways you've come up with yourself to study on Quizlet that we may not have thought of?
I'd like the class discussion back. It really helped me when I had a question or needed help. I also think that the multiplayer game would be really helpful. Some friendly competition between friends never hurt and I think that it would really help with studying.
I really like Quizlet, but there is something I'd like it to have: the ability to permanently combine some of my sets. Right now it is only possible to combine them temporarily. For example, I created a new set with my new vocabulary, but once I've learned those words I'd like to combine them with a set that I learned before.
I use quizlet in many ways. I use it to study and communicate with my fellow classmembers. (or should I say ex-communicate) anyways, I would recommend if you are in College or High-School, to create a class with your friends or class mates and have everyone post things about the class after a day in that class.
You can even have multiple classes for different subjects.
You can post sets about :
Upcoming Assignments/Homework
Notes over that day
Reviews for Upcoming quizes or tests.
And maybe like an announcement set where someone manages it about announcements in a class.
Quizlet will help your grades in all of your classes if you use it right.
Keep up the good work!
~Epicryan
This page is averaging approximately 15 comments per minute!
Before the update that removed discussion, the average was 0.13 comments per minute!
That is an increase of over 11,000%!
Wow!
Rosebud6980: We all are very happy with your love of music. Thank you for your comment.
@sevenluckynumber As of February 13th 2013, The high fives are currently updated and are working.
Nice facts epicryan! That proves that most people want the class discussion back! REVOLT!
Thank You for the feedback guys! I am a genius with an I.Q. of 167. I appreciate all of the feedback. Goodbye for now. I will be needing to go program in Python for a little while. I am working on a chat component, goodbye.
Well, I use it different for different subjects!
Thank You Tjacks99.
The posts per minute are increasing at a hastening pace! If we can show Quizlet helpful evidence to bring back Discussion, we must! We are doing very well supporting our points of view!
If I am cramming for vocab test, for example, i would do scatter three times or more times to get familiarized with word choices. Then I would do learn section. (i have a good memory so I usually only do learn once-but if the words are hard I do twice or more) Then I would know all words!
So far here is my complex recursion function for using a Spanish to English Conversion in a chat based platform.
It is currently written in Java:
import IIC1103Package.*;
public class jjj {
public static void main(String[] args) {
int[][] a = archivoAMatriz("Hola");
imprimirMatriz(a);
estimacion(3, 3, "Hola");
int[] asa = {3,5,6,1,6,1};
System.out.println(hallarminimo(asa));
}
static int[][] archivoAMatriz(String input){
ArchivoDeLectura a = new ArchivoDeLectura();
String aux = "";
String [] aux2;
int filas = 0;
int columnas = 0;
if (a.abrir(input)) {
aux = a.leer();
aux2 = aux.split(",");
filas = Integer.parseInt(aux2[0]);
columnas = Integer.parseInt(aux2[1]);
}
else{
Usuario.mensaje("Ese archivo no existe");
}
int[][] matriz = new int[filas][columnas];
while (!a.EOF()) {
for (int i = 0; i < matriz.length; i++) {
aux = a.leer();
aux2 = aux.split(",");
for (int j = 0; j < matriz.length; j++) {
matriz[i][j] = Integer.parseInt(aux2[j]);
}
}
}
a.cerrar();
return matriz;
}
static void imprimirMatriz(int[][] a){
for (int i = 0; i < a.length; i++) {
for (int j = 0; j < a[0].length; j++){
System.out.print(a[i][j] + " ");
}
System.out.println();
}
}
static int hallarminimo(int[] a){
int ret = 0;
for (int i = 0; i < a.length; i++) {
ret = a[i];
if (i>0) {
if (a[i]<=a[i-1]) {
ret = a[i];
}
}
}
return ret;
}
static void estimacion(int simulaciones, int n, String output){
int[][] a = archivoAMatriz(output);
int aux = 0;
int cont = 0;
int minimo = 0;
int[] aux2 = new int[n+1];
for (int i = 0; i <= simulaciones; i++) {
minimo = hallarminimo(aux2);
if (cont>=1) {
if (hallarminimo(aux2)<minimo) {
minimo=hallarminimo(aux2);
}
}
for (int j = 0; j <= n; j++) {
int k = Aleatorio.entero(0,a.length-1);
int j2 = Aleatorio.entero(0, a.length-1);
if (a[k][j2]>0) {
aux+=a[k][j2];
aux2[j] = a[k][j2];
if (j==0) {
cont = 1;
}
}
}
}
System.out.println("El costo promedio de los viajes es: " + aux/(n*simulaciones));
System.out.println("El costo mìnimo de los viajes es: " + minimo);
}
}
Sorry everyone for the Showing off. I am going to go back to work on my recursion. Sorry Guys!
What in heavens name was that
@ChaChaspicy : Are you wondering about my recursive method written in Java?
Yup.
Update:
Unfortunately the Comments per a minutes have fallen down to 5 guys.
@ChaChaspicy: Do you know how to program in any language? If so I wrote that in Java. I have been working on it for a while.
brb
Ok. epicryan14: I haven't the slightest idea about what you are talking about. Are you, like, some old wise person or something?
Oh. Well, computers hate moi
How would you know who epicryan14 is?
This is pathetic. Why am I even here?
During exams, take all the chapter sets and combine them into a new set.
I think it would be helpful if we could "subscribe" to set makers, so that when my teacher makes a new set I can find it easily. I currently have to type in his name every time.
I'm disappointed but not really mad about the No Class Discussion.
-It would be so cool if I could create new sets on my iPhone 5 or iTouch 5, I know that the feature is coming but I can't wait.
-The functionality to make folders for our sets so that they're not so disorganized.
By the way the Quizlet app on the Apple App Store is great.
This is the way that I like to use quizlet.
1.
I use the learn tool with learn prompting with the definition ignoring spaces, capitals, and punctuation. This allows me to sort of familiarize myself with the terms.
2.
I use the learn tool again, except prompting with the term ignoring spaces, capitals, and punctuation. The key thing here though is not to use "Override I was right". This really helps me with learning the definitions word for word.
3.
I keep on using the learn tool until I can get all the terms correct every time.
4.
I will use scatter or space race or the test to finish up.
The one thing that really bothers me is that, with the new update, quizlet removed the options to ignore punctuation, caps, spaces, and stuff in parentheses (srry for bad grammer). Also, quizlet should add a few more games and study tools.
Okay everyone, just my two cents. A lot of the comments above have some great ideas and I believe that they should all be examined for inspiration. Personally, I really like the idea of bringing class discussions back and the cork board sounds like a great implementation of that. Additionally, I didn't see any mention of mobile apps. I would love to not only see an iPad sized app (instead of the 2x version of the iPhone app) but also wider compatibility across several types of devices. With so many people on the go, the more support that is offered the greater the usage will be. For me, it would be great to have an app that I could take on the bus ride to school to polish up on my last minute studying before test day. Additionally, offline support would be a great addition for such an app, as it would allow people to take Quizlet on an iPod without WiFi. Finally, I heard mention of a wider array of games. I would love this! For me, I have found that if a fun way of learning is provided, it helps me to enjoy learning the material more and prevents me from zoning out.
I think that we should have group scatter where everybody can work on a scatter for a set and the top 2 fastest go against each other.!
hhhhhhhheeeeeeeeeellllllllllllooooooooo i would like the ability to do stuff
Hello everyone...
We need more games! Also.. love scatter!
rosebud6989... I love your picture ..... luv Harry potter 4eva but have to admit the corkboard is cool
NEVER MIND you changed it.
Ok, so this is going to happen in the future....
Well then, My new post:
1. I disagree with some points in that post.
-I understand the problems with the chat.
-There are many ways to get around this problem.
-Add a Hide Discussion Box option for admins/teachers.
-Make a "prove you're not a robot" security filter to make chatting less social and more studying-related.
-Make a simple regex filter on the chat.
-People should be able to choose how they use this wonderful resource. If they choose to use it in a school way, then that is how it should be. If they want to use it in a social way, then it is their problem if they get a bad grade on the test the next day.
-I completely understand the reasoning behind this: you want this to be a more educational site than anything else. The only problem is that no website will ever be fully educational. There will always be those people who take it socially.
2. This comment page will be full of unrelated posts.
- I would consider this a problem, since when a Quizlet staff member is looking through everything and wants to find some feedback, they will have to sort through all this unrelated information and people who are actually using this to chat.
- I feel sorry for those teachers. I have seen numerous incidents since Thursday of teachers and admins who relied on the discussion box for an educational purpose. I even witnessed a person unable to get their assignment done, and another person who had to switch to worksheets instead of using the class discussion.
3. I understand that the set discussion is still there, but it just isn't the same. When the update went out, most of my classes were in absolute confusion, not knowing what was going on and with no way to communicate. I myself had tests the next day and had questions about a review that I needed answered; I believe many people would agree with me.
4. I absolutely utterly understand your point of view Quizlet. I understand that the discussion box will probably never be seen again, but because of this, I am sure we will all be hoping for some sort of new communication on the class page, whether that be, another chat box with enhanced language filtering, a calendar with an optional chat plugin, bulletin board, cork board, drawing type of thing, voice chat, video chat, the possibilities are endless!
As a final thought, I congratulate you guys on updating the website's layout and the excellent use of CSS. I am very impressed by the newest layout and hope to see many more updates soon. I am sure many people agree with the above comments and hope that some form of communication will come soon.
Last Thought:
Disadvantages of Class Discussion Box:
-Vulgar Language
-Spam
-Unrelated study topic while people are trying to be on task.
Advantages of Class Discussion Box:
-Communication throughout the class
-Answer questions by your real class mates about upcoming tests and quizes.
-Tutoring your classmates or getting help yourself.
-Admin distributing valuable information about class announcements.
-Enables the use of giving your class a grade on discussing a topic.
- Allows the class to tell their personal view on a topic and how they view it.
-Allowing a teacher to explain a topic to you.
-Enhances peoples social skills.
-Enhances and helps people with cooperative learning.
-If wanted allows being on a study website fun by being somewhat social and giving your self breaks while still being more on task then going somewhere else to play video games.
Those were my points of view and are not necessarily true. I appologize if this comes across rude. I would just like you guys to know my point of view from a dedicated user.
Thanks
-epicryan
i know!
make it 20,000!!!!! PLEASE
make separate sections for different sub-groups like homework, tests, quizzes, etc.
@epicryan14 that wasn't even recursive code and I know you didn't write that because you are in Spanish I. Also you don't know how to code in Python either :/
I think it would be nice and helpful, but not necessary, to add boldface, italics, underlining, and possibly even highlighting to the set creation page. This is simple to do in HTML, and would add emphasis to sets when you think you need it. Plus, the italics would be hugely helpful in writing book titles such as Romeo and Juliet or The Odyssey, both of which should be italicized. I know I have suggested this before to the programmers through the comment page, but I just wanted to say that it would be nice. Thanks! I think Quizlet is brilliant the way it is!
I second the idea above me and I was wondering if we could record the way we want things to sound when we use Quizlet for foreign languages. For example, if I wanted to learn Mandarin, I think it would be more helpful if I could ask my friend who speaks it to record the words I want so I could hear a voice I am familiar with.
When I first joined quizlet, there was a sidebar on the right side of each set that had words people missed the most, and it was good to see that so I could see if my classmates were also having issues with the same words/topics as I was. I wish that would come back! Also, class discussion is a must. Really not liking the update, looks too much life Facebook!
I like the quarkboard idea. I like the class discussion too but my class doesn't use it
It would be great if we can have Multiplayer mode back!.
Oops, sorry, accidentally posted the same thing twice.
Who likes my idea?
Did anyone hear , my idea?
Helloooo? Anyone?
Well, I hope Quizlet will be able to create more game for each set, it's a bit boring playing only scatter and space race...
It's a good website and I like it though it's just a bit boring...
^^
actully, I found this more like a chat room than a comment room...
=_=
well, there are alot of ideas that I like up there, hope Quizlet will improve a bit.
^^
i still say we should do the corkboard!!!!
Okay, Troy, you have no idea what hacking is. Just stop talking, and go do something else other than whining and crying all day.
Ryan, there was no way you wrote that, and you even did the multi-dimension array wrong, which I doubt you'd know what that is.
Bring back old layout and multiplayer game...
multiplayer was really helpful!
Yea please put up a multiplayer game! It would be a easy way to study
m
mu
mul
mult
multi
multip
multipl
multipla
multiplaye
multiplayer
multiplaye
multiplay
multipla
multipl
multip
multi
mult
mul
mu
m
*funner way to study
m
mu
mul
mult
multi
multip
multipl
multipla
multiplaye
multiplayer
multiplaye
multiplay
multipla
multipl
multip
multi
mult
mul
mu
m
). This tags each side of each card (which offers you the chance to use RWA with both languages present in your set) with a "RWA" (or whatever name you guys decide suits this mechanism the best). I'd have a window pop up with generated words similar to the word selected by the user and let him work it from there. The RWA should be adjustable so that you can search for several words to combine and associate with the word (in the case of the word being extra long).
Hope that wasn't too open-minded or ambitious, and clear enough for people to get the gist.
*I apologize, I forgot to proofread and edit in my previous post, I did it now though :)*). Hungarian both sides of each card (which offers you the chance to use RWA with both languages present in your set) with a "RWA" icon next to it (or whatever name you guys decide suits this mechanism the best,). I'd have a window pop up with generated words similar to the word selected by the user and let him work it from there. The RWA should have adjustable settings so that you can search for several words to combine and associate to the word (in the case of the word being really long).
Hope that wasn't too ambitious and that it was clear enough for people to get the jest.
I'd suggest you work on some algorithm which tracks how many times someone has studied a set, and recommends the sets or words the user seems to have trouble remembering, based on his previous errors: something like spaced repetition or whatever. I sometimes make sets and forget to study them lol.
if that's hard to implement in real life or demands too many resources, Quizlet is still pretty awesome for studying languages. it's really helped me a lot. thanks a lot.
SPACE RACE RULES!!!!!!! but it could be cooler if you had something like lasers shooting out...
I use Quizlet to study science mostly. I use all the study modes. :) Thanks for all your work Quizlet!
re-add discussion
My favorite mode is "test" mode.
Tip 1: I do learn mode once through on each of my sets in order to learn them and become familiar with the content. Then I do "test" mode combining all my sets that are going to be on the particular test that I am studying for.
The reason I don't put all my vocabulary into one set is that -......
Tip 2: I have found that it is easier to learn and more motivating if you break up say 200 words into multiple sets of only 20 or 30, finish learn mode than combine them in test mode.
Tip 3: I have found that you can only combine up to 28 sets at a time. So after I learn each set and I know that all the content is right, I combine them in groups of 28 or less and make a new set. Then I can combine those groups of 28 or less again in order to get all my test content into one test in test mode.
How do I use this?
the owner to be able to degrade admins
PLEASE PUT MORE GAMES AND TURN SOME INTO MULTIPLAYER GAMES, THANK U
OH, PLUS PUT CLASS DISCUSSION BACK ON PLEZ
North Carolina is the best state! #1 in Perfect!
by the way, you should really work out a way to delete a combination. it would be helpful
1. Just keep doing what you're doing
2. BRING THE DISCUSSIONS BACK!!
3. Work on that multiplayer thang you got goin on!
ok, first the games could use some more interesting gameplay mechanics. second the multiplayer game sounds great, but try to also get the tron game, it would help the male population on quizlet, who are only here because they have to study. That would bring me back. also, make rewards for the constant members, like log on every day for a year, get free quizlet plus, or something. rewards are great, i just love getting rewards, and it feels like the site cares about us. not that you don't, but it would prove it! the sentence game sounds fun, but make it more real time, like people make a group and meet in a lobby, then start the game. additional people joining would automatically go into spectate mode until the next round, but the players would create the sentences, vote, and most votes wins! then they can show off how many times they won in the lobby. the lobby would have a chat log, the names of all the participants, and how many wins they have. ok that was a little much, but i just had the best ideas and finally got them out, WOW! that was intense. Oh well, better get back to studying! GO QUIZLET!
I don't know if somebody reads this or not, because I've been posting this tip for some time and have had no reply. That is why I'm going to write about it once again.
Most of us learning words probably go through Flashcards first to familiarize ourselves with the words, and then do Learn mode or Scatter and the others. Here's the trick: the option of rejecting already-memorized vocabulary in Flashcards mode. If your set has, e.g., 50 cards and you managed to memorize 30 of them (20 left because they were more difficult), then, I believe, it is pointless to revise all of them. It would be better if we could concentrate on the ones we have difficulty memorizing. That way, the 30 already memorized are rejected and the user tries to memorize the cards he or she has problems with. Like with real flashcards: if you learn some cards, you simply put them away and concentrate on the others. And after some time you go back to repeat the previously learnt ones.
In my opinion, this would be a very helpful option. Please take this tip into consideration. I also leave this tip for others to comment on...
Thanx!
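The behavior this tip asks for can be sketched in a few lines of Python (purely illustrative; the names and structure here are hypothetical, not Quizlet's actual code):

```python
# Illustrative sketch of the suggested flashcard behavior: cards marked
# as memorized are set aside, and review only cycles over the rest.
deck = {
    "perro": "dog",
    "gato": "cat",
    "pez": "fish",
}
memorized = set()

def mark_memorized(term):
    memorized.add(term)

def cards_to_review():
    return {t: d for t, d in deck.items() if t not in memorized}

mark_memorized("perro")
print(cards_to_review())  # {'gato': 'cat', 'pez': 'fish'}
```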
thx tht is a good trick!!
I am seconding what drazion6690 said. I think it would be fun to have badges, patches or some sort of reward system for studying. Like for example, someone who completes 100 test and learn answers could get a badge or electronic reward. You could have a different one if someone finishes learn mode on 50 sets. You could have one for someone who gets 100% on Speller. And the list could go on and on. This would encourage people to study more to earn more badges or patches.
@MarcinGryz: To me the Learn mode already incorporates "getting rid of the cards you already know." Once you get to Round 2, the cards you are left with are the ones you didn't know or didn't answer correctly the first time.
If you want to study only the ones you don't know, there is a screen at the end of Learn mode where you can choose to start over from a particular round, which would not give you the cards that you answered correctly in the previous round. It would only give you the cards you answered incorrectly or did not know. Additionally, Learn mode helps you actually learn the material, since you have to type out all the ones you don't know.
so_sew2
I know there is such an option in Learn mode, but what I'm getting at is the fact that in Learn mode you have to type in the words; if you want to go quickly through your set without typing anything, Flashcards mode is the best option. If Flashcards could set aside the words we already knew, we could focus on the ones we cannot memorize. That's the point. To be honest, I have used many computer programs and searched the net for similar flashcard websites to learn vocabulary. Only one of them had something similar to what I'm writing about. Nevertheless, Quizlet is the best one, and adding such an option to Flashcards would only improve this fantastic website.
I like Quizlet because I can learn in a fun and easy way. For example, Scatter and Space Race make you think faster when having to memorize things. However, Quizlet would be more fun if you guys added more games besides scatter. Plus, personalization would be cool too. Also, it would be cool if Quizlet added sections in the middle of Flashcards to organize them. For example, let's say I want vocab and people in a different spot. I would type a vocabulary word on the "Vocab" section.
guys, they're already making ideas for the new multiplayer game, you guys should look at the other "Inside Quizlet" topics
I'm interested in a supermemo style learning mode where you can go through the flash cards and tell Quizlet whether or not you got the answer correct and then it will show you the cards you miss more often and space out the ones you get right for long term retention.
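A minimal model of the spaced-repetition behavior described in the comment above (a Leitner-style sketch; the box counts and intervals are made up, and this is not Quizlet's implementation):

```python
# Rough Leitner-style model of the spaced-repetition idea above:
# a correct answer promotes a card to a box reviewed less often,
# a miss demotes it back to box 0.
INTERVALS = [1, 3, 7]  # days until the next review, per box

def update_box(box, correct):
    if correct:
        return min(box + 1, len(INTERVALS) - 1)
    return 0

box = 0
box = update_box(box, True)   # promoted to box 1
print(box, INTERVALS[box])    # 1 3
box = update_box(box, False)  # missed: back to box 0
print(box, INTERVALS[box])    # 0 1
```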
Let's say you're a teacher and all of your students have Quizlet accounts. (I'm not one, but this would be cool.) It would be cool if there was a way to assign a certain flashcard set and test the students on it, then get their scores. I think that might be a helpful feature for some people.
Did anyone hear my other comment? about the competitive multiplayer speller game for each set?
Quizlet should add that for each set so that the guy who teaches the class knows who's doing good, and who's falling behind.
Tip 4: Another thing that I like to do is: promote Quizlet!
I create a class and sets for each class that I am taking. Then I email the link to all my classmates. This way I will be letting those who don't know about Quizlet know about it. I also get some more people on Quizlet who are taking my class so that I can have some competition, like on the scatter and speller scores:)
Ok, this one's a monster! I've been using Quizlet for a few years now, and I've come up with many techniques to help me study any subject effectively on Quizlet. Here's how:
1) Taking advantage of the term/definition aspect of Quizlet -> The basis of
my Quizlet technique is to make questions for myself. Since, in the "Learn"
function, you are prompted with the definition and are expected to give the
proper term, I put the question for myself in the definition space and its
answer in the term space. For example, for one term, my definition will be
"What important figure was born in 1982?" and the term would be "Sean
Bernett." So, essentially, I use Quizlet to make practice tests for myself.
2) Taking advantage of the properties of the "/" to make versatile lists ->
Through experimentation with Quizlet's mechanics, I have found that objects
in a term separated by "/"s are valid in any order. In "Learn" mode, for a
definition for which the term is "A/B/C," "A/B/C" is a correct answer, but
so are "B/C/A," "C/B/A," "A/C/B," "B/A/C," and "C/A/B." I use this property
to help me make lists for which order is not important. For example, I might
create a question for myself, "Who are the members of your family?" for
which the term would be "Dad/Mom/Sister". Now, when I go into learn mode,
any permutation of "Dad/Mom/Sister" will be correct.
3) Taking advantage of the exclusion of anything in parentheses in "Speller"
in order to learn different aspects of a language within one Quizlet set ->
I take Chinese in school. The Chinese language is written using characters,
but is voiced using a tonal structure. Since Westerners have trouble with the
tonal structure of Chinese, a method of writing down proper pronunciation of
Chinese words has been developed, called Pinyin. Now, for my tests at
school, I need to know the proper character, pinyin, and definition. I
wanted to use Quizlet to test myself on all three somewhat separately,
without having to have several sets for each lesson I was to be tested on.
Through experimentation, I found that any items that were in parentheses
within a term would not be read by the computer in "Speller" mode. For
example, if the term was "Happy (day)" the computer would only read out
"Happy." However, things in parentheses will not be ignored in learn mode.
So, my system has been, for each term, to include the Chinese character
followed by its pinyin in parentheses (e.g. "你(ni3)"). While I'm in speller
mode, the computer will only ask me for the chinese character, and I can
isolate my knowledge of that character in "Speller" mode. Then, I use learn
mode to learn the character and its accompanying pinyin.
4) Using "*" to bold text -> Through experimentation, I have found that any
word surrounded by a single * (e.g. "*the*") will become boldfaced. I use
this to emphasize certain parts of my question that are important.
5) Using the "Override: I was right" button -> Sometimes my complex
techniques don't quite work to perfection, and this button is indispensable.
I honestly think that it would be a good idea to have school sections like for Hill School Of Forth Worth (my school) has a certain section for that school and admin and teachers can join that section and only teachers can post new quizlet notecard sections and students can be admitted to these class sections.
you should be able to upload audio. like music. For my humanities class we have to memorize music and learn the author, dates, etc. and memorize them as well. it would be helpful and useful if we could upload audio to help learn as well!
Another good way I study with Quizlet is taking the flashcards and printing them. Then I can practice on the way to class.
Agreed, Fredo00. That would indeed be awesome.
I love using learn and matching game thingy
the tests are also super duper
Only one thing. maybe when you get a question wrong you would have to write the right one? Because I know it helps me to type/write what I don't understand
To be honest.... I think that it should have more games. Games are fun, right? I mean, who doesn't love games.....not that there's anything wrong with Scatter or the other one but...I think there should be more games.
That would be cool if we had more game and exciting and classic on the Uncle Sam ;)
we should be able to color sort the flash cards!
I only have one suggestion: do you guys mind doing something about the descriptions? Whenever I make a set, I usually put in the description something like, "Make sure you review Directed Readings A, B, and C," or "GUYSSSSS I DON'T PUT THE SAME TERM TWICE WITH A DIFFERENT DEFINITION TO TORTURE YOU! PLEASE UNDERSTAND! :D," or "Read the chapter. I've noticed that Mr./Mrs./Ms. (XYZ) takes questions from (ABC)."
What makes me a bit frustrated is the fact that people at my school talk about ArdentKnight1250 (which is me. I've hidden my identity from almost half of the school even though I've made a good amount of sets.) and they complain about how they fail some tests because of my sets. And I keep telling them, "You know, looking at the description helps..." and they don't listen to me. I'm also frustrated at that because they blame ME for THEIR failures. (I mean, only I can do something about that. I just wanted to vent it out.)
ONE MORE THING I FORGOT!
I think we should have an option to pick which games we want our sets to have. From past eavesdropping (yeah... I have to do that...) I've heard that people do Speller for vocabulary tests. And the vocabulary tests are all multiple choice with definitions, synonyms, and antonyms. Never in my school career have I seen a vocabulary test where you have to actually spell the word, and I don't think I'll see one like that ever in my life since I'm in 8th grade. I also don't think that Scatter works when trying to learn how to spell something. My friend, who is in 1st grade, used scatter for one of his spelling tests. He wasn't too happy with his grade. In short, I would like to make sure that people are utilizing the right study materials for the right things by personalizing the activities that an individual can do on each set.
Ummm... Yeah... That's just about it. ^^ Please excuse my punctuation... My brain is fried from my first research paper...
Anypony out there? /)
I know! Quizlet should change the change picture to where they could get their own profile pics off the internet. Also, they should have a way for people to post videos with each flash card in each set so they could be more specific about what they teach in their sets. Lastly, Quizlet should have the website automatically translate any words you put in a set if you are teaching someone else how to speak a foreign language, because when I try to translate words in a set I create to spanish, I have no idea how to translate the words, and I end up spending my time searching the internet for what the words mean in English.
Quizlet should also let people choose their profile pic when they first sign in, so we Don't get stuck with some random picture.
me no like
Quizlet should also make a hyperlink option where the students studying a givin set can learn more about their givin class
To be more specific about that, they should let the class teachers hyperlink their flashcard sets to other websites that can let other students know more about the subject they have, therefore letting them have more knowledge overall.
I love quizlet and have used it several years now. My students really like it. However, I have one request. I realize that you must have advertising to help this to remain a free site for us to use, but would you note the types of pictures and ads that are displayed. My fifth graders commented the other day about a picture of a girl in a T-shirt that said something about Fart Loading. I didn't see it, but the one they did show me seemed inappropriate. I hate to have to censor for this. Thanks for all you do.
I love quizlet and has helped me with school SO MUCH :D But those ads. They are not appropriate enough for children that are at a 4th and under grade. Including girls dressed to show their skin.
Quizlet rules!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
I want to be able to create sets on my phone. PLEASE DO IT NOW
Hangman.
I agree with austinr21. We should have a hangman game in quizlet. the high score is givin to the person who guesses the most words without losing. you can lose by making the whole hangman picture. if you make the whole hangman picture, you will lose a life. lose all of your lives, it is game over, making it kind of like space race.
They should also make a "buddy system", where users on quizlet could invite each other to be friends. when a user is friends with another user, they could invite each other to the sets one or the other users are studying, and they could compare their high scores on games like scatter and space race.
on learn mode, i only say "override its right" if i spelled it wrong by one or two letters. That's how you really learn it.
P.S. Do you have any new games you can add? I love scatter and space race, but I would like something new! :)
i think there should be more ways to study. Such as hangman or a matching game (also multiplayer games) :)
That is a good idea @nhabtem ari am, we should also have class discussion and a visual whiteboard
hey!check out epiktiger.deviantart.com
I'd love if you guys could make folders to organize sets better, my grade has a class and there are way to many sets in one place! Also I don't know if this is possible but could you make something that could help people study for oral tests, like a record your voice and get corrections? I don't know if thats possible though :)
You should also have multiplayer games that you can play with users in your Quizlet class
Please also download the languages Hindu and Sinhalese
THIS SITE SUCKS!
DELETE YOUR ACCOUNTS AND SAVE YOU SOME STRESS
AT LEAST FIR YOUR OWN SAKE
comment
Could you make Quizlet more iPad friendly? Especially with games like scatter and space race where a touch screen doesn't work as well.
shane mooney
Shane Mooney
Shane Mooney
Any tips for getting a good space race score
I wish that I could high-five comments that people post in the discussion box like you can high-five what Quizlet staff post. I like that for groups now you can see the group members' icons on top near the class name. I wish that in classes/groups though, that you could have all of your sets organized by subject. Another suggestion I have is to make the number of new comments posted for a set on its box or link like how the number of flashcards in the set is posted in the box that takes you to that set. I think that it would be better if people couldn't curse on sets. On one site I went on, it was programmed so that if you typed a bad word, the text wouldn't send and you would get a warning. I think that it would be cooler if instead of not sending the message, Quizlet automatically changes the "bad word" to a different word. For example: "Holy mushrooms!" OR "You're a tuba." It will be more fun, and cleaner for everyone! please consider some of my suggestions!
Scatter is my Favorite, because on the run game you have to get it "Word Perfect" and its confusing
I love quizlet and I think it is great but I would really like to see Multiplayer come back! It helped my class tons when we used it to study in French and although the other games are good, it is better practice to be able to write sentences and play as a class rather than individually :)
I would love to be able to create sets in the mobile app so I don't have to log onto a computer all the time to make them.
I would like a multiplayer game.
That would be cool.
Y-O Q-U-I-Z-L-E-T.
.-.
<3 the pic
HA I COMMENTED LOL
another game maybe multiplayer, but the games have gotten pretty boring.
please read this quizlet
so many comments...
I know right
I really hope a Quizlet staff member reads this... PLS
@HappyAlley I think the cork board idea is cool... You could program it so that class members and administrators would be able to post study notes and reminders, things coming up, etc. It is a great idea!!!!!!😊
Use asterisks to input bold text
*Like this!*
Oh darn it didn't work
needs more interactive games to make students more involved. me being a student I feel like this will really help.
yes !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
sweet
also forget to subscribe.
i need a hack
I can't seem to get past four seconds on scatter 'any tips'
It never workes i hate it
It would be awesome, if either the Quizlet team could add third/fourth columns for flashcard building or just allow users to input more than one language on either of the two columns. I don't think the interface permits one to do this and I think it would be excellent, especially for folks like myself, who are attempting to learn a third language and would like to view terms from the 1st/2nd conjoining with terms from the target language. Is this a possible addition that could get underway soon?
More interactive activities/games to help remember stuff
Not just Scatter and Space Race
1,015th high five!
Yea 1,018th High~fiver!!!
IMA SAY GOOD STATEMENT
Hey
How do I turn punctuation off in Space Race?
Ty!
Make a place where the class can communicate? Other than that, Quizlet is perfect :)
Oh, and also , bring back....
m
mu
mul
mult
multi
multip
multipl
multipla
multiplaye
multiplayer
multiplaye
multiplay
multipla
multipl
multip
multi
mult
mul
mu
m
hi
yeet
"Serge E. Hallyn" <serue@us.ibm.com> writes:> Now that the iproute2 patch is upstream, this patchset really is the> only thing keeping us from using network namespaces. Given that the> details of the tagging are trivially changeable with no abi changes, I'd> personally much rather see the patches go in as is, with whatever new> tagging patches Benjamin whips up, using ida or some new idea, being> applied later if we feel the need.My point exactly. No one seems to contest the userspace semantics soas long as we don't put ourselves into a real mess we should be fine.Eric | http://lkml.org/lkml/2008/7/1/53 | CC-MAIN-2014-41 | refinedweb | 104 | 63.73 |
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago
#15243 closed (fixed)
commit_unless_managed clarification for multiple databases in the docs
Description
First ticket of mine here guys.
( Please be nice )
Today I was working with a two-database setup: default and my_other_db.
I wrote a function to execute some queries on my_other_db.
Example code:
def email_update(email, password):
    """Update an email account in the database with a new password."""
    cursor = connections["my_other_db"].cursor()
    query = "UPDATE users SET password=ENCRYPT(%s) WHERE email=%s"
    # Perform the query
    cursor.execute(query, [password, email])
    # Hey Django, ensure changes are done to the DB
    transaction.commit_unless_managed()
So the problem is that the last line does nothing.
Well, it does something, a rollback on the query because I'm not hitting the right db, the one I chose from the connections dict.
Specifying the database alias name again on the commit_unless_managed method, with the using keyword argument, makes the function work.
Example:
transaction.commit_unless_managed(using="my_other_db")
I think, a tiny one-two lines note placed below the third paragraph of the section called Executing custom SQL directly at should be ok.
Hope it helps.
Related:
Where's the DRY principle of Django here?
That transaction should look for the database alias used with the cursor, right?
Attachments (1)
Change History (6)
comment:1 Changed 4 years ago by gabrielhurley
- Keywords easy-pickings added
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Accepted
comment:2 Changed 4 years ago by jasonkotenko
- Owner changed from nobody to jasonkotenko
- Status changed from new to assigned
Changed 4 years ago by jasonkotenko
SVN Patch
comment:3 Changed 4 years ago by jasonkotenko
- Cc jasonkotenko added
- Has patch set
Two lines of documentation mentioned above have been added. Not promoting to "Ready for Checkin" because this is my first ticket and I don't want to make a stupid mistake. Thanks.
comment:4 Changed 4 years ago by Alex
- Resolution set to fixed
- Status changed from assigned to closed
Thanks for the report!
As the docs you referenced note: "The object django.db.connection represents the default database connection, and django.db.transaction represents the default database transaction." The important point is that they use the default database connection. If you are manually using a different cursor selected from the available connections there's no seamless way for the transaction machinery to know which is the appropriate connection to commit or roll back. Making it "guess" at transactions would cause more bugs for unwitting users than forcing people to be explicit.
I agree the docs there ought to remind people of this fact, and I think the fix is extremely simple. Simply adding two more lines to the example at the end of that section so it reads more like this ought to make it clear:
That way the example which shows you how to select another DB also shows you how to commit to it. | https://code.djangoproject.com/ticket/15243 | CC-MAIN-2015-22 | refinedweb | 499 | 59.94 |
An exception is an issue (run-time error) that occurs during the execution of a program. When an exception occurs, the program terminates abruptly, and the code past the line that generated the exception never gets executed.
To handle exceptions Java provides a try-catch block mechanism.
A try/catch block is placed around the code that might generate an exception. Code within a try/catch block is referred to as protected code. If an exception occurs in the protected code, it is passed to the catch block (or blocks) that follows it.
If the type of exception that occurred is listed in a catch block, the exception is passed to the catch block much as an argument is passed into a method parameter.
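The code for the example that produces the output below appears to have been lost in this copy. Here is a minimal sketch that would print that message; the class name and file path are made up for illustration:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

public class TryCatchExample {
   // Try to open a file; return a message describing what happened.
   static String openFile(String path) {
      try {
         FileInputStream in = new FileInputStream(new File(path));
         in.close();
         return "File opened successfully";
      } catch (FileNotFoundException e) {
         // The exception type matches this catch block, so control lands here.
         return "Given file path is not found";
      } catch (IOException e) {
         return "I/O error: " + e.getMessage();
      }
   }

   public static void main(String args[]) {
      // "no_such_file.txt" is an illustrative path that does not exist.
      System.out.println(openFile("no_such_file.txt"));
   }
}
```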
Given file path is not found
The finally block follows a try block or a catch block. A finally block of code always executes, irrespective of whether an exception occurred. You cannot normally skip the execution of the finally block. Still, if you want to skip it forcefully when an exception occurs, the only way is to call the System.exit(0) method at the end of the catch block, just before the finally block would run.
public class FinallyExample {
   public static void main(String args[]) {
      int a[] = {21, 32, 65, 78};
      try {
         System.out.println("Access element three :" + a[5]);
      } catch (ArrayIndexOutOfBoundsException e) {
         System.out.println("Exception thrown :" + e);
         System.exit(0);
      } finally {
         a[0] = 6;
         System.out.println("First element value: " + a[0]);
         System.out.println("The finally statement is executed");
      }
   }
}
Exception thrown :java.lang.ArrayIndexOutOfBoundsException: 5 | https://www.tutorialspoint.com/is-there-any-way-to-skip-finally-block-even-if-some-exception-occurs-in-exception-block-using-java | CC-MAIN-2022-05 | refinedweb | 254 | 50.12 |
How to Compile a .CPP With a Header File
A file with the .CPP file extension is a C++ source code file. A header file may be called by the .CPP file and compiled along with the source code file using the C++ compiler. The C++ header file is normally identified by a file extension of ".h." To compile a .CPP source code file with a header file, you call the header file from within the .CPP file using the "include" statement at the top of the .CPP code file.
Instructions
- 1
Right-click on the .CPP file and click "Open With" from the context menu.
- 2
Click the "Notepad" option to open the .CPP file in the text editor.
- 3
Type the following line at the top of the .CPP file, where "header.h" is the name of your header file. Use double quotes rather than angle brackets so the compiler searches the source file's own directory for the header:

#include "header.h"
- 4
Click the "File" option on the top navigation bar in Notepad, and then click "Save" to save the .CPP file.
- 5
Open the Microsoft Visual Studio Command Prompt by clicking the Windows "Start" button, and then clicking "All Programs." Click "Microsoft Visual Studio 2010" and then click "Visual Studio Tools." Finally, click the "Visual Studio 2010 Command Prompt" to open a window with the VS command prompt.
- 6
Type "cl file.cpp" (without quotes) where "file" is the name of your .CPP file. Press the "Enter" key. The header file is called by the source code (.CPP) file and compiled into an executable (.EXE) program named "file.exe."
- 7
Double-click the executable (.EXE) file to run and test the program. | http://www.ehow.com/how_10003459_compile-cpp-header-file.html | crawl-003 | refinedweb | 267 | 87.72 |
TemplateBinding and TemplateParent Digging
Monday, May 9, 2016
This is one of those things that can completely freak you out. Let's start with the facts:

1. While customizing an application you might use something like this:

<TextBlock Text="{Binding RelativeSource={RelativeSource TemplatedParent},Path=MyProperty}"/>

Or maybe this:

<TextBlock Text="{TemplateBinding MyProperty}" />

This sits in the ResourceDictionary of your app while you are styling your custom control, and MyProperty is a DependencyProperty in the logic part of the Control. So what is the difference between those two? And what should the default choice be? By the way, instead of the long form most programmers are used to writing, you can use the short way: Text="{Binding TemplatedParent.MyProperty2}"

2. A full demonstration will be like this one: public class CustomControl1 : Control ...
stat - get file status
#include <sys/stat.h>

int stat(const char *restrict path, struct stat *restrict buf);

The stat() function shall update any time-related fields (as described in the Base Definitions volume of IEEE Std 1003.1-2001, Section 4.7, File Times Update), before writing into the stat structure.
Unless otherwise specified, the structure members st_mode, st_ino, st_dev, st_uid, st_gid, st_atime, st_ctime, and st_mtime shall have meaningful values for all file types defined in this volume of IEEE Std 1003.1-2001. The value of the member st_nlink shall be set to the number of links to the file.
Upon successful completion, 0 shall be returned. Otherwise, -1 shall be returned and errno set to indicate the error.
The stat() function shall fail if:

[EOVERFLOW]
A value to be stored would overflow one of the members of the stat structure.
We'll start off by creating our Account class. First, though, you probably noticed this bit of fanciness in the last exercise:

def initialize(name, balance=100)
  @name = name
  @balance = balance
end

What's that balance=100 doing? It's signifying an optional parameter. Ruby is saying that you can pass one or two arguments to initialize; if you pass two, it uses your balance argument to set @balance; if you only pass a name, balance gets a default value of 100, and that's what gets stored in @balance.

You probably also noticed we used underscores in our 1_000_000 (one million). Ruby allows this, and it makes it easier to read big numbers! Cool, no?
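Putting the lesson together, here is a minimal runnable sketch of the Account class; the attr_reader lines are an addition here, just so the stored values can be inspected:

```ruby
class Account
  # balance=100 makes the second argument optional
  def initialize(name, balance=100)
    @name = name
    @balance = balance
  end

  attr_reader :name, :balance
end

checking = Account.new("Zoe")            # balance defaults to 100
savings  = Account.new("Zoe", 1_000_000) # both arguments given

puts checking.balance  # => 100
puts savings.balance   # => 1000000
```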
Vol. 9, Issue 9, 2577-2593, September 1998
Department of Cell Biology and Anatomy, The Johns Hopkins University School of Medicine, Baltimore, Maryland 21205. Submitted April 17, 1998; Accepted June 23, 1998.
The Tim23 protein is an essential inner membrane (IM) component of the yeast mitochondrial protein import pathway. Tim23p does not carry an amino-terminal presequence; therefore, the targeting information resides within the mature protein. Tim23p is anchored in the IM via four transmembrane segments and has two positively charged loops facing the matrix. To identify the import signal for Tim23p, we have constructed several altered versions of the Tim23 protein and examined their function and import in yeast cells, as well as their import into isolated mitochondria. We replaced the positively charged amino acids in one or both loops with alanine residues and found that the positive charges are not required for import into mitochondria, but at least one positively charged loop is required for insertion into the IM. Furthermore, we find that the signal to target Tim23p to mitochondria is carried in at least two of the hydrophobic transmembrane segments. Our results suggest that Tim23p contains separate import signals: hydrophobic segments for targeting Tim23p to mitochondria, and positively charged loops for insertion into the IM. We therefore propose that Tim23p is imported into mitochondria in at least two distinct steps.
Eukaryotic membrane proteins face many problems during their biogenesis. For example, membrane proteins must be targeted to the correct organelle within the cell. They also must be inserted into the lipid bilayer in the correct topological arrangement. In addition, since organelles such as mitochondria are encompassed by two membranes, proteins destined for the inner membrane (IM) must first cross the outer membrane (OM). At present, little is known about the mechanisms by which eukaryotic proteins are targeted to specific membranes and inserted in their correct conformation.
Most mitochondrial proteins are synthesized in the cytosol and imported
into the organelle via a multistep pathway that includes interaction
with cytosolic chaperones, binding to receptors on the OM surface, and
translocation across one or both of the mitochondrial membranes (for
review see Schatz and Dobberstein, 1996
; Stuart and Neupert, 1996
;
Stuart et al., 1996
; Jensen and Kinnally, 1997
; Pfanner and
Meijer, 1997
). Cytosolic chaperones bind precursors to prevent
premature folding or aggregation, and one chaperone MSF also plays a
role in targeting the precursor to the mitochondria (Hachiya et
al., 1994
, 1995
; Komiya et al., 1996
). On the
mitochondrial surface, precursors encounter several proteins proposed
to act as receptors, including Tom70p, Tom37p, Tom22p, and Tom20p
(Hines et al., 1990
; Söllner et al., 1990
,
1992
; Schlossmann et al., 1994
; Gratzer et al.,
1995
; Mayer et al., 1995
). The outer membrane receptors,
along with Tom40p, Tom6p, Tom7p, and Tom8p, make up the TOM complex,
which translocates precursors across the mitochondrial outer membrane
(Kiebler et al., 1990
, 1993
; Moczko et al., 1992
; Söllner et al., 1992
).
Translocation of precursors across the IM is mediated by the TIM
complex, which includes Tim44p, Tim23p, Tim17p, and a matrix-localized Hsp70 protein, called mt-Hsp70 (Kang et al., 1990
; Maarse
et al., 1992
, 1994
; Emtage and Jensen, 1993
). Tim23p and
Tim17p are proposed to form a protein-translocating channel in the IM
(Emtage and Jensen, 1993
; Maarse et al., 1994
; Ryan et
al., 1994
; Lohret et al., 1997
). Tim44p and mt-Hsp70
are thought to "pull" precursors through the channel (Pfanner
et al., 1994
; Stuart et al., 1994
; Glick, 1995
;
von Ahsen et al., 1995
) by a process that requires matrix
ATP (Chen and Douglas, 1987
; Eilers et al., 1987
, 1988
; Pfanner and Neupert, 1987
; Pfanner et al., 1987
; Stuart
et al., 1994
; Wachter et al., 1994
) and a
electrochemical potential across the IM (Schleyer et al.,
1982
; Pfanner and Neupert, 1985
, 1987
; Chen and Douglas, 1987
; Eilers
et al., 1987
). Recently, a new IM complex, containing Tim54p
and Tim22p, has been shown to mediate the insertion of at least some
polytopic proteins into the IM (Sirrenberg et al., 1996
;
Kerscher et al., 1997
). Two intermembrane space proteins,
Tim12p and Tim10p, appear to be part of this new complex (Koehler
et al., 1998
; Sirrenberg et al., 1998
).
Most imported mitochondrial proteins are synthesized with an
amino-terminal targeting signal called a presequence. Presequences vary
in length and primary amino acid sequence, yet share a common motif
consisting of a number of positively charged amino acids, a lack of
acidic residues, no long stretches of hydrophobic residues, and the
ability to form an amphipathic structure (Allison and Schatz, 1986
;
Roise et al., 1986
, 1988
; Roise, 1992
). Once in the matrix,
the presequence is removed by a two-subunit-processing protease,
called MPP (McAda and Douglas, 1982
; Yaffe et al., 1985
; Jensen and Yaffe, 1988
; Pollock et al., 1988
; Witte et
al., 1988
; Yang et al., 1988
). Some proteins destined
for the mitochondrial IM carry a cleavable presequence followed by one
or more hydrophobic membrane-spanning segments (Stuart and Neupert,
1996
). The transmembrane segments are proposed to either function as
stop-transfer sequences in the IM (Miller and Cumsky, 1991
, 1993
), or
to facilitate the insertion of the polypeptide into the IM after its
complete import into the matrix (Mahlke et al., 1990
;
Herrmann et al., 1997
).
Some imported mitochondrial proteins do not carry cleavable,
amino-terminal presequences. The import information therefore resides
within the mature part of the protein. The targeting signal for Bcs1p,
an IM protein without an amino-terminal presequence, has recently been
identified (Fölsch et al., 1996
). Bcs1p has a
positively charged stretch of amino acids immediately adjacent to its
single transmembrane-spanning segment. This positively charged segment
has the capability to form an amphipathic
α-helix, and exposing this
region of Bcs1p by deletion of the N terminus and transmembrane domain
resulted in the mislocalization of the truncated Bcs1 protein to the
matrix. Fölsch et al. (1996)
proposed that the
positively charged stretch functions as an internal targeting signal
functionally analogous to amino-terminal presequences.
Other proteins without presequences that are localized to the
mitochondrial IM include the yeast ADP/ATP carrier proteins (Aac1p,
Aac2p, Aac3p; Lawson and Douglas, 1988
), the mammalian uncoupling
protein (UCP; Aquila et al., 1985
; Liu et al.,
1988
), and the yeast phosphate carrier (PiC; Zara et al.,
1991
). The PiC, UCP, and the Aac proteins belong to the mitochondrial
carrier family and contain six transmembrane segments and three
matrix-facing, positively charged loops between the transmembrane
segments (Aquila et al., 1985
; Runswick et al.,
1987
; Gawaz et al., 1990
; Lawson et al., 1990
;
Palmieri et al., 1993
). The positive charges in the matrix
loops have been proposed to function as internal targeting signals
similar to that in Bcs1p (Fölsch et al., 1996
). In
addition, mitochondrial carrier family proteins are thought to be
composed of a threefold repeat structure of two transmembrane segments with an intervening loop (Runswick et al., 1987
).
Consistent with this idea, redundant targeting information has been
found in UCP and the Aac1 proteins (Pfanner et al., 1987
;
Liu et al., 1988
, 1990
; Smagula and Douglas, 1988a
,b
).
Tim23p, Tim17p, and Tim22p are three homologous proteins of the IM
import machinery and are also synthesized without an amino-terminal presequence (Dekker et al., 1993
; Emtage and Jensen, 1993
;
Maarse et al., 1994
; Ryan et al., 1994
). The
Tim23 protein appears to have four transmembrane domains and is
inserted in the IM with both its amino and carboxyl termini facing the
intermembrane space (Bauer et al., 1996
; Ryan et
al., 1998
; Emtage, Kerscher, and Jensen, unpublished data).
This proposed topology places two positively charged segments of Tim23p
in the matrix. To test the possibility that the matrix-facing,
positively charged loops of Tim23p mediate import into the
mitochondrial IM, we replaced the positively charged amino acids in one
or both loops with alanine residues. We find that the positive charges
are not required for import into mitochondria, but at least one
positively charged loop is required for insertion into the IM. We find
that the signal to target Tim23p to mitochondria is carried in at least
two of the hydrophobic transmembrane segments, but these segments are
not sufficient to insert Tim23p into the IM. Our results suggest that
Tim23p contains separate and distinct import signals: hydrophobic
segments for targeting Tim23p to mitochondria, and positively charged
loops for insertion into the IM. The import information for Tim23p thus
differs from that of other IM proteins, such as the Bcs1 protein, and
Tim23p appears to contain novel import signals that have not been
previously described. We propose that Tim23p is imported into
mitochondria in at least two distinct steps.
Yeast Strains and Genetic Methods
The haploid tim23::URA3 ura3 trp1 leu2
strain KRR146 was obtained by crossing the MATα ura3 trp1
strain BY134 (Brachmann et al., 1997
) with strain KRR123
(Ryan et al., 1998
). Strain KRR146 also carries plasmid
pKR1, a TIM23-LEU2-CYH2 plasmid (Ryan et al.,
1998
). wt Strain D273-10b has been described (Sherman, 1964
). Yeast
transformations were performed as described (Schiestl and Gietz, 1989
).
Standard yeast media and genetic techniques were used (Kaiser et
al., 1994
).
Plasmid Constructions
Tim23p-HA construct. pAD91, a CEN-LEU2
plasmid containing the Tim23 protein with an insertion of the
hemagglutinin (HA) epitope in the middle of loop L2, was
constructed as follows. First, a SacI/NotI
fragment containing amino acids 1-168 of Tim23p was isolated from
plasmid pKR34 (see below). The SacI/NotI fragment was inserted into SacI/NotI-digested pKR31, which
encodes amino acids 173-222 of Tim23p (Ryan, unpublished data). The
resulting plasmid pAD90 encodes Tim23 with a NotI site in
the middle of loop L2. A NotI fragment encoding the triple
HA epitope (Field et al., 1988
) was cloned into the
NotI site of pAD90, forming pAD91. pAD91 encodes amino acids
1-168 of Tim23p, followed by amino acids GGR, the triple HA epitope,
residues GGR, and then amino acids 173-222 of Tim23p. pJE8, a
CEN-LEU2 plasmid encoding Tim23p with the triple-HA epitope
inserted at its carboxyl terminus, has been described (Emtage,
Kerscher, and Jensen, unpublished data).
L1Neut, L3Neut and L1L3Neut Constructs.
pAD62, a
LEU2 plasmid that expresses L1Neut, a Tim23 protein with
the positively charged residues in the first loop changed to neutral
alanine residues (K131A, K143A and
R144A), was created as follows. First, lys131
was changed to ala131 using the PCR (Saiki et
al., 1985
) using oligo 176 (5'-GTTCAATTGCAATGCTCCGGGACTATTC-3'), oligo 20 (5'-AATACGACTCACTATAG-3'), and plasmid pKR1 (Ryan and Jensen,
1993
) as a template. The PCR fragment was digested with XbaI
and MunI. In a second PCR reaction, Lys143 and
Arg144 were changed to alanines using oligo 185 (5'-TTGCAATTGAACACCGTCCTGAATCACATTACTGCGGCAGGTCCCTTCTTAG-3'), oligo 21 (5'-ATTAACCCTCACTAAAG-3'), and pKR1. The PCR fragment was
digested with MunI and BamHI, added to the first
PCR fragment, and ligated into
XbaI-BamHI-digested pJE50, a
LEU2-TIM23 plasmid (Emtage, unpublished data), forming
pAD62. pAD66, which carries the L1Neut-coding sequences downstream of
the SP6 promoter, was formed by inserting a SalI and
BamHI fragment from pAD62 into the
SalI-BamHI sites of SP6-TIM23 plasmid pJE29
(Ryan et al., 1998
).
Tim23Np and Tim23Cp
Constructs.
pKR14, an SP6-containing plasmid that
expresses Tim23Np, and pKR15, which carries Tim23Cp, have been
described previously (Ryan et al., 1998
). Tim23Np consists
of amino acids 1-96 of Tim23p, and Tim23Cp contains residues 95-222
of Tim23p.
Tim23p Deletion Constructs.
A series of either N-terminal or
C-terminal deletions of TIM23 were constructed using
specific oligonucleotides and PCR. Constructs were subcloned into
either a CEN6-LEU2 plasmid (pRS315, Sikorski and Hieter,
1989
) for expression in yeast, or into the SP6-containing plasmids,
pSP64 or pSP65 (Promega, Madison, WI), for in vitro synthesis. All
Tim23p constructs for expression in yeast carry 560 base pairs (bp) of
promoter sequences upstream of the coding region and 950 bp downstream
of coding sequences (Emtage and Jensen, 1993
). All SP6 constructs carry
77 bp of upstream sequences (Ryan et al., 1998
). Deletion
junctions were engineered to contain a NotI site, which adds
three extra amino acids (GGR).
Imports into Isolated Mitochondria
Mitochondria were isolated from wt strain D273-10b as described
(Sherman, 1964
), except that SEH buffer (250 mM sucrose, 1 mM EDTA, 20 mM HEPES-KOH, pH 7.4) was used in place of breaking buffer.
Radiolabeled proteins were made from SP6-containing plasmids using 1.5 mCi/ml [35S]-methionine (1000 Ci/mmol, Amersham,
Arlington Heights, IL) in a coupled transcription/translation system
(SP6 TNT System, Promega, Madison, WI) according to the
manufacturer's instructions. For import reactions, mitochondria were
suspended in import buffer (Scherer et al., 1992
) to a final
concentration of 1 mg/ml protein. Mitochondria (200 µg) and 10 µl
of lysate containing the radiolabeled protein were used per reaction.
Import reactions were incubated at 30°C for 30 min and were stopped
by placing the samples on ice and the addition of carbonyl cyanide
m-chlorophenyl hydrazone (Sigma, St. Louis, MO) to a
final concentration of 30 µM. Samples were treated with the indicated
amounts of trypsin (Sigma) or proteinase K (Calbiochem, San Diego, CA)
for 20 min on ice, followed by the addition of either 1 mg/ml soybean
trypsin inhibitor (Sigma) or 1 mM phenylmethylsulfonyl fluoride
(Sigma). Disrupting of the OM (forming mitoplasts) was performed by
diluting mitochondria with 9 volumes of 20 mM HEPES, pH 7.4, followed
by incubation on ice for 30 min. After imports and protease treatment,
mitochondria or mitoplasts were reisolated by centrifugation at
12,500 × g for 10 min through a 1-ml sucrose cushion
(0.625 M sucrose, 20 mM HEPES-KOH, pH 7.4). For analysis, pellets were
resuspended in 1× sample buffer (125 mM Tris, pH 6.8, 2% SDS, 20%
glycerol) containing 4% β-mercaptoethanol and subjected to SDS-PAGE
(Laemmli, 1970
). Radiolabeled proteins were visualized by fluorography
(Bonner and Laskey, 1974
).
Cellular Fractionation
tim23::URA3 trp1 leu2 cyh2 strain KRR146
containing plasmids expressing Tim23p (pKR50), L1Neut (pAD62), L3Neut
(pAD58), or L1L3Neut (pAD64) were grown to an OD600 of 1.5 in YEP medium containing 2% sodium lactate, pH 5.5. Cells were
converted to spheroplasts, homogenized, and separated into a 9,600 × g mitochondrial pellet and a postmitochondrial
supernatant as described (Daum et al., 1982
), except that
SEH buffer was used in place of breaking buffer. Proteins from the cell
fractions were separated by SDS-PAGE and transferred (Laemmli, 1970
;
Haid and Suissa, 1983
) to Immobilon filters (Millipore, Bedford, MA).
Filters were probed with a 1:10,000 dilution of antiserum to the β
subunit of the F1-ATPase (F1β) (a gift from M. Yaffe,
University of California, San Diego), hexokinase (a gift from M. Yaffe), or against Tim23p (Emtage and Jensen, 1993
). Immune complexes
were visualized using a 1:10,000 dilution of HRP-conjugated secondary
antibody (Amersham) followed by chemiluminescence (Supersignal, Pierce
Chemical, Rockford, IL).
Miscellaneous
Quantitation of import reactions was done using Molecular
Dynamics ImageQuant software version 1.1 (Molecular Dynamics,
Sunnyvale, CA). Gels were exposed to a Molecular Dynamics Phosphor
screen overnight and scanned using a Molecular Dynamics Storm 860 phosphorimager (Molecular Dynamics). Alternatively, fluorographs were
scanned using a UMAX VistaScan flatbed scanner, and the results were
quantitated with ImageQuant. Antibodies to the HA epitope (Niman
et al., 1983
), Tom70p (a gift from G. Schatz, Biocenter,
Basel, Switzerland), and α-MPP (Jensen and Yaffe, 1988
) were used to
decorate immune blots.
One of Two Sets of Positively Charged Segments within Tim23p Is Required for Function, but Not for Targeting to Mitochondria
The Tim23 protein has four predicted transmembrane segments and is
proposed to be inserted in the IM with both its amino and carboxyl
termini facing the intermembrane space (Figure
1A; Bauer et al., 1996
; Ryan
et al., 1998
; Emtage, Kerscher, and Jensen, unpublished data). This topology places two positively charged segments
of Tim23p in the matrix. One segment, called loop L1, is located
between the first and second transmembrane regions, and the other
segment, called loop L3, lies between the third and fourth
transmembrane stretches (Figure 1A). Loop L1, which is 14 amino acids
in length (KLQLNTVLNHITKR), and loop L3, which is 7 amino acids long
(KSSKGLK), both contain three positively charged residues and no acidic
amino acids. In contrast, loop L2, which is proposed to face the
intermembrane space (IMS), contains 8 amino acids (DALRGKHD), 2 of
which are negatively charged and 2 are positively charged.
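The charge counts quoted above follow directly from the one-letter loop sequences. As an illustrative aside (a minimal Python sketch of ours, not part of the original study; histidine is counted as neither positive nor negative, matching the tallies in the text):

```python
# Tally charged residues in the Tim23p loop sequences (one-letter codes).
POSITIVE = set("KR")  # lysine, arginine (histidine excluded, as in the text)
NEGATIVE = set("DE")  # aspartate, glutamate

def charge_profile(seq):
    """Return (length, number positive, number negative) for a peptide."""
    return (len(seq),
            sum(aa in POSITIVE for aa in seq),
            sum(aa in NEGATIVE for aa in seq))

loops = {
    "L1": "KLQLNTVLNHITKR",  # matrix-facing loop L1
    "L3": "KSSKGLK",         # matrix-facing loop L3
    "L2": "DALRGKHD",        # IMS-facing loop L2
}

for name, seq in loops.items():
    print(name, charge_profile(seq))
# L1: 14 residues, 3 positive, 0 negative
# L3: 7 residues, 3 positive, 0 negative
# L2: 8 residues, 2 positive, 2 negative
```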
To further support the model for the configuration of Tim23p in the IM,
we inserted an epitope tag into loop L2 of Tim23p and asked whether
this tag faced the mitochondrial IMS. We inserted the influenza HA
epitope (Field et al., 1988
) between residues 168 and 173 of
Tim23p. Surprisingly, we found that the Tim23p-HA fusion protein was
functional since it rescued the lethality of a
tim23::URA3 disruption. Mitochondria were isolated
from cells expressing Tim23p with the HA tag in loop L2 (called I-HA),
as well as from wt cells, or cells expressing Tim23p with the HA tag at
its carboxyl terminus (called C-HA; Emtage, Kerscher, and Jensen,
unpublished data). As shown in Figure 1B, immune blotting showed
that both the internal HA tag (I-HA) and the carboxyl-terminal HA tag
(C-HA) were protected from protease digestion in intact mitochondria,
but that both tags were digested when the mitochondrial OM was
disrupted by osmotic shock (OS). Control blots showed that the matrix
marker α-MPP was not accessible to protease digestion even when the
OM was disrupted. Our results thus support the model that Tim23p has
four transmembrane segments with two matrix-facing loops (loops L1 and
L3) and one loop facing the IMS (loop L2).
Tim23p is a member of a set of proteins that are imported into
mitochondria without an amino-terminal, cleavable presequence. The
import signal for one of these proteins, Bcs1p, was recently shown to be
an internal, positively charged segment facing the matrix that had many
of the properties of a mitochondrial presequence (Fölsch et
al., 1996
). We therefore tested the possibility that the
matrix-facing, positively charged loops of Tim23p mediate its import
into mitochondria. As diagrammed in Figure
2A, we made three mutant versions of
Tim23p: L1Neut, in which we replaced the two lysines and one arginine
in loop L1 with alanines; L3Neut, where we substituted alanines for the
three lysines in loop L3; and L1L3Neut, in which we replaced the six
positively charged amino acids in both loop L1 and L3 with alanines. We
first examined the ability of these constructs to provide Tim23p
function in yeast cells. LEU2-containing plasmids expressing
either Tim23p, L1Neut, L3Neut, or L1L3Neut were transformed into
tim23::URA3 disruption strain KRR146, which also
contains the TIM23-CEN-CYH2 plasmid pKR1 (Figure 2B). Leu+
transformants were patched onto medium lacking leucine (SD−Leu). Since
CYH2-containing cells are unable to grow in the presence of
cycloheximide (Sikorski and Boeke, 1991
), we tested our transformants
for their ability to lose the TIM23-CYH2 plasmid by replica
plating them onto medium containing cycloheximide (YEPD + CYH). Tim23p
is essential for cell viability (Emtage and Jensen, 1993
); therefore,
only transformants carrying a second copy of functional
TIM23 will be able to grow on cycloheximide-containing
medium. We found that both L1Neut and L3Neut provided wt Tim23p
activity when grown at 24°C, 30°C and 37°C on both fermentable
and nonfermentable medium. In contrast, the L1L3Neut construct did not
grow on media with cycloheximide at any temperature, and thus did not
provide Tim23p function. Our results suggest that only one of the two
positively charged matrix segments within Tim23p is required for its
function. Full Tim23p activity is observed when the lysine and arginine
residues are replaced by alanine in either loop L1 or L3, but activity is lost when the positive charges are removed from both the
matrix-facing loops.
To examine the level and the location of the different Tim23p
constructs in yeast, we grew tim23::URA3 cells
expressing either Tim23p, L1Neut, or L3Neut. Cells were homogenized
(HOM) and separated into a mitochondrial fraction (MITO) and a
postmitochondrial supernatant (PMS) by centrifugation. When we analyzed
our fractions by immune blotting, we found that all the Tim23 proteins
cofractionated with F1β, a mitochondrial protein (Figure 2C). No
Tim23p, L1Neut, or L3Neut was found in the supernatant with the
cytosolic hexokinase (Hex) protein. While the L1Neut and L3Neut
proteins are targeted to mitochondria, their steady-state levels appear
to be reduced when compared with wt Tim23p. When the level of Tim23p,
L1Neut, and L3Neut were standardized to the amount of F1β in each
cell fractionation, we found that L1Neut and L3Neut were reduced two- to threefold compared with wt Tim23p. We also examined the level of the
L1L3Neut construct, which did not complement the tim23 disruption. Immune blotting of yeast cells showed that the amount of
L1L3Neut is reduced at least 100-fold as compared with the level of wt
Tim23p. We propose that the altered Tim23p constructs are more rapidly
turned over in cells since they are not efficiently inserted into the
mitochondrial IM (see below).
Internal Positively Charged Segments Mediate the Insertion of Tim23p into the IM, but Are Not Required for Import into Mitochondria
To directly examine the role of the positively charged loops in
Tim23p, we examined the import of the different constructs into
isolated mitochondria. Radiolabeled Tim23, L1Neut, L3Neut, and L1L3Neut
proteins were made by in vitro transcription and translation and were
then incubated with isolated mitochondria (Figure
3A). After the import reaction, samples
were divided into aliquots. One aliquot was treated with trypsin to
digest proteins that were not imported into the mitochondria.
Mitochondrial proteins were isolated by centrifugation and separated by
SDS-PAGE, and the radiolabeled proteins were visualized by
fluorography. We found that Tim23p, L1Neut, L3Neut, and L1L3Neut were
all imported into mitochondria and protected from protease digestion to
the same extent (Figure 3A, mitos + protease). While the majority of
Tim23p, L1Neut, L3Neut, and L1L3Neut molecules required an IM potential
for their import, a small amount of all four proteins were protected
from protease digestion after import into mitochondria treated with
valinomycin (−Δψ). Whether this small amount of protein represented potential-independent import or protease-resistant material
is not clear. Nonetheless, we conclude that the positively charged
loops L1 and L3 are not required for the efficient import of Tim23p
into mitochondria. We next examined whether the different Tim23p
constructs were correctly inserted into the mitochondrial IM. wt Tim23p
resides within the IM with a 9-kDa amino-terminal hydrophilic domain
facing the IMS (Bauer et al., 1996
; Lohret et
al., 1997
; Ryan et al., 1998
; Emtage, Kerscher, and
Jensen, unpublished data). When the OM of mitochondria is
disrupted (forming mitoplasts), the amino-terminal domain can be
digested by protease yielding a characteristic 14-kDa fragment. This
fragment represents the carboxyl-terminal domain of Tim23p that is
embedded in the IM. We imported Tim23p, L1Neut, L3Neut, and L1L3Neut
into mitochondria, disrupted the mitochondrial OM by OS, and then
digested the mitoplasts with proteinase K (Figure 3A, mitoplasts + protease). We found that the L1Neut and L3Neut constructs were inserted
in the IM, but not to the same extent as wt Tim23p. Reduced amounts of
the 14-kDa protease-protected fragment were seen after import of L1Neut and L3Neut as compared with Tim23p. In contrast, virtually no 14-kDa
fragment was seen after import of the L1L3Neut construct.
To further determine whether the altered Tim23p constructs were
inserted into the IM, we asked whether the proteins could be extracted
from mitochondria after treatment with alkali (Figure 3B). Tim23p,
L1Neut, L3Neut, L1L3Neut, and the peripheral membrane protein
F1β were imported into mitochondria and then treated with protease to remove any proteins that were not imported into the organelle. Mitochondrial pellets were resuspended in 0.1 M sodium carbonate and separated into a membrane pellet and supernatant fraction
by centrifugation. We found that 80% of the imported Tim23p protein
remained with the mitochondrial membranes after alkali treatment,
whereas virtually all of the F1β protein was removed. Compared with Tim23p, a lesser amount of L1Neut and L3Neut remained membrane associated (~25% and 40%, respectively). In contrast, virtually all of the L1L3Neut protein was removed from the membranes. Our results suggest that while the positively charged loops of Tim23p
are not required for import into mitochondria, they play an important
role in the insertion of Tim23p into the IM. One set of positive
charges, carried in either loop L1 or L3, are sufficient for the
partial insertion of Tim23p into the IM, whereas complete insertion
requires both sets of positively charged loops.
The Hydrophobic Carboxyl Terminus of Tim23p Carries Redundant Targeting Information
We found that the hydrophilic amino-terminal domain of Tim23p does
not carry targeting information. A Tim23p construct lacking its first 9 kDa lacks function (Ryan et al., 1998
), but it is efficiently imported into mitochondria and inserted into the IM (Figure
4). We synthesized Tim23p, along with
Tim23Np, which contains the amino-terminal portion (amino acids 1-96)
of Tim23p, and Tim23Cp, which contains the carboxyl-terminal domain
(residues 95-222) of Tim23p, and incubated the three proteins with
isolated mitochondria. In the presence of energized mitochondria, wt
Tim23p was imported into mitochondria (Figure 4, mitos) and was
protected from exogenously added protease after the import reaction
(Figure 4, mito + trypsin). When the OM was disrupted by OS after
import of Tim23p, proteinase K digestion produced the 14-kDa fragment
indicative of IM insertion (Figure 4, mitoplasts + protease). In the
absence of a membrane potential (−Δψ), the amount of Tim23p imported
into mitochondria was reduced, and virtually no Tim23p was inserted
into the IM. The carboxyl-terminal domain of Tim23p, Tim23Cp, was also
efficiently imported into mitochondria (Figure 4, mitos + protease;
mitoplasts + protease). Surprisingly, a significant amount of Tim23Cp
was imported into valinomycin-treated mitochondria (Figure 4,
−Δψ). In contrast to Tim23p and Tim23Cp, the amino-terminal
portion of Tim23p, Tim23Np, was not imported into energized
mitochondria. Tim23Np did not even bind to mitochondria and failed to
pellet with the organelles after the import reaction (Figure 4, mitos). These results support our previous studies indicating that the import
signal within Tim23p resides within the carboxyl-terminal half of the
molecule (Ryan et al., 1998
).
As described above, we found that the positively charged loops of
Tim23p are required for IM insertion, but not for import into the
organelle. To localize the mitochondrial import signal within the
carboxyl-terminal region of Tim23p, we have created constructs lacking
one or more of the hydrophobic transmembrane (TM) segments. As shown in
Figure 5A, we generated a protein lacking the third and fourth TM
segments, called Δ3Δ4; a protein lacking TM segments 1 and 2, called
Δ1Δ2; a protein lacking TM segments 2 and 3, called Δ2Δ3; and a
protein lacking TM segments 1 and 4, called Δ1Δ4. When expressed in
yeast cells, none of these constructs provides Tim23p function.
We synthesized Tim23p, along with the different deletion constructs,
and asked whether they could be imported into isolated mitochondria. As
shown in Figure 5B, Tim23p and the Δ3Δ4, Δ1Δ2, and Δ2Δ3
proteins were all imported into energized mitochondria, but import of
all of the proteins was reduced in the absence of a membrane potential
(−Δψ). Δ1Δ4 differed from the other constructs and was not
imported into mitochondria to a protease-protected location (Figure
5E). Since Δ3Δ4 and Δ1Δ2 are both imported into mitochondria,
our results suggest that the Tim23p carboxyl terminus carries two
targeting signals. Furthermore, the observation that Δ3Δ4, Δ1Δ2,
and Δ2Δ3 were capable of import but Δ1Δ4 was not suggests that
the targeting information is located in or near TM segments 1 and 4.
Quantitation of the imports of Tim23p, Δ3Δ4, Δ1Δ2, and Δ2Δ3
indicated that while similar amounts of the altered constructs pelleted
with mitochondria as compared with wt Tim23p, none were protected from
protease digestion to the same extent as Tim23p. It is likely that
Δ3Δ4, Δ1Δ2, and Δ2Δ3 were more sensitive than Tim23p to
digestion after import because they were not completely imported into
the organelle. Demonstrating that the mitochondrial OM remained intact
in our studies with mitochondria, we found that the amino-terminal
domain of the endogenous Tim23 protein (which faces the IMS) was
protected from protease digestion (Figure 5C). In contrast, the
N-terminal domain of Tim23p was readily digested when the mitochondrial
OM was disrupted by OS.
We suggest that the altered Tim23 constructs were arrested at an early
step in the import pathway, and that much of each protein was
incompletely translocated across the OM. Consistent with their
incomplete import, we found that Δ3Δ4 and Δ1Δ2 were not inserted
into the IM and could be extracted from mitochondrial membranes by
carbonate treatment (Figure 5D). While 80% of the imported Tim23p
protein remained with the mitochondrial membranes after carbonate
treatment, almost all of the Δ3Δ4, Δ1Δ2, and F1β proteins were
removed. Also indicating that Δ3Δ4 and Δ1Δ2 were not inserted
into the IM, we failed to detect any protease-resistant fragment in
mitoplasts after import of Δ3Δ4 and Δ1Δ2. All of the Δ3Δ4 and
Δ1Δ2 proteins were completely digested when the OM was disrupted.
Although the Δ3Δ4 and Δ1Δ2 proteins were not inserted into the
IM, we found that both proteins were membrane associated after their
import. When the mitochondrial OM was disrupted, Δ3Δ4 and Δ1Δ2
were not released with soluble IMS proteins and instead pelleted with
the mitoplast fraction. We suggest that Δ3Δ4 and Δ1Δ2 are stuck
in the OM import machinery at an early step in the import pathway.
Import of either the Δ3Δ4 or Δ1Δ2 protein into mitochondria did
not require the positively charged residues in the matrix-facing
loops. A Δ3Δ4 construct, in which the two lysines and one arginine
in loop L1 were replaced by alanines, and a Δ1Δ2 construct, in
which the three lysines were replaced by alanines, were imported to
the same extent as the Δ3Δ4 or Δ1Δ2 construct containing the
positively charged loop. These results support our conclusion that the
import signal for Tim23p is separate from the signal required for
insertion into the IM.
In contrast to Δ3Δ4 and Δ1Δ2, ~40% of the Δ2Δ3 protein
remained with the membrane fraction after carbonate treatment (Figure
5D). Our results suggest that a significant amount of the Δ2Δ3
protein was inserted in the IM. Δ2Δ3 contains the first and fourth
TM segments of Tim23p and, as described above, may carry two sets of
Tim23p-targeting information. Therefore, the observation that Δ2Δ3
was imported more completely than either Δ3Δ4 or Δ1Δ2 may not
be surprising. Δ2Δ3 also carries a chimeric loop consisting of the
first two amino acids of loop L1, amino acids GGR created by the
cloning procedure, and the last two amino acids of loop L3. This
hybrid loop (KLGGRLK), which has three positively charged residues and
no acidic residues, appears to function as an effective IM insertion
signal. Our results suggest that positively charged amino acids may
play a more critical role in IM insertion than a specific amino acid
sequence or secondary structure.
Efficient Import of Tim23p Requires a Pair of Hydrophobic Segments
Our results above suggest that Tim23p carries redundant targeting
information in TM segments 1 and 4. To test whether either TM segment 1 or 4 is sufficient for targeting, we created Tim23p constructs that
contain only a single TM segment. As shown in Figure
6A, starting with a Tim23p construct that
lacks the first two TM segments (Δ1Δ2), we removed TM segment 4.
Similarly, we removed both loop L3 and the fourth TM segment from
Δ1Δ2, and we also made a construct that lacks loop L3 and the third
TM segment. We found that while Δ1Δ2 was imported into mitochondria
to a protease-protected location, constructs lacking TM segment 4
failed to be imported (Figure 6A). A Tim23p construct that contains
only TM segment 4 is imported into mitochondria, but ~10-fold less
efficiently than the Δ1Δ2 construct, which contains both TM3 and
TM4. A construct that contains only TM3 is not imported and fails to
even bind to mitochondria. Our results suggest that TM segment 4
functions as a more effective targeting signal when paired with TM
segment 3.
We similarly found that the targeting activity of TM segment 1 is
increased in combination with TM segment 2, as compared with TM segment
1 alone. Starting with a construct that lacks TM segments 3 and 4
(Δ3Δ4), we deleted TM segment 2 (Figure 6B). We
also created a construct that carries only TM segment 2. While Δ3Δ4
was imported into mitochondria, very little of the protein containing
only TM1 was imported into mitochondria. A construct containing only
TM2 was not imported. Our results suggest that the import information
of Tim23p is carried in TM segments 1 and 4, and both segments need the
cooperation of adjacent hydrophobic segments to be recognized by the
import machinery. This conclusion is also supported by our observation
that a Tim23p construct that carries only TM segments 1 and 4
(Δ2Δ3) is imported into mitochondria (and inserted into the IM)
almost as efficiently as the wt Tim23 protein (Figure 4B).
Tim23p Lacking the Fourth TM Segment Is Not Efficiently Imported into Mitochondria
Our results, suggesting that the Tim23p TM segments need to cooperate to promote efficient import into mitochondria, raise the possibility that a specific secondary structure, such as paired TM segments, is recognized by the import machinery. Supporting this idea, we found that Tim23p constructs lacking TM segment 4 were incompletely imported into mitochondria. Tim23p lacking TM segment 4 or a construct lacking both loop L3 and TM segment 4 were incubated with isolated mitochondria along with the wt Tim23 protein (Figure 7A). When mitochondria were treated with protease after the import reaction, we found that most of Tim23p was inside the mitochondria and protected from digestion. In contrast, ~80% of both constructs lacking TM 4 were digested to a smaller form by trypsin digestion (Figure 7A, mitos + trypsin, labeled f) or by proteinase K digestion (Figure 7A, mitos + proK, labeled f'). Both of the constructs lacking TM 4 appeared to get stuck in transit across the OM at the same point since protease treatment generated fragments of identical size from both proteins. The estimated mass of the proteinase K fragment (~17.5 kDa) represents a Tim23 protein lacking TM 3, TM 4, and loop L3. We conclude from these results that after recognition and binding of the paired TM-targeting signals, the wt Tim23 protein is imported into mitochondria in an N-to-C direction. Constructs that lack TM segment 4 cannot form a correctly paired structure. Therefore, TM segment 3, in the absence of TM 4, is not efficiently recognized by the import machinery, and the carboxyl-terminal region of Tim23p remains outside the OM accessible to protease digestion.
While the majority of the molecules lacking TM 4 got stuck during
import into mitochondria, a small number of proteins were completely
imported. As shown in Figure 7A, ~10-20% of the construct without
TM 4 was protected from protease digestion after import. Supporting
this conclusion, we found that constructs lacking TM 4, or both loop L3
and TM4, provide functional Tim23p activity in yeast cells (Figure 7B).
Since both constructs can rescue the lethality of a
tim23::URA3 disruption, some fraction of these proteins must be imported into mitochondria and inserted into the IM.
Surprisingly, while TM segment 4 and loop L3 appear to play an
important role in Tim23p import, these sequences do not seem critical
for Tim23p function. Constructs lacking TM 4 and loop L3, however, are
not fully functional, as they cannot rescue tim23::URA3 strains at elevated temperatures (Ryan
and Jensen, 1993
).
Tim23p, along with several other proteins of the mitochondrial IM, does not carry an amino-terminal presequence. The most likely topology for Tim23p places the protein in the IM with four TM segments, with its hydrophilic amino-terminal domain facing the intermembrane space, and with two positively charged loops facing the matrix. We replaced the positively charged amino acids in one or both loops with alanine residues and found that the positive charges are not required for import into mitochondria, but at least one positively charged loop is required for insertion into the IM. We found that the signal to import Tim23p across the OM and into mitochondria is carried in the first and fourth hydrophobic TM segments. These TM segments can mediate the import of Tim23p into mitochondria, but they are not sufficient to insert Tim23p into the IM. These hydrophobic segments represent novel mitochondrial targeting information and differ dramatically from the positively charged import signals carried on most matrix-targeted precursor proteins. Our results suggest that Tim23p contains separate and distinct targeting signals: hydrophobic signals for import into the organelle and positively charged loops for IM insertion. We therefore propose that Tim23p is imported into mitochondria in at least two independent steps using machinery different from that used by presequence-containing proteins.
The import of Tim23p appears to differ from that of another IM protein,
Bcs1p, whose targeting signal has recently been characterized (Fölsch
et al., 1996
). Bcs1p, like Tim23p, does not carry an amino-terminal presequence, and its targeting signal has been shown to
be a positively charged stretch of amino acids immediately adjacent to
a single TM-spanning segment. This positively charged region, which has
the capacity to form an amphipathic helix, is proposed to function in a
manner analogous to presequences. The TM segment of Bcs1p is thought to
be a stop-transfer sequence preventing complete translocation of Bcs1p
into the matrix. In contrast to Bcs1p, we find that the positively
charged loops in Tim23p do not function as import signals. Tim23p
constructs lacking the positive charges in loops L1 or L3 are still
imported into the organelle.
Our results suggest that the positively charged loops of Tim23p mediate insertion into the IM. The mitochondrial IM has two separate import complexes, the Tim54p-Tim22p complex and the Tim23p-Tim17p complex (Sirrenberg et al., 1996, 1998; Kerscher et al., 1997; Koehler et al., 1998). We have recently shown that Tim23p is inserted into the IM via the Tim54p/Tim22p machinery (Kerscher et al., 1997). In contrast, Bcs1p appears to use the Tim23p/Tim17p pathway (Fölsch et al., 1996; Kerscher and Jensen, unpublished data). Furthermore, matrix-destined precursor proteins with amino-terminal presequences appear to be translocated across the IM by the Tim23p/Tim17p machinery (Sirrenberg et al., 1996; Kerscher et al., 1997; Emtage, Kerscher, and Jensen, unpublished data; Kerscher and Jensen, unpublished data). Tim23p must therefore carry a different signal directing it to the Tim54p-Tim22p complex. We propose that proteins that carry either an amino-terminal presequence, or an internal segment capable of forming an amphipathic helix, are recognized by the Tim23p-Tim17p complex, while the positively charged loops of Tim23p (which are not amphipathic) are recognized by the Tim54p-Tim22p complex. In addition to Tim23p, several other polytopic proteins, including Tim22p, Tim17p, Aac1p, and PiC, are inserted into the IM via the Tim54p/Tim22p machinery (Sirrenberg et al., 1996; Kerscher et al., 1997). We predict that the insertion of these IM proteins is mediated by positively charged, matrix-facing loops similar to those in Tim23p.
While the positively charged loops are required for the IM insertion of Tim23p, these loops do not mediate the import of Tim23p into mitochondria. Instead, hydrophobic sequences in TM segments 1 and 4 appear to mediate the import of Tim23p into the organelle. Supporting our hypothesis that the import signal for this class of proteins is hydrophobic, Kübrich et al. (1998) have recently identified a translocation intermediate of Aac1p during its transfer across the OM. The majority of the Aac1p intermediate is exposed to the IMS, but remains stuck in the OM with its carboxyl terminus exposed to the matrix. This intermediate of Aac1p is strikingly similar to many of our Tim23p constructs that are unable to insert into the IM. Interestingly, the Aac1p intermediate cannot be removed from the mitochondrial membranes by high-salt treatment, suggesting that its association with the OM import machinery is via hydrophobic interactions.
Although the L1L3Neut version of Tim23p does not carry positively charged residues in loops L1 or L3, do basic residues located in other parts of the molecule, in particular in the amino-terminal domain or at the C terminus, contribute to the potential-dependent import of L1L3Neut into mitochondria? In preliminary studies, we have mutated the basic residues in the C terminus of Tim23p and find that the altered protein rescues the tim23::URA3 disruption strain and is efficiently imported into isolated mitochondria (Davis and Jensen, unpublished observations). We also find that Tim23 proteins lacking the amino-terminal domain are efficiently imported into mitochondria in the absence of positive charges in either loop L1 or L3 (Davis and Jensen, unpublished observations). We therefore argue that the potential-dependent import of L1L3Neut is independent of basic residues. Experiments to determine whether the TM segments of Tim23p are sufficient for import of a passenger protein into mitochondria are in progress.
Aac1p, an ATP/ADP carrier, and PiC, the phosphate carrier, are members of the mitochondrial carrier family. Carrier family members contain six TM segments and are composed of a threefold repeat structure of two TM segments with an intervening loop. Studies with Aac1p indicate that it carries import information in the first one-third of the protein and in the carboxyl-terminal two-thirds (Adrian et al., 1986; Pfanner et al., 1987; Smagula and Douglas, 1988a,b). Similarly, another member of the carrier family, the mammalian brown fat UCP, has at least two internal targeting signals (Liu et al., 1988, 1990). Like Aac1p and UCP, we find that Tim23p has redundant import information, with targeting signals in the first and fourth TM segments of Tim23p.
Although TM segment 1 and TM segment 4 of Tim23p promote import, they do not function efficiently when present as the sole TM domain. The targeting activity of TM1 is much more effective when present with TM2, and TM4 works better in concert with TM3. It is therefore possible that some sort of secondary structure, such as paired TM segments, is important for import across the OM. Supporting this possibility, we find that Tim23p lacking its fourth TM segment gets stuck in the OM during its import into mitochondria. Protease digestion indicates that most of the Tim23 protein is inside the OM, with TM3 extending outside the organelle. Our results argue that Tim23p is imported into mitochondria in an N-to-C direction, and that unpaired TM segments are not efficiently recognized by the import machinery. Similar observations suggesting that coordination between TM segments or specific secondary structures are important determinants for import have been noted in studies with other IM proteins (Liu et al., 1988, 1990).
Recently, Káldi et al. (1998) analyzed the import pathway of Tim23p and identified two import signals within Tim23p. One import signal was reported to be located in the first 62 amino acid residues of Tim23p and mediated the translocation of the Tim23 protein across the OM in the presence or absence of a membrane potential. We find no evidence for an import signal in the amino-terminal region of Tim23p. Tim23Np, which contains the first 96 residues of Tim23p, is not imported into isolated mitochondria, even in the presence of a membrane potential (Ryan et al., 1998; Figure 4). Quantitation of gels indicates that <1% of the Tim23Np protein added to the import reaction is protected from protease digestion. Whether this protected material is a small amount of Tim23 protein actually imported into the organelle or represents incompletely digested protein is unclear. Nonetheless, in our hands the Tim23p amino-terminal domain does not appear to contain a significant import signal in experiments with isolated mitochondria. Supporting this view, we find that constructs carrying the Tim23p amino-terminal region with either TM2 or TM3 are also not imported (Figure 6).
Káldi et al. (1998) identified a second import signal in Tim23p, the positively charged loop L3, and proposed that L3 functioned as an internal import signal. When L3 was placed at the amino terminus of a passenger protein, the authors observed that the chimeric protein (called IS23-DHFR) was imported into the matrix. We find no evidence that L3 functions as an import signal, but instead find that L3 mediates the insertion of Tim23p into the IM after it has crossed the OM. We find that Tim23p constructs lacking the positively charged residues in loop L3 are still imported into mitochondria. In addition, when we placed loop L3 of Tim23p in front of the DHFR protein, the L3-DHFR construct was not imported and did not even bind to mitochondria (Davis and Jensen, unpublished observations). We speculate that the passenger protein, called d2-20 (Klaus et al., 1996), used by Káldi et al. to construct IS23-DHFR, contains additional basic residues that, when combined with the positive charges in L3, generate a functional presequence. Supporting this view, Káldi et al. found that the import of IS23-DHFR, like that of other presequence-containing proteins, was dependent upon Tim23p function. In contrast, import of authentic Tim23p is dependent upon Tim54p/Tim22p and does not require Tim23p function.
If the import signal for Tim23p is truly a hydrophobic TM segment, then several questions remain. For example, what mitochondrial import machinery specifically recognizes this signal? In vitro studies suggest that proteins with presequences prefer the OM Tom20p/Tom22p receptors for their import, while proteins with internal targeting information, such as Aac1p, utilize Tom70p (Söllner et al., 1989, 1990). Consistent with this idea, we find that Tim23p uses the Tom70p receptor for its import (Emtage and Jensen, unpublished observations). Our studies suggest that Tom70p may specifically recognize the hydrophobic import signals within proteins like Tim23p, whereas the Tom22p/Tom20p receptors appear to interact with positively charged presequences. Recent studies, however, suggest that Tom70p may also recognize presequences (Brix et al., 1997; Komiya et al., 1997), raising the possibility that Tom70p interacts with both types of import signals. Experiments are currently underway to identify mitochondrial proteins that directly recognize the hydrophobic import signals within Tim23p.
Another unanswered question is why Tim23p is targeted to the mitochondria and not to other cellular organelles, such as the endoplasmic reticulum. Since the Tim23p hydrophobic segments have no apparent distinguishing features, why aren't they recognized by the signal recognition particle or other endoplasmic reticulum translocation machinery? Whether cells contain a cytosolic factor that recognizes the signals within Tim23p, and targets it specifically to mitochondria, awaits further studies.
We thank Carolyn Machamer, Dan Isaac, Mike Maceyka, Hiromi Sesaki, Jason Holder, and Oliver Kerscher for critical comments on the manuscript. We also thank Mike Yaffe for the F1 and hexokinase antisera and Jeff Schatz for antiserum to Tom70p. This work was supported by grant R01-GM-46803 from the United States Public Health Service to R.E.J.; Medical Scientist Training Program grant GM-07309 to K.R.R.; and a National Institutes of Health Predoctoral Training grant 5T32GN07445 to A.J.D.
* Present address: Department of Developmental Biology, Stanford University, Beckman Center, Stanford, CA 94305.
Corresponding author.
http://www.molbiolcell.org/cgi/content/full/9/9/2577
Serverside non-blocking IO in Swift
Ask questions in our Slack channel!
Lightning
(formerly Edge)
Node.
Reactive Programming
Lightning's event API embraces Functional Reactive Programming by generalizing the familiar concept of promises. This API is called StreamKit.
StreamKit's architecture is inspired by both ReactiveCocoa and RxSwift.
Why did we reimplement?
- Lightning should be easy to use out of the box.
- Lightning is optimized for maximum performance, which requires careful tuning of the internals.
- The modified API is meant to be more similar to the familiar concepts of Futures and Promises.
- We don't want to be opinionated about any one framework. We want it to be easy to integrate Lightning with either ReactiveCocoa or RxSwift.
FRP greatly simplifies the management of asynchronous events. The general concept is that we can build a spout which pushes out asynchronous events as they happen. Then we hook up a pipeline of transformations that operate on events and pass the transformed values along. We can even do things like merge streams in interesting ways! Take a look at some of these operations or watch this talk about how FRP is used at Netflix.
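To make the pipeline idea concrete, here is a minimal sketch of a StreamKit-style chain. It uses only operators that appear in the examples later in this README (`read()`, `map`, `onNext`, `onFailed`, `start`); the `connection` value is assumed to come from a TCP listener as in the TCP example, so this is an illustrative sketch rather than a standalone program.

```swift
import Lightning
import Foundation

// Hypothetical sketch: `connection` is assumed to be delivered by
// `server.listen().startWithNext { connection in ... }`.
// Each raw byte chunk is decoded and trimmed before being handled.
let lines = connection.read()
    .map { String(bytes: $0, encoding: .utf8) ?? "" }
    .map { $0.trimmingCharacters(in: .whitespacesAndNewlines) }

lines.onNext { line in
    print("received: \(line)")       // react to each transformed event
}
lines.onFailed { error in
    print("stream failed: \(error)") // errors propagate down the pipeline
}
lines.start()                        // nothing flows until the stream is started
```

The key design point is that building the pipeline is separate from running it: the transformations are declared up front, and events only begin flowing once `start()` is called.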
Installation
Lightning is available as a Swift 3/4 package. Simply add Lightning as a dependency to your Swift Package.
Swift 3
```swift
import PackageDescription

let package = Package(
    name: "MyProject",
    dependencies: [
        .Package(url: "", majorVersion: 0, minor: 3)
    ]
)
```
Swift 4
```swift
// swift-tools-version:4.0
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription

let package = Package(
    name: "MyProject",
    dependencies: [
        .package(url: "", from: "0.3.0"),
    ]
)
```
Usage
Routing
```swift
import Lightning
import Foundation

// Create an API router.
let api = Router()

// Add a GET "/users" endpoint.
api.get("/users") { request in
    return Response(status: .ok)
}

// NOTE: Equivalent to `api.post("/auth/login")`
let auth = api.subrouter("/auth")
auth.post("/login") { request in
    return Response(status: .ok)
}

// Middleware to log all requests.
// NOTE: Middleware is as simple as a map function or closure!
let app = Router()
app.map { request in
    print(request)
    return request
}

// Mount the API router under "/v1.0".
app.add("/v1.0", api)

// NOTE: Warnings on all unhandled requests. No more hanging clients!
app.any { _ in
    return Response(status: .notFound)
}

// Start the application.
app.start(host: "0.0.0.0", port: 3000)
```
Raw HTTP
```swift
import Lightning
import Foundation

func handleRequest(request: Request) -> Response {
    print(String(bytes: request.body, encoding: .utf8)!)
    return try! Response(json: ["message": "Message received!"])
}

let server = HTTP.Server()
server.listen(host: "0.0.0.0", port: 3000).startWithNext { client in
    let requestStream = client.read()
    requestStream.map(handleRequest).onNext { response in
        client.write(response).start()
    }
    requestStream.onFailed { clientError in
        print("Oh no, there was an error! \(clientError)")
    }
    requestStream.onCompleted {
        print("Goodbye \(client)!")
    }
    requestStream.start()
}

RunLoop.runAll()
```
TCP
```swift
import Lightning
import Foundation

let server = try! TCP.Server()
try! server.bind(host: "0.0.0.0", port: 50000)

server.listen().startWithNext { connection in
    let byteStream = connection.read()
    let strings = byteStream.map { String(bytes: $0, encoding: .utf8)! }
    strings.onNext { message in
        print("Client \(connection) says \"\(message)\"!")
    }
    strings.onFailed { error in
        print("Oh no, there was an error! \(error)")
    }
    strings.onCompleted {
        print("Goodbye \(connection)!")
    }
    strings.start()
}

RunLoop.runAll()
```
Lightning is not Node.js
Lightning is not meant to fulfill all of the roles of Node.js. Node.js is a JavaScript runtime, while Lightning is a TCP/Web server framework. The Swift compiler and package manager, combined with third-party Swift packages, make it unnecessary to build that functionality into Lightning.
Hi Nitin,
Great answer. Thanks a lot. One more question...
I am in the Javaland here, so another viable option for my application is
using JCR, such as the Apache Jackrabbit implementation.
Did you happen to take a look at that as well? I think JCR has even more
similarities with CouchDB than RDF.
How would you compare JCR and CouchDB ?
Thanks a lot,
Demetrius
On Thu, May 7, 2009 at 5:04 PM, Nitin Borwankar <nitin@borwankar.com> wrote:
> Demetrius Nunes wrote:
>
>> Hi
>>
>>
>>
> Hi Demetrius,
>
> We ( bibkn.org) have investigated and used SQL databases, RDF store
> (Virtuoso) and CouchDB for bibliographic metadata management. I am the
> project manager and data architect for this project.
> Relational databases are often a first choice but have many limitations in
> management of loosely typed, messy, string based data sets. So we are in
> agreement on not using that technology.
>
> We, bibkn.org, need both the schemalessness of CouchDB at one end of our
> workflow and the strongly-typedness of RDF at the other end of the workflow
> when all our data has been cleaned up and "ontologized". So we don't see
> this as an either/or between CouchDB and RDF stores.
> However we can definitely say one thing - if you need just the flexible
> schema aspect and are using RDF to give you that, then that is massive
> overkill and the conceptual overhead of the RDF (ontology, schemas,
> namespaces, completely normalized everything ie URI's for subject,
> predicate, object), is simply not worth it. If, however, you want to do
> logical inference and reasoning over your data then clearly the RDF and
> semantic machinery gives you a whole lot of goodness that is worth the
> overhead.
>
> So CouchDB is not a substitute for an RDF-store, but you may be using an
> RDF-store for the lesser things it gives you (flexible schema) and in that
> case CouchDB can do a lot more for you at a much lower overhead and much
> greater ease of use and integration into existing tools.
>
> Additionally SPARQL (like SQL) is not really meant for text search which
> is critical for loosely typed data. So even at our RDF end we have a Solr
> instance for rapid text search over the RDF store.
> Additionally we have couchdb-lucene as an extension on our CouchDB instance
> and this has given us everything we need at the loosely typed data end of
> our workflow.
>
> So if semi-structured data and document management is your primary use case
> and there is no semantic/ontology/inference component then forget RDF-stores
> and just go with CouchDB.
>
> In our project we are developing a format on top of JSON to export
> bibliographic metadata for integration into JSON-friendly data consumers; it
> also happens to have easy mapping to RDF.
> So even if you go to Couch now you may be able to integrate into an
> RDF-store at some later stage if the need arises.
>
> Hope this helps,
>
> Nitin Borwankar,
> Project Manager, Bibliographic Knowledge Network
> bibkn.org
>
>
>
>
>
--
____________________________ | http://mail-archives.apache.org/mod_mbox/couchdb-user/200905.mbox/%3C4aa4f4d60905071311s82a0805p7df9b201c077b840@mail.gmail.com%3E | CC-MAIN-2013-48 | refinedweb | 506 | 60.14 |