On 09/05/2010 05:29 PM, James Morris wrote:
> On Sat, 4 Sep 2010, John Johansen wrote:
>> On 09/04/2010 04:57 AM, Jiri Slaby wrote:
>>> Hi,
>>>
>>> stanse found that you possibly double lock ns->lock:
>>> aa_remove_profiles
>>>   -> write_lock(&ns->lock)
>>>   -> __remove_namespace(ns)
>>>   -> destroy_namespace(ns)
>>>   -> write_lock(&ns->lock)
>>>
>>> Could you fix that if that may happen at all?
>>
>> Yes, thanks Jiri, the patch has already been posted
>
> Are all of the patches in that set ready to be merged to Linus?

Yes, they should be now, I believe so. I have just posted out the updated "Fix security_task_setrlimit logic" patch that fixes the commenting. I can repost the series if you would like.

thanks
john

Source: http://lkml.org/lkml/2010/9/6/287
Developing the Business Logic
Introduction
In this article, we will focus on the business logic tasks that need to be implemented in a loan request process. The part of the loan request process we will cover in this article can be seen in figure 1.
First, we take a look at the script task, and then we implement the Java service tasks. Finally, at the end of this section, we show the BPMN 2.0 XML we have created so far and test the process with Eclipse.
Implementing a script task
The first task we encounter when we look at the loan request process is the script task. We will use it to implement the credit check.
Understanding script tasks in BPMN 2.0
The script task is an official BPMN 2.0 construct. In figure 1, you can see the symbol that BPMN 2.0 prescribes for the script task. It has the same rectangular shape as a regular service task. The little graphical marker in the upper left corner indicates that the task is a script task. The script that is defined in the task will be executed by the process engine—in our case, the Activiti engine. An analyst will define the task in the model and a developer has to implement the script with a language the engine can interpret. When execution of the script is completed, the script task itself will also complete, and the engine moves toward the next execution.
Important BPMN 2.0 attributes of the script task construct are scriptFormat and script. The scriptFormat attribute defines the format of the script and is mandatory. The optional script attribute contains the actual script that needs to be executed. If no script is defined, the task will just complete without doing anything.
Working with script tasks in Activiti
For the Activiti engine to execute the coded script, the scriptFormat attribute must have a value that is compatible with JSR-223, Scripting for the Java platform. There are numerous scripting engines that conform to the JSR; to name just a few of the supported languages: Groovy, Jaskell, AWK, Python, JavaScript, and PHP. For more information, you can check out the JSR-223 specification.
Because, by default, the Groovy jar is shipped with the Activiti distribution, we will use Groovy as the script language in the Check Credit task. If you want to use another JSR-223-compatible scripting engine, it is sufficient to add the corresponding jar file to the classpath and use the appropriate name in the script task configuration.
All the process variables are standard accessible in the script, because the script has access to the execution that arrives in the task. You can, for example, use the process variable inputArray, an array of integers, as shown below.
<script>
  sum = 0;
  for (i in inputArray) {
    sum += i
  }
</script>
That's great stuff, isn't it? Besides reading variables, it is also possible to set process variables in a script by using an assignment statement. In the example above, the sum variable will be stored as a process variable after the script has been executed. If you want to avoid this default behavior, script-local variables can be used as well. In Groovy, the keyword def must be used, for example def sum = 0. In that case, the sum variable is not stored as a process variable.
An alternative way to set process variables is done by explicitly using the execution variable that is available in the script task.
<script>
  def bookVar = "BPMN 2.0 with Activiti"
  execution.setVariable("bookName", bookVar);
</script>
As a final remark on some scripting limitations, it is worth mentioning that some keywords cannot be used as variable names (out, out:print, lang:import, context, and elcontext) because these are reserved words within Activiti and Groovy. Back to our process, and to the Check Credit script task.
Implementing the credit check script task
The implementation of the Check Credit task is pretty straightforward. The Loan Sharks Company agrees to let a customer pass the credit check when his or her income is bigger than the requested loan amount. Check out listing 1 to see the BPMN 2.0 XML fragment that defines the script task.
Listing 1 Script task BPMN fragment
<scriptTask id="checkCredit" scriptFormat="groovy">      #A
  <script>
    out:print "Checking credit for " + name + "\n";      #1
    creditCheckOk = false;                               #2
    if (income > loanAmount) {                           #3
      creditCheckOk = true;
    }
    out:print "Credit checked for " + name + "\n";
  </script>
</scriptTask>

#A The Groovy script language declaration
#1 Using the process variable 'name'
#2 Defining a new process variable
#3 Passing the check when income exceeds the loan amount
In the script, we use the name variable (#1) to print some logging on the console. Then, we create a new process variable (#2) that will hold the information about the credit check in a Boolean. As long as our loan requestor’s income is bigger than the requested loan amount, the credit check will pass (#3).
Using script tasks on the Activiti Engine
To use the Groovy scripting engine on the standard Tomcat distribution that is installed with the Activiti installation, it’s necessary to copy the Groovy groovy-all-version.jar to the tomcat/lib directory. You can find the Groovy jar in the examples/activiti-engine-examples/libs-runtime directory of your Activiti distribution.
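Concretely, the copy step looks like the sketch below. The directory layout is recreated here as a stand-in, and the jar version number is hypothetical; use whatever your Activiti distribution actually ships:

```shell
# Stand-in directories mirroring the Activiti distribution layout described
# above; the groovy-all jar version is hypothetical.
mkdir -p examples/activiti-engine-examples/libs-runtime tomcat/lib
touch examples/activiti-engine-examples/libs-runtime/groovy-all-1.7.5.jar

# The actual step: copy the Groovy jar into Tomcat's lib directory.
cp examples/activiti-engine-examples/libs-runtime/groovy-all-*.jar tomcat/lib/
ls tomcat/lib    # -> groovy-all-1.7.5.jar
```

After a Tomcat restart, the Groovy scripting engine is available to the deployed Activiti engine.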
We have our first script task in the process under control; now let's move on to the Java service tasks.
Defining Java service tasks
Now, we are going to implement the Check Risk task and the Create Application task. The Check Risk task will return a Boolean indicating whether the risk of lending money to a certain customer is too high or not. The Create Application task gathers all the information produced so far in a LoanApplication Java bean and puts this bean on the process as a variable so we have easy access to it in the subsequent steps of the process.
Implementing the risk check Java service task
The Check Risk task is implemented with a Java service task. Typically, these kinds of checks contain valuable logic for the business and change frequently. To give more control to the business in maintaining this kind of logic and enable possible reuse for other applications, business rule engines are often used. Now, we’ll use the Java service task to implement the check.
To illustrate how the risk check behaves, see listing 2.
Listing 2 Implementation of the Check Risk service task
public class RiskChecker implements JavaDelegation {         #A

  public void execute(DelegateExecution execution) {
    String name = (String) execution.getVariable("name");
    System.out.println("Checking loan risk for : " + name);
    boolean riskCheckOk = false;
    if (!name.equalsIgnoreCase("Evil Harry")) {              #1
      riskCheckOk = true;
    }
    execution.setVariable("riskCheckOk", riskCheckOk);       #2
  }
}

#A Java service task standard interface
#1 Checking the name process variable
#2 Setting riskCheckOk variable on execution
In the execute method of the RiskChecker class, we check if the name of our loan applicant is Evil Harry (#1). We have to do this because the Loan Sharks Company knows Harry well enough to be sure that, whenever money is lent to him, nobody ever sees it back again. Harry will not pass the risk check! Don’t forget to set the riskCheckOk variable on the execution (#2) so we can use it later.
Implementing the create application Java service task
The Create Application service task gathers all the data that was produced in the previous steps in one object instance and puts that instance in the process as a process variable. Code listing 3 displays the service task implementation.
Listing 3 The Create Application service task implementation
public class ApplicationCreator implements JavaDelegation {

  public void execute(DelegateExecution execution) {
    LoanApplication la = new LoanApplication();              #1
    la.setCreditCheckOk((Boolean) execution
        .getVariable("creditCheckOk"));                      #2
    la.setRiskCheckOk((Boolean) execution.getVariable("riskCheckOk"));
    la.setCustomerName((String) execution.getVariable("name"));
    la.setRequestedAmount((Integer) execution.getVariable("loanAmount"));
    la.setEmailAddres((String) execution.getVariable("emailAddress"));
    execution.setVariable("loanApplication", la);            #3
  }
}

#1 Creating the LoanApplication bean
#2 Retrieving process variables to populate the bean
#3 Setting the LoanApplication instance on the process
In the execute method of the ApplicationCreator Java service task class, we create the LoanApplication instance (#1). Remember that this object has to implement the Serializable interface; otherwise, the Activiti engine will not be able to store its state in the process database. The values with which we populate the object (#2) are retrieved, on one hand, from the start form we will build and, on the other, the ones that have been set by the two checks we implemented in the earlier tasks. At the end, don’t forget to store the variable in the execution (#3).
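For reference, here is a minimal sketch of what the LoanApplication bean could look like. The accessor names are taken from the calls in listings 3 and 5; everything else (field types, the absence of extra fields) is an assumption. Note that it implements Serializable, which the Activiti engine requires to persist the bean as a process variable:

```java
import java.io.Serializable;

// Minimal sketch of the LoanApplication bean; accessor names follow the
// listings, the rest is assumed. The misspelled setEmailAddres matches
// the call in listing 3.
public class LoanApplication implements Serializable {

  private static final long serialVersionUID = 1L;

  private boolean creditCheckOk;
  private boolean riskCheckOk;
  private String customerName;
  private Integer requestedAmount;
  private String emailAddres;

  public boolean isCreditCheckOk() { return creditCheckOk; }
  public void setCreditCheckOk(Boolean ok) { this.creditCheckOk = ok; }

  public boolean isRiskCheckOk() { return riskCheckOk; }
  public void setRiskCheckOk(Boolean ok) { this.riskCheckOk = ok; }

  public String getCustomerName() { return customerName; }
  public void setCustomerName(String customerName) { this.customerName = customerName; }

  public Integer getRequestedAmount() { return requestedAmount; }
  public void setRequestedAmount(Integer amount) { this.requestedAmount = amount; }

  public String getEmailAddres() { return emailAddres; }
  public void setEmailAddres(String emailAddres) { this.emailAddres = emailAddres; }
}
```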
We saw that the Check Credit script task and the Check Risk service task can be executed in parallel, so let's take a look at the parallel gateway BPMN construct.
Explaining the parallel gateway
The parallel gateway is used in the loan request process model to indicate that the check tasks can be executed independently and in parallel. The parallel gateway concept in BPMN is used to model concurrency in a process. It allows the execution path of a process to fork into multiple paths of execution or join multiple incoming paths together to a single point. The functionality of the parallel gateway is based on the incoming and outgoing sequence flow.
- join—All concurrent executions arriving at the parallel gateway wait in the gateway until an execution has arrived for each incoming sequence flow. Then the process continues past the joining gateway.
- fork—All outgoing sequence flows are followed in parallel, creating one concurrent execution for each sequence flow.
An important difference with, for example, the exclusive gateway is that the parallel gateway does not evaluate conditions.
Now that we have our business logic together and have talked about the control flow surrounding it, we will take a look at the resulting BPMN 2.0 XML.
Creating the BPMN 2.0 XML file
To be able to test this part of the loan request process, we are going to build a BPMN 2.0 XML file with all the tasks covered so far. We saw in the BPMN 2.0 model that the Check Credit task and the Check Risk task should be executed in parallel. The construct that is used in BPMN to realize this kind of behavior is the parallel gateway.
Take a look at code listing 4 to see how our loan request process looks like so far.
Listing 4 BPMN 2.0 XML for the partly finished loan request process
<process id="loanrequest" name="Process to handle a loan request">
  <startEvent id='theStart' />
  <sequenceFlow id='flow1' sourceRef='theStart' targetRef='fork' />
  <parallelGateway id="fork" />                                            #1
  <sequenceFlow id='flow2' sourceRef="fork" targetRef="checkCredit" />
  <sequenceFlow id='flow3' sourceRef="fork" targetRef="checkRisk" />       #2
  <scriptTask id="checkCredit" scriptFormat="groovy">
    <script>
      out:print "Checking credit for " + name + "\n";
      creditCheckOk = false;
      if (income > loanAmount) {
        creditCheckOk = true;
      }
      out:print "Credit checked for " + name + "\n";
    </script>
  </scriptTask>
  <sequenceFlow id='flow4' sourceRef="checkCredit" targetRef="join" />     #3
  <serviceTask id="checkRisk"
      activiti:class="RiskChecker"> <!-- the (fully qualified) class from listing 2 -->
  </serviceTask>
  <sequenceFlow id='flow5' sourceRef="checkRisk" targetRef="join" />
  <parallelGateway id="join" />                                            #4
  <sequenceFlow id='flow6' sourceRef="join" targetRef="createApplication" />
  <serviceTask id="createApplication"
      activiti:class="ApplicationCreator"> <!-- the class from listing 3 -->
  </serviceTask>
  <sequenceFlow id='flow7' sourceRef="createApplication" targetRef="wait" />
  <userTask id='wait' />                                                   #5
  <sequenceFlow id='flow8' sourceRef="wait" targetRef="theEnd" />
  <endEvent id='theEnd' />
</process>

#1 Declaration of the parallel gateway fork
#2 One execution flow forking to a task
#3 After the task is finished join again
#4 Declaration of the parallel gateway join
#5 User task for testing purposes
After the start of the process, execution is brought to the parallel gateway (#1) by the first sequence flow. Execution forks, and the checkCredit script task and checkRisk Java service task are executed concurrently (#2). After both these tasks have finished, the process is guided by the outgoing sequence flows of the tasks (#3) toward the join (#4). After that, the process continues normally with the createApplication task. The user task that is defined at the bottom of the XML (#5) is purely there for testing purposes. Without it, the process ends after the createApplication task and we cannot query it anymore to see the values of the process variables.
NOTE In listing 4, we didn't use the definitions element. We left it out for brevity, but remember that it is needed when we want to execute the BPMN 2.0 XML, whether we use it standalone in a unit test or on the Activiti engine after a deployment.
All the constructs used in the process so far can be easily tested in a unit test in Eclipse. It is good practice to test as early as possible. We want to get rid of possible bugs in the BPMN before deploying on the Activiti engine, so let's give our process a spin!
Testing the process with Eclipse
We will use the ActivitiRule class to get the RuntimeService and use the @Deployment annotation to deploy our process. Take a look at the code in code listing 5 to see how it is done.
Listing 5 Testing the loan request process tasks
public class LoanRequestTest {

  @Rule
  public ActivitiRule activitiRule =
      new ActivitiRule("activiti.cfg-mem.xml");                #A

  @Test
  @Deployment(resources={"chapter4/loanrequest_firstpart.bpmn20.xml"})
  public void creditCheckTrue() {
    Map<String, Object> processVariables =                     #1
        new HashMap<String, Object>();
    // Illustrative values; the income must exceed the loan amount
    // for the credit check to pass.
    processVariables.put("name", "Miss Piggy");
    processVariables.put("income", 2000);
    processVariables.put("loanAmount", 1000);
    processVariables.put("emailAddress", "misspiggy@example.com");
    ProcessInstance pi = activitiRule.getRuntimeService()
        .startProcessInstanceByKey("loanrequest", processVariables);
    processVariables = activitiRule.getRuntimeService()
        .getVariables(pi.getId());
    LoanApplication la = (LoanApplication)                     #2
        processVariables.get("loanApplication");
    assertEquals(true, la.isCreditCheckOk());
    assertEquals(true, la.isRiskCheckOk());                    #3
  }
}

#A Configures Activiti to use the in-memory database
#1 Starts the process with a variables map
#2 Retrieves the LoanApplication process variable
#3 Tests the process variables
As you can see, we don't use the default activiti.cfg.xml but a configuration file that uses the in-memory H2 database. We deploy the loan request BPMN 2.0 XML that we defined in listing 4 to do some early testing. Since we haven't implemented a start form for the process yet, we have to start the process with some programmatically defined variables (#1). If we don't do this, the service tasks will run into NullPointerExceptions.
Let's start a loan request for Miss Piggy. As you can see, Miss Piggy earns more money than she wants to borrow, so passing the checks shouldn't be any problem. You can also see that she has an email address; this address can be used later to implement the email service task. After the process is started, we get the loanApplication variable out of the process (#2). This variable is set by the createApplication task. If the tests (#3) succeed, it means that all the tasks have run successfully.
Summary
You have seen how script tasks and Java service tasks can perform the logic that is needed to handle a loan request. We implemented a bit of business logic and tested it with a simple unit test. We also covered two kinds of gateways that BPMN 2.0 provides to control the paths of execution in a process: the exclusive gateway and the parallel gateway.
A few days ago Microsoft announced TypeScript 4.3 Beta. Here are 3 updates that I found the most interesting and a list of the rest of the updates. Let's go!
override + noImplicitOverride compiler option
TypeScript now takes care of method names' safety when overriding parent class methods. When a method is marked with override, TypeScript will always make sure that a method with the same name exists in a base class. So if you make a change in the method name in the base class, you will be forced to also update it in the derived class. Neat!
But what if you forget to put override on a method? TypeScript's got a compiler option for you: with noImplicitOverride enabled, TypeScript will throw an error if you have a method with the same name in both base and derived classes:
class Base { show() { // ... } } class Derived extends Base { // TypeScript will throw an error // unless there's the "override" before the method name show() { // ... } }
Different types for getter and setter
You don't have to limit getters and setters to the same type anymore.
Let's say you have a private field of type number. You want a setter for the field to accept both number and string, convert the value to a number, and store it in the private field. But you want the getter to always return number, because the field can't be anything but a number. This code would throw an error before 4.3; now it is an acceptable way of typing a getter and setter:
class IrresponsibleNumStorage { private _num = 42; // Before 4.3 TS would throw an error: // 'get' and 'set' accessor must have the same type. get num(): number { return this._num; } // Before 4.3 TS would throw an error: // 'get' and 'set' accessor must have the same type. set num(maybeNumber: number | string) { this._num = Number(maybeNumber); } }
Import statement completions
This is not something that's going to be directly used in day-to-day coding, but it's going to have a huge impact on developer experience. Starting from version 4.3, TypeScript will provide a list of possible imports right after typing the import keyword. Just look at the demo by Microsoft:
Some IDEs already have similar functionality implemented on their side, but it's going to become broader and more consistent thanks to native support in the TS language server.
What else?
- Improvements to the template string typings
- Methods and accessors can now also be given #private names
- Checking if a promise is truthy (if (promise) {}) under strictNullChecks now throws an error
- Index signatures can be used on static class members
- Enums with number members can't be compared to numbers
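As an illustration of the #private names item above (the class here is made up), methods now get the same hard privacy that #private fields already had:

```typescript
class Counter {
  #count = 0;                // #private field (allowed since TS 3.8)

  #increment(): void {       // #private method, new in TS 4.3
    this.#count++;
  }

  tick(): number {
    this.#increment();
    return this.#count;
  }
}

const c = new Counter();
console.log(c.tick());       // -> 1
// c.#increment();           // compile-time error: not accessible outside Counter
```

Unlike TypeScript's private modifier, #private members are also inaccessible at runtime, not just at type-check time.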
Thank you for reading!
P.S. I'm new to Twitter and will be happy if you drop me a line there!
Discussion (6)
Great job! Thank you!
Thanks! Glad you liked it!
Nice article.
Thank you!
Great article! :)
Thanks Klaus! | https://dev.to/alexkhismatulin/typescript-4-3-quickly-my-top-3-updates-1n4a | CC-MAIN-2022-27 | refinedweb | 491 | 70.84 |
Converting from Hypersonic to MYSQL - Christopher Bird, Jun 17, 2004 4:23 PM
This is actually a series of messages strung together from a posting at "theserverside.com". I didn't get many helpful answers there, so I am submitting the evidence to this august group and begging for assistance.
Sorry the post is so long, I wanted to offer as much evidence as I could.
"I had a good working demo of an EJB/CMP application on the Hypersonic DB. However, Hypersonic didn't do some of the things I needed, so I "upgraded" to mysql on a windows platform.
My development environment is Eclipse 2.1.3 and Lomboz 2.1.3.
My problem comes in creating the DDs for both the mysql world and the beans themselves. I have modified the xdoclet.xml file to put in what I thought were the right pieces of data for the generation of the jbosscmp-jdbc.xml file.
Some key parts of that are as follows:
<jbosscmp-jdbc>
mySQLDS
<datasource-mapping>mySQL</datasource-mapping>
<preferred-relation-mapping>foreign-key</preferred-relation-mapping>
-------------------------------------------------------
The following is the log output from the deployment in jboss
Depends On Me: , ObjectName: jboss.j2ee:jndiName=RoleBean,service=EJB
state: FAILED
I Depend On:
Depends On Me: org.jboss.deployment.DeploymentException: Error in jbosscmp-jdbc
.xml : datasource-mapping mySQLDS not found, ObjectName: jboss.j2ee:jndiName=Per
sonBean,service=EJB
state: FAILED
I Depend On:
Depends On Me: org.jboss.deployment.DeploymentException: Error in jbosscmp-jdbc
.xml : datasource-mapping mySQLDS not found]
-----------------------------------------------------------
The following is the sum total of my mysql-service.xml file. This file is in the jboss/server/default/deploy folder. I found the skeleton of this file in the jboss/docs/examples/jca directory. There was a recommendation on one site (I have forgotten which) that I modify this file. It is somewhat worrying that the comments in the file suggest that it is for mySQL 2.011 and I am using release 3.2.3 of JBOSS.
Any help would be greatly appreciated. I am fast losing hair - I keep tearing it out!
Chris Bird - see XML below.
<?xml version="1.0" encoding="UTF-8"?>
<!-- ===================================================================== -->
<!-- -->
<!-- JBoss Server Configuration -->
<!-- -->
<!-- ===================================================================== -->
<!-- $Id: mysql-ds.xml,v 1.1 2002/07/22 22:57:24 d_jencks Exp $ -->
<!-- ==================================================================== -->
<!-- Datasource config for MySQL using 2.0.11 driver -->
<!-- ==================================================================== -->
<local-tx-datasource>
<jndi-name>mySQLDS</jndi-name>
<connection-url>jdbc:mysql://localhost/kipoko</connection-url>
<driver-class>com.mysql.jdbc.Driver</driver-class>
<user-name>chris</user-name>
</local-tx-datasource>
I then followed Chapter 8, "Using Other Databases", of the JBoss 3.2 Getting Started PDF file located here to set up the MySQL database to use with JBoss.
I changed my jbosscmp-jdbc.xml to look like the following:
-----------------------------------------------------------------------------
<jbosscmp-jdbc>
  <defaults>
    <datasource>java:/DefaultDS</datasource>
    <datasource-mapping>mySQL</datasource-mapping>
    <preferred-relation-mapping>foreign-key</preferred-relation-mapping>
  </defaults>
</jbosscmp-jdbc>
-----------------------------------------------------------------------------
The mysql-service.xml (in the deploy directory) looks like the following. Note that apart from comments this is the whole file. It would appear to me that there ought to be more than this.
-----------------------------------------------------------------------------
<local-tx-datasource>
<jndi-name>DefaultDS</jndi-name>
<connection-url>jdbc:mysql://localhost:3306/kipoko</connection-url>
<driver-class>com.mysql.jdbc.Driver</driver-class>
<user-name>chris</user-name>
</local-tx-datasource>
-------------------------------------------------------------------------------
When I go to look at what has been deployed, I am not exactly sure what to look for, but here are some indications that things are not right:
In the section jboss.jdbc there is 1 service: text says
service=SQLExceptionProcessor
If I go to the JNDIView service, I would expect to see a DefaultDS service under the java: namespace. It isn't there. That confirms the error messages from the JBoss deployer, but doesn't solve the problem for me! My guess is that the mysql-service.xml did not deploy correctly, but the log file entries don't confirm that. Here is the extract from the log....
------------------------------------------------------------------------------
2004-06-17 10:03:44,310 INFO [org.jboss.deployment.MainDeployer] Starting deployment of package: file:/D:/jboss/jboss-3.2.3/server/default/deploy/mysql-service.xml
2004-06-17 10:03:44,360 INFO [org.jboss.deployment.MainDeployer] Deployed package: file:/D:/jboss/jboss-3.2.3/server/default/deploy/mysql-service.xml
-------------------------------------------------------------------------------
The actual error messages in the log for the failure to deploy my beans are:
-----------------------------------------------------------------------------
ObjectName: jboss.j2ee:jndiName=PersonBean,service=EJB
state: FAILED
I Depend On:
Depends On Me: org.jboss.deployment.DeploymentException: Error: can't find data source: java:/DefaultDS; - nested throwable: (javax.naming.NameNotFoundException: DefaultDS not bound)]
-----------------------------------------------------------------------------
Any (and all) further help greatly appreciated. If anyone would prefer to get in touch offline, then feel free to email me at:
seabird(nospam)@msn.com.
Of course, you would need to remove the (nospam) entry!
There is clearly something squirrely here. It doesn't seem to matter what entry I put in here
<driver-class>com.mysql.jdbc.Driver</driver-class>
Even if I comment out this line, jboss claims to deploy it correctly (according to the log). So I am truly confused. "
Thanks in advance:
Chris
1. Re: Converting from Hypersonic to MYSQL - Christopher Bird, Jun 17, 2004 4:27 PM (in response to Christopher Bird)
There is a format error in the previous post: the forum did not render the XML cleanly and stripped several tags (I had to drop the opening < from <defaults> and </defaults> just to get the text to display). The fragment should have read:

<jbosscmp-jdbc>
  <defaults>
    <datasource>mySQLDS</datasource>
    <datasource-mapping>mySQL</datasource-mapping>
    <preferred-relation-mapping>foreign-key</preferred-relation-mapping>
  </defaults>
</jbosscmp-jdbc>
2. Re: Converting from Hypersonic to MYSQL - Pedro Nevado, Jun 18, 2004 5:44 AM (in response to Christopher Bird)
In case you may find it useful, here it is my configuration with MySql: it works with JBoss 3.2.4 and MySql 4.0.18:
<!-- ===================================================================== -->
<!-- Standard JBossCMP-JDBC Configuration                                  -->
<!-- ===================================================================== -->
<!-- $Id: standardjbosscmp-jdbc.xml,v 1.39.2.46 2004/04/20 18:20:40 loubyansky Exp $ -->
<jbosscmp-jdbc>
  <defaults>
    <datasource>java:/mysqlDS</datasource>
    <datasource-mapping>mySQL</datasource-mapping>

========================================

<datasources>
  <local-tx-datasource>
    <jndi-name>mysqlDS</jndi-name>
    <connection-url>jdbc:mysql://localhost:3306/vivoen</connection-url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <user-name>p</user-name>
    <password>s</password>
    <!-- sql to call on an existing pooled connection when it is obtained from pool -->
    <check-valid-connection-sql>SELECT 1</check-valid-connection-sql>
  </local-tx-datasource>
</datasources>
3. Re: Converting from Hypersonic to MYSQL - Gary Ratcliffe, Jun 18, 2004 10:16 AM (in response to Christopher Bird)
You mention your mysql-service.xml file. To define a new datasource you need a -ds.xml file. This is processed by the deployer to create various MBeans that respresent the JDBC data source.
Rename your mysql-service.xml file to mysql-ds.xml and redeploy. You should see a number of additional MBeans in the jboss.jca section of the JMX console. The actual data source name you've defined in the <jndi-name> element should be visible in the JNDI view in the java: section.
This is the data source the EJB must be configured to use.
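In concrete terms, the fix is just a rename, since the deployer keys off the -ds.xml suffix. The directory below is a stand-in for a real JBoss 3.2.x install:

```shell
# The JCA deployer only treats files ending in -ds.xml as datasource
# descriptors. Directory and file here are stand-ins for a real install.
DEPLOY=jboss-3.2.3/server/default/deploy
mkdir -p "$DEPLOY"
touch "$DEPLOY/mysql-service.xml"
mv "$DEPLOY/mysql-service.xml" "$DEPLOY/mysql-ds.xml"
ls "$DEPLOY"    # -> mysql-ds.xml
```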
Hope this helps.
If you buy the full documentation it covers configuring new data sources (in the JCA chapter) and the CMP layer.
4. Thanks to All - Re: Converting from Hypersonic to MYSQL - Christopher Bird, Jun 18, 2004 1:14 PM (in response to Christopher Bird)
Thanks very much for your assistance. The subtlety of the -ds had previously escaped me. While I was about it I upgraded to JBOSS 3.2.4. All is working as expected now.
When I stop moving around, I will certainly buy the books.
Regards
Chris | https://developer.jboss.org/thread/76048 | CC-MAIN-2018-17 | refinedweb | 1,273 | 51.65 |
Ramnath Vaidyanathan has released rMaps, an R package for creating interactive maps.
I don't know about you, but I surely got excited about the package from his intro video:
It's a simple screencast with good music.
In the GitHub rMaps repository you can find simple installation instructions as well as three different examples. They all work if you run them in the latest version of RStudio; otherwise, you might run into a couple of minor hiccups like I did.
Just to get you excited, this is the third example where you can easily add markers with pop ups.
suppressMessages(library("rMaps"))

map <- Leaflet$new()
map$setView(c(51.505, -0.09), zoom = 13)
map$tileLayer(provider = "Stamen.Watercolor")
map$marker(c(51.5, -0.09), bindPopup = "Hi. I am a popup")
map
You can view the interactive version of the example here -- I'm sure that a feature will be added later to make it easy to share the maps you make.
Overall, I think that this is a great start and I look forward to using it. For now, don't be discouraged with the lack of documentation. I'm sure that if you ask nicely Ramnath will answer asap!
References
Citations made with knitcitations (Boettiger, 2014).
- Carl Boettiger, (2014) knitcitations: Citations for knitr markdown files.
- Ramnath Vaidyanathan, (2013) rCharts: Interactive Charts using Polycharts.js.
- Ramnath Vaidyanathan, (2014) rMaps: Interactive Maps from R.

## [1] rMaps_0.1           knitcitations_0.5-0 bibtex_0.3-6
## [4] knitr_1.5
##
## loaded via a namespace (and not attached):
##  [1] codetools_0.2-8    digest_0.6.4       evaluate_0.5.1
##  [4] formatR_0.10       grid_3.0.2         httr_0.2
##  [7] lattice_0.20-24    plyr_1.8           rCharts_0.4.2
## [10] RColorBrewer_1.0-5 RCurl_1.95-4.1     RJSONIO_1.0-3
## [13] stringr_0.6.2      tools_3.0.2        whisker_0.3-2
## [16] XML_3.95-0.2       xtable_1.7-1       yaml_2.1...
In this scenario, the user wants to run a MariaDB database container out of their home directory, and they want to mount a volume from their home directory into the container. Let's discover how to manage security when mounting volumes in rootless containers.
Managing SELinux
I have talked several times about how SELinux is an excellent way to confine containers and how simple it is to work with when running a container. The container engine, Podman, launches each container with a unique process SELinux label (usually container_t) and labels all of the container content with a single label (usually container_file_t). We have rules that state that container_t can read and write all content labeled container_file_t. This simple idea has blocked major file system exploits.
Everything works perfectly until the user attempts a volume mount. The problem with volumes is that they are usually just bind mounts from the host. They bring in labels from the host that the SELinux policy does not allow the container's process label to interact with, and the container blows up. This is not a bug; it is a feature. Even if users explicitly mount volumes, SELinux will, by default, prevent access, following the "security should never be opt-in" philosophy.
On the first attempt, if the user tries the following command:
$ podman run --rm -v $HOME/mysql-data:/var/lib/mysql/data -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 mariadb/server
Permission denied
...
It blows up with permission denied. The user reads the man page, and figures out the problem is SELinux. The user sees that they can add a
:Z option to the volume mount, which tells Podman to relabel the volume's content to match the label inside the container. And the SELinux problem is solved.
$ podman run --rm -v $HOME/mysql-data:/var/lib/mysql/data:Z -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 mariadb/server
Permission denied
...
Oops, sad trombone sound - SELinux is fixed, but now the user hits another issue.
User namespace
This time, the problem is that the $HOME/mysql-data directory is owned by the user. In a previous blog, I talked about how --user works in rootless containers. I explained that the root user of a rootless container, by default, is the user's UID. That means files owned by the user inside of the container are owned by root inside of the container. The issue here is that MariaDB needs to own the database directory, and it does not run as root inside of the container. Instead, it runs as the MariaDB user.
$ podman run -ti mariadb/server grep mysql /etc/passwd mysql:x:999:999::/home/mysql:/bin/sh
After a little detective work, the user figures out that the MariaDB server runs as the user 999. Therefore, the user needs to chown the mysql-data to be 999:999, so that MariaDB inside of the container can read/write the database.
Now, the user could attempt the following fix:
chown 999:999 -R $HOME/mysql-data
But the user is going to get permission denied. Furthermore, this is the wrong UID:GID pair. Remember that the UID:GID pair is relative to the user namespace that the user is going to run the container with. Now we have a big math problem. We must look at the user namespace the user is going to run the container with, then add 999 to the beginning UID of the range and subtract 1. And hope we got it right.
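That arithmetic can be sketched in shell. Everything here is illustrative: the subuid start value and the helper name are mine, not from the article.

```shell
#!/bin/sh
# Assumes an /etc/subuid entry like "user:100000:65536".
# Container UID 0 is the user's own UID; container UID N (N >= 1)
# maps to subuid_start + N - 1 on the host.
map_container_uid() {
    subuid_start=$1    # first host UID in the user's subuid range
    container_uid=$2   # UID inside the container (999 for mysql here)
    echo $(( subuid_start + container_uid - 1 ))
}

map_container_uid 100000 999   # prints 100998
```

So with a subuid range starting at 100000, container UID 999 lands on host UID 100998.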
So, the user could try this:
sudo chown CONTAINER999:CONTAINER999 -R $HOME/mysql-data
An easier way to handle this situation would be to use podman unshare. The unshare command is a cool command that joins the user namespace without running any containers.
For example, the user could enter:
podman unshare chown 999:999 -R $HOME/mysql-data
Now the user is ready to run the rootless container with the following command:
$ podman run --rm -v $HOME/mysql-data:/var/lib/mysql/data:Z -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 mariadb/server
Conclusion
Running containers in a rootless environment is very secure, and most containers will work out of the box. But when you start adding --volumes, you can have issues with some of the security mechanisms protecting your host from the container. Understanding what is going on will save you a lot of time and aggravation.
[ Getting started with containers? Check out this free course - Deploying containerized applications: A technical overview. ]
Synopsis:
Running into an issue where the whole cluster stops serving reads and writes for a period of time (the size of the cluster did not matter) when a single node in the cluster fails while running the async Java client.
Resolution:
The suspicion for the issue observed from the async client in the Java benchmark tool is that it would be mitigated by specifying a timeout in the benchmark, say somewhere between 100 ms and 1 second. The length of the interval during which operations fail should be related to the value you set the timeout to. Aerospike agrees that the client should behave better in this event and has added a work item to the queue to enhance the async client's handling of node failure when no timeout, or a long timeout, is set.
Example code:
./run_benchmarks -h 127.0.0.1 -p 3000 -n test -k 100000 -S 1 -o S:50 -T 1000 -w RU,50 -z 1 -async -asyncMaxCommands 1000 -asyncSelectorThreads 8

- h : host
- p : port
- n : namespace
- k : number of keys
- S : startkey
- o : objectSpec
- T : timeout
- w : workload
- z : threads
- a : async
- C : asyncMaxCommands
- W : asyncSelectorThreads
So the above will run the benchmark against host 127.0.0.1 port 3000, on namespace test, with 100000 keys, starting at key 1, with string objects of length 50, with a 1000 ms read/write timeout, with a 50 percent read / 50 percent update workload, with 1 worker thread, in asynchronous mode with at most 1000 concurrent async commands and 8 selector threads.
On 17/03/2013 at 05:48, xxxxxxxx wrote:
User Information:
Cinema 4D Version: 14
Platform: Windows ;
Language(s) : C++ ;
---------
Hello;
I am looking for the 3D point (vector) of the nearest object under the cursor in a viewport.
I could use a ViewportSelect, or maybe a GeRayCollider. However, both seem to have a horrible overhead with objects passed and initialized and copied... returning arrays of collision points, and so on, and so on. I am looking for something lightweight.
Now the C4D system is doing something similar when the navigation mode is "Cursor" (WPREF_CAMERAROTATION_CURSOR) : at the start of a movement, the POI is determined by checking the object under the cursor. This needs to be done very often and very fast, so I believe there is a lightweight check behind it, or at least a pre-cached something.
How is this done?
Also, if you are not directly responding to a plugin message that passes the mouse coordinates, how do you get the actual mouse position in the screen or preferably viewport coordinates? All methods that return the "cursor info" are part of a special plugin class; I don't see a method in the BaseView or general classes.
Thanks in advance...
On 18/03/2013 at 12:01, xxxxxxxx wrote:
Hi,
If you just need to select an object in a viewport you should consider using a PickSessionDataStruct.
And yes, it's only possible to get mouse coordinates from within a plugin mouse proc.
On 19/03/2013 at 08:25, xxxxxxxx wrote:
Interesting, but PickSessionDataStruct returns an object list, not the actual point of impact.
What I am trying to do:
I have a plugin for the Space Navigator that replicates the mouse navigation behavior. Now if the user is in "Cursor mode" (C4D navigation mode) and starts to turn the knob, the plugin wants to look at the point the mouse is currently pointing at, and use that as the pivot of the rotation - exactly the same as the mouse navigation does under the same circumstances. (Since the Space Navigator has no pointer on the screen, I am recycling the mouse pointer, although it's not perfectly logical.)
So, the method I am looking for is "what is the first hit 3D point in the scene under the mouse pointer".
(If there is no such method, I can live without it. But since the mouse navigation does the very same thing, there should.)
On 19/03/2013 at 20:40, xxxxxxxx wrote:
Your best bet is to use a GeRayCollider after you find the 'nearest' object with the PickSessionDataStruct.
For instance, what I do for letting a user pick a point on an object as a starting for growing ivy (as a tool plugin) is:
// ToolData.MouseInput
//*---------------------------------------------------------------------------*
Bool IvyTool::MouseInput(BaseDocument* doc, BaseContainer& data, BaseDraw* bd, EditorWindow* win, const BaseContainer& msg)
//*---------------------------------------------------------------------------*
{
obj = NULL;
document = NULL;
if (!doc) return FALSE;
if (msg.GetLong(BFM_INPUT_CHANNEL) != BFM_INPUT_MOUSELEFT) return TRUE;
if (!msg.GetBool(BFM_INPUT_DOUBLECLICK)) return TRUE;
// Get Mouse coordinates
if (!(bd && win)) return FALSE;
mouseX = msg.GetReal(BFM_INPUT_X);
mouseY = msg.GetReal(BFM_INPUT_Y);
// - Mouse outside of Editor view
if (mouseX < 0.0) return TRUE;
// Get selected object (if any)
ivy.seeded = FALSE;
ivy.grown = FALSE;
ivy.newborn = FALSE;
obj = doc->GetActiveObject();
if (!obj) return TRUE;
// - Polygonize and triangulate
PolygonObject::Free(pobj);
pobj = PolygonizeObject(doc);
if (!pobj) { GePrint("IvyTool.MouseInput.pobj"); return TRUE; }
// Get 3D coordinates on Object for grow point
// - Is cursor over the visible portion of the object
if (!rayC->Init(pobj, TRUE)) { GePrint("IvyTool.MouseInput.rayC->Init()"); DeletePolyObject(); return TRUE; }
//Matrix m;
Vector wtail = bd->SW(Vector(mouseX,mouseY,IVYCZ_START));
Vector otail = wtail; //m * wtail;
Vector oray = (bd->SW(Vector(mouseX,mouseY,IVYCZ_END)) - wtail); // ^ m;
if (!rayC->Intersect(otail, !oray, IVYCZ_LEN)) { GePrint("IvyTool.MouseInput.rayC->Intersect()"); DeletePolyObject(); return TRUE; }
// - Compare the hits for visibility and nearness
GeRayColResult res;
if (!rayC->GetNearestIntersection(&res)) { GePrint("IvyTool.MouseInput.rayC->GetNearestIntersection()"); DeletePolyObject(); return TRUE; }
// - Remove polygon object from document
pobj->Remove();
// Enable Growing Process
ivy.seed(res.hitpos);
document = doc;
return TRUE;
}
On 20/03/2013 at 10:18, xxxxxxxx wrote:
Maybe I'm not understanding the desired result. But doesn't the highlighting effect do this kind of thing?
Where your mouse highlights a point when the cursor gets close enough to it?
I have a C++ plugin called PolygonTool that highlights and selects polygons when the mouse cursor gets close to them using the GetNearestPolygon() method.
But the code can be changed to GetNearestPoint() to work with points instead.
If I change the code to this.
It prints the point id# the mouse is hovering over (based on the range option) on the selected object. Seems pretty fast too:
if (mode == Mpoints)
{
highlight = TRUE;
ViewportPixel *vpp = NULL;
LONG mx = x;
LONG my = y;
vpp = vs->GetNearestPoint(op, mx, my, 10, FALSE, NULL, 0);
if (vpp)
{
pointid = vpp->i;
GePrint("You selected point# " + LongToString(pointid));
DrawViews(DRAWFLAGS_ONLY_ACTIVE_VIEW|DRAWFLAGS_NO_THREAD|DRAWFLAGS_NO_ANIMATION, bd);
}
else pointid = -1;
}
-ScottA
On 26/03/2013 at 03:55, xxxxxxxx wrote:
Thanks a lot... I believe that for a single pick a GeRayCollider may be best indeed.
I don't know whether I could get the necessary response times out of the system though if I use the feature in a Space Navigator controller. I guess I will just drop this aspect; the Navigator doesn't have a mouse pointer anyway...
On 26/03/2013 at 11:33, xxxxxxxx wrote:
Regarding using GeRayCollider in exactly the context you are talking about.
I'm also looking at doing a ray collide on clicking in the view and I have it working in the same fashion as Robert has posted.
The ViewportSelect::PickObject call nicely gets the object under the cursor ... but ...
If that object is not polygonal, how do you change it to being in the correct format? I see you can convert an entire document (Polygonize) and walk the doc to locate the object by name, but that seems crazy! In Robert's example it's tucked away inside the PolygonizeObject call. What are you doing in there Robert?
The Init call for the ray collider can be quite slow. Is this the only way to do it?
If the user is clicking around a bit, I'd like to at least cache some of the data sets to speed things up. Maybe on starting the tool I just polygonize the entire document, init the ray collider, then just have the hit for the intersect test on clicking. This would give a massive hit at the start whilst we set up a "global" ray collider system, and a bit like taking a sledgehammer to an egg, but hopefully the user's experience would be a lot smoother. Otherwise I fear that the repeated hit of polygonizing the document (or object, if that's doable) and initialising the ray collider would make for a very sluggish experience.
-Simon
On 26/03/2013 at 12:31, xxxxxxxx wrote:
const Matrix unitMatrix;
// CollisionDeformerObj.GetPolygonObject
//*---------------------------------------------------------------------------*
PolygonObject* CollisionDeformerObj::GetPolygonObject(BaseObject* op)
//*---------------------------------------------------------------------------*
{
// Clone object and put into temporary document
BaseObject* clop = static_cast<BaseObject*>(op->GetClone(COPYFLAGS_NO_ANIMATION, NULL));
if (!clop) return (PolygonObject* )MessageSystem::NullThrow(GeLoadString(KDZERR_GENERAL), "CollisionDeformerObj.GetPolygonObject.cop");
fakeDoc->InsertObject(clop, NULL, NULL, FALSE);
// Current State to Object
mcd.op = clop;
Bool smc = SendModelingCommand(MCOMMAND_CURRENTSTATETOOBJECT, mcd) && mcd.result;
// - Remove and Delete clone
clop->Remove();
BaseObject::Free(clop);
if (!smc) return (PolygonObject* )MessageSystem::NullThrow(GeLoadString(KDZERR_GENERAL), "CollisionDeformerObj.GetPolygonObject.SendModelingCommand(CURRENTSTATETOOBJECT)");
// Get result
BaseObject* cstoObj = static_cast<BaseObject*>(mcd.result->GetIndex(0));
if (!cstoObj) return (PolygonObject* )MessageSystem::NullThrow(GeLoadString(KDZERR_GENERAL), "CollisionDeformerObj.GetPolygonObject.(CSTO)");
// Select Children
cstoObj->SetBit(BIT_ACTIVE);
SelectChildren(cstoObj);
// Connect object (trick: result is always in global space)
mcd.op = cstoObj;
smc = SendModelingCommand(MCOMMAND_JOIN, mcd) && mcd.result;
// - Free object created by CurrentStateToObject
BaseObject::Free(cstoObj);
if (!smc) return (PolygonObject* )MessageSystem::NullThrow(GeLoadString(KDZERR_GENERAL), "CollisionDeformerObj.GetPolygonObject.SendModelingCommand(JOIN)");
// Triangulate
mcd.op = cstoObj;
if (!SendModelingCommand(MCOMMAND_TRIANGULATE, mcd))
{
BaseObject::Free(cstoObj);
return (PolygonObject* )MessageSystem::NullThrow(GeLoadString(KDZERR_GENERAL), "CollisionDeformerObj.GetPolygonObject.SendModelingCommand(TRIANGULATE)");
}
// Finish up
fakeDoc->InsertObject(cstoObj, NULL, NULL, FALSE);
cstoObj->SetMg(unitMatrix);
cstoObj->Remove();
return ToPoly(cstoObj);
}
// Select Children under op - recursive
//*---------------------------------------------------------------------------*
void CollisionDeformerObj::SelectChildren(BaseObject* op)
//*---------------------------------------------------------------------------*
{
for (; op; op = op->GetNext())
{
op->SetBit(BIT_ACTIVE);
if (op->GetDown()) SelectChildren(op->GetDown());
}
}
On 27/03/2013 at 13:48, xxxxxxxx wrote:
Thanks Robert! I'm not sure that I would have worked out the intricacies of that one by myself.
Oh, I assumed that the fake doc and ModelingCommandData structures were just members of your class so I just autoalloc-ed the fakedoc and had an instance of the mcd.
Do you know under what circumstances you need to re-initialise the ray collider? I suspect that even if the object is just moved/rotated globally (ie, in its local space nothing has changed) then you would still have to re-initialise it.
I'm just wondering if I can "know" that I need to re-initialise the collider or not to speed things up for the end user if they are just clicking around without actually touching the model or camera, or if I have to just accept the lag and be done with it
Thanks again for your help Robert - I really appreciate it
On 27/03/2013 at 14:23, xxxxxxxx wrote:
As far as I know, you should only need to re-initialize the GeRayCollider if data on the collider object changes - obj->IsDirty(DIRTYFLAGS_DATA). Transformations will not change the mesh relationships (vertices) locally.
On 28/03/2013 at 09:01, xxxxxxxx wrote:
OK, I'm not getting quite what I expect here.
But ... if I then rotate the cube around the X-axis, then use (2b) above, the normal is reported in what I assume is object space. ie, if I rotate by 45 degrees around the X-Axis, I still get a normal along the X-axis on the originally facing x-face.
It does not explicitly say the results are in object space so I'm wondering what is going on. (The hit point in the GeRayColResult also seems to be in object space too).
If it is in object space, I guess I need to work out how to transform that back into world space. Is there a simple call to do this that takes any hierarchy of transformation matrices into account etc?
I'm using Rails on a side project I'm playing with. Many of my peers would probably ask why would I do this to myself. The answer is simple: Rails helps me get stuff done quickly because it is super boring. It is so boring that it makes me excited.
My app is split in two: a widget that every website can use — a JS bundle, and a back-office/API. For the back-office I mainly use Rails and the magnificent Alpine.js. Authoring server-side rendered routes is so much easier to do with these two. Rails provides all the things I need in terms of back-end (even E-mailing comes built in!), and Alpine allows me to sprinkle JS as if my HTML was a React application: declarative, co-located JavaScript. For the widget, I use Preact. I originally started it as a React project, but I wanted to keep a minimal bundle size.
I launched a new project, and I immediately installed graphql-ruby as a GraphQL server implementation, to easily declare resources that can later be translated into type-safe data fetching from my widget. I mostly do TypeScript so it's soothing me knowing that I can generate types and enforce them at runtime. I used urql as a GraphQL client, because it looked like it would result in a smaller bundle (~4 times smaller than Apollo) and I wanted to experiment with it.
By measuring the bundle size using tools like Webpack Visualizer, I found out that Urql bundles graphql.js to the client, and that's something that I don't really need and therefore do not want. It turned out that Urql and its dependencies were more than 50% of my bundle size. I mean, this wasn't very large and I was quite satisfied with Urql, but this is a widget, not an entire application. The smaller - the better - and I want GraphQL for the amazing developer experience coming from the tight TypeScript integration, but that's something I'm fine with sacrificing in favor of my production bundle size (or solve later). Therefore, I decided to drop GraphQL and migrate my data fetching to use simple REST endpoints, with swr to hook up with Preact.
As I started building a landing page, I wanted to make an animation to showcase the product — so I made one by myself with Tailwind CSS and Alpine. Eventually, I had a very clean animation with better looks than the current product. However, since my widget is a Preact app and my server is a Rails app, I couldn't share the components between my backend and the widget.
Or could I..?
Most Preact and React apps use JSON to pass data between client and server. What if the server already knows how to render stuff? Well, instead of serving JSONs we can serve HTML — Exactly what DHH was preaching about lately when they introduced Hotwire. So, instead of the following payload:
{ "message_id": "abcd1234", "text": "Hey, friend!", "author": { "name": "Chandler Bing", "avatar_url": "" } }
I could return the following HTML:
<div id="message-abcd1234"> <img class="avatar" src="" /> <div>Hey, friend!</div> <span>— Chandler Bing</span> </div>
And use dangerouslySetInnerHTML in Preact and React to show the message. Since I'm using Rails and I know for sure that my HTML is sanitized, that's not done dangerously at all. This way, I can keep my authorization and render specific layouts for specific cases, and keep all this logic in my precious, well-tested back-end.
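Hooking this up on the client is just a fetcher that resolves the response body as text instead of JSON. A minimal sketch (the helper name and the Accept header are mine, not from the post):

```javascript
// Ask the server for rendered HTML rather than a JSON payload.
const htmlFetcher = (url) =>
  fetch(url, { headers: { Accept: "text/html" } }).then((res) => {
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return res.text(); // server-rendered, already-sanitized markup
  });

// In the widget, swr usage would then look something like:
//   const { data: html } = useSWR("/messages/abcd1234", htmlFetcher);
//   return <div dangerouslySetInnerHTML={{ __html: html }} />;
```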
The funny thing is that this is not a new thing. The web did that before React was a thing! You don't have to use JSON! But, since React and other SPA frameworks have taken the world by storm, I regularly meet people who don't know about old-school frameworks like Rails and Django. And sometimes, the greatest solutions come from mixing modern and old solutions.
Now, this path is not all gummy bears. If you're into optimistic updates, that's not the path for you — because it relies on the fact that you want to keep as much of the business logic as possible in your back-end. Rendering HTML is the cherry on top of everything.
Personally, I think that most apps are either offline-centric or online-centric. Being somewhere in the middle is confusing. If you want to implement optimistic updates, you're probably trying to do that by manually crafting an optimistic response. That can be very hard to maintain, and you can probably get better results if you architect your app to work offline with tools like PouchDB.
When working on my side-project, I don't want to waste time on optimistic updates. If my server is down, I'd rather get an error. I want my project to be as simple as possible. It's not a realtime chat application.
It's also harder to bind to event handlers, compared to Preact-rendered applications. How would you "rehydrate" the HTML coming from the server? How can you ensure the buttons whatever you need when they are being clicked? Consider the following HTML:
<button onclick="what_should_this_fn_be()">Click me!</button>
what_should_this_fn_be() needs to be replaced with something in order for our button to be interactive. It can be inline JS, like the good ol' days, but we won't be able to bind it to functions in our bundle if we're minifying them — or we would have to export them globally. Anyway, this ship has sailed. We need a better solution for event binding in our dynamic HTML sections:
Using event bubbling
This is the "manual" or "explicit" way. It's been in use for years.
When adding
onClick={myFunction} in Preact and React, you will actually get events that bubbled from the children of the provided DOM node — not just events that happened on the specific DOM node. This is a great way to solve our issue — if you have dynamic HTML that can be clicked, you can lift the event handling to the container, which lives in Preact and renders the dynamic HTML. So instead of having just a
<button>, you can add some hints like
<button data-, and reference this
data-action in your event handler:
function MyComponent() {
  const html = `<button data-action="showAnAlert">Click me!</button>`;
  return (
    <div
      dangerouslySetInnerHTML={{ __html: html }}
      onClick={(event) => {
        if (event.target?.dataset.action === "showAnAlert") {
          event.preventDefault();
          alert(`Look at me, I'm doing something!`);
        }
      }}
    />
  );
}
This way, the server can declaratively say what is the role of a button, and you can have the implementation in JS land.
Using custom elements
We can expose Preact elements as custom elements. So, instead of having the following code:
<button>What should I do?</button>
We can use a custom component:
<my-alert-button>Show an alert!</my-alert-button>
That would work kinda well with Preact, and can also be reused in our Rails backend. In fact, that's what I do when rendering icons inside the Rails and the widget app, as I mentioned on this one tweet. That is somewhat a win, but when used heavily, it creates some issues.
First, I will have to work with Shadow DOM and will go outside of Preact land just to go back into Preact through the custom element. So Preact -> HTML -> Custom Element -> Preact. I can live with it, but there is a better solution, one that doesn't have that massive accessibility issue:
dangerouslySetInnerHTML hurts accessibility
The big issue for both the solutions mentioned before is the accessibility issue coming from dangerouslySetInnerHTML: when the HTML is replaced, the DOM elements will be replaced by detaching them from the DOM and attaching new elements. That means that you lose focus and DOM state — so if you had input fields or details popovers, they will be reset.
When using a library that does DOM diffing for you, doesn't matter if it's virtual or not, you want to use this diff. So in Preact, we would probably want to parse our HTML into Preact elements, so Preact will know how to diff them. In React, we would want to make them React elements. In Svelte, I'm pretty sure we wouldn't have any way of doing so because all the diffing is compiled away — so we would need to use a library like morphdom to do that.
Let's talk about Preact.
Using preact-markup

Preact Markup is a cool project that parses HTML to Preact elements, allowing you to render custom HTML elements using Preact components, without the real component boundary. It even lets you override standard HTML elements with your own components; for example, you can have a my-button element and also override the standard button one.
Preact Markup's implementation is rather easy to understand. I suggest you try building one yourself to fully grasp the ideas there. It can be translated to React very easily. Maybe that could be a future blog post, who knows?
Summing up
Getting HTML back from the server and injecting it into our client-side apps is so nice. It works tremendously with SWR, and helped me build my side-project at a veeeeeery fast pace. The Server Components initiative by the React team is probably onto something — but you don't need React to get the server magic. It's all a question of trade-offs. If server-side rendering is mostly your jam, you could stick with it.
Once you need more complicated behaviors, you can always make a JSON response — and maybe you will find yourself embedding server-generated HTML into it to sweeten the pill 😉
...making Linux just a little more fun!
Citrix Systems agreed to acquire XenSource, an open source leader in virtual infrastructure solutions. Originally created at the University of Cambridge, the Xen virtualization "engine" is now developed collaboratively by an active open source community of senior engineers at many of the industry's most innovative infrastructure companies, including leading hardware vendors like Intel, IBM, HP and AMD. This open collaborative approach significantly accelerates the innovation of the Xen engine, leading to continual state-of-the-art improvements in performance, scalability and cross-platform support. The next-generation Xen architecture is widely acknowledged for its industry-leading performance, efficiency, security and native support for the latest hardware-assisted virtualization features.
Upcoming conferences and events:

SCALE 6x
- Main conference
- Women in Open Source mini-conference
- Open Source in Education mini-conference

Palau de Congressos de Catalunya

Gartner - 26th Annual Data Center Conference
November 27 - 30; Las Vegas, NV.
August saw the first release candidate of Damn Small Linux 4.0 and an update of the current stable branch, the 3.4 series, to version 3.4.1. Both are available here:

Ron Peterson
In my last article, I introduced the basic framework for creating your own PostgreSQL function in C. In this article, I'd like to expand on that introduction. I'll introduce:

- accepting composite types (tuples) as function arguments
- building and returning a composite type
- printing debugging output from your function with ereport
I'm also going to eschew the use of the PostgreSQL extension building infrastructure I used last time, in order to illustrate the details of how PostgreSQL shared object files are built in Linux.
The same prerequisites as in my previous article still apply. All of the code presented here can be downloaded as a single tarball if you would prefer to avoid typing practice (and the consequent frustration of debugging typos, rather than code.)
Before we begin, let's look at what we want to accomplish. Let's say we'd like to create a set of PostgreSQL functions that implement the features of Mark Galassi's excellent GNU Scientific Library. Let's pick one of the library's functions, gsl_complex_add, and see what we need to do to create a corresponding PostgreSQL function. When we're finished, we'll be able to write SQL statements like this:
> select gsl_complex_add( ROW( 3.2e4, -3.2 ), ROW( 4.1, 4.245e-3 ) );
   gsl_complex_add
---------------------
 (32004.1,-3.195755)
I think it's appropriate to represent complex numbers in PostgreSQL as tuples, where the real and imaginary components get passed around together as a pair. Think of a tuple as a structure in C. The tuple concept jibes with the way we're taught to think about these things in other domains. We'll be using PostgreSQL's CREATE TYPE statement to define the composite type we use as follows:
DROP FUNCTION gsl_complex_add ( __complex, __complex );
DROP TYPE __complex;

CREATE TYPE __complex AS (
  r float,
  i float
);

CREATE OR REPLACE FUNCTION gsl_complex_add( __complex, __complex )
RETURNS __complex
AS 'example.so', 'c_complex_add'
LANGUAGE C STRICT IMMUTABLE;
OK, so now that we know what we would like to do, let's look at how we get there. I'll dump all of the code on you at one time, and follow up by trying to explain how it works. I won't spend too much time repeating what I say in the code comments though, because that would be redundant, just like this sentence.
// example.c:

// PostgreSQL includes
#include "postgres.h"
#include "fmgr.h"

// Tuple building functions and macros
#include "access/heapam.h"
#include "funcapi.h"

#include <string.h>

// GNU Scientific Library headers
#include <gsl/gsl_complex.h>
#include <gsl/gsl_complex_math.h>

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

// forward declaration to keep compiler happy
Datum c_complex_add( PG_FUNCTION_ARGS );

PG_FUNCTION_INFO_V1( c_complex_add );

Datum
c_complex_add( PG_FUNCTION_ARGS )
{
   // input variables
   HeapTupleHeader lt, rt;
   bool isNull;
   int tuplen;
   bool *nulls;

   // things we need to deal with constructing our composite type
   TupleDesc tupdesc;
   Datum values[2];
   HeapTuple tuple;

   // See PostgreSQL Manual section 33.9.2 for base types in C language
   // functions, which tells us that our sql 'float' (aka 'double
   // precision') is a 'float8 *' in PostgreSQL C code.
   float8 *tmp;

   // defined by GSL library
   gsl_complex l, r, ret;

   // Get arguments.  If we declare our function as STRICT, then
   // this check is superfluous.
   if( PG_ARGISNULL(0) || PG_ARGISNULL(1) ) {
      PG_RETURN_NULL();
   }

   // Get components of first complex number
   //// get the tuple
   lt = PG_GETARG_HEAPTUPLEHEADER(0);
   ////// get the first element of the tuple
   tmp = (float8*)GetAttributeByNum( lt, 1, &isNull );
   if( isNull ) {
      PG_RETURN_NULL();
   }
   GSL_SET_REAL( &l, *tmp );
   ////// get the second element of the tuple
   tmp = (float8*)GetAttributeByNum( lt, 2, &isNull );
   if( isNull ) {
      PG_RETURN_NULL();
   }
   GSL_SET_IMAG( &l, *tmp );

   // Get components of second complex number
   rt = PG_GETARG_HEAPTUPLEHEADER(1);
   tmp = (float8*)GetAttributeByNum( rt, 1, &isNull );
   if( isNull ) {
      PG_RETURN_NULL();
   }
   GSL_SET_REAL( &r, *tmp );
   tmp = (float8*)GetAttributeByNum( rt, 2, &isNull );
   if( isNull ) {
      PG_RETURN_NULL();
   }
   GSL_SET_IMAG( &r, *tmp );

   // Example of how to print informational debugging statements from
   // your PostgreSQL module.  Remember to set minimum log error
   // levels appropriately in postgresql.conf, or you might not
   // see any output.
   ereport( INFO,
            ( errcode( ERRCODE_SUCCESSFUL_COMPLETION ),
              errmsg( "tmp: %e\n", *tmp )));

   // call our GSL library function
   ret = gsl_complex_add( l, r );

   // Now we need to convert this value into a PostgreSQL composite
   // type.
   if( get_call_result_type( fcinfo, NULL, &tupdesc ) != TYPEFUNC_COMPOSITE )
      ereport( ERROR,
               ( errcode( ERRCODE_FEATURE_NOT_SUPPORTED ),
                 errmsg( "function returning record called in context "
                         "that cannot accept type record" )));

   // Use BlessTupleDesc if working with Datums.  Use
   // TupleDescGetAttInMetadata if working with C strings (official
   // 8.2 docs section 33.9.9 shows usage)
   BlessTupleDesc( tupdesc );

   // WARNING: Architecture specific code!
   // GSL uses double representation of complex numbers, which
   // on x86 is eight bytes.
   // Float8GetDatum defined in postgres.h.
   values[0] = Float8GetDatum( GSL_REAL( ret ) );
   values[1] = Float8GetDatum( GSL_IMAG( ret ) );

   tuplen = tupdesc->natts;
   nulls = palloc( tuplen * sizeof( bool ) );
   // palloc does not zero memory; mark every attribute as non-null
   memset( nulls, 0, tuplen * sizeof( bool ) );

   // build tuple from datum array
   tuple = heap_form_tuple( tupdesc, values, nulls );

   pfree( nulls );

   // A float8 datum palloc's space, so if we free them too soon,
   // their values will be corrupted (so don't pfree here, let
   // PostgreSQL take care of it.)
   // pfree(values);

   PG_RETURN_DATUM( HeapTupleGetDatum( tuple ) );
}
Wow, those comments are so illustrative, I think the article is almost finished! Alright, I'll try to explicate a few of the finer points. After all, that's what I don't get paid for.
There's nothing much new going on here relative to my last article until we see the declaration of our HeapTupleHeader variables lt and rt (for "left tuple" and "right tuple".) We're not taking simple data types as arguments here, we're taking tuple arguments that we defined with our CREATE TYPE statement. Each of our tuples have two double precision components, representing our complex number's real and imaginary components.
First, we read our tuple arguments into rt and lt, using the PG_GETARG_HEAPTUPLEHEADER macro. Then we pick the component values out of our tuple using the GetAttributeByNum function. Refer to the Base Types in C Language Functions section of the manual (33.9.2) for information about how to represent PostgreSQL data types in your C code. In our case, this table tells us that our double precision (aka "float") values in SQL are represented in PostgreSQL C code as "float8 *".
It so happens that our GSL library's complex number functions expect "double" values as input, which on the x86 Linux platform I'm running, are conveniently eight bytes, and map directly to the float8 values used by PostgreSQL. Pay close attention here, because if your data types don't map properly, you'll get a headache.
We then use the GSL library's GSL_SET_REAL and GSL_SET_IMAG macros to construct complex number representations that we can pass to the gsl_complex_add function. We convert the data that GSL understands back into a form that PostgreSQL understands by using the Float8GetDatum function. You can see the set of other typical C type to Datum conversion functions in postgres.h.
To create the tuple we'd like to return, we first construct an array of datum values in our "values" variable. The heap_form_tuple function converts this array into a PostgreSQL tuple, which the HeapTupleGetDatum function converts into a datum form we can return with PG_RETURN_DATUM.
If we were working with C strings, we would probably do things a bit differently. I'm not going to illustrate how that works, because The Fine Manual already includes a nice example. Note that the example in the manual is also illustrating how to return a set of tuples, which we are not concerning ourselves with here.
Note the ereport( INFO ... ) function in the middle of our code. I find this function very handy for printing debugging information to the SQL console while I'm developing new code. You can see how this works if you leave this uncommented when you compile and install this code.
It's time to turn this code into something we can use. Instead of using the PGXS infrastructure as I did in my last article, we'll get under the hood. It's not only educational to see how to build a shared module, but creating your own Makefile also gives you a little more latitude to tweak your build options just the way you like. It might also make it easier for you to handle building projects with lots of dependencies.
Here's a simple Makefile to illustrate how we build our shared object file. In real life, I'd probably use some automatic variables and such, but I don't want to obfuscate the basic build process with Makefile arcana. The pg_config command is your friend, and will help you ascertain where the include files and such are installed on your system. Building the shared object file is a simple matter of first building a position independent (the -fpic flag) object file, and then linking against all required libraries using the -shared flag to build the shared object file. This is all detailed in section 33.9.6 of the manual, which also includes instructions for other architectures besides Linux.
INCLUDEDIRS := -I.
INCLUDEDIRS += -I$(shell pg_config --includedir-server)
INCLUDEDIRS += -I$(shell pg_config --includedir)

# If you are using shared libraries, make sure this location can be
# found at runtime (see /etc/ld.so.conf and ldconfig command).
LIBDIR = -L$(shell pg_config --libdir)

# This is where the shared object should be installed
LIBINSTALL = $(shell pg_config --pkglibdir)

example.so: example.c Makefile
	gcc -fpic -o example.o -c example.c $(INCLUDEDIRS)
	gcc -shared -o example.so example.o $(LIBDIR) -lpq -lgsl -lgslcblas -lm
	cp example.so $(LIBINSTALL)
The Makefile copies the shared object file into the PostgreSQL library directory, so that we can execute the SQL I showed you at the beginning of this article to create our __complex composite type and our gsl_complex_add function. Just fire up psql as a user with permissions to do such things, and then type '\i example.sql' to do so. And that brings us to...
Well, we started at the end, so I guess that means we're finished. As you can see, once you have grasped the basic framework, you have the whole world of C library functions available for you to use directly within PostgreSQL. This gives you all of the attendant advantages of working within a transactional database system. I hope you find this prospect interesting enough to port some intriguing libraries into PostgreSQL, because Lord knows I certainly don't have time to do it all myself. :)
Happy hacking. And a special thanks to the PostgreSQL coding gurus who made this fantastic database in the first place.
I will focus on using mutt as an MUA. The confusing advantage of mutt is that it submits the email to an MTA on the local machine for delivery.
Now let's try to get everything to work together seamlessly.
Although I didn't show how to configure encryption in this example, I strongly suggest using TLS with every MTA you run. The setup isn't too hard and having encrypted SMTP AUTH sessions is the best way to protect the passwords.
This is one of many articles written about this topic. You can find more details. | http://www.tldp.org/LDP/LGNET/142/TWDT.html | CC-MAIN-2016-40 | refinedweb | 1,987 | 53.92 |
Reimport a module in python while interactive
This should work (for Python < 3.4):
reload(my.module)
From the Python docs
Reload a previously imported module. The argument must be a module object, so it must have been successfully imported before. This is useful if you have edited the module source file using an external editor and want to try out the new version without leaving the Python interpreter.
If running Python 3.4 and up, do import importlib, then do importlib.reload(nameOfModule).
Don't forget the caveats of using this method:
When a module is reloaded, its dictionary (containing the module’s global variables) is retained. Redefinitions of names will override the old definitions, so this is generally not a problem, but if the new version of a module does not define a name that was defined by the old version, the old definition is not removed..
In Python 3, reload is no longer a built-in function.
If you are using Python 3.4+ you should use reload from the importlib library instead:
import importlib
importlib.reload(some_module)
If you are using Python 3.2 or 3.3 you should instead do:

import imp
imp.reload(module)
If you are using ipython, definitely consider using the autoreload extension:
%load_ext autoreload
%autoreload 2
Actually, in Python 3 the module imp is marked as DEPRECATED. Well, at least that's true for 3.4.
Instead, the reload function from the importlib module should be used:
But be aware that this library had some API-changes with the last two minor versions. | https://codehunter.cc/a/python/reimport-a-module-in-python-while-interactive | CC-MAIN-2022-21 | refinedweb | 261 | 65.52 |
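A quick, self-contained way to see importlib.reload in action is to write a throwaway module to disk, import it, "edit" it, and reload it. (The module name demo_mod and the temp-directory setup here are made up for this sketch.)

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True  # force reloads to re-read the source file

# Write a throwaway module to disk; "demo_mod" is a made-up name.
tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)
path = os.path.join(tmpdir, "demo_mod.py")

with open(path, "w") as f:
    f.write("VALUE = 1\n")
import demo_mod
before = demo_mod.VALUE

# "Edit" the module source, then reload to pick up the change.
with open(path, "w") as f:
    f.write("VALUE = 2\n")
importlib.invalidate_caches()
demo_mod = importlib.reload(demo_mod)
after = demo_mod.VALUE

print(before, after)  # 1 2
```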
Kohn-Sham wavefunctions of the oxygen atom and CO molecule¶
In this section we will look at the Kohn-Sham wavefunctions of the O atom and CO molecule and compare them to results from molecular orbital theory.
The first script, O.py, sets up an oxygen atom in a cubic supercell with non-periodic boundary conditions and calculates the total energy. A couple of extra bands (i.e. Kohn-Sham states) are included in the calculation:
from ase import Atoms
from ase.io import write
from gpaw import GPAW

# Oxygen atom:
atom = Atoms('O', cell=[6, 6, 6], pbc=False)
atom.center()

calc = GPAW(h=0.2,
            hund=True,  # assigns the atom its correct magnetic moment
            txt='O.txt')

atom.set_calculator(calc)
atom.get_potential_energy()

# Write wave functions to gpw file:
calc.write('O.gpw', mode='all')

# Generate cube-files of the orbitals:
for spin in [0, 1]:
    for n in range(calc.get_number_of_bands()):
        wf = calc.get_pseudo_wave_function(band=n, spin=spin)
        write('O.%d.%d.cube' % (spin, n), atom, data=wf)
Towards the end, a .gpw file with the Kohn-Sham wavefunctions is written by calc.write('O.gpw', mode='all'), and cube files containing the individual orbitals are also written.
Run the script and check the text-output file. What are the occupation numbers for the free oxygen atom?
The orbitals can be visualized using Mayavi and its mayavi.mlab.contour3d() function together with the GPAW calculator's get_pseudo_wave_function() method. Reload the gpw-file and look at one of the orbitals like this:
from gpaw import GPAW
from mayavi import mlab

calc = GPAW('O.gpw', txt=None)
lumo = calc.get_pseudo_wave_function(band=2, spin=1)
mlab.contour3d(lumo)
mlab.show()
For an alternative way of viewing the orbitals, see Visualizing iso-surfaces.
Can you identify the highest occupied state and the lowest unoccupied state?
How do your wavefunctions compare to atomic s- and p-orbitals?
Make a script where a CO molecule is placed in the center of a cubic unit cell with non-periodic boundary conditions, e.g. of 6 Å. For more accurate calculations, the cell should definitely be bigger, but for reasons of speed, we use this cell here. A grid spacing of around 0.20 Å will suffice. Include a couple of unoccupied bands in the calculation (what is the number of valence electrons in CO?). You can quickly create the Atoms object with the CO molecule by:
from ase.build import molecule
CO = molecule('CO')
This will create a CO molecule with an approximately correct bond length and the correct magnetic moments on each atom.
Then relax the CO molecule to its minimum energy position. Write the relaxation to a trajectory file and the final results to a .gpw file. The wavefunctions are not written to the .gpw file by default, but can again be saved by writing calc.write('CO.gpw', mode='all'), where calc is the calculator object. Assuming you use opt = QuasiNewton(..., trajectory='CO.traj'), the trajectory can be viewed by:
$ ase gui CO.traj
Try looking at the file while the optimization is running and mark the two atoms to see the bond length.
As this is a calculation of a molecule, one should get integer occupation numbers - check this in the text output. What electronic temperature was used and what is the significance of this?
Plot the different Kohn-Sham wavefunctions of the CO molecule like you did for the oxygen atom.
Can you identify the highest occupied state and the lowest unoccupied state?
How do your wavefunctions compare to a molecular orbital picture? Try to identify \(\sigma\) and \(\pi\) orbitals. Which wavefunctions are bonding and which are antibonding?
Hint
You might find it useful to look at the molecular orbital diagram below, taken from The Chemogenesis Web Book.
| https://wiki.fysik.dtu.dk/gpaw/exercises/wavefunctions/wavefunctions.html | CC-MAIN-2020-05 | refinedweb | 624 | 60.11 |
C++ Quiz
Question #124
According to the C++11 standard, what is the output of this program?
#include <iostream>
using namespace std;

struct A {};
struct B {};

template<typename T = A> struct X;

template<> struct X<A> {
    static void f() { cout << 1 << endl; }
};

template<> struct X<B> {
    static void f() { cout << 2 << endl; }
};

template< template<typename T = B> class C>
void g()
{
    C<>::f();
}

int main()
{
    g<X>();
}
I have a linux proc entry in
/proc/sys/fs/offs/ts/enable
echo 1 > /proc/sys/fs/offs/ts/enable
echo 0 > /proc/sys/fs/offs/ts/enable
def set_mode(enable=True):
    with open('/proc/sys/fs/offs/ts/enable', 'w') as p:
        if enable:
            p.write("1")
        else:
            p.write("0")
        p.flush()
There are a couple of problems with your code.
Firstly, you want to write to the file, but you're opening it in read mode.
Secondly, .write expects string data, not an integer.
We can get rid of the if test by exploiting the fact that False and True have integer values of 0 & 1, respectively. The code below uses the print function rather than .write, because print automatically converts int(enable) to a string. Also, print appends a newline (unless you suppress it via the end argument), so this way the Python code performs the same action as your Bash command lines.
def set_mode(enable=True):
    with open('/proc/sys/fs/offs/ts/enable', 'w') as p:
        print(int(enable), file=p)
If you want to do it with .write, change the print line to:

p.write(str(int(enable)) + '\n')
There's a way to do that conversion from boolean to string in one step: use the boolean to index into a string literal:

'01'[enable]
It's short & fast, but some would argue that it's a little cryptic to use booleans as indices. | https://codedump.io/share/4wAp2Ix8N3HV/1/what-is-the-recommended-way-to-update-a-proc-entry-from-python | CC-MAIN-2017-26 | refinedweb | 222 | 65.32 |
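To see the two approaches side by side without touching /proc, here is a small sketch that writes to in-memory buffers instead (the set_mode signature is adapted to take a stream for this demo; the real code would open the proc path):

```python
import io

def set_mode(enable, stream):
    # Same effect as `echo 1 > /proc/...`: a digit plus a newline.
    print(int(enable), file=stream)

on = io.StringIO()
set_mode(True, on)                  # print-based version

off = io.StringIO()
off.write(str(int(False)) + '\n')   # .write-based version

print(repr(on.getvalue()), repr(off.getvalue()))  # '1\n' '0\n'

# The boolean-as-index trick in one step:
print('01'[True], '01'[False])  # 1 0
```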
Troy Curtis Jr wrote:
> On Wed, May 28, 2008 at 12:56 PM, Julian Foad wrote:
>.
[...]
>
> My first thought was "no thank you, I've already looked into that and
> its too much for me". But your comment about it being limited to
> subversion/svn caught my interest, so I looked at it again. Sure
> enough, you were right, who would've thought :)
>
> I'm not sure what I was thinking before, I seemed to think that the
> peg revs were processed quite deep, and I have comments in this thread
> that imply as much. But it really isn't like that. Perhaps it was
> because a couple of other people were asking for additional things in
> that returned structure that actually WERE processed deep in the API?
> I don't remember exactly.
>
> So on further consideration I've decided that yes, I'd like to give it a shot.
I'm glad!
BTW, I said it's only used in "svn" but it's used a bit in one or more of the
accompanying programs "svnadmin", "svnserve", etc. as well.
> There are several APIs that expect arrays of targets, but they
> invariably deal with working paths only or are modification commands
> that can affect HEAD only (i.e. svn_client_mkdir(),
> svn_client_delete(), etc) (thus not needing pegs, right?). I guess I
> will also need to make sure that no peg revisions were specified for
> those types of commands, and give an appropriate error message if one
> is found.
Yes, that sounds right.
> This was brought up last time and so I'll bring it up again, should I
> create a brand-new structure to return (like svn_client_target_t for
> instance), or should I try to use the existing
> svn_client_copy_source_t. The name on that current one would be
> generally misleading of course. Should I create my new
> svn_client_target_t structure and also rev svn_client_copy4() to use
it instead of its special-purpose svn_client_copy_source_t?
Create a new one. As it's local to the command-line client program(s) and not a
public thing exposed by libraries, ... oh, but it will be exposed by the public
svn_client_args_to_target_arrayN(). In that case, yes, like you said,
"svn_client_target_t" would be a good name.
Your struct will only have two of the fields that svn_client_copy_source_t has,
so doesn't replace it, so no point in changing the ...copy4() API. And I think
your struct might as well have the revision struct embedded in like
svn_wc_external_item2_t does, rather than another level of indirection like
..._copy_source_t uses, since it's small and constant-size.
- Julian
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe_at_subversion.tigris.org
For additional commands, e-mail: dev-help_at_subversion.tigris.org
Received on 2008-05-29 09:16:03 CEST
This is an archived mail posted to the Subversion Dev
mailing list. | http://svn.haxx.se/dev/archive-2008-05/1342.shtml | CC-MAIN-2016-22 | refinedweb | 463 | 63.19 |
Setting up Tailwind With create-react-app
Matt Hagner
What is Tailwind?
Tailwind is a functional CSS framework that is ergonomic to use, but low level enough to make it fully customizable. You can configure it, add plugins, and override defaults. It generates CSS class names for you so that you can use compose them throughout your project.
I've found that Tailwind lends itself particularly well to developing components in React and Vue.
What does it look like?
import React from 'react'

export default function Input(inputProps) {
  return (
    <input
      className="px-2 py-1 text-gray-700 bg-gray-200 rounded-lg shadow-md border-2 border-gray-800 focus:border-blue-400"
      {...inputProps}
    />
  )
}
What do all of those classes mean? Most of the classes should be pretty self explanatory. The px-2 and py-1 are horizontal (x) and vertical (y) padding respectively. The 2 and 1 refer to the sizing.

By default Tailwind generates a set of sizes for you that you can customize. Sizing 1 starts at 0.25rem and the sizing goes up by 0.25rem each step.
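As a rough sketch, that default scale can be mirrored in a few lines (illustrative only — Tailwind generates the real values from its configuration):

```javascript
// Default spacing scale described above: each step adds 0.25rem.
const spacing = {};
for (let n = 1; n <= 12; n++) {
  spacing[n] = `${n * 0.25}rem`;
}
console.log(spacing[1], spacing[2], spacing[8]); // 0.25rem 0.5rem 2rem
```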
The class naming follows pretty easy to understand conventions, so once you start learning some you will understand how to use most. For instance, to set a vertical margin of 2rem you would use the class name my-8: m because you are setting margin, y because you want to set only the vertical axis margin, and 8 because you want 2rem and the sizing is 4 per rem.
Things that can accept a color value, like text, border, or background, have their prefix text, border, and bg, followed by the color name (text-gray, border-gray, or bg-gray) and then a value from 100-900 that jumps by 100. So text-gray-700 will make the text a fairly dark gray, and bg-gray-200 will give the background a fairly light gray color.
The focus:border-blue-400 class applies a blue 400 color to the border when the focus pseudo-class is active for the element.
rounded has a number of suffixes that affect the class, like sm, lg, and full, with the default being a medium rounded border if there isn't a suffix. There is even the ability to change any corner individually.
shadow is similar to rounded, but with the default being small with no suffix, and sizing all the way to 2xl. Additional modifiers that make sense for a box shadow are also available, like inner or outline.
Why would you use it?
When you get into the flow it's like writing regular CSS with shorthands except you don't have to do it in a separate file, you don't have to come up with a bunch of class names, and you don't have to possibly update two files every time you change the styles for a single element.
It makes your code easier to delete. We'll touch on this more later, but traditional CSS is append only, which means it is really hard to know when you are okay to delete some styles.
Component based styling, which you can absolutely do with Tailwind, allows you to delete the styles along with the component when you no longer need it.
Tailwind is also totally and completely extendable. Want to add different colors, or change the ones included with Tailwind? You totally can and the API to do so is pretty well documented and easy to follow.
How do we set up create-react-app to use Tailwind?
Let's set up our project by scaffolding a new react app with create-react-app. If you don't have it installed you can use npx.
npx create-react-app setting-up-tailwind && cd setting-up-tailwind
Now we need to install some dev dependencies.
yarn add -D tailwindcss autoprefixer postcss-cli
In the root of the project create a postcss.config.js file and open it up in your favorite editor.
module.exports = {
  plugins: [
    require('tailwindcss'),
    require('autoprefixer'),
  ]
}
If you're interested in finding out more about PostCSS, check out its GitHub repo.
Autoprefixer is recommended to install alongside Tailwind, because autoprefixer automatically tracks caniuse.com to see which CSS properties still need to be prefixed, and out of the box Tailwind does not provide any vendor prefixing.
Now we should initialize Tailwind. This will create a tailwind.config.js file in the root of our project with a default configuration. This step is optional, but I usually do this when setting up a Tailwind project so that I can customize things later without having to come back.
npx tailwind init
If you open it up it looks pretty barren right now. Maybe in a different post I'll go over adding plugins, or customizing Tailwind.
// tailwind.config.js
module.exports = {
  theme: {
    extend: {}
  },
  variants: {},
  plugins: []
}
We also need to create an input CSS file for PostCSS to process with Tailwind. I usually call this tailwind.css and add it to the src folder in my React projects, but you can name it whatever, and place it in any place that makes sense to you.
/* src/tailwind.css */
@tailwind base;
@tailwind components;
@tailwind utilities;
These are Tailwind directives that add the three main parts of core Tailwind. You can make your bundle smaller by omitting one or multiple if you don't need them, but to get the most from Tailwind you will probably end up using at least some classes from each.
When Tailwind (the first plugin in PostCSS) sees these directives it will replace each @tailwind <name> with some CSS.
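For a sense of what that replacement looks like, here is a tiny, illustrative slice of the kind of utility rules generated (the real output runs to thousands of lines, and the exact values come from your config):

```css
/* illustrative excerpt of what @tailwind utilities expands into */
.px-2 { padding-left: 0.5rem; padding-right: 0.5rem; }
.py-1 { padding-top: 0.25rem; padding-bottom: 0.25rem; }
.rounded-lg { border-radius: 0.5rem; }
```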
To make it easy on ourselves in the future, when we might be changing tailwind.config.js, we should add a few scripts to our package.json file. Add the following three scripts to the scripts object.
// package.json
{
  //...
  "scripts": {
    //... place these after the four scripts created by CRA
    "build:styles": "postcss tailwind.css -o src/styles.css",
    "prebuild": "yarn build:styles",
    "prestart": "yarn build:styles"
  }
}
Or if you use npm, change yarn to npm run:
{
  //...
  "scripts": {
    //... place these after the four scripts created by CRA
    "build:styles": "postcss tailwind.css -o src/styles.css",
    "prebuild": "npm run build:styles",
    "prestart": "npm run build:styles"
  }
}
Building our React component
Let's delete some of the unnecessary stuff that create-react-app makes for us.
rm src/App.test.js src/App.css src/index.css src/logo.svg
Open up src/index.js and make the following changes.
// src/index.js
import React from 'react';
import ReactDOM from 'react-dom';
import './styles.css' // <- change './index.css' to './styles.css'
import App from './App';
import * as serviceWorker from './serviceWorker';

ReactDOM.render(<App />, document.getElementById('root'));

serviceWorker.unregister();
Now open up src/App.js, delete the whole thing and start from scratch.
// src/App.js
import React from "react";
import Button from "./components/button";

function App() {
  return (
    <div className="flex flex-col w-3/4 mx-auto my-12 items-center">
      <h1>Super cool page</h1>
      <Button onClick={() => console.log("I was clicked")}>
        I am a button
      </Button>
    </div>
  );
}

export default App;
Let's create a simple button component. This will be a small wrapper around a normal button, but will contain some styles. I'm making this component in a components directory inside of src, but you can put the component wherever you want.
// src/components/button.js
import React from "react";

export default function Button({ children, ...buttonProps }) {
  return (
    <button
      className="px-2 py-1 rounded-lg bg-green-400 text-green-800 text-xl font-light uppercase shadow-md hover:shadow-lg"
      {...buttonProps}
    >
      {children}
    </button>
  );
}
If you run yarn start now you should see that PostCSS is processing our styles for us, and then you should see something like this.
Such beauty. It is almost too much to behold!
Checking our app out in production
So our app is looking great now and we are ready to send it off into the world, but first we need to build for production.
yarn build
Now to check our production build, we can use a tool like serve. Either install it globally with yarn global add serve, or you can use npx.
If you installed globally you'll use:

serve -s build

or if you want to use npx:

npx serve -s build
Sweet! Our page looks pretty rad if I do say so myself. Now let's just open up the developer tools in our browser, click on the network tab, refresh the page and see how slim our sleek new CSS is...
Look at the size of the CSS bundle. 350KB... Yikes! Why is it so big!?
Well Tailwind generates classes. A lot of classes. The stylesheet that it generates is over 3000 lines long. But we are only using a fraction of those classes right now so what can we do?
Slimming Our Build
There is a utility called PurgeCSS which will parse any files that match the given file globs for the usage of the selectors in your CSS. If a selector isn't present in any of the matched files, then it rips those styles out of the CSS, ultimately slimming the build.
There is a PostCSS plugin for PurgeCSS, so we can just install our new dependency and add a little bit more setup to postcss.config.js.
yarn add -D @fullhuman/postcss-purgecss
Open up your postcss.config.js file and make some additions. The following setup is taken directly from the Tailwind docs.
// postcss.config.js
const purgecss = require('@fullhuman/postcss-purgecss')({
  // Specify the paths to all of the template files in your project
  content: [
    './src/**/*.js',
    './public/index.html',
  ],

  // Include any special characters you're using in this regular expression
  defaultExtractor: content => content.match(/[A-Za-z0-9-_:/]+/g) || []
})

module.exports = {
  plugins: [
    require('tailwindcss'),
    require('autoprefixer'),
    ...process.env.NODE_ENV === 'production'
      ? [purgecss]
      : []
  ]
}
The content property in the PurgeCSS plugin takes an array of file globs that it should check for the inclusion of CSS selectors. In a create-react-app project we want it to check all of our React components, so we pass ./src/**/*.js, which means check any nested folders inside of src for any file with an extension of .js. We also want it to look at our ./public/index.html file because Tailwind uses Normalize, and without having it check the project's HTML page, it will gut a lot of the Normalize rules that we want it to include.
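You can exercise the defaultExtractor from the config above on its own to see which candidate selectors survive (the sample JSX string below is made up for this demo):

```javascript
// Same regex as in postcss.config.js above.
const defaultExtractor = content => content.match(/[A-Za-z0-9-_:/]+/g) || [];

const sample = '<button className="px-2 rounded-lg hover:shadow-lg">Hi</button>';
const tokens = defaultExtractor(sample);
console.log(tokens.includes('px-2'), tokens.includes('hover:shadow-lg')); // true true
```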
There are some pitfalls with PurgeCSS, like it won't actually render your components to check dynamic class usage, so you want to avoid partial class names in dynamic renders and instead stick to full class names.
import React from 'react'

// DO NOT DO THIS
function Button({ color, children }) {
  return <button className={`text-${color}`}>{children}</button>
}

const App = () => (
  <Button color="red-300">Do not click me</Button>
)

///////////////////////////////////
// Instead do this!
function Button({ color, children }) {
  return <button className={`${color}`}>{children}</button>
}

const App = () => (
  <Button color="text-red-300">Do not click me</Button>
)
The other thing that we need to do is make a slight modification to one of our scripts in package.json. The addition of NODE_ENV=production to our prebuild script will set the environment variable for Webpack, which create-react-app uses under the hood, and will trigger the PostCSS CLI to use PurgeCSS in the building of our styles.
// package.json
{
  "scripts": {
    //...
    "prebuild": "NODE_ENV=production yarn build:styles"
  }
}
Now let's build for production, serve our app, open up the dev tools and check out our network tab again.
yarn build && serve -s build
Much better!
If you want to further slim the build there is great documentation on how to control the size of Tailwind.
So now you know how to set up Tailwind in your create-react-app projects and how to get some decent production wins with PurgeCSS + PostCSS. Let me know if you any questions in the comments, or if you enjoyed this article.
===I did everything except adding these:
"scripts": {
//... place these after the four scripts created by CRA
"build:styles": "postcss tailwind.css -o src/styles.css",
"prebuild": "npm run build:styles",
"prestart": "npm run build:styles"
}
===If I do just that then nothing works. Shows plain html
===But
===If I do add those scripts, and do npm start I get this error:
Input Error: You must pass a valid list of files to parse
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! setting-up-tailwind@0.1.0 build:styles:
postcss tailwind.css -o src/styles.css
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the setting-up-tailwind@0.1.0 build:styles script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\tmp\nodejs\npm-cache_logs\2019-10-17T19_50_22_634Z-debug.log
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! setting-up-tailwind@0.1.0 prestart:
npm run build:styles
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the setting-up-tailwind@0.1.0 prestart script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\tmp\nodejs\npm-cache_logs\2019-10-17T19_50_24_096Z-debug.log
===I've tried this multiple times over and over and the same keeps happening
The error says that tailwind.css could not be found. There should be src/ before tailwind.css.
"scripts": {
//... place these after the four scripts created by CRA
"build:styles": "postcss src/tailwind.css -o src/styles.css",
"prebuild": "NODE_ENV=production npm run build:styles",
"prestart": "npm run build:styles"
},
Hey Parsa, do you mind throwing up a github repo with the issue you are having? It would help me locate what the issue might be.
Hey buddy,
check your package.json and double check you are doing npm run in all cases
I had this exact error and replaced yarn with NPM without thinking so was doing npm build:styles :facepalm:
nice post matt! Loving tailwind.
Letting you know there's a typo on this line:
yarn add -D tailwdincss ...
🤦♂️ Thanks! Fixing now.
TIL: pre<scriptname> and post<scriptname>. Thanks for the post @hagnerd
@apply is not working inside component styles.css. Is there a workaround for this?
Classes set up with apply should be created in the tailwind.css file, after the @tailwind components; and before the @tailwind utilities;
Read more about it on the Tailwind docs.
Great! This is a good tutorial.
With the NativeScript 6.0 release we announced a new beta of the NativeScript core theme, which looks a little something like this.
In this article we’ll take a deeper look at the new theme, including what changed, and how to try out the new theme for yourself today. Let’s start by looking at how you can get up and running.
The new NativeScript theme is already on npm as version 2.x, and you can change any existing app to use the new theme by running the tns plugin update command.
tns plugin update nativescript-theme-core
NOTE: Because the 2.0 theme is still in beta, all NativeScript app templates still use version 1.x of the theme by default. Therefore, if you start a new NativeScript app today you still need to run tns plugin update to try out the updated theme.
The NativeScript 2.0 theme leverages a number of global class names that we added in NativeScript 6.1. Therefore, if you’re using a version of NativeScript earlier than 6.1, you’ll also need to add the following line of code to your app, which takes care of adding the appropriate class names manually.
import "nativescript-theme-core";
NOTE: Add the above code to your app.js or app.ts file if you're using NativeScript Core or NativeScript-Vue apps, and your main.ts file if you're using NativeScript Angular.
Here is the list of class names that are now globally available for your app. The theme utilizes these class names for styling purposes, and you might also find them useful for your own custom styling.
ns-root
ns-ios: Present on iOS only
ns-android: Present on Android only
ns-phone: Present when your app runs on phones (and not tablets).
ns-tablet: Present when your app runs on tablets (and not phones).
ns-portrait: Present when your app runs in portrait mode.
ns-landscape: Present when your app runs in landscape mode.
TIP: There are also a number of new CSS variables that you can use in your custom CSS as well.
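As an illustration (these selectors are hypothetical examples, not rules shipped with the theme), the global class names combine with ordinary CSS descendant selectors for platform- or device-specific styling:

```css
/* Android only: turn off the native all-caps button text */
.ns-android Button {
    text-transform: none;
}

/* iOS only */
.ns-ios Button {
    font-weight: bold;
}

/* Larger page titles on tablets */
.ns-tablet .title {
    font-size: 28;
}

/* Different title color while in landscape */
.ns-landscape .title {
    color: #65adf1;
}
```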
There’s one final change you have to make to finish updating to the new theme. To make that change, open your
app.css file and find your current theme import, which will look something like this.
@import '~nativescript-theme-core/css/<skin-name>.css';
Replace that line of code with the following imports.
@import "~nativescript-theme-core/css/core.css";
@import "~nativescript-theme-core/css/blue.css";
NOTE: If you use SASS, your imports will instead be
@import "~nativescript-theme-core/core"; and @import "~nativescript-theme-core/blue";.
The second file (e.g.
blue.css) determines your app's color scheme. You must include a color scheme in order for the theme to work correctly, and you can choose between the following options: aqua.css, blue.css, brown.css, forest.css, grey.css, lemon.css, lime.css, orange.css, purple.css, ruby.css, or sky.css.
If you’re looking for a way to quickly experiment with your color options, the NativeScript theme’s GitHub repo contains a demo app that makes it easy to test out your choices. You can download and run this demo by executing the following commands in your terminal or command prompt.
git clone
cd theme
tns run android
-- or --
tns run ios
When the app is up and running, tap the “Theme” button in the top-right corner to experiment with different looks.
NOTE: We’ll look at how to enable the material theme and dark mode you see in the gif above momentarily.
With these steps complete you should now have the new theme up and running, so let’s next look at the changes and new features you can try out.
TIP: You can also try the new theme in NativeScript Playground. Use these examples as a starting point, as they already have the new theme installed and ready to go.
- Vue.js:
- Angular:
- Core:
There’s one big update to the way the new core theme works, and I think it’s easiest to understand by looking at a bit of code first.
<!-- Before -->
<Button text="My Button" class="btn"></Button>

<!-- After -->
<Button text="My Button"></Button>
Before, the NativeScript theme required you to explicitly provide a class name to enable the theme styling, for example the
btn class name for
<Button> components.
This is no longer necessary, and all NativeScript components get a base set of styles without any class names at all. For example, here’s what the default button looks like in a NativeScript app using the new theme and the blue color scheme.
A number of the other theme class names have been shortened to make them easier to use. For example, here’s a before and after of how to use the various NativeScript button class names.
<!-- Before -->
<Button text="Normal Button" class="btn"></Button>
<Button text="Primary Button" class="btn btn-primary"></Button>
<Button text="Outline Button" class="btn btn-outline"></Button>
<Button text="Rounded Button" class="btn btn-primary btn-rounded-lg"></Button>
<Button text="Another Rounded Button" class="btn btn-outline btn-rounded-sm"></Button>

<!-- After -->
<Button text="Normal Button"></Button>
<Button text="Primary Button" class="-primary"></Button>
<Button text="Outline Button" class="-outline"></Button>
<Button text="Rounded Button" class="-primary -rounded-lg"></Button>
<Button text="Another Rounded Button" class="-outline -rounded-sm"></Button>
And here’s what those buttons look like in that same NativeScript app using the blue color scheme.
One super important note before we go further: the NativeScript 2.0 theme provides full backwards compatibility with the 1.0 theme class names through additional CSS files. Therefore, you can update to the new theme without having to change all the class names by adding the following imports to your
app.css file.
@import "~nativescript-theme-core/core.compat";
@import "~nativescript-theme-core/<your-color-scheme-name>.compat";
For example, a full app.css file for an app that uses the blue color scheme in compatibility mode should look like this.
@import "~nativescript-theme-core/css/core.compat.css";
@import "~nativescript-theme-core/css/blue.compat.css";
And if you want both sets of selectors to work simultaneously, e.g. you want both
<Button class="-primary"> and
<Button class="btn btn-primary"> to work, you'll need to include both sets of CSS files in your
app.css, which looks like this.
@import "~nativescript-theme-core/css/core.css";
@import "~nativescript-theme-core/css/core.compat.css";
@import "~nativescript-theme-core/css/blue.css";
@import "~nativescript-theme-core/css/blue.compat.css";
The compatibility CSS files will help you get your apps converted, but to finish updating you’ll likely have a few additional CSS changes to make. For example, suppose you use this button in your app.
<Button text="Confirm your choice"></Button>
Because this button does not use the
btn class name, it was not styled with the NativeScript 1.0 theme, but it will be styled as soon as you update to the NativeScript 2.0 theme; therefore, you might need to write some custom CSS to ensure these type of components continue to look correct in your updated app.
With this big change out of the way, let’s look at some of the cool new features the new theme offers.
Twitter has a dark mode, iOS is getting a dark mode, and now your NativeScript app can have a dark mode too. The new NativeScript theme has a built-in dark mode that works for all color schemes.
For example, here’s what our simple button app looks like with the new theme’s dark mode applied.
Enabling this dark mode is as easy as adding an ns-dark class name to the root element of your NativeScript app.
There are a few different elements that might be your root depending on how you built your app. For NativeScript-Angular apps your root element is usually your
<page-router-outlet>, which usually lives in your
app.component.html file.
<page-router-outlet class="ns-dark"></page-router-outlet>
If your app uses a drawer, your root element will likely be a
<RadSideDrawer>.
<RadSideDrawer class="ns-dark"> ... </RadSideDrawer>
And finally, if your app does not use a drawer, and does not use Angular, you’ll likely need to apply the
ns-dark class name to a
<Frame>.
<Frame class="ns-dark"></Frame>
Regardless, once you apply the
ns-dark class name your app should instantly change to display using a dark set of colors, regardless of the color scheme you’re using.
There is additionally a JavaScript / TypeScript API for programmatically switching your app from dark mode to light mode, in case you want to provide dark mode as an option for your users to toggle.
import Theme from "nativescript-theme-core";

Theme.setMode(Theme.Dark); // Or Theme.Light
Here’s what that API looks like in action in our sample app.
TIP: You can detect whether the user has dark mode enabled on their iOS device, and conditionally apply a dark mode in your app based on the user’s global iOS preference. Pretty cool, huh?
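If you want to hand this choice to your users, a minimal sketch might look like the following. The Switch wiring and function name here are illustrative assumptions; only Theme.setMode, Theme.Dark, and Theme.Light come from the API shown above.

```javascript
// Page script (NativeScript Core) — sketch only.
import Theme from "nativescript-theme-core";

// Bound to a <Switch checkedChange="onDarkModeToggle" /> in the page XML.
export function onDarkModeToggle(args) {
    const darkEnabled = args.object.checked;
    Theme.setMode(darkEnabled ? Theme.Dark : Theme.Light);
}
```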
Now that you know how to use the new theme and how to try out dark mode, let’s look at one last cool feature of the new theme—the ability to use the Kendo UI ThemeBuilder.
Kendo UI ThemeBuilder is a tool for visually creating themes.
Historically the tool was used only for web apps, but now you can use the same tool to configure color schemes for your NativeScript apps as well.
To try it out, visit the tool, select Start Theming, select a base theme (Material is a good starting point for mobile apps), and click the Create button.
Feel free to play with the theme colors if you’d like, and then click the DOWNLOAD button in the top-right corner, which will give you a
.zip bundle with two files, an
app.css file you can use on the web if you’d like, and a
variables.scss file you’ll need for using the color scheme with NativeScript.
One important note before we continue: to use ThemeBuilder-built color schemes in NativeScript you must use SASS in your NativeScript apps, as ThemeBuilder outputs SASS variables, which the new NativeScript theme consumes.
The good news is that SASS is really easy to use in NativeScript. In fact, as of NativeScript 6.0, SASS support is built into all new apps by default, and all you need to do is create an
app.scss file in the same folder as your
app.css file to get started.
Once you’ve configured your theme in ThemeBuilder, downloaded the appropriate file, and created an
app.scss file for your app, open your
app.scss file and paste in the contents of your downloaded
variables.scss file. Your
app.scss file should look like this.
/* Contents of variables.scss */

@import "~nativescript-theme-core/index";

/* Your custom CSS */
For example here’s what the default Bootstrap theme looks like.
And... that’s it! Just by plugging in those variables you’ll have an app that uses your ThemeBuilder-configured theme. For example here’s what the ThemeBuilder Bootstrap color scheme looks like for our sample button app.
Feel free to experiment with Kendo UI ThemeBuilder to create the perfect theme for your own apps.
The new NativeScript theme provides a new look, as well as a number of new features you can leverage in your apps, such as a dark mode and the ability to use Kendo UI ThemeBuilder to build custom themes.
The new theme is in beta, and as such we'd love to have your feedback. What do you think about the updates to the theme class names? Does dark mode seem like something you'll use? Does everything work like you expect? Do you like using Kendo UI ThemeBuilder?
Let us know in the comments, and if you find any issues when testing the theme, create an issue on GitHub so we can address it before the theme’s final release. | https://blog.nativescript.org/an-early-look-at-the-new-nativescript-core-theme/index.html | CC-MAIN-2021-21 | refinedweb | 2,026 | 64.2 |
Create and Package a Lambda Function
In order for a Python Lambda function to run on an AWS Greengrass device, it must be packaged with specific folders from the Python AWS Greengrass core SDK. In the following, you will:
Download the Python AWS Greengrass core SDK to your computer (not the AWS Greengrass device).
Decompress the downloaded SDK file (either for Windows or macOS).
Obtain the Python Lambda function, called
greengrassHelloWorld.py, from the (decompressed) SDK.
Package greengrassHelloWorld.py with the SDK folders (three in total) by creating a file called hello_world_python_lambda.zip.
Upload the hello_world_python_lambda.zip package to the Amazon Web Services Lambda console.
Use the AWS Greengrass console to transfer the package to the AWS Greengrass device.
In the AWS IoT console, choose Software.
To get the AWS Greengrass Core SDK, on the home page, scroll down to SDKs, and choose Configure download.
From the drop-down list, choose Python 2.7 version 1.0.0, and then choose Download Greengrass Core SDK.
Based on your operating system, choose a tab to decompress the downloaded SDK.
- Windows
Use a tool for decompressing .tar.gz files on Windows such as 7-Zip, WinZip, or similar. As an example, the 7-Zip tool can be used to decompress greengrass-core-python-sdk-1.0.0.tar.gz as follows:
After installing 7-Zip, navigate to the greengrass-core-python-sdk-1.0.0.tar.gz file using Windows File Explorer (Windows logo key + E), right-click the file, choose 7-Zip, then choose Open archive.
In the resulting 7-Zip window, double-click greengrass-core-python-sdk-1.0.0.tar, aws_greengrass_core_sdk, examples, HelloWorld, and then greengrassHelloWorld.zip.
Optionally using the Ctrl key, select the three SDK folders greengrasssdk, greengrass_common, greengrass_ipc_python_sdk and the Python greengrassHelloWorld.py Lambda file. Next, choose Extract, pick a location to extract the files to, and choose OK.
- macOS
Using Finder, navigate to the greengrass-core-python-sdk-1.0.0.tar.gz file and double-click it. This creates the aws_greengrass_core_sdk folder.
Expand the aws_greengrass_core_sdk folder, then the examples folder, and then the HelloWorld folder.
Double-click the greengrassHelloWorld.zip file. This creates the greengrassHelloWorld folder – expand this folder.
Note the three SDK subfolders greengrasssdk, greengrass_common, greengrass_ipc_python_sdk and the Python greengrassHelloWorld.py Lambda script file.
- UNIX-like system
Open a Terminal window and navigate to the directory containing the greengrass-core-python-sdk-1.0.0.tar.gz file.
Run the following command to decompress the file:
sudo tar -xzf greengrass-core-python-sdk-1.0.0.tar.gz
This creates the aws_greengrass_core_sdk directory. Next, run the following commands:
cd /aws_greengrass_core_sdk/examples/HelloWorld
sudo unzip greengrassHelloWorld.zip
This creates the three SDK folders greengrass_common, greengrass_ipc_python_sdk, greengrasssdk and the Python greengrassHelloWorld.py Lambda file.
Note that the greengrassHelloWorld.py Python Lambda function publishes one of two possible messages every 5 seconds to the hello/world topic, as shown next (to save space, all code comments have been removed):
import greengrasssdk
import platform
from threading import Timer
import time

client = greengrasssdk.client('iot-data')
my_platform = platform.platform()

def greengrass_hello_world_run():
    if not my_platform:
        client.publish(topic='hello/world', payload='Hello world! Sent from Greengrass Core.')
    else:
        client.publish(topic='hello/world', payload='Hello world! Sent from Greengrass Core running on platform: {}'.format(my_platform))
    Timer(5, greengrass_hello_world_run).start()

greengrass_hello_world_run()

def function_handler(event, context):
    return
In order to run the Python greengrassHelloWorld.py Lambda function in the cloud, you must package it with the AWS Greengrass core SDK. Therefore, after you have extracted the SDK folders greengrass_common, greengrass_ipc_python_sdk, greengrasssdk and the greengrassHelloWorld.py Python Lambda file, package them into a compressed .zip file named hello_world_python_lambda.zip:
For UNIX-like systems (including the Mac terminal), this can be accomplished with the following command:
sudo zip -r hello_world_python_lambda.zip greengrass_common/ greengrass_ipc_python_sdk/ greengrasssdk/ greengrassHelloWorld.py
Note
Depending on your distribution, you may need to install zip first. For example, sudo apt-get install zip (this installation command may differ for your distribution).
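The sudo zip command above assumes a UNIX-like shell. As a cross-platform alternative (this script is an illustrative sketch, not part of the AWS documentation), the same package can be built with Python's standard-library zipfile module, run from the directory containing the extracted folders and the Lambda file:

```python
import os
import zipfile

# Folders and file extracted from the Greengrass core SDK, as described above.
ITEMS = [
    "greengrass_common",
    "greengrass_ipc_python_sdk",
    "greengrasssdk",
    "greengrassHelloWorld.py",
]

def build_package(archive_name="hello_world_python_lambda.zip"):
    """Zip the SDK folders and the Lambda script into a deployment package."""
    with zipfile.ZipFile(archive_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for item in ITEMS:
            if os.path.isfile(item):
                zf.write(item)
            else:
                # Recursively add every file inside the SDK folder.
                for root, _dirs, files in os.walk(item):
                    for name in files:
                        zf.write(os.path.join(root, name))
    return archive_name
```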
You are now ready to upload your Lambda function .zip file to the AWS Lambda console:
From the AWS Management Console, open the Lambda console.
Choose Create a function.
Choose Author from scratch (this option may already be selected).
Name your function Greengrass_HelloWorld and set the remaining fields as follows. You can create a security role for your Lambda function or use an existing one. If you create one, use the AWS IoT Button permissions role (as shown below). Next, choose Create function.
On the Configuration tab, in the Function code region, under the Code entry type drop-down menu, choose Upload a .ZIP file. For Runtime, choose Python 2.7. For Handler, type greengrassHelloWorld.function_handler, and then upload the hello_world_python_lambda.zip file you created earlier, as shown:
Your hello_world_python_lambda.zip file size may vary (i.e., 15.7 kB in this example).
Choose Save.
To publish this Lambda function, under Actions, choose Publish new version:
Write a description in the Version description field, such as
First version, then select Publish:
Note
The above screenshot is a pop-up modal dialog box whose title is Publish new version from $LATEST.
Now, create an alias for the Lambda function. Aliases create a single entity for your Lambda function that AWS Greengrass devices can subscribe to without having to update subscriptions with Lambda version numbers every time a function is modified. From the Actions drop-down menu, choose Create alias.
Note
If future versions of this Lambda are published, you must point the alias to the new version.
Name the alias GG_HelloWorld, set the version to 1 (1 corresponds to the latest published version), and then choose Create. Note that AWS Greengrass does not support Lambda aliases for $LATEST.
In this section, the simple web service that was introduced in Using SOAP services is extended to handle SOAP headers.
If you have followed the steps outlined in the previous section, you can skip steps 1 through 4 and go directly to step 5.
To create a web service server
Create a database.
Start a server using this database.
Connect to the server using Interactive SQL.
Using Interactive SQL, create a web service.
Define the stored procedure that this service is to call to perform the calculation needed to convert from a temperature expressed in degrees Fahrenheit to a temperature expressed in degrees Celsius. Unlike the example in the previous section, this one includes additional statements to process a special SOAP header. If you have already worked through the example in the previous section, change the CREATE below to ALTER since you are now going to modify the stored procedure.
Headers in SOAP requests can be obtained using a combination of the NEXT_SOAP_HEADER and SOAP_HEADER functions. The NEXT_SOAP_HEADER function iterates through the SOAP headers included within a request and returns the next SOAP header name. Calling it with NULL causes it to return the name of the first header. Subsequent headers are retrieved by passing the name of the previous header to the NEXT_SOAP_HEADER function. This function returns NULL when called with the name of the last header. The SQL code that does the SOAP header retrieval in the example is this. It exits the loop when NULL is finally returned.
Calling this function repeatedly returns all the header fields exactly once, but not necessarily in the order they appear in the SOAP request.
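A minimal sketch of that retrieval loop might look like the following (the variable names and surrounding block are illustrative assumptions; only NEXT_SOAP_HEADER and SOAP_HEADER are taken from the text above):

```sql
-- Sketch: iterate over every SOAP header in the request
BEGIN
    DECLARE hd_key LONG VARCHAR;
    DECLARE hd_value LONG VARCHAR;
    SET hd_key = NULL;
    header_loop:
    LOOP
        SET hd_key = NEXT_SOAP_HEADER( hd_key );
        IF hd_key IS NULL THEN
            -- all headers have been processed
            LEAVE header_loop;
        END IF;
        -- fetch this header's value
        SET hd_value = SOAP_HEADER( hd_key );
    END LOOP;
END;
```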
The SOAP_HEADER function returns the value of the named SOAP header field, or NULL if not called from an SOAP service. It is used when processing an SOAP request via a web service. If a header for the given field-name does not exist, the return value is NULL.
The example searches for a SOAP header named Authentication. When it finds this header, it extracts the value for entire SOAP
header as well as the values of the
@namespace and
mustUnderstand attributes. The SOAP header value might look something like this XML string:
For this header, the
@namespace attribute value would be:
SecretAgent
Also, the
mustUnderstand attribute value would be:
1
The interior of this XML string is parsed with the OPENXML function using an XPath string set to
/*:Authentication/*:userName.
Using the sample SOAP header value shown above, the SELECT statement would create a result set as follows:
A cursor is declared on this result set and the three column values are fetched into three variables. At this point, you have all the information of interest that was passed to the web service. You have the temperature in Fahrenheit degrees and you have some additional attributes that were passed to the web service in a SOAP header. So what could you do with this information?
You could look up the name and alias that were provided to see if the person is authorized to use the web service. However, this exercise is not shown in the example.
The next step in the stored procedure is to create a response in the SOAP format. You can build the XML response as follows:
This builds the following XML string:
Finally, to return the SOAP response to the caller, the SA_SET_SOAP_HEADER stored procedure is used:
As in the example in the previous section, the last step is the calculation that converts from degrees Fahrenheit to degrees Celsius.
At this point, you now have a SQL Anywhere web service server running that can convert temperatures from degrees Fahrenheit to degrees Celsius as in the previous section. The major difference, however, is that it can also process a SOAP header from the caller and send a SOAP response back to the caller.
This is only half of the picture. The next step is to develop an example client that can send SOAP requests and receive SOAP responses.
If you have followed the steps outlined in the previous section, you can skip steps 1 through 3 and go directly to step 4.
To send and receive SOAP headers
Create another database for use with a second server.
Start the personal server using this database.
Connect to the personal server using another instance of Interactive SQL.
Using Interactive SQL, create a stored procedure.
The URL clause is used to reference the SOAP web service. The string
'' specifies the URI of the web service that is going to be used. This is a reference to the web server that is listening on
port 8082. will have to be used to extract the SOAP response header information.
You need a wrapper for the FtoC stored procedure so create a second stored procedure as follows. Unlike the example in the previous section, this one includes additional statements to create a special SOAP header, send it in a web service call, and process a response from the web server. If you have already worked through the example in the previous section, change the CREATE below to ALTER since you are now going to modify the stored procedure.
This stored procedure acts as a cover procedure for the call to the web service. The stored procedure has been enhanced from the example in the previous section. It creates two SOAP headers. The first one is this.
The second one is this.
When the cursor is opened, the SOAP request is sent to the web service.
The FtoC stored procedure returns a result set that this stored procedure will process. The result set will look something like this.
The OPENXML function is used to parse the XML that is returned, extracting the value that is the temperature in degrees Celsius.
Call the stored procedure in order to send the request and obtain the response:
The Fahrenheit temperature and the Celsius equivalent appear.
A SQL Anywhere web service client can be declared as either a function or a procedure. A SQL Anywhere client function declaration effectively restricts all parameters to in mode only (parameters cannot be assigned by the called function). Calling a SQL Anywhere web service function will return the raw SOAP envelope response whereas a procedure returns a result set.
A SOAPHEADER clause has been added to the create/alter procedure/function statements. A SOAP header can be declared as a static constant or can be dynamically set using the parameter substitution mechanism. A web service client function can define one or more in mode substitution parameters whereas a web service client procedure can also define a single inout or out substitution parameter. Therefore a web service client procedure can return the response SOAP header within an out (or inout) substitution parameter. A web service function must parse the response SOAP envelope to obtain the header entries.
The following example illustrates how a client can send several header entries as parameters and receive the response SOAP header data.
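As an illustrative sketch of such a declaration (the procedure name, URL, and parameter names here are assumptions, not the documented sample), a client procedure might send one header via an inout substitution parameter, so that the response SOAP header is returned in the same variable:

```sql
-- Sketch: web service client procedure with a SOAP header
-- substitution parameter (inout, so the response header is
-- returned in the same variable)
CREATE PROCEDURE SoapClientWithHeader(
    IN    fahrenheit DOUBLE,
    INOUT soapheader LONG VARCHAR )
URL 'http://localhost:8082/FtoC'
TYPE 'SOAP:DOC'
SOAPHEADER '!soapheader';
```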
Hello. I just bought a Wemos D1 R2 to have Alexa in a school project. Then I figured out that the pinouts of the Wemos are super complicated to me. Could anyone help me to port over the pins and code? Here is the pinout that I had in my Uno:
#include <SPI.h>
#include <MFRC522.h>
#include <LiquidCrystal.h>
#include <Wire.h>
#include <Servo.h>

#define RST_PIN 9   // Configurable, see typical pin layout above
#define SS_PIN 10   // Configurable, see typical pin layout above

Servo myservo;
int pos = 0;

MFRC522 mfrc522(SS_PIN, RST_PIN);  // Create MFRC522 instance

const int rs = 7, en = 6, d4 = 5, d5 = 4, d6 = 8, d7 = 2;
LiquidCrystal lcd(rs, en, d4, d5, d6, d7);

myservo.attach(3);  // (called in setup() in the full sketch)
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
> Patches that follow will remove uses of IS_IN_* variables with the

s/remove/replace/

> Verified that there are no relevant source changes.  One source change
> that will crop up repeatedly is that of nscd_stat, since it uses the
> build timestamp as a constant in its logic.

s/source/binary/

> +in-module = $(strip $(foreach lib,$(libof-$(basename $(@F))) $(libof-$(<F)) \
> +			    $(libof-$(@F)),-DIN_MODULE=MODULE_$(lib)))

I think you can make this:

in-module = $(firstword $(libof-$(basename $(@F))) \
                        $(libof-$(<F)) \
                        $(libof-$(@F)) libc)

There should never be more than one nonempty libof-* expansion unless
it's multiple that are the same.  Also don't repeat the fixed parts
(i.e. -DIN_MODULE=MODULE_).

> -	 $(CPPFLAGS-$(suffix $@)) \
> +	 $(CPPFLAGS-$(suffix $@)) $(module-def) \

Here just use -DIN_MODULE=MODULE_$(in-module) directly.

> diff --git a/elf/rtld-Rules b/elf/rtld-Rules
> index 0a5d6af..4d78d90 100644
> --- a/elf/rtld-Rules
> +++ b/elf/rtld-Rules
> @@ -138,6 +138,11 @@ ifdef rtld-depfiles
>  -include $(rtld-depfiles)
>  endif
>
> +# Set libof-* for each routine.
> +cpp-srcs-left := $(subst .os,,$(rtld-modules))

Use $(rtld-modules:%.os=%) (or patsubst if you prefer).  Plain subst
will eat .os out of the middle of a name, which is wrong.

> --- /dev/null
> +++ b/include/libc-modules.h

Even though it's just for a brief window of the revision history, please
put a comment in this file saying it should/will be generated.

> +#include "libc-modules.h"

Use <>.  Add a short comment saying that it defines the MODULE_* macros.

> -CPPFLAGS-locale-programs = -DLOCALE_PATH='$(localepath)' \
> +CPPFLAGS-locale_programs = -DLOCALE_PATH='$(localepath)' \

Rather than changing this name, I think the libc-modules.h generation
should just turn all nonidentifier characters into _.

OK with those changes.

Thanks,
Roland
Free Open Source Electronic Document Management System
Project description

For installation instructions visit the Mayan EDMS documentation at:
The final version of the book “Exploring Mayan EDMS” available now!
Click the image or visit:
Hardware requirements
- 2 Gigabytes of RAM (1 Gigabyte if OCR is turned off).
- Multiple core CPU (64 bit, faster than 1 GHz recommended).
Important links
- Documentation
- Contributing
- Forum
- Source code, issues, bugs
- Plug-ins, other related projects
- Translations
- Videos
3.4.9 (2020-05-26)
- Add the packaging library explicitly as a dependency. Closes GitLab issue #825. Thanks to Martin (@efelon) for the report and debug information.
3.4.8 (2020-05-25)
- Move django-qsstats-magic to the mayan_statistics app.
- Update Pillow from version 7.0.0 to 7.1.2.
- Update Werkzeug from version 1.0.0 to 1.0.1.
- Update devpi-server from version 5.4.1 to 5.5.0.
- Update django-celery-beat from version 1.5.0 to 2.0.0.
- Update translation files.
- Encapsulate actstream registry inside an EventModelRegistry.
- Improve default binary path detections in OpenBSD 6.7.
- Fix README link to installation chapter. Closes GitLab issue #823. Thanks to Matthias Löblich (@startmat) for the report.
- Add document and document version pre creation hooks.
- Use pre creation hooks to check quotas before document or document version creation and block user early on before the task is submitted.
- Wrap around long texts in the panel’s body.
- Wrap around long tags when showing them in a panel’s body.
- Move templating to the templating app.
- Expose Django’s AUTHENTICATION_BACKENDS setting.
3.4.7 (2020-04-28)
- Darken dropdown menu text to increase contrast and legibility.
- Capture and display double check in and non checked out document checkout attempts. Closes GitLab issue #820. Thanks to Gerald Fuchs (@geraldf) for the report and debug information.
- The Docker volume change owner command is now only run if there is a change in the UID or GID of the container’s user. Merge request !81. Thanks to Matthias Bilger (@m42e) for the patch.
- The pip option --no-use-pep517 has been removed from the installation and version 3.4 upgrade documents. Closes GitLab issue #810. Thanks to jhayn49 (@jhayn49) for the report.
- Replace self.get_object() with self.object where applicable.
- Fixed HTTP workflow action field_order. Merge request !82. Thanks to Matthias Bilger (@m42e) for the report and the patch.
- Add MERC 0007 defining the new repository branch layout.
- Remove outdated development version deployment instructions. Closes GitLab issue #821. Thanks to Gerald Fuchs (@geraldf) for the report.
3.4.6 (2020-04-19)
- Update Django to version 2.2.12.
- Support custom URL base paths. Add the new setting COMMON_URL_BASE_PATH.
- Expose Django’s SESSION_COOKIE_NAME and SESSION_ENGINE settings.
- The checkdependencies command will now mark missing production dependencies with a symbol and an ANSI coloration.
- Add --csv option to the checkdependencies command to output the result as comma delimited values.
3.4.5 (2020-04-14)
- Make sure FUSE’s getattr.st_size always return a 0 and not None when the document is invalid. Close GitLab issue #797. Thanks to telsch (@telsch) for the report and debug information.
- Add the Un series Korean TrueType fonts (fonts-unfonts-core) to the Docker image.
- Fix the document page disable and enable links. Close GitLab issue #809. Thanks to Kalloritis (@Kalloritis) for the report and research.
- Fix a specific scenario with the document count limit quota backend where a user might still be able to upload a new document past the quota limit.
- Fix typo in the document version upload URL pattern.
- Standardize the icon for returning to the document from child views.
- Move the links to return to the document from the page list, version detail and page image, from the facet menu to the secondary menu for proper UX flow.
- Fix a typo in the resolved smart link URL parameter.
- Improve resolved smart link access filtering.
- Allow apps without an urlpatterns entry.
- Update the Docker image to use Debian 10.3.
- Update the quota app to work with more deployment types.
- Add a dependency definition for the gpg binary used by the Django GPG app.
- Fix document list mode on the cabinet detail view.
- Fine tune extra small button appearance and space usage.
- Move some of the extra small button presentation from the template to the stylesheet.
3.4.4 (2020-04-08)
- Add a custom app static media finder to workaround Django’s AppDirectoriesFinder limitation that caused the missing staticfiles manifest entry error.
- Use tmpfs for gunicorn’s heartbeat file under Docker. Closes GitLab issue #754. References:, and
3.4.3 (2020-04-04)
- Fix document page interactive transformation pages.
- Fix layer transformation selection view.
- Improve permission checking of the layer transformation selection view.
- Make document tag widget clickable.
- Make document cabinet widget clickable.
- Apply the DOCUMENTS_LIST_THUMBNAIL_WIDTH setting value to document pages and document version thumbnails too.
- Send all exceptions to the log system and let the log system perform the filtering.
- Improve the design of the 404, 403 and 500 error pages.
- Update production error log settings. Max bytes from 1024 to 65535 and backup from 3 to 5.
3.4.2 (2020-04-02)
- Fix search forms action URLs. Closes GitLab issue #802. Thanks to holzhannes (@holzhannes) for the report and debug information.
- Update document deletion message to say the documents were submitted for deletion and not actually deleted at the moment of the request.
- Detect if devpi-server is installed before building the Docker image.
- Re-add SQLite3 upgrade test now that the code upgrades from two Django 2.2 versions.
- Allow apps to inject their own head or foot templates to the root template.
- Added new document setting DOCUMENTS_LIST_THUMBNAIL_WIDTH to control the size of the thumbnails on list view mode.
- Added document head template to inject the DOCUMENTS_LIST_THUMBNAIL_WIDTH as a CSS style.
- Show the full path to the cabinet on cabinet search results.
- Add support for index instance search.
- Add support for search for cabinets by their document basic attributes.
- Add support for app passthru URL patterns.
3.4.1 (2020-04-01)
- Add development setting for Docker databases.
- Add manage target against Docker databases.
- Add git-core to the Docker image to allow installing development Python libraries.
- Fix pre upgrade cache cleanup in file caching migration 0005.
3.4 (2020-03-30)
- Update Django to version 2.2.10.
- Backport list display mode. Support switching between item and list mode.
- Update app URLs to use explicit parameters.
- Move dependencies environments to their own module called dependencies.environments.py.
- Increase the size of the file cache maximum size field.
- Add user impersonation support.
- Add support for uncompressing Outlook .msg files. Adds dependency extract-msg.
- Updated converter to show preview of the text part of .msg files.
- Decouple the Checkouts and Sources apps. It is now possible to disable the Checkouts app.
- Add new document version pre save hooks.
- Fix OCR model property.
- Add workflow transition conditionals.
- Add workflow state action conditionals.
- Add document version pre save signal.
- Update the document type and document models to avoid a double save when creating a new document.
- Add quotas app.
- Add support for HTTP methods to the workflow HTTP request state action.
- Add the trash document workflow state action.
- Add support for GPG backends. Add two new settings SIGNATURES_BACKEND and SIGNATURES_BACKEND_ARGUMENTS. This change also removes two settings: SIGNATURES_GPG_HOME and SIGNATURES_GPG_PATH. SIGNATURES_GPG_HOME had already been deprecated and was inactive. SIGNATURES_GPG_PATH is now the gpg_path component of the setting SIGNATURES_BACKEND_ARGUMENTS.
- Add sane default paths for the GPG binary for Linux, FreeBSD, OpenBSD, and macOS.
- Refactor the search app to support backends. Adds two new settings: SEARCH_BACKEND (which defaults to mayan.apps.dynamic_search.backends.django.DjangoSearchBackend) and SEARCH_BACKEND_ARGUMENTS.
- Update interface of the CompressedStorage backend.
- Add defined storage class.
- Convert the file caching app to used defined storage.
- Show percentage of usage for file caches.
- Add Passthrough storages.
- Add encrypted storage backend.
- Add compressed storage backend.
- Add management command to process storage.
- Automatic storage module loading.
- Convert file caching app to use defined storage.
- Removed a possible race condition when returning the signature of just signed document using embedded signatures.
- Updated version of the development and documentation dependencies.
- Execute the preparestatic command as part of the initialsetup and performupgrade commands.
- Detect redirect loops when attempting to escape the AJAX container.
- Improve icons of the OCR, file metadata, and document parsing apps.
- Detect if a SourceColumn can be made sortable.
- Update python-gnupg from version 0.3.9 to 0.4.5.
- Update Django stronghold to version 0.4.0.
- Update Python library versions, including the Python Redis client.
- Removal of Python library django-timezone-field.
- Remove codecov dependency.
- Remove pathlib2 dependency, it is now part of the standard Python library.
- Remove Django’s admindocs app.
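The GPG setting migration in 3.4 can be sketched as a config.yml fragment. Note that the backend dotted path shown here is an assumption for illustration only; check the signatures app settings for the actual default value:

```yaml
# Before (3.3.x):
# SIGNATURES_GPG_PATH: /usr/bin/gpg

# After (3.4); the backend class path below is hypothetical:
SIGNATURES_BACKEND: mayan.apps.django_gpg.backends.python_gnupg.PythonGNUPGBackend
SIGNATURES_BACKEND_ARGUMENTS:
  gpg_path: /usr/bin/gpg
```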
3.3.17 (2020-04-09)
- Removed a possible race condition when returning the signature of just signed document using embedded signatures.
- Add development setting for Docker databases.
- Add manage target against Docker databases.
- Use tmpfs for gunicorn’s heartbeat file under Docker. Closes GitLab issue #754.
- Update contributed LDAP setting file.
- Improve the design of the 404, 403 and 500 error pages.
- Update production error log settings. Max bytes from 1024 to 65535 and backup from 3 to 5.
- Detect if devpi-server is installed before building the Docker image.
- Add git-core to the Docker image to allow installing development Python libraries.
- Send all exceptions to the log system and let the log system perform the filtering.
- Add development setting for Docker databases.
- Add manage target against Docker databases.
- Copy minor improvements to the default Docker Compose file.
3.3.16 (2020-03-17)
- Fix minor release notes typographical errors.
- Update psutil from version 5.6.3 to 5.7.0 (CVE-2019-18874).
- Update python-gnupg from version 0.3.9 to 0.4.5 (CVE-2019-6690).
- Update django from version 1.11.28 to 1.11.29 (CVE-2020-9402).
- Decrease the code and data inside the transaction. Removes a file caching creation from inside a database transaction. Attempted fix for GitLab issues #782 and #735.
- Fix OCR model property. It was listed as document.content instead of document.ocr_content.
- Revert an API permission change for the EventList API view. Fixes GitLab issue #794. Thanks to Matthew Grady (@FlowerCoffeeCup) for the report and investigation.
3.3.15 (2020-03-05)
- Add Docker environment setting MAYAN_SKIP_CHOWN_ON_STARTUP to skip performing the initial chown on the media folder at /var/lib/mayan. This command is slow on non native block storage backends.
- Remove Wiki links from README files. GitLab Merge request !78. Thanks Steffen Raisin (@zintor) for the merge request.
- Add more API tests to the Tags app.
- Expose Django settings: SECURE_PROXY_SSL_HEADER, USE_X_FORWARDED_HOST, and USE_X_FORWARDED_PORT.
- Change the default of DATABASE_CONN_MAX_AGE to 0 which is the safest value.
- Update default Docker Compose file.
- Correct the icon used for multi document cabinet add action. GitLab merge !79. Thanks to Giacomo Catenazzi (@cateee).
- Add environment variable MAYAN_DOCKER_WAIT to have the Docker image wait for a host and port to become available.
- Turn hard-coded constant STUB_EXPIRATION_INTERVAL into a user setting named DOCUMENTS_STUB_EXPIRATION_INTERVAL. Defaults to previous value of 24 hours to preserve existing behavior.
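The two Docker environment variables added in this release can be combined in a Compose fragment. The value formats shown ("true", host:port) are assumptions; check the Docker chapter of the documentation for the supported syntax:

```yaml
services:
  app:
    image: mayanedms/mayanedms:3.3.15
    environment:
      # Skip the slow initial chown of /var/lib/mayan; useful on
      # non-native block storage backends.
      MAYAN_SKIP_CHOWN_ON_STARTUP: "true"
      # Wait for this host and port to become available before starting.
      MAYAN_DOCKER_WAIT: "postgresql:5432"
```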
3.3.14 (2020-02-23)
- Add missing backslash in deployment instructions. Closes GitLab issue #780. Thanks to Steve Palmer (@steverpalmer) for the report.
- Update CI script to push multiple tags.
- Remove Wiki link in the about view.
- Remove social media links.
- Add support link.
- Add more expressive error message when an invalid storage argument setting is encountered.
- Make document language field a lazy field. This allows starting Mayan even when there are invalid language codes in the DOCUMENTS_LANGUAGE_CODES setting.
- Warn about invalid document language codes in the DOCUMENTS_LANGUAGE_CODES setting. Thanks to forum user @j_arquimbau for the report.
- Add complete staging folder and staging folder file REST API. Closes GitLab issue #778. Thanks to David Kowis (@dkowis) for the request.
- Add the selenium Firefox geckodriver to the setup-dev-environment target.
- Move the purgeperiodictasks command to the task manager app.
- Remove left over interactive option usage for the purgeperiodictasks command. Closes GitLab issue #785. Thanks to Matthias Löblich (@startmat) for the report.
- Exclude /favicon.ico from the authenticated URL list. Closes GitLab issue #786. Thanks to Matthias Löblich (@startmat) for the report.
- Rename test document creation method for clarity.
3.3.13 (2020-02-14)
- Update management command interface. Subclasses of BaseCommand no longer have an ‘interactive’ option.
- Update Django to version 1.11.28.
- Prioritize Mayan’s translations over Django’s built in ones. Fixes GitLab issue #734. Thanks to Roberto Novaes (@rvnovaes) for the report.
- Add make file target to remove fuzzy translation markers.
- Move the language files for the Bosnian language from the bs_BA locale to the bs locale.
- Move the language files for the Slovenian language from the sl_SI locale to the sl locale.
- Move the language files for the Vietnamese language from the vi_VN locale to the vi locale.
- Move the language files for the Dutch language from the nl_NL locale to the nl locale.
- Move the language files for the Danish language from the da_DK locale to the da locale.
- Add make file target to cleanup source translation files.
- Cleanup minor but frequent translation files issues accumulated by the automatic tools. Many new text string are now available for translation.
- Update the doToastrMessages to avoid appending new style updates indefinitely on list sort updates. Closes GitLab issue #772. Thanks to Matthias Löblich (@startmat) for the report and debug information.
3.3.12 (2020-02-10)
- Fix issue with the template object count logic introduced in the last optimization.
- Fix Chinese translation. Locale cn has been renamed to cn-hans.
3.3.11 (2020-02-07)
- Fix document preview rendering issue introduced by the read only decimal field display addition. Closes GitLab issue #771. Thanks to Christoph Roeder (@brightdroid) for the report and investigation.
- Add message about decompression bomb DOS attacks. Add mention how to disable the protection by increasing the allowed image size.
- Optimize lists title item count calculations.
- Fix document properties form default language selection. Closes GitLab issue #770. Thanks to Albert ARIBAUD (@aaribaud) for the report and for narrowing down the cause.
- Add document language codes settings tests. Closes GitLab issue #547. Thanks to Bebef (@Bebef) for the report and research.
- Move the django.contrib.admindocs to be loaded after the Tags app to avoid its translations to take precedence. Closes GitLab issue #734. Thanks to Roberto Novaes (@rvnovaes) for the report.
3.3.10 (2020-01-31)
- Turn TarArchiveClassTestCase in to reusable archive test case class. #MD-10.
- Add test runner option for testing excluded tests.
- Add data operation to file metadata 0002 to remove duplicated entries. Closes GitLab issue #762. Thanks to forum user benaser for the report.
- Add package django_migration_test and add migration test to the file metadata app for migration 0002.
- Update make file to remove repeated commands and add migration testing target.
- Update the GitLab CI file to use the test makefile target and add migration testing.
- Update the Docker run_tests command to perform migration testing.
- Update translation files.
- Add support for specifying related fields per model to the templating app.
- Add grouping to the templating widget. Model attributes are now grouped into model properties, model fields and the new model related fields.
- Add document OCR content and parsed content as document model properties for use in templates.
- Fix the staging folder file API views. GitLab issue #764. Thanks to David Kowis (@dkowis) for the report, debug, and research.
- Add command to show the current version of Mayan. The command is named showversion. The command has one option --build-string that will show the build string instead. Closes #MD-14.
- Add command to check if the current version is the latest one. The command is named checkversion. Closes issue #MD-28.
- Add button to launch a specific workflow for existing documents. Issue #MD-171.
- Update Pillow to version 6.2.2.
- Improve image page count detection by capturing undocumented Pillow exception. Close GitLab issue #767. Thanks to Frédéric Sheedy (@fsheedy) for the report, debug information, and test image.
- Add new setting to disable the API documentation links from the tools menu. The setting is named REST_API_DISABLE_LINKS and defaults to false.
- Add new setting to disable the password reset link in the login form. This link is not used for third party authentication such as when using LDAP. The setting is named AUTHENTICATION_DISABLE_PASSWORD_RESET and defaults to false.
- Improve workflow app navigation.
- Add fall back read-only render for form fields.
3.3.9 (2020-01-18)
- Update Document and Lock models to avoid triggering a new migrations on default document language change and on default lock timeout change. Closes GitLab issue #759.
- Cleanup repository top level. Moved helper scripts to contrib/scripts.
- Add makefile target to make it easier to create the code coverage report.
- Remove unused Magnum and Travis CI files.
- Add makefile target to run GitLab CI jobs locally.
- Add GitLab CI jobs to test upgrading from current to newest version.
3.3.8 (2020-01-17)
- Update literals so the correct paths of pdfinfo, pdftoppm, libreoffice, exiftool and tesseract are found. Relates to GitLab issue #308.
- Fix document detached signing. Closes GitLab issue #732. Thanks to holzhannes (@holzhannes) for the report and debug information.
- Updated direct deployment documentation to advise users installing in a custom directory to verify the automatically generated supervisor configuration file. Addresses GitLab issue #739
- Added a note to the LDAP section of the FAQ to assist users with potential local environment issues
- Updated docker-compose.yml and documentation to ensure RabbitMQ messages are persistent
- Improve the File Storage section of the Documentation
- Add support and documentation for S3 storage backend
- Remove repeated raise statement that cause HTML markup to show on upload error display.
- Improve file metadata property label.
- Improve file metadata property path reading. Will not error out when passed invalid path to the driver as reference.
- Make the sandbox template field a required field.
- Fix Tag apps API required permissions. The required permissions of the API match those of the view and comply with MERC 0006.
- Add the pillow_maximum_image_pixels setting argument; defaults to 89478485.
- Fix document metadata add, edit, and remove redirects.
3.3.7 (2019-12-31)
- Use Python Redis client 3.3.11 to enable .client() method for the Redis lock backend. Add version check to the Redis lock backend. GitLab issue #719. Thanks to Rob de Canha-Knight (@rssfed23) for the report and research.
- Run Selenium tests in headless mode.
- Remove repeated document tags preview column.
- Remove cabinet links from the document cabinet list view.
- Enable display of MissingItem class instances.
- Add tests for the common.http.URL class.
- Update FAQ and troubleshooting chapters.
- Update Docker installer, sample docker-compose file and documentation to add a password to the Redis container. GitLab issue #712. Thanks to Matthew Thode (@prometheanfire) for the report.
- Use a fake config file during tests.
- Update Django to version 1.11.27.
- Add password to the Redis container for the staging Docker targets.
- Add new test case BaseTransactionTestCase.
- Improve file metadata driver database registration. Improve indexing based on file metadata properties. Improves GitLab issue #720 on the signal commit side of the indexing. Thanks to Rob de Canha-Knight (@rssfed23) for the report and debug information.
- Replicate transaction handling improvements from the file metadata app to the OCR and document parsing apps.
- Initialize indexes in a predictable way. Solves GitLab issue #720. Thanks to Rob de Canha-Knight (@rssfed23) for the report and debug information.
- Make file metadata StoredDriver fields unique. Relates to GitLab issue #720. Thanks to Rob de Canha-Knight (@rssfed23) for the report and debug information.
- Fix the POP3 source under Python 3. GitLab issue #724. Thanks to Kevin Pawsey (@kevinpawsey) for the report and debug information.
- Merge NFS troubleshooting section. Thanks to Rob de Canha-Knight (@rssfed23). GitLab merge !67.
- Improve mirroring code to support slashes in index node values and document labels and also support duplicate nodes values or documents labels. Slashes are replaced with underscores. To handle duplicates, the primary key of the object is appended to the label inside parenthesis. Closes GitLab issue #722. Thanks to Rob de Canha-Knight (@rssfed23) for the report and research.
- Fix workflow document signing action. Also add message when trying to use action for an initial state when the created document has no version associated. GitLab issue #726. Thanks to forum user @holzhannes for the report.
3.3.6 (2019-12-19)
- Make list toolbar stick to the top of the view when scrolling.
- Fix page count on some PDF files, and fix a Python 3 incompatibility. GitLab merge !64. Thanks to O2 Graphics (@O2Graphics).
- Improve the executables paths on FreeBSD/OpenBSD. GitLab merge !63. Thanks to O2 Graphics (@O2Graphics).
- Fix document orientation detection. GitLab issue #713. Thanks to Rob de Canha-Knight (@rssfed23) for the report and debug information.
- Update the Redis lock connection initialization so that it works with Redis versions < 5.0. GitLab issue #709. Rob de Canha-Knight (@rssfed23) for the report and debug information.
- Update the ZipArchive class to work with badly encoded filenames. GitLab issue #651. Thanks to Fabian (@ruffy91) for the report.
- Delete periodic task on document type delete. Closes GitLab issue #715. Thanks to Rob de Canha-Knight (@rssfed23) for the report and research.
- Add transaction handling to the interval sources delete and save methods.
- Add support for functional tests using selenium. Use TEST_SELENIUM_SKIP to skip these tests.
- Add test for issue #494.
- Add support for configurable test view template.
- Add support for public test views.
- Reapply fix for issue #494. To avoid exploit of cross site scripting in login view. Thanks to the Checkmarx SCA AppSec team for the research regarding this issue for the recent version and thanks to Lokesh (@lokesh1095) for the original report and solution. GitLab issue #494.
- Settings: Display overridden instead of overrided. GitLab merge !65. Thanks to Rob de Canha-Knight (@rssfed23).
- Update the address of PyPI when checking for new versions to avoid SSL errors from reusing the old address (pypi.python.org/pypi) certificate. GitLab issue #717. Thanks to Jordan Wages (@wagesj45) for the report.
- Allow passing TEST_SELENIUM_SKIP as an environment variable.
- Skip Selenium tests inside the Docker container.
3.3.5 (2019-12-13)
- Pin django-timezone-field to version 3.1. GitLab issue #698. Thanks to Rob de Canha-Knight (@rssfed23) for the report and research.
- Pin kombu to version 4.6.7. GitLab issue #699. Thanks to Rob de Canha-Knight (@rssfed23) for the report and the research.
- Update instances of the word “weblink” to “web link”.
- Unify the creation of the temporary config file used in tests.
- Update all 0001 setting migrations to accept manually migrated settings.
- Update TemplateField to concatenate existing help texts.
- Don’t show the edit and delete links for resolved web links.
- Exclude Smart link setup columns and links from the resolved smart link views.
- TemplateField shows the available variables in the help text automatically.
- Use TemplateField for the web link template.
- Use TemplateField for smart links.
- Add the ID and the URL to the checkout serializer.
- Add BaseTransformationType metaclass in a way compatible with Python 2 and Python 3.
- Add missing link icons.
- Add missing field help texts.
3.3.4 (2019-12-09)
- Update the gunicorn worker class to synchronous.
- Update the way the BaseTransformationType metaclass is passed to work on Python 3.
- Add locking to the file metadata document processing task.
- Update devpi-server version to 5.3.1.
- Add targets to run staging containers using RabbitMQ as broker.
- Don’t set SourceColumn to the attribute name when no help text is defined.
- Make it clear when a setting is being overridden by an environment variable. Add better text explanation. Change the column to a check mark widget.
- Add icons to the smart settings links.
- Fix docker-runtest-all target.
- Fix the evaluation priority of the bootstrap settings. Closes GitLab issue #702. Thanks to Kevin Pawsey (@kevinpawsey) for the report and the help debugging the issue.
- Switch from librabbitmq to py-amqp. Closes GitLab issue #699. Thanks to Rob de Canha-Knight (@rssfed23) for the report, research, and debug.
- Darken content area when opening the mobile menu.
3.3.3 (2019-12-05)
- Fix transformation label display in transformation create view.
- Remove supervisor environment variable expansion.
- Don’t exit GitLab makefile target if the branch to delete doesn’t exist.
- Automatically create transformations from the selection form that doesn’t have arguments.
- Add missing message displays for transformation creation errors and no-argument transformation creation.
- Mark missing text for document indexing as translatable.
3.3.2 (2019-12-05)
- Improve setting migration method matching. Avoid executing migrations for settings with similar but shorter names.
- Fix sources app setting migrations.
- Add OCR app setting migrations.
- Improve upgrade and deployment instructions.
- Update backup chapters to refer to upstream database documentation.
3.3.1 (2019-12-04)
- Update Celery broker environment variable in the docker installer.
- Add preparestatic command to documentation. GitLab issue #692. Thanks to Christopher S. Meiklejohn (@cmeiklejohn2) for the report.
- Add sources setting migration.
- Savesettings command fixes.
- Fix username color on mobile screens.
- Hide the multi item selection help text on mobile screens.
- Update Django to version 1.11.26.
- Remove body spacer HTML and JavaScript. Not needed with the new UI.
- Change the required permission to view the document parsing error from “View document parsed content” to “Parse document”. This way only users with the access to affect the parsed content are the only ones that can view what errors occurred during parsing.
3.3 (2019-12-03)
- Add support for icon shadows.
- Add icons and no-result template to the object error log view and links.
- Use Select2 widget for the document type selection form.
- Backport the vertical main menu update.
- Backport workflow preview refactor. GitLab issue #532.
- Add support for source column inheritance.
- Add support for source column exclusion.
- Backport workflow context support.
- Backport workflow transitions field support.
- Backport workflow email action.
- Backport individual index rebuild support.
- Rename the installjavascript command to installdependencies.
- Remove database conversion command.
- Remove support for quoted configuration entries. Support unquoted, nested dictionaries in the configuration. Requires manual update of existing config.yml files.
- Support user specified locations for the configuration file with the CONFIGURATION_FILEPATH (MAYAN_CONFIGURATION_FILEPATH environment variable), and CONFIGURATION_LAST_GOOD_FILEPATH (MAYAN_CONFIGURATION_LAST_GOOD_FILEPATH environment variable) settings.
- Move bootstrapped settings code to their own module in the smart_settings apps.
- Remove individual database configuration options. All database configuration is now done using MAYAN_DATABASES to mirror Django’s way of doing database setup.
- Added support for YAML encoded environment variables to the platform templates apps.
- Move YAML code to its own module.
- Move Django and Celery settings.
- Backport FakeStorageSubclass from versions/next.
- Remove django-environ.
- Support checking in and out multiple documents.
- Remove encapsulate helper.
- Add support for menu inheritance.
- Emphasize source column labels.
- Backport file cache manager app.
- Convert document image cache to use file cache manager app. Add setting DOCUMENTS_CACHE_MAXIMUM_SIZE defaults to 500 MB.
- Remove djcelery and replace it with django-celery-beat.
- Update Celery to version 4.3.0 Thanks to Jakob Haufe (@sur5r) and Jesaja Everling (@jeverling) for much of the research and code updates.
- Support wildcard MIME type associations for the file metadata drivers.
- Update Gunicorn to use sync workers.
- Include devpi-server as a development dependency. Used to speed up local builds of the Docker image.
- Update default Docker stack file.
- Remove Redis from the Docker image. A separate container must now be deployed.
- Add Celery flower to the Docker image.
- Allow PIP proxying to the Docker image during build. Can be used with the local devpi-server or other similar.
- Default Celery worker concurrency to 0 (auto).
- Set DJANGO_SETTINGS_MODULE environment variable to make it available to sub processes.
- Add entrypoint commands to run single workers, single gunicorn or single celery commands like “flower”.
- Add platform template to return queues for a worker.
- Update the EXIFTOOL driver to run for all documents regardless of MIME type.
- Remove task inspection from task manager app.
- Move pagination navigation inside the toolbar.
- Remove document image clear link and view. This is now handled by the file caching app.
- Add web links app.
- Add support to display column help text as a tooltip.
- Update numeric dashboard widget to display thousand commas.
- Add support for disabling document pages.
- Add support for converter layers.
- Add redactions app.
- Unify all line endings to be Linux style.
- Add support for changing the system messages position. GitLab issue #640. Thanks to Matthias Urhahn (@d4rken).
- Update Docker deploy script. Use alpine postgres version. Support Docker networks and make it the default. Delete the containers to allow the script to be idempotent. Deploy a Redis container.
- Improve document version upload form.
- Use dropzone for document version upload form.
- Allow the “Execute document tools” permission to be granted via ACL.
- Update IMAP source to be UID based.
- Add support for custom IMAP search criteria.
- Add support for executing custom IMAP STORE commands on processed messages.
- Add support to execute the IMAP expunge command after each processed message.
- Add support for specifying a destination IMAP mailbox for processed messages. GitLab issue #399. Thanks to Robert Schöftner (@robert.schoeftner).
- Support simple search disable via the new SEARCH_DISABLE_SIMPLE_SEARCH setting.
- Move all generic API classes definitions to the rest_api.generics module.
- Update API status code on insufficient access for the apps: indexes, parsing, documents, metadata, ocr, permission, user management.
- Split document app links.
- Make Postgres container wait delay configurable.
- Enable the sidebar workflow runtime link when the workflow view permission is granted to at least one workflow.
- Add ACL support to smart links.
- Add “no result” template to staging folder files view.
- Split duplicated document views, links into their own module.
- Update label and icon of the document sign form Label updated from “Save” to “Sign”.
- Document signatures API views.
- Add and improve document signatures app tests.
- Rename document_states/tests/test_workflow_actions.py to document_states/tests/base.py.
- Added TestServerTestCaseMixin to perform mocked HTTP requests.
- Authentication and headers added to the workflow HTTP POST action.
- Update the timeout field of the workflow HTTP POST action to support templates. The timeout field also supports integers, floats, or empty values.
- DjangoSMTP mailer password field size increased to 192 characters.
- Improve TestModelTestMixin. Allow specifying a base model. Fix passing the dynamic Meta class to the test model.
- Support for proxy model permission inheritance. Proxy models now get the permission inheritance from their base models.
- Update common.http.URL to allow passing a query dictionary.
- Add the document template sandbox feature.
- Use the generic TemplateField for the expression field of index tree templates.
- Add document trashed event. Closes GitLab issue #608 Thanks to Vikas Kedia (@vikaskedia) for the report.
- Add transaction handling to document model events.
- Add back support for individual database settings for compatibility with version 3.2 settings. These are now a fallback if the new ‘DATABASES’ setting is not specified.
- Refactor the initial setting bootstrap code.
- Use timezone aware date for document statistics
- Show placeholder label on invalid action classes. Instead of throwing an error, a sample label of “Unknown action type” will be used, allowing users to delete the unknown state action.
- Add workflow action to sign documents.
- Support running specific tests inside the Docker container: docker run --rm mayanedms/mayanedms:3.3 run_tests
- Make the statistics slug field unique.
- Self-heal statistics results model when multiple results are created using the same slug value. Forum topic 1404.
- Add “run_command” Docker entrypoint option to run arbitrary Mayan management command.
- Allow specifying the queue list for the run_worker Docker command.
- Switch default installation to use two Redis databases. One for the message broker, and the other to store task results.
- Complete the prefixing of template tags with the app name.
- Remove unused template tags.
- Add support for setting migrations.
- Add setting migrations for the common, converter, documents, file metadata, and document signatures app.
- Add document type change API endpoint.
- Change OCR API submit URL from documents/{pk}/submit to documents/{pk}/ocr/submit.
- Add Redis based distributed lock backend. Requires one argument: “redis_url”. Example: redis://127.0.0.1:6379/0
- Add the setting LOCK_MANAGER_BACKEND_ARGUMENTS.
- Automate documentation building dependencies.
- Add sphinx sitemap extension.
- Move the file patching code from the Dependency class to a generalized utility of the storages app.
- Add book link to the documentation.
- Update mayan_statistics migration 0002 to rename duplicate slugs.
- Add document index reset view.
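The new Redis distributed lock backend introduced in 3.3 can be sketched in config.yml as follows. The backend dotted path is written from memory and should be treated as an assumption; only the redis_url argument is confirmed by the release notes:

```yaml
LOCK_MANAGER_BACKEND: mayan.apps.lock_manager.backends.redis_lock.RedisLock
LOCK_MANAGER_BACKEND_ARGUMENTS:
  redis_url: redis://127.0.0.1:6379/0
```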
3.2.12 (2019-XX-XX)
- Add Mayan container port environment variable to the docker installer. Thanks to Sergios Kefalas for the patch.
- Fix off-by-one error in document statistics.
3.2.11 (2019-11-28)
- Backport transaction handling to document model events.
- Update example LDAP authentication settings file.
- Update FAQ entry about the LDAP file.
- Automate documentation building dependencies.
- Add sphinx sitemap extension.
- Move the file patching code from the Dependency class to a generalized utility of the storages app.
- Add book link to the documentation.
- Make the statistics slug field unique.
- Self-heal statistics results model when multiple results are created using the same slug value. Forum topic 1404.
- Update mayan_statistics migration 0002 to rename duplicate slugs.
- Fix reverse inheritance permissions.
- Remove index create permission as an ACL permission for indexes.
- Fix API example.
- Fix document check in via the API. GitLab issue #688. Thanks to inam ul haq (@inam.sys) for the report.
- Improve supervisord upgrade instructions. Forum topic 880.
3.2.10 (2019-11-19)
- Auto-import dependencies. No need to use: from .dependencies import * # NOQA
- Add makefile target to run all tests in debug mode. This mode is more strict and sidesteps a Django bug that causes errors in the template code to be silent during tests.
- Rename expected_content_type to expected_content_types and allow a list of content types to be specified.
- Add missing label to metadata and file metadata model properties entries.
- Improve workflow field help text. Make it usable for the creation/edit form help text and for the column pop over.
- Fix NamedMultiWidget issue on Python 3. Affects document checkout form. GitLab issue #683. Thanks to John Bentley (@johnbentleyii) for the report.
- Add missing Event class cache invalidation when calling the refresh() method.
- Use timezone aware date for document statistics.
- Show placeholder label on invalid action classes. Instead of throwing an error, a sample label of “Unknown action type” will be used, allowing users to delete the unknown state action.
- Automate paths in documentation.
- Settings chapter improvements.
- Documentation paths consistency fixes.
- Expand custom Python setting section.
3.2.9 (2019-11-03)
- Move IMAPMockServer to its own module.
- Display feedback message when testing a mailing profile.
- Add tests to the platform app.
- Fix platformtemplate command –context option help message.
- Language translations update.
- Add target to run all translations targets.
- Backport color log formatter from branch version/next.
- Don’t raise an error when checking AnonymousUser for permissions. Instead always return False.
- Enable the main menu workflow runtime link when the workflow view permission is granted to at least one workflow.
- Make Postgres container wait delay configurable. GitLab issue #677. Thanks to Antenore Gatta (@antenore) for the report.
- Move Celery and Django Celery dependencies to the task manager app.
- Improve dependencies app tests.
- Return st_nlink of 1 for files in mirrored indexes. GitLab issue #676. Thanks to Ezio Vernacotola (@eziove) for the report and solution.
- Fix MAYAN_GUNICORN_TIMEOUT Docker image setting. GitLab issue #671. Thanks to Lennart Sauerbeck (@lennart_s) for the report.
- Add makefile target to launch a production staging Docker image.
- Improve duplicated document list view logic to not show documents with trashed duplicates.
- Backport Docker composer makefile targets.
- Add PermissionTestCaseMixin and SmartSettingTestCaseMixin to better organize cache invalidation of both apps for tests.
- Add a version attribute to setting namespace. These are dumped as SMART_SETTINGS_NAMESPACES.
- Add savesettings command.
- Add extra logging to the IMAP email source. GitLab issue #682. Thanks to Patrick Hütter (@PatrickHuetter) for the report.
- Rename all instances of the IMAP server from mailbox to server for clarity.
- Add book link in the about menu.
- Add unknown exception handling when checking for the latest version.
3.2.8 (2019-10-01)
- Fix error when accessing some API entry points without being authenticated.
- Add cabinet add and remove workflow actions.
- Tweaked the jstree component’s appearance to cope with long cabinet labels.
- Update Django to version 1.11.24
- Update jQuery to version 3.4.1
- Add support for deleting the OCR content of a document or selection of documents.
- Add OCR content deleted event.
- Add missing recursive option to Docker entrypoint chown. GitLab issue #668. Thanks to John Wice (@brilthor) for the report.
- Add support for deleting the parsed content of a document or selection of documents.
- Add parsed content deleted event.
- Allow scaling of UI on mobile devices.
- Add Chinese fonts to the Docker image.
3.2.7 (2019-08-28)
- Fix checkout form bug. Thanks to Lucius Schaerer (@lschaer1) for the report.
- Disable pagination current page button. The current page button was clickable and would cause the single page navigation to jump to the home view.
- Remove redundant Celery queue declarations from the file_metadata app.
- Add internal_name field to workflow serializer. Fixes workflow API creation view.
- Fix document cabinet list API view. Thanks for forum user “jere” for the report. Forum topic 1039.
- Fix document template column field. GitLab issue #655. Thanks to Christian Wiegand (@christianwgd) for the report.
- Increase mailing profile password field max length from 48 to 128 characters. GitLab issue #657. Thanks to sigsec (@sigsec) for the report.
- Update the Docker entrypoint to update the ownership of files when the UID of GUID are changed. GitLab issue #650. Thanks to Fabian (@ruffy91) for the report.
- Update the Docker entrypoint to allow changing the GID of the mayan user to existing values. GitLab issue #652. Thanks to Fabian (@ruffy91) for the report.
- Rename the MAYAN_USER_GUID environment variable to MAYAN_USER_GID.
- Add automatic adjustment of HTML body on navigation bar changes. Closes GitLab issue #643. Thanks to Light Templar (@LightTemplar) for the report.
- Unify all line endings to be Linux style.
- Make sure system alerts don’t appear under floating elements.
3.2.6 (2019-07-10)
- Remove the smart settings app * import.
- Encode settings YAML before hashing.
- Fix document icon used in the workflow runtime links.
- Add trashed date time label.
- Fix thumbnail generation issue. GitLab issue #637. Thanks to Giacomo Cariello (@giacomocariello) for the report and the merge request fixing the issue.
3.2.5 (2019-07-05)
- Don’t error out if the EXTRA_APPS or the DISABLED_APPS settings are set to blank.
- Update troubleshooting documentation topic.
- Add data migration to the file metadata app. Synchronizes the document type settings model of existing document types.
- Fix cabinet and tags upload wizard steps missing some entries. GitLab issue #632. Thanks to Matthias Urhahn (@d4rken) for the report.
- Add alert when settings are changed and until the installation is restarted. GitLab issue #605. Thanks to Vikas Kedia (@vikaskedia) for the report.
- Update Django to version 1.11.22, PyYAML to version 5.1.1, django-widget-tweaks to version 1.4.5, pathlib2 to version 2.3.4, Werkzeug to version 0.15.4, django-extensions to version 2.1.9, django-rosetta to version 0.9.3, psutil to version 5.6.3.
3.2.4 (2019-06-29)
- Support configurable GUnicorn timeouts. Defaults to current value of 120 seconds.
- Fix help text of the platformtemplate command.
- Fix IMAP4 mailbox.store flags argument. Python’s documentation incorrectly states it is named flag_list. Closes GitLab issue #606.
- Improve the workflow preview generation. Use polylines instead of splines. Add state actions to the preview. Highlight the initial state.
- Add help text to the workflow transition form comment field.
- Fix direct deployment instructions.
- Add user, group, and role dashboard widgets.
- Add test mixin detect database connection leaks.
- Remove tag create event registration from the tag instances. The tag create event is not applicable to existing tags.
- Add proper redirection after moving a document to the trash.
- Remove the INSTALLED_APPS setting. Replace it with the new COMMON_EXTRA_APPS and COMMON_DISABLED_APPS.
- Improve email metadata support. Can now work on email with nested parts. Also the metadata.yaml attachment no longer needs to be the first attachment.
3.2.3 (2019-06-21)
- Add support for disabling the random primary key test mixin.
- Fix mailing profile log columns mappings. GitLab issue #626. Thanks to Jesaja Everling (@jeverling) for the report.
- Fix the Django SMTP backend username field name. GitLab issue #625. Thanks to Jesaja Everling (@jeverling) for the report and the research.
- Increase the Django SMTP username field length. GitLab issue #625. Thanks to Jesaja Everling (@jeverling) for the report and the research.
3.2.2 (2019-06-19)
- Fix document type change view. Closes GitLab issue #614. Thanks to Christoph Roeder (@brightdroid) for the report.
- Fix document parsing tool view typo. Closes GitLab issue #615. Thanks to Tyler Page (@iamtpage) for the report.
- Update the task_check_interval_source reference. GitLab issue #617. Thanks to Lukas Gill (@lukkigi) for the report and debug information.
3.2.1 (2019-06-14)
- Fix sub cabinet creation view. Thanks to Frédéric Sheedy (@fsheedy) for the report.
- Add PostgreSQL troubleshooting entry. Closes GitLab issues #523 and #602.
- Use YAML SafeDumper to avoid adding YAML datatype tags. Closes GitLab issue #599. Thanks to Frédéric Sheedy (@fsheedy) for the report and debug information.
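  A minimal illustration of the difference, assuming PyYAML is installed: the default Dumper embeds Python-specific datatype tags, while SafeDumper restricts output to plain YAML types.

  ```python
  import yaml

  # The default Dumper tags Python-specific types such as tuples,
  # producing YAML that plain consumers cannot parse.
  tagged = yaml.dump((1, 2))   # contains "!!python/tuple"

  # SafeDumper emits only standard YAML types.
  plain = yaml.dump((1, 2), Dumper=yaml.SafeDumper)

  print(tagged)
  print(plain)
  ```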
- Add check for app references and point users to release notes for details. GitLab issue #603. Thanks to Vikas Kedia (@vikaskedia) for the report.
- Remove sidebar float right. Fixes GitLab issue #600. Thanks to Frédéric Sheedy (@fsheedy) for the report and debug information.
- Collapse sidebar on small screens. Display the sidebar at the bottom of the screen on small displays.
3.2 (2019-06-13)
- Split sources models into separate modules.
- Add support for subfolder scanning to watchfolders. Closes GitLab issues #498 and #563.
- Updated the source check behavior to allow checking a source even when the source is disabled and to not delete processed files during a check.
- Switch to full app paths.
- Split document app models into separate modules.
- Split workflow views into separate modules.
- Add custom DatabaseWarning to tag the SQLite usage warning.
- Add keyword arguments to add_to_class instances.
- Move add_to_class function to their own module called methods.py
- Remove catch all exception handling for the check in and check out views.
- Improve checkouts tests by reducing redundant code.
- Change how the HOME_VIEW setting is defined.
- Remove the role permission grant and revoke permission.
- Split trashed document views into their own module.
- Show entire sys trace when an App import exception is raised.
- Remove Django suit from requirements.
- Remove development URLs from main URL file.
- Move API documentation generation from the root URLs module to the REST API app’s URLs module.
- Update Pillow to version 6.0.0
- Update PyYAML to version 5.1. Update use of safe_load and safe_dump to load and dump using the SafeLoader.
- Add SilenceLoggerTestCaseMixin to lower level of loggers during tests.
- New default value for setting DOCUMENTS_HASH_BLOCK_SIZE is 65535.
- New default value for setting MIMETYPE_FILE_READ_SIZE is 1024.
- Add workaround for Tesseract bug 1670.
- Move setting COMMON_TEMPORARY_DIRECTORY to the storage app. The setting is now STORAGE_TEMPORARY_DIRECTORY.
- Move file related utilities to the storage app.
- Backport and remove unused code from the permission app.
- Move the navigation and authentication templates to their respective apps.
- Add dashboard app.
- Remove queryset slicing hack from the Document list view. And slice the Recently Added Document queryset itself.
- Move stub filtering to the Document model manager.
- Increase the default number of recently added documents and recently accessed documents from 40 to 400.
- Integrate django-autoadmin into the core apps.
- Update middleware to new style classes.
- Add server side invalid document template.
- Move tag specific JavaScript to the tags app.
- Reduce form boilerplate code with new FormOptions class.
- Use FormOptions for the DetailForm class.
- DetailForm now supports help text on extra fields.
- Add FilteredSelectionForm class.
- Use FilteredSelectionForm for TagMultipleSelectionForm.
- Use FilteredSelectionForm for the class CabinetListForm.
- Add keyword arguments to URL definitions.
- Use FilteredSelectionForm to add a new ACLCreateForm.
- Rename IndexListForm to IndexTemplateFilteredForm.
- Use FilteredSelectionForm for IndexTemplateFilteredForm.
- Use FilteredSelectionForm for DocumentVersionSignatureCreateForm.
- Improve document signatures tests.
- Add docstrings to most models.
- Add support to the mailing profiles for specifying a from address. Closes GitLab issue #522.
- Expose new Django settings: AUTH_PASSWORD_VALIDATORS, DEFAULT_FROM_EMAIL, EMAIL_TIMEOUT, INTERNAL_IPS, LANGUAGES, LANGUAGE_CODE, STATIC_URL, STATICFILES_STORAGE, TIME_ZONE, WSGI_APPLICATION.
- Convert language choices into a function.
- Move language choices generation to documents.utils.
- Remove support for generating documents images in base 64 format.
- Move Pillow initialization from the module to the backend class initialization.
- Remove star import from the ACL and Common apps.
- Add dependencies app.
- Convert the document tags widget to use HTML templates.
- Move Tag app HTML widgets to their own module.
- Move the document index app widgets to the html_widget.py module.
- Update group members view permission. The group edit and user edit permission are now required.
- Add keyword arguments to messages uses.
- Add keyword arguments to the reverse use in views.
- Add MERCs 5 and 6.
- Update authentication function views to use Django’s new class based authentication views.
- Expose Django’s LOGOUT_REDIRECT_URL setting.
- Move current user views from the common app to the user management app.
- Move the purge permission logic to the StorePermission manager.
- Remove the MIMETYPE_FILE_READ_SIZE setting.
- Use copyfileobj in the document parsers.
- Backport list facet menu code.
- Backport sidebar code.
- CSS updates to maximize usable width.
- Improve partial navigation error messages and display.
- Add user created and user edited events.
- Add group created and group edited events.
- Add support for SourceColumn widgets.
- Improve styling of the template debug view.
- Add support for showing the current user’s events.
- Add support kwargs to the SourceColumn class.
- Improve the event widgets, views and tests.
- Add mailer use event.
- Remove the included fontawesome and download it from the NPM registry.
- Fix issue installing scoped NPM packages.
- Add new icons classes and templates.
- Add support for icon composition.
- Add support for link icon path imports.
- Remove support for link icon strings.
- Split document app form into separate modules.
- Move the favorite document views to their own module.
- Replace DocumentTypeSelectioForm with an improved version that does filtering.
- Update OCR links activation.
- Update document parsing link activation.
- Add favorite document views tests.
- Add document state action view test.
- Remove sidebar menu instance. The secondary menu and the previous sidebar menu now perform the same function.
- Backport source column identifiable and sortable improvements.
- Update the way the no-result template is shown.
- Improve TwoStateWidget to use a template. Make it compatible with the SourceColumn.
- Update SourceColumn to support related attributes.
- Add support for displaying empty values for source columns.
- Add support for source column object or attribute absolute URLs.
- Add sortable columns to all apps.
- Remove permission list display from the ACL list view. Reduces clutter and unpredictable column size.
- Remove the full name from the user list.
- Add the first name and last name to the user list.
- Add file metadata app.
- Add support for submitting forms by pressing the Enter key or by double clicking.
- Rename form template ‘form_class’ to ‘form_css_classes’.
- Add support for adding form buttons aside from the default submit and cancel.
- Update ChoiceForm to be full height.
- Add AddRemoveView to replace AssignRemoveView
- Update the group roles view to use the new AddRemoveView.
- Add role create and edit events.
- Sort users by lastname, firstname.
- Switch user groups and group users views to AddRemoveView.
- Commit user edit event when a user is added or removed from a group.
- Commit the group edit event when a group is added or removed from a user.
- Require dual permissions when adding or removing users to and from a group, and likewise for groups and users.
- Backport search improvements.
- Remove search elapsed time calculation.
- Remove SEARCH_LIMIT setting.
- Use the ‘handler’ prefix for all the signal handler functions.
- Remove custom email widget and use Django’s.
- Increase default maximum number of favorite documents to 400.
- Update the role group list view to use the new AddRemoveView.
- Commit the group event in conjunction with the role event when a group is added or removed from a role.
- Update the role permission view to use the new AddRemoveView.
- Rename transformation manager method add_for_model to add_to_object.
- Rename transformation manager method get_for_model to get_for_object.
- Load the converter class on demand.
- Remove app top level star imports.
- Monkeypatch group and user models to make their fields translatable.
- Add new and default Tesseract OCR backend to avoid Tesseract bug 1670.
- Load only one language in the document properties form.
- Convert title calculation form to a template tag.
- Show the full title as a hover title even when truncated.
- Increase default title truncation length to 120 characters.
- Improve inherited permission computation.
- Add test case mixin that produces ephemeral models.
- Update ACL permissions view to use the new AddRemoveView class.
- Add ACL created and edited events.
- Update index document types view to use the new AddRemoveView class.
- Add index create and edit events.
- Allow overloading the action_add and action_remove methods from the AddRemoveView.
- Add view to link document type and indexes from the document type side.
- Update smart link document type selection view to use AddRemoveView class.
- Add smart link created and edited events.
- Fix smart link ACL support.
- Update JavaScript downloader to work with Python 3.
- Improve speed of the NPM package hash verification.
- Add view to enable smart links for documents types from the document type side.
- Enable list link icons.
- Add outline links CSS for facets.
- Add a bottom margin to list links.
- Use copyfileobj to save documents to files.
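  The pattern looks roughly like this (a sketch using in-memory streams; the actual document storage code differs): copyfileobj streams the file in fixed-size chunks instead of reading it fully into memory.

  ```python
  import io
  import shutil

  source = io.BytesIO(b"document bytes" * 1000)
  destination = io.BytesIO()

  # Copy in 64 KiB chunks; memory use stays constant regardless
  # of how large the source document is.
  shutil.copyfileobj(source, destination, length=64 * 1024)

  print(destination.getvalue() == source.getvalue())
  ```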
- Add user logged in and logged out events.
- Add transaction handling in more places.
- Update ACLs tests to use ephemeral models.
- Add new app to handle all dependencies.
- Remove the licenses.py module and replace it with a dependencies.py module.
- Backport ACL computation improvements.
- Remove model permission proxy models.
- Remove related access control argument. This is now handled by the related field registration.
- Allow nested access control checking.
- check_access’s permissions argument must now be an iterable.
- Remove permissions_related from links.
- Remove mayan_permission_attribute_check from API permission.
- Update Bootstrap and Bootswatch to version 3.4.1.
- Convert the workflow document types view to use the new AddRemove view.
- Add the workflow created and edited events.
- Remove AssignRemove View.
- Add view to setup workflows per document type from the document type side.
- Make workflows, workflows states, workflow transitions column sortable.
- Show completion and initial state in the workflow proxy instance menu list.
- Fix translation of the source upload forms using dropzone.js
- Rename get_object_list to get_source_queryset.
- Add uniqueness validation to SingleObjectCreateView.
- Remove MultipleInstanceActionMixin.
- Backport MultipleObjectMixin improvements.
- Remove ObjectListPermissionFilterMixin.
- Add deprecation warning to convertdb.
- Add the preparestatic command.
- Remove the related attribute of check_access.
- Remove filter_by_access. Replaced by restrict_queryset.
- Move the user set password views to the authentication app.
- All views redirect to common’s home view instead of the REDIRECT_URL setting.
- Update tag document list and the document tag list views to require the view permissions for both objects.
- Install and serve static content to and from the image.
- Add support for editing document comments.
- Remove Internet Explorer specific markup.
- Fix optional metadata remove when mixed with required metadata.
- Create intermediate file cache folder. Fixes preview errors when the first document uploaded is an office file.
- Move queue and task registration to the CeleryQueue class. The .queues.py module is now loaded automatically.
- Allow setting the Docker user UID and GUID.
- Add task path validation.
- Increase dropzone upload file size limit to 2GB.
- Add cabinet created and edited events.
- Show a null mailer backend if there is a backend with an invalid path. Due to the app full path change, existing mailer setups need to be recreated.
- The document link URL when mailed is now composed of the COMMON_PROJECT_URL + document URL instead of the Site domain.
- Add the checkdependencies command.
- Add comment and makefile target to generate all requirement files.
- Place deletion policies units before periods for clarity.
- Remove repeated EMAIL_TIMEOUT setting.
- Invert the order of the Action Object and Target columns for clarity.
- Add note about the new preparestatic command.
- Add no-result template for workflow instance detail view.
- Update HTTP workflow action to new requests API.
- Remove the included Lato font. The font is now downloaded at install time.
- Add support for Google Fonts dependencies.
- Add support for patching dependency files using rewriting rules.
- Allow searching documents by UUID.
- Improve search negation logic.
- Add support for search field transformations.
- Disable hiding page navigation on idle.
- Display namespace in the transition trigger view.
- Sort events list in the transition trigger view.
- Add support for form media to DynamicFormMixin.
- Fix tag attach and remove action form media.
- Sort content type list of the access grant and remove action.
- Use select2 for the content type field of the access grant and remove action.
- Add Latvian translation.
- Support search model selection.
- Support passing a queryset factory to the search model.
- Add workflow actions to grant or remove permissions to a document.
- Add support for locked files for watchfolder.
3.1.11 (2019-04-XX)
- Fix multiple tag selection wizard step.
- Change the required permission for the checkout info link from document check in to document checkout details view.
- Lower the log severity when links don’t resolve.
- Add DOCUMENTS_HASH_BLOCK_SIZE to control the size of the file block when calculating a document’s checksum.
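  How a block-size setting like this typically feeds into checksum calculation can be sketched as follows (function and constant names are illustrative, not Mayan's actual code):

  ```python
  import hashlib
  import io

  HASH_BLOCK_SIZE = 65535  # illustrative; mirrors the setting's default

  def checksum(fileobj, block_size=HASH_BLOCK_SIZE):
      """Hash a file-like object in blocks to bound memory use."""
      digest = hashlib.sha256()
      while True:
          block = fileobj.read(block_size)
          if not block:
              break
          digest.update(block)
      return digest.hexdigest()

  print(checksum(io.BytesIO(b"abc")))
  ```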
3.1.10 (2019-04-04)
- Backport test case improvements from the development branch. Add random primary key mixin. Split test case code into mixins. Make the view test case and the API test cases part of the same class hierarchy. Update tests that failed due to the new import locations.
- Add support for disabling the content type checking test case mixin.
- Update document indexing tests to be order agnostic. GitLab issue #559.
- Add test for the advanced search API.
- Apply merge !36 by Simeon Walker (@simeon-walker) to fix the advanced search API.
- Apply merge !35 by Manoel Brunnen (@mbru) to fix building the Docker image on the armv7l platform (Raspberry Pi, Odroid XU4, Odroid HC2). Also fixes assertion errors from pip.
- Apply merge !37 by Roger Hunwicks (@roger.hunwicks) to allow TestViewTestCaseMixin to work with a custom ROOT_URLCONF. GitLab issue #566.
- Apply merge !40 by Roger Hunwicks (@roger.hunwicks) to pin the Tornado version used to 6.0 and continue supporting Python 2.7. GitLab issue #568.
- Apply merge !41 by Jorge E. Gomez (@jorgeegomez) to fix the compressed class method name. GitLab issue #572.
- Remove notification badge AJAX setup. Individual link AJAX workers are obsolete now that the menu is being rendered by its own AJAX renderer. GitLab issue #562.
- Add support for server side link badges.
- Add API to list all templates.
- Remove newlines from the rendered templates.
- Reject email attachments of size 0. Thanks to Robert Schoeftner (@robert.schoeftner) for the report and solution. GitLab issue #574.
- Add missing document index API view create permission.
- Fix index list API view. Add index create, delete, detail API tests. GitLab issue #564. Thanks to the Stéphane (@shoyu) for the report and debug information.
- Validate the state completion value before saving. Thanks to Manoel Brunnen (@mbru) for the report and debug information. GitLab issue #557.
- Add the MIMETYPE_FILE_READ_SIZE setting to limit the number of bytes read to determine the MIME type of a new document.
- Force object to text when raising PermissionDenied to avoid UnicodeDecodeError. Thanks to Mathias Behrle (@mbehrle) for the report and the debug information. GitLab issue #576.
- Add support for skipping a default set of tests.
3.1.9 (2018-11-01)
- Convert the furl instance to text to allow serializing it into JSON to be passed as arguments to the background task.
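  The underlying issue can be reproduced with any object that is not JSON serializable; pathlib.PurePosixPath stands in here for the furl instance:

  ```python
  import json
  import pathlib

  url_like = pathlib.PurePosixPath("/documents/1")  # stand-in for a furl instance

  try:
      json.dumps({"url": url_like})
  except TypeError:
      print("not serializable")

  # Converting to text first makes the value JSON-safe, which is
  # what the fix does before passing arguments to the background task.
  payload = json.dumps({"url": str(url_like)})
  print(payload)  # {"url": "/documents/1"}
  ```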
3.1.8 (2018-10-31)
- Reorganize documentation into topics and chapters.
- Add Workflows and API chapters.
- Add new material from the Wiki to the documentation.
- Add data migrations to the sources app migration 0019 to ensure all labels are unique before performing the schema migrations.
- Add improvements to the metadata URL encoding and decoding to support ampersand characters as part of the metadata value. GitLab issue #529. Thanks to Mark Maglana @relaxdiego for the report.
- Add custom validator for multiple emails in a single text field. Change the widget of the email fields in the mailer app to avoid browser side email validation. Closes GitLab issue #530. Thanks to Mark Maglana @relaxdiego for the report.
- Add configuration option to change the project/installation URL. This is used in the password reset emails and in the default document mailing templates.
- Increase the size of the workflow preview image.
- Center the workflow preview image.
- Move the noop OCR backend to the right place.
- Add new management command to display the current configuration settings.
- Default the YAML flow format to False, which never uses inline formatting.
- Add support for reindexing documents when their base properties like the label and description are edited.
3.1.7 (2018-10-14)
- Fix an issue with some browsers not firing the .load event on cached images.
- Remove duplicate YAML loading of environment variables.
- Don’t load development apps if they are already loaded.
- Make sure all keys used as input for the cache key hash are bytes and not unicode. GitLab issue #520. Thanks to TheOneValen @TheOneValen for the report.
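  The encode-before-hashing pattern can be sketched like this (the function name is illustrative; hashlib rejects text input on Python 3, so keys must be bytes):

  ```python
  import hashlib

  def cache_key_hash(key):
      # hashlib only accepts bytes; encode text keys first so both
      # str and bytes inputs hash consistently.
      if isinstance(key, str):
          key = key.encode("utf-8")
      return hashlib.sha256(key).hexdigest()

  print(cache_key_hash("index/Década/doc.pdf"))
  print(cache_key_hash(b"raw-bytes-key"))
  ```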
- Ignore document stub from the index mirror. GitLab issue #520. Thanks to TheOneValen @TheOneValen for the report.
- Fix for the Docker image INSTALL_FLAG path. Thanks to Mark Maglana @relaxdiego for the report and to Hamish Farroq @farroq_HAM for the patch. GitLab issue #525.
- Fix the typo in the Docker variable for worker concurrency. Thanks to Mark Maglana @relaxdiego for the report and to Hamish Farroq @farroq_HAM for the patch. GitLab issue #527.
- Add a noop OCR backend that disables OCR and the check for the Tesseract OCR binaries. Set the OCR_BACKEND setting or MAYAN_OCR_BACKEND environment variable to ocr.backends.pyocr.PyOCR to use this.
- All tests pass on Python 3.
- documentation: Add Docker installation method using a dedicated Docker network.
- documentation: Add scaling up chapter.
- documentation: Add S3 storage configuration section.
3.1.6 (2018-10-09)
- Improve index mirroring value clean up code to remove the spaces at the start and at the end of directories. Closes again GitLab issue #520. Thanks to TheOneValen for the report.
- Improve index mirroring cache class to use the hash of the keys instead of the literal keys. Avoids warnings about invalid key characters. Closes GitLab issue #518. Thanks to TheOneValen for the report.
- Only render the Template API view for authenticated users. Thanks to rgarcia for the report.
- Add icon to the cabinet “Add new level” link.
- Display the cabinet “Add new level” link in the top level view too.
3.1.5 (2018-10-08)
- Consolidate some document indexing test code into a new mixin.
- Split the code of the mountindex command to be able to add tests.
- Fix the way the children of IndexInstanceNode are accessed. Fixes GitLab issue #518. Thanks to TheOneValen @TheOneValen for the report.
- Remove newlines from the index name levels before using them as FUSE directories.
- Fixed duplicated FUSE directory removal.
- Add link and view to show the parsed content of each document page.
- Add a modelform for adding and editing transformation and perform YAML validation of arguments.
- Add stricter error checking to the crop transformation.
- Update compressed files class module to work with Python 3.
- Update document parsing app tests to work with Python 3.
- Handle office files in explicit binary mode for Python 3.
- Return a proper list of SearchModel instances (Python 3).
- Specify FUSE literals in explicit octal notation (Python 3).
- URL quote the encoded names of the staging files using Django’s compat module. (Python 3)
- Open staging file in explicit binary mode. (Python 3)
- Add separate Python 2 and Python 3 versions of the MetadataType model .comma_splitter() static method.
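  A Python 3 version of such a splitter can be sketched with the stdlib csv module (illustrative only; the project's actual implementation may differ):

  ```python
  import csv

  def comma_splitter(text):
      """Split a comma-separated string into a list of trimmed values,
      honoring quoted values that themselves contain commas."""
      reader = csv.reader([text], skipinitialspace=True)
      return [value.strip() for value in next(reader)]

  print(comma_splitter('first, second, "third, with comma"'))
  # ['first', 'second', 'third, with comma']
  ```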
- Update the metadata app tests to work on Python 3.
- Make sure metadata lookup choices are a list to be able to add the optional marker (Python 3).
- Make sure the image in the document preview view is centered when it is smaller than the viewport.
- Restore use of the .store_body variable accidentally removed in 63a77d0235ffef3cd49924ba280879313c622682. Closes GitLab issue #519. Thanks to TheOneValen @TheOneValen for the report.
- Add shared cache class and add mounted index cache invalidation when document and index instance nodes are updated or deleted.
- Fix document metadata app view error when adding multiple optional metadata types. Closes GitLab issue #521. Thanks to the TheOneValen @TheOneValen for the report.
3.1.4 (2018-10-04)
- Fix the link to the documentation. Closes GitLab issue #516. Thanks to Matthias Urlichs @smurfix for the report.
- Update related links. Add links to the new Wiki and Forum.
- Add Redis config entries in the Docker images to disable saving the database and to only provision 1 database.
- Remove use of hard coded font icon for document page rendering busy indicator.
- Disable the fancybox caption link if the document is in the trash.
- Load the DropZone CSS from the package and remove the hard-coded CSS from appearance/base.css.
- Add support for indexing on OCR content changes.
- Add support for reindexing document on content parsing changes.
- Strip HTML entities from the browser’s window title. Closes GitLab issue #517. Thanks to Daniel Carrico @daniel1113 for the report.
- Improve search app. Refactored to resolve search queries by terms first then by field.
- Add explanation to the launch workflows tool.
3.1.3 (2018-09-27)
- Make sure template API renders in non US languages.
- Fix user groups view.
- Add no results help text to the document type -> metadata type association view.
- Expose the Django INSTALLED_APPS setting.
- Add support for changing the concurrency of the Celery workers in the Docker image. Add environment variables MAYAN_WORKER_FAST_CONCURRENCY, MAYAN_WORKER_MEDIUM_CONCURRENCY and MAYAN_WORKER_SLOW_CONCURRENCY.
- Add latest translation updates.
- Fixes a few text typos.
- Documentation updates in the deployment and docker chapters.
3.1.2 (2018-09-21)
- Database access in data migrations defaults to the ‘default’ database. Force it to the user selected database instead.
- Don’t use a hardcoded database alias for the destination of the database conversion.
- Improve natural key support in the UserOptions model.
- Update from Django 1.11.11 to 1.11.15.
- Add support to the convertdb command to operate on specified apps too.
- Add test mixin to test the db conversion (dumping and loading) of a specific app.
- Add a user test mixin to group user testing.
- Add tests to the user management app for database conversion.
- Add support for natural keys to the DocumentPageImageCache model.
- Add database conversion test to the common app.
- Fix label display for resolved smart links when not using a dynamic label.
- Only show smart link resolution errors to the user with the smart link edit permission.
- Intercept document list view exception and display them as an error message.
3.1.1 (2018-09-18)
- CSS tweak to make sure the AJAX spinner stays in place.
- Fix 90, 180 and 270 degrees rotation transformations.
3.1 (2018-09-17)
- Improve database vendor migration support.
- Add convertdb management command.
- Add error checking to the crop transformation arguments.
- Update dropzone.js’ timeout from 30 seconds to 120 to allow upload of large files on slow connections.
- Increase gunicorn’s timeout from 30 seconds to 120.
- Update packages versions: Pillow:5.2.0, PyYAML:3.13, django-environ:0.4.5, django-model-utils:3.1.2, django-mptt:0.9.1, django-widget-tweaks: 1.4.2, flanker:0.9.0, flex:6.13.2, furl:1.2, gevent:1.3.5, graphviz: 0.8.4, gunicorn:19.9.0, pyocr:0.5.2, python-dateutil:2.7.3
- Remove use of django-compressor and cssmin now that the project uses Whitenoise.
- Display error when attempting to recalculate the page count of an empty document (document stub that has no document version).
- Add support for client side caching of document page images. The time the images are cached is controlled by the new setting DOCUMENTS_PAGE_IMAGE_CACHE_TIME which defaults to 31556926 seconds (1 year).
- The document quick label selection field now uses a select2 widget.
- Include the querystring when forcing a reload of a bare template view.
- Speed up document image fade in reveal.
- Use a resettable timer to ensure more document panel heights are matched.
- Rewrote Mayan’s JavaScript suite MayanApp into ECMAScript2015.
- Remove use of waitForJQuery.
- Remove code statistics from the documentation.
- Remove the pending work chapter. This is now available in the Wiki: wiki.mayan-edms.com
- Unify template title rendering.
- Add support for template subtitles.
- Make sure the on entry action of the initial state of workflows executes on document creation.
- Add new document app events: document type created and document type edited.
- Add link to document type events.
- Add new metadata app events: metadata type created, metadata type edited, metadata type to document type relationship update.
- Add link to metadata type events.
- Add support for subscribing to metadata type events.
- Add link to view the events of a tag.
- Add support for subscribing to the events of a tag.
- Add the tag events view permissions to the tag model ACL.
- Hide the title link of documents in the trash.
- Add support for document metadata events: add, edit and remove.
- Add workflow action to update the label and description of a document.
- Add COMMON_PROJECT_TITLE as a setting option to customize the title string.
- Add support for YAML configuration files.
- Add support for editing setting options and saving them using the new YAML configuration file support.
- Add new revertsettings management command.
- Add new permission to edit setting via the UI.
- Renamed setting LOCK_MANAGER_DEFAULT_BACKEND to LOCK_MANAGER_BACKEND.
- Add help texts to more setting options.
- Add ACL support for metadata types.
- Add cascade permission checks for links. Avoid allowing users to reach empty views because they don't have access to any of the view's objects.
- Apply link permission cascade checks to the message of the day, indexing and parsing, setup link.
- Add ACL support to the message of the day app.
- The index rebuild permission can now be set as part of the index ACL for each individual index.
- Add cascade permission check to the index rebuild tool link.
- The index rebuild tool now responds with the number of indexes queued to rebuild instead of a static acknowledgment.
- Add missing permission check to the document duplicate scan link.
- Add new document indexing permission. This permission allows users to view an index instance as opposed to the current permission which allows viewing an index definition on the setup menu.
- Add support to conditionally disable menus.
- Disable the Tags menu when the user doesn’t have the tag create permission or the tag view access for any tag.
- Disable the Cabinets menu when the user doesn’t have the cabinet create permission or the cabinet view permission for any cabinet.
- Update forum link in the about menu.
- Only show the settings namespace list link where it is relevant.
- Add support for the fillcolor argument to the rotate transformation.
- Sort documents by label.
- Add recently added document list view. The setting DOCUMENTS_RECENT_COUNT has been renamed to DOCUMENTS_RECENT_ACCESS_COUNT. New setting DOCUMENTS_RECENT_ADDED_COUNT added.
- Use platform-independent hashing for transformations.
- Add support to the ObjectActionMixin to report on instance action failures. Add also an error_message class property and the new ActionError exception.
- Add favorite documents per user. Adds new setting option DOCUMENTS_FAVORITE_COUNT.
- Add new class based dashboard widget. This new widget supports subclassing and is template based. All existing widgets have been converted. ACL filtering was added to the widget results.
- In addition to the document view permission, the checkout detail view permission is now needed to view the list of checked out documents.
- After queuing a chart for update, the view will now redirect to the same chart.
- The multiple document action dropdown is now sorted alphabetically.
- Improve statistics subclassing. Split class module into classes and renderers.
- Sort facet link, object, secondary and sidebar actions.
- Add support for extended templates when there are no results.
- Add help messages and useful links to several apps when there are no results available.
- Add a new column to settings showing whether they are overridden via environment variable.
- The official config filename is config.yml.
- Interpret ALLOWED_HOSTS as YAML.
- Don’t show the document types of an index instance.
- Add the tag created and tag edited events.
- Add support for blocking the changing of password for specific users.
- Add support for changing the HOME_VIEW, LOGIN_URL and LOGIN_REDIRECT_URL from the settings view.
- Instead of the document content view permission, the document type parsing setup permission is now required to view the parsing error list.
- The document type parsing setup permission can now be granted for individual document types.
- Add link to view a specific page’s OCR content.
- Remove the duplicated setting pdftotext_path from the OCR path. This is now handled by the document parsing app.
- Implement partial refresh of the main menu.
- Remove usage of pace.js. It would cause XMLHttpRequest to fall back to synchronous mode.
- Add custom AJAX spinner.
- Complete refactor of the compress archive class support. Closes GitLab issue #7.
- Add support for preserving the extension of document files when using the quick label feature. Added to the document properties edit view and the document upload view. Closes GitLab issue #360.
- Add new dashboard item to display the total page count.
- Show the document type being uploaded in the source view title.
- Setting SOURCE_SCANIMAGE_PATH is now SOURCES_SCANIMAGE_PATH.
- Refactor the staging file image generation to support background task generation, caching and cache sharing.
- New queue: sources_fast. Used for staging file generation.
- New settings: SOURCES_STAGING_FILE_CACHE_STORAGE_BACKEND and SOURCES_STAGING_FILE_CACHE_STORAGE_BACKEND_ARGUMENTS to control where and how staging file caching is done.
- Fix an edge case on the document indexing where an empty node could be left behind.
- Improve the speed of the document indexing.
- Move the matchHeight call from lazy loading to image loading. Reduces the chance of wrongly sized cards.
- Generalize the JavaScript menu rendering into an API for templates that only refresh the menu when there are changes. Closes GitLab issue #511. Thanks to Daniel Carrico @daniel1113 for the report.
- Refactor the ModelAttribute class into two separate classes: ModelAttribute for executable model attributes and ModelField for actual ORM fields.
- Expose more document fields for use in smart links.
- The size of the document type label field has been increased from 32 to 96 characters.
- Add file_size and datetime fields to the DocumentPageCachedImage model.
- Make icon classes file template based.
- Add the current step and total steps of a wizard in the template context.
- Chart updates: Show last update date and time in list view and details view. Change color scheme to match rest of project. Increase size of data points. Improve responsive settings. Redirect to the current view after queueing.
- Split document type retention policies into it own view.
3.0.3 (2018-08-17)
- Tags app: Add explicit casting of escaped tag labels to prevent exploit of cross site scripting. Thanks to Lokesh (@lokesh1095) for the report and proposed solutions. Closes GitLab issue #496.
- Tags app: Add explicit post action redirect for the tag attach and tag remove actions when working on a single document.
3.0.2 (2018-08-16)
- Docker install script: Default to verbose.
- Docker install script: Increase startup timer to 10 seconds.
- Docker install script: Allow configuring the PostgreSQL port.
- Documentation: Add deployment step that configures Redis to discard unused task data when it runs out of memory.
- Index app: Add natural key support to the Index model.
- Mailer app: Add natural key support to the mailer app.
- Cabinets: Redirect to the cabinet list view after creating a new cabinet.
- Builds: Limit the number of branches that trigger the full test suit.
- Docker install script: Detect if Docker is installed and provide help text if not.
- Sources app: Update dropzone.js’ timeout from 30 seconds to 120 to allow upload of large files on slow connections.
- Documentation: Increase gunicorn’s timeout from 30 seconds to 120.
- Documentation: Remove code statistics from the documentation.
- Documentation: Remove the pending work chapter. This is now available in the Wiki: wiki.mayan-edms.com
- Language translation synchronization.
3.0.1 (2018-07-08)
- Pin JavaScript libraries to specific versions to avoid using potentially broken updates automatically. GitLab issue #486.
- French and Polish language translation updates.
- Merge request #25. Thanks to Daniel Albert @esclear for the patch.
3.0 (2018-06)
- Require the document view permission to view trashed documents.
- Make the multi object form perform an auto submit when the value is changed.
- Improved styling and interaction of the multiple object action form.
- Add checkbox to allow selecting all items in the item list view.
- Revise and improve permission requirements for the documents app API.
- Downloading a document version now requires the document download permission instead of just the document view permission.
- Creating a new document no longer works by having the document create permission in a global manner. It is now possible to create a document via the API by having the document create permission for a specific document type.
- Viewing the version list of a document now requires the document version view permission instead of the document view permission.
- Not having the document version view permission for a document will not return a 403 error. Instead a blank response will be returned.
- Reverting a document via API will now require the document version revert permission instead of the document edit permission.
- Fix carousel item height issues.
- Add the “to=” keyword argument to all ForeignKey, ManyToMany and OneToOne fields.
- Add Makefile target to check the format of the README.rst file.
- Mark the feature to detect and fix the orientation of PDFs as experimental.
- Don’t show documents with 0 duplicates in the duplicated document list.
- Clean up the duplicated document model after a document is deleted.
- Add support for roles ACLs.
- Add support for users ACLs.
- Add support for groups ACLs.
- Sort permission namespaces and permissions in the role permission views.
- Invert the columns in the ACL detail view.
- Fix issue #454. Thanks to Andrei Korostelev @kindkaktus for the issue and the solution.
- Update the role permission edit view require the permission grant or permission revoke permissions for the selected role.
- Only show the new document link if the user has access to create documents of at least one document type. GitLab Issue #302. Thanks to kg @kgraves.
- Support passing arguments to the document, document cache and document signatures storage backends. New settings: DOCUMENTS_STORAGE_BACKEND_ARGUMENTS, DOCUMENTS_CACHE_STORAGE_BACKEND_ARGUMENTS, SIGNATURES_STORAGE_BACKEND_ARGUMENTS.
- Remove the setting STORAGE_FILESTORAGE_LOCATION. The document storage location for the storage.backend.filebasedstorage.FileBasedStorage backend must now be passed via DOCUMENTS_STORAGE_BACKEND_ARGUMENTS, DOCUMENTS_CACHE_STORAGE_BACKEND_ARGUMENTS, or SIGNATURES_STORAGE_BACKEND_ARGUMENTS if the backend is used for documents, the document image cache and/or document signatures. Use DOCUMENTS_STORAGE_BACKEND_ARGUMENTS = ‘{ location: <specific_path> }’. If no path is specified the backend will default to ‘mayan/media/document_storage’.
- Standardize the way storages are used. All apps that use storage now define their storages in the .storages modules instead of the .runtime module. The storage.backends.filebasedstorage.FileBasedStorage has been removed; instead Django’s default storage is used and each app is responsible for specifying its default path.
- Unify checkbox selection code for list items and table items.
- Add smart checkbox manager.
- Update Chart.js version.
- Improve line chart appearance. Fix mouse hover label issue.
- Add JavaScript dependency manager.
- Add support for passing arguments to the OCR backend.
- Fix issue when using workflow transitions with the new version upload event as trigger. Thanks to Sema @Miggaten for the find and the solution.
- Remove running workflow instances in documents of a specific type if that document type is removed from the workflow.
- Make error messages persistent and increase the timeout of warning to 10 seconds.
- Improve rendering of the details form.
- Update rendering of the readonly multiselect widget to conform to Django’s updated field class interface.
- Add warning when using SQLite as the database backend.
- Use Mailgun’s flanker library to process the email sources.
- Add locking for interval sources. This reduces the chance of repeated documents from long running email downloads.
- Add the option to enable or disable parsing when uploading a document for each document type.
- Add a new setting option to enable automatic parsing for each new document type created.
- Add support for HTML bodies to the user mailers.
- Production ALLOWED_HOSTS settings now defaults to a safer [‘127.0.0.1’, ‘localhost’, ‘[::1]’]
- Capture menu resolution errors on invalid URLs. Closes GitLab issue #420.
- New environment variables: MAYAN_SECRET_KEY, MAYAN_CELERY_ALWAYS_EAGER, MAYAN_CELERY_RESULT_BACKEND, MAYAN_BROKER_URL, MAYAN_DATABASE_ENGINE, MAYAN_DATABASE_CONN_MAX_AGE, MAYAN_DATABASE_NAME, MAYAN_DATABASE_USER, MAYAN_DATABASE_PASSWORD, MAYAN_DATABASE_HOST, MAYAN_DATABASE_PORT, MAYAN_DEBUG.
- Stricter defaults. CELERY_ALWAYS_EAGER to False, ALLOWED_HOSTS to [‘127.0.0.1’, ‘localhost’, ‘[::1]’].
- New initialization command. Creates media/system and populates the SECRET_KEY and VERSION files.
- Sane scanner source paper source now defaults to blank.
- Merge Docker image creation back into the main repository.
- Docker image now uses gunicorn and whitenoise instead of NGINX to server the app and the static media.
- All installation artifacts are now created and read from the media folder.
- Debian is now the Linux distribution used for the Docker image.
- Most Docker Celery workers now execute using a lower OS priority number.
- Add COMMON_PRODUCTION_ERROR_LOGGING setting to control the logging of errors in production. Defaults to False.
- Change the error log file handler class to RotatingFileHandler to avoid an indefinitely growing log file.
- Disable embedded signature verification during the perform upgrade command.
- Replace the DOCUMENTS_LANGUAGE_CHOICES setting option. Replaced with the new DOCUMENTS_LANGUAGE_CODES.
- Fix error when trying to upload a document from an email account with ‘from’ and ‘subject’ metadata.
- Fix typo on message.header get from ‘Suject’ to ‘Subject’.
- On multi part emails keep the original From and Subject properties for all subsequent parts if the sub parts don’t specify them. Fixes issue #481. Thanks to Robert Schöftner @robert.schoeftner for the report and debug information.
- Don’t provide a default for the scanner source adf_mode. Some scanners throw an error even when the selection is supported.
- Add a “Quick Download” action to reduce the number of steps to download a single document. GitLab issue #338.
- Recalculate a document’s indexes when attaching or removing a tag from or to it.
- Recalculate all of a tag’s documents when a tag is about to be deleted.
Read part 1, part 2 and part 3 in case you have not gone through them already. Let’s continue the idea …
For Web … Smaller and Lesser is Better
In the previous posts, we reached a point where we included the required JavaScript libraries in the project using Bower. All these files sit in their respective folders under bower_components, but they are not yet referenced in the page and are not served to the client with the HTML page. In this post we will focus on delivering these files optimally to the client using the ASP.NET Web Optimization bundles.
System.Web.Optimization contains a class named BundleCollection, to which we can add script and style bundles containing the files we would like to bundle together and serve in a single request, compressed and obfuscated. When I work with ASP.NET, I use this technique to minify and uglify the styles and scripts on the fly, as compared to doing that in advance with separate compressors and obfuscators. This gives me control to, for example, dynamically include script files and conditionally compress them: compress in production but not in the development environment.
Get Them all Squeezed …
First of all we need to add this optimization library as a NuGet package (yes, you read that right: NuGet is the right choice in this scenario for maintaining the .NET library packages). Open the NuGet Manager for the project and search for “Optimization” in the Browse tab. Select and install the “Microsoft.AspNet.Web.Optimization” package from there as shown in the image below. It may install a few required dependencies along with it, which you should allow.
In the App_Start folder, create a new class named BundleConfig.cs and add the code below to it:
using System.Web.Optimization;

namespace SchoolWorld.Web.App_Start
{
    public partial class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            // bundle scripts
            bundles.Add(new ScriptBundle("~/js").Include(
                "~/bower_components/jquery/dist/jquery.js",
                "~/bower_components/angular/angular.js",
                "~/bower_components/angular-ui-router/release/angular-ui-router.js",
                "~/bower_components/bootstrap/dist/js/bootstrap.js"));

            // bundle styles
            bundles.Add(new StyleBundle("~/css").Include(
                "~/bower_components/bootstrap/dist/css/bootstrap.css",
                "~/Content/Site.css"));
        }
    }
}
The code above is self-explanatory, but for clarity: we are creating two bundles, one for scripts and another for styles. When referenced in the view, these bundles will render links to the compressed or uncompressed files.
Bundle the Bundles with MVC View …
In the step above we created the bundles; now we have to register them with the request handler and include them in _Layout.cshtml. Open Global.asax and replace its contents with the code below:
using SchoolWorld.Web.App_Start;
using System.Web.Mvc;
using System.Web.Optimization;
using System.Web.Routing;

namespace AngularJSwithMVC
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RouteConfig.RegisterRoutes(RouteTable.Routes);
            BundleConfig.RegisterBundles(BundleTable.Bundles);
        }
    }
}
Open the web.config under the Views folder (not the one in the project root folder) and add the namespace “System.Web.Optimization” under the namespaces section:
<namespaces>
  <add namespace="System.Web.Mvc" />
  <add namespace="System.Web.Mvc.Ajax" />
  <add namespace="System.Web.Mvc.Html" />
  <add namespace="System.Web.Routing" />
  <add namespace="System.Web.Optimization" />
  <add namespace="AngularJSwithMVC" />
</namespaces>
Open _Layout.cshtml and replace its code with the one shown in the snippet below:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>My AngularJS App</title>
    @Styles.Render("~/css")
</head>
<body layout="column">
    @RenderBody()
    @Scripts.Render(new string[] { "~/js" })
</body>
</html>
Run the application and inspect the source using your browser’s development tools; you should see output similar to this:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>My AngularJS App</title>
    <link href="/bower_components/bootstrap/dist/css/bootstrap.css" rel="stylesheet"/>
    <link href="/Content/Site.css" rel="stylesheet"/>
</head>
<body>
    Hello World!
    <script src="/bower_components/jquery/dist/jquery.js"></script>
    <script src="/bower_components/angular/angular.js"></script>
    <script src="/bower_components/angular-ui-router/release/angular-ui-router.js"></script>
    <script src="/bower_components/bootstrap/dist/js/bootstrap.js"></script>
</body>
</html>
Where is the Compression?
You can see in the above output that all the files we included in the bundles are rendered as-is, so where is the compression? Remember, I told you earlier that the files can be compressed conditionally, and here the default condition is debugging mode. Edit the main web.config file and disable debugging by setting the compilation tag’s debug attribute to false under the system.web section. (Alternatively, setting BundleTable.EnableOptimizations = true in RegisterBundles forces bundling and minification regardless of the debug flag.) If you run the application again now, you will see what we are trying to achieve.
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>My AngularJS App</title>
    <link href="/css?v=HWyqqv78-MmGFJmETyURBTAYJwiQc_rP-elFPK5iVQU1" rel="stylesheet"/>
</head>
<body>
    Hello World!
    <script src="/js?v=LNUvXDJjMnfrdY07Ho_exMpbrd7-3Pn1CNPHdzuuDy41"></script>
</body>
</html>
Both bundles are replaced with a single link each, and if you inspect the response of these links in the network tab of your browser’s development tools, you will see that the contents are compressed and obfuscated. Loving it … are you?
We have started seeing it, haven’t we? The mixture of both worlds is coming together with a pleasant color. I will conclude this post here and leave you to play with this project a little more. In case you are not getting the hang of it, I am here, through your questions, and I will try to answer them. Remember, your comments are a great help in directing the right path for the upcoming tutorials and posts. Keep a watch and let me know what you want to learn more about. I will add the links of the remaining parts of this post below for your reference and navigation. Till next time … Cheers!
Series Links
Part 1 Part 2 Part 3 Part 5 Part 6 Part 7 Part 8 Part 9 Part 10 Part 11 | https://www.mindzgrouptech.net/2017/01/29/angularjs-with-asp-net-mvc-part-4/ | CC-MAIN-2017-13 | refinedweb | 1,026 | 51.14 |
Visual Basic .NET 2002
Code Syntax
Compiler
Controls
Documentation
Performance
Upgrade
How-to articles - Visual Basic .NET 2002
How to create a file-compare function in Visual Basic .NET or in Visual Basic 2005
Describes how to compare two files to see if their contents are the same. This comparison looks at the contents of the two files, not at the file names, locations, dates, times, or other attributes.
HOW TO: Perform a Distributed Transaction with a .NET Provider by Using ServicedComponent in Visual Basic .NET
This step-by-step article demonstrates how to perform a distributed transaction by using a .NET provider with the ServicedComponent class. Although this article uses the SqlClient .NET provider against a Microsoft SQL Server server, you can also use...
How to read XML from a file by using Visual Basic .NET
This article demonstrates how to use the XmlTextReader class to read Extensible Markup Language (XML) from a file. XmlTextReader provides direct parsing and tokenizing of XML and implements the XML 1.0 specification as well as the namespaces in XML...
Download details - Visual Basic .NET 2002
FIX: A Build Solution That Has a Large Solution May Cause the Build Process to Stop Responding
When you try to build a large solution that contains several Visual Basic .NET projects, the build process may stop responding (hang). A supported hotfix is now available from...
Buffer overrun in JPEG processing (GDI+) could allow code execution in Visual Basic, Visual C#, Visual C++, Visual J# .NET, and the .NET Framework
Resolves a buffer overrun in Visual Basic, Visual C#, Visual C++, Visual J# .NET, and the .NET Framework.
FIX: Visual Studio .NET quits unexpectedly when you build a large Visual Basic .NET solution that includes many classes
Fixes a problem in Visual Studio .NET 2002 that you fail to build a large project that contains many classes and receive no error messages.
Troubleshooting - Visual Basic .NET 2002
BUG: The Visual Basic .NET or Visual Basic 2005 Upgrade Wizard reports an incorrect warning message for user-defined data types
Explains that you receive a warning message when you use the Visual Basic .NET or Visual Basic 2005 Upgrade Wizard on a Visual Basic 6.0 project with direct user-defined data type assignment between variables. Provides methods to resolve this...
Error when you try to call the Prepare method before you add parameters: "An unhandled exception of type 'System.Data.SqlClient.SqlException' occurred in system.data.dll"
Explains that you receive an unhandled exception error in SQL Server 7.0 when you try to call the Prepare method before you add parameters.
InvalidCastException exception when you reference the value of a DataColumn that is NULL
Describes that when you reference the value of a DataColumn that is NULL you may receive an InvalidCastException exception. Provides a resolution.
ImageModel QML Type
The ImageModel type provides a model of place images. More...
Properties
- batchSize : int
- place : Place
- totalCount : int
Detailed Description
The ImageModel is a read-only model used to fetch images related to a Place. Binding a Place via ImageModel::place initiates an initial fetch of images. The model performs fetches incrementally and is intended to be used in conjunction with a View such as a ListView. When the View reaches the last of the images currently in the model, a fetch is performed to retrieve more if they are available. The View is automatically updated as the images are received. The number of images which are fetched at a time is specified by the batchSize property. The total number of images available can be accessed via the totalCount property.
The model returns data for the following roles:
Example
The following example shows how to display images for a place:
import QtQuick 2.0
import QtPositioning 5.5
import QtLocation 5.6

ImageModel {
    id: imageModel
    batchSize: 3
    place: place
}

ListView {
    id: listView
    anchors.top: parent.top
    width: parent.width
    spacing: 10
    model: imageModel
    orientation: ListView.Horizontal
    snapMode: ListView.SnapOneItem

    delegate: Item {
        width: listView.width
        height: listView.height

        Image {
            anchors.fill: parent
            source: url
            fillMode: Image.PreserveAspectFit
        }

        Text {
            text: supplier.name + "\n" + supplier.url
            width: parent.width
            anchors.bottom: parent.bottom
        }
    }
}
Property Documentation
This property holds the batch size to use when fetching more image items.
This property holds the Place that the images are for.
This property holds the total number of image items for the place.
Threads and channels¶
A thread is the basic execution entity. A scheduler controls the execution of threads.
A simple thread that waits to be resumed by another thread.
#include "simba.h" void *my_thread_main(void *arg_p) { UNUSED(arg_p); while (1) { thrd_suspend(NULL); std_printf(FSTR("Thread resumed.\r\n")); } return (NULL); }
Threads usually communicate over channels. There are two kinds of channels: queue and event. Both implement the same abstract channel interface (see src/kernel/chan.h). This abstraction makes channels very powerful as synchronization primitives. They can be seen as limited-functionality file descriptors in Linux.
The most common channel is the queue. It can be either synchronous or semi-asynchronous. In the synchronous version the writing thread blocks until all written data has been read by the reader. In the semi-asynchronous version the writer writes to a buffer within the queue, and only blocks if all the data does not fit in the buffer. The buffer size is selected by the application.
Introduction
A stack is a linear data structure in which a data item is inserted and deleted at one end. A stack is called a Last In First Out (LIFO) structure, because the data item inserted last is the data item deleted first from the stack.
Stacks are most used in system software such as compilers, operating systems, etc. In fact, many compilers store the local variables of a function on the stack.
Writing a value to the stack is called a 'Push' operation, whereas reading a value from it is called a 'Pop' operation. A pop operation on a stack is destructive; that is, once an item is popped from the stack, it is no longer available.
#include <stdio.h>

#define STACKSIZE 5

void push(int);
int pop(void);

int s[STACKSIZE];
int top = -1;

int main(void)
{
    int k;

    push(33);
    push(44);
    push(77);

    k = pop();
    printf("\n popped element : %d", k);  /* prints 77 */
    k = pop();
    printf("\n popped element : %d", k);  /* prints 44 */

    return 0;
}

void push(int ele)
{
    if (top == (STACKSIZE - 1)) {
        printf("\n Stack overflow");
        return;
    }
    s[++top] = ele;
}

int pop(void)
{
    if (top == -1) {
        printf("\n Stack underflow");
        return -1;
    }
    return s[top--];
}
Text Attributes class.
This class is used (in general by secondary inheritance) by many other classes (graphics, histograms). It holds all the text attributes.
Text attributes are: the text alignment, the text angle, the text color, the text size, and the text font and precision.
The text alignment is an integer number (align) allowing one to control the horizontal and vertical position of the text string with respect to the text position. The text alignment of any class inheriting from TAttText can be changed using the method SetTextAlign and retrieved using the method GetTextAlign.
For horizontal alignment the following convention applies: 1 = left adjusted, 2 = centered, 3 = right adjusted. For vertical alignment the following convention applies: 1 = bottom adjusted, 2 = centered, 3 = top adjusted.
For example, align = 11 is left adjusted and bottom adjusted, while align = 32 is right adjusted and vertically centered.
Mnemonic constants are available: kHAlignLeft (10), kHAlignCenter (20) and kHAlignRight (30) for horizontal alignment, and kVAlignBottom (1), kVAlignCenter (2) and kVAlignTop (3) for vertical alignment. They allow one to write, for instance: me->SetTextAlign(kHAlignLeft+kVAlignTop);
Text angle in degrees. The text angle of any class inheriting from TAttText can be changed using the method SetTextAngle and retrieved using the method GetTextAngle. The following picture shows the text angle:
The text color is a color index (integer) pointing in the ROOT color table. The text color of any class inheriting from TAttText can be changed using the method SetTextColor and retrieved using the method GetTextColor. The following table shows the first 50 default colors. The method SetTextColorAlpha() allows one to set a transparent color. In the following example the text color of the text object text is set to blue with an opacity of 35%: text->SetTextColorAlpha(kBlue, 0.35);
If the text precision (see next paragraph) is smaller than 3, the text size (textsize) is a fraction of the current pad size. Therefore the same textsize value can generate text outputs with different absolute sizes in two different pads. The text size in pixels (charheight) is computed the following way:

   pad_width  = gPad->XtoPixel(gPad->GetX2());
   pad_height = gPad->YtoPixel(gPad->GetY1());
   if (pad_width < pad_height) charheight = textsize*pad_width;
   else                        charheight = textsize*pad_height;
If the text precision is equal to 3, the text size doesn't depend on the pad's dimensions. A given
textsize value always generates the same absolute size. The text size (
charheight) is given in pixels:
Note that to scale fonts to the same size as the old True Type package a scale factor of
0.93376068 is apply to the text size before drawing.
The text size of any class inheriting from
TAttText can be changed using the method
SetTextSize and retrieved using the method
GetTextSize.
The text font code is combination of the font number and the precision.
Font numbers must be between 1 and 14.
The precision can be:
precision = 0fast hardware fonts (steps in the size)
precision = 1scalable and rotatable hardware fonts (see below)
precision = 2scalable and rotatable hardware fonts
precision = 3scalable and rotatable hardware fonts. Text size is given in pixels.
The text font and precision of any class inheriting from
TAttText can be changed using the method
SetTextFont and retrieved using the method
GetTextFont. behaviour depending if the True Type Fonts (TTF) are used or not. If TTF are used, you always get very good quality scalable and rotatable fonts. However TTF are slow.
One can activate the TTF by adding (or activating) the following line in the
.rootrc file:
It is possible to check the TTF are in use in a Root session with the command:
If the TTF are in use the following line will appear at the beginning of the printout given by this command:
The following picture shows how each font looks. The number on the left is the "text font code". In this picture precision 2 was selected.
Definition at line 18 of file TAttText.h.
#include <TAttText.h>
AttText default constructor.
Default text attributes are taken from the current style.
Definition at line 254 of file TAttText.cxx.
AttText normal constructor.
Text attributes are taken from the argument list.
Definition at line 272 of file TAttText.cxx.
AttText destructor.
Definition at line 284 of file TAttText.cxx.
Copy this text attributes to a new TAttText.
Definition at line 291 of file TAttText.cxx.
Change current text attributes if necessary.
Definition at line 303 of file TAttText.cxx.
Reset this text attributes to default values.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 332 of file TAttText.cxx.
Save text attributes as C++ statement(s) on output stream out.
Definition at line 344 of file TAttText.cxx.
Invoke the DialogCanvas Text attributes.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 372 of file TAttText.cxx.
Set a transparent marker color.
talpha defines the percentage of the color opacity from 0. (fully transparent) to 1. (fully opaque).
Definition at line 382 of file TAttText.cxx.
Set the text size in pixels.
If the font precision is greater than 2, the text size is set to npixels, otherwise the text size is computed as a percent of the pad size.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 393 of file TAttText.cxx.
Text alignment.
Definition at line 29 of file TAttText.h.
Text angle.
Definition at line 27 of file TAttText.h.
Text color.
Definition at line 30 of file TAttText.h.
Text font.
Definition at line 31 of file TAttText.h.
Text size.
Definition at line 28 of file TAttText.h. | https://root.cern/doc/master/classTAttText.html | CC-MAIN-2020-29 | refinedweb | 793 | 67.65 |
Tutorial
Getting Started with NestJS
If you’ve worked on a Node.js application, you may have noticed that it became more difficult to maintain over time. The more you add new features to the application, the larger the codebase becomes.
Nest.js is a server-side Node.js framework for building efficient, reliable and scalable applications. It provides backend applications a modular structure for organizing code into separate modules. It was built to eliminate disorganized codebases.
Heavily inspired by Angular, Nest.js was built with TypeScript and uses Express.js under the hood, which makes it compatible with the majority of Express middleware.
In this post, you’ll create a small RESTful API that enables users to fetch, create and delete books in a bookstore.
Prerequisites
To complete this tutorial, you will need:
- A local development environment for Node.js. Follow How to Install Node.js and Create a Local Development Environment
Understanding the Building blocks of Nest.js
The following are the building blocks used when building Nest.js applications:
- Controllers
- Providers
- Modules
We’ll start by looking at controllers. Like most web frameworks, controllers in Nest.js are responsible for handling any incoming requests and returning responses to the client side of the application. For example, if you make an API call to a particular endpoint, say
/home, the controller will receive this request and based on the available resources, it will returned the appropriate response.
Nest.js was structured in a way that the routing mechanism is able to control which controller will be responsible for handling a particular request.
To define a controller in Nest.js, create a TypeScript file and include a decorator
@Controller() as shown in the following code snippet:
import { Controller, Get } from '@nestjs/common'; @Controller('users') export class UsersController { @Get() findAll() { return 'This will return all the users'; } }
The prefix of
users within the Controller decorator will prompt the
UsersController to handle any
/users GET request within an application and return the appropriate response as specified. Other HTTP request handled by the controller includes
PUT,
DELETE as we will see later in the tutorial.
Once a controller is created, it needs to be added to the module definition before Nest.js can easily recognise it. This could be the root
ApplicationModule or any other module created within the application. More about this in the module section of this post.
Now let’s look at providers.
As mentioned earlier, Nest.js was heavily inspired by Angular and similar to an Angular application, you can create a provider and inject it into controllers or other providers. These providers are also called services, and they’re designed to abstract any form of complexity and logic.
A service provider in Nest.js is a JavaScript class with a special
@Injectable() decorator at the top. For example, you can create a service to fetch users:
import { Injectable } from '@nestjs/common'; import { User } from './interfaces/user.interface'; @Injectable() export class UsersService { private readonly users: User[] = []; create(user: User) { this.users.push(user); } findAll(): User[] { return this.users; } }
The provider created above is a class with two methods
create() and
findAll(), which can be used to create and return all users respectively. And to easily help with type checking an interface was used to specify the type of elements that should be received by the methods.
Finally, let’s look at Modules. Modules let you group related files. They are Typescript files decorated with
@Module decorator. This attached decorator provides metadata that Nest makes use of to organize the application structure.
Each Nest.js application must have at least one module, usually referred to as the root module. This root module is the top-level module and usually enough for a small application. It is advisable to break a large application into multiple modules as it helps to maintain the structure of the application.
If you have an application that manages a lot of data or functionality about users , you can group the controller, services, and other related files into a single module, like
UsersModule:
import { Module } from '@nestjs/common'; import { UsersController } from './users.controller.ts'; import { UsersService } from './users.service.ts'; @Module({ controllers: [UsersController], providers: [UsersService] }) export class UsersModule {}
In this example, we are exported a
UsersModule that contains both the
UsersController and
UsersService. With this in place, we can then proceed to import and use the
UsersModule within the root module of the application as shown in the following code snippet:
... import { UsersModule } from './users/users.module'; @Module({ ... }) export class AppModule { }
There are a few other important concepts in Nest.js:
- DTO: Data transfer object is an object that defines how data will be sent over the network.
- Interfaces: TypeScript interfaces are used for type-checking and defining the types of data that can be passed to a controller or a Nest service.
- Dependency injection: Dependency injection is a design pattern used to increase efficiency and modularity of applications. It is often used by the biggest frameworks to keep code clean and easier to use. Nest.js also makes use of it to basically create coupled components.
With this pattern, it is very easy to manage dependencies between building blocks like controllers, providers and modules. The only thing required is to define the dependency for example a
UsersService() in the constructor of a controller as shown here:
... @Controller('users') export class UsersController { constructor(private readonly usersService: UsersService){} ... }
With some of these concepts briefly covered, you can now proceed to the next section, where you will put all the knowledge gained so far in this post into use as you will learn how to seamlessly build a RESTful API using Nest.js.
As stated earlier in this post, you will create a sample application that will help you get a good grasp on some of the core concepts of Nest.js.
This application will be specifically for a bookstore. At the end of the post you would have created a micro-service that will enable users to create and add a new book with few descriptions to an existing list of books. This could be from a database, but to ensure simplicity in this post, we won’t really be connecting our application to a database yet. But instead, we will make use of a mock data of books and once a new book is created, we will push and add it to the list.
Step 1 – Installing Nest.js
In order to scaffold a new Nest.js application, you will need to globally install the Nest CLI application. It is a command-line tool specifically created to craft a new Nest.js app and provide access to several commands to generate different files and produce a well-structured application.
Apart from using the CLI tool, you can also install a new Nest.js application by cloning the starter project from GitHub using Git, but for the purpose of this tutorial run the following command to install the Nest CLI:
- npm install -g @nestjs/cli
This will give you access to the
nest command for project installation and other project specific commands.
Next, run the following command to install a new project named
bookstore-nest within your development folder:
- nest new bookstore-nest
You will be asked a few questions during the installation, just follow the prompt and respond accordingly. Next, once the installation is complete, switch your working directory into the newly created project:
- cd bookstore-nest
Start the application with:
- npm run start
You can also run the followingcommand in order to use Nodemon for the project:
// start the application using nodemon npm run start:dev
Navigate to in your browser and you will see the Hello World! message as shown in the following image:
With the project started, let’s create the root module.
Step 2 – Generating a Module
Let’s generate a module for the bookstore. To do this, you will leverage the Nest CLI’s file generator. Run the following command to scaffold a new module for the application:
- nest generate module books
This creates a new folder named
books within the
src folder. Within the
books folder you will find a
books.module.ts file:
import { Module } from '@nestjs/common'; @Module({}) export class BooksModule {}
This was generated by the command and the module has also been added to the
app.module.ts which happens to be the root module of the application.
Next, you will create routes for the endpoints
Step 3 – Creating Routes and Controllers
As mentioned earlier, routes exist in controllers, so you need to create controllers that will handle individual endpoints. Again, use Nest CLI to generate your controllers, run the following command:
- nest generate controller books
This creates a controller inside the
books folder.
Since we won’t be connecting to the database for now, create a sample mock data for the bookstore. Under the
src folder, create a subfolder named
mocks and within the newly created folder, create a new TypeScript file named
books.mock.ts and add the following code in it:
export const BOOKS = [ { id: 1, title: 'First book', description: "This is the description for the first book", author: 'Olususi Oluyemi' }, { id: 2, title: 'Second book', description: "This is the description for the second book", author: 'John Barry' }, { id: 3, title: 'Third book', description: "This is the description for the third book", author: 'Clement Wilfred' }, { id: 4, title: 'Fourth book', description: "This is the description for the fourth book", author: 'Christian nwamba' }, { id: 5, title: 'Fifth book', description: "This is the description for the fifth book", author: 'Chris anderson' }, { id: 6, title: 'Sixth book', description: "This is the description for the sixth book", author: 'Olususi Oluyemi' }, ];
Next, you will create a service to hold all the logic for the bookstore.
Step 4 – Setting up a Service
Run the following command to generate a service:
nest generate service books
This command will create a new file named
books.service.ts within
./src/books folder.
Next, open the newly created file and paste the following:
import { Injectable, HttpException } from '@nestjs/common'; import { BOOKS } from '../mocks/books.mock'; @Injectable() export class BooksService { books = BOOKS; getBooks(): Promise<any> { return new Promise(resolve => { resolve(this.books); }); } getBook(bookID): Promise<any> { let id = Number(bookID); return new Promise(resolve => { const book = this.books.find(book => book.id === id); if (!book) { throw new HttpException('Book does not exist!', 404); } resolve(book); }); } }
First, you imported the requires modules from Nest.js and also
BOOKS from the mock data you created earlier.
Next, you created two different methods named
getBooks() and
getBook() to retrieve the list of books from the mock data and to fetch just one book using the
bookID as a parameter.
Next, add the following method to the
/src/books/books.service.ts immediately after the
getBook() method:
import { Injectable, HttpException } from '@nestjs/common'; import { BOOKS } from '../mocks/books.mock'; @Injectable() export class BooksService { books = BOOKS; ... addBook(book): Promise<any> { return new Promise(resolve => { this.books.push(book); resolve(this.books); }); } }
The method above will be used to push a new book to the existing list
Finally, add the last method to delete a particular book using the
bookID as a parameter:
import { Injectable, HttpException } from '@nestjs/common'; import { BOOKS } from '../mocks/books.mock'; @Injectable() export class BooksService { books = BOOKS; ... deleteBook(bookID): Promise<any> { let id = Number(bookID); return new Promise(resolve => { let index = this.books.findIndex(book => book.id === id); if (index === -1) { throw new HttpException('Book does not exist!', 404); } this.books.splice(1, index); resolve(this.books); }); } }
Step 5 – Injecting the Service into the Controller
Here, you will use dependency injection design pattern to pass the
BooksService into the
BooksController through a constructor. Open the
BooksController created earlier and paste the following code in it:
import { Controller, Get, Param, Post, Body, Query, Delete } from '@nestjs/common'; import { BooksService } from './books.service'; import { CreateBookDTO } from './dto/create-book.dto'; @Controller('books') export class BooksController { constructor(private booksService: BooksService) { } @Get() async getBooks() { const books = await this.booksService.getBooks(); return books; } @Get(':bookID') async getBook(@Param('bookID') bookID) { const book = await this.booksService.getBook(bookID); return book; } @Post() async addBook(@Body() createBookDTO: CreateBookDTO) { const book = await this.booksService.addBook(createBookDTO); return book; } @Delete() async deleteBook(@Query() query) { const books = await this.booksService.deleteBook(query.bookID); return books; } }
First, the important modules were imported from
@nestjs/common and you also import both the
BooksService and
CreateBookDTO respectively. CreateBookDTO is a data transfer object, a TypeScript class created for type-checking and to define the structures of what an object looks like when creating a new book. We will create this DTO in a bit.
Next, you used
constructor to inject the
BooksService into the controller and created four different methods which are:
getBooks(): Used to fetch the list of all books. It has
@Get()decorator attached to it. This helps to map any
GETrequest sent to /books to this controller.
getBook(): Used to retrieve the details of a particular book by passing the
bookIDas a parameter.
addBook(): Used to create and post a new book to the existing book list. And because we are not persisting into the database, the newly added book will only be held in memory.
deleteBook(): Used to delete a book by passing the
bookIDas a query parameter.
Each of the methods has a special decorator attached to it, which makes it very easy to route each HTTP request to a specific method within the controller.
Step 6 – Defining The DTO
In the previous section, you made use of a data transfer object called
CreateBookDTO. To create it, navigate to the
./src/books folder and create a new subfolder name
dto. Next, within the newly created folder, create another file and call it
create-book.dto.ts and paste the following in it:
export class CreateBookDTO { readonly id: number; readonly title: string; readonly description: string; readonly author: string; }
You are almost done with the application. Navigate back to the
./src/books/books.module.ts file you created earlier and update it with the following code:
import { Module } from '@nestjs/common'; import { BooksController } from './books.controller'; import { BooksService } from './books.service'; @Module({ controllers: [BooksController], providers: [BooksService] }) export class BooksModule {}
Start the application again if it is not running at the moment with:
- npm run start
Then use postman to test the API
Create some new books:
Get a book using an ID:
And delete a book:
Conclusion
In this tutorial you took a quick look at the fundamentals and basic building blocks of Nest.js and then built a RESTful API.
You will find the complete source code of this tutorial here on GitHub. | https://www.digitalocean.com/community/tutorials/getting-started-with-nestjs | CC-MAIN-2021-31 | refinedweb | 2,436 | 54.12 |
Mike Smith
Ranch Hand
since Sep 23, 2005
Recent posts by Mike Smith
The JQuery Object
Thanks Bear, this helps greatly!
9 years ago
HTML Pages with CSS and JavaScript
The JQuery Object
I have been working with JQuery for some time now and I still feel like there is a gap in my understanding.
Does the JQuery Object follow the Decorator Design Pattern or the Singleton Design Pattern, or is it a combination of the two?
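To make the question concrete, here is a tiny plain-JS wrapper showing the kind of decorator-ish behaviour I mean. This is purely hypothetical, not real jQuery internals: every call builds a fresh wrapper around the elements rather than handing back one shared instance.

```javascript
// Hypothetical mini-wrapper, only to illustrate the pattern (NOT real jQuery):
function Wrapper(elements) {
    this.elements = elements;
    this.length = elements.length;
}
Wrapper.prototype.mark = function (tag) {
    this.elements.forEach(function (el) { el.tags.push(tag); });
    return this;  // returning the wrapper enables jQuery-style chaining
};
function $mini(elements) {
    return new Wrapper(elements);  // a fresh wrapper per call, no singleton
}

var els = [{ tags: [] }, { tags: [] }];
var a = $mini(els);
var b = $mini(els);
console.log(a === b);        // false: two distinct wrapper objects
a.mark("x").mark("y");       // chaining works because mark() returns this
console.log(els[0].tags);
```

So at least in this sketch the behaviour looks decorator-like (a wrapper around the underlying elements) rather than singleton-like.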
Does anyone have a best practice for managing JavaScript files that they wish to share? I often find it difficult to stay organized when working with JavaScript files; files become disorganized quickly. I am looking for tools that organize files much the same way the Dojo Toolkit does. Any recommendations?
Thanks
Michael
show more
9 years ago
HTML Pages with CSS and JavaScript
jquery.validate and tabber.js problems validating
Hi.
Background:
I would like to have a message box at the top of the form that keeps track of the number of missing fields. The jquery.validate plugin works fine only for the first tab; the 2nd tab is not doing what I wish. Basically all I want to do is the following:
- have a tabbed form that is validated
- if the form fields are missing I wish to have the number of missing fields displayed in the message box.
- 2nd, highlight the field or have the text "missing" displayed next to it
- if text is entered into the field or if the check box is checked or radio button selected then update that field and remove the error message
Code snippet:
$(function () {
  $('#tabber > div').each(function (i) {
    if ($('.error', this).length > 0)
      $(this).tabs('activate', i);
  });
});

/* validate questionnaire form */
$(function () {
  $("#questionnaire_form").validate({
    rules: {
      /* page 1 */
      q1: "required",
      q2: "required",
      q3_carbon_storage: "required",
      q3_flood_control: "required",
      q3_groundwater_supply: "required",
      q3_water_quality: "required",
      q3_hunting_fishing_or_trapping: "required",
      q3_soil_erosion_prevention: "required",
      q3_wildlife_habitat: "required",
      q3_biodiversity: "required",
      q3_cultural_or_spiritual: "required",
      q3_tourism_and_recreation: "required",
      q3_commercial_use: "required",
      q3_education_and_research: "required",
      q3_aesthetics: "required",
      q3_other: "required",
      /* page 2 */
      q4_carbon_storage: "required",
      q4_flood_control: "required",
      q4_groundwater_supply: "required",
      q4_water_quality: "required",
      q4_hunting_fishing_or_trapping: "required",
      q4_soil_erosion_prevention: "required",
      q4_wildlife_habitat: "required",
      q4_biodiversity: "required",
      q4_cultural_or_spiritual: "required",
      q4_tourism_and_recreation: "required",
      q4_commercial_use: "required",
      q4_education_and_research: "required",
      q4_aesthetics: "required",
      q4_other: "required"
    },
    showErrors: function (errorMap, errorList) {
      $("#numberOfErrors").html("Your questionnaire is missing " +
        this.numberOfInvalids() + " errors.");
      this.defaultShowErrors();
    },
    /* highlight questions that are empty */
    highlight: function (element, errorClass) {
      $(element).addClass(errorClass);
      $(element.form).find("label[for=" + element.name + "]")
        .addClass(errorClass);
    },
    unhighlight: function (element, errorClass) {
      $(element).removeClass(errorClass);
      $(element.form).find("label[for=" + element.name + "]")
        .removeClass(errorClass);
    },
    focusInvalid: function () {
      if (this.settings.focusInvalid) {
        try {
          var element = jQuery(this.findLastActive() ||
            this.errorList.length && this.errorList[0].element || [])
            .filter(":visible");
          element.parents('#tabbed_questionnaire_form').triggerTab(
            element.parents(".fragment")[0].id.replace("page-", "") * 1);
          element.focus();
        } catch (e) {
          /* ignore IE throwing errors when focusing hidden elements */
        }
      }
    }
  });
});
Just to re-iterate, the number of missing fields which are displayed in the message box (above form) appear to be correct when the form is validated (pressing submit - but missing fields exist). But only those fields on the first tab appear to work. For example, if a field is missing on the first tab it will have an error message displayed where I want it and if I enter in text or select a value, the error message disappears. In addition, the number of errors in the message box decrease in value. In contrast, the 2nd tab will display the error message but when you select something the error message does not disappear, nor does the number of errors in the message box decrease. I figure I need a way to iterate over each tab and validate fields to each specific tab instead of trying to validate the entire form at one time, but I am unsure how this is done using tabber.js and jquery.validate.js plugin.
Any suggestions greatly appreciated...
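One hedged guess worth checking (this is an assumption about the plugin version, not something I have confirmed): newer jQuery Validate builds default to `ignore: ":hidden"`, which would skip fields on inactive tabs entirely, matching the symptom above. Clearing that option makes the plugin look at hidden fields too:

```javascript
$("#questionnaire_form").validate({
  ignore: [],   // also validate fields hidden on inactive tabs (assumes plugin >= 1.9)
  // ...rules, showErrors, highlight, unhighlight, focusInvalid as above
});
```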
11 years ago
HTML Pages with CSS and JavaScript
using a frame inside an applet
Hello, good day.
I am currently trying to use a class that extends Frame (taken from a text). This class acts as a drawing surface. My goal is to create an applet using the Slate class; however, I have not been successful with it. First, is this possible? Or is there some kind of conflict when invoking paint()? My ultimate goal is to draw the graphics using Slate instead of overriding paint() in the applet, and then to create an animated applet. I would assume that creating a new Frame inside an applet would be possible. Any thoughts, greatly appreciated.
/**
 * class Slate
 *
 * purpose: to create graphic images using the abstract window toolkit.
 * from Downey, Allen; How to Think Like a Computer Scientist
 */
import java.awt.*;

public class Slate extends Frame {

    // data field, image, is a buffer: when Slate users draw things, they
    // draw on the buffer.  When the Slate gets painted, we copy the image
    // onto the screen.
    Image image;

    // Constructor for objects of class Slate
    // (body reconstructed; it was garbled in the original post)
    public Slate(String name, int width, int height) {
        setTitle(name);
        setSize(width, height);
        setVisible(true);
        image = createImage(width, height);   // create the off-screen buffer
    }

    // paint copies the off-screen buffer onto the screen
    public void paint(Graphics g) {
        if (image == null) return;
        g.drawImage(image, 0, 0, null);
    }

    // wait for t units of time (milliseconds?)
    // this method is useful when doing simple animation
    public void wait(int t) {
        try {
            Thread.sleep(t);
        } catch (InterruptedException e) {
            return;
        }
    }
}
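A minimal sketch of the same double-buffering idea on its own: draw into an off-screen image, then a paint() method only has to copy it. A BufferedImage is used here so it runs without any visible Frame; the names are illustrative, not from the Slate class, and this is just the buffering concept, not a working applet.

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;

// Sketch of the off-screen-buffer idea from Slate, without any Frame:
// draw into a BufferedImage; a paint(Graphics g) would then just blit it.
public class BufferSketch {
    public static BufferedImage drawScene(int w, int h) {
        BufferedImage buf = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics g = buf.getGraphics();
        g.setColor(Color.RED);
        g.fillRect(10, 10, 20, 20);   // "users draw on the buffer"
        g.dispose();
        return buf;
    }

    public static void main(String[] args) {
        BufferedImage buf = drawScene(100, 100);
        // In an applet, paint(Graphics g) would now do: g.drawImage(buf, 0, 0, null);
        System.out.println(buf.getRGB(15, 15) == Color.RED.getRGB());
    }
}
```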
13 years ago
Applets
reading in (redirecting command output) into variable
Hello all,
I am wondering how it is possible to do the following:
set month=`date | cut -c5-7`
set userInfo=`last "$1" | grep "$month"`
echo $userInfo <---- how do I put each line on a separate newline? So that when I echo this variable (echo $userInfo), it displays identically to just invoking the last command. For example, last username. At the present time, when this command is invoked it appends each new line to the preceding line. I need to iterate over the contents stored in this variable to process the times (add them).
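For reference, here is a minimal sketch of the quoting behaviour in question, written in POSIX sh (note the snippet above actually uses csh-style `set` syntax, so this is an assumption about which shell is intended). Double-quoting the variable preserves the embedded newlines:

```shell
#!/bin/sh
# Minimal sketch: newlines survive inside a variable, but only a QUOTED
# expansion prints them back out.
userInfo="first line
second line"

echo $userInfo      # unquoted: word-splitting collapses the newline
echo "$userInfo"    # quoted: two separate lines, like the original command

# One way to walk the lines (e.g. to add up login times later):
printf '%s\n' "$userInfo" | while IFS= read -r line; do
    echo "processing: $line"
done
```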
Some background;
I am trying to write a sh script to count how many hours a given user has used the system this month. However, I run into a snag when I try to use a for loop. I believe I need to get echo $userInfo to display each line on a separate line before I can use it in the for loop, but I am not 100% certain. I know I need to process the login times from the last command in a for loop and add them (just increment the sum in a variable). And I know that I'll have to implement an if/elif clause to test different criteria. Any suggestions, greatly appreciated.
Thanks in advance.
14 years ago
GNU/Linux
priority queue based on unsorted list(Java's LinkedList Class)
Thanks guys for all the help. I managed to fix it. It works! Thanks again.
cheers
Mike
15 years ago
Java in General
priority queue based on unsorted list(Java's LinkedList Class)
Hi guys, thanks for the input. I have taken your suggestions into account, along with recoding portions of my code, but now I get the following errors:
Size Before:4
Is the priority queue empty? false
Exception in thread "main" java.lang.NullPointerException
at PqList.min(PqList.java:56)
at PqList.removeMin(PqList.java:76)
at PqListTest.main(PqListTest.java:16)
My modified code is
import java.util.*;

public class PqList<K, V> {

    private class Entry {
        int key;
        V value;

        public Entry(int key, V value) {
            this.key = key;
            this.value = value;
        }

        public String toString() {
            return "(Key:" + key + "Value:" + value + ")";
        }
    } // end Entry class

    // create empty list
    public PqList() {
        count = 0;
    }

    // instance variables for PqList
    LinkedList<Entry> list = new LinkedList<Entry>();
    ListIterator<Entry> iter;
    int count = 0;
    int minimum = 0;

    public Entry insert(int key, V value) {
        Entry ent;
        ent = new Entry(key, value);
        list.addFirst(ent);
        count++;
        return list.getFirst();
    }

    public Entry min() {
        // local variables
        int i = 0;
        int temp = 0;
        int min = 0;
        Entry ent, ent2;
        ent = list.getFirst();
        i = ent.key;
        while (iter.hasNext()) {
            ent2 = iter.next();
            temp = ent2.key;
            if (i < temp) {
                min = i;
                list.set(min, ent);
            } else {
                min = temp;
                list.set(min, ent2);
            }
        }
        minimum = min;
        return list.get(min);
    }

    public Entry removeMin() {
        Entry e;
        e = min();
        list.remove(minimum);
        count--;
        return e;
    }

    public int size() {
        return count;
    }

    public boolean isEmpty() {
        return (count == 0);
    }
}
test class
public class PqListTest {
    public static void main(String[] args) {
        PqList<Integer, Integer> p = new PqList<Integer, Integer>();
        p.insert(0, 23);
        p.insert(1, 9);
        p.insert(2, 34);
        p.insert(3, 36);
        System.out.println("Size Before:" + p.size());
        System.out.println("Is the priority queue empty? " + p.isEmpty());
        while (!p.isEmpty()) {
            System.out.print(p.removeMin());
        }
    }
}
I know there is definitely something wrong with my min() function. If all goes well it should return the entry with the minimum key. Furthermore, my removeMin() function should be able to call the min() function, assign the minimum entry to another entry, then delete it from my list. However, it is crashing. I really don't understand how to fix this. The linked list should be a dynamic data structure, so the first value added to the list could have a key of 1000 and the list should be able to accommodate it; correct me if I am wrong. Once again, any suggestions are greatly appreciated.
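For reference, the scan I am after in min() can be sketched on its own like this: walk the list once and remember the INDEX of the smallest key, not the key value itself (mixing those two up is one way to get out-of-range indices). The names here are illustrative, not my actual PqList class.

```java
import java.util.LinkedList;

// Standalone sketch: find the position of the smallest key in a LinkedList.
public class MinScanDemo {
    public static int minIndex(LinkedList<Integer> keys) {
        int best = 0;                       // index of smallest key seen so far
        for (int j = 1; j < keys.size(); j++) {
            if (keys.get(j) < keys.get(best)) {
                best = j;                   // track the position, not the key
            }
        }
        return best;
    }

    public static void main(String[] args) {
        LinkedList<Integer> keys = new LinkedList<Integer>();
        keys.add(3);
        keys.add(1);
        keys.add(0);
        keys.add(2);
        System.out.println(MinScanDemo.minIndex(keys));  // prints 2
    }
}
```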
cheers
mike
15 years ago
Java in General
priority queue based on unsorted list(Java's LinkedList Class)
Ok, I don't know if I am better off now, but my program doesn't give me an array-out-of-bounds error anymore; except now it doesn't return the minimum key. I have double-checked, and yes, I need to use the LinkedList. O(1) insertions are the only must. Here is my code; maybe someone can see something that I am not seeing.
import java.util.*;

public class PqList<K, V> {

    private class Entry {
        int key;
        V value;
        Entry link;

        public Entry(int key, V value) {
            this.key = key;
            this.value = value;
        }

        public String toString() {
            return "(Key:" + key + "Value:" + value + ")";
        }
    } // end Entry class

    // instance variables for PqList
    LinkedList<Entry> list = new LinkedList<Entry>();
    ListIterator<Entry> iter = list.listIterator();
    Entry head;
    int min = 0;
    int temp = 0;
    int i = 0;
    int num = 0;

    public Entry insert(int key, V value) {
        Entry n = new Entry(key, value);
        list.addFirst(n);
        num++;
        return list.getFirst();
    }

    public Entry min() {
        min = list.get(0).key;
        while (iter.hasNext()) {
            temp = iter.next().key;
            if (min < temp)
                i = min;
            else
                i = temp;
        }
        return list.get(i);
    }

    public Entry removeMin() {
        if (list.size() == 0)
            System.out.println("The list is empty.");
        Entry z = list.remove(min);
        num--;
        return z;
    }

    public int size() {
        return num;
    }

    public boolean isEmpty() {
        return (num == 0);
    }
}
public class PqListTest {
    public static void main(String[] args) {
        PqList<Integer, Integer> p = new PqList<Integer, Integer>();
        p.insert(3, 23);
        p.insert(1, 9);
        p.insert(0, 34);
        p.insert(2, 36);
        while (!p.isEmpty()) {
            System.out.print(p.removeMin());
        }
    }
}
Once again, any suggestions are greatly appreciated. I have been working on this for what seems like forever; it must be something simple.
cheers
15 years ago
Java in General
priority queue based on unsorted list(Java's LinkedList Class)
Thanks Ernest,
yes, I have learned about heaps; however, the instructor wants it done with a List from the Java API. It is supposed to be an easy problem; however, I find myself puzzled. I will try the ArrayList, even though he told me to use the LinkedList (maybe he made a mistake). He did make it sound like it should be fairly similar to the stack (which I used the ArrayList to implement). Thanks again for helping out. It is greatly appreciated.
15 years ago
Java in General
priority queue based on unsorted list(Java's LinkedList Class)
Hello all, I am having a really hard time thinking my way through implementing a priority queue using Java's LinkedList class. I understand the priority queue in general terms. Basically you insert key-value pairs into the PQ in any order, but remove entries (key-value pairs) based on highest priority, which in my case is the lowest key. That is a simple concept, but implementing it seems hard to do, especially when I need to print out both the key and the value pertaining to each entry. Ok, I have tackled this many different ways, and thus far the closest I have come to solving it was running off the array (array out of bounds); however, I then realized I need to use generics, so I started over. Furthermore, I know I will have to have an entry class (an inner class in my PqList); however, is it essential that I create a node class as well so that I can properly traverse through the list? I was thinking of storing each entry in a node (or could I treat an entry as a node?). Or could I use the listIterator to accomplish this? Any suggestions greatly appreciated. I have been through the text many times over and over again, but I find this data structures text is too hard to follow (Data Structures & Algorithms, 4th edition; authors: Michael Goodrich & Roberto Tamassia). I always thought textbooks are to reinforce your understanding, not confuse you more. jk
an int and a string.
Actually, it's late here. I see what you mean now. I just have to think of a way to count consecutive 1s and 0s. Cheers.
Thanks again
an int and a string.
Hi, I don't understand what you are saying. I am incrementing by 1 if a 1 character is found in the string, and incrementing by 1 if a zero character is found. I am also keeping track of the total 1s in the string. I think my logic is correct here. If not, please point out what I am missing. Thanks
an int and a string.
Hello all, I had one query: when you append an int to a string, doesn't the variable get assigned a string? For instance, I am having a hard time with the following code.
import java.util.*;

public class BitStat {
    public static void main(String[] args) {
        char one = '1';
        char zero = '0';
        String str;
        String str2 = "";
        //boolean b = true;
        int z = 0;
        int o = 0;
        int countTotalOne = 0;
        int total = 0;
        int out;
        int i = 0;
        int inp = 0;

        System.out.println("Please enter a number of bytes");
        System.out.println();
        Scanner sc = new Scanner(System.in);
        str = sc.nextLine();
        Scanner sc2 = new Scanner(str);
        while (sc2.hasNextInt(16)) {
            inp = sc2.nextInt(16);
            //System.out.println(inp);
            for (i = 7; i >= 0; i--) {
                out = (inp >> i) & 0x1;
                str2 += out + "";
                /*
                if(one){
                    countOne++;
                    countTotalOne += countOne;
                    total++;
                }
                else{
                    countZero++;
                    total++;
                }
                */
            }
            //b=false;
        }
        //str2 = Integer.toBinaryString(inp);
        System.out.println(str2);
        for (int j = 0; j < str2.length(); j++) {
            if (str2.charAt(j) == '1') {
                o++;
                countTotalOne += o;
            } else {
                z++;
            }
            total++;
        }
        //System.out.println(Integer.toBinaryString(inp));
        System.out.println("Total: " + total);
        System.out.println(countTotalOne + " " + o + " " + z);
    }
}
I am using bit shifting, and the output looks great for the program I have to implement; however, when I try to count how many 0s and 1s there are, I get an answer that is way off. Is there something I am missing?
The output is the following.
Please enter a number of bytes

de ad be ef 42
1101111010101101101111101110111101000010
Total: 40
351 26 14
I am basically looking to get the following numbers, 26 5 4, which correspond to: total # of 1 bits, longest 1-bit sequence, longest 0-bit sequence. Thus, the answer should be 26 5 4. I have tried many different ways to implement this and I keep on getting the same end result. Am I missing the big picture? Please correct me if I am wrong: you enter a key on the keyboard, which corresponds to an ASCII code, which then corresponds to a bit pattern, which is what the computer really works with. I know Java uses Unicode (could this have something to do with why I am getting this kind of result?). Any suggestions much appreciated. Thanks
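Here is a sketch (names and structure are illustrative, not from the original program) of one way to get the three numbers asked for above -- total 1 bits, longest run of 1s, longest run of 0s -- by scanning the bit string once after it has been built, tracking the current run length and the best run seen for each bit value:

```java
public class BitRuns {

    // Returns "totalOnes longestOneRun longestZeroRun" for a '0'/'1' string.
    static String stats(String bits) {
        int ones = 0, run = 0, best1 = 0, best0 = 0;
        char prev = ' ';                          // sentinel: no previous bit
        for (char c : bits.toCharArray()) {
            run = (c == prev) ? run + 1 : 1;      // extend run or start a new one
            prev = c;
            if (c == '1') { ones++; best1 = Math.max(best1, run); }
            else          {         best0 = Math.max(best0, run); }
        }
        return ones + " " + best1 + " " + best0;
    }

    public static void main(String[] args) {
        // The bit pattern produced above for the bytes "de ad be ef 42":
        System.out.println(stats("1101111010101101101111101110111101000010"));
        // prints: 26 5 4
    }
}
```

The key point is that the run counter resets whenever the current character differs from the previous one; the commented-out counting code in the posted program never compared adjacent characters, which is why its totals drifted.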
not sorting; getting null
Hi, I also changed the following piece of code, and still get the same error. I am starting to really love the NullPointerException.
try {
    Sorts.selectionSort(stud);
    for (int i = stud.length - 1; i >= 0; i--) {
        System.out.println(stud[i].toString());
    }
    System.out.println("The total number of students read = " + total);
I may need another nudge. Thanks again.
not sorting; getting null
Thanks for the immediate feedback. That makes sense, but where do you conclude that I am passing my incomplete array to the selectionSort method before it is completely full? I have changed the array size to 23 elements (that is, indexes 0 through 22) and still get the same problem. Thanks again for any feedback.
On 12/6/2012 11:50 AM, Matt wrote: > It works now. Steven and Alex, thanks for your help! > > I ended up leaving sample.py and foo.py and bar.p the way they were, and in __init__.py putting: > > from foo import * > from bar import * > > So my mistake was not importing the foo and bar modules into sub_one/__init__.py. > > I also see how the __all__ array helps me control what gets imported. I can leave it out of __init__.py, and everything gets imported. So my three lessons are: > > 1) "from X import *" will look for an __all__ list in module X, or in __init__.py if X is a package instead of a module, and import only what is in that list. Module names are different than function names in that list. > 2) if __all__ is not defined, "from X import *' will import everything in X's namespace ... that does not have a leading underscore. This is why there are things like import sys as _sys from itertools import chain as _chain in the stdlib when the module author does not define __all__. > 3) __init__.py acts like just another module, so you have to import the package contents that you want into it before you import the package into your code -- Terry Jan Reedy | https://mail.python.org/pipermail/python-list/2012-December/636246.html | CC-MAIN-2019-30 | refinedweb | 218 | 84.57 |
Victory for small business in domain disputes
A reader sent us the link-o-meter to the story about how Clue Computing beat toy giant Hasbro in a 3-year-long legal dispute over clue.com (Hasbro owns the Clue board game). Some are hoping that this will mean small businesses have a precedent to call on in legal disputes over names trademarked by different folks -- and in related news, Hasbro will be purchasing Wizards of the Coast, maker of the Magic: The Gathering card game and owner of TSR, Inc.
It's already there. (Score:3)
The problem with InterNIC/RSI "impartially" (i.e, non-commercially) administering something like this (besides the fact that it's RSI) is the enormous overhead, plus it's what DNS was supposed to do. They didn't policy-build to account for cybersquatters and the like, and now it's coming back to bite them in the butt.
OTOH, Yahoo lists net presences by category; you could find your category, and then look for Clue (Entertainment:games:board:Hasbro) or Clue Computing (Companies:California:etc.).
I think the current DNS resolution policy would work, if it had more serious teeth. That's likely to be the best solution we'd get.
Help Slashdot clue.com's legal defense fund! (Score:4)
On a wider note, maybe Rob should add a billing page to slashdot.org where you could use a credit card to donate to a good cause.
Wizard's of the Coast. (Score:4)
It would be a shame if Hasbro dumped D&D entirely or even put it on the back burner as many gaming stores could be hurt by this move.
For more info on the aquisition goto wizards site [wizards.com].
Joseph Elwell.
Restructuring .com (Score:3)
Take for instance, media sources-- there isn't much conflict in that one, as those that are broadcast media have 4 letter designations in the US. There aren't too many magazines willing to get mixed up with each other, and neither are the movies. (sure, there may be crossover between them, but those are the breaks).
As it stands presently, however, the TLDs are completely useless, except for
Unfortunately, unveiling new TLDs, without having some major limitations will result in people flooding the registrars to get them, and more TLDs will be more difficult on the people who have enough problems remembering two letters, much less three. There's some solution out there, I just don't know quite what it is, though.
Re:This is a no-win situation (Score:5)
This is only partially true. A trademark is limited in scope - usually to a particular area of trade. There can be no infringement outside of this area (with the exception of well-known marks). The classic example in the U.S. is "Delta". I can think of three right now - Delta Airlines, Delta Faucets, and Delta Dental (insurance). Despite the use of the same name, these three do NOT conflict as far as trademarks go.
Well-known marks would include something like McDonalds, which covers so much ground that McAnything is going to have a problem (yes, I know about the McDonalds in Scotland, and there have been court cases in the U.K. about this very issue.)
Hasbro is throwing its weight around. Based strictly on trademark law, I'd expect Hasbro to lose the appeals, since 'Clue' is not a well-known mark, and there's no significant cross-over between areas of trade. I just hope that Clue Computing can hang in there for the rest of the proceedings.
...phil
Re:Simple Solution - Ban DNS. (Score:3)
The problem is not that DNS itself is flawed; it's that people have chosen to try to use DNS both as a name-resolution system AND as a directory system.
Removing DNS and going to raw IP addresses would break many many things that people rely on:
-- Web sites routinely map a single name to a round-robin of IP addresses for load-balancing. It might be possible to do some kind of nasty reverse NAT to make this facility possible at the IP level, but we have things like DNS to facilitate not having to do nasty destination munging.
-- Companies are currently able to reorganize their internal networks transparently; if your mail address was employee@xxx.xxx.xxx.xxx, and then the company wanted to move their primary mail server to a geographically different location, there's no way to point that IP address there without complicated tunneling wizardry. Easier just to re-point a DNS entry for mail.company.com.
-- Companies change ISP's all the time, and IP addresses get reassigned from leaving customers to new customers. Imagine if you'd sent out thousands and thousands of advertisements, business cards, product boxes, and letterhead with your address on it, and you went to a new ISP, and your competitor got hold of that address block from your old ISP.
DNS was devised as an abstraction layer above IP addresses to allow tricks like this to happen conveniently, and give some sense of permanence to network addressing, in what's definitionally a change-prone environment. The fact that people have been abusing this abstraction layer for commercial purposes is (a) not surprising, and (b) not at all an indictment of the scheme itself.
DNS is our friend; we just need to get some good rules in place as to how to resolve conflicts like this.
--
This is a no-win situation (Score:5)
So a company with a trademark must use draconian measures of enforcement when defending their trademark -- they have no choice if they want to prove to a court that they're enforcing it. And since cybersquatting has been a problem in the past, companies are probably advised by their lawyers that they MUST track down ANY potential use of their trademarked name, even in situations where it won't apply.
Of course, that means nothing to the poor small business owner (or private owner) who coincidentally is using a name that has been trademarked. It's certainly not fair to them at all -- and they don't really have the funds to defend against such matters, nine times out of ten.
It seems like a situation where no one can really win. If a company wants to retain the rights of their trademark, they have to crack down in every situation (which is why Red Hat is doing what they're doing these days). On the other hand, there's no reason why someone in a business completely unrelated to the trademark should ever have to be pushed around by a corporations legal teams.
The only way out of this is to either a) strengthen the rights of the people holding the trademark, so they don't have to enforce it all the time, or b) weaken the power of trademarks significantly, or even abolish it altogether, so that no one can lay any kind of legal claim to a word or a phrase.
Either option has its problems, but I think that abolishing trademark would be better than strengthening it -- if it were strengthened, companies would probably find it more "convenient" to force people who had trademarked words in their domain names to hand them over, even if the domain names were used in a way completely unrelated to the trademark itself. If that were the case, sites like ajax.org would have been instantly overrun and they would have had no legal recourse whatsoever.
Hope that wasn't too disjointed...
Trademarks... (Score:4)
Let's quote NSI, shall we?
Revision 03 Effective February 25, 1998 1. Network Solutions, Inc. ("Network Solutions") is responsible for the registration of second-level Internet domain names in the top level COM, ORG, NET, and EDU domains. Network Solutions registers these second-level domain names on a "first come, first served" basis. By registering a domain name, Network Solutions does not determine the legality of the domain name registration, or otherwise evaluate whether that registration or use may infringe upon the rights of a third party.
This is solely written to deter themselves from suffering legal actions... point blank
2. The entity applying for a domain name ("registrant") is solely responsible for selecting its own domain name ("domain name") and maintaining for the continued accuracy of the registration record. The registrant, by completing and submitting.
Does this mean that if I registered "whatever.com" and three months down the line someone trademarked it, they can now sue me? Some of these laws are a joke... I can see why they would make these laws, since they would deter some moron from registering a site to make massive money, but there are corporate entities who turn around and bastardize these laws as well
3. Network Solutions neither acts as arbiter nor provides resolution of disputes between registrants and third party complainants arising out of the registration or use of a domain name. This Domain Name Dispute Policy ("Policy") does not confer any rights, procedural or substantive, upon third party complainants. Likewise, complainants are not obligated to use this Policy.
More legal mumbo jumbo from a half assed registrar
6. Indemnity. The registrant hereby agrees to defend, indemnify and hold harmless (i) Network Solutions, its officers, directors, employees and agents, and (ii) the National Science Foundation ("NSF"), its officers, directors, and employees (collectively, the "Indemnified Parties"),. Network Solutions recognizes that certain educational and governmental entities may not be able to provide complete indemnification. If the registrant is (i) a governmental or non-profit educational entity, and (ii) not permitted by law or under its organizational documents to provide indemnification, the registrant must notify Network Solutions in writing and, upon receiving appropriate proof of such restriction, Network Solutions may provide an alternative provision for such a registrant.
In other words money talks...
What I wanna know is...
What is Network Solutions going to do in a cross-registrar dispute?
What if they weren't the registrar, how are they going to handle things? And just when are the court systems going to stop letting people twist laws?
oh well back to work...
Common word domain names (Score:4)
Also, it's a user's fault if they type in "" and assume they are at Hasbro's site. I'm sure there's a card in the box that gives the address, or people can type the company's name. Sometimes I just guess the URL if I'm looking for something, but I look at content if the site comes up.
Are we to assume that if someone knows the name of a product, they should just be able to go to and get there?
Surf on over to
It's a moot point (Score:4)
The trademark and copyright interests are lobbying ICANN very heavily (including big money Hollywood interests [dnspolicy.com]) for stronger protection, even beyond what the law currently gives them. They can't get Congress, or even the courts, to back them up, so they are lobbying hard within ICANN, and ICANN is listening, not wanting to have to fight big corporate interests who are the ones actually paying ICANN's bills right now (see Follow the Money [dnspolicy.com]).
Soon individuals and small businesses will find themselves in the position of having to do what Clue Computing did, be the plaintiff in a case suing to KEEP your domain name, since under these new policies trademark holders won't be obligated to take you to court and prove infringement or dilution. You will have to prove you aren't infringing, thus shifting the burden of proof as well as the expense. (Clue Computing sued NSI to prevent implementation of the Dispute Policy)
Not a very promising outlook.
I've been advocating some sort of grass roots campaign [dnspolicy.com] to rally against these actions by ICANN, but some people just see that ICANN is fighting NSI and think that is a justification for them trampling our rights.
--
William X. Walsh - DSo Internet Services
Editor of
Lawyer: clearly correct under U.S. trademark law (Score:5)
The outcome is clearly correct. The question is whether Hasbro should be sanctioned for an abusive filing for initiating the frivolous litigation.
There are *many* categories of trademarks in the U.S. A trademark in one category does *not* in any way block the identical trademark from being used in another category. That Hasbro has registered "Clue" as a game would in no way stop Ford from building a car called "Clue."
Somehow, Hasbro has gotten the idea that trademarks reach *much* farther in domain names than they do anywhere else. This is simply fallacious, and worthy of sanctions.
hawk, esq., once again griping that judges in general are far too slow to use their authority to sanction frivolous filings.
DNS stinks for the web... (Score:3)
There simply are not enough phrases around to give everyone a fair chance with a DNS system where no one cares about anything except the second level name in
The fact that DNS is controlled from the top down plays right into the hands of all kinds of abuse, everything from lawyer happy MN corporations, to NSI's constant monopolist practices, to the intervention of the American regime that is last thing we want on the Net.
Will adding more TLDs help? Hell no, companies are already buying out their domains in
I can't say that I have a beautiful replacement in mind that solves all the problems, but we have to start looking for a decentralized, non-commericial, non-governmental naming system. The current domain name system is not, and will never be, anything but a bad compromise and a headache for the way the Internet has turned out.
-
Re:Name squatters and Large Overbearing Companies (Score:3)
I think a new system of corporate registry is needed, whereby a given "big corporation" that has its name as a protected, registered trademark can register its name and be assigned an IP, but doesn't necessarily need a domain name, because this "new system" does not consist of a user typing the corporate trademark into the Location field of his or her browser.
I don't know if there should be some intermediate "portal" or directory site one should go to first, in order to "hop off" to any given corporation, or whether browsers (or plugins) should add some kind of input field to the UI. It would seem to be more clean if there were just a page one could go to, look up the actual real corporate trademark name (Microsoft Corp. not Microsoft.com), and click on the link, and there you go, no ambiguity, no possibility of hitting some squatter site by mistake, and no need for Corporation X to send paralegal paratroopers in to do a man's job.
The simple mapping of corporate names to domain names certainly is one of the great things about the internet that has attracted a lot of business (because it's SIMPLE for the enduser to understand and implement), but the limitations of using that system for something it was not designed to do are showing.
"The number of suckers born each minute doubles every 18 months."
WotC buyout and D&D (Score:3)
I run one of the larger AD&D sites on the web (I get about 100,000 hits/month, even though I haven't updated it in a year...
:-(
Am I the only one that remembers the problems we (i.e., the gaming community) had with TSR over writing game extensions and new rules? Thankfully, this sorted itself out, and WotC seems to have been content to abide by the TSR decision.
I'm really worried about Hasbro, though. Given that they seem to have a rather (shall we say) zealous approach to "protecting" their Intellectual Property, I'm really worried that they might try to revert to the old ways, and start trying to stop alot of the independent authors of D&D material.
I couldn't fight them if I got sued. I don't have the resources. This despite the fact that I've been EXTREMELY scrupulous about making sure none of the stuff on my site is lifted from TSR material. I'd have to close down, and that would be a shame.
Hopefully, Hasbro will Do The Right Thing, and continue with the current policy. People writing new material for the TSR games help sell "Official" material. And I'm well within my rights to create such stuff. I just can't afford to defend myself in court.
-Erik
Re:Name squatters and Large Overbearing Companies (Score:4)
(2) Establish a convention whereby anyone who has the trademark "foo" in the country with country code "xx" can get "foo.r.xx".
(3??) As a condition of taking "foo.r" or "foo.r.xx" domains, a trademark holder should relinquish any ".com", ".net", or ".org" domains they own that contain the trademark, so that the namespace doesn't become congested from large companies grabbing up every possible domain name containing their brand names. | http://slashdot.org/articles/99/09/09/1712221.shtml | crawl-001 | refinedweb | 2,931 | 59.43 |
On (Fri) Aug 06 2010 [12:12:45], Daniel P. Berrange wrote: > On Wed, Jun 23, 2010 at 08:14:04PM +0530, Amit Shah wrote: > > qemu-config.c doesn't contain any target-specific code, and the > > TARGET_I386 conditional code didn't get compiled as a result. Removing > > this enables the driftfix parameter for rtc. > > > > Signed-off-by: Amit Shah <address@hidden> > > --- > > qemu-config.c | 2 -- > > 1 files changed, 0 insertions(+), 2 deletions(-) > > > > diff --git a/qemu-config.c b/qemu-config.c > > index 95abe61..730ffd9 100644 > > --- a/qemu-config.c > > +++ b/qemu-config.c > > @@ -247,11 +247,9 @@ QemuOptsList qemu_rtc_opts = { > > },{ > > .name = "clock", > > .type = QEMU_OPT_STRING, > > -#ifdef TARGET_I386 > > },{ > > .name = "driftfix", > > .type = QEMU_OPT_STRING, > > -#endif > > }, > > { /* end if list */ } > > }, > > > Is there any reason this patch hasn't been applied to GIT yet ? I'm told > that using this option is critical to making certain guests work reliably > so we want to use it from libvirt/virt-manager for the OS in question. Multiple pings have gone out already; I hope it's in someone's queue to be applied. Amit | http://lists.gnu.org/archive/html/qemu-devel/2010-08/msg00432.html | CC-MAIN-2014-41 | refinedweb | 176 | 68.97 |
At least that's the short story; we need to turn to some code to make this more concrete. C types generally export a C module with a constructor function. Because of that (and because they are simpler), let's start off by studying the basics of C module coding with a quick example.
When you add new or existing C components to Python, you need to code an interface (or "glue") logic layer in C that handles cross-language dispatching and data translation. The C source file in Example 19-1 shows how to code one by hand. It implements a simple C extension module named hello for use in Python scripts, with a function named message that simply returns its input string argument with extra text prepended.
/********************************************************************
 * A simple C extension module for Python, called "hello"; compile
 * this into a ".so" on python path, import and call hello.message;
 ********************************************************************/

#include <Python.h>
#include <string.h>

/* module functions */
static PyObject *                                 /* returns object */
message(PyObject *self, PyObject *args)           /* self unused in modules */
{                                                 /* args from python call */
    char *fromPython, result[64];
    if (! PyArg_Parse(args, "(s)", &fromPython))  /* convert Python -> C */
        return NULL;                              /* null=raise exception */
    else {
        strcpy(result, "Hello, ");                /* build up C string */
        strcat(result, fromPython);               /* add passed Python string */
        return Py_BuildValue("s", result);        /* convert C -> Python */
    }
}

/* registration table */
static struct PyMethodDef hello_methods[] = {
    {"message", message, 1},       /* method name, C func ptr, always-tuple */
    {NULL, NULL}                   /* end of table marker */
};

/* module initializer */
void inithello()                   /* called on first import */
{                                  /* name matters if loaded dynamically */
    (void) Py_InitModule("hello", hello_methods);   /* mod name, table ptr */
}
Ultimately, Python code will call this C file's message function with a string object and get a new string object back. First, though, it has to be somehow linked into the Python interpreter. To use this C file in a Python script, compile it into a dynamically loadable object file (e.g., hello.so on Linux) with a makefile like the one listed in Example 19-2, and drop the resulting object file into a directory listed on your PYTHONPATH module search path setting, exactly as though it were a .py or .pyc file.[2]
[2] Because Python always searches the current working directory on imports, this chapter's examples will run from the directory you compile them in (".") without any file copies or moves. Being on PYTHONPATH matters more in larger programs and installs.
#############################################################
# Compile hello.c into a shareable object file on Linux,
# to be loaded dynamically when first imported by Python.
# MYPY is the directory where your Python header files live.
#############################################################

PY = $(MYPY)

hello.so: hello.c
	gcc hello.c -g -I$(PY)/Include -I$(PY) -fpic -shared -o hello.so

clean:
	rm -f hello.so core
This is a Linux makefile (other platforms will vary); to use it to build the extension module, simply type make -f makefile.hello at your shell. Be sure to include the path to Python's install directory with -I flags to access Python include (a.k.a. "header") files. When compiled this way, Python automatically loads and links the C module when it is first imported by a Python script.
Finally, to call the C function from a Python program, simply import module hello and call its hello.message function with a string:
[mark@toy ~/.../PP2E/Integrate/Extend/Hello]$ make -f makefile.hello
[mark@toy ~/.../PP2E/Integrate/Extend/Hello]$ python
>>> import hello                     # import a C module
>>> hello.message('world')           # call a C function
'Hello, world'
>>> hello.message('extending')
'Hello, extending'
And that's it -- you've just called an integrated C module's function from Python. The most important thing to notice here is that the C function looks exactly as if it were coded in Python. Python callers send and receive normal string objects from the call; the Python interpreter handles routing calls to the C function, and the C function itself handles Python/C data conversion chores.
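To see just how transparent the integration is, compare a hypothetical pure-Python stand-in with the same interface (this file is an illustration, not part of the book's example tree); a script that does "import hello" could be handed this module instead and would behave identically:

```python
# hello_py.py -- an illustrative pure-Python stand-in for the C extension.
# Callers of message() cannot tell whether the work happens in C or Python.
def message(from_python):
    return 'Hello, ' + from_python

print(message('world'))        # prints: Hello, world -- same as the C version
```

This interchangeability is also a common development strategy: prototype a module in Python first, then recode its hot spots in C without touching any client scripts.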
In fact, there is little to distinguish hello as a C extension module at all, apart from its filename. Python code imports the module and fetches its attributes as if it had been written in Python. C extension modules even respond to dir calls as usual, and have the standard module name and filename attributes (though the filename doesn't end in a .py or .pyc this time around):
>>> dir(hello)                            # C module attributes
['__doc__', '__file__', '__name__', 'message']
>>> hello.__name__, hello.__file__
('hello', './hello.so')
>>> hello.message                         # a C function object
<built-in function message>
>>> hello                                 # a C module object
<module 'hello' from './hello.so'>
Like any module in Python, you can also access the C extension from a script file. The Python file in Example 19-3, for instance, imports and uses the C extension module.
import hello

print hello.message('C')
print hello.message('module ' + hello.__file__)
for i in range(3):
    print hello.message(str(i))
Run this script as any other -- when the script first imports module hello, Python automatically finds the C module's .so object file in a directory on PYTHONPATH and links it into the process dynamically. All of this script's output represents strings returned from the C function in file hello.c:
[mark@toy ~/.../PP2E/Integrate/Extend/Hello]$ python hellouse.py
Hello, C
Hello, module ./hello.so
Hello, 0
Hello, 1
Hello, 2
Now that I've shown you the somewhat longer story, let's fill in the rest of the details. You always must compile and somehow link C extension files like the hello.c example with the Python interpreter to make them accessible to Python scripts, but there is some flexibility on how you go about doing so. For example, the following rule could be used to compile this C file on Linux too:
hello.so: hello.c
	gcc hello.c -c -g -fpic -I$(PY)/Include -I$(PY) -o hello.o
	gcc -shared hello.o -o hello.so
	rm -f hello.o
To compile the C file into a shareable object file on Solaris, you might instead say something like this:
hello.so: hello.c
	cc hello.c -c -KPIC -o hello.o
	ld -G hello.o -o hello.so
	rm hello.o
On other platforms, it's more different still. Because compiler options vary widely, you'll have to consult your C or C++ compiler's documentation or Python's extension manuals for platform- and compiler-specific details. The point is to determine how to compile a C source file into your platform's notion of a shareable or dynamically loaded object file. Once you have, the rest is easy; Python supports dynamic loading of C extensions on all major platforms today.
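As an aside, Python's distutils machinery (documented in the "Distributing Python Modules" manual shipped with the interpreter) can work out the platform-specific compiler and linker options for you. A sketch of an equivalent build script might look like this -- the file and option names here are illustrative, not from the book's example tree:

```python
# setup.py -- a hypothetical alternative to the hand-written makefile;
# distutils chooses suitable compiler and linker flags for the platform.
from distutils.core import setup, Extension

setup(name='hello',
      version='1.0',
      ext_modules=[Extension('hello', sources=['hello.c'])])
```

Running "python setup.py build_ext --inplace" would then leave hello.so (or your platform's equivalent) in the current directory, ready for import.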
Technically, what I've been showing you so far is called "dynamic binding," and represents one of two ways to link compiled C extensions with the Python interpreter. Since the alternative, "static binding," is more complex, dynamic binding is almost always the way to go. To bind dynamically, simply:
Compile hello.c into a shareable object file
Put the object file in a directory on Python's module search path
That is, once you've compiled the source code file into a shareable object file, simply copy or move the object file to a directory listed in PYTHONPATH. It will be automatically loaded and linked by the Python interpreter at runtime when the module is first imported anywhere in the Python process (e.g., from the interactive prompt, a standalone or embedded Python program, or a C API call).
Notice that the only non-static name in the hello.c example C file is the initialization function. Python calls this function by name after loading the object file, so its name must be a C global and should generally be of the form "initX", where "X" is both the name of the module in Python import statements and the name passed to Py_InitModule. All other names in C extension files are arbitrary, because they are accessed by C pointer, not by name (more on this later). The name of the C source file is arbitrary too -- at import time, Python cares only about the compiled object file.
Under static binding, extensions are added to the Python interpreter permanently. This is more complex, though, because you must rebuild Python itself, and hence need access to the Python source distribution (an interpreter executable won't do). To link this example statically, add a line like:
hello ~/PP2E/Integrate/Extend/Hello/hello.c
to the Modules/Setup configuration file in the Python source code tree. Alternatively, you can copy your C file to the Modules directory (or add a link to it there with an ln command) and add a line to Setup like hello hello.c.
Then, rebuild Python itself by running a make command at the top level of the Python source tree. Python reconstructs its own makefiles to include the module you added to Setup, such that your code becomes part of the interpreter and its libraries. In fact, there's really no distinction between C extensions written by Python users and services that are a standard part of the language; Python is built with this same interface. The full format of module declaration lines looks like this (but see the Modules/Setup configuration file for more details):
... [ ...] [ ...] [ ...]
Under this scheme, the name of the modules initialization function must match the name used in the Setup file, or youll get linking errors when you rebuild Python. The name of the source or object file doesn have to match the module name; the leftmost name is the resulting Python modules name.
Static binding works on any platform and requires no extra makefile to compile extensions. It can be useful if you don want to ship extensions as separate files, or if you e on a platform without dynamic linking support. Its downsides are that you need to update the Python Setup configuration file and rebuild the Python interpreter itself, so you must therefore have the full source distribution of Python to use static linking at all. Moreover, all statically linked extensions are always added to your interpreter, whether or not they are used by a particular program. This can needlessly increase the memory needed to run all Python programs.
With dynamic binding, you still need Python include files, but can add C extensions even if all you have is a binary Python interpreter executable. Because extensions are separate object files, there is no need to rebuild Python itself or to access the full source distribution. And because object files are only loaded on demand in this mode, it generally makes for smaller executables too -- Python loads into memory only the extensions actually imported by each program run. In other words, if you can use dynamic linking on your platform, you probably should.
Though simple, the hello.c example illustrates the structure common to all C modules. This structure can vary somewhat, but this file consists of fairly typical boilerplate code:
The C file first includes the standard Python.h header file (from the installed Python Include directory). This file defines almost every name exported by the Python API to C, and serves as a starting point for exploring the API itself.
The file then defines a function to be called from the Python interpreter in response to calls in Python programs. C functions receive two Python objects as input, and send either a Python object back to the interpreter as the result, or a NULL to trigger an exception in the script (more on this later). In C, a PyObject* represents a generic Python object pointer; you can use more specific type names, but don always have to. C module functions can all be declared C "static" (local to the file), because Python calls them by pointer, not name.
Near the end, the file provides an initialized table (array) that maps function names to function pointers (addresses). Names in this table become module attribute names that Python code uses to call the C functions. Pointers in this table are used by the interpreter to dispatch C function calls. In effect, the table "registers" attributes of the module. A NULL entry terminates the table.
Finally, the C file provides an initialization function, which Python calls the first time this module is imported into a Python program. This function calls the API function Py_InitModule to build up the new modules attribute dictionary from the entries in the registration table and create an entry for the C module on the sys.modules table (described in Chapter 12). Once so initialized, calls from Python are routed directly to the C function through the registration tables function pointers.
C module functions are responsible for converting Python objects to and from C datatypes. In Example 19-1, message gets two Python input objects passed from the Python interpreter: args is a Python tuple holding the arguments passed from the Python caller (the values listed in parentheses in a Python program), and self is ignored; it is useful only for extension types (discussed later in this chapter).
After finishing its business, the C function can return any of the following to the Python interpreter: a Python object (known in C as PyObject*), for an actual result; a Python None, (known in C as Py_None), if the function returns no real result; or a C NULL pointer, to flag an error and raise a Python exception.
There are distinct API tools for handling input conversions (Python to C) and output conversions (C to Python). Its up to C functions to implement their call signatures (argument lists and types) by using these tools properly.
When the C function is run, the arguments passed from a Python script are available in the args Python tuple object. The API function PyArg_Parse(and PyArg_ParseTuple, its cousin that assumes it is converting a tuple object) is probably the easiest way to extract and convert passed arguments to C form.
PyArg_Parse takes a Python object, a format string, and a variable-length list of C target addresses. It converts the items in the tuple to C datatype values according to the format string, and stores the results in the C variables whose addresses are passed in. The effect is much like Cs scanf string function. For example, the hello module converts a passed-in Python string argument to a C char* using the s convert code:
PyArg_Parse(args, "(s)", &fromPython) # or PyArg_ParseTuple(args, "s",...
To handle multiple arguments, simply string format codes together and include corresponding C targets for each code in the string. For instance, to convert an argument list holding a string, an integer, and another string to C, say this:
PyArg_Parse(args, "(sis)", &s1, &i, &s2) # or PyArg_ParseTuple(args, "sis",...
To verify that no arguments were passed, use an empty format string like this: PyArg_Parse(args, "( )"). This API call checks that the number and types of the arguments passed from Python matches the format string in the call. If there is a mismatch, it sets an exception and returns zero to C (more on errors below).
As well see in Chapter 20, Embedding Python, API functions may also return Python objects to C as results when Python is being run as an embedded language. Converting Python return values in this mode is almost the same as converting Python arguments passed to C extension functions, except that Python return values are not always tuples. To convert returned Python objects to C form, simply use PyArg_Parse. Unlike PyArg_ParseTuple, this call takes the same kinds of arguments but doesn expect the Python object to be a tuple.
There are two ways to convert C data to Python objects: by using type-specific API functions, or the general object-builder function Py_BuildValue. The latter is more general, and is essentially the inverse of PyArg_Parse, in that Py_BuildValue converts C data to Python objects according to a format string. For instance, to make a Python string object from a C char*, the hello module uses an s convert code:
return Py_BuildValue("s", result) # "result" is a C char []/*
More specific object constructors can be used instead:
return PyString_FromString(result) # same effect
Both calls make a Python string object from a C character array pointer. See the now-standard Python extension and runtime API manuals for an exhaustive list of such calls available. Besides being easier to remember, though, Py_BuildValue has syntax that allows you to build lists in a single step, described next.
With a few exceptions, PyArg_Parse(Tuple) and Py_BuildValue use the same conversion codes in format strings. A list of all supported conversion codes appears in Pythons extension manuals. The most commonly used are shown in Table 19-1; the tuple, list, and dictionary formats can be nested.
These codes are mostly what youd expect (e.g., i maps between a C int and a Python integer object), but here are a few usage notes on this tables entries:
PyArg_Parsesupports some extra codes, which must not be nested in tuple formats ((...)):
The remaining arguments are all optional (varargs). The C targets are unchanged if arguments are missing in the Python tuple. For instance, si|sd requires two arguments but allows up to four.
The function name follows, for use in error messages set by the call (argument mismatches). Normally Python sets the error message to a generic string.
A full error message follows, running to the end of the format string.
This format code list isn exhaustive, and the set of convert codes may expand over time; refer to Pythons extension manual for further details.
When you write C extensions, you need to be aware that errors can occur on either side of the languages fence. The following sections address both possibilities.
C extension module functions return a C NULL value for the result object to flag an error. When control returns to Python, the NULL result triggers a normal Python exception in the Python code that called the C function. To name an exception, C code can also set the type and extra data of the exceptions it triggers. For instance, the PyErr_SetString API function sets the exception object to a Python object and sets the exceptions extra data to a character string:
PyErr_SetString(ErrorObject, message)
We will use this in the next example to be more specific about exceptions raised when C detects an error. C modules may also set a built-in Python exception; for instance, returning NULL after saying this:
PyErr_SetString(PyExc_IndexError, "index out-of-bounds")
raises a standard Python IndexError exception with the message string data. When an error is raised inside a Python API function, both the exception object and its associated "extra data" are automatically set by Python; there is no need to set it again in the calling C function. For instance, when an argument-passing error is detected in the PyArg_Parsefunction, the hello stack module just returns NULL to propagate the exception to the enclosing Python layer, instead of setting its own message.
Python API functions may be called from C extension functions, or from an enclosing C layer when Python is embedded. In either case, C callers simply check the return value to detect errors raised in Python API functions. For pointer result functions, Python returns NULL pointers on errors. For integer result functions, Python generally returns a status code of -1 to flag an error and a or positive value on success. (PyArg_Parse is an exception to this rule: it returns when it detects an error.) To make your programs robust, you should check return codes for error indicators after most Python API calls; some calls can fail for reasons you may not have expected (e.g., memory overflow).
The Python interpreter uses a reference-count scheme to implement garbage collection. Each Python object carries a count of the number of places it is referenced; when that count reaches zero, Python reclaims the objects memory space automatically. Normally, Python manages the reference counts for objects behind the scenes; Python programs simply make and use objects without concern for managing storage space.
When extending or embedding Python, though, integrated C code is responsible for managing the reference counts of the Python objects it uses. How important this becomes depends on how many raw Python objects a C module processes and which Python API functions it calls. In simple programs, reference counts are of minor, if any, concern; the hello module, for instance, makes no reference-count management calls at all.
When the API is used extensively, however, this task can become significant. In later examples, well see calls of these forms show up:
C module functions are expected to return either an object with an incremented reference count, or NULL to signal an error. As a general rule, API functions that create new objects increment their reference counts before returning them to C; unless a new object is to be passed back to Python, the C program that creates it should eventually decrement the objects counts. In the extending scenario, things are relatively simple; argument object reference counts need not be decremented, and new result objects are passed back to Python with their reference counts intact.
The upside of reference counts is that Python will never reclaim a Python object held by C as long as C increments the objects reference count (or doesn decrement the count on an object it owns). Although it requires counter management calls, Pythons garbage collector scheme is fairly well-suited to C integration. | https://flylib.com/books/en/2.723.1/a_simple_c_extension_module.html | CC-MAIN-2018-34 | refinedweb | 3,539 | 60.65 |
I will try to make this a purely minimal example to be as applicable to as many people as possible as well as protect any sort of code sharing that might violate an NDA. Hope this is okay!
I am using CppUTest and CppUMock (compiling with gcc/g++ and makefiles created with CMake) in conjunction with Gitlab Continuous Integration software to create a unit testing environment for future commits and release of software. However, I have run into a bit of a problem. Let's say I have the following folder setup (that I have minimal ability to change, other than the contents of the /tests folder):
+-- src
+-- driver1.c
+-- driver2.c
+-- inc
+-- driver1.h
+-- driver2.h
+-- tests
+-- test_driver1.cpp
+-- test_driver2.cpp
+-- main.cpp
+-- cmakelists.txt
void method1(int n) {
mock().actualCall("method1").withParameter("n", n);
}
CMakeFiles/Tests.dir/.../src/driver1.c:(.text+0x11d): multiple definition of 'method1'
CMakeFiles/Tests.dir/.../src/test_driver2.cpp:(.text+0x0): first defined here
driver1.c includes driver1.h (obviously)
driver2.c includes driver2.h (obviously)
driver2.h includes driver1.h (for calling method1)
test cpp files include their respective .h files
(test_driver1.cpp -> driver1.h and test_driver2.cpp -> driver2.h)
If you want to mock
method1 from
driver1.h, just add the mocked definition in a separate mock_driver1.cpp and then in your CMakeLists.txt:
add_executable(target1 test_driver1.cpp driver1.cpp) add_executable(target2 test_driver2.cpp driver2.cpp mock_driver1.cpp)
Once you are done mocking, replace the
mock_driver1.cpp dependency with
driver1.cpp.
This all assumes you have a separate executable for each test driver.
However, if you want to have one big main program where all drivers are linked together, then you cannot have both the real
method1 and the mocked
method1 co-exist together. For that, I'd recommend wrapping the mocked
method1 in a namespace
mock or something like that, and only call
mock::test1 in test_driver2.cpp. | https://codedump.io/share/CYWLXiss6Ldz/1/cpputest-unit-testing-framework-multiple-definition-exception | CC-MAIN-2018-17 | refinedweb | 316 | 62.34 |
a singleton, two-level, colorful, thread-safe, knob-free, logging library for in-house software
Project description
logthis
logthis is a singleton, two-level, colorful, thread-safe, knob-free, logging library for in-house software.
singleton: There is no object to create. There are only two logging functions, say() and err().
two-level: There is only the information level and the error level. Nothing else. We found it way too mentally involving to have more than two logging levels. We want to avoid unnecessary cognitive load at every message (“Is this a warning? Or an information? Or debugging information?”). We don’t think that is important. Either there is a problem and needs to be resolved (so use err()), or everything is fine and no action is required by the operator (so use say()).
colorful: The prefix of a message is colored indicating the log level. This makes reading the logs easier on the eyes and helps direct the attention. Colors are included even when the logging is redirected to a file. We inspect our logs with Unix utilities (cat and the ilk) and find it cool to preserve colors even when we inspect files such as supervisord logs.
thread-safe: We use a global lock so that multi-threaded logging is not garbled. STDOUT and STDERR are flushed on every logging.
knob-free: There are no options or targets/sinks/streams to set. The information is written to STDOUT and the errors are written to STDERR. We found it daunting to learn and deal with all the special knobs in libraries such as Python logging.
in-house software: logthis is meant to be used for the software developed and operated in-house. Its output will be examined by people who are familiar with the code and would like to inspect it on problems. We include the name of the script and the line number in the messages as well as time in UTC so that it is easier to trace bugs and see where in the code the logging comes from.
If you are developing a library or a program for wider audience, then logthis is probably not for you.
Usage
import logthis # inform the user logthis.say("Hello!") # alert the user that there is an error logthis.err("Something bad happened".)
The output is:
Installation
- Create a virtual environment:
python3 -m venv venv3
- Activate it:
source venv3/bin/activate
- Install logthis with pip:
pip3 install logthis
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/logthis/ | CC-MAIN-2018-34 | refinedweb | 429 | 65.22 |
On Sat, 2 Sep 2000, Roman Zippel wrote:> Hi,> > On Thu, 31 Aug 2000, Alexander Viro wrote:> > > Go ahead, write it. IMNSHO it's going to be much more complicated and> > race-prone, but code talks. If you will manage to write it in clear and> > race-free way - fine. Frankly, I don't believe that it's doable.> > It will be insofar more complicated, as I want to use a more complex state> machine than "locked <-> unlocked", on the other hand I can avoid such> funny constructions as triple_down() and obscure locking order rules.Obscure? Look: all locking order rules in two lines - All ->i_sem ones go before all ->i_zombie ones. Within the same group order is just by memory address.You claim that you have something easier to understand? triple_down() is a_sort_ on 3 elements, for fsck sake. Conceptually very simple.> At any time the object will be either locked or in a well defined state,> where at any time only a single object is locked by a thread. (I hope someI don't believe that you can achieve that. Reason: rmdir()/rename() races.> pseudo code does for the beginning, too?) Most namespace operation work> simply like a semaphore:> > restart:> lock(dentry);> if (dentry is busy) {> unlock(dentry);> sleep();> goto restart;> }> dentry->state = busy;> unlock(dentry);Look, could you call it d_set_state(dentry, busy) or something? It wouldreally help reading the thing.> If the operation is finished, the state is reset and everyone sleeping is> woken up. Ok, let's come to the most interesting operation - rename():Show me your rmdir() and lookup(), will you?> restart:> lock(olddentry);> if (olddentry is busy) {> unlock(olddentry);> sleep();> goto restart;> }> olddentry->state = moving;> unlock(olddentry);Are 'moving' dentries busy? From the code below it seems that they arenot, so you are wide-open to e.g. 
file creation in the target (originally- empty directory).> restart2:> lock(newdentry);> if (newdentry->state == moving) {> lock(renamelock);> if (olddentry->state == deleted) {> unlock(renamelock);> unlock(newdentry);> sleep();> goto restart;> }> newdentry->state = deleted;> unlock(renamelock);> } else if (newdentry is busy) {> unlock(newdentry);> sleep();> goto restart2;> } else> newdentry->state = deleted;> unlock(newdentry);Huh? See - here's a problem with your approach: can you tell in onesentence what the piece above does? It's _not_ a nitpick. Debugging suchstuff really requires the ability to say concisely WTF it is supposed toachieve.> if (!rename_valid(olddentry, newdentry)) {Which is...?> lock(newdentry);> newdentry->state = idle;> unlock(renamelock);> lock(olddentry);> olddentry->state = idle;> unlock(olddentry);> wakeup_sleepers();> return;> }> if (newdentry exists)> unlink(newdentry);> do_rename(olddentry, newdentry);Broken. rename() must be atomic.> lock(newdentry);> newdentry->state = idle;> unlock(renamelock);> lock(olddentry);> olddentry->state = deleted;> unlock(olddentry);> wakeup_sleepers();> return;> > Note that I don't touch any inode here, everything happens in the dcache.> That means I move the complete inode locking into the fs, all I do here is> to make sure, that while operation("foo") is busy, no other operation will> use "foo".> IMO this should work, I tried it with a rename("foo", "bar") and > rename("bar", "foo"):> case 1: one rename gets both dentries busy, the other rename will wait> till it's finished.> case 2: both mark the old dentry as moving and find the new dentry also> moving. To make the rename atomic the global rename lock is needed, one> rename will find the old dentry isn't moving anymore and has to restart> and wait, the other rename will complete.OK, just for starters, what happens if two rename() are trying to move thething into different places? Both succeeding? 
What happens if the target(originally empty) gets a new child? Prevent that in fs? Fine, but thenyou've just moved locking of inode pairs (if not triples) into _every_ fsout there.> Other operations will keep only one dentry busy, so that I don't a see> problem here. If you don't find any major problem here, I'm going to tryrmdir() and lookup(), please.> this. Since if this works, it will have some other advantages:> - a user space fs will become possible, that can't even deadlock the> system. The first restart loop can be easily made interruptable, so it can> be safely killed. (I don't really want to know how a Erm... Filesystem that requires killing is about as good as deadlocking.Besides, I'm yet to see a deadlock scenario with userland filesystem thatwould go anywhere near the directory operations. So I'm not sure what youare trying to fix here.> triple_down_interruptable() looks, not to mention the other three locks> (+ BKL) taken during a rename.)> - I can imagine better support for hfs. It can access the other fork> without excessive locking (I think currently it doesn't even tries to).> The order in which the forks can be created can change then too.> > > BTW, I really wonder what kind of locks are you going to have on _blocks_> > (you've mentioned that, unless I've misparsed what you've said). IMO that> > way lies the horror, but hey, code talks.> > I thought about catching a bread, but while thinking about it, there> should also be other ways. But that's fs specific, let's concentrate on> the generic part first.Sorry. I think that you are missing a detail: your variant will need a_lot_ of cruft in every fs. And I think that asking to show a proof-of-concept_full_ thing for some filesystems is not unreasonable. Just to estimatethe amounts of the cruft.> > You claim that it's doable. I seriously doubt it. Nobody knows your ideas> > better than you do, so... 
come on, demonstrate the patch.> > I think the above example should do basically the same as some nothing> doing patch within affs.Not. You've suddenly got a need to deal with a lot of _new_ locking infilesystems. All of them. Please, show the AFFS patch, just to demonstrate how to deal with them. "Let fs maintainers deal with it" is _not_ a validanswer. At least describe what is needed and show how to fix the in-treeinstances. You'll have to fix them if that proposal will get to somethingworkable. I'm not asking for complete patch right now, but IMO at leastone fs must be done _and_ tested. Just to let everyone estimate theresults.> I hope that example shows two important ideas (no idea if they will save> the world, but I'm willing to learn):> - I use the dcache instead of the inode to synchronize namespace> operation, what IMO makes quite a lot of sense, since it represents our> (cached) representation of the fs.>> - Using states instead of a semaphore, makes it easily possible to detect> e.g. a rename loop.Where? You've skipped the most interesting part: check for rename()validity. And yes, it will take some locking in your variant.Look: more states for dentry are needed. No arguments here. I couldprobably even save you some time just digging out the proposal andpre-patches around the same idea. However: * your "ordering" snippet in rename() (if I've parsed you rightand that's what you are trying to do there) is _way_ more complex thandouble_down() and even triple_down(). Why? Because there you can trace allpossible paths of execution - there's finite amount. All you need is toshow that sorting is right. I claim that your retry scheme is inherentlyharder to proof. And unlike the situation in ext2_get_block() (that madeyou retch, IIRC) there's no obvious invariants. * you've completely missed all fun problems with creation ofchildren in the object we are renaming over. It will come back to hauntyou in every local fs that has ->rename(). 
* fs locking becomes much more of a burden. I don't see how it's agood thing. Right now one can write a filesystem and ignore the directoryraces completely, unless he is has operations with side effects on otherdirectories. With your own proposal for extent flipping (horrible, IMO) itwould become a non-issue even on AFFS and I'm yet to see any otherexample. And yes, AFFS can be done simpler than your variant. Complexissues didn't become simpler. Simple ones became complex. VFS also grewand became more complex. That might be justified if it bought ussimplification in filesystems, but it didn't. Notice that you can (rightnow) drop and reacquire the locks if you want to be smart, so it didn'tbuy you even better threading. I don't understand your reaction to ordering, BTW - if you have aclass of locks used only in one place it's not a problem. Overdoing theamount of locks is seriously bad idea, but you didn't decrease it._Sharing_ locking mechanisms between different subsystems is very bad, buthaving a semaphore in private part of struct inode? Makes sense to me.Look: our inodes are strange beasts and there really should be no union(inode->u). There is VFS inode (called vnode in SunOS-derived designs) andthere is an fs inode. The latter happens to be kept together with vnodefor several filesystems. Hystorical reasons... It doesn't change thenature of the beast - you want fs-private, you look into fs-private part.There are 3 objects, not 2 - dentry->vnode->inode. The latter is optional_and_ fs-only. BTW, your design doesn't exclude deadlocks, even if you manage tolock (as in "down()/up()") only one dentry at time. You can deadlock onthe dentry state, busy looping. And the fact that you've got more statesjust makes analysis harder...-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgPlease read the FAQ at | https://lkml.org/lkml/2000/9/2/66 | CC-MAIN-2018-39 | refinedweb | 1,598 | 66.23 |
Monitor your ML runs live wherever you are
Would you like to know that your model training failed, the loss stopped improving, or the GPU consumption is going crazy? You can. Run experiments anywhere, log anything you want, and see it all in one place.
It is a good practice to have control over your model training process
And hundreds of data scientists are using Neptune to see that their experiments are running smoothly.
_0<<
Michał Kordas
Machine Learning Researcher @TensorCell
Is this honestly the best way to organize my ML experiments?
SSH + console logs
Open-source solutions like MLflow, TensorBoard or Sacred
Monitor your ML models in 3 steps
Add a few lines to your scripts
Connect Neptune to your project by adding literally 3 lines on top of your scripts. Then you just run your training and evaluation and log whatever you care about.
For most machine learning frameworks you don’t even have to write those logging calls -> We created the integrations for you!
import neptune neptune.init('Me/MyProject') neptune.create_experiment(params={'lr':0.1, 'dropout':0.4}) # training and evaluation logic neptune.log_metric('test_accuracy', 0.84) neptune.log_image('model predictions', image)
Run your experiments the way you usually.
Before
python main.py
After
python main_with_neptune_lines.py
Monitor your experiments in the app
See your learning curves, hardware consumption, console logs as your model is running.
If you log ROC curves, image predictions, model checkpoints, or other things after every iteration you can scroll through them and see the progress live.
Start collaborating on experiments in minutes with our integrations
Are you thinking “Ok but, do I have to write the logging/callback functions myself?”
If you are using Keras, XGBoost, Optuna, or one of the 20+ libraries that we integrate with you don’t need to implement anything to monitor your experiments.
Python
AWS SageMaker
Google Colab
What our users say
Over 5,000 ML people started monitoring their experiments with Neptune this year – read what some of them have to say:
“If you need to monitor and manage your machine learning or any other computational experiments, Neptune.ai is a great choice. It has many features that can make your life easier and your research more organized.”
Boaz Shvartzman
Computer vision researcher and developer @TheWolf
“I’m working with deep learning (music information processing), previously I was using Tensorboard to track losses and metrics in TensorFlow, but now I switched to PyTorch so I was looking for alternatives and I found Neptune a bit easier to use, I like the fact that I don’t need to (re)start my own server all the time and also the logging of GPU memory etc. is nice. So far I didn’t have the need to share the results with anyone, but I may in the future, so that will be nice as well.”
Ondřej Cífka
PhD student in Music Information Processing at Télécom Paris
_30<<
Michał Kordas
Machine Learning Researcher @TensorCell
They already have their ML experimentation in order.
When will you?
✓ Add a few lines to you code
✓ Get back to running your experiments | https://neptune.ai/use-cases/monitor-machine-learning-runs-live | CC-MAIN-2020-50 | refinedweb | 520 | 50.67 |
JSON Adapter for ABAP Function Modules
This blog explains how to use an ABAP class that allows calling function modules with a URL and a JSON representation of ABAP data.
The code for this project is available in this GitHub repository: cesar-sap/abap_fm_json · GitHub.
If you have problems installing it via SAPLink or abapGit, try the ABAP transport files here: abap_fm_json/transport/ at master · cesar-sap/abap_fm_json · GitHub.
ABAP Function Modules are one of the best and most useful features of the ABAP application server. They provide encapsulation of functionality in an easy-to-use package. They are the basis of ABAP's ability to interoperate with the external world, thanks mostly to two built-in features: the RFC library (where the F comes from "Function") and the SOAP adapter, which allows SOAP-compliant Web Services to be built from function modules.
ABAP Function Modules are also great for one more thing: they provide a name-based parameter interface. This is nothing new to ABAP developers, but it allows an incredibly flexible way of passing parameters, including references, complex nested structures and tables. In short, with ABAP Function Modules you have the full power of the Business Suite at your service.
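This name-based interface is what makes a JSON mapping so natural: parameter names become object keys, structures become nested objects, and tables become arrays. A small Python sketch of the idea (the parameter names below are made up for illustration, not a real function module interface):

```python
import json

# Hypothetical named parameters of a function module call, expressed as JSON:
# simple parameters map to scalars, structures to objects, tables to arrays.
call = {
    "CUSTOMERID": "00001234",                                    # simple parameter
    "CUSTOMER_DATA": {"NAME": "Smith", "CITY": "Berlin"},        # flat structure
    "BOOKINGS": [{"CONNECTID": "2402"}, {"CONNECTID": "0017"}],  # internal table
}
print(json.dumps(call, indent=2))
```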
But the RFC library, while great, is binary-only, can be cumbersome to use, and is not available for all platforms. The SOAP adapter, while good for SOA-enabled sites, can also be more complex than necessary and may have an overhead that makes it a non-option for some use cases, such as data ready for consumption by user interfaces.
Just imagine how great it would be to call a function module with a URL:
and get the results of the function module call in JSON format:
{
  ...
}
Furthermore, imagine that the function module has input parameters, and we can pass them as part of the query string:
And get the details of the flight:
{
  "ADDITIONAL_INFO": {
    "FLIGHTTIME": 65,
    "DISTANCE": "555.0000",
    "UNIT": "KM",
    "UNIT_ISO": "KMT",
    "PLANETYPE": "DC-10-10",
    "FLIGHTTYPE": ""
  },
  "AVAILIBILITY": {
    "ECONOMAX": 380,
    "ECONOFREE": 0,
    "BUSINMAX": 41,
    "BUSINFREE": 0,
    "FIRSTMAX": 18,
    "FIRSTFREE": 0
  },
  "FLIGHT_DATA": {
    "AIRLINEID": "LH",
    "AIRLINE": "Lufthansa",
    "CONNECTID": "2402",
    "FLIGHTDATE": "2013-01-28",
    "AIRPORTFR": "FRA",
    "CITYFROM": "FRANKFURT",
    "AIRPORTTO": "SXF",
    "CITYTO": "BERLIN",
    "DEPTIME": "10:30:00",
    "ARRTIME": "11:35:00",
    "ARRDATE": "2013-01-28",
    "PRICE": "242.0000",
    "CURR": "EUR",
    "CURR_ISO": "EUR"
  },
  "EXTENSION_IN": [],
  "EXTENSION_OUT": [],
  "RETURN": [
    {
      "TYPE": "S",
      "ID": "BC_IBF",
      "NUMBER": "000",
      "MESSAGE": "Method was executed successfully",
      "LOG_NO": "",
      "LOG_MSG_NO": "000000",
      "MESSAGE_V1": "",
      "MESSAGE_V2": "",
      "MESSAGE_V3": "",
      "MESSAGE_V4": "",
      "PARAMETER": "",
      "ROW": 0,
      "FIELD": "",
      "SYSTEM": "NXK001"
    }
  ]
}
Isn’t it great?
And then, imagine that you have to pass some ABAP structure or table to the function module. You could just go and write it in JSON and pass it to the function module in the payload of a POST request:
This will be the response, showing that a new booking has been created:
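As an illustration of what such a client-side POST could look like, here is a minimal Python sketch. The host, the `/fmcall` service path (mentioned later in the comments), the function module name and the parameter names are all assumptions, not taken from the original example:

```python
import json

# Assumed service location -- adjust host, port and ICF path to your system.
BASE = "http://sapserver:8000/fmcall"

def booking_payload():
    """Build the JSON payload for a hypothetical booking call.
    Parameter names here are illustrative, not a real BAPI interface."""
    return json.dumps({
        "RESERVE_ONLY": "",
        "BOOKING_DATA": {
            "AIRLINEID": "LH",
            "CONNECTID": "2402",
            "FLIGHTDATE": "2013-01-28",
            "CUSTOMERID": "00000001",
        },
    })

payload = booking_payload()

# The actual HTTP call (needs network access and credentials):
# import requests
# r = requests.post(BASE + "/BAPI_FLIGHT_CREATEBOOKING?format=json",
#                   data=payload, auth=("user", "secret"))
# print(r.json())
```

The key point is that the whole function module interface travels as one JSON object in the request body.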
Output formats
JSON (the JavaScript Object Notation) is the basis for this class. The default output format is JSON and the only supported input format (apart from the query string params) is JSON. But other output formats have been implemented as well.
JSON has seen an enormous increase in its use in the last years, especially for its suitability for direct use in JavaScript-based user interface communication, like HTML5 and AJAX. JSON can be consumed directly in a JavaScript program just by "evaling" it, and its low overhead has also found its way into many other languages and communication requirements.
JSON is normally invoked with the JavaScript XMLHttpRequest call, normally as an asynchronous call. But sometimes, the HTML tag <script> is used to place a JSON call. In this case, the <script> tag would need a real JavaScript program instead of a JSON string. For this, many servers implement a JSON “padded” call (or JSONP), in which the JSON data is sent as the parameter of a JavaScript function. This is normally indicated by the existence of a URL parameter with the name “callback”, as seen here:
Which gives the following output:
jsfunction({"current_resources":4,"maximal_resources":5,"recommended_delay":0,"rfcsi_export":{"rfcproto":"011","rfcchartyp":"4103","rfcinttyp":"LIT","rfcflotyp":"IE3","rfcdest":"seppuku_NXK_00","rfchost":"seppuku","rfcsysid":"NXK","rfcdatabs":"NXK","rfcdbhost":"seppuku","rfcdbsys":"ADABAS D","rfcsaprl":"731","rfcmach":" 390","rfcopsys":"Linux","rfctzone":" 3600","rfcdayst":"","rfcipaddr":"172.16.122.128","rfckernrl":"721","rfchost2":"seppuku","rfcsi_resv":"","rfcipv6addr":"172.16.122.128"}});
This allows full operation for function modules calls from every kind of JavaScript application.
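As a sketch of what a non-browser client has to do with such a padded response, the following Python helper (my own illustration, not part of the ABAP class) strips the padding function call and parses the JSON inside:

```python
import json
import re

def unwrap_jsonp(text):
    """Strip the padding function call from a JSONP response and return the
    parsed JSON payload. Assumes the shape callback_name({...}); as produced
    by the adaptor's callback option."""
    m = re.match(r'^\s*[\w.$]+\((.*)\)\s*;?\s*$', text, re.S)
    if not m:
        raise ValueError("not a JSONP response")
    return json.loads(m.group(1))

# Shortened version of the response shown above:
sample = 'jsfunction({"current_resources":4,"maximal_resources":5});'
data = unwrap_jsonp(sample)
```

In a browser this unwrapping is unnecessary, of course: the `<script>` tag simply invokes the callback function directly.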
Once the basic JSON ABAP serialization was completed, I thought it would be a good idea to give you some choice in the output format, just as Google does with most of their public APIs.
Try this:
And you will get the output of the function call in a simple XML representation:
<RFC_SYSTEM_INFO>
<CURRENT_RESOURCES>4</CURRENT_RESOURCES>
<MAXIMAL_RESOURCES>5</MAXIMAL_RESOURCES>
<RECOMMENDED_DELAY>0</RECOMMENDED_DELAY>
<RFCSI_EXPORT>
<RFCPROTO>011</RFCPROTO>
<RFCCHARTYP>4103</RFCCHARTYP>
<RFCINTTYP>LIT</RFCINTTYP>
<RFCFLOTYP>IE3</RFCFLOTYP>
<RFCDEST>seppuku_NXK_00</RFCDEST>
<RFCHOST>seppuku</RFCHOST>
<RFCSYSID>NXK</RFCSYSID>
<RFCDATABS>NXK</RFCDATABS>
<RFCDBHOST>seppuku</RFCDBHOST>
<RFCDBSYS>ADABAS D</RFCDBSYS>
<RFCSAPRL>731</RFCSAPRL>
<RFCMACH>390</RFCMACH>
<RFCOPSYS>Linux</RFCOPSYS>
<RFCTZONE>3600</RFCTZONE>
<RFCDAYST/>
<RFCIPADDR>172.16.122.128</RFCIPADDR>
<RFCKERNRL>721</RFCKERNRL>
<RFCHOST2>seppuku</RFCHOST2>
<RFCSI_RESV/>
<RFCIPV6ADDR>172.16.122.128</RFCIPV6ADDR>
</RFCSI_EXPORT>
</RFC_SYSTEM_INFO>
And this:
In order to have the function module output in YAML format:
--- #YAML:1.0
And while I was at it, and remembering my Perl background, I thought it would be really fun to have ABAP just dump its data in Perl Data::Dumper format, which is quite close to JSON:
Have fun with Perl:
$RFC_SYSTEM_INFO = {'CURRENT_RESOURCES' => 4,'MAXIMAL_RESOURCES' => 5,'RECOMMENDED_DELAY' => 0,'RFCSI_EXPORT' => {'RFCPROTO' => '011','RFCCHARTYP' => '4103','RFCINTTYP' => 'LIT','RFCFLOTYP' => 'IE3','RFCDEST' => 'seppuku_NXK_00','RFCHOST' => 'seppuku','RFCSYSID' => 'NXK','RFCDATABS' => 'NXK','RFCDBHOST' => 'seppuku','RFCDBSYS' => 'ADABAS D','RFCSAPRL' => '731','RFCMACH' => ' 390','RFCOPSYS' => 'Linux','RFCTZONE' => ' 3600','RFCDAYST' => '','RFCIPADDR' => '172.16.122.128','RFCKERNRL' => '721','RFCHOST2' => 'seppuku','RFCSI_RESV' => '','RFCIPV6ADDR' => '172.16.122.128'}};
It shouldn’t be difficult to create your favourite data representation, look at the code and develop your own ideas.
Available options
You can add several options as part of the query string. This is a summary of them:
- ?lowercase=X -> outputs var names in lower case (not available when using the built-in ID transformation)
- ?show_import_params=X -> include import params in the response
- ?callback=<callback_name> -> returns json output enclosed in a JavaScript function call with name <callback_name> (also known as jsonp, for json padded)
- ?format=<data_format> -> returns output in specified <data_format>, which can be: json, xml, yaml, perl. Defaults to json.
Output format is determined by the info sent by the client in the Accept HTTP header.
If specified as a query string option, it will take precedence over the Accept header.
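Putting the options together, a small client-side helper can assemble such URLs. This is only a sketch: the base path is an assumption (adjust to wherever you created the ICF service), while the option names are the ones listed above:

```python
from urllib.parse import urlencode

def fm_url(base, fm_name, params=None, fmt=None, callback=None, lowercase=False):
    """Build a call URL for the function module adaptor described above.
    base is the ICF service URL, e.g. http://host:8000/fmcall (assumed path)."""
    query = dict(params or {})
    if fmt:
        query["format"] = fmt          # json (default), xml, yaml, perl
    if callback:
        query["callback"] = callback   # JSONP padding function name
    if lowercase:
        query["lowercase"] = "X"       # lower-case variable names in the output
    url = "%s/%s" % (base.rstrip("/"), fm_name)
    if query:
        url += "?" + urlencode(query)
    return url

url = fm_url("http://sapserver:8000/fmcall", "BAPI_FLIGHT_GETDETAIL",
             {"AIRLINEID": "LH", "CONNECTIONID": "2402",
              "FLIGHTDATE": "2013-01-28"},
             fmt="json")
```

Setting `fmt` explicitly, as done here, overrides whatever the Accept header says, per the precedence rule above.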
Installation
The code is provided to you in saplink NUGG format. The code is contained in an ABAP class called ZCL_JSON_HANDLER. You just have to put it in an ICF handler. Some considerations for installation follow:
Security and ABAP roles
The ZCL_JSON_HANDLER class contains a user defined ABAP authority check object called Z_JSON. This object contains a single field called FMNAME. In order to protect access to function modules it is highly recommended that each user has a list of the function modules he or she is authorized to call with this handler. A single * will effectively allow access to all function modules.
Due to limitations in saplink, the authorization object is not created when you import the nugget. You have to manually create it with transaction SU21.
Create the ICF Service
Once you have the class installed and have created the authorization object, the next step is to create an ICF handler in transaction SICF:
Adjust logon data at will and then point your browser to the service URL. You are now ready to start!!
Acknowledgements and a bit of history
This little project doesn't come out of nothing; as with everything done in technology nowadays, we can see further because we are standing on the shoulders of giants. There are many JSON-for-ABAP development projects out there, and JSON support is already built into the core of ABAP. This nice feature is already available in this module for the JSON and XML converters.
A great characteristic of function modules in ABAP is that they have a dynamic way of defining function parameters. Additionally, function modules are already very well suited to the direct request->response model that is so in vogue today with the Web. So we only needed a way of escaping the RFC protocol and making all this richness directly accessible to HTTP calls. I got the idea for all this from a very old blog post by Piers Harding: You don't need to use soap to keep your rpc clean. That article is from 2004, and we actually implemented Piers' idea around that time, using YAML as the serialization format. In 2006 we changed the main focus to JSON instead of YAML, and we have been using that old version for a long time in internal and partner projects. Last year, some developments happening around made me have a look at all this, and I could find the time to completely rewrite it cleanly and make it ready for publishing. So sorry for the delay, but better late than never. Hope this can still be useful to someone.
I could have never done this without the help of Juan Diaz, who opened my eyes to the incredible richness of ABAP dynamic data programming. He actually developed parts of this and is the main user of this code. Thanks a lot Juan.
I also want to thank all my colleagues at SAP that have patiently listen to me (or at least pretended 🙂 ) when I was talking about this and when I showed them the examples. I also thank them for their patience at standing at my ever changing mood: Ernesto, Félix, Miguel Angel, José, Cony, Cristina and Aviad, thanks a lot for being there.
Getting the code
The code used to be available in Code Exchange, but that link is not available anymore, as SAP decided that Code Exchange should die.
The code is now in GitHub here: cesar-sap/abap_fm_json · GitHub.
If you have problems getting the code, please send me a private message.
Join the project and please report bugs and ideas for improvement. Thank you!
César,
thanks for pointing me to your blog. We are working in the same field and even seem to have similar preferences. For example: I appreciate that you are considering the equally useful and simple YAML format as well. And also, I am a Perl fan: When rambling through the Perl language universe, I am feeling like Alice in Wonderland, at every glance discovering a great, fascinating feature or idiom. 🙂
Yes, many of us (the ABAP developers’ community) have created own JSON parsers and JSON builders, which was an interesting experience anyway. But now, with the built-in JSON transformer, the product of this work is obsolete. If you can force the canonical JSON format (derived from the ABAP data with the identity transformation) to be used on the client side, then the “Build JSON” transformation is a three-liner, and the “Parse JSON” operation a one-liner (essentially CALL TRANSFORMATION ID). The more deviations from the canonical format you have, the more (usually small) <xsl:template>’s will have to be added to a custom XSLT transformation, as described in my blog on REST APIs (which is not a big deal either). You may find my test report useful with which I explored this technique.
Cheers,
Rüdiger
Hi Rüdiger,
I fully agree with you. Yesterday I just added a new method to my class to use the built-in transformation for parsing JSON, so we go in the same way. Your test report was really useful for me to understand it. I’ve kept the ‘old’ implementation mostly for the reason that the new one only works in new releases, and you know, there are plenty of old releases running out there.
I found the experience interesting as you point. I was using this code for a long time and thought that it could add something interesting to the community, especially for the direct mapping to function modules, and also for the fact that we could use other formats, like YAML or Perl (although they could also be done with a transformation).
Thanks a lot for your comment, and your praise of Perl!!! (that I also share).
César.
Hi Cesar,
Sorry for bothering you with this question, but I'm new to this. So my question is as follows: what are the requirements (addons, components, minimum release, etc.) from the SAP side to get and send data in any scenario that uses JSON on the other side?
Thanks and regards
Hi Diego,
as for the release from which the built-in transformation ABAP<->JSON is available: in his recent blog on JSON in ABAP, Horst Keller writes
And Stefan Riedel-Seifert adds:
Regards,
Rüdiger
Hi Rüdiger,
Thanks a lot for your contribution, it is really useful to have the details here. You may want to know that I have already included the ID transformation in this module for the JSON and XML converters. It is an optional change ready to be activated in the handle_request method (uncomment the lines with calls to methods serialize_id and deserialize_id). They have very much better performance especially for big and complex ABAP structures, and of course, much easier coding: one liners.
Regards,
César.
Hi Diego,
If you want to use the new built-in transformation that Rüdiger comments, then you need to stick to the releases that support it mentioned in his reply. However, this module could be run with the pure ABAP JSON converter in an ABAP kernel starting from 6.40 with only small code changes. I know of people running it in 7.0SP14. Tell me which specific ABAP version you need it to run on to see if we can help.
Regards,
César.
Hi Cesar,
in my “JSON Document Class” I’m checking the current Kernel Version and Service Pack Level to decide whether I can use the simple transformation or not. Maybe this would be interesting for your project, too.
Uwe
Hi Uwe,
Thanks a lot for your proposal!! it is a great idea. Are you using the delivery* function modules for getting release information? I’ll check your project to see how you approached the implementation.
Regards,
César.
For the kernel version I'm using (the release is always the 12th entry in table "kernel_version" and the version the 15th entry). Not my invention, it's like the code in the Kernel Info Window in SAPGUI.
For the Basis Release check I’m using the following trick:
Hello, After setting up the ICF service, I am getting error 403 – you are not authorized to invoke this function module. Can anyone please help me out with this error?
Hi Abhishek,
I put an authorization check in the code on purpose to make sure that the default configuration is secure. To enable it, you have two options:
I should have documented this as Uwe told me, mea culpa, sorry.
Hope this helps.
César.
Wow, thanks a lot Cesar, you made my day! It works like a charm, I have no words to emphasize how useful this adapter is for my project. Thank a ton for your awesome work!
Hi Cesar,
I have a question about passing importing, changing and table parameters via POST. For example, BAPI_INCOMINGINVOICE_CREATE1 has
Importing Parameters:
HEADERDATA
ADDRESSDATA
INVOICESTATUS
Table parameters:
ITEMDATA
ACCOUNTINGDATA
GLACCOUNTDATA
MATERIALDATA
TAXDATA
WITHTAXDATA
VENDORITEMSPLITDATA
RETURN
EXTENSIONIN
EXTENSIONOUT
TM_ITEMDATA
Here, how can I specify which one is importing and which one is table parameter when doing a POST while passing them in message body in JSON format?
Hi Abhishek,
You don’t have to specify whether they are tables or import params. Just put them in the JSON payload. Actually, you don’t even have to worry about the param order.
The distinction between import and tables is not relevant anymore. Actually, the use of “TABLES” is currently discouraged for new function modules. It is preferred to use the table data type as import or export params. Here you’re dealing with a BAPI, so you have to follow the convention, as they use TABLES params. But, from the point of view of this adaptor, there is no difference.
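For illustration, the request body is simply one JSON object whose top-level keys match the parameter names, regardless of where they sit in the interface. A Python sketch (the field values below are made up; only the parameter names come from the BAPI interface listed above):

```python
import json

# One flat JSON object: the adaptor maps each top-level key to the matching
# function module parameter, whether declared under IMPORTING or TABLES.
invoice_request = {
    "HEADERDATA": {"INVOICE_IND": "X", "DOC_DATE": "2013-01-28"},  # IMPORTING
    "ITEMDATA": [                                                  # TABLES
        {"INVOICE_DOC_ITEM": "000001", "PO_NUMBER": "4500000001"},
    ],
    "TAXDATA": [],                                                 # TABLES
}
payload = json.dumps(invoice_request)
# POST this payload to .../fmcall/BAPI_INCOMINGINVOICE_CREATE1
```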
Hope this helps,
César.
This is great. Thanks a lot!
Hi,
How to get your class details? I went to the page, but what to do with that nugg file?
—
Regards,
Vinay
Most of the ABAP Code Exchange projects are working with SAPlink. With SAPlink you can import and export nearly all SAP development objects. “.nugg” files are so called “Nuggets” and contain the development objects.
See
Hello César,
Good writing, and I seem to understand it well in spite of the fact that I am just an ABAPer of 3 years who tries to scratch the surface on connectivity.
I don't know whether it's totally relevant, but I'll go ahead and say it.
I found this blog Android and RESTFul web service instead of SOAP by Michael Hardenbol really interesting and suitable for communication with few data for interchange.
And your technique with JSON will be useful for transporting deep structures and tables. Both are SOAP free!
Hi Yuvaraj,
Thanks a lot for your nice words.
“Salomon saith. There is no new thing upon the earth.” As Piers said long time ago: You don’t need to use soap to keep your rpc clean. 🙂
So, relevant it was, relevant it is.
I’m really happy that you can find this class useful. Thanks a lot again for your comment.
César.
Cesar,
thanks for pointing me to Piers Harding’s excellent blogpost. I didn’t know it – it’s amazing that some very straightforward things – like REST – need some years to break their way. It seems that first the detour has to be experienced exhaustively, up to its ultimate flaws. 🙁
Cheers,
Rüdiger
Hi Rüdiger,
Piers’ post was the one that started it all 🙂 , at least from my point of view. This is the reason I included YAML as one of the formats (actually I began everything in YAML), and added JSON in 2006, when it became clear that JSON was going to be the way to go. You are right with experiencing new detours, it is often very difficult to break away from established views. I know I should have released this long ago, but I had several professional detours and the code remained languishing long time until I found the right motivation last year to upgrade it.
Kind regards,
César.
Hi Cesar,
Is it necessary to provide the input JSON tags in all caps? For example:
{
"AMOUNT": "100"
}
and not
{
"amount": "100"
}
In my case I am passing a complex structure which is showing a weird case sensitivity. I have a table like below within a deep structure:
"AMOUNT":[{
"TAX":"A3",
"CURR": "EUR",
"LOCAL_CURR": "100"
},
{
"TAX":"A3",
"CURR": "EUR",
"LOCAL_CURR": "100"
}]
The above JSON input is throwing deserialization error and when I make “AMOUNT” as “amount”, there is no error but it does not populate this structure and returns it empty. On debugging I found that it is raising this exception at line 38 in method DESERIALIZE_ID of class ZCL_JSON_HANDLER in statement
CALL TRANSFORMATION id SOURCE XML json
RESULT (rtab).
when “AMOUNT” is in all caps but no hint elsewhere when it is all small case.
I am not getting any hints for what is happening beyond it. Can you please help me out in figuring out what is going on here?
Hi Abhishek,
You have just made a good point here. The ID transformation is really a canonical transformation. ABAP variable names are uppercase. So, strictly speaking there is no “amount” variable, and thus the ID transformation cannot fit it with the rtab structure and fails. As far as I know, there is no way to make the ID transformation case independent. If someone knows how to do it, I’d be really thankful if they post the solution.
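One possible client-side workaround (a sketch of my own in Python, not part of the ABAP class) is to upper-case the JSON keys recursively before sending, so the canonical ID transformation can match the ABAP variable names:

```python
def upper_keys(obj):
    """Recursively upper-case all object keys in a JSON-like structure,
    leaving lists and scalar values untouched."""
    if isinstance(obj, dict):
        return {k.upper(): upper_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [upper_keys(v) for v in obj]
    return obj

payload = upper_keys(
    {"amount": [{"tax": "A3", "curr": "EUR", "local_curr": "100"}]})
```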
Fortunately, I was aware of this problem and wanted to give people the chance to use lowercase as well. So the “old” deserializer was developed with this in mind. So if you want the class to accept lowercase variables, you can use the JSON_DESERIALIZE method instead of DESERIALIZE_ID. Just activate it in the corresponding line of method HANDLE_REQUEST.
The caveats of using the old method instead of the new one have been commented above, and can be summarized as follows: dependency on the deprecated CL_JAVA_SCRIPT class and better performance in the ID method. But I personally keep using the old method in several systems without any major problem (and feeding it with lowercase params 🙂 ).
Thanks a lot for your comment. Hope this helps.
César.
Hi Cesar,
I have a situation in my project where the JSON body will be passed like with a few letters as capital and rest of them in lower case, something like
{“Amount”:{…….}}
or
{“MsgID”:{…….}}
When you pass a JSON body like this, all the ABAP variables are found to be empty. I was looking for a way to make this compatible with the JSON adapter. Can we make some changes in the javascript regex statement to resolve this issue?
Hi Abhishek,
I’ve made some corrections to the class to allow for this. Please get it at:
Please review it and tell me how it goes. I’ll put the changes in the mainstream shortly.
Regards,
César.
Thanks Cesar, I will try it out and let you know; however, can you please check the formatting of the code you have specified in that link? Double quotes have become &quot; and so on – can you please paste it as plain text?
Also, do you have any suggestions on how I can get the remote IP and hostname of the machine that is calling this web service? That will be really helpful for logging as well as filtering the incoming requests.
I really appreciate the work you are doing for this project and I am sure many will be benefited out of this. It is something which should have been built into SAP ABAP framework.
Abhishek,
Click on the “Download” button in the page, and get the slnk file. Do not worry about its contents, just upload it with saplink.
If you want to retrieve the client IP address, use the following in the HANDLE_REQUEST method:
remote_addr = server->request->get_header_field( name = '~remote_addr' ).
Regarding what is included in the standard ABAP framework, from 7.40 onwards, the new REST library is worth a look:
Thanks a lot for your praise of this module.
César.
Hi Cesar,
I am really sorry I won’t be able to import your slink file. I did some modifications to the code relevant to my organization. If I import your slink file, it will overwrite my code. It will be really helpful if you can highlight your code changes or if you can just mention your changes here in comments.
Never mind, I was able to figure it out. César, there is another problem – the function module Z_JSON_IN does not tolerate spaces around the colon (:) within a JSON message (like a space before or after a colon). It gives short dumps. Can that be fixed?
Hi again Abhishek,
Can you send me by mail a copy of your function module Z_JSON_IN? I don’t see this problem happening in the class exposed here.
Thanks,
César.
Actually, the problem does not occur when using CALL TRANSFORMATION that is – when using CALL METHOD me->deserialize_id, it happens when I use CALL METHOD me->json_deserialize – I was using this method since I have to deal with lower and upper case mixed tags in JSON. Also, I would suggest to use CALL TRANSFORMATION since it would handle escape characters as well. However CALL TRANSFORMATION accepts only upper case tags. I would suggest converting all tags to upper case and then pass it to CALL TRANSFORMATION, in that way you do not have to write your own JSON parser.
Hi Abhishek,
The problem you mention with the spaces before/after the colon doesn’t happen even with the ABAP based converter, at least with the last revision in the slinkee file. Send me your JSON by mail to check what is happening. Notice that the method JSON_DESERIALIZE has also been updated in the slinkee.
You have the DESERIALIZE_ID method that uses the standard transformation, it is optional to everybody to decide if they use the old converter or the new one. You know the caveats of each choice.
I thought about going with your idea of converting the JSON var names to upper case, but I don’t think it is a good idea, you have to fiddle with the input JSON string, but if you can implement it, you’re more than welcome to contribute your code.
I have several reasons to keep the old implementation, the most important being that several people are using this class in older releases that don’t support the JSON transformation. You are free to choose the one that suits you better.
Hope this helps,
César.
Hi Abhishek,
You only have to review methods JSON_DESERIALIZE and JSON2ABAP and compare with your changes.
Please contact me directly via mail if you want a copy of the methods.
Regards,
César.
I am sorry, that was my bad, it is working perfectly. Thanks!
Hi again Abhishek,
I’ve just realized that your JSON is also throwing a serialization error. You should send a complete JSON object enclosed in braces, so your JSON should look like this:
{"AMOUNT":[{
"TAX":"A3",
"CURR": "EUR",
"LOCAL_CURR": "100"
},
{
"TAX":"A3",
"CURR": "EUR",
"LOCAL_CURR": "100"
}]}
Try this and tell me how it goes.
Thanks, that solved the problem.
Hello,
I’m trying to access the nugg file for this project (NUGG_ABAP_FM_JSON_HANDLER.nugg) but I get redirected to a page asking me to accept the terms of use, which I already did. I also received a confirmation email for codex membership.
Hi Laurent,
Try the following two links and tell me if it gets any better:
Regards.
It finally worked somehow after I received another confirmation from SCN, thanks!
Very interesting Cesar, thank you very much.
I see a lot of possibilities for this in my customers
Hello Again Caesar,
I have a question about your idea to call an FM with a URL like the ones above.
I am trying to access the service that I have coded but it is not reachable when I call it from outside the network (VPN).
I searched around for how to expose a web service in SAP outside its network and I ran into a post which asked me to configure a reverse proxy in a DMZ and use the proxy URL. But I am not sure how to do it. Do you have any idea on this?
Hi Yuvaraj,
Configuring this kind of access is a huge topic that you must study carefully. There are many security implications that you have to discuss with your organization.
You may start reading some documents in SCN and SAP Help to get a glimpse of how that works.
Hope this helps,
César.
Hi Cesar,
I am unable to download nugget file to a readable format.
Can you please help me in this regard?
Tried several times with SAPLINK. Can you mail me that JSON nugget file which is in readable format.
Thanks.
Visit this link. You will get the NUGG file. Then get the source from this link. Using SAPLink, first import the NUGG file. Then use the second tab to import the slnk file. Lastly, activate the objects.
thanks for this blog..
Few of my doubts got cleared.. 🙂
Cesar – wow. This is excellent – thank you. One thing I struggled with (maybe it was me or maybe it wasn’t)… passing more than simple data types, like structures and tables. For example: a range table of plants I EQ 0010 and I EQ 0040.
I resolved it by improving the algorithm that maps the query string name/values to json. Now I can call a function like this:
http://…./<function_module>
?IT_WERKS[]=SIGN=I;OPTION=EQ;LOW=0010
&IT_WERKS[]=SIGN=I;OPTION=EQ;LOW=0040
Or pass structured data like this:
http://…/<function_module>
?IS_DATE{}=SIGN=I;OPTION=BT;LOW=20131201;HIGH=20131231
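A client-side encoder for this scheme could be sketched like this (assuming the syntax exactly as shown above; note that the values are not URL-escaped here):

```python
def range_param(name, rows):
    """Encode a list of row dicts into the name[]=F1=v1;F2=v2 query-string
    form proposed above. Field order follows the dict order; semicolons
    separate the fields within one row."""
    parts = []
    for row in rows:
        fields = ";".join("%s=%s" % kv for kv in row.items())
        parts.append("%s[]=%s" % (name, fields))
    return "&".join(parts)

qs = range_param("IT_WERKS", [
    {"SIGN": "I", "OPTION": "EQ", "LOW": "0010"},
    {"SIGN": "I", "OPTION": "EQ", "LOW": "0040"},
])
```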
If interested, you can find my update to the handler here. Please advise if I should get it to you another way. Without code exchange I wasn’t sure how to handle it. 😐
Thank you!
Jeff
Hi Jeff,
First of all, let me tell you that I really appreciate your contribution. I had decided early to limit the passing of parameters in the query string to simple values in the import section of the function module definition. One of the reasons was to find a correct syntax, taking into account that the idea of the query string is to pass only simple name/value pairs. So tables and structures were out of consideration.
But I think you’ve found a very interesting way to handle this case. Please allow me a few days to fully understand it and merge it into my code, so I can consider putting it mainstream.
Please review if it works well using the semicolon as a separator. Take into account that in the query string specification the semicolon is also allowed as a separator just like the ampersand. I have to check how the ICF handles this.
Thanks a lot and happy new year!!!
César.
Thanks César. The document is very useful.. 🙂
Hi all,
I’ve finally created a GitHub repository to replace the old Code Exchange:
cesar-sap/abap_fm_json · GitHub
Find the code and updates there from now on.
Best regards,
César.
Thank you for this post, it’s really helpful.
Very helpful post. Working perfectly with my SAPUI5 project.
Thank you very much!
Hi Cesar,
I have a question. I am good to go, but there is just one thing I need to confirm.
Why am I getting tags in my output? I have just two parameters, one import and one export; the export parameter is a JSON string.
Here it is: my url:
Output:
<ZCRM_GET_ALL_SURVYS2>
<STR_JSON>
[{…},{"survey_id":"74D4350C27D51ED3A7EE3535695229BA","survey_name":"testing_audit1"},{"survey_id":"74D4350C27D51ED3A7EF7A8AE41989BA","survey_name":"this is the description"},{"survey_id":"74D4350C27D51ED3A986D98E0E9CA9BA","survey_name":"0005"},{"survey_id":"74D4350C27D51ED3AA830BA56DAFA9BA","survey_name":"Aster Audit 6M"}]
</STR_JSON>
</ZCRM_GET_ALL_SURVYS2>
Hi Ahmed,
The correct query string field separator is '&', not '?', so you should rewrite your URL as follows:
In the way you had it written before, the “Accept” HTTP header is taking precedence, and in most browsers it contains ‘xml’, so the adaptor uses the XML conversion. To force JSON be sure you include the &format=json field.
Tell me if it works now.
Regards,
César.
Format is JSON now. Thanks a lot, but I am getting '\' now.
For example:
{"STR_JSON":"[{\\"}]"}
Why is that so?
Hi Ahmed,
Whether you believe it or not, this is the correct result. You are just putting a JSON string into an ABAP variable (probably of STRING type). So, the adapter understands that it has to quote it.
What you really need to do is to put the Surveys in an ABAP structure in the Function Module interface. Then, let the adaptor do the conversion to JSON.
Could you please send me your Function Module code if you need some help?
Thanks,
César.
I have verified the JSON code. It's good. Still, I have sent it to your email address. Please let me know if I am missing anything.
Thanks,
Hi Ahmed,
I gave you some hints in a private mail. Please tell us how it goes.
Thanks,
César.
Hi Cesar,
I am good to go now. Many thanks.
{
  "STR_JSON": "",
  "SURVY_LIST": [
    {
      …
    }
  ]
}
Good to know it worked, Ahmed!!
Regards,
César.
He Cesar,
Many thanks for that. It’s working like a charm. Can you please let me know a short example of using method JSON2ABAP or how to use that. There are couple of things I need to know.
In the table below, what do you mean by VAR_NAME? JSON_STRING is a simple JSON string.
If you can please manage, example of using this method would be great.
Hi Ahmed,
You don’t need to use JSON2ABAP directly. Just put the ABAP structure you need as part of the IMPORTING or TABLES or CHANGING section in your function module definition.
The adapter will take care of all this.
If you however, for any specific reason not related to this adapter, want to use the methods directly (they are defined as static to allow this possibility), drop me a line and I’ll try to help you.
Regards,
César.
César,
This looks great and is a nice way to wrap function modules. I really like the simplicity of wrapping the fm in json.
I think one thing to be careful of is csrf attacks in a production version of this. Perhaps a csrf token could help but would make this example more complex.
Thanks for sharing and keep up the good work,
Alex
Hi Alex,
Thanks a lot for the insight. I’ll be looking into this, it is a great suggestion.
Best regards,
César.
Hello Cesar,
That is a wonderful gem of a blog you have written. I have downloaded the nugget and tried using it; it works wonderfully fine. Really, kudos to your sincere efforts.
I have one doubt in above implementation. How do we log off this “fmcall” ( rest ) web service? When I will make a first call, it will prompt me to enter credentials. But question is how do I log out of the same?
Regards,
Harsh
Hello Harsh,
Thanks a lot for your comments.
Regarding the authentication, the default setting is to put the handler in an ICF service configured for basic authentication. Basic authentication does not need logout, as the credentials are silently sent by your browser until you close the browser session. In this configuration there is no session in the backend, so there is nothing to log out of.
You may explore other authentication mechanisms in transaction SICF, when configuring authentication for a given service. Note that this has nothing to do with a server-based session, so whatever authentication mechanism you use, you cannot log out of anything. You may just clear or expire the credentials in the browser.
Just as an example, the adaptor contains an option to start a server based session. You may start a session with a query string parameter of “action=start_session”. This will return a session identifier in the form of a cookie (SAP_SESSIONID_<SID>_<Client>) and will associate it to an abap backend session that will remain opened (check it in transaction SM04). All subsequent calls to the adaptor will be done in the same session context, as long as the session remains open. To explicitly close the session (logout), just make another call with “action=end_session” in the query string. Using this properly could be handy in a UI5 app.
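A rough Python sketch of that flow (host and path are assumptions; the live calls are commented out since they need a real system):

```python
# Sketch of the optional server-session flow described above.
def action_url(base, action):
    """URL that starts or ends a backend session via the adaptor's
    action=start_session / action=end_session query string parameter."""
    return "%s?action=%s" % (base.rstrip("/"), action)

start = action_url("http://sapserver:8000/fmcall", "start_session")
end = action_url("http://sapserver:8000/fmcall", "end_session")

# With requests, a Session object keeps the SAP_SESSIONID_<SID>_<Client>
# cookie between calls, so every call lands in the same backend session:
# import requests
# s = requests.Session()
# s.auth = ("user", "secret")
# s.get(start)                                              # open session
# s.get("http://sapserver:8000/fmcall/RFC_SYSTEM_INFO")     # same context
# s.get(end)                                                # close session
```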
Note that this “feature” is untested. I mean, it works, but I haven’t throughly tested it for any side effects of calling several function modules in the same abap session. There could be issues, so test it before you use it for your case.
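To make the call sequence concrete, here is a minimal client-side sketch of the URLs involved. The host, port and function module names are made up; only the fmcall path and the action parameter come from the description above.

```python
from urllib.parse import urlencode, urlunsplit

# Hypothetical host and function module names, for illustration only.
HOST = "sapserver:8000"
BASE = "/sap/bc/fmcall"

def fmcall_url(func_name, params):
    """Build a GET URL for the fmcall adaptor with the given query parameters."""
    return urlunsplit(("http", HOST, BASE + "/" + func_name, urlencode(params), ""))

# 1. Open a server-side session; the response sets the SAP_SESSIONID_<SID>_<Client> cookie.
start_url = fmcall_url("RFC_SYSTEM_INFO", {"action": "start_session", "format": "json"})

# 2. Subsequent calls ride on the cookie that the browser (or HTTP client) sends back.
call_url = fmcall_url("Z_MY_FUNCTION", {"IV_PARAM": "42", "format": "json"})

# 3. Explicitly close the session again.
end_url = fmcall_url("RFC_SYSTEM_INFO", {"action": "end_session", "format": "json"})
```

Your HTTP client must keep and resend the session cookie between the calls, exactly as a browser would.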
Best regards,
César.
Hello César,
Appreciate your help.
I tried to follow the process you described; it successfully ends the session on the server (checked using t-code SM04), but it still does not serve my purpose.
I want to cover the following scenario: suppose a user is initially logged in using an HTML5 application. Now, after performing some transactions, he wants to log out using a button. On this logout button I manage to clear the MYSAPSSO2 cookie successfully, but the SAP_SESSIONID_<SID>_<Client> cookie is still in the browser. I need to get rid of this cookie to redirect my user to the login page again.
If you can throw some more light on this…
Regards,
Harsh
Hi Harsh,
What you want to achieve can be done the following way:
This will effectively cancel the session and will redirect you to the html logon app in the next access. Coordinate this with your UI5 or other consumer app logon process.
Hope this helps,
César.
Hello César,
It helped indeed. Thanks very much again for throwing light 🙂
Regards,
Harsh
Hi Cesar,
Thanks for posting such an important concept; thanks to this I now know one more way of exposing ABAP FMs to the external world.
In my scenario, I have an ABAP FM call which has a few structures and internal tables as input, and I want to pass the input data for the structures and internal tables through the service call URL. Please give me an idea of how to handle this case.
Thank you,
Regards,
Jagesh
Hi Jagesh,
Thanks a lot for your words.
Please have a look at this excellent contribution by Jeff Woehler.
Jeff proposed a way to include tables and structures in the URL. Have a look at his code and add it to your system if you want this option. Just note that nested structures are not supported in his proposal.
In general, the bigger the data set, the more likely it is that you should use a POST method, so it is not recommended to put large structures in a URL.
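As a rough client-side sketch of the difference (the endpoint and parameter names here are invented, and the exact request shape your installation accepts depends on how you call the adaptor):

```python
import json
from urllib.request import Request

# Hypothetical endpoint and parameter names.
URL = "http://sapserver:8000/sap/bc/fmcall/Z_MY_FUNCTION"

# Small scalar inputs fit comfortably in the query string of a GET request.
get_url = URL + "?IV_MATNR=100-100&format=json"

# Tables and structures are better sent as a JSON body in a POST request.
payload = {"IT_ITEMS": [{"MATNR": "100-100", "MENGE": 10},
                        {"MATNR": "100-200", "MENGE": 5}]}
post_req = Request(URL,
                   data=json.dumps(payload).encode("utf-8"),
                   headers={"Content-Type": "application/json"})

# urllib infers the method from the presence of a body.
assert post_req.get_method() == "POST"
```

The request objects above are only constructed, not sent; sending them requires a reachable server.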
Best regards,
César.
Hi Cesar,
Thanks for putting this Adaptor together. I just stumbled across this in researching JSON & SAP. I’m excited to get this working as we are just now starting to look at some HTML5 development.
So I installed everything, activated the VMC and configured the ICF. When I try it with the URL,
''
I get a VMC run error. I really have very little idea of what I'm doing. My guess is that the handler is not working correctly and is spewing back an error. I probably installed it incorrectly. Is there a good way to test the handler directly?
This is the error:
The URL call was terminated because of an error.
The following error occurred in system DEV : VMC run error
The error occurred on application server WEB-DEV_DEV_01 and in work process 11.
The termination type was: ERROR_MESSAGE_STATE
The ABAP call stack was:
Function: HTTP_DISPATCH_REQUEST of program SAPLHTTP_RUNTIME
Module: %_HTTP_START of program SAPMHTTP
The work process error log is:
<ErrorInfo URL=”“>
<ErrorMessage>VMC run error</ErrorMessage>
<Date>20141021</Date>
<Time>132017</Time>
<Client>200</Client>
<User>PPUGH</User>
<TerminationType>ERROR_MESSAGE_STATE</TerminationType>
<ABAP-CallStack>
<Function>
<Name>HTTP_DISPATCH_REQUEST</Name>
<Program>SAPLHTTP_RUNTIME</Program>
</Function>
<Module>
<Name>%_HTTP_START</Name>
<Program>SAPMHTTP</Program>
</Module>
</ABAP-CallStack>
</ErrorInfo>
</ErrorLog>
Thanks!
Phillip
Hi Phillip,
Thanks a lot for using this adaptor.
The error you mention has very little to do with the adaptor; it probably has something to do with an incorrect configuration of the VMC (the VMC is a Java VM built inside the ABAP stack, used mostly for the internet pricing module in CRM; it should be disabled by default).
First, try to check if the error is also present with other ICF services, for example, test the /sap/bc/icf/Info service in your server to see if you get the same error.
Next, try disabling the VMC. This is done with system parameter vmcj/enable. It should be off (unless you know what you’re using it for!).
Check also that your /sap/bc/fmcall service is correctly defined in SICF.
Come back to me directly to my mail with the results (let’s leave the comments in the blog as clean as possible 🙂 ), and I’ll follow up.
Thanks again!!
César.
Hi Cesar,
Thanks for the help. The adapter works great! The rest of our staff were impressed when I showed it to them. (I did give you the credit 🙂 ) It will spawn a lot of new ideas.
Thanks again!
Phillip
Hi Phillip,
Thanks a lot for your nice words.
César.
Hi Cesar,
Have there been any issues with passing the data through the URL unencrypted? I've been told this could be a problem when using the adapter. Any data should always be encrypted.
Thanks,
Phillip
Hi Phillip,
Of course you are connecting to the URL using HTTPS, aren’t you? 🙂
If you are not, you should be doing it.
You may find more information here:
4.1 HTTP(S) Settings in ICM – SAP NetWeaver Business Client – SAP Library
and here:
SAP Web Dispatcher and SSL – SAP Web Dispatcher – SAP Library
Be sure to use HTTPS at a minimum if you expose this kind of service openly to the internet.
Best regards,
César.
Hi Cesar,
Hah! That makes a whole lot of sense. “Of course” we would always use HTTPS. 😉 I meant to all along.
Thanks for the links.
Phillip
Hi Cesar,
Thank you very much for sharing this. It is of great help to us (our company), if it works. If I can get your guidance in solving a few issues during installation and thereafter, I will be very grateful. Your assistance would help us immensely.
Regards,
Rashmith.
Hi Rashmith,
It works so far 🙂 .
Please do not hesitate to contact me. Write me a private mail.
Thanks!
César.
Hi Cesar,
Your adapter is awesome, thanks for sharing!
I need to send a file through a RESTful call from SAPUI5. How can I do that using your adapter?
Thanks in advance!
Hi Miguel Angel,
Thanks very much for your comment.
First, I don't think that calling a function module through this adapter is the best way to handle a file upload in ABAP. If you insist on using it, you could always encode the file contents in base64 and put them in a string variable in the JSON payload. Your function should then take care of decoding it and doing something with the file contents. However, I don't recommend doing this, and if you do, limit it to small files (<1 MB).
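A sketch of this base64 approach on the client side. The field names are invented; on the ABAP side the function would decode the string again (e.g. with the standard function module SSFC_BASE64_DECODE).

```python
import base64
import json

# Pretend this is the (small) file we want to send.
file_bytes = b"\x89PNG\r\n\x1a\n...binary content..."

# Encode the content as base64 text so it can travel inside a JSON string field.
payload = json.dumps({
    "IV_FILENAME": "logo.png",
    "IV_CONTENT_B64": base64.b64encode(file_bytes).decode("ascii"),
})

# On the receiving side the original bytes can be restored losslessly.
restored = base64.b64decode(json.loads(payload)["IV_CONTENT_B64"])
```

Note that base64 inflates the payload by roughly a third, which is one more reason to keep the files small.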
I assume you use the file upload control in SAPUI5: JsDoc Report – SAP UI development Toolkit for HTML5 – API Reference – sap.ca.ui.FileUpload.
Using this control you can send the file to a URL that can handle file uploads using multiparts. The ABAP entity object supports multiparts (see CL_HTTP_ENTITY), so you should create either a ICF handler or a BSP using this feature for handling file uploads directly.
You can find an example of a BSP that handles uploads here: File Upload in BSP Applications – Using ABAP – SAP Library. Deriving the code from there to build an ICF handler should be straightforward.
Hope this helps.
César.
PS.: Si necesitas más detalles, envíame un mensaje privado, por favor. ¡Gracias!.
Hi Cesar,
This adapter is a very interesting way to reduce manual coding.
I am working on NetWeaver Gateway, where we have a similar kind of requirement: to design a Gateway service that returns the output of a function module.
The only problem I am facing with this adapter is that I can't find a way to choose a different backend system.
I have a separate hub system, where this handler will exist, and I need to execute the function module on a different system.
your assistance would be immensely appreciated.
Thanks in advance!
Hi Shefali,
Thanks for your comment.
This adapter is intended to run on the backend system directly. It hadn't occurred to me before that it could be used as a frontend that forwards the request to a different backend, like you may do in Gateway with the "System Alias" feature.
It could be possible to develop an option that does this, but to do it right, the correct way of forwarding authentication and other issues would have to be worked out.
You can contact me privately if you want to discuss this further.
Best regards,
César.
Hi Cesar,
Do you think that forwarding a request to the backend would be a good idea in terms of security?
I am not much aware of possible vulnerabilities, but I have heard of SQL injection; could that be a problem?
Also, how do I call an FM dynamically in a different backend with RFC, since SAP allows either CALL FUNCTION with a parameter table (dynamic) or CALL FUNCTION with DESTINATION?
Thanks in advance!
Hi Shefali,
Sorry for coming back to you so late. In the adaptor you can only call local function modules, not "remote" functions. You can create a function wrapping a call with DESTINATION to the other system.
I haven't given much thought to using the adaptor "à la Gateway", as a frontend, but it could be a good idea. Actually, since you wrote this, several other people have asked me about it too.
Hope this helps,
César.
Hi Cesar,
We have been using this for more than a year now and it has been working great. Thanks for creating this. Recently, I noticed that it does not return all the values in the handler module when converting from JSON to an ABAP internal table if the table has more than about 4k rows. Do you see a similar issue? To recreate it, try passing more than 4k rows in your JSON for one of the table parameters when calling any web service in your system.
Yes, there is a bug in the SAP standard JavaScript engine (a kernel function). I haven't found a solution for this yet.
Currently you have to split the JSON files into smaller pieces.
(Background: the class CL_JAVA_SCRIPT is not meant for public use anymore and therefore I'm not able to open an official SMP issue.)
But maybe César may open an internal issue if he has the connections…. 😘
Thanks for the reply, Uwe. Can we rewrite the JSON2ABAP method and use ABAP for parsing? ABAP supports regular expressions, so this is just a thought in case anyone has tried it. Even if we do not use regular expressions, maybe we can write our parser code from scratch. Let me know your thoughts.
I did this already with pure ABAP (see Usage zJSON · se38/zJSON Wiki · GitHub), but the performance for large files is bad (so bad that you can't use my library). Therefore I'm currently implementing César's idea in my library.
I am just trying to understand the bug in the JavaScript engine. Is this a limit on one internal table or on the entire JSON message? I mean, I currently have one table parameter in my JSON message which has a large number of records. Can I split it into, say, 4 internal tables in the same JSON call, each having 1000 rows (the upper limit of the number of records is known)? Or does it have to be 4 different JSON calls?
You have to make sure that the result does not exceed the limit. You may, for example, work with an "offset" parameter or something similar in your function -> yes, you have to call the function 4 times.
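The offset idea can be sketched like this (chunk size and row shape are arbitrary here; each batch would correspond to one call of the function module):

```python
# Split a large table into batches small enough for the serializer limits,
# calling the function once per batch with the offset as an extra parameter.
CHUNK_SIZE = 1000

def batches(rows, size=CHUNK_SIZE):
    """Yield (offset, slice) pairs covering all rows."""
    for offset in range(0, len(rows), size):
        yield offset, rows[offset:offset + size]

rows = [{"ROW": i} for i in range(4000)]
calls = list(batches(rows))
# 4000 rows -> 4 calls of 1000 rows each instead of one oversized call
```

The function module then has to reassemble (or process incrementally) the batches on the ABAP side.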
Hi Abishek,
Yes, as Uwe points out, there are some limits in the JavaScript library we are using for "deserializing" JSON.
Actually, I’ve been able to determine two limits:
1. The maximum size of the JSON document that the library is able to accept. I have to check the exact limit, but I saw something around 1 MB; it may depend on the version. In your case you seem to be reaching this with a message containing 4,000 rows, but what matters is not the number of rows, it is the total size of the message.
2. The maximum size of a JSON field. Uwe and I tested it, and the limit is 127 KB.
So, for now, the only approach is to use smaller messages when calling the adaptor.
This happens with the JavaScript based deserializer. I haven’t tested it with the “transformation ID” deserializer, so I can’t tell what happens there. I’ll be doing it soon.
Best regards,
César.
Hi Cesar –
When I try to import NUGG_ABAP_FM_JSON_HANDLER.nugg I am getting a "Plugin for object type TABL is not installed on this system" error. I have imported both SAPlink_Daily.nugg and SAPlink-plugins_Daily.nugg and am trying to import NUGG_ABAP_FM_JSON_HANDLER.nugg using ZSAPLINK, but it is not helping. Can you please advise?
Did you activate all the classes after the import of “SAPlink-plugins_Daily.nugg”?
Thanks for your response, Fetzer. Sorry for not mentioning that. When trying to activate, I was getting the below error.
But now, after seeing your response 🙂 , I tried to omit Z_CL_JSON_HANDLER and activate. It activated fine, and later I imported NUGG_ABAP_FM_JSON_HANDLER.nugg successfully too… but I am surprised to see in the report output that it installed Z_CL_JSON_HANDLER :-(. If Z_CL_JSON_HANDLER is part of this .nugg file, I am not sure how it was there even before importing the .nugg file. Maybe it is because of me trying different things out of sequence?
Anyway, the issue I have now (after importing the handler nugget) is that I am unable to activate Z_CL_JSON_HANDLER, as it still complains that ZICF_HANDLER_DATA is unknown. Can you please advise?
Thanks for your help in advance.
Hello Venkat,
I guess you have not activated the service node in SICF t-code.
Regards,
Harsh
César advised me to manually activate the structure ZICF_HANDLER_DATA in SE11, and the above error is gone.
But now I have another issue. I have created a service in SICF, maintained the login details and maintained ZCL_JSON_HANDLER in the Handler List tab. After activating the service, when I try to test it, the browser window opens but immediately takes me to the View Downloads page, where it asks me to "save". It doesn't do anything after hitting the save button either; there is nothing to save and it doesn't go anywhere. Can anyone please advise?
After correcting the incorrect port numbers in the URL, it worked fine!! I am able to see the output of RFC_SYSTEM_INFO.
However, I tried to call our custom function module, which returns data in four tables having 2, 48, 48 and 674 entries respectively… The app is now breaking with the below message…
XML Parsing Error: not well-formed
<<Complete url as entred in the browser along with inputs to the FM as url parameters>>
<<the whole xml structure as one big line…>>
Hello Venkat,
It seems that you have successfully set up the adapter. The error which you are telling has to do with forming input parameters to function module using XML. You are not forming the input parameters correctly in XML.
Let me know if further help required.
Regards,
Harsh
Harsh –
Thanks for your response. Forming input parameters to the FM using XML? I just passed the input parameters as URL parameters as suggested in the blog, and expected the code in the nugget to return the output of the FM in JSON format. Please let me know if my understanding is incorrect.
<<My Custom FM>>?Param1=<<value>>&Param2=<<value>>&format=json
Hi Venkat,
This looks like a parsing error in your browser. Can you send me a screenshot?
thanks,
César.
César –
Sorry for the late reply. Please check the screenshot. There is a similar error in Firefox and Chrome as well. But format=json and format=yaml are working fine in all three browsers. I guess the XML problem is just a browser issue and it would work fine when called from code?
Hello Venkat,
I’d like you send me also the code to the function module, and also the JSON output (save it to a file). Please send them to my personal mail.
Thanks!!
César.
Hi all,
We are getting a 'Type "CL_ABAP_CODEPAGE" is unknown' error in method SERIALIZE_ID (row 68) when we try to activate the ZCL_JSON_HANDLER class.
We are new on this. Could you help us?
Best regards and thanks in advance!
Hi,
which ABAP version are you using? (SAP_ABAP component)
thanks,
César.
Hi Cesar,
SAP_ABA 700 SP 31
Best regards,
Rubén
Hi Rubén,
The ID standard transformation is not available for that ABAP version; it requires a minimum of 702. To use this adapter with 700, you must use the class's built-in ABAP-based serializers. For this, do the following:
– make sure that the lines in HANDLE_REQUEST calling serialize_id and deserialize_id are commented out,
– comment out the code in methods serialize_id and deserialize_id,
– activate the class.
That should make it work in 700.
If you need more help, please contact me by mail.
Thanks,
César.
Nice blog… I made a test and it works.. good work!!!
Hi,
Thanks for this. It is very useful.
I’ve made a small ui5 module to make it even simpler to use this json adapter. It also allows mocking fm calls on the frontend.
Regards,
jumpifzero/sapui5-easyfunctionmodules · GitHub
Hi Tiago,
This is an excellent contribution!!
Everybody should refer to it in case of doubts about how to call this from JavaScript.
Thanks a lot Tiago!!
César.
Hi Cesar,
Nice blog, and I must say, a lot of comments. I am not sure I have seen this many comments on any other SDN blog.
One question, as I was not able to go through all the comments: since we already have Gateway, and I don't think it is a very costly setup, what pushed you to create your own class?
Thanks
Pranav
Hi Pranav,
Thanks a lot for you nice comments!
As I explained in the blog, I actually started what would later become this in 2006, so we had no alternatives back then. When I released it in 2013 I was actually motivated by the appearance of Gateway. I had great expectations of Gateway, but many people contacted me to tell me that they thought the oData approach could sometimes be more cumbersome for some tasks than using simple function calls.
Let me quote Tiago in his GitHub page:
I could add more, making for a very long debate. In my opinion, oData and Gateway are good as long as the kind of service adapts well to the oData model: navigation from item lists to item details, and leveraging this ability to automate some functions in the user interface (i.e. automatic scrolling, search and filtering of results, navigation from lists to item detail, automatic generation of forms from metadata, etc.).
Many uses in AJAX-like applications are not well suited to this model. In many cases a simple request-response model fits better. Additionally, with this adaptor you can also employ highly nested structures (AKA JSON documents), whereas doing this with oData and Gateway is very limited (lots of restrictions on the use of complex structures and of "deep inserts").
Finally, the ease of reusing existing code already developed in Z functions at plenty of customers is probably the biggest push to use this adapter.
Hope this helps.
Best regards,
César.
Dear César,
at UI5con in Frankfurt there was a presentation by Mark Schmale on "Combining UI5 and SAP LO VC for Fun and Profit". The best thing about this presentation was that he used the JSON-RPC service that is delivered as part of SAP Gateway at the SICF path /sap/gw/jsonrpc. Check out my Tweet for screenshots. The only tricky part is that your request must include a CSRF token. Such a token can be retrieved by calling any standard SAP Gateway service.
Best regards
Gregor
I don't think that this service should be active in the system (but you may clone the underlying handler and restrict it to specific function modules).
I even think that this approach should not be used at all. SAP ABAP systems should be exposed by easy-to-consume, standard or custom-built OData services, to also benefit from the OData support in UI5.
Hi Cesar,
Fantastic article. It helped me a lot. Quick Question – How can I generate a ‘WADL’ so my partners can create a consumer scenario with all the data elements loaded into their tools/system.
Best,
Rahul.
Dear Rahul,
I think you mean a WSDL. Please do not use this approach and stick with the SAP standard tools and provide a WebService by creating it in SE80 or OData via SAP Gateway (Transaction SEGW).
Best regards
Gregor
Dear Gregor,
Thank you for your time and recommendation. I agree with your comment that using NW Gateway is our best bet. I did mean WADL though; our partners using PeopleSoft have shown me their screens, and they have some sort of WADL consumption screen. It looks like WADL is advocated on the Oracle side of the house for RESTful web services, but the W3C has not standardized it yet. Anyway, we are proceeding with the SAP standard tools/web services.
Best Regards,
Rahul
Hi Gregor, Rahul,
As a matter of fact, it is WADL, not WSDL (Web Application Description Language).
It is funny that I was considering adding WADL to this adaptor and was studying its feasibility with one colleague one year ago. Finally, we decided against doing it, for several reasons:
First, this adaptor is not providing RESTful services. There are no different actions related to the request method used in the HTTP call (unless you activate a trick that is included in HANDLE_REQUEST and develop your function module for it).
Second, WADL is not fully standardized and very few tools make use of it, none among SAP tools (except SAP API Management, Apigee, which does; actually, that was one of the reasons I considered it). And the fact that Oracle advocates it doesn't help either 😉
The fact is that if your requirements include full documentation, interoperability, standardization, and code generation for consumer apps, you're probably better off with the standards supported in SAP, like SOAP/WSDL and oData, as Gregor suggests.
On the other hand, if you are in full control of your development landscape and it is enough for you to distribute an example for the call, you may well use this adapter, especially if you want to reuse existing modules. Some people I know are happy using this with SAPUI5 apps, and some others even mix oData calls with calls to this adaptor.
Instead of WADL, I also considered JSON-RPC. I’ll have a look at the jsonrpc gateway service that Gregor suggested above.
Best regards,
César.
great
Hi César,
thank you for sharing your work. We are currently looking for a lightweight alternative to JCo, and your adapter (or the above-mentioned JSON-RPC service) seems very promising. Both work great in my first tests, but there is one feature I couldn't find so far: for our app we need transactional RFCs, for locking and to roll back changes in case of errors in later RFCs. Is something like that possible with your current implementation?
For the json-rpc service I couldn’t find any documentation apart from the screenshot Gregor posted.
Best regards,
Christoph
Dear Christoph,
by definition REST is stateless. So if you don't want to rewrite the RFC modules you need in one transaction, you should stick with JCo, as there you have a stateful connection. SAP is working hard to bring transaction handling to OData, and their concept for that is Draft Documents (search for that at Glossary – About ABAP Programming Model for SAP Fiori – SAP Library).
Best regards
Gregor
Dear Gregor,
thank you for your input. I know that REST and JSON-RPC are stateless (btw, JSON-RPC is not per se RESTful, but that is another topic).
The problem is that we already have a lot of ABAP functionality which would be hard to translate to an OData interface.
But fortunately I've found the needed functionality in both versions (the adapter from this blog post and the JSON-RPC service).
For the JSON adapter you only need to add ?action=start_session and ?action=end_session to the parameter list to start and end a session in which the LUW doesn't change (this sets a cookie for the session management).
For the JSON-RPC service there is something like a "virtual function module" to start and end a session. This could also be interesting for people who don't need a stateful connection, because it's a good way to get the CSRF token. If you send:
{
  "jsonrpc": "2.0",
  "method": "JSONRPC.INIT",
  "params": {
    "SESSION": {
      "STATEFUL": "enabled",
      "VIA": "cookie",   // this can also be "url" for url encrypted session ids
      "TIMEOUT": <timeout>
    }
  },
  "id": <id>
}
you get this answer with a session id in the cookies:
{
  "jsonrpc": "2.0",
  "result": {
    "ENDPOINT": "/sap/gw/jsonrpc",
    "HEADERS": [
      {
        "NAME": "X-CSRF-Token",
        "VALUE": <token>
      }
    ],
    "SESSION": {
      "STATEFUL": "enabled",
      "VIA": "cookie",
      "TIMEOUT": <timeout>
    }
  },
  "id": <id>
}
To end the session just send "method": "JSONRPC.END" with no params.
I hope this saves someone the time I needed to dig into the code 😉
Best regards,
Christoph
Hi Christoph,
I’m happy that you reviewed the code and found the start_session and end_session options.
Please note that I only did a little testing on them, not enough to confirm that they serve all purposes; that's why I didn't document them, by the way. They just use the SAP standard session handling for HTTP, the same that is used by stateful BSPs, so they should work, but your application should take care of the cookies, of the operations done during the session lifetime, and of the session auto-expiration (something that is normally controlled by the BSP framework).
You and Gregor are correct in pointing out that REST is inherently stateless, as is the web in general, so any way of carrying session information on top of HTTP is somewhat of a hack, although it is so widely done today as to be considered a standard.
This adaptor is not RESTful, and it is not its purpose to be (there are ways you could make a function endpoint that implements a RESTful service on top of a function module; look for the corresponding section in the code). So it is more akin to JSON-RPC, in that it just exposes an endpoint for calling a function module with input and output serialized in a JSON payload.
Please do not hesitate to contact me if you find any issue with the use of the start and end session options.
Best regards,
César.
Hi Gregor,
As far as I understood it, Draft Documents are not really transactions, but a way of handling transient storage for an entity in the application. I don’t know if they support locks, so they are not exactly the same as a transaction in ABAP.
Moreover, it seems that they can only be used from the Smart Templates.
Do you have more information on them?
Thanks!
César.
This is great.
Is there a way to return the results in HTML instead of JSON or XML?
thanks
Dear Juan,
why do you want that? By having JSON or XML you can implement an HTML5 frontend which consists of static, and therefore cacheable, HTML, CSS and JavaScript that renders the dynamic part returned from the server to the page.
Best regards
Gregor
Thanks for the quick reply.
I am new to web services so bear with me.
Our current PR approval workflow process sends a workflow email to approvers with a link which starts the SAP GUI. I want to simplify this by allowing users to simply click on the web service link for approval.
This web service is quite simple: it calls an FM with a single importing parameter (the PR number) and returns whether the approval process was successful or not.
When the user clicks on this web service, I want to be able to render the return message in a pop-up for the user.
"Implement a HTML5 frontend": do you mean having an intermediary frontend that calls SAP and renders the returned JSON message into HTML?
Hi Juan,
perhaps it would be worth a look at the standard SAP Fiori app for PO approvals. There are no additional licensing costs for Fiori. The simplest and most straightforward solution for your issue would be to use a BSP.
Best regards
Gregor
Hi Juan,
It should be possible; just copy the ABAP2XML method to ABAP2HTML and modify the code accordingly. But HTML is for formatting layout, so you would have to decide which HTML tags to map your data to. It is not worth doing, nor is the adaptor the place to do it. As Gregor said, you should be doing that transformation in JavaScript in your HTML5 app, or implement a simple BSP or WDA directly.
Do you mean to have intermediary frontend that calls SAP and renders the return JSON message into HMTL?
More or less this is the model currently used for HTML5/JavaScript/CSS apps in vogue today.
Take your time to learn it, we have all been new to something at one point of our lives.
Best regards,
César.
thanks for the reply.
Is there a way to pass the SAP user id and password in the JSON (or URL) instead of hardcoding them in the SICF node?
Dear Juan,
never pass username & password via the URL. You should look into options (X.509 Certificates, SPNEGO, SAML) to enable secure Single Sign On for your ABAP Stack.
Best regards
Gregor
Hey Juan!
It looks to me like you are not quite ready to fly yet!!
What you have proposed is just one of the many mishaps you may run into if you are not careful with security.
Instead of telling you how to solve this problem, I’ll tell you to take some time to learn about it. Believe me, any time you dedicate to security is more than worth it.
You got some good hints from Gregor. Please look for topics around web security and when you feel prepared, go look at Authentication – RFC/ICF Security Guide – SAP Library to get more information SAP related.
We are here to help you, but please, do your homework first!!
Best regards,
César.
Thanks Gregor and César for your advice. I will look more into the security measures.
I have set up the authority check as you described and it is working as expected.
Hi Cesar,
we have to load currency exchange rates from a URL in JSON file format. I did not find a good example that would fit my string; can you guide me? The data looks like this.
Thank you.
any clue how to also get the output of the URL into SAP directly would be appreciated?
Dear Cesar,
I'm new to web technologies involving SAP. Please forgive me if what I say seems contradictory.
I'm working on an old version of SAP where a REST solution isn't provided yet (SAP_BASIS 702). I have already implemented a note that allows me to use XSLT_TOOL for coding rest.json files such as:
What I need to do is use an external API where the model may be like this: "http2//{hostname}/webmarketing/<service>/v1/notify/<service_point_id>".
What sort of change should I make to the code you provided to permit the transfer of rest.json data to the external site?
I know how it works with SOAMANAGER and WSDL files, but how could it work with rest.json?
Thanks for your help.
Best regards,
Jean-François.
REST is just a simple protocol over HTTP. So, you may use class CL_HTTP_CLIENT to communicate from your ABAP server to the web.
Hi Sandra,
thanks. I must admit I didn't understand that.
To test the API, do I need the add-on IW_FND and transaction /IWFND/GW_CLIENT?
Without them, how can I test CL_HTTP_CLIENT from my program?
Best Regards,
Jean-François Parmentier.
Sorry, I can’t help for the Gateway application.
You should better create a new thread in the Connectivity forum, I don’t think your question is related to this blog post.
Hello
a bit off topic, but what do the SAPlink plugins do? What is their purpose?
thanks
SAPlink exports repository objects to a XML file and imports them, like a transport request. Useful for quickly distributing. Search the web to get the code.
Sandra, I think you misunderstood my question. I was talking about the SAPLINK PLUGINs.
SAPlink plugin list | SAPlink Project | Assembla
UPDATE: I think I understand. If I want to import an INTF object with SAPlink, I need to have the SAPlink plugin for the INTF object type.
The JSON adaptor does not seem to like large XSTRINGs. In the JSON2ABAP method, the js_object->execute call bombs with no error message when trying to send in too much data.
I ran the script in JSFIDDLE with the large data and it ran without any issues.
Any idea how to correct this without limiting the data?
Hi Vince,
Unfortunately, this limitation has been known for a very long time. It is due to the fact that the JSON2ABAP method in the adaptor uses the CL_JAVA_SCRIPT class, and there seems to be a hardcoded limit on the maximum size of a variable when it is passed from the ABAP to the JavaScript context in the embedded SpiderMonkey engine. The limit seems to be around 120k. It does not have an easy solution, as SAP decided to deprecate the CL_JAVA_SCRIPT class, and while it is still included in the system, it is no longer maintained, which is a pity, by the way.
But wait, there is a solution already built into the code. Uncomment line 262 of method HANDLE_REQUEST and comment out line 261 (substitute the call to method me->json_deserialize with a call to method me->deserialize_id). This way, the adaptor will use the ABAP built-in JSON deserializer transformation, which has no such size limits. Just be aware that when you do this the adaptor will be very strict with the JSON it accepts: you will have to feed the adaptor the complete JSON structure of the call. If that is not a problem for you, make this change and it should work.
Please keep me informed if the solution works for you.
Best regards,
César.
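César's distinction between the adaptor's lenient parsing and the strict built-in deserializer can be sketched language-neutrally. The Python fragment below is only an illustration; the interface and parameter names (IV_DATE, IT_ITEMS) are invented and are not part of the adaptor:

```python
import json

# Hypothetical interface of an RFC-enabled function module: one scalar
# import parameter and one table. Names are illustrative only.
INTERFACE = {"IV_DATE": "", "IT_ITEMS": []}

def deserialize_lenient(payload):
    """Accept any subset of the interface; missing parts get defaults."""
    data = json.loads(payload)
    result = dict(INTERFACE)
    for key in data:
        if key in result:
            result[key] = data[key]
    return result

def deserialize_strict(payload):
    """Require the complete structure, as a strict deserializer would."""
    data = json.loads(payload)
    if set(data) != set(INTERFACE):
        raise ValueError("payload must contain the full interface structure")
    return data

partial = '{"IV_DATE": "20161214"}'
full = '{"IV_DATE": "20161214", "IT_ITEMS": [{"EQUNR": "8004112"}]}'
```

The trade-off is exactly the one described in the comment: the lenient path tolerates partial payloads but hits the engine's size limit, while the strict path scales but rejects anything short of the full structure.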
When you say the complete JSON structure are you saying the inputs and output (Imports, Exports, Tables, etc) have to be in the call? Do you have an idea of how a such call would look like.
Hi Vince,
It looks like that the built-in transformation is now working well, and it is no longer needed to build a full structure in the call. Sorry for this.
Did it work any better with the built-in transformation regarding the size limit? I’d like to hear from you.
Thanks!
César.
Hello Cesar,
I have implemented the JSON service adapter in our SAP development system and working absolutely fine. However, after moving the transports to quality system, the third party system which is making the call to SAP is not receiving any parameters in the response. But the connection between third party and SAP is perfect, they are getting response but without any return parameters. Please help me how to solve this issue. The same is working perfect in development system.
Thanks,
Shreekant
Hi Shreekant,
Please send me a private mail to discuss the issue.
Thanks!!
César.
Hi Cesar,
I have been using this “gem” since 2013 in Netweaver 701. Like Vince Tebano says above, I have had problems using large XSTRING parameters (i.e. transfering binary files), but tweaking the code I have solved the problem.
Now I am using this adapter in Netweaver 740 SP9, so I can use the DESERIALIZE_ID and SERIALIZE_ID methods and avoid the old method JSON_DESERIALIZE. This, in theory, would solve the issue with large XSTRING parameters.
But when I use the DESERIALIZE_ID method I get the exception
CX_XSLT_DESERIALIZATION_ERROR, even if I use a simple function module like DATE_GET_WEEK.
I use this call
and the JSON for the transformation is {“DATE”:”20161214″}
Where is the problem?
Thanks,
Víctor Hernández
Problem solved !!!
If I use another format for the date parameter in the URL, it works like a charm. 😉
Víctor Hernández
Hi Victor:
Exactly!! Simply note that if you use the *_ID methods then you have to use the date JavaScript notation, which is exactly what you found out!
Thanks!!
César.
Hi Cesar,
First I want to congratule for the great job. I think it’s a very useful tool!!
I have a little problem, I would like to know how to send parameters to SAP of a list of values. I have one of the import parameters of the function module defined as table type and the line is defined as elemental type (in this case the type is EQUNR), then I have the following import parameter:
I_EQUIPOS type TY_CCM_EQUNR (list of EQUNR).
I’m trying to send a list of values via POST method with the following JSON string:
{
“I_EQUIPOS”: [
“8004112”,
“9003610”
]
}
but it doesn’t work.
I’ve debugged the call and my function module doesn’t receive any value in I_EQUIPOS.
Can you help me?
Thanks!
Hi Juan,
Yes, I can help you, but I need more information.
Please send me a private mail.
Thanks!
César.
Hi,
Thanks for your response, but I’m not able to find how to send you a private mail, sorry.
Please, give me some clues to find how to send the private mail.
Thanks.
Hi Cesar,
I have implemented the JSON service adapter in our SAP development system and get a syntax error in the class ZCL_JSON_HANDLER, method DESERIALIZE_ID.
Line #40 gave me the error; please help me to solve the issue.
Thanks,
Inderpreet Singh
Hello,
Same issue…
I’m trying to install the ABAP-JSON with the transport I downloaded from:
I tried to import the transport with STMS transaction and I’m getting an error message on method “DESERIALIZE_ID” about line 41:
data(reader) = cl_sxml_string_reader=>create( json_xtext ).
The error is:
Class ZCL_JSON_HANDLER,Method DESERIALIZE_ID 41
Field “DATA” is unknown. It is neither in one of the specified tables
nor defined by a “DATA” statement. .
Here is the screenshots:
Any help will be great.
Thanks,
Miki Barzilay
You have a 7.31 system, and the tool was developed on a 7.40 system. So, you have to adapt/backport the tool. For instance, you have to replace the following:
with:
Next issue is referring to:
It seems, in 7.31 we have to change it to:
But then we still got the following error:
“Field “CAST” is unknown. It is neither in one of the specified tables”
Please advise.
Thanks!
Harel
Yes, of course, there are many, many differences between 7.31 and 7.40. If you want to adapt the code, you should understand 7.40: I suggest you read Horst Keller's blogs about 7.40.
data(writer) = cast if_sxml_writer( cl_sxml_string_writer=>create( type = if_sxml=>co_xt_json ) ).
corresponds to (cast = down cast) :
DATA writer TYPE REF TO IF_SXML_WRITER.
writer ?= cl_sxml_string_writer=>create( type = if_sxml=>co_xt_json ).
Another one is:
How should it be converted into 7.31 syntax?
Thanks!
Harel
I give up
Thanks! 🙂
That’s so great, congratulations!!
Hi Cesar Martin,
Thank you for sharing this information. We are able to call a function module and also pass variables via the URL. All works fine if the import parameters are simple parameters (i.e., they take only one value).
Is it possible to pass values via URL to a variable which is of type table with two columns. Something like below
IT_CONTEXTS – Variable Name
TY_S_CONTEXT
CONTEXT_NAME – Value 1
CONTEXT_TYPE – Value 2
I am struggling to find a correct format for this to pass it via URL.
Regards
Amit
Hi Amit,
Jeff Woehler in a previous comment already proposed something similar and implemented some code to handle it.
I personally prefer restricting the use of the query string just for plain variables in the import area, and use POST for structures and tables, but I see the point of having the option in the query string. Please feel free to contribute the code.
Thanks!!
César.
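The rule of thumb in the reply above (plain scalars in the query string, structures and tables in a POST body) can be sketched as follows. The table and field names are taken from the question; the encoding shown is illustrative, not the adaptor's exact wire format:

```python
import json
from urllib.parse import urlencode

# Plain scalar parameters fit naturally in a query string.
simple_params = {"EQUIPMENT": "8004112", "LANGUAGE": "EN"}
query = urlencode(simple_params)

# A table parameter with two columns travels better as a JSON POST body:
# each row is an object whose keys are the column names.
table_param = {
    "IT_CONTEXTS": [
        {"CONTEXT_NAME": "PLANT", "CONTEXT_TYPE": "CHAR"},
        {"CONTEXT_NAME": "ORDER", "CONTEXT_TYPE": "NUMC"},
    ]
}
body = json.dumps(table_param)
```

Trying to flatten the row structure into `key=value` pairs is what makes the URL form awkward; the JSON body keeps the column/row shape intact.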
Thank you for the nice article, but I have a problem: there is an error in method IF_HTTP_EXTENSION~HANDLE_REQUEST. It says ‘Type “ZICF_HANDLER_DATA” is unknown.’
I have checked in SE11, but I couldn’t find type ZICF_HANDLER_DATA.
Could you please help me fix this error?
Best Regards,
O. Fajilago
Hello Ordiales,
Did you install it using AbapGit ()??
If so, you shouldn’t have this problem. Otherwise, please send me a private mail explaining what you did.
Thanks!!
César.
Hello César,
thanks for the great work. I was looking for exactly a function like this, since I have many function-module-based interfaces that I can now supply to the web world. The basic “rfc_system_info” call is working; however, I am struggling with calling my custom function modules.
They have a structure like:
Doing the call I get the following error message:
I get an SU53 error like this. Please check the “//” within the authorization error.
Has anyone an idea how to overcome this?
Best Regards
Alexander
Also with * in the Authorization Object I get an error like this:
Has anyone an idea how to overcome this?
Look at your function module name. If you use slashes, the function module name is mapped to one in a namespace. Check that the function //<namespace>/<function_name> exists.
Hi Cesar,
Thank you for sharing this great interface.
BR,
Gibson
Thanks Gibson!
Dear Cesar,
Superb write-up and excellent piece of work.
You saved a lot of effort on our end converting existing SOAP web services returning XML data to RESTful web services with this adapter.
Keep sharing….cheers & many thanks!
Regards,
Shaurya Jain
Thanks Shaurya Jain!!

Source: https://blogs.sap.com/2013/03/05/json-adapter-for-abap-function-modules/
It's too late tonight, but Monday morning I will likely be committing a major revamping of the buildworld code. It will do a number of things:

STAGE1:

* It compartmentalizes the bootstrap/buildtools from the cross-build setup from the world stage. Instead of unfathomable subdirectory names in /usr/obj/usr/src/*, the stage directories are now flattened out and better named, e.g. btools_<currentarch> for bootstrap and build tools, ctools_<currentarch>_<targetarch> for the cross compiler tools, and world_<targetarch> for the main buildworld.

* The build-tools will contain all tools required by the build, not just the ones which might have version/portability problems. This is so I can remove the base system path (e.g. /usr/bin) from the main world build, which in turn prevents buildworld from accidentally using programs that it isn't supposed to be using. I'd like to remove it from the cross-build stage too, but I'd have to build the compiler in the build-tools stage to be able to do that, and I haven't decided whether it's worth the extra time yet or not.

* The buildworld target will properly remove the entire buildworld object hierarchy. It turns out that it was only removing the world stage before; it wasn't removing the build tools and cross tools stages.

* New targets to make incremental buildworlds easier will be introduced, e.g. quickworld and realquickworld. quickworld skips the build and cross tools; realquickworld skips the build tools, cross tools, and depend step.

* The concept of platform-native compiled programs which are unaffected by the cross-build, and all Makefiles that generate these little helper programs now use the new concept. New suffixes have been introduced: '.no' for 'native object module' and '.nx' for 'native executable'. This is replacing the build-tools: target that existed in the tree. The problem is that the build-tools stage in the old build was polluting the world stage's namespace a bit more than it should have. This will primarily make cross-building less hackish, once we start doing cross builds.

* Fix a bug in 'wmake', which simulates the buildworld environment for piecemeal compilation/testing. It was not using /usr/src/share/mk.

* Additional .ORDER: constraints (not finished)

STAGE2:

* Fix .c.o and friends in sys.mk (specify -o ${.TARGET} instead of assuming that the target is named the same).

* Continued messing around with .ORDER for -j N builds.

* Cleanup passes

-Matt

Source: http://leaf.dragonflybsd.org/mailarchive/kernel/2004-03/msg00397.html
for connected embedded systems
ualarm()
Schedule an alarm
Synopsis:
    #include <unistd.h>

    useconds_t ualarm( useconds_t usec,
                       useconds_t interval );
Arguments:
- usec
- The number of microseconds that you want to elapse before the first alarm occurs, or 0 to cancel any previous request for an alarm.
- interval
- The number of microseconds that you want to elapse before the subsequent alarms occur.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The ualarm() function causes the system to send the calling process a SIGALRM signal after usec microseconds of real-time have elapsed. The alarm is then sent every interval microseconds after that.
Processor scheduling delays may cause a delay between when the signal is sent and when the process actually handles it.
If usec is 0, any previous ualarm() request is canceled.
Returns:
- 0
- There was no previous ualarm() request.
- -1
- An error occurred (errno is set).
- Any other value
- The number of microseconds until the next scheduled SIGALRM.
Errors:
- EAGAIN
- All timers are in use; wait for a process to release one and try again.
Examples:
    #include <stdio.h>
    #include <unistd.h>
    #include <stdlib.h>

    int main( void )
    {
        useconds_t timeleft;

        printf( "Set the alarm and sleep\n" );
        ualarm( (useconds_t)( 10 * 1000 * 1000 ), 0 );

        sleep( 5 ); /* go to sleep for 5 seconds */

        /* To get the time left before the SIGALRM is to arrive, one
           must cancel the initial timer, which returns the amount of
           time it had remaining. */
        timeleft = ualarm( 0, 0 );
        printf( "Time left before cancel, and rearm: %ld\n", timeleft );

        /* Start a new timer that kicks us when timeleft seconds have
           passed. */
        ualarm( timeleft, 0 );

        /* Wait until we receive the SIGALRM signal; any signal kills
           us, though, since we don't have a signal handler. */
        printf( "Hanging around, waiting to exit\n" );
        pause();

        /* You'll never get here. */
        return EXIT_SUCCESS;
    }
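The schedule-then-query pattern of this example can be sketched outside QNX as well. The Python fragment below is an analogy, not the QNX API; it assumes a POSIX host where signal.setitimer is available, and uses a real handler so the process survives the alarm:

```python
import signal
import time

fired = []

def on_alarm(signum, frame):
    # Runs when the timer expires and SIGALRM is delivered.
    fired.append(time.monotonic())

signal.signal(signal.SIGALRM, on_alarm)

# Arm a one-shot 50 ms real-time timer, roughly ualarm(50000, 0).
signal.setitimer(signal.ITIMER_REAL, 0.05, 0)

# Re-arming returns the previous (delay, interval) pair, much like
# ualarm(0, 0) returns the microseconds that were left.
remaining, _ = signal.setitimer(signal.ITIMER_REAL, 0.05, 0)

time.sleep(0.2)  # long enough for the alarm to arrive
```

As the caveats below warn for ualarm(), mixing several interval-timer mechanisms in one process is best avoided; the same holds for this setitimer-based analogue.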
Classification:
Caveats:
Don't mix calls to ualarm() with nanosleep(), sleep(), timer_create(), timer_delete(), timer_getoverrun(), timer_gettime(), timer_settime(), or usleep().
See also:
alarm(), nanosleep(), sigaction(), sleep(), timer_create(), timer_delete(), timer_getoverrun(), timer_gettime(), timer_settime(), TimerAlarm(), usleep()

Source: http://www.qnx.com/developers/docs/6.4.0/neutrino/lib_ref/u/ualarm.html
Wyvern = Wyrm (Score:3, Interesting)
Why? What's the worst that could happen? What's the best?
Why is the NSA interested in something like that directly? What is the potential for abuse?
Is it to make code analysis that much more centralized and (supposedly) simple?
Why didn't this come up with itself before now?
Re:Wyvern = Wyrm (Score:5, Interesting)
The standard NSA tatctic for introducing security holes into a system is to obfuscate things so that holes are hard to spot and find. SELinux is probably such a system, and this polglot language -- which effectviely makes debugging impossible -- is likely another.
Re: (Score:3, Insightful)
To properly debug such a language, you would need to be aware of all of the possible rules, pitfalls, bugs, and race conditions of every language under its hood.
At a basic level, is your "if else" condition running on it's Java or C++ or C version? Does it catch exceptions? Where is data being handled in memory? Are buffer overruns possible in some of these languages?
No one human could possibly be simultaneously cognisant of all possible sources of error. Programs in such a language would be a secur
Wyvern = Wyrm (Score:2)
But the good news for the USA is that the data will still have to connect with, say, international billing and other US-set global standards.
That's where a system like this might be fun. You don't have to care what the backend was, just what is sent as known, expected, decrypted data.
Pulling useful data from new bespoke communications streams will be like setting the old standards.
Re:Wyvern = Wyrm (Score:5, Insightful)
To write applications in one language, instead of HTML, CSS, JavaScript, SQL, and something else. Not including multiple levels of configuration files (website and web server at least).
The NSA could insert backdoors which, unless they were incomprehensible crypto, would be easily found by both white and black hat investigators. Also, Carnegie Mellon University, which has a pile of research announcements every year, has its entire research department under suspicion of colluding with an oppressive government agency and spends decades regaining international status as someone you can do anything other than make the punchline of a joke.
CMU losing status is, to CMU, absolutely an intolerable option. I'm not saying it won't just because of the potential impact, but you asked what is the worst that could happen. Backdoors, and a respected university bursts into flames and is disregarded for decades internationally. That's bad.
Fewer bugs.
Because despite recent bad press, they are interested in security. If we can write stuff with fewer bugs, we are more secure. Maybe there are still plenty of bugs in the hardware/OS that they know about, but fewer bugs in the application level, which means the foreigners don't know about them because they don't exist.
Pretty small. White hats will vet the libraries, black hats will try to penetrate it, and it's no more or less secure than anything else a human has written. But people can make mistakes in fewer languages. And they aren't replacing languages, from the sound of it.
I suppose you could read the article.
Why didn't the airplane come up before it did? Are you insinuating something? Do you know something we don't know? Did someone mod you up for any particular reason, or just because you spewed thoughtless rhetorical questions?
Re:Wyvern = Wyrm (Score:5, Insightful)
backdoors [...] would be easily found by both white and black hat investigators.
That's about the same as stating it is as simple to find a needle in a haystack as to put one in.
We already have issues finding normal bugs. We have seen flaws in kernels and encryption libraries that might have well been a typo, yet were in for years.
Re: (Score:2)
Put the haystack into a body of water and the hay will mostly float whereas the needle will sink.
Re: (Score:2)
Re: (Score:2)
No, how do they work?
Re: (Score:2)
As for why someone did not come up with it before, I have not looked lately, but old versions of GCC could compile together half a dozen languages into a single binary and I worked on a team that split up the project into multiple languages using the feature.
"bad press", "interested in security" (Score:3)
Your post makes various other points that sound reasonable to me, but I have to call out the above line from a couple of angles:
1) using the phrase "bad press" implies a virtuous subject that has been distorted by a reporting industry with a non-virtuous agenda. NOTHING OF THE SORT has happened to poor lil' NSA here... they FUCKED us, straight up, and got caught red-handed.
2) Whatever the extent to which the NSA is "interested in sec
Re: (Score:3)
It *has* been done before. I worked on it years ago. One of my colleagues came up with it in 1999. [waterlanguage.org]
It was brilliant to work in, but it didn't catch on.
Re: (Score:2)
Except that it's not valid XML but something even worse.
That language just looks horrible.
Re: Wyvern = Wyrm (Score:2, Interesting)
Not impressed. The OP obviously doesn't understand a thing about programming languages in general, or programming as an activity in particular. Or he would know that the use of multiple files, and multiple languages, is a means to an end, not a nuisance; namely, to manage complexity, and to use the most appropriate level of abstraction to solve a particular problem. If he did, he would not claim that Wyvern is a polyglot language, but that it is a meta language to create internal DSLs, domain-specific languages
Re: (Score:2)
Why didn't this come up with itself before now?
Jack of all trades, master of none.
Re: (Score:2)
Lack of basic research (Score:5, Insightful)
I arrived in America pretty late - in the '60s - but at least at that time America had several institutions doing all kinds of wonderful basic research
Bell Labs
Xerox's famous lab at Palo Alto
The Skunkworks
And at that time Darpa funded a lot of basic research as well
Today, all gone
Even DARPA's funding is not aimed at basic research - such as what TFA has outlined - what they are doing at Carnegie Mellon is actually applied research
... taking what has been known and adding another layer onto it
What's happening in America nowadays is very worrying
Re:Lack of basic research (Score:5, Insightful)
Of course, a lot of research was done by the private labs of corporations back then, like IBM, RCA, etc.. Engineering was a respected profession, you needed real talent to become an engineer or programmer and you could earn a good living that way in the West.
Then one day some bright psychopath realized it would be cheaper if universities did the research with government money instead.
Then you get the research done, your future employees come already in debt, and then they work for peanuts paying back their student loans.
So companies used to pay YOU to do research, now YOU pay to go to university and the companies get to keep the IP!
And social engineering and manipulation means that people will WILLINGLY do so!
Brilliant!
Re: (Score:2)
Yet at the same time, the US spends considerably more on research than it did then. I think here the explanation is that public funding crowded out private for basic research.
It makes little sense to fund your own research in the cases where some government would fund it for you. Similarly, if you're a researcher, public funding is high quality and less demanding than private funding. Sure, you have to fill out a ton of paperwork. But they don't have the anything like the expectations th
Re: (Score:2)
Wyvern? (Score:3)
Shit summary (Score:5, Insightful)
CSS and HTML5 are not programming languages. You don't "choose" html5 over, say, php.
(And don't fucking say HTML5 + CSS3 is turing complete)
Re:Shit summary (Score:5, Insightful)
I didn't see any programming languages in the list on the summary. Just a bunch of web shit.
Re:Shit summary (Score:5, Funny)
Yeah and can you imagine the horrific shit sandwich that would be a combination of CSS, HTML5, PHP and JavaScript?
666 Mark of the Techno Beast. It's like some shit Ghostbusters 2099 would be tasked with stopping.
stupid argument (Score:2, Insightful)
CSS & HTML5 ***are*** code languages for programming machine behavior
*at the presentation level*
it's not an "original gangster" hardcore badass super 1337 C#+! language...it's not complex or "bragable" at a gathering of dorks trying to impress each other...
but it's symbols that form a code that humans use to 'program' machine behavior...that's a programming language
just accept it, once and for all, and stop all of you....just stop
it doesnt make your skillz any less bragable...it's a coding language...mo
Re: (Score:2, Informative)
CSS & HTML5 ***are*** code languages for programming machine behavior
CSS & HTML5 are data that is interpreted by a computer program. They are not "code languages". The rule of thumb is that without some sort of control structure (if/then/else, loops, etc.), it's just data.
For HTML, this becomes obvious once you see how many real languages (JavaScript, PHP, ColdFusion, VisualBasic/ASP, etc.) have been created to overcome its lack of control structures.
then all code is data (Score:2)
you can't redefine "coding" by calling everything "data"
it's instructions for a machine...that's coding...
you're playing linguistic games & no matter how you do it you're still wrong functionally
Re: (Score:2)
But PRESENTATION (how something looks) and BEHAVIOUR (how something acts) are two different things.
Saying "programming machine behaviour... at the presentation level" is a nonsensical statement. HTML/CSS define content & presentation. They do not "program behaviour".
Or as Wiki puts it [wikipedia.org], "The purpose of programming is to find a sequence of instructions that will automate performing a specific task or solve a given problem". HTML & CSS simply do not qualify. They are certainly computer languages, b
definition is clear (Score:2)
presentation is behavior...in fact, if all you have is a monitor **all behavior is presentation**
if use HTML5 to tell a computer to display a black background when you go to a URL
OR i could do the same to ****PROGRAM**** the computer to display a white background when you go to a URL
either way, user enters data (URL in browser) and computer returns a ***PROGRAMMED*** response
programmed using HTML5 so that the browser knows it's the *background* that is to be black, not another part
that's programming no matt
CSS? (Score:5, Funny)
"What's your favorite programming language? Is it CSS?"
Why yes, I just love writing VoIP systems in CSS.
Re: (Score:2)
Every time I see the wish to create yet another, newer, better way to program a computer, instead of the oldschool assembler, C, Basic and Pascal methods, it keeps reminding me to ask people to let's come up with a better way to represent numbers. As in Roman numerals like MCMLXXXIV truly suck compared to Hindu (called Arabic) 1984 numerals, but we shouldn't leave it at that, there's gotta be something better than that Hindu representation. But the reality is that we'd be like a dog chasing it's tail with a
Re:CSS? (Score:5, Insightful)
I'd like to point out that you can't represent irrational numbers accurately without a new system. Let alone transcendental numbers.
Also some numbering systems are more convenient. Binary, for example. Not different numerals, but used differently.
I know, not exactly your point, but don't dismiss languages other than C, Basic, and Pascal.
Compiler virus (Score:5, Interesting)
Wasn't there some discussion on how effective a special, compiler-embedded virus would be? This seems like a good candidate for that.
NSA: A Source Name we trust! (Score:2)
Yes! Finally, a programming language and development system from a serious organization we can all trust to help us produce secure applications! I am so happy I'm doing the little Snoopy Dog House Dance! Oh-Joy! More Exclamation Points Please!!!
Re: (Score:2)
Yeah, like that horrible SELinux thing they developed...
Re: (Score:3)
You have n programming languages... (Score:5, Funny)
Re: (Score:3)
Apologies to you, AC, for hijacking your highly upvoted comment.
We appear to have something rather serious at work here. A registered user (jelIomizer, the second 'L' is actually an 'i' character or some Unicode variant) posted over 28 posts (all MyCleanPC spam) in under 6 minutes on this article--something neither you or I can do. This smacks of a slashcode bug or admin collusion.
For reference [cryptome.org]...
Oh yeah, hello to all the friendly NSA propaganda operatives out there. Go fuck yourself.
Re: (Score:2) [xkcd.com]
Ridiculous Summary, Interesting Papers (Score:3, Informative)
As you'd expect from CMU, the papers themselves are pretty interesting. Just read the abstracts instead of trying to guess from the summary or vice article, which are both way off the mark.
Because More is always better !!! (Score:2)
At the NSA they KNOW a bigger haystack is a better haystack, so why not extend that idea to a programming language.
By understanding all the languages you get the strengths of all the languages and none of the weaknesses; programmers can just ignore the weaknesses, then they aren't there.
Why should programmers have to put up with those pesky syntax errors when you can just make the language accept any (stupid) command?
Forward to the future !
why- just why? (Score:4, Insightful)
Why in the hell would you need to look at something with a skeptical eye just because money came from a certain source? Is the reputation of Carnegie Mellon suspect or something? And if so, shouldn't that in and of itself be the reason for suspicion?
The submitter is a shallow person suffering from guilt by association, which is never a valid premise. I mean, I know skinheads who donate to Planned Parenthood specifically because they have all their abortion clinics in areas with high minority populations, keeping the minority populations in check. Does that mean we have to look at them with a skeptical eye too? Of course not - or at least not because a source of their funding has issues most of us find repulsive.
The merits of this will rest on their own. There is absolutely no reason to put the integrity of the development into question simply because the NSA gave funding.
Re:why- just why? (Score:5, Insightful)
Re: (Score:2)
No... you are just as likely to overlook good research and settle for bad research when the source of funds plays a primary role in how you accept it.
The research and/or science will stand on its own merits. Well, that is if science is the goal and not politics. In this case, a university of good repute just had its integrity challenged by nothing more than idiots on parade trying to turn something political. There is no justification for it. Just mentioning the funding sources is one thing, but they actually
Re: (Score:2)
The NSA never touched this though. You seem to be trying to say that since a drunken murderer buys Jim Beam, all whiskey Jim Beam produces is somehow now suspect. It just ain't so.
Re: (Score:2, Insightful)
There is absolutely no reason to put the integrity of the development into question simply because the NSA gave funding.
Uh yes, there is..
How much longer are you willing to be a battered spouse, making excuses for your abuser?
Re: (Score:2)
If the NSA or some evil software writers' association actually developed the software, then yes. But simply passing money to an otherwise reputable team makes no sense, closed or open.
Re: (Score:2)
Except that one wonders *why* they are funding it. How will it make our communications less secure?
Off hand the only thing that comes to mind is that there would be fewer components of the browser that the NSA needed to compromise if all the languages used the same interpreter. Perhaps that's all there is. It's even possible that they didn't fund the project with a malign intent. That, however, is not the way I'd bet given their "improvements" of encryption methods.
Re: (Score:3)
No, it doesn't "roll all languages into one" (Score:5, Informative)
No, it doesn't "roll all languages into one". It just allows embedding of the text of another language, such as HTML, into a Wyvern program. Variables can be substituted. Like this:
(except that the last 3 lines above should be indented, because this language uses Python-style block notation.)
Of course, everybody does that now, but the way they do it, especially in PHP, tends to lead to problems such as SQL injection attacks. The idea here is that Wyvern has modules for the inserted text which understand what kinds of quoting or escaping are required for the embedded language text.
I just glanced at the paper, but that seems to be the big new feature.
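The escaping hazard mentioned above can be made concrete with a small sketch (the table and values are invented for illustration): naive string splicing lets crafted input rewrite the query, while a parameterized query keeps the same input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
conn.execute("INSERT INTO users VALUES ('bob')")

malicious = "nobody' OR '1'='1"

# Naive embedding: the input is spliced into the SQL text, so the OR
# clause becomes part of the query and matches every row.
naive = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % malicious
).fetchall()

# Parameterized query: the driver passes the value out of band, so the
# same input matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
```

A quoting-aware embedding layer of the kind described for Wyvern is, in effect, the second form applied uniformly: the host language knows the embedded text is SQL (or HTML) and inserts values as data, not as syntax.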
Re: (Score:2)
That problem would not exist if people knew how to use a database.
Re: (Score:2, Informative)
It's not just about quoting or escaping. It actually builds an AST for each TSL expression (for example, an HTML expression), so they can tell if the expression is valid and how to combine the Wyvern expression with the TSL expression containing it. It looks like brain-dead string concatenation, which reduces clutter and improves readability, but it gives you all the benefits of using the type system.
Re: (Score:2)
yeah, and the Java experience is that embedding code with html isn't a great idea. That's why JSP is on the way out and JSF on the way in.
skilled international negotiator! (Score:2)
Yeah, about as skilled and effective as past Israeli-Palestinian negotiators...
CSS? JavaScript? PHP? HTML5? (Score:5, Insightful)
Are these what the kids call programming languages these days?
It doesn't sound very serious.
Re: (Score:3)
> Are these what the kids call programming languages these days?
Yup. A lot of 'programmers' don't even know non-web languages exist. I wish I was kidding. And a lot of employers don't know either. The whole thing is just really sad.
Re: (Score:2)
It is all the NSA needs to get access to your Internet Connected device. That means everything nowadays, including my Linux toaster.
Yes, I am very sceptical if I see the letters N, S and A. (It isn't paranoia, because that is only when you THINK you are being followed, not when it becomes a fact.)
Programming language? (Score:2)
CSS: not a programming language.
HTML: not a programming language.
PHP: not a programming language.
Note: I'm a web developer mostly these days, I write a bucket of each of these. I'm a computer science educated professional and I also write a lot of code in Java and C++. I really like PHP. It is however not a bloody programming language, it's a scripting language.
Re: (Score:2)
So in order to be a programming language it has to be compiled instead of interpreted?
Where does compiled PHP fit into your world view?
Re: (Score:3)
That's a distinction without a difference. All "scripting languages" are programming languages, quibbling over whether the particular domain a language is used in makes it a "real" language or not is fodder for arrogant asses who need to make others seem smaller to boost their own pathetic egos.
Obviously, different languages have different strengths and weaknesses. You wouldn't write an OS kernel in JavaScript, and you wouldn't write system administration automation in C++. Sneering at the domain of one
Re: (Score:3)
Well, PHP is a programming language, just not really a general-purpose one.
Anyways, web-stuff is a small part of programming, and not really an important one as it is pretty limited.
Re:Programming language? (Score:5, Insightful)
I really like PHP. It is however not a bloody programming language, it's a scripting language.
I really hate PHP, but what I hate even more is being confronted with this mysterious distinction between "scripting" and "programming" languages.
A language might be strongly or weakly, dynamically or statically typed. A particular implementation might employ a compiler, a virtual machine or interpreter. These are meaningful distinctions. But what (with the possible exception of a hardware specific control language) does it even mean for a language (as distinct from its implementation) to be a "scripting" language?
Would PHP cease to be a scripting language if an object code compiler were available for it? Is 'C' a "scripting language" just because it's interpreted [softintegration.com]? And what about a language which has never actually been implemented, what in the language specification determines unequivocally if that language is 'scripting' or a a 'programming' language?
Re: (Score:3).
Re: (Score:2)
Agreed. Assembly is just a scripting language for microprocessors. C is just a scripting language for the compiler back-end. The OP did a terrible job of making his case.
Re: (Score:2)
[quote]
CSS: not a programming language.
HTML: not a programming language.
[/quote]
CSS and HTML are such devious piles of junk, they should be turing complete by now.
Which behaviour? (Score:5, Interesting)
//\
/*
#include "stdio.h"
/**///\
public class test2 {
//\
public static
void main
(String[]a)//\
/*
(int argc, char *argv[])//*/
{
System.out.printf("hi, I'm java\n");/*
printf("hi, I'm C\n");//*/
}
//\
}
Re: (Score:2)
void main(int argc, char *argv[])
valid C
Nope. Not valid C. Valid would be int main(void), int main(int argc, char **argv)(and equivalent), and in some cases int main(int argc, char **argv, char **envp) (and equivalent).
Source [open-std.org]
Re: (Score:2)
Depends on the standard. Even "main()" can be valid.
Re: (Score:2)
Depends on the standard.
No. None of the C Standards ever had void a valid return type for main, and, frankly all of them (since we're talking standards, that means C89 through C11) give you int main(void) and int main(int argc, char **argv)(and equivalent).
It's not like i didn't link a source.
Even "main()" can be valid.
Yes, C89 allowed leaving away the int, that's called "implicit int". Needless to point out, the return type is still int.
Re: (Score:2)
"main()" does not have void as return type, it has "no return type specified". You are also not going back far enough if C89 is the first thing you look at and you are constraining your search too much if you require an "IOS" Standard. There are others around, even if bodies like ISO would probably say they are not standards. Not so.
Re: (Score:2)
Furthermore probably sucks to think that a function taking no parameters like foo(void) and a function taking an unspecified number of parameters like foo() and (void) were the same thing. Your mind might be a bit C++-damaged (in C++, foo() in fact means foo(void)).
Educate much?
Consider char (*foo(int)
They've re-invented PL/1 (Score:2)
Re: (Score:2)
And we know how well that worked the last time.
Nah. They've re-invented Ada.
Ada is when they re-invented PL/1.
Hmm. What comes after strike 2?
Re: (Score:2)
Sorry, but PL/1 was a decent language with atrocious subsets at rediculous prices. The compiler was also large and slow. And I had some problems with it's "intelligent type conversion"s. But you've got to remember what other languages were around at the time. It hadn't learned Object Oriented programming. Etc. But it made safe use of pointers rather easy. I wrote my first Red-Black tree in PL/1 and it was a lot easier both to do and to understand than the one I did later in C.
OTOH, I must admit that
Why is scripting better than an amalgation of CSS? (Score:2)
I really don't understand this. Almost every site I go to does the same damn crap with Javascript and all of it could be done with other technologies.
LLVM's logo is a wyvern (Score:4, Insightful)
Keep away from it (Score:2)
It's supposed to help the NSA, and to hurt you in the end.
if it did, that would eliminate my bugs (Score:4, Insightful)
It doesn't do what the summary says.
If it did, that would take care of half of my bugs. Within a 30-minute period, I might well work in PHP, Perl, ActionScript, JavaScript, and some other language. A large portion of my errors are things like using empty() in JavaScript. Especially, ActionScript is almost the same as JavaScript, and a lot of Perl is also valid PHP, so when switching between these it's easy to absent-mindedly tap out a line in the wrong language.
Once upon a time, I used vim syntax highlighting, which doesn't typically catch using the right syntax, but the wrong function name, but does make missed braces and such obvious. Maybe I should right a vim plugin for "wrong language, dummy." It would look for echo (phph vs print (Perl), etc.
Re: (Score:2)
This reminds me of ls.bat and various other little
.bat files people put on DOS and Windows machines for obvious reasons.
All programming languages? (Score:3)
Does it do APL ? Forth ? 6502 assembler?
Re: (Score:2)
They forgot to add the distiction "... your average script-kiddie has ever heard of".
OMG! (Score:2)
They've re-invented PL/1!
Its really too bad... (Score:5, Insightful)
The NSA's reputation has been annihilated. There are good people that work for such organizations. People that could and do benefit our society on a regular basis. Their institution was simply coopted by irresponsible people that sadly destroyed everything. Its a shame.
Re: (Score:2)
Maybe it's time to reanimate (D)ARPA - the guys that gave us the Internet.
Re: (Score:2)
Maybe it's time to reanimate Al Gore?
Re: (Score:2)
The only NSA employee I'd trust is John Casey.
FTFY (Score:5, Funny)
"...and here's another one!"
Re: (Score:3)
Jellomizer (Score:2)
Jellomizer has multiple posts all dated with 7:12 PM. Now, as a Slashdot member over the years, with excellent karma, I can't even post that fast, regardless of what I'm posting. What allows Jellomizer, without the consent of the editors/admins, to post spam repeatedly, without any time delay?
all of those and more... (Score:2)
Anthing based upon HTML or CSS is guaranteed to be a unmaintainable crap. Put them together, and you have the largest pile of shite ever !
Naive predictions (Score:2).
Re: (Score:2)
What I really want to know is... how the fuck does a registered user post over 20+ posts in under 6 minutes without being filtered by the "you must wait X minutes" filter. This smacks of a slashcode exploit or editor collusion. I'm a registered user with Excellent karma, and I can't post anywhere NEAR that fast.
Re: (Score:3)
There's a comment threshold feature that effectively eliminates your ability to see low rated comments, which these ravings are rendered to with a quickness thanks to a rather decent moderation scheme.
Caveat: two or three of the smartest things I've ever read on here were, at least at one point, low threshold.
Re: (Score:2)
Allowing blatant spam to drown AC comments is likely the goal. Still not sure how Jellomizer posts over 20 (20+!!!) posts in under 6 minutes even IF they had excellent karma. This smacks of a slashcode bug or editor collusion. Normal users won't suffer because of the karma bonus, but affected users will include any ACs making relevant points. Allowing the spam to continue unabated will simply result in controversial viewpoints (held legitimately, posted AC to preserve reputation) being drowned out. For furt
Re: (Score:2)
Though since this is Slashdot, there's virtually zero chance this is the first (or the last) instance of a disgruntled nerd with some coding skill.
Can't you just picture the editors, worked up into a frenzy this Monday morning, feverishly pursuing a solution?
Re: (Score:2)
That's fine and wonderful, but some of us browse at -1 because some people make great points as an AC. This sort of spamming blatantly denies those people a voice.
Re: (Score:3)
Indeed. And JavaScript and PHP are special-purpose languages that are unfit to be user in a general setting. The OP has no clue.
Re: (Score:2)
Not quite, but it decidedly requires quite advanced skills to produces anything good with it.
Re: (Score:2)
Re: (Score:2)
Perl was a polyglot before it was cool. Hipster Perl. | https://developers.slashdot.org/story/14/08/10/2250205/new-nsa-funded-code-rolls-all-programming-languages-into-one?sbsrc=md | CC-MAIN-2016-44 | refinedweb | 4,922 | 64.91 |
If you use NumPy for numerical computations, which I recommend if you program in Python, one of the key things to do is to process entire arrays in one go as much as possible (this is also true for MATLAB). Using these so-called vectorized operations makes sure that the operation is run using compiled code instead of interpreted Python code, since the array and matrix operations behind NumPy (and MATLAB) are implemented in compiled languages such as C.
As an example, I’ll create a Python program that produces the classical Mandelbrot fractal. This fractal is computed by defining, for each \(c\in\mathbb{C}\), a polynomial \(f_c(z)=z^2+c\). This polynomial is then iterated starting with \(z=0\), that is, \(f_c^n(0)\) is computed, and any point \(c\) for which \(f_c^n(0)\) does not “escape to infinity” for \(n\to\infty\) is part of the Mandelbrot set (there are some example values of \(c\) in the article Mandelbrot Set).
In order to compute this polynomial for all pixels of an image at the same time, we create matrices \({\bf Z}\) and \({\bf C}\), and then do the computation
\[{\bf Z}^{(t+1)}={\bf Z}^{(t)}*\,{\bf Z}^{(t)}+\,{\bf C},\]
where \(*\) means element-wise matrix multiplication. The matrix \({\bf Z}^{(0)}\) must be initialized to all zeros, so that’s easy.
The Matrix C
Each element of the matrix \({\bf C}\) must be initialized to the coordinate of the pixel in the complex plane that it represents, i.e., the pixel \((x,y)\) must get the value \(x+yi\). This makes sure that the correct function \(f_c\) is computed at each point. A straightforward way to do that is to create a row of x coordinates and a column of y coordinates. For example, for the points with integer coordinates in the range \([-2,2]\), this would be
\[\begin{pmatrix}
-2 & -1 & 0 & 1 & 2
\end{pmatrix}\]
and
\[\begin{pmatrix}
2 \\
1 \\
0 \\
-1 \\
-2
\end{pmatrix}.\]
By replicating both to a full \(5\times5\) matrix and adding them, with the second one multiplied by \(i\), we get
\[\begin{pmatrix}
-2+2i & -1+2i & 2i & 1+2i & 2+2i \\
-2+i & -1+i & i & 1+i & 2+i \\
-2 & -1 & 0 & 1 & 2 \\
-2-i & -1-i & -i & 1-i & 2-i \\
-2-2i & -1-2i & -2i & 1-2i & 2-2i
\end{pmatrix}.\]
In practice, a basic plot of the Mandelbrot fractal can be made on a grid that extends from \(-2\) to \(1\) in direction of the x-axis, and \(-1\) to \(1\) in the direction of the y-axis. The magnitude of the elements of a \(480\times320\) matrix with this range is shown in Figure 1.
One way to compute \({\bf C}\) in Python is the following.
import numpy as np m = 480 n = 320 x = np.linspace(-2, 1, num=m).reshape((1, m)) y = np.linspace(-1, 1, num=n).reshape((n, 1)) C = np.tile(x, (n, 1)) + 1j * np.tile(y, (1, m))
This code literally translates the vector and matrix operations from the example above. There are alternatives such as
mgrid() that could be used, but I think that the construction using
tile() explicitly is clearer in this case.
Iterating the Polynomial
We now have the matrices \({\bf Z}\) and \({\bf C}\) ready to start iterating. This is where it gets a bit tricky. The basic code is completely trivial:
Z = np.zeros((n, m), dtype=complex) for i in range(100): Z = Z * Z + C
However, there’s a problem. The values that “escape to infinity” grow so quickly that they overflow the maximum float value in no time, which first results in
Inf values and then quickly in
NaN values. To avoid this, I’ll add a mask
M of pixels that are potentially in the Mandelbrot set, but from which we will remove pixels if we discover that they are not. Mathematically, it can be proven that pixels values with a magnitude larger than 2 will escape and cannot be part of the set. Hence, the code can be adapted as follows.
Z = np.zeros((n, m), dtype=complex) M = np.full((n, m), True, dtype=bool) for i in range(100): Z[M] = Z[M] * Z[M] + C[M] M[np.abs(Z) > 2] = False
There are two important constructs here. The first one is the notation
Z[M]. This is so-called boolean indexing. It selects the elements of the matrix
Z for which
M contains
True. This enables you to only continue iterating on those pixels that have not escaped yet. The second line updates
M itself. After each iteration, we determine all pixels that have escaped, using the expression
np.abs(Z) > 2. We then use boolean indexing again to remove these pixels from
M, using the expression
M[np.abs(Z) > 2] = False.
Hence, using array operations allows computing the Mandelbrot set in only a few lines of code, and with much better performance than by iterating over all pixels with Python
for loops. Figure 2 shows the result of running the above code (so, for 100 iterations) and then plotting
M.
This black-and-white image shows the Mandelbrot set in black. For more information on the difference between this and the usual colorful images that you possibly expected here, see again the article Mandelbrot Set.
Python Code
For completeness, a complete Python program follows.
import numpy as np from imageio import imwrite m = 480 n = 320 x = np.linspace(-2, 1, num=m).reshape((1, m)) y = np.linspace(-1, 1, num=n).reshape((n, 1)) C = np.tile(x, (n, 1)) + 1j * np.tile(y, (1, m)) Z = np.zeros((n, m), dtype=complex) M = np.full((n, m), True, dtype=bool) for i in range(100): Z[M] = Z[M] * Z[M] + C[M] M[np.abs(Z) > 2] = False imwrite('mandelbrot.png', np.uint8(np.flipud(1 - M) * 255))
The call to
flipud() makes sure that the plot is not upside down due to the matrix indexing that NumPy uses. This does not make any difference for this image, since the Mandelbrot set is symmetrical around the x-axis, but it becomes important when you zoom in on different regions of the set.
Hi Tom!
In the second last line, you mean symmetric around x-axis, right?
I also have a doubt -
why the lower bulb in the Mandelbrot set looks bit tilted? (I went through other similar graphs over internet and everywhere is is like that)
Yes, around the x-axis, of course. Thanks, I've adapted the text. Both the upper and lower small bulbs are indeed tilted. This is not an effect of the way in which the image was produced or something, it is really how the Mandelbrot set looks.
Add new comment | https://tomroelandts.com/index.php/articles/how-to-compute-the-mandelbrot-set-using-numpy-array-operations | CC-MAIN-2021-04 | refinedweb | 1,144 | 64.71 |
Background
In this tutorial series, we are going to learn modern c++ using the raspberry pi. So, what do I mean when I say modern? In August of 2011, a new version of the c++ standard was released that included some exciting new features. So, in this tutorial series, we will be taking advantage of these enhancements.
This tutorial series will start with the basics and then quickly move on to more advanced topics. Occasionally, we will take a step back and use what we have learned so far to do a project, so that you can exercise the skills that you have learned and build something exciting. By the end, you should have a great foundation in c++ that you can use to build your own projects.
Setup
For this tutorial series, we are going to be using the Raspbian OS. It’s very easy to install and set up and it seems to be the most popular OS for the raspberry pi. Raspbian comes with all of the tools that we need to start programming, so let’s dive in.
Code
Here’s the code to print out a message to the screen:
#include <iostream>
using namespace std;
int main()
{
cout << "Hello Raspberry Pi!" << endl;
return 0;
}
The first line includes a library. Libraries are prewritten code that you can leverage to do things faster and easier. Many common functions, such as writing a message to the screen, have already been written, so you can just reuse them and focus on aspects of your program that are unique to the problem that you are trying to solve.
The next line makes it so that you don’t have to prefix the functions in the library with the name of the namespace that it is in. This is to help fix name collision problems. Imagine if I wrote a function called foo and someone else wrote a function with the exact same name. If I tried to call that function the computer wouldn’t know which one I wanted, so to help with this namespaces were created. This line tells the computer that if it is looking for a function, that it should start by looking in the std namespace.
The next line declares a function. A function is a group of code that takes a set of inputs and produces an output. A simple example would be a sine function. The function would take in an angle, do some computations on that angle and then return the sine of the given angle. This function is called main and it doesn’t take any inputs, so there are just empty parenthesis after the name of the function. The function returns an error status. This is used to let other programs know if the program succeeded or failed. The main function is a very special function. It lets the computer know where your program starts, so every program must have a main function.
The curly braces signify the start and end of the function. Inside the function we only have two lines. The first one writes the phrase given to the screen, and the second one tells the computer that the program succeeded. If you change the text in between the two double quotes, you can make the computer write anything that you want to the screen.
Compiling and Running
So, take the above code and paste it into a file called 01_HelloRPi.cpp. Now open up a command prompt and navigate to the directory that contains the file. In the command prompt type:
g++ -std=c++0x 01_HelloRPi.cpp -o01_HelloRPi
This will take the code that you have written and create a program that can be run. The name of the program to create is specified with the –o flag, in this case we called it 01_HelloRPi. The –std=c++0x flag tells the compiler to use the latest version of the c++ standard. Then to run the program type in:
./01_HelloRPi
And that will write your message to the screen.
Summary
In this first tutorial we set up our environment to get ready to code and wrote a small program that prints a message to the screen.
In the next tutorial, we discuss how to read in data from the user to make an interactive program.
If you have any questions or comments about what was covered here, post them to the comments. I watch them closely and will respond and try to help you out.
Tutorial Index
14 - Linked List Operations
- 01_HelloRPi.cpp.zip 251 bytes | https://www.element14.com/community/community/code_exchange/blog/2013/01/02/c-tutorial--hello-raspberry-pi | CC-MAIN-2018-22 | refinedweb | 759 | 80.41 |
I have this program with a class DNA. The program counts the most frequent k-mer in a string. So, it is looking for the most common substring in a string with a length of k.
An example would be creating a dna1 object with a string of AACCAATCCG. The count k-mer method will look for a subtring with a length of k and output the most common answer. So, if we set k = 1 then 'A' and 'C' will be the most occurrence in the string because it appears four times. See example below:
dna1 = DNA.new('AACCAATCCG')
=> AACCAATCCG
>> dna1.count_kmer(1)
=> [#<Set: {"A", "C"}>, 4]
>> dna1.count_kmer(2)
=> [#<Set: {"AA", "CC"}>, 2]
class DNA
def initialize (nucleotide)
@nucleotide = nucleotide
end
def length
@nucleotide.length
end
protected
attr_reader :nucleotide
end
# I have k as my only parameter because I want to pass the nucleotide string in the method
def count_kmer(k)
# I created an array as it seems like a good way to split up the nucleotide string.
counts = []
#this tries to count how many kmers of length k there are
num_kmers = self.nucleotide.length- k + 1
#this should try and look over the kmer start positions
for i in num_kmers
#Slice the string, so that way we can get the kmer
kmer = self.nucleotide.split('')
end
#add kmer if its not present
if !kmer = counts
counts[kmer] = 0
#increment the count for kmer
counts[kmer] +=1
end
#return the final count
return counts
end
#end dna class
end
Code
def most_frequent_substrings(str, k) (0..str.size-k).each_with_object({}) do |i,h| b = [] str[i..-1].scan(Regexp.new str[i,k]) { b << Regexp.last_match.begin(0) + i } (h[b.size] ||= []) << b end.max_by { |k,_| k }.last.each_with_object({}) { |a,h| h[str[a.first,k]] = a } end
Example
str = "ABBABABBABCATSABBABB" most_frequent_substrings(str, 4) #=> {"ABBA"=>[0, 5, 14], "BBAB"=>[1, 6, 15]}
This shows that the most frequently-occurring 4-character substring of
strappears 3 times. There are two such substrings: "ABBA" and "BBAB". "ABBA" begins at offsets (into
str) 0, 5 and 14, "BBAB" substrings begin at offsets 1, 6 and 15.
Explanation
For the example above the steps are as follows.
k = 4 n = str.size - k #=> 20 - 4 => 16 e = (0..n).each_with_object([]) #<Enumerator: 0..16:each_with_object([])>
We can see the values that will be generated by this enumerator this by converting
e to an array.
e.to_a #=> [[0, []], [1, []], [2, []], [3, []], [4, []], [5, []], [6, []], [7, []], [8, []], # [9, []], [10, []], [11, []], [12, []], [13, []], [14, []], [15, []], [16, []]]
Note the empty array contained in each element will be modified as the array is built. Continuing, the first element of
e is passed to the block and the block variables are assigned using parallel assignment:
i,a = e.next #=> [0, []] i #=> 0 a #=> []
We are now considering the substring of size 4 that begins at
str offset
i #=> 0, which is seen to be "ABBA". Now the block calculation is performed.
b = [] r = Regexp.new str[i,k] #=> Regexp.new str[0,4] #=> Regexp.new "ABBA" #=> /ABAB/ str[i..-1].scan(r) { b << Regexp.last_match.begin(0) + i } #=> "ABBABABBABCATSABBABB".scan(r) { b << Regexp.last_match.begin(0) + i } b #=> [0, 5, 14]
We next have
(h[b.size] ||= []) << b
which becomes
(h[b.size] = h[b.size] ||= []) << b #=> (h[3] = h[3] || []) << [0, 5, 14]
Since
h has no key
3,
h[3] on the right side equals
nil. Continuing,
#=> (h[3] = nil || []) << [0, 5, 14] #=> (h[3] = []) << [0, 5, 14] h #=> { 3=>[[0, 5, 14]] }
Notice that we throw away
scan's return value. All we need is
b
This tells us the "ABBA" appears thrice in
str, beginning at offsets 0, 5 and 14.
Now observe
e.to_a #=> [[0, [[0, 5, 14]]], [1, [[0, 5, 14]]], [2, [[0, 5, 14]]], # ... # [16, [[0, 5, 14]]]]
After all elements of
e have been passed to the block, the block returns
h #=> {3=>[[0, 5, 14], [1, 6, 15]], # 1=>[[2], [3], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16]], # 2=>[[4, 16], [5, 14], [6, 15]]}
Consider substrings that appear just once:
h[1]. One of those is
[2]. This pertains to the 4-character substring beginning at
str offset
2:
str[2,4] #=> "BABA"
That is found to be the only instance of that substring. Similarly, among the substrings that appear twice is
str[4,4] = str[16,4] #=> "BABB", given by h[2][0] #=> [4, 16].
Next we determine the greatest frequency of a substring of length 4:
c = h.max_by { |k,_| k } #=> [3, [[0, 5, 14], [1, 6, 15]]] d = c.last #=> [[0, 5, 14], [1, 6, 15]]
For convenience, convert
d to a hash:
d.each_with_object({}) { |a,h| h[str[a.first,k]] = a } #=> {"ABBA"=>[0, 5, 14], "BBAB"=>[1, 6, 15]}
and return that hash from the method.
There is one detail that mention. It is possible that
d will contain two or more arrays that reference the same substring, in which case the value of the associated key (the substring) will equal the last of those arrays. Here's a simple example.
str = "AAA" k = 2
In this case the array
c above will equal
c = [[0], [1]]
Both of these reference
str[0,2] #=> str[1,2] #=> "AA". In building the hash the first is overwritten by the second:
c.each_with_object({}) { |a,h| h[str[a.first,k]] = a } #=> {"AA"=>[1]} | https://codedump.io/share/qKJWFqjKdkpw/1/how-to-write-a-method-that-counts-the-most-common-substring-in-a-string-in-ruby | CC-MAIN-2017-13 | refinedweb | 901 | 75.91 |
Please tell me the exact logic to solve this question. I’ve tried a lot but always end up with WA. Here is the link of the question -
It w1 is odd, the we need one n1 object to fill the last one space. If n1==0, then the point is wasted.
so, if w1 is odd, then subtract 1 from it (the last space), if n1>0, add 1 to answer and reduce one from n1. Do the same for w2.
Now you have two bags each with even number weight capacity. Now, each object in n2 carries 2 units of weight. So total weight is 2*n2.
Let, n2=2*n2;
and W=w1+w2;
So now, n2 and n1 are total weight available and W is the total weight capacity.
If W<=n2, it means we can fill all the weight in W by n2.
So, apply
{
ans+=W; //As W weight is available
n2-=W;
W=0;
}
else if(W>n2) // this means W can accommodate all n2 weight and will have spare space.
{
W-=n2;
ans+=n2;
n2=0;
}
Do the exact same for n1 and print the value in ans.
p.s. In solving with n2, if W becomes 0, it doesn’t cause any problem with n1 part as we are adding and subtracting 0 from any value.
Easy to understand Solution
#include<bits/stdc++.h> using namespace std; #define ll unsigned long long int main(){ ll t, w1,w2,n1,n2,W; cin>>t; while(t--){ cin>>w1>>w2>>n1>>n2; ll ans = 0; ll temp = min(w1/2+w2/2,n2); ans=2*temp; ans += min(w1+w2-2*temp, n1); cout<<ans; } return 0; } | https://discuss.codechef.com/t/fill-the-bags/9482 | CC-MAIN-2020-40 | refinedweb | 284 | 90.09 |
CodePlexProject Hosting for Open Source Software
Hi all,
I'm looking to build a form widget that posts the data to a controller action. Seems simple enough but I'm struggling. I don't want the widget to have a content part as it doesn't need it. All the example I've seen contains ContentParts and I've actually
built my own twitter widget a content part so I understand how it all works.
I maybe even talking rubbish, so let me explain. I don't want to save anything in the database, I just want to render a widget that contains a shape template (which contains the form) and then submit to a custom action on a controller within my module. This
action will then submit the form data to campaign monitor and so I don't need to save the information in the database.
I'm building a widget as the form will be shown on all pages, in a specific zone.
I know how to create the controller and action and register a route with an area for the controller in my module.
Can I create a seperate view model and use for my shape template or do I need to use a content part, which acts as the view model but I just don't have a handler that then submits the data?
Sorry if a bit confusing question. If anyone has some real world examples of doing this that would be great.
Thanks,
Kevin
You can create a content part without creating a content part record. Be mindful though that it's not unusual to have configuration settings a widget, and you may want to add them at a later date.
Usually, you would create the new content part like this:
public class MyPart : ContentPart<MyPartRecord>
{ }
but Orchard is happy enough if you just derive from ContentPart rather than ContentPart<MyPartRecord>. There is an example of this in the search widget (take a look at
Orchard.Web/Modules/Orchard.Search/Models/SearchFormPart.cs).
You will still need to create a handler and a driver as you would with any other content part (see Writing a Content Part), although clearly you will need to remove the handler
code that asks for an IRepository<> and sets the storage filter (since MyPartRecord does not exist).
The rest of it is pretty much just the same as writing a regular content part/widget - you'll need to display the correct mark-up to display the form and let the user submit it to your controller, and maybe work out which content item or URL the
request came from so you can return the user back there once your controller method has finished.
Hope this helps!
Yes, you should be able to do this, and @kobowi has a great answer, especially looking at the SearchForm widget. Just to add onto his answer, and bring emphasis to your question "Can I create a seperate view model and use for my shape template or do I need
to use a content part, which acts as the view model but I just don't have a handler that then submits the data?" ...
Yes, you'll see that's basically exactly what the SearchForm does. You can create a Driver for your ContentPart (which as @kobowi pointed out can just be a simple ContentPart with no backing record if you want) and the Driver can contain just a Display method.
It doesn't need to contain the Editor methods. And the Display method can return whatever datamodel you want for the shape. It can just rely on dynamic properties which contain the values, or it can have one property which itself is a strongly typed view model
that your view can reference. Either way will work, but regardless you can have the form in your shape template post to your own controller and do whatever you want from there.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://orchard.codeplex.com/discussions/271245 | CC-MAIN-2017-09 | refinedweb | 692 | 66.57 |
This is an introduction to the Jena client API to the AllegroGraph RDFStore™ from Franz Inc.
The Java Jena API offers convenient and efficient access to an AllegroGraph server from a Java-based application. This API provides methods for creating, querying and maintaining RDF data, and for managing the stored triples.
The Java Jena API emulates the Jena Semantic Web Framework to make it easier to migrate from Jena to AllegroGraph.
The Jena client tutorial rests on a simple architecture involving AllegroGraph, disk-based data files, Java, and a file of examples called JenaTutorialExamples.java.
Each lesson in JenaTutorialExamples.java is encapsulated in a Java method, named exampleN(), where N ranges from 0 to 21 (or more). The method names are referenced in the title of each section to make it easier to compare the tutorial text and the living code of the examples file. We use the same example numbers across multiple APIs to facilitate comparisons among them, even though differences in API features sometimes leave a gap in the sequence of numbers.
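As a hypothetical illustration of this numbered-lesson convention (the real main() in JenaTutorialExamples.java may dispatch differently, and the lesson descriptions below are placeholders), a minimal dispatcher keyed on the example number might look like:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical dispatcher mirroring the exampleN() naming convention;
// the lesson numbers and descriptions here are illustrative only.
public class ExampleDispatcher {
    static final Map<Integer, Supplier<String>> EXAMPLES = new LinkedHashMap<>();
    static {
        EXAMPLES.put(1, () -> "example1: create a repository");
        EXAMPLES.put(2, () -> "example2: assert triples");
    }

    static String run(int n) {
        Supplier<String> lesson = EXAMPLES.get(n);
        return lesson == null ? "unknown example " + n : lesson.get();
    }

    public static void main(String[] args) {
        // Default to lesson 1 when no example number is given.
        int n = args.length > 0 ? Integer.parseInt(args[0]) : 1;
        System.out.println(run(n));
    }
}
```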
The tutorial examples can be run on a Linux system, running AllegroGraph and the examples on the same computer ("localhost"). The tutorial assumes that AllegroGraph has been installed and configured using the procedure posted on this webpage. The Java Jena API and the Jena tutorial are bundled with the AllegroGraph Java Client.
We need to clarify some terminology before proceeding. In this tutorial, a catalog is a named collection of repositories managed by the AllegroGraph server (the default is the root catalog, denoted "/"); a repository is a triple store within a catalog; and a connection is the client-side object through which we query and maintain the triples of a specific repository.
The first task is to have our AllegroGraph Server open a repository. This task is implemented in example1() from JenaTutorialExamples.java.
In example1() we build a chain of Java objects, ending in a "connection" object that lets us manipulate triples in a specific repository. The overall process of generating the connection object follows this chain: server → catalog → repository → connection.
The example first connects to an AllegroGraph Server by providing the endpoint (host IP address and port number) of an already-launched AllegroGraph server. You'll also need a user name and password. This creates a client-side server object, which can access the AllegroGraph server's list of available catalogs through the server.listCatalogs() method:
public class JenaTutorialExamples {
    static private final String SERVER_URL = "";
    static private final String CATALOG_ID = "java-catalog";
    static private final String REPOSITORY_ID = "jenatutorial";
    static private final String USERNAME = "test";
    static private final String PASSWORD = "xyzzy";
    static private final String TEMPORARY_DIRECTORY = "";

    static final String FOAF_NS = "";

    /**
     * Creating a Repository
     */
    public static AGGraphMaker example1(boolean close) throws Exception {
        // Tests getting the repository up.
        println("\nStarting example1().");
        AGServer server = new AGServer(SERVER_URL, USERNAME, PASSWORD);
        println("Available catalogs: " + server.listCatalogs());
This is the output so far:
Starting example1().
Available catalogs: [/, java-catalog, python-catalog]
These examples use either the default root catalog (denoted as "/") or a named catalog called "java-catalog".
In the next line of example1(), we use the server's getCatalog() method to create a client-side catalog object connected to the named catalog "java-catalog", as defined in the AllegroGraph configuration file. The catalog object has methods such as getCatalogName() and listRepositories() that we can use to investigate the catalogs on the AllegroGraph server. When we look inside the catalog, we can see which repositories are available:
AGCatalog catalog = server.getCatalog(CATALOG_ID);
println("Available repositories in catalog "
+ (catalog.getCatalogName()) + ": "
+ catalog.listRepositories());
The corresponding output lists the available repositories. (When you run the examples, you may see a different list of repositories.)
Available catalogs: [/, java-catalog]
Available repositories in catalog java-catalog: []
In the examples, we are careful to delete previous state before continuing. You probably would not do this in your actual application:
catalog.deleteRepository(REPOSITORY_ID);
The next step is to create a client-side repository object representing the repository we wish to open, by calling the createRepository() method of the catalog object. We have to provide the name of the desired repository (REPOSITORY_ID in this case, which is bound to the string "jenatutorial").
AGRepository myRepository = catalog.createRepository(REPOSITORY_ID);
println("Got a repository.");
myRepository.initialize();
println("Initialized repository.");
A new or renewed repository must be initialized, using the initialize() method of the repository object. If you try to initialize a repository twice you get a warning message in the Java window but no exception.
Got a repository. Initialized repository.
The goal of all this object-building has been to create a client-side repositoryConnection object, which we casually refer to as the "connection" or "connection object." The repository object's getConnection() method returns this connection object. The function closeBeforeExit() maintains a list of connection objects and automatically cleans them up when the client exits.
AGRepositoryConnection conn = myRepository.getConnection();
closeBeforeExit(conn);
println("Got a connection.");
println("Repository " + (myRepository.getRepositoryID())
+ " is up! It contains " + (conn.size()) + " statements.");
The size() method of the connection object returns how many triples are present. In the example1() function, this number should always be zero because we deleted and recreated the repository. This is the output in the Java window:
Got a connection.
Repository jenatutorial is up! It contains 0 statements.
When using the Java Jena API, it is necessary to create a GraphMaker object on the connection. This object will let us create graphs in the connection's repository.
AGGraphMaker maker = new AGGraphMaker(conn);
println("Got a graph maker for the connection.");
Got a graph maker for the connection.
Whenever you create a new repository, you should stop to consider which kinds of triple indices you will need. This is an important efficiency decision. AllegroGraph names each index after the sort order of the triple fields it contains: s (subject), p (predicate), o (object), g (graph), and i (triple id). If you know the URI of a desired resource (the subject value of the query pattern), then the spogi index lets you retrieve all triples with that subject as a single block.
The idea is to provide your repository with the indices that your queries will need, and to avoid maintaining indices that you will never need.
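The index names encode the field sort order directly. A small helper, purely illustrative and not part of any AllegroGraph API, makes the naming convention explicit:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class IndexNames {
    // Map each letter in an index name to the triple field it sorts on.
    static final Map<Character, String> FIELDS = Map.of(
            's', "subject", 'p', "predicate", 'o', "object",
            'g', "graph", 'i', "triple-id");

    // Expand an index flavor like "spogi" into its field sort order.
    static List<String> expand(String indexName) {
        List<String> order = new ArrayList<>();
        for (char c : indexName.toCharArray()) {
            order.add(FIELDS.get(c));
        }
        return order;
    }

    public static void main(String[] args) {
        // "spogi" sorts primarily by subject, so all triples sharing a
        // subject URI are adjacent and can be fetched as one block.
        System.out.println(expand("spogi"));
    }
}
```

So "posgi", for instance, sorts primarily by predicate, which is why it serves queries that fix the predicate and leave the other positions open.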
We can use the connection object's listValidIndices() method to examine the list of all possible AllegroGraph triple indices:
List<String> indices = conn.listValidIndices();
println("All valid triple indices: " + indices);
This is the list of all possible valid indices:
All valid triple indices: [spogi, spgoi, sopgi, sogpi, sgpoi, sgopi, psogi, psgoi, posgi, pogsi, pgsoi, pgosi, ospgi, osgpi, opsgi, opgsi, ogspi, ogpsi, gspoi, gsopi, gpsoi, gposi, gospi, gopsi, i]
AllegroGraph can generate any of these indices if you need them, but it creates only seven indices by default. We can see the current indices by using the connection object's listIndices() method:
indices = conn.listIndices();
println("Current triple indices: " + indices);
There are currently seven indices:
Current triple indices: [i, gospi, gposi, gspoi, ospgi, posgi, spogi]
The indices that begin with "g" are sorted primarily by subgraph (or "context"). If your application does not use subgraphs, you should consider removing these indices from the repository. You don't want to build and maintain triple indices that your application will never use. This wastes CPU time and disk space. The connection object has a convenient dropIndex() method:
println("Removing graph indices...");
conn.dropIndex("gospi");
conn.dropIndex("gposi");
conn.dropIndex("gspoi");
indices = conn.listIndices();
println("Current triple indices: " + indices);
Having dropped three of the triple indices, there are now four remaining:
Removing graph indices...
Current triple indices: [i, ospgi, posgi, spogi]
The i index is for deleting triples by using the triple id number. The ospgi index is sorted primarily by object value, which makes it possible to grab a range of object values as a single block of triples from the index. Similarly, the posgi index lets us reach for a block of triples that all share the same predicate. We mentioned previously that the spogi index lets us retrieve blocks of triples that all have the same subject URI.
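The choice of index follows from which positions of the query pattern are bound: the best index is one whose sort order begins with exactly the bound fields. The following sketch is a hypothetical helper for illustration only, not an AllegroGraph API call:

```java
import java.util.List;
import java.util.Set;

public class IndexChooser {
    // Given the bound positions of a triple pattern (letters from "spog"),
    // prefer an index whose sort order starts with exactly those fields,
    // so the matching triples form one contiguous block in the index.
    static String choose(Set<Character> bound, List<String> available) {
        for (String index : available) {
            boolean prefixMatches = true;
            for (int i = 0; i < bound.size(); i++) {
                if (!bound.contains(index.charAt(i))) {
                    prefixMatches = false;
                    break;
                }
            }
            if (prefixMatches) return index;
        }
        // Any index can answer the query, just more slowly.
        return available.get(0);
    }

    public static void main(String[] args) {
        List<String> indices = List.of("spogi", "posgi", "ospgi");
        // A pattern with only the predicate bound wants a "p..." index.
        System.out.println(choose(Set.of('p'), indices));
    }
}
```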
As it happens, we may have been overly hasty in eliminating all of the graph indices. AllegroGraph can find the right matches as long as there is any one index present, but using the "right" index is much faster. Let's put one of the graph indices back, just in case we need it. We'll use the connection object's addIndex() method:
println("Adding one graph index back in...");
conn.addIndex("gspoi");
indices = conn.listIndices();
println("Current triple indices: " + indices);
Adding one graph index back in...
Current triple indices: [i, gspoi, ospgi, posgi, spogi]
In its default mode, example1() closes the maker and connection. It can optionally return the maker when called by another method, as will occur in several examples below. If you are done with these objects, closing them and shutting them down will free resources.
if (close) {
// tidy up
maker.close();
conn.close();
myRepository.shutDown();
return null;
}
return maker;
}
In example2(), we show how to create resources describing two people, Bob and Alice, by asserting individual triples into the repository. The example also retracts and replaces a triple. Assertions and retractions to the triple store are executed by 'add' and 'remove' methods belonging to the Model object. The Model object is created by the GraphMaker object, which we will obtain from example1(), above.
Before asserting a triple, we have to generate the URI values for the subject, predicate and object fields. The Java Jena API to AllegroGraph Server predefines a number of classes and predicates for the RDF, RDFS, XSD, and OWL ontologies. RDF.type is one of the predefined predicates we will use.
The example2() function begins by calling example1() to create the appropriate GraphMaker object, which is bound to the variable maker. We will use the GraphMaker to create a graph, and then a model object that is connected to the graph. We will need both objects in order to proceed.
public static AGModel example2(boolean close) throws Exception {
println("\nStarting example2().");
AGGraphMaker maker = example1(false);
AGGraph graph = maker.getGraph();
AGModel model = new AGModel(graph);
The next step is to begin assembling the resources we will need for the example. The model's createResource() method creates a Resource object based on a string URI. These are the resources for "Bob" and "Alice":
Resource alice = model
.createResource("");
Resource bob = model.createResource("");
Both Bob and Alice will have a "name" attribute. We also need to declare a class of Persons, so we can state that Bob and Alice are Persons.
Property name = model
.createProperty("");
Resource person = model
.createResource("");
The name attributes will contain literal values. We have to generate the Literal objects from strings:
Literal bobsName = model.createLiteral("Bob");
Literal alicesName = model.createLiteral("Alice");
The next line prints out the number of triples currently in the repository.
println("Triple count before inserts: " + model.size());
Triple count before inserts: 0
Now we assert four triples, two for Bob and two more for Alice, using the model's add() method. Note the use of RDF.type, a constant predefined in Jena's vocabulary classes. It is bound to the URI of the rdf:type predicate, which is used to indicate the class of a resource.
// Alice's name is "Alice"
model.add(alice, name, alicesName);
// Alice is a person
model.add(alice, RDF.type, person);
// Bob's name is "Bob"
model.add(bob, name, bobsName);
// Bob is a person, too.
model.add(bob, RDF.type, person);
After the assertions, we count triples again (there should be four) and print out the triples for inspection. If we call model.listStatements() with no arguments, it dumps every triple in the graph. (We'll be a little more selective in example4().)
println("Added four triples.");
println("Triple count after inserts: " + (model.size()));
StmtIterator result = model.listStatements();
while (result.hasNext()) {
Statement st = result.next();
println(st);
}
This is the output at this point. We see four triples, two about Alice and two about Bob:
Triple count after inserts: 4
[,,]
[,, "Bob"]
[,,]
[,, "Alice"]
We see two resources of type "person," each with a literal name.
The next step is to demonstrate how to remove a triple. Use the remove() method of the model object, and supply a triple pattern that matches the target triple. In this case we want to remove Bob's name triple from the repository. Then we'll count the triples again to verify that there are only three remaining.
model.remove(bob, name, bobsName);
println("Removed one triple.");
println("Triple count after deletion: " + (model.size()));
Removed one triple.
Triple count after deletion: 3
Finally, we re-assert Bob's name so we can use it in subsequent examples, and we'll return the model object for other examples to use. Example2() ends with a condition that either closes the connection or passes the model object on to the next method for reuse.
model.add(bob, name, bobsName);
if (close) {
model.close();
graph.close();
maker.close();
return null;
}
return model;
}
SPARQL stands for the "SPARQL Protocol and RDF Query Language," a recommendation of the World Wide Web Consortium (W3C). SPARQL is a query language for retrieving RDF triples.
Our next example illustrates how to evaluate a SPARQL query. This is the simplest query, the one that returns all triples. Note that example3() continues with the four triples created in example2().
public static void example3() throws Exception {
AGModel model = example2(false);
println("\nStarting example3().");
try {
String queryString = "SELECT ?s ?p ?o WHERE {?s ?p ?o .}";
The SELECT clause returns the variables ?s, ?p and ?o. The variables are bound to the subject, predicate and object values of each triple that satisfies the WHERE clause. In this case the WHERE clause is unconstrained. The dot (.) in the fourth position signifies the end of the pattern.
Queries must be created by offering the queryString to the AGQueryFactory.create() method. This creates a query object. The query object is passed, in turn, to the AGQueryExecutionFactory.create() method where it is combined with the model for the graph you want to search. The resulting AGQueryExecution object has a method, execSelect(), that runs the query and returns a ResultSet.
AGQuery sparql = AGQueryFactory.create(queryString);
QueryExecution qe = AGQueryExecutionFactory.create(sparql, model);
try {
ResultSet results = qe.execSelect();
The ResultSet is an iterator that gives access to a sequence of bindingSets. Below we illustrate one (rather heavyweight) method for extracting the values from a binding set, indexed by the name of the corresponding column variable in the SELECT clause.
while (results.hasNext()) {
QuerySolution result = results.next();
RDFNode s = result.get("s");
RDFNode p = result.get("p");
RDFNode o = result.get("o");
System.out.println(" { " + s + " " + p + " " + o + " . }");
Starting example3().
{ Alice . }
{ . }
{ . }
{ Bob . }
The best practice is to close the various objects as soon as you finish using them, in order to free resources.
} finally {
qe.close();
}
} finally {
model.close();
}
}
The listStatements() method of the model object provides a simple way to perform unsophisticated queries. This method lets you enter a mix of required values and wildcards, and retrieve all matching triples. (If you need to perform sophisticated tests and comparisons you should use the SPARQL query instead.)
This is the example4() function of JenaTutorialExamples.java. It begins by calling example2() to create a model object and populate the repository with four triples describing Bob and Alice.
public static void example4() throws Exception {
AGModel model = example2(false);
println("\nStarting example4().");
We're going to search for triples that mention Alice, so we have to create an "Alice" resource object to use in the search pattern. Think of this as Alice's URI.
Resource alice = model
.createResource("");
Now we search for triples with Alice's URI in the subject position. The "null" values are wildcards for the predicate and object positions of the triple. The cast to RDFNode indicates that we want the object of the triple to be either a resource URI or a literal value; it also disambiguates this call from the overload that takes a plain string in the object position.
StmtIterator statements = model.listStatements(alice, null,
(RDFNode) null);
The listStatements() method returns a statement iterator object (bound to the variable "statements" in this case). This object can be iterated over, exposing one result statement at a time.
try {
while (statements.hasNext()) {
println(statements.next());
}
This prints out the two matching triples for "Alice."
Starting example4().
[,,]
[,, "Alice"]
At this point it is good form to close the statements and model objects because they occupy memory.
} finally {
statements.close();
model.close();
}
}
The next example, example5(), illustrates some variations on what we have seen so far. The example creates and asserts typed literal values, including language-specific literals.
First, example5() obtains a model object from example2(). Then it clears the repository of all existing triples.
public static void example5() throws Exception {
AGModel model = example2(false);
println("\nStarting example5().");
model.removeAll();
For the sake of coding efficiency, it is good practice to create variables for namespace strings. We'll use this namespace again and again in the following lines.
String exns = "";
The example creates new resources describing Alice and Ted. Apparently Bob was on vacation. We will use these resources in the subject field of the triples.
Resource alice = model.createResource("");
Resource ted = model.createResource(exns + "ted");
These are the four predicates used in the example: age, weight, favoriteColor, and birthdate. They are instantiated as Property objects. We would usually refer to these as predicates in AllegroGraph.
Property age = model.createProperty(exns,"age");
Property weight = model.createProperty(exns, "weight");
Property favoriteColor = model.createProperty(exns, "favoriteColor");
Property birthdate = model.createProperty(exns, "birthdate");
Favorite colors, declared as Literals in English (default) and French.
Literal red = model.createLiteral("Red");
Literal rouge = model.createLiteral("Rouge", "fr");
Age values, declared as INT, LONG, and untyped:
Literal fortyTwoInt = model.createTypedLiteral("42", XSDDatatype.XSDint);
Literal fortyTwoLong = model.createTypedLiteral("42", XSDDatatype.XSDlong);
Literal fortyTwoUntyped = model.createLiteral("42");
Birth date values, declared as DATE and DATETIME types.
Literal date = model.createTypedLiteral("1984-12-06", XSDDatatype.XSDdate);
Literal time = model.createTypedLiteral("1984-12-06T09:00:00", XSDDatatype.XSDdateTime);
Weights, written as floats, but one untyped and the other declared to be a float.
Literal weightUntyped = model.createLiteral("120.5");
Literal weightFloat = model.createTypedLiteral("120.5", XSDDatatype.XSDfloat);
The model object's createStatement() method assembles the elements of a triple, but does not yet add them to the repository. Here are Alice's and Ted's ages assembled into statements. (We gave Ted two age triples because Bob was on vacation. The triples have the same value cast into different types.)
Statement stmt1 = model.createStatement(alice, age, fortyTwoInt);
Statement stmt2 = model.createStatement(ted, age, fortyTwoLong);
Statement stmt3 = model.createStatement(ted, age, fortyTwoUntyped);
The Java Jena API to AllegroGraph Server uses the model.add() method for asserting triples into the repository. It can create triples from statements, or from URIs and literal values, as shown here.
model.add(stmt1);
model.add(stmt2);
model.add(stmt3);
model.add(alice, weight, weightFloat);
model.add(ted, weight, weightUntyped);
model.add(alice, favoriteColor, red);
model.add(ted, favoriteColor, rouge);
model.add(alice, birthdate, date);
model.add(ted, birthdate, time);
The RDF/SPARQL spec is very conservative when matching various combinations of literal values. The match and query statements below illustrate how some of these combinations perform. Note that this loop uses the listStatements() method to retrieve triples. We'll do SPARQL queries in a minute.
for (Literal obj : new Literal[] {null, fortyTwoInt, fortyTwoLong, fortyTwoUntyped, weightFloat, weightUntyped,
red, rouge}) {
println( "\nRetrieve triples matching " + obj + ".");
StmtIterator statements = model.listStatements(null, null, obj);
try {
while (statements.hasNext()) {
println(statements.next());
}
} finally {
statements.close();
}
}
The listStatements() method looks for all triples that have a specific value in the object position. It doesn't care which resource or which predicate are in play. The loop cycles through various typed and untyped Literals to see which triples match each request.
These are the results of the tests in this loop. The first iteration uses "null" as the object value. This is a wildcard value, and matches all the triples in the repository:
Retrieve triples matching null.
[,, "1984-12-06T09:00:00Z"^^]
[,, "1984-12-06Z"^^]
[,, "Rouge"@fr]
[,, "Red"]
[,, "120.5"]
[,, "1.20500006E2"^^]
[,, "42"]
[,, "42"^^]
[,, "42"^^]
[,, "Bob"]
[,,]
[,,]
[,, "Alice"]
What triples match "42" declared as an INT? [fortyTwoInt]
Retrieve triples matching 42^^.
[,, "42"^^]
What triples match "42" declared as a LONG? [fortyTwoLong]
Retrieve triples matching 42^^.
[,, "42"^^]
What triples match "42" untyped? [fortyTwoUntyped]
Retrieve triples matching 42.
[,, "42"]
What triples match "120.5" declared as a FLOAT? [weightFloat]
Retrieve triples matching 120.5^^.
[,, "1.20500006E2"^^]
What triples match "120.5" untyped? [weightUntyped]
Retrieve triples matching 120.5.
[,, "120.5"]
What triples match "Red" as a simple string? [Red]
Retrieve triples matching Red.
[,, "Red"]
What triples match "Rouge" declared as a French string? [Rouge]
Retrieve triples matching Rouge@fr.
[,, "Rouge"@fr]
The next loop builds and evaluates a SPARQL query instead of using listStatements(). It also shows examples of putting typed values into the search criteria without creating Literal objects as an intermediate step. It also shows how to extract individual subject, predicate and object values from each returned triple.
for (String obj : new String[]{"42", "\"42\"", "120.5", "\"120.5\"", "\"120.5\"^^xsd:float",
"\"Rouge\"@fr", "\"Rouge\"", "\"1984-12-06\"^^xsd:date"}) {
println( "\nQuery triples matching " + obj + ".");
String queryString = "PREFIX xsd: <> SELECT ?s ?p ?o WHERE {?s ?p ?o . filter (?o = " + obj + ")}";
AGQuery query = AGQueryFactory.create(queryString);
QueryExecution qe = AGQueryExecutionFactory.create(query, model);
try {
ResultSet results = qe.execSelect();
while (results.hasNext()) {
QuerySolution result = results.next();
RDFNode s = result.get("s");
RDFNode p = result.get("p");
RDFNode o = result.get("o");
println(" " + s + " " + p + " " + o);
}
What triples match "42" (which is an int)? We get both ints and longs.
Query triples matching 42.
"42"^^<>
"42"^^<>
What triples match "\"42\"" (which is a string?)
Query triples matching "42".
42
What triples match "120.5" (a float)?
Query triples matching 120.5.
1.20500006E2^^<>
What triples match "\"120.5\"" (a string)?
Query triples matching "120.5".
120.5
What triples match "\"120.5\"^^xsd:float" (a float)?
Query triples matching "120.5"^^xsd:float.
1.20500006E2^^<>
What triples match "\"Rouge\"@fr" (a French string)?
Query triples matching "Rouge"@fr.
Rouge@fr
What triples match "Rouge" (a string)?
Query triples matching "Rouge".
[No matches. General string fails to match French string.]
In the following example, we use listStatements() to match a DATE object. We have used a DATE literal in the object position of the triple pattern:
println("\nRetrieve triples matching DATE object.");
StmtIterator statements = model.listStatements(null, null, date);
try {
while (statements.hasNext()) {
println(statements.next());
}
} finally {
statements.close();
}
Retrieve triples matching DATE object.
[,, "1984-12-06Z"^^]
Note the string representation of the DATE object in the following query.
StmtIterator statements = model.listStatements(null, null,
model.createTypedLiteral("1984-12-06",XSDDatatype.XSDdate));
Match triples having a specific DATE value.
[,, "1984-12-06Z"^^]
Let's try the same experiment with DATETIME:
StmtIterator statements = model.listStatements(null, null, time);
Retrieve triples matching DATETIME object.
[,, "1984-12-06T09:00:00Z"^^]
And a DATETIME match without using a literal value object:
StmtIterator statements = model.listStatements(null, null,
model.createTypedLiteral("1984-12-06T09:00:00",XSDDatatype.XSDdateTime));
Match triples having a specific DATETIME value.
[,, "1984-12-06T09:00:00Z"^^]
The Java Jena API client can load triples in either RDF/XML format or NTriples format.
The NTriples file contains a graph of resources describing the Kennedy family, the places where they were each born, their colleges, and their professions. A typical entry from that file looks like this:
<> <> "Joseph" .
<> <> "Patrick" .
<> <> "Kennedy" .
<> <> "none" .
<> <> <> .
<> <> "1888" .
<> <> "1969" .
<> <> <> .
<> <> <> .
<> <> <> .
<> <> <> .
<> <> <> .
<> <> <> .
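Each line of an NTriples file holds one statement: three terms followed by a terminating dot. A deliberately simplistic parser sketch, shown with an invented example.org URI and not the loader Jena actually uses, illustrates the shape of the format:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NTriplesLine {
    // Matches a URI reference, a quoted literal, or a blank-node label.
    static final Pattern TERM =
            Pattern.compile("<[^>]*>|\"[^\"]*\"|_:[A-Za-z0-9]+");

    // Split one NTriples statement into its three terms.
    // Real parsers must also handle escapes, datatypes and language
    // tags; this sketch covers only simple URIs and plain literals.
    static List<String> parse(String line) {
        List<String> terms = new ArrayList<>();
        Matcher m = TERM.matcher(line);
        while (m.find()) {
            terms.add(m.group());
        }
        return terms;
    }

    public static void main(String[] args) {
        String line = "<http://example.org/bob> " +
                      "<http://example.org/first-name> \"Joseph\" .";
        System.out.println(parse(line));
    }
}
```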
Note that AllegroGraph can segregate triples into contexts (subgraphs) by treating them as quads. The fourth field contains the URI of a subgraph. The Jena GraphMaker object, however, explicitly creates subgraphs (by creating model objects). Each model object addresses the content of a specific subgraph. There is always an unnamed "default background graph" available, and you may create any number of named subgraphs to complement it. It is important to address your queries to the right graph, of course.
In example6() we use the default background graph from example1(), which is initially empty, plus a second subgraph named "". Note that it is traditional to name subgraphs using appropriate URIs.
public static AGGraphMaker example6() throws Exception {
AGGraphMaker maker = example1(false);
AGModel model = new AGModel(maker.getGraph());
AGModel model_vcards = new AGModel(maker.createGraph(""));
Model addresses the default background graph. Model_vcards addresses a second subgraph where we will load v-card data.
The variables path1 and path2 are bound to the RDF/XML and NTriples files, respectively. You may have to redefine these paths depending on your platform and how you have set up the project. The data files are in the same directory as JenaTutorialExamples.java.
String path1 = "src/tutorial/java-vcards.rdf";
String path2 = "src/tutorial/java-kennedy.ntriples";
Both examples need a base URI as one of the required arguments to the asserting methods:
String baseURI = "";
In the next step we use the read() method to load the vcard triples into the model_vcards subgraph:
model_vcards.read(new FileInputStream(path1), baseURI);
Then we use read() to load the Kennedy family tree into the background graph:
model.read(new FileInputStream(path2), baseURI, "N-TRIPLE");
Now we'll ask AllegroGraph to report on how many triples it sees in the null context and in the #vcards context:
println("After loading, model_vcards contains " + model_vcards.size()
+ " triples in graph '" + model_vcards.getGraph()
+ "'\n and model contains " + model.size()
+ " triples in graph '" + model.getGraph() + "'.");
The output of this report was:
After loading, model_vcards contains 16 triples in graph ''
and model contains 1214 triples in graph 'default-graph'.
Example6() concludes by returning the GraphMaker object so it can be passed on to simplify the creation/load phase of subsequent examples.
Example7() borrows the same triples we loaded in example6(), above, and runs two unconstrained retrievals. It retrieves the GraphMaker object from example6() and uses it to reacquire the model and model_vcards objects. The first retrieval uses model.listStatements() to print out the subject value and subgraph identifier of each triple found in the default background graph. We placed a limit on the output because otherwise it would flood the screen with more than a thousand Kennedy triples.
public static void example7() throws Exception {
AGGraphMaker maker = example6();
AGModel model = new AGModel(maker.getGraph());
AGModel model_vcards = new AGModel(maker.openGraph(""));
println("\nMatch all and print subjects and graph (model)");
StmtIterator statements = model.listStatements();
for (int i = 0; i < 25 && statements.hasNext(); i++) {
Statement stmt = statements.next();
println(stmt.getSubject() + " " + stmt.getModel().getGraph());
}
Match all and print subjects and graph (model)
default-graph
default-graph
default-graph
default-graph
... [and 21 more]
This loop prints out triples from the default graph only, because that is the only graph that model can access.
The following loop performs the same experiment using model_vcards.
println("\nMatch all and print subjects and graph (model_vcards)");
statements = model_vcards.listStatements();
for (int i = 0; i < 25 && statements.hasNext(); i++) {
Statement stmt = statements.next();
println(stmt.getSubject() + " " + stmt.getModel().getGraph());
}
statements.close();
Match all and print subjects and graph (model_vcards)
1
1... [and 11 more]
In this case, the loop prints out only v-card triples from model_vcards. The two odd-looking triples are from "blank" nodes in the v-cards model.
The next examples show how to write triples out to a file in either NTriples format or RDF/XML format. The output of either format may be optionally redirected to standard output (the Java command window) for inspection.
Example example8() begins by obtaining a GraphMaker object from example6(). This means the repository contains Kennedy family tree triples in the default graph. We'll create a new model object to give us access to the default graph:
public static void example8() throws Exception {
AGGraphMaker maker = example6();
AGModel model = new AGModel(maker.getGraph());
To write triples in NTriples format, call model.write(), which dumps all triples. You have to give it an output stream, which could be either a file stream or standard output. You must also tell model.write() which output format you want. In the first case we'll use N-TRIPLE format. The code below gives you the choice of writing to a file or to the standard-output window.
String outputFile = TEMPORARY_DIRECTORY + "temp.nt";
// outputFile = null;
if (outputFile == null) {
println("\nWriting n-triples to Standard Out instead of to a file");
} else {
println("\nWriting n-triples to: " + outputFile);
}
OutputStream output = (outputFile != null) ? new FileOutputStream(
outputFile) : System.out;
model.write(output, "N-TRIPLE");
output.close();
To write triples in RDF/XML format, the process is the same, except that you don't have to tell model.write() which format to use: RDF/XML is the default for this method.
String outputFile2 = TEMPORARY_DIRECTORY + "temp.rdf";
// outputFile2 = null;
if (outputFile2 == null) {
println("\nWriting RDF to Standard Out instead of to a file");
} else {
println("\nWriting RDF to: " + outputFile2);
}
output = (outputFile2 != null) ? new FileOutputStream(outputFile2)
: System.out;
model.write(output);
output.close();
}
The write() method writes out all triples in a subgraph. This provides a convenient means for making local backups of sections of your RDF store.
Finally, if the objective is to write out a filtered set of triples, the following approach may be used. First capture the query results as a StmtIterator. Then add the statements to a new, empty model. Finally, use model.write() to dump all the statements of that model to a file or to standard output.
public static void example9() throws Exception {
AGGraphMaker maker = example6();
AGModel model = new AGModel(maker.getGraph());
StmtIterator statements = model.listStatements(null,RDF.type, (RDFNode)null);
Model m = ModelFactory.createDefaultModel();
m.add(statements);
m.write(System.out);
}
A namespace is that portion of a URI that precedes the local name of a resource.
In the SPARQL query in the example below, we see two qnames, "rdf:type" and "ex:Person". Ordinarily, we would expect to see "PREFIX" declarations in SPARQL that define namespaces for the "rdf" and "ex" nicknames. However, prefixes that have been registered on the model (using setNsPrefix()) are supplied to SPARQL automatically, so the declarations can be omitted. In the example below, we first register the 'ex' prefix, and then submit the SPARQL query. It is legal, although not recommended, to redefine built-in prefixes such as "rdf".
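The expansion a registered prefix performs is simple string substitution. The following sketch uses a plain map and invented namespace strings for illustration; it is not the Jena implementation:

```java
import java.util.Map;

public class QNameExpansion {
    // Prefix -> namespace mappings, as setNsPrefix() would record them.
    // The "ex" namespace here is invented for illustration.
    static final Map<String, String> PREFIXES = Map.of(
            "rdf", "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
            "ex",  "http://example.org/people/");

    // Expand a qname like "ex:alice" into a full URI by replacing
    // the prefix with its registered namespace.
    static String expand(String qname) {
        int colon = qname.indexOf(':');
        String ns = PREFIXES.get(qname.substring(0, colon));
        return ns + qname.substring(colon + 1);
    }

    public static void main(String[] args) {
        System.out.println(expand("ex:alice"));
        // -> http://example.org/people/alice
    }
}
```

This is why the qnames in the query below can match the fully-expanded URIs stored in the triples.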
The example example11() begins by borrowing a GraphMaker object from example1(), and wraps its graph in a model object.
public static void example11() throws Exception {
AGGraphMaker maker = example1(false);
AGModel model = new AGModel(maker.getGraph());
We need a namespace string (bound to the variable exns) to use when generating the alice and person URIs.
String exns = "";
Resource alice = model.createResource(exns + "alice");
Resource person = model.createResource(exns + "Person");
Now we can assert Alice's RDF:TYPE triple.
model.add(alice, RDF.type, person);
model.setNsPrefix("ex", exns);
String queryString =
"SELECT ?s ?p ?o " +
"WHERE { ?s ?p ?o . FILTER ((?p = rdf:type) && (?o = ex:Person) ) }";
AGQuery query = AGQueryFactory.create(queryString);
QueryExecution qe = AGQueryExecutionFactory.create(query, model);
try {
ResultSet results = qe.execSelect();
while (results.hasNext()) {
println(results.next());
}
The output shows the single triple with its fully-expanded URIs. This demonstrates that the qnames in the SPARQL query successfully matched the fully-expanded URIs in the triple.
[s=;p=;o=]
It is worthwhile to briefly discuss performance here. In the current AllegroGraph system, queries run more efficiently if constants appear inside of the "where" portion of a query, rather than in the "filter" portion. For example, the SPARQL query below will evaluate more efficiently than the one in the above example. However, in this case, you have lost the ability to output the constants "" and "". Occasionally you may find it useful to output constants in the output of a 'select' clause; in general though, the above code snippet illustrates a query syntax that is discouraged.
SELECT ?s WHERE { ?s rdf:type ex:person }
SPARQL provides alternatives to the standard SELECT query. Example example13() exercises these alternatives to show how AllegroGraph Server handles them using the Java Jena API.
The example begins by borrowing an AGGraphMaker object from example6(). From the maker object we can generate an AGModel object. This connects to an existing repository that contains vcard and Kennedy data. We'll need to register a Kennedy namespace to make the queries easier to read.
public static void example13() throws Exception {
AGGraphMaker maker = example6();
AGModel model = new AGModel(maker.getGraph());
model.setNsPrefix("kdy", "");
As it happens, we don't need the vcard data this time, so we'll remove it. This is an example of how to delete an entire subgraph (the vcards "context") using Jena:
maker.removeGraph("");
The example begins with a SELECT query so we can see some of the Kennedy resources. Note the use of the qe.exectSelect() method to evaluate the query.
String queryString = "select ?s where { ?s rdf:type kdy:person} limit 5";Note that SELECT returns variable bindings. In this case it returns subject URIs of five people:
AGQuery query = AGQueryFactory.create(queryString);
QueryExecution qe = AGQueryExecutionFactory.create(query, model);
println("\nSELECT some persons:");
try {
ResultSet results = qe.execSelect();
while (results.hasNext()) {
println(results.next());
}
} finally {
qe.close();
}
SELECT some persons:
[s=]
[s=]
[s=]
[s=]
[s=]
The ASK query returns a Boolean, depending on whether the triple pattern matched any triples. In this case we ran two tests; one seeking "John" and the other looking for "Alice." Note that the ASK query uses a different construction method than the SELECT query: qe.execAsk().
queryString = "ask { ?s kdy:first-name 'John' } ";
query = AGQueryFactory.create(queryString);
qe = AGQueryExecutionFactory.create(query, model);
try {
println("\nASK: Is there anyone named John? " + qe.execAsk());
} finally {
qe.close();
}
queryString = "ask { ?s kdy:first-name 'Alice' } ";
query = AGQueryFactory.create(queryString);
qe = AGQueryExecutionFactory.create(query, model);
try {
println("\nASK: Is there anyone named Alice? " + qe.execAsk());
} finally {
qe.close();
}
The output of this loop is:
ASK: Is there anyone named John? true ASK: Is there anyone named Alice? false
The CONSTRUCT query contructs new triples out of the matching values in the query. The point is that the query can bind variables from existing triples and then "construct" new triples by recombining the values. This query constructs new triples that use the kdy:has-grandchild predicate.
queryString = "construct {?a kdy:has-grandchild ?c}" +
" where { ?a kdy:has-child ?b . " +
" ?b kdy:has-child ?c . }";
query = AGQueryFactory.create(queryString);
qe = AGQueryExecutionFactory.create(query, model);
try {
Model m = qe.execConstruct();
Note that the CONSTRUCT query qe.execConstruct(), which returns a new model object. This model contains new triples that have not yet been added to the triple store. Use model.add() to put them in the store. Note that we are adding one model to another:
model.add(m); // add new triples to the store
} finally {
qe.close();
}
The DESCRIBE query returns another model, meaning all triples of the matching resources. It uses qe.execDescribe(). In this case we asked SPARQL to describe one grandparent and one grandchild. (This confirms that the kdy:has-grandchild triples successfully entered the triple store.)
queryString = "describe ?s ?o where { ?s kdy:has-grandchild ?o . } limit 1";
query = AGQueryFactory.create(queryString);
qe = AGQueryExecutionFactory.create(query, model);
try {
Model m = qe.execDescribe();
println("\nDescribe one grandparent and one grandchild:");
m.write(System.out);
} finally {
qe.close();
}
Printing the result model to standard output reveals that the "model" is expressed as RDF/XML. The actual output is too lengthy to show here, but one block confirms that the has-grandchild triples are present as expected:
<rdf:Description rdf:
<j.0:has-grandchild rdf:
<j.0:has-grandchild rdf:
<j.0:has-grandchild rdf:
<j.0:has-grandchild rdf: ...
SPARQL Update queries can also be evaluated to modify the model/repository. A SPARQL Update can be executed as follows:
queryString = "insert data { kdy:person4 kdy:nickname 'Jack'}"; query = AGQueryFactory.create(queryString); qe = AGQueryExecutionFactory.create(query, model); try { qe.execUpdate(); } finally { qe.close(); }
The great promise of the semantic web is that we can use RDF metadata to combine information from multiple sources into a single, common model. The great problem of the semantic web is that it is so difficult to recognize when two resource descriptions from different sources actually represent the same thing. This problem arises because there is no uniform or universal way to generate URIs identifying resources. As a result, we may create two resources, Bob and Robert, that actually represent the same person.
This problem has generated much creativity in the field. One way to approach the problem is through inference. There are certain relationships and circumstances where an inference engine can deduce that two resource descriptions actually represent one thing, and then automatically merge the descriptions. AllegroGraph's inference engine can be turned on or off each time you run a query against the triple store. (Note that inference is turned off by default, which is the opposite of standard Jena behavior.)
In example example19(), we will create four resources: Bob, with son Bobby, and Robert with daughter Roberta.
First we have to set up the data. We begin by generating four new resources.
public static void example19() throws Exception {The next step is to create URIs for the predicates we'll need (name and fatherOf), plus one for the Person class.
AGGraphMaker maker = example1(false);
AGModel model = new AGModel(maker.getGraph());
Resource robert = model.createResource("");
Resource roberta = model.createResource("");
Resource bob = model.createResource("");
Resource bobby = model.createResource("");
// create name and child predicates, and Person class.The names of the four people will be literal values.
Property name = model.createProperty("");
Property fatherOf = model.createProperty("");
Resource person = model.createResource("");
// create literal values for names
Literal bobsName = model.createLiteral("Bob");
Literal bobbysName = model.createLiteral("Bobby");
Literal robertsName = model.createLiteral("Robert");
Literal robertasName = model.createLiteral("Roberta");
Robert, Bob and the children are all instances of class Person. It is good practice to identify all resources by an rdf:type link to a class.
// Robert, Bob, and children are peopleThe four people all have literal names.
model.add(robert, RDF.type, person);
model.add(roberta, RDF.type, person);
model.add(bob, RDF.type, person);
model.add(bobby, RDF.type, person);
// They all have names.Robert and Bob have links to the child resources:
model.add(robert, name, robertsName);
model.add(roberta, name, robertasName);
model.add(bob, name, bobsName);
model.add(bobby, name, bobbysName);
// robert has a child
model.add(robert, fatherOf, roberta);
// bob has a child
model.add(bob, fatherOf, bobby);
Now that the basic resources and relations are in place, we'll seed the triple store with a statement that "Robert is the same as Bob," using the owl:sameAs predicate. The AllegroGraph inference engine recognizes the semantics of owl:sameAs, and automatically infers that Bob and Robert share the same attributes. Each of them originally had one child. When inference is turned on, however, they each have two children.
Note that SameAs does not combine the two resources. Instead it links each of the two resources to all of the combined children. The red links in the image are "inferred" triples. They have been deduced to be true, but are not actually present in the triple store.
This is the critical link that tells the inference engine to regard Bob and Robert as the same resource.
// Bob is the same person as Robert
model.add(bob, OWL.sameAs, robert);
This is a simple listStatements() search asking for the children of Robert, with inference turned OFF. This is the default behavior of a model.
println("\nChildren of Robert, inference OFF");
printRows(model.listStatements(robert, fatherOf, (RDFNode) null));
The search returns one triple, which is the link from Robert to his direct child, Roberta.
Children of Robert, inference OFF
[,,]
In the AllegroGraph Java API, we can take one graph and turn inference on and off at will. In the Java Jena API, however, a model either permits inferencing or it doesn't. Therefore, to compare queries with and without inferencing, we'll need a second model. Fortunately, both "models" can address the same graph. The second model is not a duplicate graph, but a second interface to the same graph.
First we need a reasoning engine, which is an instance of AGReasoner. Then we combine the reasoner with the old model to create a new, inferencing model (called InfModel).
println("\nChildren of Robert, inference ON");
AGReasoner reasoner = new AGReasoner();
InfModel infmodel = new AGInfModel(reasoner, model);
The query is the same as before, but the results are different:
printRows(infmodel.listStatements(robert, fatherOf, (RDFNode)null));
Children of Robert, inference ON
[,,]
[,,]
Note that with inference ON, Robert suddenly has two children because Bob's child has been included. Also note that the final triple (robert fatherOf bobby) has been inferred. The inference engine has determined that this triple logically must be true, even though it does not appear in the repository.
We can reuse the Robert family tree to see how the inference engine can deduce the presence of inverse relationships.
Up to this point in this tutorial, we have created new predicates simply by creating a URI and using it in the predicate position of a triple. This time we need to create a predicate resource so we can set an attribute of that resource. We're going to declare that the hasFather predicate is the owl:inverseOf the existing fatherOf predicate.
The first step is to remove the owl:sameAs link, because we are done with it.
model.remove(bob, OWL.sameAs, robert);
We'll need a URI for the new hasFather predicate:
Property hasFather = model.createProperty("");
This is the line where we create a predicate resource. It is just a triple that describes a property of the predicate. The hasFather predicate is the inverse of the fatherOf predicate:
model.add(hasFather, OWL.inverseOf, fatherOf);
First, we'll search for hasFather triples, leaving inference OFF to show that there are no such triples in the repository:
println("\nPeople with fathers, inference OFF");
printRows(model.listStatements(null, hasFather, (RDFNode)null));
People with fathers, inference OFF
Now we'll turn inference ON. This time, the AllegroGraph inference engine discovers two "new" hasFather triples.
println("\nPeople with fathers, inference ON");
printRows(infmodel.listStatements(null, hasFather, (RDFNode)null));
People with fathers, inference ON
[,,]
[,,]
Both of these triples are inferred by the inference engine.
Invoking inference using the rdfs:subPropertyOf predicate lets us "combine" two predicates so they can be searched as one. For instance, in our Robert/Bob example, we have explicit fatherOf relations. Suppose there were other resources that used a parentOf relation instead of fatherOf. By making fatherOf a subproperty of parentOf, we can search for parentOf triples and automatically find the fatherOf triples at the same time.
First we should remove the owl:inverseOf relation from the previous example. We don't have to, but it keeps things simple.
model.remove(hasFather, OWL.inverseOf, fatherOf);
We'll need a parentOf URI to use as the new predicate. Then we'll add a triple saying that fatherOf is an rdfs:subPropertyOf the new predicate, parentOf:
Property parentOf = model.createProperty("");
model.add(fatherOf, RDFS.subPropertyOf, parentOf);
If we now search for parentOf triples with inference OFF, we won't find any. No such triples exist in the repository.
println("\nPeople with parents, inference OFF");
printRows(model.listStatements(null, parentOf, (RDFNode)null));
People with parents, inference OFF
With inference ON, however, AllegroGraph infers two new triples:
println("\nPeople with parents, inference ON");
printRows(infmodel.listStatements(null, parentOf, (RDFNode)null));
People with parents, inference ON
[,,]
[,,]
The fact that two fatherOf triples exist means that two correponding parentOf triples must be valid. There they are.
Before setting up the next example, we should clean up:
model.remove(fatherOf, RDFS.subPropertyOf, parentOf);
When you declare the domain and range of a predicate, the AllegroGraph inference engine can infer the rdf:type of resources found in the subject and object positions of the triple. For instance, in the triple <subject, fatherOf, object> we know that the subject is always an instance of class Parent, and the object is always an instance of class Child.
In RDF-speak, we would say that the domain of the fatherOf predicate is rdf:type Parent. The range of fatherOf is rdf:type Child.
This lets the inference engine determine the rdf:type of every resource that participates in a fatherOf relationship.
We'll need two new classes, Parent and Child. Note that RDF classes are always capitalized, just as predicates are always lowercase.
Resource parent = model.createResource("");
Resource child = model.createResource("");
Now we add two triples defining the domain and rage of the fatherOf predicate:
model.add(fatherOf, RDFS.domain, parent);
model.add(fatherOf, RDFS.range, child);
Now we'll search for resources of rdf:type Parent. The inference engine supplies the appropriate triples:
println("\nWho are the parents? Inference ON.");
printRows(infmodel.listStatements(null, RDF.type, parent));
Who are the parents? Inference ON.
[,,]
[,,]
Bob and Robert are parents. Who are the children?
println("\nWho are the children? Inference ON.");
printRows(infmodel.listStatements(null, RDF.type, child));
Who are the children? Inference ON.
[,,]
[,,]
Bobby and Roberta are the children. | https://franz.com/agraph/support/documentation/current/java-tutorial/jena-tutorial.html | CC-MAIN-2018-43 | refinedweb | 7,360 | 51.55 |
After such a long time since it was announced, Canon's first mirrorless EOS camera --- Canon EOS M has finally got in hand.. The only problem, as many Canon DSLRs may have, is the incompatibility which will jump out when you want to import Canon EOS M 1080p MOV to Final Cut Pro 7 on Mac.
Obviously, what matters is not the MOV fomat, since it is fine while playing with QT player on Mac, but the H.264 codec. It is a highly compressed codec, great for playing/streaming videos. But as for editing H.264 MOV in FCP 7, it is not such a suitable choice. In order to make Canon EOS M EOS M and FCP 7 users should be developed with all the five options. Here recommended the best H.264 MOV to Apple ProRes Converter for Mac in that.
Four-step Guide for transcoding H.264 MOV ingest/edit H.264 MOV files into FCP X, iMovie, FCE, Adobe Premiere Pro, Adobe Premiere Elements, Adobe After Effects, Avid Media Composer, etc. If you are interested, please link to Brorsoft's Video Converter for Mac to get more info.
import Canon EOS M 1080p MOV to Final Cut Pro 7, editing H.264 files in FCP, importing H.264 MOV to FCP, make Canon EOS M files editable in FCP, transfer MOV files to FCP, Canon EOS M H.264 MOV in FCP 7, converting H.264 files to Apple ProRes, Mac converting H.264 MOV to ProRes, put H.264 MOV files in FCP, transcoding H.264 MOV footages for FCP 6, convert H.264 MOV to ProRes 422, H.264 to ProRes conversion, best H.264 to Apple ProRes converter, best video converter for FCP, H.264 MOV to Apple ProRes, how to import H.264 MOV files to FCP | http://www.brorsoft.com/how-to/import-canon-eos-m-1080p-mov-files-final-cut-pro7.html | CC-MAIN-2016-22 | refinedweb | 306 | 85.79 |
This guide assumes you've already setup Comatose (see Installation).
Comatose supports a hierarchal arrangement of pages. A page tree, in other words. Every page has the following attributes:
Provided by acts_as_tree:
The slug is generally a URI accessible name based on the title. For example, if your page is titled "My Favorite Food" the slug will be my-favorite-food.
The full_path is the page's full path from the root node.
The page tree's root node is the 'Home Page', by default, and always has an empty full_path.
Now that you have the Comatose plugin installed and ready for action, it needs to know which 'root' path, or base URI, to serve content from. That mapping is done in your routes.rb file. It's as simple as adding the following as the last entry (even after the map.connect ':controller/:action/:id' route):
map.comatose_admin
map.comatose_root ''
With this in place, Comatose will start serving pages directly from the root of the application. Since it's the last route, if there aren't any previous matches (to the other routes you've defined for your application), it will pass the URI info on to the ComatoseController.
Note: Don't forget to remove the default Rails index.html from app/public/, if it's there it will be used by the server instead of the comatose root page!
This is the bare minimum for integration. At this point, you should be able to visit and start adding pages.
You can map multiple roots to your application by simply calling the map.comatose_root once for each location.
map.comatose_root 'pages'
map.comatose_root 'content'
That's nifty, but it doesn't seem to be all that useful like that. Now, if you can map a URI root path to a sub-page of your page tree, then it becomes handy. Here's how you'd do that:
map.comatose_root 'help', :index=>'help'
map.comatose_root 'content', :index=>'other-content'
You can configure how Comatose works per root path. Here are the options you can specify when calling map.comatose_root:
So, perhaps we have a page in our tree that is the parent node for all of our application help. And perhaps we want it styled differently than the other comatose pages. To do that, we'd just create a new Rails layout and tell Comatose to use that layout when rendering all help pages:
map.comatose_root 'help', :index=>'application-help', :layout=>'help_layout'
map.comatose_root ''
When you don't send in the :index, it will default to the root node. Also, Comatose will use a generic layout that's been included in the plugin if you don't specify a :layout.
You can add named routes for Comatose mount points. You use map.comatose_* where "*" is anything other than "root".
map.comatose_help 'help', :index=>'application-help', :layout=>'help_layout'
This will create a comatose_help named routed that you can use elsewhere in your application. You can specify a child page of a named route like this:
<%= link_to "Account Help", comatose_help_url(:page=>'account') %>
The page body gets processed, then filtered. Processing generates content based on dynamic tags (in Liquid, or ERB). Filtering takes the content and converts it into HTML.
Text filters convert the text into HTML. Comatose comes pre-configured with support for using Textile, Markdown, Markdown+SmartyPants, or RDoc.
If there is another text filter type you'd prefer beyond the default filters, you can write your own text filter as simply as putting this in your environment.rb file:
# SMARTYPANTS
TextFilters.define "SmartyPants" do |text|
require 'rubypants'
def render_text(text)
RubyPants.new(text).to_html
end
end
If you try to require a library that isn't installed, the TextFilter will not enable the filter. Therefore the filter drop-down on the edit page form will never show a filter that throws an exception when being loaded.
For this to work properly, you have to require the library within the TextFilters.define block.
The page body is also is run through Liquid or ERb, so you can get kinda fancy. Here are all the items you have access to in the processing context:
Comatose allows rendering of pages inline from your application view, it's used just like rendering a partial:
<%= render :comatose=>'about' %>
Where 'about' is the page path. By default it will return a failure message if you send it a path it can't find. You can tell it not to by sending it the :silent flag, like this:
<%= render :comatose=>'site/navigation', :silent=>true %>
Just like partial, inline rendering supports sending a :locals hash, allowing you to send parameters to the comatose page being rendered:
<%= render :comatose=>'welcome', :locals=>{ :username=>'USERNAME' } %>
The parameters are accessible from the page processor just like a page attribute. For example:
h1. Welcome
Hello, {{ username }}
Comatose is designed to be as safe as possible. By default it removes access of destructive ActiveRecord methods, and it doesn't allow access into your application either. What does that mean? It means, by default, you can't access Rails' helpers, or your application helpers from within the page's content.
However, there are times when it's useful to provide a page access to application data. To do that you create a ComatoseDrop. If you're familiar with Liquid's Drop, then you'll feel right at home. Here's an example of simple ComatoseDrop:
Comatose.define_drop 'news' do
def latest_headlines
News.find(:all, :conditions=>['created_on > ?', 2.weeks.ago]).collect {|n| n.title }
end
end
Now, within a comatose page it can be referenced like this:
h2. Latest Headlines
{% for headline in news.latest_headlines %}
* {{ headline }}
{% endfor %}
Viola! That's it.
Before version 0.8, Comatose had few options that you could set via the Comatose::Options class. For everything else, it was expected that you would override methods on the ComatoseController and ComatoseAdminController classes. However, newer versions of Comatose support a more advanced configuration system that lets you configure just about everything in one place.
You'll configure Comatose in your environment.rb file. Here's an example of the configuration block:
Comatose.configure do |config|
# Sets the text in the Admin UI's title area
config.admin_title = "Site Content"
config.admin_sub_title = "Content for the rest of us..."
end
See "ConfigurationSettings" for a complete listing of the configuration options. Following are a few of the highlights.
At the top of the comatose admin, it says 'Comatose' with a sub-title of 'The Micro CMS'. You can change these values without having to do a complete re-skin by defining these settings:
config.admin_title = "My App's CMS"
config.admin_sub_title = "Because size does matter..."
You can tell Comatose what content-type to use. By default, it's set to use 'utf-8'. This should be fine for most uses, but if you are developing an application in a different charset, you can set it like this:
config.content_type = 'iso-8859-1'
You can also set the default level of expansion on the page tree like this:
config.default_tree_level = 3
By default, it's set to 2 -- which shows the first two levels of pages.
Some configuration settings allow you to attach a block of code to them. You can use these for returning dynamic values.
You can define modules to include in the ComatoseController by adding to the list of config.includes. You will have access to the module's code from within the setting blocks and the layout.
Use config.admin_includes for the ComatoseAdminController.
If you need access to certain helpers in the layout(s) Comatose uses, you can add to config.helpers or config.admin_helpers.
Note: These do not add methods, or tags, to a page's content. Just to the layout.
You probably don't want just anybody to add, edit, or delete your comatose pages (Aw CRUD!). The ComatoseAdminController calls an #authorize method for every action as a before_filter. That #authorize method then looks for the config.admin_authorization block and executes it.
So by defining the config.admin_authorization setting you can lock down access to the comatose admin.
Comatose.configure do |config|
# Includes AuthenticationSystem in the ComatoseAdminController
config.admin_includes << :authenticated_system
# Calls :login_required as a before_filter
config.admin_authorization = :login_required
end
Without modification, Comatose will set the page author as the REMOTE_ADDR sent by the browser -- usually an IP address. If you want it to be something else, it's as simple as defining the config.admin_get_author setting. If you've implemented authentication, it could look something like this:
Comatose.configure do |config|
# Includes AuthenticationSystem in the ComatoseAdminController
config.admin_includes << :authenticated_system
# Returns the author name (login, in this case) for the current user
config.admin_get_author do
current_user.login
end
end
Comatose comes out-of-the-box with a serviceable administration UI. But if you want to modify it so that it will match your application's look-n-feel, it's easy to accomplish because there's a rake task available to help you get started.
$ rake comatose:admin:customize
When you run the task, it will copy the views and layouts it uses into your application folders (app/views/comatose_admin and app/views/layouts/comatose_admin.rhtml) folder, giving you total control over how it looks and behaves.
Following is a list of all the files it will copy into your application folder structure:
app/
views/
comatose_admin/ <- Admin views
_form.rhtml
_page_list_item.rhtml
edit.rhtml
index.rhtml
new.rhtml
layouts/
comatose_admin.rhtml <- Admin UI layout
public/
javascripts/
comatose_admin.js <- Admin UI javascript
stylesheets/
comatose_admin.css <- Admin UI css
You can hide some of the 'meta' information fields in the admin if you'd like by adding the string names of the fields to hide in the configuration block:
config.hidden_meta_fields << 'keywords'
The supported meta fields that you can hide are:
Page caching is enabled, by default, for comatose pages. Pages will only expire when you edit/delete them from the comatose admin. To remove all cached pages use the 'Clear Page Cache' command in the admin.
You can override page caching per mount point by sending :use_cache=>'false'. It can also be overridden globally by using the config setting:
config.disable_caching = true
Versions 0.5+ support fragment caching of inline rendered content. You can instruct comatose not to use the fragment cache by sending :use_cache=>false like this:
<%= render :comatose=>'path', :use_cache=>false %>
Oh, and be sure to set ActionController::Base.fragment_cache_store in your environment.rb file:
ActionController::Base.fragment_cache_store = :file_store, File.join(RAILS_ROOT, 'tmp', 'cache', 'fragments')
Note: Caching will automatically be disabled if you send page parameters via the :locals hash.
It's possible to limit the view of a user to a certain sub-branch, or sub-branches, of the page hierarchy. You define the config.admin_get_root_page setting to return a ComatosePage object, or an Array of ComatosePages.
This example defines both the config.admin_authorization and the config.admin_get_root_page settings:
Comatose::Page.root
end
end
end
In your configuration block, set the following:
config.default_processor = :erb
Now it will use ERB for text processing instead of Liquid. There are a few minor differences in the context to keep in mind. For example, instead of using page.has_keyword.key, you can use the more ruby-like page.has_keyword? key.
A lot of times you'll create pages in development that you want to transfer to production without having to do the old copy-n-paste dance. To help accommodate this, comatose comes with two rake tasks just for this purpose.
By running:
$ rake comatose:data:export
You will get a db/comatose-pages.yml file with all the pages in your active database.
The FROM environment variable is the page path starting point. It will only export the pages from the page at path ENV[FROM] down. If you don't specify FROM, it default to the homepage ''.
$ rake comatose:data:export FROM=faq
You can specify the output file if you don't want to use db/comatose.yml or you want to export multiple branches by defining the TO_FILE environment variable:
$ rake comatose:data:export TO_FILE=db/other-pages.yml
You can, of course, mix and match these:
$ rake comatose:data:export FROM=site-help TO_FILE=db/help-pages.yml
The import process is just like exporting...
$ rake comatose:data:import
This loads the pages from db/comatose-pages.yml into your active database.
To import somewhere other than the page tree root, you can specify the TO environment variable:
$ rake comatose:data:import TO=faq
You can also specify a page.yml file other than the default by setting the FROM_FILE environment variable:
$ rake comatose:data:import TO=help FROM_FILE=db/help-pages.yml
Keep in mind that task loads environment.rb, so you'll probably want to specify the RAILS_ENV like this:
$ RAILS_ENV=development rake comatose:data:export
$ RAILS_ENV=production rake comatose:data:import
Or...
$ RAILS_ENV=development rake comatose:data:export FROM=help
$ RAILS_ENV=production rake comatose:data:import TO=help
If Using Authentication, this will enable application session key to be shared by comatose
ActionController::Base.session_options[:session_key] = '_my_unique_app_session_id'
Example config for including application helpers within comatose
Comatose.configure do |config|
config.helpers << ApplicationHelper
end
access_denied
end
end
end
Documentation only has one parameter for define, it looks like it now has 2:
#
# MARKDOWN
#
TextFilters.define :markdown, "Markdown" do
require 'bluecloth'
def render_text(text)
BlueCloth.new(text).to_html
end
def create_link(title, url)
"[#{title}](#{url})"
end
end
If Using Authentication, this will enable application session key to be shared by comatose
Example config for including application helpers within comatose:
Documentation only has one parameter for define, it looks like it now has 2: | http://code.google.com/p/comatose-plugin/wiki/GettingStartedGuide | crawl-002 | refinedweb | 2,265 | 58.99 |
Energy efficiency
This little light of mine
Investments in efficiency are getting more attention
Readers' comments
It is one thing to provide information. It is another thing to get consumers to act on the information.
Smart metering of electricity is possible, but even easier and fail-safe is simple time-shifting of power consumption: no fancy-pants app, no algorithm, no Bluetooth wireless sensor, no feedback computer loops.
Energy saving and efficiency are such an economic "no-brainer" that it is staggering they are so neglected. In the UK we are going to spend hundreds of billions of pounds to generate 20% of our electricity from renewable resources, unless an uncommon attack of common sense breaks out in Government.
There are companies out there saving 17% of heating and air conditioning bills simply by managing the systems more sensibly with no hardware. That the turnover of the company I have in mind amounts to the grand total of £6 million in five years and represents a payback to its customers of about that amount each year demonstrates how neglected this field is.
And that is only one small firm in the energy saving field. My estimate is that the saving on carbon output in the environment could be reduced well beyond the 20% renewables target for a fraction of the cost of that Renewables Target on the promise of a few small firms alone.
Here's an idea that can save energy and money. If all households that can hang out their laundry to dry instead of using the dryer, many millions of kilowatt-hours of energy could be saved. In winter, hanging clothes to dry in the basement provides needed humidity. Even in condos or townhouses, people could use drying racks available for minimal cost.
And, if men turned off the hot water faucet while shaving instead of letting it run and using as much hot water as one would use for a shower, more energy could be saved.
Oh, well, here in Europe many have been on smart meters for many years. So what's the news?
And can somebody at TE explain to me why I should buy their magazine that has news-that-is-not-news?
Yes, it is quite amazing how much 'low-hanging fruit' there is in energy conservation. With relatively little effort and virtually no effect on living standards an individual can reduce energy consumption in a household by anywhere from 20-50% or more. I have added some insulation in my attic, hung some insulated curtains, a programmable thermostat, compact fluorescent light bulbs, low flow showerheads and a few other things, and I use about half the energy my neighbors do.
Low flow shower heads are evil.
I think they are required by regulation in the UK - or at the very least, every shower I have used in this country has been of the low flow variety. They are certainly required by regulation in New York.
Far less effective, far less enjoyable and completely impractical.
Efficiency is the increase of the output/ input ratio. Where blind pursuit of reduction in the denominator completely undermines the numerator, the outcome is reduced efficiency and reduced living standards.
Get rid of oppressive regulation. Let us have legal showers. I prefer to cycle to and from work, but I want to have a decent shower after each journey.
"The results have been impressive. By the end of this year nearly every household in Oklahoma will have been fitted with a smart meter ... This has made Oklahoma an unlikely leader in the booming business of energy efficiency."
This entire paragraph is nonsense. It makes it seem like the fancy new meters save energy when, in and of themselves, they're just a sinkhole for dollars. Tell us more about what OG&E did to a) pay for the smart meter installation program and b) manage energy consumption enough to avoid the $1B cost of new generation?
100% agree - shifting loads off-peak should be rewarded by the utility in the form of lower off-peak rates.
A power meter on each house will only report back to the utility what the household is using (for billing purposes I suspect). I hope the meters will be measuring each circuit separately so the householder can see where they're wasting energy and take steps to improve. However if the household doesn't understand energy efficiency and what to look out for, this effort might be wasted. They should offer free online energy efficiency training. There is a good website offering this now and all the courses are free, it's called MyEnergyUniversity....
Having lived in France I know water is rather expensive there (perhaps even commensurate with the cost of providing it), but here in the US in many locations water is provided at a subsidized cost. Perhaps if users were paying the full cost of their water they would find low-flow shower heads less evil.
The biggest disincentive from companies investing in energy efficiency is corporation tax.
Most businesses have finance costs in the region of 8%. They must make an expected annual post tax return of 8% on an investment, for that investment to make business sense.
Say the proposal is an energy efficiency investment (say, new lighting, new insulation or a more efficient ventilation system) that will be depreciated over two years. (1.08^2) = 1.166, and so we need a 16.6% post tax return over two years.
In the US though, we would need to pay 39% corporation tax on the pretax return (in the UK, 24%). ( 16.6 * (1/ (1-0.39) ) ) = 27.2%, and so we need a 27.2% pretax return on our two year efficiency investment to generate the necessary 16.6% posttax for covering finance costs.
In the UK, it would be ( 16.6 * (1/ (1-0.24) ) ) = 21.8%.
The corporation tax wedge is a very strong disincentive from making simple and economically beneficial efficiency investments.
If we want companies to participate in efficiency savings just so much as (or even more than) households, we must stop taxing investments in capital that would achieve those savings.
Far better to eliminate corporation tax, and instead increase the higher rates of income tax.
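The hurdle-rate arithmetic above can be checked with a short calculation (the 8% finance cost and the 39%/24% tax rates are the commenter's figures; tiny differences from the quoted 27.2%/21.8% are rounding):

```python
def required_pretax_return(finance_rate, years, tax_rate):
    """Pre-tax % return needed over `years` to cover a post-tax
    hurdle of `finance_rate` per year, given corporation tax."""
    posttax = ((1 + finance_rate) ** years - 1) * 100  # ~16.6% over 2 years at 8%
    return posttax / (1 - tax_rate)                    # gross up by the tax wedge

us = required_pretax_return(0.08, 2, 0.39)  # roughly 27.3% pre-tax needed in the US
uk = required_pretax_return(0.08, 2, 0.24)  # roughly 21.9% pre-tax needed in the UK
```

With no corporation tax at all, the required pre-tax return falls back to the 16.6% post-tax hurdle, which is the commenter's point.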
* clarification. 20% VAT is only applied to the retail mark up - and not to the value at import.
The Suzuki Alto is the definition of efficiency:
- energy efficiency. This thing does 53.5 mpg (US). If every private car in the US did this, the US would need 40% less road fuel.
- capital efficiency. It costs 6,600 GBP new in the UK ($10,600), and that's already including our import tariffs and 20% VAT (sales tax applied before list price, essentially).
- it's actually a car. It does 96 mph and has all mod cons. And Suzuki parts are cheap, so it won't cost much in maintenance.
Driving can be affordable for the masses, even with $10/ gallon (even for a two-way 30 mile commute, $12 would hardly break the bank).
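That claim checks out on the back of an envelope, assuming "two-way 30 mile commute" means 60 miles a day:

```python
def daily_fuel_cost(miles, mpg, price_per_gallon):
    """Daily fuel spend for a given daily driving distance."""
    return miles / mpg * price_per_gallon

cost = daily_fuel_cost(60, 53.5, 10.0)  # about $11.2 a day at $10/gallon
```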
Efficiency is meaningful and has great potential. We can live the dream of freedom, even if US oil consumption falls by 40%.
Just wait until large Chinese manufacturers start turning out reliable cars. With international competition hotting up, capital costs are going to be driven ever more remorselessly downwards (and efficiency upwards, if that is what the market demands). Let's hope that our governments get out of the way and eliminate tariffs.
BRING BACK THE PACE PROGRAM!
It would be an enormous boon to residential energy efficiency.
Introduction to News Sentiment Analysis with Eikon Data APIs - a Python example
This article will demonstrate how we can conduct a simple sentiment analysis of news delivered via our new Eikon Data APIs. Natural Language Processing (NLP) is a big area of interest for those looking to gain insight and new sources of value from the vast quantities of unstructured data out there. The area is quite complex and there are many resources online that can help you familiarise yourself with this very interesting area. There are also many different packages that can help you as well as many different approaches to this problem. Whilst these are beyond the scope of this article - I will go through a simple implementation which will give you a swift enough introduction and practical codebase for further exploration and learning.
Pre-requisites:
Refinitiv Eikon / Workspace with access to new Eikon Data APIs
Python 2.x/3.x
Required Python Packages: eikon, pandas, numpy, beautifulsoup, textblob, datetime
Required corpora download: >>>python -m textblob.download_corpora (this is required by the sentiment engine to generate sentiment)
Introduction
NLP is a field which enables computers to understand human language (voice or text). This is quite a big area of research and a little enquiry on your part will furnish you with the complexities of this problem set. Here we will be focussing on one application of this called Sentiment Analysis. In our case we will be taking news articles (unstructured text) for a particular company, IBM, and we will attempt to grade this news to see how positive, negative or neutral it is. We will then try to see if this news has had an impact on the share price of IBM.
To do this really well is a non-trivial task, and most universities and financial companies will have departments and teams looking at this. We ourselves provide machine-readable news products with News Analytics (such as sentiment) over our Refinitiv Real-Time platform in real time at very low latency - these products are essentially consumed by algorithmic applications as opposed to humans.
We will try to do a similar thing as simply as possible to illustrate the key elements - our task is significantly eased by not having to do this in a low latency environment. We will be abstracting most of the complexities to do with the mechanics of actually analysing the text to various packages. You can then easily replace the modules such as the sentiment engine etc to improve your results as your understanding increases.
So let's get started. First let's load the packages that we will need to use and set our app_id.
In [1]:
import eikon as ek
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
from textblob import TextBlob
import datetime
from datetime import time
import warnings
warnings.filterwarnings("ignore")
ek.set_app_key('YOUR APP KEY HERE')
There are two API calls for news:
get_news_headlines : returns a list of news headlines satisfying a query
get_news_story : returns a HTML representation of the full news article
We will need to use both - thankfully they are really straightforward to use. We will need to use the get_news_headlines API call to request a list of headlines. The first parameter for this call is a query. You don't really need to know this query language as you can generate it using the News Monitor App (type NEWS into the Eikon search bar) in Eikon.
You can see here I have just typed in 2 search terms, IBM, for the company, and, English, for the language I am interested in (in our example we will only be able to analyse English language text - though there are corpora, packages, methods you can employ to target other languages - though these are beyond the scope of this article). You can of course use any search terms you wish.
After you have typed in what you want to search for - we can simply click in the search box and this will then generate the query text which we can then copy and paste into the API call below. Its easy for us to change logical operations such as AND to OR, NOT to suit our query.
So the line of code below gets us 100 news headlines for IBM in English prior to 4th Dec 2017, and stores them in a dataframe, df, for us.
In [2]:
df = ek.get_news_headlines('R:IBM.N AND Language:LEN', date_to = "2017-12-04", count=100)
df.head()
Out[2]:
I will just add 3 new columns which we will need to store some variables in later.
In [3]:
df['Polarity'] = np.nan
df['Subjectivity'] = np.nan
df['Score'] = np.nan
So we have our frame with the most recent 100 news headline items. The headline is stored in the text column, and the storyID, which we will now use to pull down the actual articles themselves, is stored in the storyID column.
We will now iterate through the headline dataframe and pull down the news articles using the second of our news API calls, get_news_story. We simply pass the storyID to this API call and we are returned a HTML representation of the article - which allows you to render them nicely etc - however for our purposes we want to strip the HTML tags etc out and just be left with the plain text - as we dont want to analyse HTML tags for sentiment. We will do this using the excellent BeautifulSoup package.
Once we have the text of these articles we can pass them to our sentiment engine which will give us a sentiment score for each article. So what is our sentiment engine? We will be using the simple TextBlob package to demo a rudimentary process to show you how things work. TextBlob is a higher level abstraction package that sits on top of NLTK (Natural Language Toolkit) which is a widely used package for this type of task.
NLTK is quite a complex package which gives you a lot of control over the whole analytical process - but the cost of that is complexity and required knowledge of the steps invloved. TextBlob shields us from this complexity, but we should at some stage understand what is going on under the hood. Thankfully there is plenty of information to guide us in this. We will be implementing the default PatternAnalyzer which is based on the popular Pattern library though there is also a NaiveBayesAnalyzer which is a NLTK classifier based on a movie review corpus.
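To make the idea behind a Pattern-style analyzer a little less opaque, here is a toy lexicon-based scorer (purely illustrative - the four-word lexicon is made up, and the real analyzer adds a far richer lexicon plus rules for negation, intensifiers and so on):

```python
# A tiny, made-up polarity lexicon in the spirit of the Pattern approach.
LEXICON = {'good': 0.7, 'great': 0.8, 'bad': -0.7, 'terrible': -0.9}

def toy_polarity(text):
    """Average the polarity of known words; 0.0 if none are known."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0
```

For example, toy_polarity("great news") gives 0.8, while text with no lexicon words scores a neutral 0.0.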
All of this can be achieved in just a few lines of code. This is quite a dense codeblock - so I have commented the key steps.
In [4]:
for idx, storyId in enumerate(df['storyId'].values):  # for each row in our df dataframe
    newsText = ek.get_news_story(storyId)  # get the news story
    if newsText:
        soup = BeautifulSoup(newsText, "lxml")  # create a BeautifulSoup object from our HTML news article
        sentA = TextBlob(soup.get_text())  # pass the text-only article to TextBlob to analyse
        df['Polarity'].iloc[idx] = sentA.sentiment.polarity  # write sentiment polarity back to df
        df['Subjectivity'].iloc[idx] = sentA.sentiment.subjectivity  # write sentiment subjectivity score back to df
        if sentA.sentiment.polarity >= 0.05:  # attribute bucket to sentiment polarity
            score = 'positive'
        elif -.05 < sentA.sentiment.polarity < 0.05:
            score = 'neutral'
        else:
            score = 'negative'
        df['Score'].iloc[idx] = score  # write score back to df
df.head()
Out[4]:
Looking at our dataframe we can now see 3 new columns on the right, Polarity, Subjectivity and Score. As we have seen Polarity is the actual sentiment polarity returned from TextBlob (ranging from -1(negative) to +1(positive), Subjectivity is a measure (ranging from 0 to 1) where 0 is very objective and 1 is very subjective, and Score is simply a Positive, Negative or Neutral rating based on the strength of the polarities.
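The Score bucketing just described can be isolated into a tiny helper, which also makes the thresholds easy to tweak later (a sketch mirroring the ±0.05 cut-offs used above):

```python
def polarity_bucket(polarity, threshold=0.05):
    """Map a polarity in [-1, 1] to the three Score buckets."""
    if polarity >= threshold:
        return 'positive'
    if polarity > -threshold:
        return 'neutral'
    return 'negative'
```

For example, polarity_bucket(0.02) returns 'neutral' while polarity_bucket(-0.2) returns 'negative'.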
We would now like to see what, if any, impact this news has had on the share price of IBM. There are many ways of doing this - but to make things simple, I would like to see what the average return is at various points in time AFTER the news has broken. I want to check if there are aggregate differences in the average returns from the Positive, Neutral and Negative buckets we created earlier.
In [5]:
start = df['versionCreated'].min().replace(hour=0,minute=0,second=0,microsecond=0).strftime('%Y/%m/%d')
end = df['versionCreated'].max().replace(hour=0,minute=0,second=0,microsecond=0).strftime('%Y/%m/%d')
Minute = ek.get_timeseries(["IBM.N"], start_date=start, interval="minute")
Minute.tail()
Out[5]:
We will need to create some new columns for the next part of this analysis.
In [6]:
df['twoM'] = np.nan
df['fiveM'] = np.nan
df['tenM'] = np.nan
df['thirtyM'] = np.nan
df.head(2)
Out[6]:
OK so I now just need to get the timestamp of each news item, truncate it to minute data (i.e. remove the second and microsecond components) and get the base share price of IBM at that time, and at several intervals after that time - in our case t+2 mins, t+5 mins, t+10 mins, t+30 mins - calculating the % change for each interval.
An important point to bear in mind here is that news can be generated at anytime - 24 hours a day - outside of normal market hours. So for news generated outside normal market hours for IBM in our case, we would have to wait until the next market opening to conduct our calculations. Of course there are a number of issues here concerning our ability to attribute price movement to our news item in isolation (basically we cannot). That said, there might be other ways of doing this - for example looking at GDRs/ADRs or surrogates etc - these are beyond the scope of this introductory article. In our example, these news items are simply discarded.
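One way to implement the "outside market hours" filter is a rough session check before doing the price lookups (a sketch only - it assumes timestamps are already US/Eastern and ignores exchange holidays and half days):

```python
from datetime import datetime, time

def in_regular_session(ts):
    """Rough NYSE regular-hours check for a naive US/Eastern timestamp."""
    if ts.weekday() >= 5:  # Saturday (5) or Sunday (6)
        return False
    return time(9, 30) <= ts.time() <= time(16, 0)

in_regular_session(datetime(2017, 12, 4, 10, 0))  # a Monday mid-morning -> True
in_regular_session(datetime(2017, 12, 4, 20, 0))  # after the close -> False
```

News items failing this check would be the ones discarded (or routed to an alternative treatment such as next-open pricing).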
We will now loop through each news item in the dataframe, calculate (where possible) and store the derived performance numbers in the columns we created earlier: twoM...thirtyM.
In [7]:
for idx, newsDate in enumerate(df['versionCreated'].values):
    sTime = df['versionCreated'][idx]
    sTime = sTime.replace(second=0, microsecond=0, tzinfo=None)
    try:
        t0 = Minute['CLOSE'].iloc[Minute.index.get_loc(sTime)]
        df['twoM'][idx] = ((Minute['CLOSE'].iloc[Minute.index.get_loc((sTime + datetime.timedelta(minutes=2)))]/(t0)-1)*100)
        df['fiveM'][idx] = ((Minute['CLOSE'].iloc[Minute.index.get_loc((sTime + datetime.timedelta(minutes=5)))]/(t0)-1)*100)
        df['tenM'][idx] = ((Minute['CLOSE'].iloc[Minute.index.get_loc((sTime + datetime.timedelta(minutes=10)))]/(t0)-1)*100)
        df['thirtyM'][idx] = ((Minute['CLOSE'].iloc[Minute.index.get_loc((sTime + datetime.timedelta(minutes=30)))]/(t0)-1)*100)
        #print(str(sTime)+" pass" + str(((Minute['CLOSE'].iloc[Minute.index.get_loc((sTime + datetime.timedelta(minutes=2)))]/(t0)-1)*100)))
    except:
        #print(str(sTime) + " fail" )
        pass
df.head()
Out[7]:
Fantastic - we have now completed the analytical part of our study. Finally, we just need to aggregate our results by Score bucket in order to draw some conclusions.
In [8]:
grouped = df.groupby(['Score']).mean()
grouped
Out[8]:
Observations
From our initial results - it would appear that there might be some small directional differences in returns between the positive and neutral groups over shorter time frames (twoM and fiveM) after news broke. This is a pretty good basis for further investigation. So where could we go from here?
We have a relatively small n here so we might want to increase the size of the study.
We might also want to try to separate out more positive or negative news - i.e. change the threshold of the buckets to try to identify more prominent sentiment articles - maybe that could have more of an impact on performance.
In terms of capturing news impact - we have thrown a lot of news articles out as they happened outside of market hours - as it is more complex to ascertain impact - we might try to find a way of including some of this in our analysis - I mentioned looking at overseas listings, GDRs/ADRs or surrogates above. Alternatively, we could use EXACTLY the same process looking at all news for an index future - say the S&P 500 e-mini - as this trades on Globex pretty much round the clock - so we would be throwing out a lot fewer of the news articles. Great, I hear you cry - but would each news article be able to influence a whole index? Are index futures more sensitive to some types of articles than others? Is there a temporal element to this? These are all excellent questions. Or what about cryptocurrencies? They trade 24/7. And so on.
We could also investigate what is going on with our sentiment engine. We might be able to generate more meaningful results by tinkering with the underlying processes and parameters. Using a different, more domain-specific corpus might help us to generate more relevant scores.
You will see there is plenty of scope to get much more involved here.
This article was intended as an introduction to this most interesting of areas. I hope to have demystified this area for you somewhat and shown how it is possible to get started with this type of complex analysis using only a few lines of code, a simple, easy-to-use yet powerful API, and some really fantastic packages, to generate some meaningful results.
Downloads
Article.EikonAPI.Python.NewsSentimentAnalysis
var v = new { Amount = 108, Message = "Hello" };
var productQuery =
    from prod in products
    select new { prod.Color, prod.Price };

foreach (var v in productQuery)
{
    Console.WriteLine("Color={0}, Price={1}", v.Color, v.Price);
}
Change history: July 2008 - Added information about cast restrictions to introductory text and Remarks section. Reason: information enhancement.
"To pass an anonymous type, or a collection that contains anonymous types, outside a method boundary, you must first cast the type to object."
This isn't entirely true. Consider this helper method:
public static List<T> ListOfType<T>(this T type){ return new List<T>();}
This allows you to write:
var v = new { Amount = 108, Message = "Hello" };
var listOfV = ListOfType(v);
listOfV.Add(v);
I just passed a new strongly-typed List<some_anonymous_type> out of a method without casting it to object.
You can also cast object to an anonymous type using the following helper method:
T Cast<T>(object obj, T type){ return (T)obj;}
So even if you return the anonymous type from a method by casting to object, you can use it strongly typed like this:
var v = Cast(ThisReturnsAnonymousTypeAsObject(), new { Amount = 0, Message = "" });
if(v.Message == "Hello") ...
See "Can't return anonymous type from method? Really?" for more details:
The need to return an anonymous type can emerge e.g. while refactoring long methods with LINQ code, when trying to extract some LINQ code into a new method. While the trick to return anonymous types is interesting from a technological point of view, it does lead to poor-quality code (hard to read, error-prone). When I need to return an anonymous type, I declare and use a real type with just automatic properties and without constructors, and use that. It works fine even with Linq2Sql (the SQL execution plan will not change because of the refactoring). The resulting code is easy to read/understand and is type-safe. IMHO the difference between a good programmer and a bad one is that a good programmer always strives to do things the straight way, while a bad programmer feels smart when he discovers and uses dirty tricks.
    join f in dc.Table2 on p.ID equals f.table1ID
    select new
    {
        p,
        f
    });

var output = returnCode;
MyContext context = new MyContext();
foreach (var o in output.Invoke(context))
{
    yield return o.f.table1ID;
    yield return o.p.ID;
}
Today, you will learn how to remove the background color from a sprite sheet using AS3, and blit the result to a bitmap canvas. Read on to learn more!
Final Result Preview
Let's take a look at the final result we will be working towards:
Step 1: Drawing the Spritesheet
So, it is time to draw your spritesheet. Open up your favourite 'pixel-art' program and create an image of 128x128 and give it a background colour of '#e731f2' which is a nice purple colour!
This is my amazing artwork:
Save your image somewhere organised and let us continue!
Step 2: Importing the Sprite Sheet
Now, I'm calling this a sprite sheet even though it is just one image. A 'sprite sheet' usually consists of more than one sprite but we can imagine we have more, right?
Anyway, if you are using Flash CS4 or higher, simply import your image via File | Import | Import to Library:
If you are using any other AS3 IDE, I have included the SWC file so you should probably skip this step. If you wish to embed your own images, I'm sure that your IDE of choice will have this feature.
Step 3: Exporting the Sprite Sheet
We have now got our sprite sheet in the Library; we should really make it into a Class.
Right-click the image in the library and select 'Properties'. Give the image the following properties:
Hit OK. If you get a warning, just ignore it; it does not matter here.
Step 4: Creating the Document Class
If you're not familiar with the concept of a document class, check out this Quick Tip before reading further.
Create a new 'ActionScript 3.0 Class' and give it the name 'Main'. When the file has been created, save it as 'Main.as'.
This code should be placed in our new Class:
package {
    import flash.display.MovieClip;

    public class Main extends MovieClip {

        public function Main() {
            // constructor code
        }
    }
}
We are not done yet, however! If you are using the Flash IDE, navigate to the 'Properties Panel' and set the 'Document Class' to 'Main'. If you are wondering what that does, it means that when your application/game is run by the Flash Player, Main.as will be the class that's linked to the SWF itself. Cool, huh?
Run the program; if you get no errors then you should be good to go!
Step 5: Creating the Canvas
Before we do any blitting, we will first need to make a canvas to blit onto. If you are unsure of the term Blitting or would like to learn more about it, please take a look at this tutorial.
Now, we will declare a new Bitmap variable, to which we will blit (copy) the image.
private var canvas:Bitmap;
After we have done this, we will add a function called Initialize() which will allow us to set everything up neatly:
public function Main() {
    Initialize();
}
Let us create the function now:
private function Initialize():void {
    canvas = new Bitmap( new BitmapData( 550, 400, false, 0x000000 ) ); // Sets the Canvas to black.
    stage.addChild( canvas ); // Adds the canvas to the stage.
}
We are still not finished, however, as we still have to add the imports:
import flash.display.MovieClip;
import flash.display.Bitmap;
import flash.display.BitmapData;
Run the program; if it has a black background, it worked!
Step 6: Initializing the SpriteSheet
Firstly, we will need to make a new variable of type SpriteSheet - which was the Class for the image we imported earlier, remember?
private var canvas:Bitmap;
private var spriteSheet:SpriteSheet;
We shall then initialize it:
private function Initialize():void {
    canvas = new Bitmap( new BitmapData( 550, 400, false, 0x000000 ) ); // Sets the Canvas to black.
    spriteSheet = new SpriteSheet(); // Sets spriteSheet to hold an instance of the image that we made.
    stage.addChild( canvas ); // Adds the canvas to the stage.
}
Run the program and you should see nothing; let's fix that right away!
Step 7: Updating the Program
Now we need to add an ENTER_FRAME event. This will allow the program to update 24 times a second (24 FPS) in our case.
In the Main() function, add the following line:
public function Main() {
    Initialize();
    stage.addEventListener( Event.ENTER_FRAME, Update );
}
Now we need to make the Update() function. Add this function after the other functions:
private function Update(e:Event):void { }
Don't forget the imports:
import flash.display.MovieClip;
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.events.Event;
Now we are ready to do some blitting!
Step 8: Blitting
Here comes the interesting part!
Alright, so what we want to do is:
- Clear the canvas.
- Blit the image and remove the background colour.
In the update function, type the following code:
private function Update(e:Event):void {
    canvas.bitmapData.lock();
    canvas.bitmapData.fillRect( new Rectangle( 0, 0, stage.width, stage.height ), 0x000000 );
    canvas.bitmapData.copyPixels( spriteSheet, new Rectangle( 0, 0, 128, 128 ), new Point( 100, 100 ) );
    canvas.bitmapData.unlock();
}
If you run this, you will get your image on the canvas! However, this is not just what we are aiming for as we wish to remove that background colour from the image.
I shall explain some of the code above first:
canvas.bitmapData.lock(); - This line optimizes the blitting and it is a good habit to type it most of the time!
canvas.bitmapData.fillRect(); - This line clears the canvas by filling it with a black colour.
canvas.bitmapData.copyPixels(); - Not very useful in our situation, but copies all the pixels from part of an image.
canvas.bitmapData.unlock(); - This works with lock() to optimize the process.
Now you should have this on the screen...
Yes, I know, you are probably right. I think we should get rid of the purple too...
Step 9: Removing the Colour
Finally, it's time to remove the purple colour!
What we want to do is check through every pixel; if the pixel is purple, we simply do not copy it to the canvas. To do this, we will make our own function.
Change Update() to the following:
private function Update(e:Event):void {
    canvas.bitmapData.lock();
    canvas.bitmapData.fillRect( new Rectangle( 0, 0, stage.width, stage.height ), 0x000000 );
    CustomBlitting( spriteSheet, new Rectangle( 0, 0, 128, 128 ), new Point( 100, 100 ), 0xE730F2 );
    canvas.bitmapData.unlock();
}
Our new function (CustomBlitting(), which we have not written yet) takes most of the parameters that copyPixels does, along with an extra one: the colour we wish to remove.
Time to write the function. This code may look complicated if you have never done a nested for-loop before. The way this loop works is basically:
- For every row we have...
- Check every pixel in that row...
- Move to next row down...
private function CustomBlitting( src:BitmapData, srcRect:Rectangle, destPoint:Point, color:uint ):void {
    for( var i:int = 0; i < srcRect.height; i++ ) {
        for( var j:int = 0; j < srcRect.width; j++ ) {
            var currentPixel:uint = src.getPixel( srcRect.x + j, srcRect.y + i );
            if( currentPixel != color ) {
                canvas.bitmapData.setPixel( destPoint.x + j, destPoint.y + i, currentPixel );
            }
        }
    }
}
Let me explain the getPixel and setPixel, although they should probably be self-explanatory:
getPixel( x, y ); - This returns the colour of a pixel at the X,Y location.
setPixel( x, y, color ); - This sets the colour of a pixel to color at the X,Y location of the canvas.
Now if you run the program, it works!
Step 10: Challenges
I only have one challenge for you to do for this tutorial:
Accept an Array of colours as a parameter and remove any colours from the image that are in the array.
Good luck!
Conclusion
I hope you have enjoyed this tutorial and have learnt something new today. If you'd like to show me your SWF with the completed challenges, leave a comment below!
toktok
Fast and most complete/customizable tokenizer in Python.
It is roughly 25x faster than spaCy's and NLTK's regex-based tokenizers.
Using the Aho-Corasick algorithm makes it a novelty and allows it to be both explainable and fast in how it will split.
The heavy lifting is done by textsearch and pyahocorasick, allowing this to be written in only ~200 lines of code.
Contrary to regex-based approaches, it will go over each character in a text only once. Read below about how this works.
Installation
pip install tok
Usage
By default it handles contractions, http, (float) numbers and currencies.
from tok import word_tokenize

word_tokenize("I wouldn't do that.... would you?")
['I', 'would', 'not', 'do', 'that', '...', 'would', 'you', '?']
Or configure it yourself:
from tok import Tokenizer

tokenizer = Tokenizer(protected_words=["some.thing"])  # still using the defaults
tokenizer.word_tokenize("I want to protect some.thing")
['I', 'want', 'to', 'protect', 'some.thing']
Split by sentences:
from tok import sent_tokenize

sent_tokenize("I wouldn't do that.... would you?")
[['I', 'would', 'not', 'do', 'that', '...'], ['would', 'you', '?']]
For more options, check the documentation of the Tokenizer.
Further customization
Given a tokenizer instance, e.g. t = Tokenizer():
t.drop("bla", "bla is not needed")
t.word_tokenize("Please remove bla, thank you")
['Please', 'remove', ',', 'thank', 'you']
Explainable
Explain what happened:
t.explain("bla")
[{'from': 'bla', 'to': ' ', 'explanation': 'bla is not needed'}]
See everything in there (will help you understand how it works):
t.explain_dict
How it works
It will always only keep the longest match. By introducing a space in your tokens, it will make it be split.
If you consider how the tokenization of "." works, see here:

- When it finds a "A." it will make it "A." (single-letter abbreviations)
- When it finds a ".0" it will make it ".0" (numbers)
- When it finds a ".", it will make it " . " (thus making a split)
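The longest-match behaviour described above can be sketched in pure Python (a toy illustration only - tok does this in a single pass via textsearch/pyahocorasick, and the rules dict below is made up for the example):

```python
def longest_match_replace(text, rules):
    """Toy longest-match replacement: at each position, prefer the
    longest matching pattern; unmatched characters pass through."""
    patterns = sorted(rules, key=len, reverse=True)  # longer patterns win
    out, i = [], 0
    while i < len(text):
        for p in patterns:
            if text.startswith(p, i):
                out.append(rules[p])
                i += len(p)
                break
        else:
            out.append(text[i])
            i += 1
    return ''.join(out)

# "A." and ".0" are protected; a bare "." is padded with spaces
# so that a later whitespace split produces a separate token.
rules = {'A.': 'A.', '.0': '.0', '.': ' . '}
longest_match_replace("A. scored 1.0 today.", rules)
```

Here the trailing "." becomes " . ", while "A." and "1.0" survive intact - the same effect the bullets above describe.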
If you want to make sure something including a dot stays, you can use for example:
t.keep("cool.")
Contributing
It would be greatly appreciated if you want to contribute to this library.
It would also be great to add contractions for other languages. | https://libraries.io/pypi/tok/0.0.9 | CC-MAIN-2019-35 | refinedweb | 349 | 61.33 |
When adding an overlay to a Google Maps implementation, be sure to read the documentation thoroughly, which I of course didn’t resulting in the fact that I missed the important part about XHTML and VML which on its turn resulted in the fact that it wouldn’t work in IE.
This happened to me whilst working on the roadmap for the Trancefusion 9 website … all went well in Firefox, but not in IE.
Quoting the solution here will hopefully help out others who don’t feel like rtfm’ing the Google Maps API Documentation:
“If you want to show polylines on your map (like the lines used by Google Maps to show driving directions), you need to include the VML namespace and some CSS code in your XHTML document to make everything work properly in IE“. The API is referring to this piece of code.
Now remember that y’all! | https://www.bram.us/2006/10/02/my-google-maps-overlay-issue-resolved/ | CC-MAIN-2021-39 | refinedweb | 151 | 67.22 |
TrackPacer Part 3 - Controlling Thousands of LEDs
This is the last post in a 3 part series detailing the construction of our latest hardware project - TrackPacer. In this post, I'll cover the different ways you can control large numbers of LEDs using an Arduino and open source libraries, as well as describe the technique we settled on for efficiently managing 12,000 LEDs.
Background (Getting Started)
Note: This post will focus on interacting with NeoPixel LEDs.
Adafruit's NeoPixel LEDs are a joy to work with. You power them with 5 volts, they come in lengthy strips, and there are a number of libraries out there with great code examples so you can get up and running in no time at all. A great one to get started with is Adafruit's own NeoPixel library.
With the library, the code to control specific LEDs is quite easy:
#include <Adafruit_NeoPixel.h> #define PIXELS 150 #define PIN 12 #define LED_TYPE (NEO_GRB + NEO_KHZ800) Adafruit_NeoPixel leds = Adafruit_NeoPixel(PIXELS, PIN, LED_TYPE); void setup() { leds.begin(); } void loop() { for(int i = 0; i < PIXELS; i++) { leds.setPixelColor(i, 255, 0, 255); // (pixel, Red, Green, Blue) leds.show(); delay(250); } }
If however you want to control more than a couple hundred, you can run into memory, power, or data integrity limitations. To successfully control larger numbers of LEDs, there are a few requirements to tackle.
Requirement #1: The Brain
Adafruit's NeoPixel library is very easy to use, in part because it holds a buffer of all of the lights in memory. 3 bytes for each light for the Red, Green, and Blue values. This memory buffer is great as it lets you individually set the values for specific lights in your strip, but it does chew into the limited space you have if you're on something like an Arduino Uno or Mega. There are a couple options to handle this.
Option #1: Bigger Brain
If you're ready to take the Arduino training wheels off, then look into the Teensy. While the Arduino Uno will top out at around 450 LEDs, the Teensy should bring you control of over 20,000 by my calculation. Of course, you'll start running into power issues at that level but at least you'll have the memory problem taken care of.
Option 2: More Efficient Brain
If you're stuck with limited memory, you can jump onto the road less travelled and ditch the memory buffer for the LEDs. Without all of your LEDs in memory, you lose the ability to write to specific LEDs and instead are forced to iterate over each LED and send the RGB command one by one.
There's an excellent post on wp.josh.com which explains the ins and outs of how this works, and links to the code you can drop in yourself. The upside of this technique is that you can now control a limitless number of LEDs. The downside is your interface for doing so becomes trickier.
Option 3: Lots of Brains
This takes a bit more work, but if your project calls for it, it might be best to scale the number of your LED controlling microcontrollers. If you have the resources to do so, scaling horizontally is a great way to control lots of LEDs.
Combine this with a bigger or more efficient brain and you've got a mighty system. If you need some tips on how to get multiple microcontrollers talking to each other, check out Part 2 of this post series.
Requirement #2: Power
I have less tips and tricks for handling this requirement - it's more of a "just do it the right way" kind of thing. You want to make sure you're supplying enough current for your project.
LEDs pull the most amount of current when they're shining bright white - full intensity on Red, Blue, and Green. For NeoPixels, one pixel draws 60mA at bright white. Multiply that by whatever number of LEDs you're planning on using, and compare that with the current your power supply is capable of putting out. If it doesn't supply enough current, consider using a beefier power supply or introducing multiple different supplies to spread the load.
What We Did
For the Trackpacer project, we settled on a combination of techniques by chaining multiple Teensys together - Lots of Bigger Brains. Every Teensy has it's own power supply and is in charge of 1,200 LEDs (two strands of 600 to minimize voltage drop over the LED strips).
It's surely a sight to see that many LEDs all synchronized together. If you've managed a large scale LED setup, I'd love to hear about it in the comments below. And if you've found this interesting, check out the other posts in this series Part 1 - A Nerdy Overview, and Part 2 - Connecting Multiple Microcontrollers Using ICSC. | https://www.viget.com/articles/trackpacer-part-3-controlling-thousands-of-leds/ | CC-MAIN-2022-27 | refinedweb | 818 | 68.5 |
Kentico Xperience 13 Beta (2 Part Series)
Table of Contents
- How to Get the Kentico EMS 2020 Beta
- Running The Beta on .NET Core
- Coding Kentico on ASP.NET Core
- Why Is Kentico EMS on ASP.NET Core Important?
- Conclusion
The first Kentico EMS 2020 beta was released in the Autumn of 2019, with a focus on updates to reusable content and core libraries being migrated to .NET Standard.
The second beta has been available for about a month. So let's dive in 🤽🏾♀️!
How to Get the Kentico EMS 2020 Beta
To get access to the beta we need to have a DevNet account. If you don't have one, you can register here.
After logging into DevNet with our account, we can download the beta in the downloads section of the site.
The beta has instructions 📃 on how to get it up and running, what features are new in this version, the list of known issues, and what the focus should be for developers and users of Kentico when trying out this version of the beta.
Running The Beta on .NET Core
The feature of Beta 2 that I find most interesting is the ability to run Kentico code in a .NET Core environment, specifically ASP.NET Core.
What does this mean exactly 🤔?
Well, since Kentico 12, the Content Management and Content Delivery sides of the application have been separated into (2) applications.
The upcoming version of Kentico will be no different.
However, while the Content Management application will still be running exclusively on the .NET Framework we've been using for 2 decades, the Content Delivery application can be run on either .NET Framework 4.8 (using ASP.NET MVC) or .NET Core 3.1 (using ASP.NET Core).
The two frameworks are bridged by Kentico's core libraries (like document management, marketing automation, global event system) being migrated to work on .NET Standard 2.0, and both .NET Core 3.1 and .NET Framework 4.8 support .NET Standard 2.0 🤓.
This means both the Content Management and Content Delivery applications can share a large majority of the platform's code.
So let's look at setting up a new Content Delivery application on .NET Core, using Visual Studio and VS Code 👍🏾.
Initial Setup
We should follow the instructions in the .zip file we downloaded that contains all the beta code and information.
Steps:
- Install the Kentico 2020 Beta 2 Kentico Installation Manager (KIM), which will appear as Kentico Installation Manager 13.0 in the Windows Start Menu
- Install a new Kentico 2020 Beta 2 site using the DancingGoatMVC template on our machine
We can see there is a
\NuGet Packages folder that contains all the NuGet packages we'd normally be downloading from Kentico on Nuget.org.
I recommend copying these packages, into a sub-folder of the one created by the KIM for the new beta web application (here I've named it
\nuget-packages).
Using Visual Studio
Visual Studio gives us the classic Kentico development environment we've used in all previous versions, and setting up an ASP.NET Core project in Visual Studio is very similar to ASP.NET projects running on Full Framework 😃.
ℹ I recommend running the latest version of Visual Studio, 16.4.5 at the time of this writing, to avoid any issues ℹ.
We can create a new project in the standard Visual Studio wizard interface:
Enter the standard information for your project:
ℹ It will probably be easiest to create it in the same folder where the Beta 2 CMS and DancingGoatMVC projects are located ℹ.
Then we will select the standard MVC project template:
When the solution loads up we will see our project, but now we need to install the Kentico 2020 Beta 2 NuGet packages.
These packages are not available on NuGet.org since this is a beta 🤨, but we can install them by adding a custom NuGet package source for our Windows account in Visual Studio:
Once we add the new package source, we can select it from the package source drop down in the NuGet packages UI:
We only need to install (1) package -
Kentico.Libraries. Once that package is installed we can see it in the "Dependencies" node of the project ☺:
Now that everything is setup and installed for Visual Studio, we can start using Kentico in .NET Core 💪🏾💪🏾!
Using VS Code
Setting up a new project in VS Code is a bit different, but very exciting ⚡, because a functioning VS Code based project means we have a truly cross-platform ASP.NET Core codebase that Mac and Linux users can work on as well!
ℹ Before we begin, we will need version 3.1+ of the .NET Core SDK and VS Code installed ℹ.
We will be doing all the setup from the command line 💻, so open up your favorite terminal/shell and execute the following commands from the root of the Kentico projects folder (where all the
.sln files are).
First, we will create the solution file
dotnet new sln --name Sandbox
Next, we create the ASP.NET Core project:
dotnet new mvc --name Sandbox
We should now have a
Sandbox.sln file at the root of our directory and a
Sandbox sub-directory containing the ASP.NET Core MVC project:
Let's add the project to our solution:
dotnet sln .\Sandbox.sln add .\Sandbox\
Finally, we'll add the local NuGet packages for the beta to our new project:
dotnet add .\Sandbox\ package Kentico.Libraries -v 13.0.0 --source "nuget-packages"
ℹ Above,
.\Sandbox\is the path to the folder containing my
.csproj,
Kentico.Librariesis the package being installed,
13.0.0is the specific version I want, and
"nuget-packages"is the local folder source I'm going to get this package from ℹ.
At this point we should be able to build the project from the command line:
dotnet build .\Sandbox\
We can also run it from within VS Code 👏🏾!
Open VS Code from the command line:
code .\Sandbox\
Let's create a
task.json file, which will contain common operations we want to perform on the project, using the Command Palette:
We select the default option (create
tasks.json) and then select .NET Core as the task template:
We will end up with the following JSON in
.\.vscode\tasks.json:
{ // See // for the documentation about the tasks.json format "version": "2.0.0", "tasks": [ { "label": "build", "command": "dotnet", "type": "shell", "args": [ "build", // Ask dotnet build to generate full paths for file names. "/property:GenerateFullPaths=true", // Do not generate summary otherwise it leads to duplicate errors in Problems panel "/consoleloggerparameters:NoSummary" ], "group": "build", "presentation": { "reveal": "silent" }, "problemMatcher": "$msCompile" } ] }
Now, from within VS Code we can use the Command Palette to run a specific task (
Tasks: Run Task) or use a shortcut to run the Build task via
ctrl+
shift+
b 🧐.
We can also add a
launch.json file so that we can run/debug our application easily from within VS Code using the Command Palette and selecting "Debug: Open launch.json":
From there we select .NET Core as our environment and VS Code will create the following
launch.json for us:
{ // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: "version": "0.2.0", "configurations": [ { "name": ".NET Core Launch (web)", "type": "coreclr", "request": "launch", "preLaunchTask": "build", "program": "${workspaceFolder}/bin/Debug/netcoreapp3.1/Sandbox.dll", "args": [], "cwd": "${workspaceFolder}", "stopAtEntry": false, "serverReadyAction": { "action": "openExternally", "pattern": "^\\s*Now listening on:\\s+(https?://\\S+)" }, "env": { "ASPNETCORE_ENVIRONMENT": "Development" }, "sourceFileMap": { "/Views": "${workspaceFolder}/Views" } }, { "name": ".NET Core Attach", "type": "coreclr", "request": "attach", "processId": "${command:pickProcess}" } ] }
Now we can launch and debug our app from within VS Code by selecting "Debug: Start Debugging" from the Command Palette:
Final Steps
So, now that we have an ASP.NET Core project up and running, with the Kentico EMS 2020 Beta 2 NuGet packages installed, is there anything left to do before we get to run some awesome Kentico + .NET Core goodness 🙄?
... Well, yes, there are a couple more steps 😑 ... but they're easy and we're getting close 🤗!
- Add the connection string from the CMS
web.configto our ASP.NET Core
appsettings.jsonfile as follows:
{ "Logging": { "LogLevel": { "Default": "Information", "Microsoft": "Warning", "Microsoft.Hosting.Lifetime": "Information" } }, "AllowedHosts": "*", "ConnectionStrings": { "CMSConnectionString": "<Connection string goes here>" } }
- In our
Startup.csfile, the
ConfigureServices()and
Configure()methods need updated to integrate Kentico:
// ✅ Kentico Integration using CMS.AspNetCore.Platform; // ... public void ConfigureServices(IServiceCollection services) { services.AddControllersWithViews(); // ✅ Kentico Integration services.AddKentico(); } public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { // ... // ✅ Kentico Integration app.UseKentico(); app.UseRouting(); app.UseEndpoints(endpoints => { // ✅ Kentico Integration endpoints.MapKenticoRoutes(); endpoints.MapControllerRoute( name: "default", pattern: "{controller=Home}/{action=Index}/{id?}"); }); }
Now, we're all set!
Coding Kentico on ASP.NET Core
We've got everything set up and running, so we should be able to test out some Kentico functionality 🎉🎊!
What Can We Do?
The documentation that comes with the beta has a great snippet of demo code that shows Kentico EMS really can run on .NET Core now!
First, we create a new
ArticleModel class:
using System; namespace Sandbox.Models { public class ArticleModel { public string Heading { get; set; } public string Text { get; set; } public Guid ImageGuid { get; set; } public string ImageFileName { get; set; } public string GetImagePath() => $"~/getattachment/{ImageGuid:D}/" + $"{ImageFileName}?sitename=DancingGoatMVC&maxsidesize=400"; } }
Then, we can update the
HomeController.Index() method to get data from the EMS database using our favorite
DocumentHelper calls:
// ✅ Added to let us use our model class using Sandbox.Models // ✅ Added to allow us to query documents using CMS.DocumentEngine; // ✅ Added to make our use of "ValidationHelper" less repetative using static CMS.Helpers.ValidationHelper; // ... public IActionResult Index() { var articles = DocumentHelper.GetDocuments("DancingGoatMvc.Article") .Culture("en-us") .OrderBy("NodeOrder") .ToList() .Select(article => { var attachmentGuid = GetGuid( article.GetProperty("ArticleTeaser"), default); var attachment = DocumentHelper.GetAttachment( article, attachmentGuid); return new ArticleModel { Heading = GetString(article.GetProperty("ArticleTitle"), "🍩"), Text = GetString(article.GetProperty("ArticleSummary"), "🍦"), ImageGuid = attachmentGuid, ImageFileName = attachment.AttachmentName }; }); return View(articles); }
Now we update our
Home.cshtml to the following:
@model IEnumerable<Sandbox.Models.ArticleModel> @{ ViewData["Title"] = "Home Page"; } @foreach (var article in Model) { <section class="row text-and-image"> <h2 class="col-lg-12">@article.Heading</h2> <div class="col-md-6"> <div class="text-and-image-text"> @Html.Raw(article.Text) </div> </div> <div class="col-md-6"> @{ string url = Url.Content(article.GetImagePath()); } <img src="@url" title="@article.Heading" alt="@article.Heading" class="img-responsive" /> </div> </section> }
And this is what we should see in the browser when running the application and visiting the root of our site:
🤩 So awesome 🤩!
What Has Not Been Released Yet?
The most important thing, for developers familiar with Kentico 12 MVC projects, that is missing from this beta would have to be the Page Builder functionality.
This means we won't be able to view our ASP.NET Core site from within the CMS and add/configure MVC Widgets for those pages 😞.
However, as we can see in the Kentico product roadmap, this functionality will be available in Beta 3 🤠!
Why Is Kentico EMS on ASP.NET Core Important?
Kentico was originally developed as an ASP.NET Framework application built on the Web Forms technology.
With Kentico 12, ASP.NET MVC running on .NET Framework became the recommended technology approach for Content Delivery for new sites 👍🏾.
But this is still running on the Windows-only .NET Framework and not the new and quickly growing .NET Core framework.
Kentico has been migrating their internal libraries to .NET Standard 2.0 for several years to support the eventual scenario we've just experienced - Kentico EMS integrated with .NET Core 🧡!
.NET Core is the future of .NET, so it's very exciting to see Kentico supporting it for Kentico 2020.
With Kentico EMS sites built using ASP.NET Core as their Content Delivery technology, we will be able to build these applications on Windows, Linux, or Mac, and use something like Docker for development and deployment 🤓.
We will be able to take advantage of all the performance improvements in .NET Core (when compared to .NET Framework), and the powerful new features of ASP.NET, like middleware, ubiquitous Dependency Injection, and Tag Helpers 🔥🔥.
Conclusion
Now we should be able to get the new Kentico EMS beta set up as an ASP.NET Core project in either Visual Studio, or VS Code, integrate in Kentico's beta NuGet packages, and code up some sweet 🍰 Kentico functionality running in an ASP.NET Core application.
Compatibility with .NET Core is great for Kentico and great for us, the Kentico developer community.
It's a bright ☀ new future and I hope you join me in it, by trying out the Kentico EMS 2020 Beta 2 😎.
If you are looking for more of a product overview and insight into what is coming, from a feature perspective, in the next version of Kentico, check out Matt Nield's post, Kentico 2020 milestone 2 review.
As always, thanks for reading 🙏!
We've put together a list over on Kentico's GitHub account of developer resources. Go check it out!
If you are looking for additional Kentico content, checkout the Kentico tag here on DEV:
Or my Kentico blog series:
Kentico Xperience 13 Beta (2 Part Series)
Posted on Mar 2 by:
Sean G. Wright
dev lead @WiredViews, founding partner @craftbrewingbiz. @Kentico MVP. love to learn / teach web dev & software engineering, collecting vinyl records, mowing my lawn, craft 🍺
Read Next
Uploading Image from Angular to ASP.NET Core Web API
Hemant Joshi -
Youtube Video: ASP.NET Core Web API (Rest API) with Entity Framework Core, SQL Server and Visual Studio Code
jalpesh Vadgama -
Update Your .NET Core Projects, Folks!
Jamie -
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/seangwright/kentico-ems-2020-beta-beta-2-on-net-core-494b | CC-MAIN-2020-29 | refinedweb | 2,275 | 59.19 |
#include <sys/types.h> #include <db.h>
The record number data structure is either variable or fixed-length records stored in a flat-file format, accessed by the logical record number.(3) is defined in the <db.h> include file as follows:
typedef struct { unsigned long flags; unsigned int cachesize; unsigned int psize; int lorder; size_t reclen; unsigned char bval; char *bfname; } RECNOINFO;
The elements of this structure are defined as follows:
The data part of the key/data pair used by the recno access method is the same as other access methods. The key is different. The data field of the key should be a pointer to a memory location of type recno_t, as interface to create a new record will cause the creation of multiple, empty records if the record number is more than one greater than the largest record currently in the database.
Document Processing in a Relational Database System, Michael Stonebraker, Heidi Stettner, Joseph Kalash, Antonin Guttman, Nadene Lynn, Memorandum No. UCB/ERL M82/32, May 1982. | http://www.makelinux.net/man/3/R/recno | CC-MAIN-2014-42 | refinedweb | 171 | 59.53 |
Monitoring the health of your computer system is incredibly important. That's why Microsoft built performance monitoring into Windows NT. Jeffrey Richter shows you a C++ class that lets you easily use performance data within your own apps.
Jeffrey Richter
Implementing a Web View Namespace Extension Using Active Directory Services
Now you can view the Web with Windows Explorer using a namespace extension. We'll explain how to create and customize a Web View with HTML, the Active Template Library, and the Active Directory Services Interface.
Todd Daniell, Brian Daigle, Doug Bahr,
and Dave Mims
Interface Definition Language is the preferred way to describe your COM interfaces, but many developers have only a rudimentary knowledge of IDL. Here's a survival guide that will show you what IDL is, when you need it, and the basics of using it. | http://www.microsoft.com/msj/0898/default.aspx | CC-MAIN-2014-52 | refinedweb | 139 | 51.78 |
Ask a Jedi: Handling RSS feeds with custom data
This post is more than 2 years old.
Neal asks:.)
I'm trying to use cffeed to read RSS feeds provided by National Public Radio (I manage a few websites for community radio stations that purchase NPR programming). Everything works great, except that NPR uses a custom namespace for certain elements. For example, npr:rmaudio provides a link to a Real audio file, but cffeed won't read this element. When I try to loop through the feed having captured it with cfhttp, coldfusion chokes on the colon. Any ideas? I can't seem to find much about this on the web. Finally, thanks for the many interesting articles you've written over the years.
As for the colon issue - you are the second person in the past 24 hours who hasn't recognized a simple fact about structures. (Remember that when dealing with an XML object, you can treat parts of it as arrays and structs.) When it comes to keys that have colons, or other odd characters, all you need to do is use bracket notation. So consider this simple structure:
<cfset s = {}> <cfset s.age = 9> <cfset s["funky cold madina"] = 21>
The first key I set, age, is a simple string with no spaces in it, and I can use dot notation. The second key, "funky cold madina", can't be used with dot notation, but bracket notation works just fine.
So let's look at the XML Neal was working with. The feed may be found here:
And if you get this data via XML, you will see data like so:
<item> <title>Pennsylvania Primary Roundup</title> <description>Barack Obama improved his showing among white, middle-class voters, but not enough to beat Hillary Clinton in the Pennsylvania primary on Tuesday. NPR's National Political Correspondent Mara Liasson analyzes the race with Robert Seigel.</description> <pubDate>Tue, 22 Apr 2008 21:53:01 -0400</pubDate> <link></link> <guid></guid> <npr:wmaudio></npr:wmaudio> <npr:rmaudio></npr:rmaudio> </item>
Note the NPR tags. Working with these tags is a simple matter. Consider this code:
<cfloop index="x" from="1" to="#arrayLen(xmlResult.rss.channel.item)#"> <cfset item = xmlResult.rss.channel.item[x]> <cfoutput> <p> <b><a href="#item.link.xmlText#">#item.title.xmlText#</a></b><br /> <cfif structKeyExists(item, "npr:wmaudio")> <a href="#item["npr:wmaudio"].xmlText#">WM Audio Link</a><br /> </cfif> <cfif structKeyExists(item, "npr:rmaudio")> <a href="#item["npr:rmaudio"].xmlText#">RM Audio Link</a><br /> </cfif> </p> </cfoutput> </cfloop>
All I do is loop over each item, output the things I know exist (link and title), and check to see if the WM or RM audio links exist. If so, I output them using bracket notation.
Archived Comments
Ahem, it's <a href="...">Funky Cold Medina</a>
I'm having the same issue with Google's Picasa API. In fact I just made a post about it on the HoF mailing list:...
So if I understand correctly...you're saying that I basically can't use CFFEED for feeds that have custom namespaces if I want to get those custom nodes?
Correct.
Suck. Do you have any idea if this is something they'll fix? Seems like it's a pretty significant omission.
Well, its still pretty trivial to get to the meta data. CF makes working w/ XML pretty easy. | https://www.raymondcamden.com/2008/04/23/Ask-a-Jedi-Handling-RSS-feeds-with-custom-data | CC-MAIN-2021-17 | refinedweb | 568 | 65.32 |
Hi people, I am having trouble with my splash screen. Basically, it just flashes really quickly and I can't get to to pause for long enough. See attached file.
I've attached my program as a zipped file.
After you have download it you need to right click and extract it otherwise it won't work.
Awaiting your earliest reply.
God bless.
Here's the code:
import java.awt.*; import java.awt.event.*; public class TestSplash { MyFrame theFrame; public static void main (String args[]){ TestSplash t = new TestSplash(); t.createMainFrame(); } private void createMainFrame() { theFrame = new MyFrame("A Dummy Frame"); theFrame.setVisible(true); } } class MyFrame extends Frame { Splash mySplash; public MyFrame(String title){ super(title); addWindowListener (new WindowAdapter() { public void windowClosing(WindowEvent e) { System.exit(0); } } ); mySplash = new Splash(this, "blart.png"); // dummy delay so we can see the Splash! for(int i = 0; i < 50000; i++) { //PROBLEM HERE System.out.println(i) ; } setSize(200,200); } } | https://www.daniweb.com/programming/software-development/threads/39461/splash-screen-woes | CC-MAIN-2017-09 | refinedweb | 156 | 61.12 |
Math::Prime::Util - Utilities related to prime numbers, including fast sieves and factoring
Version 0.32
# The primes between 5k and 10k inclusive
my $aref = primes( 5_000, 10_000 );

# If you want them in an array instead
my @primes = @{primes( 500 )};

# You can do something for every prime in a range.  Twin primes to 10k:
forprimes { say if is_prime($_+2) } 10000;

# For non-bigints, is_prime and is_prob_prime will always be 0 or 2.
# They return 0 (composite), 2 (prime), or 1 (probably prime).
say "$n is prime" if is_prime($n);

# step back (returns 0 if given input less than 2)
$n = prev_prime($n);

# Return Pi(n) -- the number of primes <= n.
my $primepi = prime_count( $n );

@prime_factors = factor( $n );        # get all prime factors
@divisors      = all_factors( $n );   # get all divisors

$sigma  = divisor_sum( $n );          # sum of divisors
$sigma0 = divisor_sum( $n, 0 );       # count of divisors
$sigmak = divisor_sum( $n, $k );

$small_prime = random_prime(1000);            # random prime <= limit
my $rand_prime = random_prime(100, 10000);    # random prime within a range
my $rand_prime = random_ndigit_prime(6);      # random 6-digit prime
my $rand_prime = random_nbit_prime(128);      # random 128-bit prime
my $rand_prime = random_strong_prime(256);    # random 256-bit strong prime
my $rand_prime = random_maurer_prime(256);    # random 256-bit provable prime

The sieving and factoring included are intended to be (and currently are) the fastest on CPAN. The module requires no external software for big number support, as there are Perl implementations included that solely use Math::BigInt and Math::BigFloat. If you want high performance with big numbers (larger than Perl's UV size), you should install Math::Prime::Util::GMP.
Two scripts are also included and installed by default:

- primes.pl prints primes in a range; use --help to see all the options.
- factor.pl is a factor program; it supports bigint and expression inputs.
By default all functions support bignums. With a few exceptions, the module will not turn on bignum support for you -- you will need to use bigint, use bignum, or pass in a Math::BigInt or Math::BigFloat object as your input. The functions take some care to perform all bignum operations using the same class as was passed in, allowing the module to work properly with Calc, FastCalc, GMP, Pari, etc. You should try to install Math::Prime::Util::GMP if you plan to use bigints with this module, as it will make it run much faster.
Some of the functions, including:
factor is_pseudoprime is_strong_pseudoprime nth_prime moebius mertens euler_phi exp_mangoldt chebyshev_theta chebyshev_psi is_prime is_prob_prime next_prime prev_prime
work very fast (under 1 microsecond) on small inputs, but the wrappers for input validation and bigint support take more time than the function itself. Using the flag '-bigint', e.g.:
use Math::Prime::Util qw(-bigint);
will turn off bigint support for those functions. Those functions will then go directly to the XS versions, which will speed up very small inputs a lot. This is useful if you're using the functions in a loop, but since the difference is less than a millisecond, it's really not important in general. The last four functions have shortcuts by default, so for them the flag will only skip validation.

is_prime returns 2 if the number is prime, 0 if not. For numbers larger than 2^64 it will return 0 for composite and 1 for probably prime, using an extra-strong BPSW test. If Math::Prime::Util::GMP is installed, some additional primality tests are also performed on large inputs. If you want even more certainty, you can test additional random bases with "is_strong_pseudoprime", or use a different test such as "is_frobenius_underwood_pseudoprime". Even better, make sure Math::Prime::Util::GMP is installed and use "is_provable_prime", which should be reasonably fast for sizes under 2048 bits. Another possibility is to use "random_maurer_prime" in Math::Prime::Util, which constructs a random provable prime.

Generating a range of primes will use one of: trial division (for ranges with only one expected prime), a Sieve of Eratosthenes using wheel factorization, or a segmented sieve.
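The extra-strong BPSW test combines a strong (Miller–Rabin style) pseudoprime test with a Lucas test. As an illustration of the first half only — this is a generic Python sketch, not the module's C/XS implementation — a single-base strong pseudoprime check can be written as:

```python
def is_strong_pseudoprime(n, base):
    """Return True if n is a strong probable prime to the given base.

    Write n-1 = d * 2^s with d odd; n passes if base^d == 1 (mod n),
    or base^(d * 2^r) == n-1 (mod n) for some 0 <= r < s.
    """
    if n < 3 or n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(base, d, n)          # modular exponentiation
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = (x * x) % n
        if x == n - 1:
            return True
    return False
```

A single base is not conclusive — for example the composite 2047 = 23 * 89 passes base 2 — which is exactly why BPSW follows the base-2 strong test with a Lucas test, and why the documentation above suggests extra random bases for more certainty.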
$n = next_prime($n);
Returns the next prime greater than the input number. If the input is not a bigint, then 0 is returned if the next prime is larger than a native integer type (the last representable primes being 4,294,967,291 in 32-bit Perl and 18,446,744,073,709,551,557 in 64-bit).
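Conceptually, a next-prime search is just "step forward until a primality test passes". A minimal Python sketch (for illustration only — the module uses fast primality tests and wheel-based stepping in C, not trial division) might look like:

```python
def trial_is_prime(n):
    """Slow but simple trial-division primality check."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def next_prime(n):
    """Smallest prime strictly greater than n."""
    cand = n + 1 if n >= 2 else 2
    if cand > 2 and cand % 2 == 0:   # skip even candidates above 2
        cand += 1
    while not trial_is_prime(cand):
        cand += 2 if cand > 2 else 1
    return cand
```

Because prime gaps are small relative to n, the loop terminates quickly in practice; the module's real advantage is the speed of each individual primality check.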
$n = prev_prime($n);
Returns the prime preceding the input number (i.e. the largest prime that is strictly less than the input). 0 is returned if the input is 2 or lower.

Use in threads is allowed.
Math::BigInt objects may be used for the range.
For some uses, an iterator ("prime_iterator", "prime_iterator_object") or a tied array (Math::Prime::Util::PrimeArray) may be more convenient. Iterator objects can be passed to functions, and allow early loop exits without exceptions. Here is a clumsy "forprimes" exception example:
use bigint;
eval { forprimes { die "$_\n" if $_ % 123 == 1 } 2**100, 2**101 };
my $n = 0+$@;
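The iterator idea translates directly to any language with generators: you simply stop consuming the stream, no exception needed. A sketch in Python (the generator below is an illustrative analogue, not the module's own prime_iterator):

```python
def prime_iterator(start=2):
    """Yield primes >= start, forever."""
    def is_prime(n):
        if n < 2:
            return False
        f = 2
        while f * f <= n:
            if n % f == 0:
                return False
            f += 1
        return True

    n = start
    while True:
        if is_prime(n):
            yield n
        n += 1

it = prime_iterator()
first_five = [next(it) for _ in range(5)]   # [2, 3, 5, 7, 11]
```

An early loop exit is just a `break` (or not calling `next` again); the generator's state is simply discarded.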
With two arguments, prime_count returns the count of primes in the inclusive range (e.g. prime_count(14,16) returns 0).
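Counting primes in a range — e.g. the prime_count(14,16) example above — can be done with a segmented Sieve of Eratosthenes, the strategy the implementation prefers for smaller ranges. A Python sketch (illustrative only; the real code is C with a fast bit count):

```python
import math

def prime_count_range(a, b, segment_size=32768):
    """Count primes in [a, b] using a simple segmented Sieve of Eratosthenes."""
    if b < 2 or b < a:
        return 0
    a = max(a, 2)

    # Base primes up to sqrt(b) via a plain sieve.
    root = math.isqrt(b)
    sieve = bytearray([1]) * (root + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(root) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    base_primes = [i for i in range(2, root + 1) if sieve[i]]

    # Sieve each segment, counting survivors.
    count = 0
    low = a
    while low <= b:
        high = min(low + segment_size - 1, b)
        seg = bytearray([1]) * (high - low + 1)
        for p in base_primes:
            start = max(p * p, ((low + p - 1) // p) * p)
            for m in range(start, high + 1, p):
                seg[m - low] = 0
        count += sum(seg)
        low = high + 1
    return count
```

Only the current segment and the base primes are held in memory, which is why the segmented approach scales to ranges far larger than a flat sieve could.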
The current implementation decides based on the ranges whether to use a segmented sieve with a fast bit count, or the LMO method. The LMO method has complexity approximately O(b^0.7) + O(a^0.7). It does use more memory however. A calculation of Pi(10^14) completes in under 1 minute, Pi(10^15) in under 5 minutes, and Pi(10^16) in under 20 minutes, however using about 500MB of peak memory for the last. In contrast, even primesieve using 12 cores would take over a week on this same computer.

The lower and upper bound functions (prime_count_lower and prime_count_upper) use the Dusart (2010) bounds of
x/logx * (1 + 1/logx + 2.000/log^2x)  <=  Pi(x)
x/logx * (1 + 1/logx + 2.334/log^2x)  >=  Pi(x)
above that range. These bounds do not assume the Riemann Hypothesis. If the configuration option assume_rh has been set (it is off by default), then the Schoenfeld (1976) bounds are used for large values.
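As a sketch, the Dusart (2010) bounds quoted above translate directly into code. Note they are only guaranteed for sufficiently large x, which is one reason the module switches methods by range; this is illustrative Python, not the module's implementation:

```python
import math

def prime_count_bounds(x):
    """Dusart (2010) bounds: x/logx * (1 + 1/logx + c/log^2 x),
    with c = 2.000 for the lower bound and c = 2.334 for the upper."""
    L = math.log(x)
    base = x / L
    lower = base * (1 + 1 / L + 2.000 / L**2)
    upper = base * (1 + 1 / L + 2.334 / L**2)
    return lower, upper

lo, hi = prime_count_bounds(10**10)
print(lo < 455052511 < hi)  # True: Pi(10^10) = 455,052,511
```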
say "The ten thousandth prime is ", nth_prime(10_000);
Returns the prime that lies in index
n in the array of prime numbers. Put another way, this returns the smallest
p such that
Pi(p) >= n.
For relatively small inputs (below 2 million or so), this does a sieve over a range containing the nth prime, then counts up to the number. This is fairly efficient in time and memory. For larger values, a binary search is performed between the Dusart 2010 bounds using Riemann's R function, then a fast prime counting method is used to calculate the count up to that point, then sieving is done in the typically small difference zone.
While this method is hundreds of times faster than generating primes, and doesn't involve big tables of precomputed values, it still can take a fair amount of time and space for large inputs. Calculating the
10^11th prime takes a bit under 2 seconds, the
10^12th prime takes 10 seconds, and the
10^13th prime (323780508946331) takes 1 minute. Think about whether a bound or approximation would be acceptable, as they can be computed analytically.
If the bigint or bignum module is not in use, this will generate an overflow exception if the number requested would result in a prime that cannot fit in a native type. If bigints are in use, then the calculation will proceed, though it will be exceedingly slow. A later version of Math::Prime::Util::GMP may include this functionality which would help for 32-bit machines.
  my $lower_limit = nth_prime_lower($n);
  my $upper_limit = nth_prime_upper($n);
  # $lower_limit <= nth_prime(n) <= $upper_limit
Returns an analytical upper or lower bound on the Nth prime. These are very fast as they do not need to sieve or search through primes or tables. An exact answer is returned for tiny values of
n. The lower limit uses the Dusart 2010 bound for all
n, while the upper bound uses one of the two Dusart 2010 bounds for
n >= 178974, a Dusart 1999 bound for
n >= 39017, and a simple bound of
n * (logn + 0.6 * loglogn) for small
n.
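The simple bound for small n is easy to evaluate; an illustrative Python sketch (the module returns exact answers for tiny n, where a formula like this can dip below the true prime):

```python
import math

def nth_prime_upper_simple(n):
    """Upper bound n*(log n + 0.6*log log n); needs n >= 3 so that
    log log n is defined and positive."""
    return n * (math.log(n) + 0.6 * math.log(math.log(n)))

print(nth_prime_upper_simple(100))  # ~552.2, and the 100th prime is 541
```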
say "The one trillionth prime is ~ ", nth_prime_approx(10**12);
Returns an approximation to the
nth_prime function, without having to generate any primes. Uses the Cipolla 1902 approximation with two polynomials, plus a correction for small values to reduce the error.
Takes a positive number
n and a base
a as input, and returns 1 if
n is a probable prime to base
a. This is the simple Fermat primality test. Removing primes, base 2 produces the sequence of base-2 Fermat pseudoprimes, OEIS A001567.
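The whole test is one modular exponentiation; an illustrative Python sketch:

```python
def is_pseudoprime(n, a):
    """Fermat probable-prime test: does a^(n-1) == 1 (mod n)?"""
    if n < 2:
        return False
    return pow(a, n - 1, n) == 1

print(is_pseudoprime(7, 2))    # True  (prime)
print(is_pseudoprime(9, 2))    # False (composite, caught)
print(is_pseudoprime(341, 2))  # True  (341 = 11*31, the first base-2 pseudoprime)
```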
  my $maybe_prime = is_strong_pseudoprime($n, 2);
  my $probably_prime = is_strong_pseudoprime($n, 2, 3, 5, 7, 11, 13, 17);
Takes a positive number as input and one or more bases. The bases must be greater than
1. Returns 1 if the input is a strong probable prime to all of the bases, and 0 if not.
If 0 is returned, then the number really is a composite. If 1 is returned, then it is either a prime or a strong pseudoprime to all the given bases. Given enough distinct bases, the chances become very, very strong.
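An illustrative Python sketch of the strong test: write n-1 = d*2^s with d odd, then check a^d and its repeated squarings for each base (bases are assumed to be greater than 1, as stated above):

```python
def is_strong_pseudoprime(n, *bases):
    """Strong probable-prime (Miller-Rabin) test to the given bases.
    Returns False only when n is definitely composite."""
    if n < 3 or n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:       # n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = (x * x) % n
            if x == n - 1:
                break
        else:
            return False    # composite for certain
    return True

print(is_strong_pseudoprime(2047, 2))     # True: 2047 = 23*89 is a strong pseudoprime to base 2
print(is_strong_pseudoprime(2047, 2, 3))  # False: base 3 exposes it
```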
An alias for is_strong_pseudoprime. This name is deprecated.

is_frobenius_underwood_pseudoprime takes a positive number as input, and returns 1 if the input passes the minimal lambda+2 test (see Underwood 2012 "Quadratic Compositeness Tests"), where (L+2)^(n-1) = 5 + 2x mod (n, L^2 - Lx + 1). The computational cost for this is between the cost of 2 and 3 strong pseudoprime tests. There are no known counterexamples, but this is not a well studied test.

is_aks_prime takes a positive number as input, and returns 1 if the input passes the Agrawal-Kayal-Saxena (AKS) primality test. This is a deterministic unconditional primality test which runs in polynomial time for general input.
This function is only included for completeness and as an example. The Perl implementation is fast compared to the only other Perl implementation available (in Math::Primality), and the implementation in Math::Prime::Util::GMP compares favorably to others in the literature. However AKS in general is far too slow to be of practical use. R.P. Brent, 2010: "AKS is not a practical algorithm. ECPP is much faster."

The Lucas sequence parameters must satisfy:

  - D = P*P - 4*Q != 0
  - P > 0
  - P < n
  - Q < n
  - k >= 0
  - n >= 2
  say "$n is square free" if moebius($n) != 0;
  $sum += moebius($_) for (1..200);
  say "Mertens(200) = $sum";
Returns μ(n), the Möbius function (also called the Moebius, Mobius, or MoebiusMu function) for a non-negative integer input.

The Mertens-function implementation is Deléglise and Rivat (1996) algorithm 4.1, which is a segmented version of Lioen and van de Lune (1994) algorithm 3.2. Timings for computing the Mertens sum by summing moebius directly (time, peak memory):

   74.8s  7000MB  List::Util::sum(moebius(1,100_000_000))
   88.5s     0MB  $sum += moebius($_) for 1..100_000_000  [-nobigint]
  181.8s     0MB  $sum += moebius($_) for 1..100_000_000
The summation of individual terms via factoring is quite expensive in time, though uses O(1) space. This function will generate the equivalent output via a sieving method, which will use some more memory, but be much faster. that counts the number of positive integers less than or equal to
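For a single value, μ(n) follows directly from the factorization; an illustrative Python sketch (the range form described above uses a sieve instead):

```python
def moebius(n):
    """mu(n): 0 if a squared prime divides n,
    otherwise (-1)^(number of distinct prime factors)."""
    if n < 1:
        raise ValueError("this sketch handles positive n only")
    k, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0      # squared prime factor
            k += 1
        else:
            p += 1
    if n > 1:
        k += 1                # one remaining large prime factor
    return -1 if k % 2 else 1

print([moebius(i) for i in range(1, 11)])  # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```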
n that are relatively prime to
n. Given the definition used,
euler_phi will return 0 for all
n < 1. This follows the logic used by SAGE. Mathematica/WolframAlpha also returns 0 for input 0, but returns
euler_phi(-n) for
n < 0.
If called with two arguments, they define a range
low to
high, and the function returns an array with the totient of every n from low to high inclusive. Large values of high will result in a lot of memory use.

Returns the Dedekind psi function, where
psi(n) = J(2,n) / J(1,n).
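All three totients are multiplicative and follow from the factorization; an illustrative Python sketch of the definitions (not the module's implementation):

```python
def _factor_exponents(n):
    """Prime factorization {p: e} by trial division (fine for small n)."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def jordan_totient(k, n):
    """J_k(n) = n^k * prod over p|n of (1 - p^-k); J_1 is Euler's phi."""
    if n < 1:
        return 0              # matches the convention described above
    r = n ** k
    for p in _factor_exponents(n):
        r = r // p ** k * (p ** k - 1)
    return r

def euler_phi(n):
    return jordan_totient(1, n)

def dedekind_psi(n):
    return jordan_totient(2, n) // jordan_totient(1, n)   # psi = J(2,n)/J(1,n)

print(euler_phi(10), jordan_totient(2, 10), dedekind_psi(10))  # 4 72 18
```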
say "exp(lambda($_)) = ", exp_mangoldt($_) for 1 .. 100;
Returns EXP(Λ(n)), the exponential of the Mangoldt function (also known as von Mangoldt's function) for an integer value.
say chebyshev_theta(10000);
Returns θ(n), the first Chebyshev function for a non-negative integer input. This is the sum of the logarithm of each prime where
p <= n. An alternate computation is as the logarithm of n primorial. Hence these functions:
  use List::Util qw/sum/;
  use Math::BigFloat;
  sub c1a { 0+sum( map { log($_) } @{primes(shift)} ) }
  sub c1b { Math::BigFloat->new(primorial(shift))->blog }
yield similar results, albeit slower and using more memory.
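An illustrative Python sketch of the definition: sieve the primes up to n and sum their natural logarithms:

```python
import math

def chebyshev_theta(n):
    """theta(n) = sum of log(p) over primes p <= n."""
    if n < 2:
        return 0.0
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(math.log(p) for p in range(2, n + 1) if sieve[p])

print(chebyshev_theta(10) - math.log(210))  # ~0.0, since theta(10) = log(2*3*5*7)
```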
say chebyshev_psi(10000);
Returns ψ(n), the second Chebyshev function for a non-negative integer input. This is the sum of the logarithm of each prime where
p^k <= n for an integer k. An alternate computation is as the summatory Mangoldt function. Another alternate computation is as the logarithm of LCM(1,2,...,n). Hence these functions:
  use List::Util qw/sum/;
  use Math::BigFloat;
  sub c2a { 0+sum( map { log(exp_mangoldt($_)) } 1 .. shift ) }
  sub c2b { Math::BigFloat->new(consecutive_integer_lcm(shift))->blog }
yield similar results, albeit slower and using more memory.
say "Sum of divisors of $n:", divisor_sum( $n );
This function takes a positive integer as input and returns the sum of the k-th powers of the divisors of the input, including 1 and itself. If the second argument (
k) is omitted it is assumed to be 1. This is known as the sigma function (see Hardy and Wright section 16.7, or OEIS A000203). The API is identical to Pari/GP's
sigma function.
The second argument can be a code reference, which is called for each divisor and the results are summed. This allows computation of other functions, but will be less efficient than using the numeric second argument.
An example of the 5th Jordan totient (OEIS A059378):
divisor_sum( $n, sub { my $d=shift; $d**5 * moebius($n/$d); } );
though we have a function "jordan_totient" which is more efficient.
This function is useful for calculating things like aliquot sums, abundant numbers, perfect numbers, etc.
For numeric second arguments (sigma computations), the result will be a bigint if necessary. For the code reference case, the user must take care to return bigints if overflow will be a concern.
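For the numeric case, an illustrative Python sketch that walks divisor pairs up to sqrt(n):

```python
def divisor_sum(n, k=1):
    """sigma_k(n): sum of d**k over all divisors d of n (1 and n included)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d ** k
            if d != n // d:            # avoid double-counting a square root
                total += (n // d) ** k
        d += 1
    return total

print(divisor_sum(28), divisor_sum(4, 0))  # 56 3  (28 is perfect: sigma(28) = 2*28)
```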
$order = znorder(2, next_prime(10**19)-6);
Given two positive integers
a and
n, returns the multiplicative order of
a modulo n. This is the smallest positive integer
k such that
a^k ≡ 1 mod n. Returns 1 if
a = 1. Return undef if
a = 0 or if
a and
n are not coprime, since no value will result in 1 mod n.

An example of supplying an alternate random number generator via the irand configuration:

  # Mersenne Twister. Very fast, decent RNG, auto seeding.
  use Math::Random::MT::Auto;
  prime_set_config(irand => sub { Math::Random::MT::Auto::irand() & 0xFFFFFFFF });
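An illustrative Python sketch of znorder as a direct search (the module can do better by factoring the group order, but the definition is exactly this loop):

```python
from math import gcd

def znorder(a, n):
    """Smallest k >= 1 with a^k == 1 (mod n); None (the module's undef)
    when a and n are not coprime."""
    if n == 1:
        return 1
    a %= n
    if a == 0 or gcd(a, n) != 1:
        return None
    k, x = 1, a
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

print(znorder(2, 15))  # 4, since 2^4 = 16 == 1 (mod 15)
print(znorder(3, 9))   # None: not coprime
```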
say "My 4-digit prime number is: ", random_ndigit_prime(4);
Selects a random n-digit prime, where the input is an integer number of digits between 1 and the maximum native type (10 for 32-bit, 20 for 64-bit, 10000 if bigint is active). If the -nobigint tag was used, then numbers larger than the threshold will be flagged as an error, and numbers on the threshold will be restricted to native numbers. For better performance with large bit sizes, install Math::Prime::Util::GMP.
my $bigprime = random_nbit_prime(512);
Selects a random n-bit prime, where the input is an integer number of bits between 2 and the maximum representable bits (32, 64, or 100000 for native 32-bit, native 64-bit, and bigint respectively). A prime with the nth bit set will be uniformly selected, with randomness supplied via calls to the
irand function as described above.
The irand function is used for randomness, so all the discussion in "random_prime" about that applies here. For better performance with large bit sizes, install Math::Prime::Util::GMP.
prime_set_config( assume_rh => 1 );
Allows setting of some parameters. Currently the only parameters are:

  irand  Takes a code ref to an irand function returning a uniform number
         between 0 and 2**32-1. This will be used for all random number
         generation in the module.
my @factors = factor(3_369_738_766_071_892_021); # returns (204518747,16476429743)
Produces the prime factors of a positive number input, in numerical order. The special cases of
n = 0 and
n = 1 will return
n, which guarantees multiplying the factors together will always result in the input value, though those are the only cases where the returned factors are not prime.
In scalar context, returns the number of prime factors with multiplicity (OEIS A001222). This is the expected result, as if we put the result into an array and then took the scalar result.
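An illustrative Python sketch of a small factoring cascade in the same spirit: small trial division, a cheap primality check, and Pollard's Rho for whatever survives. The names mirror the Perl API, but this is not the module's implementation:

```python
import math
import random

def _pollard_rho(n):
    """Find one nontrivial factor of an odd composite n with no tiny factors."""
    while True:
        c = random.randrange(1, n)
        x = y = 2
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # retry with a new c on failure
            return d

def factor(n):
    """Prime factors of n in numerical order, with multiplicity."""
    factors, stack = [], [n]
    while stack:
        m = stack.pop()
        if m < 2:
            continue
        for p in (2, 3, 5, 7, 11, 13):   # small trial division first
            while m % p == 0:
                factors.append(p)
                m //= p
        if m == 1:
            continue
        # cheap primality check for this sketch (exits early on a factor)
        if all(m % d for d in range(2, math.isqrt(m) + 1)):
            factors.append(m)
        else:
            d = _pollard_rho(m)
            stack += [d, m // d]
    return sorted(factors)

print(factor(600851475143))  # [71, 839, 1471, 6857]
```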
The current algorithm for non-bigints is a sequence of small trial division, a few rounds of Pollard's Rho, SQUFOF, Pollard's p-1, Hart's OLF, a long run of Pollard's Rho, and finally trial division if anything survives. This process is repeated for each non-prime factor. In practice, it is very rare to require more than the first Rho + SQUFOF to find a factor, and I have not seen anything go to the last step.

  @divisors = all_factors(30);  # returns (2, 3, 5, 6, 10, 15)
Produces all the divisors of a positive number input. 1 and the input number are excluded (which implies that an empty list is returned for any prime number input). The divisors are a power set of multiplications of the prime factors, returned as a uniqued sorted list.
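An illustrative Python sketch matching that description (1 and n excluded, so primes yield an empty list):

```python
def all_factors(n):
    """Divisors of n other than 1 and n itself, sorted ascending."""
    divs = set()
    d = 2
    while d * d <= n:
        if n % d == 0:
            divs.add(d)
            divs.add(n // d)
        d += 1
    return sorted(divs)

print(all_factors(30))  # [2, 3, 5, 6, 10, 15]
print(all_factors(13))  # []
```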
my @factors = trial_factor($n);
Produces the prime factors of a positive number input. The factors will be in numerical order. The special cases of
n = 0 and
n = 1 will return
n, while with all other inputs the factors are guaranteed to be prime. For large inputs this will be very slow.

In the long run it has the same advantages and disadvantages as Fermat's method.
  my @factors = squfof_factor($n);
  my @factors = rsqufof_factor($n);  # racing multiplier version
If Math::MPFR is not installed, then results are calculated using either Borwein (1991) algorithm 2, or the basic series. Full input accuracy is attempted, but there are defects in Math::BigFloat with high accuracy computations that make this difficult. It is also very slow. I highly recommend installing Math::MPFR for BigFloat computations. Accuracy without MPFR should be 35 digits.
Print strong pseudoprimes to base 17 up to 10M:
  # Similar to A001262's isStrongPsp function, but over 4x faster
  perl -MMath::Prime::Util=:all -E 'my $n=3; while($n <= 10000000) { print "$n " if is_strong_pseudoprime($n,$base) && !is_prime($n); $n+=2; } BEGIN {$|=1; $base=17}'
or, slightly faster, use forprimes and loop over the odds between primes:
perl -MMath::Prime::Util=:all -E '$|=1; $base=17; my $prev = 1; forprimes { $prev += 2; while ($prev < $_) { print "$prev " if is_strong_pseudoprime($prev,$base); $prev += 2; } } 3,10000000'
Print some primes above 64-bit range:
  perl -MMath::Prime::Util=:all -Mbigint -E 'my $start=100000000000000000000; say join "\n", @{primes($start,$start+1000)}'
  # Similar code using Pari:
  # perl -MMath::Pari=:int,PARI,nextprime -E 'my $start = PARI "100000000000000000000"; my $end = $start+1000; my $p=nextprime($start); while ($p <= $end) { say $p; $p = nextprime($p+1); }'
Project Euler, problem 3 (Largest prime factor):
  use Math::Prime::Util qw/factor/;
  use bigint;  # Only necessary for 32-bit machines.
  say "", (factor(600851475143))[-1];
Project Euler, problem 7 (10001st prime):
use Math::Prime::Util qw/nth_prime/; say nth_prime(10_001);
Project Euler, problem 10 (summation of primes):
  use Math::Prime::Util qw/primes/;
  my $sum = 0;
  $sum += $_ for @{primes(2_000_000)};
  say $sum;
Project Euler, problem 21 (Amicable numbers):
  use Math::Prime::Util qw/divisor_sum/;
  sub dsum { my $n = shift; divisor_sum($n) - $n; }
  my $sum = 0;
  foreach my $a (1..10000) {
    my $b = dsum($a);
    $sum += $a + $b if $b > $a && dsum($b) == $a;
  }
  say $sum;
Project Euler, problem 41 (Pandigital prime), brute force command line:
perl -MMath::Prime::Util=:all -E 'my @p = grep { /1/&&/2/&&/3/&&/4/&&/5/&&/6/&&/7/} @{primes(1000000,9999999)}; say $p[-1];'
Project Euler, problem 47 (Distinct primes factors):
  use Math::Prime::Util qw/pn_primorial factor/;
  use List::MoreUtils qw/distinct/;
  sub nfactors { scalar distinct factor(shift); }
  my $n = pn_primorial(4);   # Start with the first 4-factor number
  $n++ while (nfactors($n) != 4 || nfactors($n+1) != 4 || nfactors($n+2) != 4 || nfactors($n+3) != 4);
  say $n;
Project Euler, problem 69, stupid brute force solution (about 2 seconds):
  use Math::Prime::Util qw/euler_phi/;
  my ($n, $max) = (0,0);
  do { my $ndivphi = $_ / euler_phi($_); ($n, $max) = ($_, $ndivphi) if $ndivphi > $max; } for 1..1000000;
  say "$n $max";

Project Euler, problem 187 (Semiprimes), brute force, ~4 minutes:
  use Math::Prime::Util qw/factor -nobigint/;
  my $nsemis = 0;
  do { $nsemis++ if scalar factor($_) == 2; } for 1 .. int(10**8)-1;
  say $nsemis;
Here is the best way for PE187. Under 30 milliseconds from the command line:
  use Math::Prime::Util qw/primes prime_count/;
  use List::Util qw/sum/;
  my $limit = shift || int(10**8);
  my @primes = @{primes(int(sqrt($limit)))};
  say sum( map { prime_count(int(($limit-1)/$primes[$_-1])) - $_ + 1 } 1 .. scalar @primes );
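The same identity, sum over primes p_i <= sqrt(limit-1) of Pi((limit-1)/p_i) - i + 1, in a self-contained Python sketch with its own small sieve. This is fine for modest limits; the one-liner above relies on the module's fast prime_count to handle 10^8 quickly:

```python
def count_semiprimes(limit):
    """Numbers below `limit` with exactly two prime factors (with
    multiplicity), via the prime-counting identity."""
    n = limit - 1
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    pi, c = [0] * (n + 1), 0          # pi[k] = number of primes <= k
    for k in range(n + 1):
        c += sieve[k]
        pi[k] = c
    total, i = 0, 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            i += 1
            total += pi[n // p] - i + 1
    return total

print(count_semiprimes(30))  # 10: the semiprimes 4,6,9,10,14,15,21,22,25,26
```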
Above
2^64, "is_prob_prime" performs an extra-strong BPSW test which is fast (a little less than the time to perform 3 Miller-Rabin tests) and has no known counterexamples. Some tools instead use Miller-Rabin tests with k bases, which typically gives an unacceptable error bound: the usual probability estimate is like a game open to anyone who has ever played, and who can keep playing as long as they like. It's only valid if the players are completely oblivious to what is happening.
Perl versions earlier than 5.8.0 have a rather broken 64-bit implementation, in that the values are accessed as doubles. Hence any value larger than
~ 2^49 will start losing bottom bits. Because MPU does input validation and bigint conversion, there is about 20 microseconds of additional overhead, making MPXS a little faster for tiny inputs, but once over 700k MPU is faster for all values. MPU offers other prime-generation variants in addition to Maurer's algorithm. MPU does not have the critical bug RT81858, though it does not have support for returning a generator.
Math::Factor::XS calculates prime factors and factors, which correspond to the "factor" and "all_factors" functions of MPU. These functions do not support bigints. Both are implemented with trial division, meaning they are very fast for really small values, but quickly become unusably slow (factoring 19 digit semiprimes is over 700 times slower). It has additional functions
count_prime_factors (possible in MPU using
scalar factor($n)) and matches, which has no MPU equivalent.

Math::Primality is the only other module supporting the BPSW and AKS tests; MPU is substantially faster for these. Math::Primality also installs a
primes.pl program, but it has much less functionality than the one included with MPU.
Math::NumSeq is more a related module rather than one with direct functionality. It does however offer a way to get similar results such as primes, twin primes, Sophie-Germain primes, lucky primes, moebius, divisor count, factor count, Euler totient, primorials, etc. Math::NumSeq is mainly set up for accessing these values in order, rather than for arbitrary values, though some sequences support that. The primary advantage I see is the uniform access mechanism for a lot of sequences. For those methods that overlap, MPU is usually much faster. Importantly, most of the sequences in Math::NumSeq are limited to 32-bit indices.

Math::Pari supports a lot of similar functionality. Some of the highlights:
isprime
Similar to MPU's "is_prob_prime" or "is_prime" functions. MPU is deterministic for native integers, and uses a strong BPSW test for bigints (with a quick primality proof tried as well). The default version of Pari used by Math::Pari (2.1.7) uses 10 random M-R bases, which is a probable prime test usually considered much weaker than the BPSW test used by MPU and newer versions of Pari (though better than a fixed set of bases). Calling as
isprime($n,1) performs a Pocklington-Lehmer
n-1 proof. This is comparable in performance to MPU:GMP's
n-1 proof implementation, and is reasonably fast for about 70 digits, but much slower than ECPP.
If Math::Pari is compiled with version 2.3.5 of Pari (this is not easy to do on many platforms), then the algorithms are completely different. The
isprime function now acts like "is_provable_prime" -- an APRCL proof is performed, which is quite efficient though requires using a larger stack for numbers of 300+ digits. It is somewhat comparable in speed to MPU:GMP's ECPP proof method, but without a certificate. Using the
ispseudoprime function will perform a BPSW test similar to "is_prob_prime". hundreds" though with a different return (I find the result value quite inconvenient to work with, but others may like its vector of factor/exponent format). Slower than MPU for all 64-bit inputs on an x86_64 platform, it may be faster for large values on other platforms. With the newer Math::Prime::Util::GMP releases, bigint factoring is slightly faster in MPU.
eulerphi
Similar to MPU's "euler_phi". MPU is 2-20x faster for native integers. There is also support for a range, which can be much more efficient. Without Math::Prime::Util::GMP installed, MPU is very slow with bigints. With it installed, it is about 2x slower than Math::Pari.
moebius
Similar to MPU's "moebius". Comparisons are similar to
eulerphi.
sigma
Similar to MPU's "divisor_sum". MPU is ~10x faster for native integers and about 2x slower for bigints.
eint1
Similar to MPU's "ExponentialIntegral".
zeta
A more feature-rich version of MPU's "RiemannZeta" function (supports negative and complex inputs).
Overall, Math::Pari supports a huge variety of functionality and has a sophisticated and mature code base behind it (noting that the default version of Pari used is about 10 years old now). For native integers, using Math::Pari will often be slower, but bigints are often faster.
PFGW is the fastest primality testing software I'm aware of once past 2000 or so digits, has fast trial division, and is especially fast on many special forms. It does not have a BPSW test however, and there are quite a few counterexamples for a given base of its PRP test, so for primality testing it is most useful for fast filtering of very large candidates. A test such as the BPSW test in this module is then recommended.
Primo is the best method for open source primality proving for inputs over 1000 digits. Primo also does well below that size, but other good alternatives are WraithX's APR-CL code.

For sieving, primesieve is the fastest publically available code I am aware of. It is much faster than any of the alternatives, and even more so when run multi-threaded. Tomás Oliveira e Silva's private code may be faster for very large values, but is not available. Another sieve in this comparison is about 10% faster than the simple SoE in this module, slower than Pari and yafu's SoE implementations, and 2x slower than primesieve.
Up to a limit, extensive use of tables plus a good segmented sieve will produce the fastest results, but the number of tables needed to maintain good performance grows exponentially. The code in this module approaches the best publically available results, with the notable exception of Christian Bau's L-M-O implementation. The author of primesieve is also working on an L-M-O implementation. None of these are state of the art compared to private research methods.
Counting the primes to
10^10 (10 billion), with time in seconds. Pi(10^10) = 455,052,511. The numbers below are for sieving. Calculating
Pi(10^10) takes 0.064 seconds using the Lehmer algorithm in version 0.12.
  External C programs in C / C++:
     1.9  primesieve
     3.6    forced to use only a single thread
     2.2  yafu 1.31
     3.8  primegen (optimized Sieve of Atkin, conf-word 8192)
     5.6  Tomás Oliveira e Silva's unoptimized segmented sieve v2 (Sep 2010)
     6.7  Achim Flammenkamp's prime_sieve (32k segments)
     9.3  (mod 2310, single thread)
    11.2  Tomás Oliveira e Silva's unoptimized segmented sieve v1 (May 2003)
    17.0  Pari 2.3.5 (primepi)

  Small portable functions suitable for plugging into XS:
     4.1  My segmented SoE used in this module (with unrolled inner loop)
    15.6  My Sieve of Eratosthenes using a mod-30 wheel
    17.2  A slightly modified version of Terje Mathisen's mod-30 sieve
    35.5  Basic Sieve of Eratosthenes on odd numbers
    33.4  Sieve of Atkin, from Praxis (not correct)
    72.8  Sieve of Atkin, 10-minute fixup of basic algorithm
    91.6  Sieve of Atkin, Wikipedia-like
Perl modules, counting the primes to
800_000_000 (800 million):
Time (s) Module Version Notes --------- -------------------------- ------- ----------- ~11000 Math::Primality 0.
is_prime: my impressions for various sized inputs:
  Module                  1-10 digits  10-20 digits   BigInts
  ----------------------- -----------  -------------  --------------------
  Math::Prime::Util       Very fast    Very fast      Slow / Very Fast (1)
  Math::Prime::XS         Very fast    Very slow (2)  --
  Math::Prime::FastSieve  Very fast    N/A (3)        --
  Math::Primality         Very slow    Very slow      Fast
  Math::Pari              Slow         OK             OK / Fast (4)

  (1) Without / With Math::Prime::Util::GMP installed.
  (2) Trial division only. Very fast if every factor is tiny.
  (3) Too much memory to hold the sieve (11dig = 6GB, 12dig = ~50GB)
  (4) By default Math::Pari installs Pari 2.1.7, which uses 10 M-R tests
      for isprime and is not fast. See notes below for 2.3.5.
The differences are in the implementations:
first does simple divisibility tests to quickly recognize composites, then looks in the sieve for a fast bit lookup if possible (default up to 30,000 but can be expanded via
prime_precalc). Next, for relatively small inputs, a deterministic set of Miller-Rabin tests are used, while for larger inputs a strong BPSW test is performed. For native integers, this is faster than any of the other modules. With Bigints, you need the Math::Prime::Util::GMP module installed to get good performance. With that installed, it is about 2x faster than Math::Primality and 10x faster than Math::Pari (default 2.1.7).
Math::Prime::XS does trial divisions, which is wonderful if the input has a small factor (or is small itself). If given a large prime it can be tens of thousands of times slower than MPU. It does not support bigints.
Math::Prime::FastSieve only works in a sieved range, which is really fast if you can do it (M::P::U will do the same if you call
prime_precalc). Larger inputs just need too much time and memory for the sieve.
Math::Primality uses GMP (in Perl) for all work. Under ~32-bits it uses 2 or 3 MR tests, while above 4759123141 it performs a BPSW test. This is great for bigints over 2^64, but it is significantly slower than native precision tests. With 64-bit numbers it is generally an order of magnitude or more slower than any of the others. Once bigints are being used, its performance is quite good. It is faster than this module unless Math::Prime::Util::GMP has been installed, in which case Math::Prime::Util is faster.
Math::Pari has some very effective code, but it has some overhead to get to it from Perl. That means for small numbers it is relatively slow: an order of magnitude slower than M::P::XS and M::P::Util (though arguably this is only important for benchmarking since "slow" is ~2 microseconds). Large numbers transition over to smarter tests so don't slow down much. With the default Pari version,
isprime will do M-R tests for 10 randomly chosen bases, but can perform a Pocklington-Lehmer proof if requested using
isprime(x,1). Both could fail to identify a composite. If pari 2.3.5 is used instead (this requires hand-building the Math::Pari module) then the options are quite different.
ispseudoprime(x,0) performs a strong BPSW test, while
isprime now performs a primality proof using a fast implementation of the APRCL method. While the APRCL method is very fast it is still much, much slower than a BPSW probable prime test for large inputs. with large factors will be the extreme end). including in Pari. Both are pretty fast to about 60 digits, and work reasonably well to 80 or so before starting to take over circa-2009 workstation, with Math::BigInt::GMP, Math::Prime::Util::GMP, and Math::Random::ISAAC::XS installed.
   bits   random   +testing   rand_prov   Maurer   CPMaurer
  -----  -------  ---------  ----------  -------  --------
     64   0.0001  +0.000008     0.0002    0.0001     0.022
    128   0.0020  +0.00023      0.011     0.063      0.057
    256   0.0034  +0.0004       0.058     0.13       0.16
    512   0.0097  +0.0012       0.28      0.28       0.41
   1024   0.060   +0.0060       0.65      0.65       2.19
   2048   0.57    +0.039        4.8       4.8       10.99
   4096   6.24    +0.25        31.9      31.9       79.71
   8192  58.6     +1.61       234.0     234.0      947.3

  random    = random_nbit_prime (results pass BPSW)
  random+   = additional time for 3 M-R and a frobenius test
  rand_prov = random_proven_prime
  maurer    = random_maurer

Below 512 bits, using "is_provable_prime"("random_nbit_prime") is typically faster than Maurer's algorithm, but becomes quite slow as the bit size increases. This leaves the decision of the exact method of proving the result to the user.
Tomás Oliveira e Silva has released the source for a very fast segmented sieve. The current implementation does not use these ideas. Future versions might.

The old SQUFOF implementation, still included in the code, is my modifications to Ben Buhrow's modifications to Bob Silverman's code.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~danaj/Math-Prime-Util-0.32/lib/Math/Prime/Util.pm | CC-MAIN-2014-15 | refinedweb | 5,453 | 63.49 |
I have a scenario where I add a few of the same component (a container to hold a object) which is derived from mx Panel to a HDividedBox. The layout is set to "absolute".
public class ObjectContainer extends Panel {
    public function ObjectContainer() {
        super();
        this.layout = "absolute";
        // rest of the code
    }
}
Now the HDividedBox nicely displays the containers with a divider in between each container.
In the container's creationComplete event, I add an image which I want to basically work as a button to display a context menu. I add it to the top right corner. So I add the image, and add an event listener for the click event:
myButton = new Image();
myButton.source = myDownImageClass;
myButton.addEventListener(MouseEvent.CLICK, showObjectContainerMethods);
myButton.right = 5;
myButton.top = 5;
Now this works fine too.
The issue comes when an object is added to the container which is bigger than the container. The image (button) gets removed from the stage and then added back. (Confirmed it by listening to `ADDED_TO_STAGE` and `REMOVED_FROM_STAGE` events). The image visually does not appear to be removed and added back though.
BUT, the event listener won't fire anymore after the image is added back! What could be the cause, and are there any potential fixes?
I tried adding the event listener back in the ADDED_TO_STAGE handler, but it still won't work.
I don't know what the cause could be, but you might want to add your listener to the container of the image. If it doesn't have one, you could wrap it with a group and add your listener there. I only suggest this, because I have never had a problem with "disappearing listeners" when using groups and other containers. | https://forums.adobe.com/thread/831875 | CC-MAIN-2018-30 | refinedweb | 276 | 66.33 |
Path: senator-bedfellow.mit.edu!bloom-beacon.mit.edu!tribune.meitca.com!uunet!in2.uu.net!144.212.100.12!news.mathworks.com!iagnet.net!newsfeed.internetmci.com!164.67.42.145!nntp.info.ucla.edu!134.87.113.1!news.bc.net!torn!kone!news.ccs.queensu.ca!fletcher
From: fletcher@democracy.queensu.ca (Alex)
Newsgroups: rec.games.mud.diku,rec.games.mud.announce,rec.answers,news.answers
Subject: FAQ: [diku] rec.games.mud.diku FAQ
Supersedes: <Diku_872273982@democracy.queensu.ca>
Followup-To: poster
Date: 16 Sep 1997 19:44:57 GMT
Organization: Queen's University, Kingston
Lines: 960
Approved: rgm-announce-request@theurgy.digex.net,news-answers-request@mit.edu
Message-ID: <Diku_874442702@democracy.queensu.ca>
NNTP-Posting-Host: democracy.queensu.ca
Summary: This is the official FAQ for rec.games.mud.diku as well
as being the official FAQ for DikuMuds in general. It should
be read by anyone who wishes to post to the rec.games.mud.diku
newsgroup.
Keywords: diku faq periodic
Originator: fletcher@democracy.queensu.ca
Xref: senator-bedfellow.mit.edu rec.games.mud.diku:53085 rec.games.mud.announce:3020 rec.answers:33965 news.answers:112407
REC.GAMES.MUD.DIKU FAQ
Version 3.13
A note on FTP sites:
If you discover that a site is no longer valid, or you have a
new site to submit, please let me know. Thank you very much.
_________________________________________________________________
Recent Changes
* Added a basic list of DikuMud derivatives that are freely
available.
* Added a question about lag. Specifically a definition.
* Added a section on porting to Linux.
* Addition to creditation questions.
* Updates and additions to the resource section.
_________________________________________________________________
Credits
* Originally authored and compiled by Frederick Myers [reni]
<mondays@bsu-cs.bsu.edu>
* Original HTML conversion by Ryan Watkins [VampLestat]
<vamp@csulb.edu>
* Recent updates, upkeep, HTML by Alex Fletcher [Furry]
<fletcher@democracy.queensu.ca>
The following people have lent a hand with contributions/comments for
this document...
* Sebastan Hammer - Donated notes on DikuII and on DikuMud history.
* Adam Coggins - Spelling and grammar checking.
* Dean Gaudet - Provided a list of common stock Diku bugs and
init_socket patch.
* Furey at Merc Industries - Provided some technical info for
starting a mud.
* Michael Brundage - Provided the noecho patch and some other bug
fixes.
* Nick Borko - VMS TinTin CPU fix.
* Derek J. Middleton and Russel Schultz - provided basic
instructions to port DikuMud Gamma 0.0 to Linux.
* Others... Jeffery Bennett, Bombman, Dan Brumleve, Mort, Sven
Nielsen, Nino Ruffini, Derek Snider, and Naved Surve.
Please let me know if I have missed anyone...
_________________________________________________________________
Overview
This document will be posted approximately bi-weekly to
rec.games.mud.diku.
Approximate size of this document: 40k
This file is the FAQ for the rec.games.mud.diku newsgroup as well as a
general game FAQ for DikuMud games. This FAQ is divided up into three
different sections:
* Introduction (general information and background)
* Implementor / Building (questions concerning starting a mud,
building areas, etc)
* Resources (well known ftp sites for various DikuMud utilities,
patches, and sources).
More information on muds can be received from the
rec.games.mud.announce General Three Part FAQ, online within whichever
mud you decided to play, and from the various web and ftp sites
located at the end of this document.
All information is correct to the best of my knowledge. The author
takes no responsibility for any inaccuracies contained in this
document. Please let me know of any corrections to make to this
document.
This document may be distributed freely. Any use of any of the
contents here-in must be credited and the author should be notified.
In light of several publishing companies 'stealing faqs', BY NO MEANS
IS THIS DOCUMENT TO BE USED FOR 'FOR-PROFIT' GAINS WITHOUT THE
EXPLICIT WRITTEN CONSENT OF THE AUTHOR. THIS INCLUDES USING
THIS DOCUMENT ON "FAQ CD-ROMS".
Patches Note: If you decide to use one of the patches in this FAQ,
please take the time to put the authors (Dean Gaudet and Jeffery Stine
of ArcticMud for the socket patch and Michael Brundage for the noecho
patch) in your lib/credits file. These three individuals have donated
their work for the better of others, so please give credit where
credit is due.
_________________________________________________________________
Questions
Introduction
"What is a DikuMud?"
"Where did this DikuMud come from?"
"Where and how do I connect to a DikuMud?"
"Where can I find out where these muds are located?"
"Ok, I'm connected, what do I do now?"
"What is a client program, or TinTin?"
"Is there anything I can do about the enormous CPU usage of VMS TinTin?"
"Ok, I have a character, now what do I do?"
"What are hit points, mana, and movement?"
"What are some other things I should know?"
"What is a crash?"
"What is lag?"
"What is some of the slang or jargon I hear on these muds?"
"What about this newsgroup, rec.games.mud.diku?"
"What is DikuII, and when can I expect to see it?"
"What other derivatives are available then?"
"Where can I find so-and-so?"
Implementors / Building
"I really like DikuMuds, I want to start my own!"
"What is the difference between Circle, Merc and Silly?"
"Well, I do not want to run my own mud, but I would like to create an
area."
"Ok, I think I can start my own, but I hear there are bugs with the
stock code, what are these?"
"Are there any RFC's of interest?"
"What runs on Linux?"
"How can I make the original DikuMud run on Linux?"
"Are there any books of interest to admins?"
"Are there any muds available with online creation?"
"When is Circle 3.0 going to be released?"
"Are there any mailing lists for administrators?"
"Do I need to follow the license agreement?"
"Should I credit my area authors too?"
Resources
Web Resources
FTP Resources
_________________________________________________________________
Introduction
"What is a DikuMud?"
A DikuMud is a specific species of one of the fastest growing
forms of computer games, the mud. For more information on
muds in general, please consult the general mud FAQ posted on
rec.games.mud.announce.
DikuMuds are highly influenced by the AD&D format for
adventuring. Though DikuMuds are not an exact duplicate of
AD&D, both share many common qualities, enough so that a person
who is familiar with AD&D will feel quite at home in the
DikuMud world. But by no means is AD&D experience required to
be successful.
While some muds are based on pure social interaction and some
based on pure fighting, DikuMuds have evolved into an
intelligent compromise between the two.
"Where did this DikuMud come from?"
DikuMud was originally developed by Katja Nyboe, Tom Madsen,
Hans Henrik Staerfeldt, Michael Seifert, and Sebastian Hammer.
A small bit of background of DikuMud, according to Sebastian
Hammer:
"The game originated at the Department of Computer Science at the
University of Copenhagen (in Danish: Datalogisk Institut ved
Københavns Universitet; or, amongst friends: DIKU). The foundations
of the code were laid out in March of 1990. Our background
(Mud-wise) was primarily Abermud (LpMud was just emerging at the
time), and our object was to make a better AberMud. We wanted to
make it fast, compact and CPU-efficient. We wanted to allow more
than the 18 (or so) players-at-a-time that AberMuds permitted in
those days, and we wanted a bigger world, so that players could
truly get lost in there (back then, 500 room AberMuds seemed the
norm). Also, we wanted to make it more interesting for players to
cooperate, rather than just run madly around in search of beasts to
kill.
I guess we reached some of our goals, but far from all of them.
Currently, we are working on DikuMud II, which is still under
debugging. We have ceased to support the original code in any way,
since so many "improved" versions have started to circulate."
"Where and how do I connect to a DikuMud?"
DikuMuds are located on different computers throughout the
world. These computers can be at universities, companies, or
even be personal workstations. To connect to these games, you
need two things -
1. Access to telnet.
2. The mud's host name or IP number and the protocol port that
the particular mud is running on.
Telnet is, to put it very simply, like a telephone. From the
computer, you 'dial in' where you want to connect to and you
are in (assuming the game is up).
Since many people play from a Unix based platform, I will use
that for examples on how to connect. Asgard.cs.bsu.edu 6969
will be the mud we will use as an example to connect to.
(147.226.112.94 was the IP number of that same machine).
From a unix based machine - (The % is the prompt)
% telnet asgard.cs.bsu.edu 6969
or
% telnet 147.226.112.94 6969
or
% telnet <enter>
telnet> open asgard.cs.bsu.edu 6969
"Where can I find out where these muds are located?"
Word of mouth is a good way. So are the newsgroups. Different
muds are always being mentioned in rec.games.mud.diku plus a
listing is posted every few weeks in rec.games.mud.announce.
Also, look for the Mud Connector.
"Ok, I'm connected, what do I do now?"
DikuMuds differ a good deal in how a person goes about making a
character, but there are some common similarities between all
of them.
Name
This will be your character's name for the game. It is
suggested that you do not make up a name that is
complicated to spell (something like Gustralieb would be
a pain for other players to type) or something that could
possibly be the name of a monster in the game (something
like Dragon or Guard has a great possibility of being a
monster (or mob) in the game). If your name is one of the
above, you can find yourself being killed at times when a
sensible name could have saved your life.
Password
This is what will prevent other people from playing your
character. Pick a password that is hard to guess and
one that you do not currently use on another mud or
system. This is very important because it takes little
effort for a mud admin to find the password of your
character, your site, and your username. Bad passwords
include your real name, your character's name, and anything
less than 3 characters. It is recommended that you choose
something that includes some non-alphanumeric
characters such as # or @ or * (you get the idea). So a
password like #Chapo1* would be a good, hard-to-guess
password.
Sex
No, not if you get any, but what gender you want to be.
Most commonly this will be Male or Female, but a few muds
have the option of being Other or Neuter.
Class
There are usually four basic classes that DikuMuds
usually have.
1. Cleric - a healer
2. Warrior - a fighter
3. Magic User - a spell caster
4. Thief - a rogue
Some muds also have many other split style classes such
as Rangers, Paladins, Bards, etc. Usually on the log-on
screen, there will be some sort of online help that will
help you in the decision of your class.
There are a few other things some muds include, such as
hometown and races. For these items, there should be
sufficient online help to guide you through on the
particular DikuMud you are playing.
"What is a client program, or TinTin?"
A client program is basically a program that you use to connect
to a mud and that has many enhanced features to help (or, in
some people's opinion, 'cheat') in the game. The most commonly
used Unix clients for DikuMuds are TinTin and TinyFugue. TinTin
was specifically designed for play on DikuMuds and is only
available for Unix platforms. There is now a beta version of
WinTin for Windows 95, however. TinyFugue was designed for
MUSHes and other TinyMuds, but has been adapted for use with
DikuMuds. Both of these feature things like macros, aliases,
and triggers. Sites where you can find these two clients are
listed later in this document.
Further info on client programs and their functions is
contained in the bi-weekly rec.games.mud.announce FAQ.
"What can I do about the enormous CPU usage of VMS TinTin?"
Quick and easy fix:
+ Edit the file main.c
+ Find the following line:
time.tv_usec=0;
+ Change the 0 to 500000
+ Recompile and you are all set.
"Ok, I have a character, now what do I do?"
The first thing you would want to do is get to know the mud and
its commands. Some things you can do is type:
This will give a general listing of the commands
available.
HELP <keyword>
This will give you a more precise definition. An example
of what you could type is HELP SAY. This will give you an
explanation and proper syntax for the command say.
INFO
Will give you a brief introduction to the particular mud
you are playing.
COMMANDS
This command is not on all muds, but what it does is
give you a listing of all the commands available to your
character.
NEWS
Will provide the latest news of the mud.
Ask around. Most people are generally nice by nature and will
offer some (sometimes a lot, sometime very very little) help if
you ask nicely and are not annoying about it.
Read everything you see. Things like the MOTD (Message Of The
Day, which you see right before you enter the game) will often
provide very important information.
"What are hit points, mana, and movement?"
Hit Points
A numeric representation of the amount of hits/damage
that your character can take. Every time you are hit, you
lose some amount of hit points. You are considered
officially dead when you reach -11 hit points, though at
0 you can not do anything except for hope there is
someone around who will heal you.
Mana
This is the amount of spells you can cast. Every time you
cast a spell, a certain amount of mana is subtracted from
your working total of mana. Mana is like the working
energy that you can use to cast spells.
Movement
This is the amount that you can walk/run etc etc. A
decent comparison could be the amount of energy your
player has. Some skills also subtract from your movement
points.
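For readers curious how a server represents these, a Diku-style
server typically stores each pool as a current/maximum pair on
the character. The struct below is a hedged sketch; the field
names are modeled on, but not identical to, the stock structs.h.

```c
/* Hedged sketch of how a Diku-style server tracks these three
 * pools.  Each is a current value that regenerates over time
 * toward its maximum. */
struct char_points {
    short hit,  max_hit;    /* hit points: damage you can absorb   */
    short mana, max_mana;   /* mana: energy available for spells   */
    short move, max_move;   /* movement: energy for walking/skills */
};
```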
"What are some other things that I should know?"
Don't be annoying. Things like constantly whining to other
players and wizards will be the quickest way of being rejected
by the players of the mud.
Avoid player killing unless it has been explicitly allowed on
the mud you are playing. Usually if player killing is not
allowed on a mud, and someone violates this, it is dealt with
very sternly.
Avoid unnecessary shouts. Such things as shouting
"LAAAAAAGGGG", "GOODBYE SO-AND-SO", "LEVEL!!!", etc generally
do nothing but annoy other players and can be taken care of by
using tell or say.
Don't litter. Leaving junk around does nothing more than drain
the machine's resources. See if the mud you are playing has the
junk command, sacrifice command, or a dump where you can
dispose of unneeded items.
Remember, that it is only a game and the main purpose is for
you, as a player, to have fun, explore, and talk to people. And
do not let mud playing take priority over your school/job work,
which happens all too often.
"What is a crash?"
A crash occurs when
1. The system that the mud is located fails, or
2. The actual mud itself fails.
When a mud crashes, you will be thrown out of the game and you
will not be able to connect back to the game until it is
rebooted.
Because of crashes, saving your character is very important.
All you need to do is type 'save' and your character is saved;
that's it. No excuses. Fortunately, many
muds have made it so that the mud saves your character every so
often.
In event the game does crash, and you lose items/experience,
then often (not always, and it is not required) immortals/gods
will reimburse your items. But remember, the keywords for you
to get reimbursed are politeness and courtsey. Chances are, if
you lost something, other players did too. Avoid telling your
local god 100 times that you lost something. Usually, if you
told them once, they know and will get to you as soon as they
can. And remember, nowhere does it say they have to reimburse
you.
In the event of a crash, do not go straight to rec.games.mud.diku
and post a message saying "Blah mud crashed 4 seconds ago, what
happened?" That posting will usually be met with negative
responses. POST TO THE NET ONLY AS A LAST RESORT. If the game is
going to be down for an extended amount of time or if there are
serious problems, usually there will be a message at the port
the game ran on, or there will be a posting on rec.games.mud.diku
concerning the down time.
"What is lag?"
Lag is the result of an overstrained Internet. It comes in two
varieties, machine lag in which the machine the MUD runs on is
overburdened, and network lag caused by a poor network
connection between the enduser and the MUD's host machine.
Machine lag can be created by other processes gobbling up CPU
processing time or RAM, or simply by trying to run an overly
exotic MUD on an inadequate machine. Machine lag will cause all
actions to be slowed down, but for the most part everything is
slowed down equally. Machine lag can be fixed by upgrading the
hardware the MUD uses or by the MUD implementors better
optimizing the MUD.
Network lag is typically the result of a breakdown or bottleneck
somewhere on the Internet. Massive or rerouted traffic on the
Internet as a whole will cause traffic jams that make
communicating between the MUD and the enduser difficult.
Typically, the communications problem is sporadic and the lag
will come and go in "bursts". Multiple commands will go through
simultaneously, followed by a period when seemingly no response
occurs. Network lag is sometimes caused by the MUD's machine if
the MUD has an inadequate hookup to the Internet. Utilities
like PING and TRACEROUTE are good for tracking down the
location of network lag.
The two types of lag have different effects on the MUDder.
Since machine lag slows everything down, all actions take
longer. Machine lag is essentially like operating in slow
motion. There is little real danger (other than becoming
inattentive from boredom) to machine lag since you essentially
have a longer reaction time. Network lag, on the other hand, is
MUCH worse. It may take many seconds, even minutes, for a
command to be entered, be processed and the response to that
action to come back to the user. Obviously, the situation could
have dramatically changed in the meantime. In short, the user
might be responding to an event that the MUD thinks happened
many seconds in the past. Or more to the point, you might
already be dead before you even register that you should be
thinking about fleeing.
Each MUD has its own policy on how lag related problems are
resolved. Commonly though, most MUDs will refuse reimbursement
for lag induced death simply because it is an aspect of the
game that the implementors have no control over and for the
most part can not be verified.
From Jeffrey Bennett (Batopr@SneezyMUD)
"What is some of the slang or jargon I hear on these muds?"
brb --- Be right back.
brt --- Be right there.
rl --- Real life. Something like "I'm bored rl" is commonly
heard.
brb rl --- Put the two together and you get "Be right
back, real life". You know, like going to the bathroom.
pk, pk'ing --- Player Killer and Player Killing.
newbie --- Someone who is new to the game. Associated with the word
clueless.
mob --- A mobile, a monster in the game.
immort --- A player who has achieved immortality on the mud and is
considered a god.
imp(s) --- The person(s) who run the mud. They have final say over
everything.
afk --- Away from keyboard.
afw --- Away from window.
inv --- Your inventory, what you have on you and is not
currently equipped.
equip --- The items that you are currently using. Like the armour
you are wearing.
"What about this newsgroup, rec.games.mud.diku?"
This newsgroup was designed to help filter a lot of traffic
that flowed through the newsgroup rec.games.mud. The newsgroup
is designed for discussions about anything pertaining to
DikuMud games. Anyone or anything is open, though messages that
obviously have no purpose, like "Big Fat Hairy Mud
Rules/Sucks!!!!" are generally frowned upon and are a waste of
peoples' time and of network resources.
"What is DikuII, and when can I expect to see it?"
DikuII is exactly that, the second version of DikuMud. The
latest word is that this code will not be released due to the
politics and pains of releasing public code. Unless of course,
you have the April Fools DikuII release of a couple of years
ago. However, all this aside, ValhallaMud runs on the DikuII
code and is run by some of the original DikuMud authors.
"What other derivatives are available then?"
There are a large number of derivatives of the original
DikuMud, far too many to list in fact, but a quick summary can
be made. The first real variant on the DikuMud Gamma code was
the Alfa DikuMud code. Not too many changes were made, but a
number of bugs were corrected, and several new areas were
added. In the time following this, other variants were
released, for example Copper, Sequent, Pirate, TECHv3, and so
forth, most without too many changes to the original code. As
more new ideas came to the forefront, SillyMud was released,
MercMud, and CircleMUD. MercMud has since branched off into a
large number of other derivatives such as ROM, Envy, SMAUG, The
Isles (NiMud), Ember, Oblivian, and more. The differences
between these are generally in the basic features, but the
MercMud tree introduced a new file format to the game for area
files. Instead of all of the rooms being stored in one big
file, they stored each area in its own file instead, making
adding and removing areas somewhat easier.
Each generation of DikuMud code tends to remove a number of
bugs and problems from earlier code while introducing new
features, bells, and whistles. If you are planning on using one
of these derivatives, you are advised to find the one that
suits your needs the most.
See also the question on the differences between Silly, Merc,
and Circle based muds in the next section.
"Where can I find so-and-so?"
If you are looking for a particular client/patch/source, check
the list at the bottom and look at those ftp sites. I try to
maintain a general list of what is at these sites, but I cannot
always keep up to date with these sites' contents, so you will
have to actually log in to those sites and look around; chances
are what you are looking for is at one of those sites.
If by chance you do not find what you are looking for at one of
those sites, then use the archie server. Archie is an archive
database searcher that will aid in hunting down a particular
program or whatever it is you seek. Don't post to the newsgroup
asking how to use archie; ask someone at your site if they can
help. If you cannot get help locally, then post the question
to a newsgroup such as news.newusers.questions or look in
news.answers.
_________________________________________________________________
Implementor / Building
"I really like DikuMuds, I want to start my own!"
Well, before you go off and do that, there are some things you
need to know or have.
A good, working knowledge of C. Though with the amount of
enhanced muds that are available, this is still a good thing to
know because you are never going to find 100% bug free code.
A machine. Some general requirements include:
+ 32-bit processor
+ 8+ megs available (greater than 16 is desirable)
+ 2-8 megs of available memory
+ Network bandwidth running (to Internet) at about 50
kilobits/second
+ Explicit System Administrator Approval. Muds do not go
unnoticed on any machine where there are any other users. Get
this before you do anything or you could find hours and hours
of hard work down the drain.
+ A large amount of time to devote to actual work on the mud
and time to spend online in the game doing administrative
duties. This has caused the eventual death of many muds.
+ A creative mind. Be creative about your work, no one really
cares for a dull, boring mud.
DikuMuds are not very CPU intensive, so very little CPU time is
needed for all practical purposes.
"What is the difference between Circle, Merc, and Silly?"
There are a lot of differences between the codes, some that are
easily recognizable by players, some that are not.
CircleMUD is the closest to the original Gamma Diku with a lot
of the bugs patched plus a lot of new features built in. It can
compile on almost any platform, including Amigas and Windows 95.
SillyMud is a very large and heavily modified release; as the
author of this code (or maybe it was someone else) put it, "it
is big and it is ugly". Silly is filled with features, but it
has not been patched up to run on all systems, so beware: it
might take some hacking to get this to work on your machine.
MercMUD is yet another highly developed and very different
release. It has been made to work with a variety of machines
including Macintosh and is very compact. The MercMUD base has
branched off into a number of different bases now, including
ROM, EnvyMUD, and SMAUG.
It is recommended that you give each release your attention to
find out which code is best for you, because none is
specifically better than the others in general terms; it is up
to you to find out what you prefer.
Further additions to this section are welcome, including
comparing new code bases that exist.
See also the question on which derivatives are available in the
previous section.
"Well, I do not want to run my own mud, but I would like to create an
area."
Some tips on writing an area:
1. Get documentation - Basic DikuMud documentation has been
upgraded and released by the Curious Areas Workshop, and
their Builders' Handbook can teach the most inexperienced
beginner how to create an area.
2. Ask a wizard at the mud you would like to build for to
provide the documentation specific to that mud. This is
helpful because the actual format for area creation and many
of the different bitvectors vary greatly from mud to mud.
3. Planning. Plan out your area. Make a detailed map and think
of a good general story or theme for your area before you
start construction.
4. Don't just make an area for the sake of it. If you make an
area just for the sake of it, this usually shows and people
do not want a boring, non-planned area. All you will do is
waste your time and the admin's time.
5. Many other helpful tips are given in the C.A.W. Builders'
Handbook, which is available online.
"Ok, I think I can start my own, but I hear there are bugs with the
stock code, what are these?"
Unfortunately, all code will have bugs; here are some of the
more well-known bugs that should be looked at when starting
your own mud.
(NOTE: These are all fixes that require knowledge of C... told
you you needed to know C.)
+ Problem with realloc() in db.c with the world, and to a
lesser extent mob_index and obj_index.
+ Problem in do_pour. This bug allows players to perform 'pour
cup cup' and have an infinite water supply.
+ Problem on do_taste. A player can taste the bread, but when
he tastes the bread, it will apply the bread's fill value
without actually eating the bread, thus having an infinite
bread supply.
+ Problem in nanny(). In nanny(), if a player answers 'no' to
the 'is that really your name?', the name pointer is never
set to NULL. So, when you drop link, the same pointer will be
freed [using free()] again inside free_char().
+ Problem in generic_find(). Uses str_cmp() instead of isname()
in FIND_OBJ_EQUIP.
+ Problem in affect_from_char(). The variable hjp is given the
value hjp->next after hjp has been free()'d.
+ Problem in shop.c. A player has the scroll 'a tattered scroll' he
wishes to sell. Upon selling the item, a check is made for
the keyword scroll in the shopkeeper's inventory. Since
scrolls of identify and scrolls of recall are produced by the
shop and have the keywords 'scroll', the game assumes that
the first scroll is one of these items and destroys the item
and never places it back for resale.
+ Problem in the init_socket. The ports do not seem to clear
freely, so you end up with a lot of port binds. Here is a
patch provided by Dean Gaudet <dgaudet@arctic.org>
int init_socket(int port)
{
    int s, sbuf, opt;
    struct sockaddr_in sa;
    struct linger ld;

    memset(&sa, 0, sizeof(struct sockaddr_in));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);

    if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
        exit(0);
    }

    opt = 1;
    if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR,
                   (char *)&opt, sizeof(opt)) < 0) {
        exit(1);
    }

    ld.l_onoff = 0;     /* let's make sure this isn't on */
    ld.l_linger = 1000; /* who cares what this is */
    if (setsockopt(s, SOL_SOCKET, SO_LINGER,
                   (char *)&ld, sizeof(ld)) < 0) {
        exit(1);
    }

    ......

    if (bind(s, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        if (close(s)) {
            syslogf("init_socket: error closing socket: %s",
                    strerror(errno));
            exit(1);
        }
        exit(0);
    }

    if (listen(s, 5) < 0) {
        exit(1);
    }

    return (s);
}
+ Check for unused variables by compiling with the -Wunused
flag. This will help in streamlining your code.
+ Many typos, especially look in constants.c, act.obj1.c and
act.other.c
+ Echo on/off:
in interpreter.c, add this to the other includes:
#include <arpa/telnet.h>
create the following strings:
char echo_off[]={IAC,WILL,TELOPT_ECHO,'\0'};
char echo_on[]={IAC,WONT,TELOPT_ECHO,'\n','\r','\0'};
make the following macros:
#define ECHO_ON(d) SEND_TO_Q(echo_on,(d))
#define ECHO_OFF(d) SEND_TO_Q(echo_off,(d))
Then place in appropriate places where you want to turn
echo'ing on or off.
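As an illustration of how the do_pour bug listed above is
typically closed, the sketch below adds the missing self-pour
guard. The struct and function names here are made up for the
example; they are not the stock Diku code in act.obj2.c.

```c
/* Illustrative sketch of the do_pour fix.  The key line is the
 * self-pour guard: without it, "pour cup cup" subtracts and
 * re-adds the same liquid, yielding an infinite supply. */
struct liq_container {
    int capacity;   /* units the vessel can hold */
    int contains;   /* units currently in it     */
};

/* Returns 1 on a successful transfer, 0 on a rejected pour. */
int pour(struct liq_container *from, struct liq_container *to)
{
    if (from == to || from->contains == 0)
        return 0;                      /* the missing guard */

    int space  = to->capacity - to->contains;
    int amount = from->contains < space ? from->contains : space;

    to->contains   += amount;
    from->contains -= amount;
    return amount > 0;
}
```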
"Are there any RFC's of interest?"
The following RFC's can be of interest to Diku Implementors:
+ RFC 1413: Telnet Identification Protocol
+ RFC 854: Telnet Protocol
+ RFC 857: Telnet Echo Option
RFC's are located on many sites including nic.ddn.mil and
wuarchive.wustl.edu.
"What runs on Linux?"
CircleMUD and ROM 2.4 were both developed on Linux, so neither
should have any problems compiling on Linux. Other muds such as
MercMUD and EnvyMUD have been reported to compile with very few
problems.
"How can I make the original DikuMud run on Linux?"
From Derek J. Middleton with corrections from Russell Schultz
To port DikuMud gamma 0.0 over to Linux, there are a couple
things that should be modified. This may not be everything, but
here is what needs to be done right off the bat:
1. Add this to your #include section:
#if defined(linux) || defined(SYSV)
#include <sys/utsname.h>
#include <unistd.h>
#endif
2. Change all calls to the srandom() function to srand()
3. I believe there are two references like this. Change:
gettimeofday(&last_time, (struct timeval *) 0);
to:
gettimeofday(&last_time, (struct timezone *) 0);
4. In addition, the init_socket() function needs to be re-worked
quite a bit. This is what I have:
int init_socket(int port)
{
    int s;
    char opt = 1;
    struct sockaddr_in sa;
    struct hostent *hp;
    struct linger ld;
    struct utsname hostinfo;
    char *hostname;

    /* get the current hostname */
    if (uname(&hostinfo) < 0) {
        perror("uname");
        exit(1);
    }
    hostname = hostinfo.nodename;

    memset(&sa, 0, sizeof(sa));
    hp = gethostbyname(hostname);
    if (hp == NULL) {
        perror("gethostbyname");
        exit(1);
    }
    sa.sin_family = hp->h_addrtype;
    sa.sin_port = htons(port);

    s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) {
        perror("Init-socket");
        exit(1);
    }

    if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR,
                   &opt, sizeof(opt)) < 0) {
        perror("setsockopt REUSEADDR");
        exit(1);
    }

    ld.l_onoff = 1;
    ld.l_linger = 1000;
    if (setsockopt(s, SOL_SOCKET, SO_LINGER, &ld, sizeof(ld)) < 0) {
        perror("setsockopt LINGER");
        exit(1);
    }

    if (bind(s, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        perror("bind");
        close(s);
        exit(1);
    }

    listen(s, 3);
    return (s);
}
Once this is done, you should be well on your way to having the
Gamma DikuMud running on your Linux system.
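As an alternative to editing every call site in step 2, a small
compatibility shim can map the BSD srandom()/random() names onto
their ANSI equivalents. This is only a sketch; HAVE_RANDOM is an
illustrative feature-test macro, not one the stock code defines.

```c
#include <stdlib.h>

/* Hedged portability shim for step 2: on platforms lacking the
 * BSD srandom()/random() pair, fall back to ANSI srand()/rand()
 * instead of rewriting every call site. */
#ifndef HAVE_RANDOM
#define srandom(seed) srand(seed)
#define random()      rand()
#endif
```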
"Are there any books of interest to admins?"
Yes, these should be available at your local bookstore:
_Unix Network Programming_
Stevens, Richard W.
Prentice Hall, 1990
_The C Programming Language_
Kernighan, Brian W.
Ritchie, Dennis M.
Prentice Hall, 1988
[This is THE bible on C programming]
"Are there any muds available with online creation?"
The Isles is a Merc based DikuMud with Online Creation among a
lot of other enhancements.
There is a copy of Envy mud with a ported version of The Isles
OLC in it.
SMAUG, a newer code release based on Merc has a built in OLC
system.
Circle 3.x is also going to have online creation. (See the next
question)
"When is Circle 3.0 going to be released?"
See the CircleMUD FAQ for more details on CircleMUD 3.x
"Are there any mailing lists for administrators?"
CircleMUD has a mailing list at <listserv@post.queensu.ca>. To
subscribe, send it a piece of mail with the body 'subscribe
circle <first name> <last name>'.
MercMUD and EnvyMUD have a mailing list at
<merc-l-request@webnexus.com>
To subscribe to this, send a piece of mail to that address with
the word 'subscribe' in the body.
ROM also has a mailing list now. This can be found at
<rom-request@cmc.net>. To subscribe, send a piece of mail to
this address with the word 'subscribe' in it.
"Should I credit my area authors too?"
Of course you should! The best way to do this is to either have
a help entry for 'areas' or 'zones' with a listing of zones and
their authors. Another way to do this would be to use the Merc
family method of having an 'areas' command which lists the
areas and authors. A final method would be to list them in the
credits of your mud. If all of your authors are credited
publicly, it gives people more incentive to release areas to
the public and to create areas for your mud.
To aid this, Nino Ruffini (Steppin of JediMUD) has lent his
time in order to compile a fairly extensive listing of areas
and their authors. This has been posted once to rgmd, and will
soon be found on a web page near you! Keep your eyes peeled for
it.
_________________________________________________________________
Resources
Below is a list of commonly used sites for DikuMud related items. All
sites/contents are subject to change.
Web Resources
There are a good number of resource sites online and more seem to crop
up every day. The following is by no means a definitive list of
available resources, but should serve as a good starting point.
The official DikuMud homepages.
This is the official release site of ROM.
The official CircleMUD homepages, complete with links to the
CircleMUD FAQ, various documentation for running a CircleMUD,
etc.
This is the official CircleMUD FAQ site, as well as being the
official home of this FAQ. Also found here are a number of
CircleMUD code snippets, and links to CircleMUD area updates.
The official Curious Areas Workshop homepages. These pages
include area releases and the C.A.W. Builders' Handbook.
The Mud Connector. This site has links to many online
resources, as well as links to over 400 muds.
This site is the quintessential Macintosh Mudding Resource
site, with links to Mac Clients, code, and so forth.
A site with many links to building resources online, as well as
links to several area archives.
A page listing the approximate family tree of the DikuMud
server, complete with links to the home sites of many of the
DikuMuds listed.
This is the homepage of the TinyFugue Client. This page gives
complete instructions on how to go about using the client and
how to get it.
FTP Resources
This site contains a general mishmash of mud related software.
Unfortunately, it is somewhat out of date in general.
This has the server software for MUME as well as a few
utilities for DikuMuds. There are also some clients available
here.
This site is mostly out of date, but has a fairly large amount
of mud related items, including having the most recent releases
of the TinyFugue client.
This site is an older site pertaining to DikuMuds.
This site contains a fair number of servers as well as a few
areas, but does not tend to be updated very often.
This is the official release site of EnvyMud.
Probably one of the largest and most up to date FTP sites. It
contains almost every single DikuMud related server that has
been publicly released.
This is the official site of the CircleMUD distribution. It
contains various releases of the CircleMUD server, as well as a
plethora of CircleMUD administrator contributed code and areas.
This site contains a number of items relevant to DikuMuds, as
well as being the homesite for the Hidden Worlds Mud.
The official release site of Curious Areas Workshop, an area
building group, also known for the C.A.W. Builders' Handbook.
The official site of the SillyMUD distribution.
A site for PMF, an alternative client to TinTin.
The official release site of the TinTin++ client.
_________________________________________________________________
Final Word
Playing a mud of any sort is NOT a right. The people who run the game
and the people who own/run the system that you are playing from are
not required to let you play. If you abuse your privilege of playing,
there is a good chance that it will be taken away.
_________________________________________________________________
--
Erm... Yeah. Whatever. | http://www.faqs.org/faqs/games/mud-faq/diku/ | CC-MAIN-2018-39 | refinedweb | 6,447 | 74.49 |
YAML - YAML Ain't Markup Language (tm)
use YAML;

# Load a YAML stream of 3 YAML documents into Perl data structures.
my ($hashref, $arrayref, $string) = Load(<<'...');
---
name: ingy
age: old
weight: heavy
# I should comment that I also like pink, but don't tell anybody.
favorite colors:
- red
- white
- blue
---
- Clark Evans
- Oren Ben-Kiki
- Brian Ingerson
--- >
...

Any YAML serialization produced by Perl can be processed by Python, and be guaranteed to return the data structure intact (even if it contained Perl-specific structures like GLOBs).
The YAML language has been designed to be flexible enough to solve its own problems. The markup itself has three basic constructs which resemble Perl's hash, array and scalar. By default, these map to their Perl equivalents. But each YAML node also supports a type (or "transfer method") which can cause that node to be interpreted in a completely different manner. That's how YAML can support oddball structures like Perl's typeglob.
The following functions are exported by YAML.pm by default when you use YAML.pm like this:

    use YAML;
DumpFile() writes the YAML stream to a file instead of just returning a string.
LoadFile() reads the YAML stream from a file instead of a string.
--- #YAML:1.0
apple: good
banana: bad
cauliflower: ugly
--- #YAML:1.0

use YAML ':all';
use YAML::Node;

$hash = {apple => 'good', banana => 'bad', cauliflower => 'ugly'};
print Dump $hash;

Bless($hash);
$ynode = ynode(Blessed($hash));
$ynode->keys(['banana', 'apple']);
print Dump $hash;
ynode() returns the YAML node that a particular Perl node is associated with (see above). It returns undef if the node is not (YAML) blessed.
Dumper() is an alias to Dump(), for Data::Dumper fans.
freeze() and thaw() are aliases to Dump() and Load(), for Storable fans.
This will also allow YAML.pm to be plugged directly into modules like POE.pm, which use the freeze/thaw API for internal serialization.
This is a list of the various groups of exported functions that you can import using the following syntax:
use YAML ':groupname';
Imports Dump(), Load(), DumpFile(), LoadFile(), Bless() and Blessed().
Imports freeze() and thaw().
Imports freeze() and thaw().
YAML can also be used in an object oriented manner. At this point it offers no real advantage. This interface will be improved in a later release.
new() returns a new YAML object. For example:
my $y = YAML->new;
$y->Indent(4);
$y->dump($foo, $bar);
dump() is the OO version of Dump().
load() is the OO version of Load().
YAML options are set using a group of global variables in the YAML namespace. This is similar to how Data::Dumper works.
For example, to change the indentation width, do something like:
local $YAML::Indent = 3;
The current options are:
Default is 1. (true)
$YAML::UseHeader tells YAML.pm whether to use a separator string for a Dump operation. This only applies to the first document in a stream. Subsequent documents must have a YAML header by definition.
Default is 1. (true)
$YAML::UseVersion tells YAML.pm whether to include the YAML version on the separator/header.
The canonical form is:
--- YAML:1.0
Default is ''.
$YAML::AnchorPrefix: anchor names are normally numeric. YAML.pm simply starts with '1' and increases by one for each new anchor. This option allows you to specify a string to be prepended to each anchor number.

Safe deserialization is one of the core goals of YAML.
If you want to force YAML to use the 'folded' style for all multiline scalars, then set $UseFold to 1.
NOTE: YAML's folded style is akin to the way HTML folds text, except smarter.

It's a node's type that makes it behave differently. In this manner, YAML can be extended to represent Perl's Glob, Python's tuple, or Ruby's Bigint.
A YAML stream is the full sequence of bytes.
--- YAML:1.0
This: top level mapping
is:
- a
- YAML
- document

A transfer of 'perl/Foo::Bar':

- !perl/Foo::Bar
  foo: 42
  bar: stool
A collection is the generic term for a YAML data grouping. YAML has two types of collections: mappings and sequences. (Similar to hashes and arrays)
A mapping is a YAML collection defined by key/value pairs.
This is a single line of unquoted text. All plain scalars are automatic candidates for "implicit transferring". This means that their type is determined automatically by examination. Unless they match a set of predetermined YAML regex patterns, they will raise a parser exception. The typical uses for this are plain alpha strings, integers, real numbers, dates, times and currency.
- a plain string
- -42
- 3.1415
- 12:34
- 123 this is an error
This is similar to Perl's use of single quotes. It means no escaping and no implicit transfer. It must be used on a single line.
- 'When I say ''\n'' I mean "backslash en"'
This is similar to Perl's use of double quotes. Character escaping can be used. There is no implicit transfer and it must still be single line.
- "This scalar\nhas two lines, and a bell -->\a"
This is a multiline scalar which begins on the next line. It is indicated by a single block-scalar indicator character ('|' for literal, '>' for folded).

When libyaml is written (in C) there will be a definite separation: libyaml will contain a parser and emitter, and YAML.pm (and YAML.py etc.) will supply the loader and dumper.
For more information please refer to the immensely helpful YAML specification.

BIG WARNING: The ysh is *ALPHA* code.
BIGGER WARNING: YAML.pm has been slow in the making, but I am committed to having top-notch YAML tools in the Perl world. The YAML team is close to finalizing the YAML 1.1 spec. This code is based off of a very old pre-1.0 spec. In actuality there isn't a ton of difference, and this YAML.pm is still fairly useful. Things will get much better in the future.
YAML is quite capable of serializing circular references. And for the most part it can deserialize them correctly too. One notable exception is a reference to a leaf node containing itself. This is hard to do from pure Perl in any elegant way. The "canonical" example is:
$foo = \$foo;
This serializes fine, but I can't parse it correctly yet. Unfortunately, every wiseguy programmer in the world seems to try this first when you ask them to test your serialization module. Even though it is of almost no real world value. So please don't report this bug unless you have a pure Perl patch to fix it for me.
By the way, similar non-leaf structures Dump and Load just fine:
$foo->[0] = $foo;
You can test these examples using 'ysh -r'. This option makes sure that the example can be deserialized after it is serialized. We call that "roundtripping", thus the '-r'.
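The round-trip check that 'ysh -r' performs can be sketched generically. Here is a minimal Python illustration using the standard library's json module as a stand-in serializer (the Python standard library has no YAML module, so this only shows the idea, not YAML itself):

```python
import json

def roundtrips(data):
    # Serialize, deserialize, and compare: the essence of "roundtripping".
    return json.loads(json.dumps(data)) == data

print(roundtrips({"apple": "good", "banana": "bad"}))  # -> True

# The self-referential structure discussed above cannot even be
# serialized by this stand-in; the serializer detects the cycle.
loop = []
loop.append(loop)
try:
    json.dumps(loop)
except ValueError:
    print("circular reference detected")
```

A real round-trip test for YAML.pm would use Dump() and Load() in place of json.dumps and json.loads.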
Unicode is not yet supported. The YAML specification dictates that all strings be unicode, but this early implementation just uses ASCII.
Python, Java and perhaps others support using any data type as the key to a hash. YAML also supports this. Perl5 only uses strings as hash keys.
YAML.pm can currently parse structured keys, but their meaning gets lost when they are loaded into a Perl hash. Consider this example using the YAML Shell:
ysh > ---
yaml> ?
yaml> foo: bar
yaml> : baz
yaml> ...
$VAR1 = {
    'HASH(0x1f1d20)' => 'baz'
};
ysh >
YAML.pm will need to be fixed to preserve these keys somehow. Why? Because if YAML.pm gets a YAML document from YAML.py it must be able to return it with the Python data intact.
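The information loss can be illustrated outside Perl. In Python (used here only because the passage compares with YAML.py; this is not YAML.pm code), any hashable value can be a dictionary key, while Perl5-style stringification keeps only the key's string form:

```python
# A tuple key stands in for a YAML structured key.
structured = {("foo", "bar"): "baz"}

# Perl5 hashes stringify every key, which is how a hash used as a key
# degrades to something like 'HASH(0x1f1d20)'.
stringified = {str(("foo", "bar")): "baz"}

print(structured[("foo", "bar")])  # the real key still works
print(list(stringified))           # only the string form remains
```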
As far as I know, other Perl serialization modules are not capable of serializing and deserializing typeglobs, subroutines (code refs), regexes and file handles. YAML.pm has dumping capabilities for all of these. Loading them may produce wild results. Take care.
NOTE: For a (huge) dump of Perl's global guts, try:
perl -MYAML -e '$YAML::UseCode=1; print Dump *::'
To limit this to a single namespace try:
perl -MCGI -MYAML -e '$YAML::UseCode=1; print Dump \%CGI::'
This is a pure Perl implementation that has been optimized for programmer readability, not for computational speed.
Oren Ben-Kiki and Clark Evans are currently developing libyaml, the official C implementation of the YAML parser and emitter. YAML.pm will be refactoring to use this library once it is stable. Other languages like Python, Tcl, PHP, Ruby, JavaScript and Java can make use of the same core library.
Autrijus Tang is also currently developing libyaml-haskell, the haskell parser for YAML. Due to the complexity of the YAML grammar it is expected it will take him 87 minutes to complete this.
Please join us on the YAML mailing list if you are interested in implementing something. Or try dropping into #yaml on freenode, if that's your style.
This module Dumps and Loads in one operation. There is no interface for parsing or emitting a YAML stream one node at a time. It's all or nothing.
An upcoming release will have support for incremental parsing and dumping. Stay tuned.
Please read YAML::Node for advanced YAML features. See also the official YAML website, the YAML 1.0 specification, and the official YAML wiki.
YAML has been registered as a SourceForge project. Currently we are only using the mailing list facilities there.
Brian Ingerson <INGY@cpan.org> is responsible for YAML.pm.
The YAML language is the result of a ton of collaboration between Oren Ben-Kiki, Clark Evans and Brian Ingerson. Several others have added help along the way.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
The PHPUnit development team has announced the release of PHPUnit 6.0.0. This release adds new features, changes and removes existing functionality, and fixes bugs. A detailed list of changes is provided here.
Dropping support for PHP 5.6
According to our release process, PHPUnit must be compatible with all versions of PHP that are actively supported by the PHP project.
Active support for PHP 5.6 ended on December 31, 2016. The only actively supported versions of PHP as of February, 3 2017 are PHP 7.0 and PHP 7.1.
Backwards Compatibility Issues
PHPUnit's units of code are now namespaced. For instance, PHPUnit_Framework_TestCase is now PHPUnit\Framework\TestCase
PHPUnit is now strict about useless tests by default. Use the --dont-report-useless-tests commandline option or the beStrictAboutTestsThatDoNotTestAnything="false"configuration directive to disable this risky test check.
Global and super-global variables are no longer backed up before and restored after each test by default. Use the --globals-backup commandline option or the backupGlobals="true" configuration directive to enable this feature.
The logfile format generated using the --log-junit option and the <log type="junit" target="..."/> configuration directive has been updated to match the current format used by JUnit. Due to this change you may need to update how your continuous integration server processes test result logfiles generated by PHPUnit.
Obtaining PHPUnit 6.0
We distribute a PHP Archive (PHAR) that contains everything you need in order to use PHPUnit. Simply download it from here, make it executable, and put it into your $PATH, for instance.
Alternatively, you may use Composer to download and install PHPUnit as well as its dependencies.
Requirements
PHPUnit 6.0 requires PHP 7.0; using the latest version of PHP is highly recommended. The documentation has a detailed list of the PHP extensions that are required to use PHPUnit.
Support for PHPUnit 5.7
Following our release process, PHPUnit 5.7 will receive bug fixes until February 2, 2018.
End of Life for PHPUnit 4.8
Following our release process, PHPUnit 4.8 has reached End of Life as of February, 3 2017 and will no longer receive bug fixes.
If you use Chef, Puppet, or a similar tool to download and install a PHPUnit 4.8 PHP archive (PHAR) then please change the URL from to. The URL will not be offered in the future anymore. It will continue to work for now and it will always redirect to the latest version of PHPUnit 4.8.
Looking Ahead
The goal of our release process is to deliver new features into the hands of our users every two months. The next release with new features will be PHPUnit 6.1. It is currently in development and will become stable on April 7, 2017.
PHPUnit 7
PHPUnit 7.0, which is scheduled for February 2, 2018, will no longer support PHP 7.0.
Support for PHPUnit 6
Following our release process, PHPUnit 6 will receive bug fixes until February 8, 2019. | https://www.oschina.net/news/81761/phpunit-6-0-0-released | CC-MAIN-2022-27 | refinedweb | 476 | 60.01 |
The information in this post is out of date.
Visit msdn.com/data/ef for the latest information on current and past releases of EF.
In Entity Framework 3.5 (.NET 3.5 SP1), there are more than a few restrictions that were imposed on entity classes. Entity classes in EF needed to either be sub classes of EntityObject, or had to implement a set of interfaces we collectively refer to as IPOCO – i.e. IEntityWithKey, IEntityWithChangeTracker and IEntityWithRelationships. These restrictions made it difficult if not downright impossible to build EF friendly domain classes that were truly independent of persistence concerns. It also meant that the testability of the domain classes was severely compromised.
All of this changes dramatically with the next release of Entity Framework: 4.0 (.NET Framework 4.0). Entity Framework 4.0 introduces support for Plain Old CLR Objects, or POCO types that do not need to comply with any of the following restrictions:
- Inheriting from a base class that is required for persistence concerns
- Implementing an interface that is required for persistence concerns
- The need for metadata or mapping attributes on type members
For instance, in Entity Framework 4.0, you can have entities that are coded as shown:
public class Customer
{
    public string CustomerID { get; set; }
    public string ContactName { get; set; }
    public string City { get; set; }
    public List<Order> Orders { get; set; }
}

public class Order
{
    public int OrderID { get; set; }
    public Customer Customer { get; set; }
    public DateTime OrderDate { get; set; }
}
You can then use the Entity Framework to query and materialize instances of these types out of the database, get all the other services offered by Entity Framework for change tracking, updating, etc. No more IPOCO, no more EntityObject – just pure POCO.
Keep in mind that this is an extremely simplistic example, and intentionally so. There is much more here than meets the eye – I am sure that it brings up at least a few questions about what is possible and what isn’t possible with POCO entities – for instance:
- Do I need to have public getters/setters for scalar and navigation properties?
- Will I get Lazy Loading? How does explicit loading work?
- How does relationship fix-up work with POCO?
- What does this mean for code generation done within Visual Studio?
- How does this fit in with the repository pattern?
What about Complex Types? Serialization? Change Tracking? Add/Attach…. The list goes on….
These and many other questions and concerns will be answered in our in-depth series on POCO that we are working on publishing in the coming weeks.
And by the way – did I just mention Lazy Loading? Watch for a sneak preview on that tomorrow!
– Faisal Mohamood
Program Manager, Entity Framework
I’m off at TechEd this week talking to customers about the EF—especially about ways to be successful
Can I have entity looking like this?
public class Customer
{
private readonly List<Order> _orders = new List<Order>();
public string CustomerID { get; set; }
public string ContactName { get; set; }
public string City { get; set; }
public List<Order> Orders { get { return _orders; } }
public bool HasOrders {get {return Orders.Count>0;}}
}
Wonderful!!!
Is it possible to get a planned date for EF 4?
Will a CTP be available?
Great. Will there also be improvements in the visual editor?
@ Krzysztof
Yes your entity can look like that. We will support collection properties with getters only, or with both a getter and a setter. Anything that is an ICollection<T> will do.
Jeff
Eagerly awaiting this release. I have just gotten comfortable with EFPocoAdapter so the migration isn’t too tedious. And definitely looking forward to MSDN articles and better support (though Jarek has been quite responsive over email, I really was starting to miss full support)
My biggest pain point has to be serialization. I ran into a bunch of issues with circular references and JSON serialization before finally getting it working. Since the POCO classes are auto-generated I would like to see support to dynamically ignore properties during serialization (attributes only work at compile time and get overridden each time those classes are regenerated). This way
Would it be a bad idea to make those classes partial? I know that extending them would mean they’re not purely POCO anymore but I do find that if they are auto-generated it’s hard to modify them without getting your changes overridden. Extension methods work but only halfway through. Extension properties if that comes in .NET 4 might offer more flexibility in extending POCO objects.
Excellent!!! Now we are talking.
That’s great news. Although I feel you will still find cry babies, but EF finally starting to look good.
Completely excited!!!
Unit testing sounds quite different now….
Is the designer going to overwrite POCOs in case of database changes?
I have started talking to developers about what they can expect in Entity Framework V2 such as in my
If you are looking to follow this series, be sure to subscribe to my RSS feed at
What to be Expecting of Entity Framework in .NET 4 The ADO.NET team started to release a series of posts
Windows Azure/Azure Service Framework (including .NET Services)/BizTalk David Pallmann has released 2.1 of Azure Storage Explorer with the new feature of modifying what is in storage including create or delete blob containers, blob items, queues, queue
This week on Channel 9 at TechEd 2009, Brian is joined by Jeff Hadfield and Greg Duncan to discuss this
Writing unit tests is a core practice in the vast majority of modern software development approaches.
Pick of the week: The Web Browser Address Bar is the New Command Line General Open Source or Die – The *Real* Future of Graffiti : Lee Dumond addresses the lack of any forward development of Graffiti CMS and implores Telligent to hand it over to the capable
In the first version of the Entity Framework code generation was implemented internally using CodeDom
One important feature missing in this example is the ability to customize the generation of your entity classes. Using T4 you will be able to have POCO or whatever flavor of entity you desire by simply creating additional templates.
Last week I mentioned in the sneak preview on POCO that support for POCO entities is one of the new capabilities
#.think.in infoDose #29 (11th May – 15th May)
Introduction: From the moment I put my hands on Visual Studio.Net 2010 Beta 1 and I’m targeting EF4
In several entries in this blog I have written about the use of POCOs with the Entity Framework.
Hi,
I am trying to use the ADO.NET Entity Data Framework in Prism. Almost all the examples talk about access from a Silverlight page. However, in our applications we use modules which are Class Libraries or Silverlight Class Libraries, which bind the data to the UI. Are there any code samples/snippets for this?
Have uploaded the code sample at
Faisal,
Beta 1 already looks great, but is there any approximate date for Beta 2?
Great 🙂 | https://blogs.msdn.microsoft.com/adonet/2009/05/11/sneak-preview-persistence-ignorance-and-poco-in-entity-framework-4-0/?replytocom=22023 | CC-MAIN-2018-30 | refinedweb | 1,273 | 54.42 |
Hi all!
Quick summary:
I'm attempting to shut down a socket, but the framework that I'm inheriting an interface from already contains an implementation of a "shutdown" method. Therefore, I am unable to call "shutdown" for the socket.
Longer Explanation:
How can I fix this? I originally tried looking for a namespace it may have been included under, but true to form for Microsoft, there is none (Although I imagine it's for C compatibility?)
Is there a way to prevent my code from looking in the base files for the method? I can't override the base shutdown method, as that's just pointless. I guess I could write a class to wrap the socket stuff up in, but that seems pretty pointless for what is, at the end of the day, a very simple call.
Writing a New Port
The Mido port API allows you to write new ports to do practically anything.
A new port type can be defined by subclassing one of the base classes and overriding one or more methods. Here’s an example:
from mido.ports import BaseOutput

class PrintPort(BaseOutput):
    def _send(self, message):
        print(message)

>>> port = PrintPort()
>>> port.send(msg)
note_on channel=0 note=0 velocity=64 time=0
_send() will be called by send(), and is responsible for actually sending the message somewhere (or in this case printing it out).
Overridable Methods
There are four overridable methods (all of them default to doing nothing):
_open(self, **kwargs)
Should do whatever is necessary to initialize the port (for example opening a MIDI device.)
Called by __init__(). The name attribute is already set when _open() is called, but you will get the rest of the keyword arguments.
If your port takes a different set of arguments or has other special needs, you can override __init__() instead.
_close(self)
Should clean up whatever resources the port has allocated (such as closing a MIDI device).
Called by close() if the port is not already closed.
_send(self, message)
(Output ports only.)
Should send the message (or do whatever else that makes sense).
Called by send() if the port is open and the message is a Mido message. (You don't need any type checking here.)
Raise IOError if something goes wrong.
_receive(self, block=True)
(Input ports only.)
Should return a message if there is one available.
If block=True it should block until a message is available and then return it.

If block=False it should return a message, or None if there is no message yet. If you return None, the enclosing pending() method will check self._messages and return one from there.
Note
Prior to 1.2.0, _receive() would put messages in self._messages (usually via the parser) and rely on receive() to return them to the user.

Since this was not thread safe, the API was changed in 1.2.0 to allow _receive() to return a message. The old behavior is still supported, so old code will work as before.
Raise IOError if something goes wrong.
Each method corresponds to the public method of the same name, and will be called by that method. The outer method will take care of many things, so the inner method only needs to do the very minimum. The outer method also provides the doc string, so you don’t have to worry about that.
The base classes are BaseInput, BaseOutput and BaseIOPort (which is a subclass of the other two).
Locking
The calls to _receive() and _send() are protected by a lock, self._lock. As a result, all send and receive operations are thread safe.
Note
If your _receive() function actually blocks instead of letting the parent class handle it, poll() will not work. The two functions are protected by the same lock, so when receive() blocks it will also block other threads calling poll(). In this case you need to implement your own locking.
If you want to implement your own thread safety you can set the
_locking attribute in your class:
class MyInput(ports.BaseInput):
    _locking = False
    ...
An example of this is mido.backends.rtmidi, where the callback is used to feed an internal queue that receive() reads from.
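That pattern can be sketched without mido itself. In the sketch below (all names are made up for illustration; this is not mido's actual rtmidi backend code), a driver callback running in another thread feeds a queue.Queue, which does its own locking, so the port opts out of the base class lock:

```python
import queue
import threading

class CallbackInput:
    """Sketch of a callback-fed input port that handles its own locking."""

    _locking = False  # tell the base class not to wrap calls in its lock

    def __init__(self):
        self._queue = queue.Queue()  # queue.Queue is already thread safe

    def _callback(self, message):
        # Called by the MIDI driver from its own thread.
        self._queue.put(message)

    def receive(self, block=True):
        try:
            return self._queue.get(block=block)
        except queue.Empty:
            return None  # non-blocking call and nothing has arrived

port = CallbackInput()
threading.Thread(target=port._callback, args=('note_on',)).start()
print(port.receive())  # -> note_on
```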
Examples
A full example of a device port for the imaginary MIDI library fjopp:
import fjopp
from mido.ports import BaseIOPort

# This defines an I/O port.
class FjoppPort(BaseIOPort):
    def _open(self, **kwargs):
        self._device = fjopp.open_device(self.name)

    def _close(self):
        self._device.close()

    def _send(self, message):
        self._device.write(message.bytes())

    def _receive(self, block=True):
        while True:
            data = self._device.read()
            if data:
                self._parser.feed(data)
            else:
                return
If fjopp supports blocking read, you can do this to actually block on the device instead of letting receive() and friends poll and wait for you:
def _receive(self, block=True):
    if block:
        # Actually block on the device.
        # (``read_blocking()`` will always return some data.)
        while not self._messages:
            data = self._device.read_blocking()
            self._parser.feed(data)
    else:
        # Non-blocking read like above.
        while True:
            data = self._device.read()
            if data:
                self._parser.feed(data)
            else:
                return
This can be used for any kind of port that wants to block on a pipe, a socket or another input source. Note that Mido will still use polling and waiting when receiving from multiple ports (for example in a MultiPort).
If you want separate input and output classes, but the _open() and _close() methods have a lot in common, you can implement this using a mix-in.
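A minimal sketch of that mix-in idea. A fake device class stands in for a real MIDI device so the example runs standalone; a real version would combine the mix-in with mido's BaseInput and BaseOutput instead:

```python
class FakeDevice:
    """Stand-in for a MIDI device handle (illustration only)."""
    def __init__(self, name):
        self.name = name
        self.closed = False

    def close(self):
        self.closed = True

class DeviceMixin:
    """Shared _open()/_close() logic for separate input and output ports."""
    def _open(self, **kwargs):
        self._device = FakeDevice(self.name)

    def _close(self):
        self._device.close()

class FakeInput(DeviceMixin):
    def __init__(self, name):
        self.name = name
        self._open()

class FakeOutput(DeviceMixin):
    def __init__(self, name):
        self.name = name
        self._open()

port = FakeInput('dev1')
print(port._device.name)    # -> dev1
port._close()
print(port._device.closed)  # -> True
```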
Sometimes it’s useful to know inside the methods whether the port
supports input or output. The way to do this is to check for the
methods send() and receive(), for example:
def _open(self, **kwargs):
    if hasattr(self, 'send'):
        # This is an output port.
        ...

    if hasattr(self, 'receive'):
        # This is an input port.
        ...

    if hasattr(self, 'send') and hasattr(self, 'receive'):
        # This is an I/O port.
        ...
Attributes
A port has some attributes that can be useful inside your methods.
name
The name of the port. The value is device specific and does not have to be unique. It can have any value, but must be a string or
None.
This is set by
__init__().
closed
True if the port is closed. You don’t have to worry about this inside your methods.
_messages
This is a
collections.dequeof messages that have been read and are ready to be received. This is a shortcut to
_parser.messages.
_device_type (Optional.)
If this attribute exists, it’s a string which will be used in
__repr__(). If it doesn’t exist, the class name will be used instead. | https://mido.readthedocs.io/en/latest/implementing_ports.html | CC-MAIN-2022-33 | refinedweb | 959 | 68.16 |
25 September 2012 15:55 [Source: ICIS news]
MOSCOW (ICIS)--Russia's Lukoil on Tuesday restarted the ethylene and propylene units at its Stavrolen petrochemical plant which were shut down after a fire in December last year, it said.
The high-density polyethylene (HDPE) unit is expected to restart in coming days, a source at the oil company said.
Stavrolen's PP unit was restarted on 11 March this year following the fire on 15 December 2011.
Lukoil had previously pledged to restart ethylene, propylene and HDPE facilities by April. This was postponed to July and then | http://www.icis.com/Articles/2012/09/25/9598532/russias-lukoil-restarts-stavrolen-ethylene-propylene-units.html | CC-MAIN-2015-14 | refinedweb | 101 | 60.65 |
I’m looking to use GR along with Interact.jl for a 3D visualization in IJulia, and I noticed that
GR.trisurface seems to have some difficulties with overlapping polygons (as matplotlib does).
Is there currently a way to achieve a ray-traced surface plot in Julia? I have been using Asymptote to render such figures statically, and it does a terrific job, but I would prefer a Julia-based solution.
import GR
GR.inline("png")

f(x,y) = x^2+y^2 < 1e-3 ? 0 : x*y/(x^2+y^2)

rad = 1
rv = linspace(0,rad,5)
θv = linspace(0,2π,12)
xv = [r*cos(θ) for r=rv, θ=θv]
yv = [r*sin(θ) for r=rv, θ=θv]
zv = [f(r*cos(θ),r*sin(θ)) for r=rv, θ=θv];
xv, yv, zv = map(a->vcat(a...),(xv,yv,zv))

GR.setviewport(0.1, 0.95, 0.1, 0.95)
GR.setwindow(-2, 2, -2, 2)
GR.setspace(-2, 2, 85, 35)
GR.setmarkersize(0.1)
GR.trisurface(xv,yv,zv)
GR.polyline3d([0,0],[0,0],[0,2])
GR.polyline3d([0,0],[0,2],[0,0])
GR.polyline3d([0,2],[0,0],[0,0])
GR.show()
Dapper allows you to map a single row to multiple objects. This is a key feature if you want to avoid extraneous querying and eager load associations.
Example:
Consider 2 classes: Post and User.
When I do this (my classes have different names, but the same construct), I get a Post and a User, a Post and a User. I'm using the Web API, so this is all JSON, if that matters. This is the way I'd see it if I ran straight SQL in Management Studio: you get the many rows and the corresponding User records.
What if I want to send back the JSON that has the User once and all the posts in an array, then the next User, array of posts, etc.
id  title     content   id  name
1   Article1  Content1  55  Smith
2   Article2  Content2  55  Smith
3   Article3  Content3  55  Smith
I get the JSON back that has the User information over and over (as expected but not wanted). It's backwards.
What I want is a JSON object that has a format like this (I think this is correct):
{ "User": 55, "Name": "Smith", "Post": [ { "id": 1, "title": "title1", "content":"MyContent1" }, { "id": 2, "title": "title2", "content":"MyContent2" }, { "id": 3, "title": "title3", "content":"MyContent2" } ] }
How do I do this? Right now I get the reverse. I thought I would simply change the classes around, but I did not because of the instructions on Github, the "makes more sense" part. I am using this,
(List<Post>)db.Query<Post, User, Post>(sqlString, (post, user) => {
    post.user = user;
    return post;
}, splitOn: "id");
I know I don't need the splitOn here, but in my real query the name is different than id.
This is pretty close:
public class Shop {
    public int? Id { get; set; }
    public string Name { get; set; }
    public string Url { get; set; }
    public IList<Account> Accounts { get; set; }
}

public class Account {
    public int? Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public string Country { get; set; }
    public int ShopId { get; set; }
}

var lookup = new Dictionary<int, Shop>();

var shops = conn.Query<Shop, Account, Shop>(@"
    SELECT s.*, a.*
    FROM Shop s
    INNER JOIN Account a ON s.Id = a.ShopId
    ", (s, a) => {
        Shop shop;
        if (!lookup.TryGetValue(s.Id.Value, out shop)) {
            lookup.Add(s.Id.Value, shop = s);
        }
        if (shop.Accounts == null)
            shop.Accounts = new List<Account>();
        shop.Accounts.Add(a);
        return shop;
    });

var resultList = lookup.Values;
It makes the first object the identifier. Not sure if I can use it like that or not. But this does do the array of books like I was asking, and I did not have to create a special object. Originally, it was supposed to be on Google Code, but I couldn't find this test on Github.
Another option is to use .QueryMultiple
[Test] public void TestQueryMultiple() { const string sql = @"select UserId = 55, Name = 'John Doe' select PostId = 1, Content = 'hello' union all select PostId = 2, Content = 'world'"; var multi = _sqlConnection.QueryMultiple(sql); var user = multi.Read<User>().Single(); user.Posts = multi.Read<Post>().ToList(); Assert.That(user.Posts.Count, Is.EqualTo(2)); Assert.That(user.Posts.First().Content, Is.EqualTo("hello")); Assert.That(user.Posts.Last().Content, Is.EqualTo("world")); }
Update:
To return multiple users and their posts:
[Test] public void TestQueryMultiple2() { const string sql = @"select UserId = 55, Name = 'John Doe' select UserId = 55, PostId = 1, Content = 'hello' union all select UserId = 55, PostId = 2, Content = 'world'"; var multi = _sqlConnection.QueryMultiple(sql); var users = multi.Read<User>().ToList(); var posts = multi.Read<Post>().ToList(); foreach (var user in users) { user.Posts.AddRange(posts.Where(x => x.UserId == user.UserId)); } Assert.That(users.Count, Is.EqualTo(1)); Assert.That(users.First().Posts.First().Content, Is.EqualTo("hello")); Assert.That(users.First().Posts.Last().Content, Is.EqualTo("world")); } | https://dapper-tutorial.net/knowledge-base/41884734/how-do-i-return-one-to-many-records-in-a-specific-order-with-dapper-and-multi-mapping- | CC-MAIN-2021-21 | refinedweb | 631 | 56.66 |
Hi.. -... server please tell me.
Hi friend,
For solving the problem.......
i am Getting Some errors in Struts - Struts
i am Getting Some errors in Struts I am Learning Struts Basics,I am Trying examples do in this Site Examples.i am getting lot of errors.Please Help me
Reply Me - Struts
Reply Me Hi Friends,
I am new in struts please help me... file,connection file....etc please let me know its very urgent
Hi Soniya,
I am sending you a link. This link will help you.
Please
Ho to get virtual key code.
Ho to get virtual key code. Hi
How to get virtual key code from button.I made on screen keyboard.But it not proper work for windows ,shift,alt and ctrl key.
pls suggest
Tell me - Struts
Directory Structure for Struts Tell me the Directory Structure for Struts
struts - Struts
struts Hi,
I am new to struts.Please send the sample code for login... the code immediately.
Please its urgent.
Regards,
Valarmathi Hi Friend....shtml
http
Tell me - Struts
Directory Structure with example program Execution Tell me the Directory Structure with example program Execution .Again me.. - Java Beginners
Hi .Again me.. Hi Friend......
can u pls send me some code......
REsponse me.. Hi friend,
import java.io.*;
import java.awt....://
Thanks. I am sending running code
hi.......
){
}
}
can anyone tell wts wrong with this code??
Hi,
Check it:
import...hi....... import java.awt.;
import java.sql.*;
import javax.swing.*;
import java.awt.event.*
public class NewJFrame extends javax.swing.JFrame
Hi
Hi I have got this code but am not totally understanding what...;
import java.util.Scanner;
private static int nextInt() {
public class MatchingPair{
private static class input {
private static int nextInt() {
throw new
Hi... - Struts
Hi... Hi,
If i am using hibernet with struts then require... of this installation Hi friend,
Hibernate is Object-Oriented mapping tool... more information,tutorials and examples on Struts with Hibernate visit
Struts
Struts Hi i am new to struts. I don't know how to create struts please in eclipse help frnds.. help me..
hi frnds.. help me.. i ve a doubt in incompatible type error (block letter). plz help me clear this error!
thanks in advance.
This is my code.
double velo[][] = new double [task] [resource];
for(int i=0;iList<Double>
new
new hi
i am jane
pls explain the
difference between heap memory and stack memory
new
new hi
i am jane
pls explain the
difference between
string greeting = good morning
string greeting = new string ("good morning
hi - SQL
hi hi sir,i want to create a database in oracle,not in my sql sir,plz tell me how to create a database. Hi Friend,
Try the following...*;
public class OracleExample {
public static void main (String[] args) {
try
Tell me - Java Beginners
Tell me
how to create a valid.js file please tell me and give the write code
Thanks Hi friend,
Please give details for requirement of this "valid.js" file.
For read more information
http
Struts - Struts
Struts hi,
I am new in struts concept.so, please explain... explain and details send to me. Hi friend,
Please add struts.jar...://
I hope that, this link will help you
plz tell me
CountButtonClicks();
}
JButton button1 = new JButton("Click Me!");
int clickCount...plz tell me how to get no. of times the 'button' is pressed
....
import javax.swing.*;
import java.awt.event.*;
public class
Hi Hi
How to implement I18N concept in struts 1.3?
Please reply to
please tell me
please tell me Blockquote
Blockquote> BlockquoteBlockquote
how... = DriverManager.getConnection("jdbc:odbc:student");
File imgfile = new File("C:/rose.jpg");
FileInputStream fin = new FileInputStream(imgfile);
PreparedStatement pre
best Struts material - Struts
best Struts material hi ,
I just want to learn basic Struts.Please send me the best link to learn struts concepts
Hi Manju...://
Thanks
struts
struts <p>hi here is my code can you please help me to solve...*;
import org.apache.struts.action.*;
public class LoginAction extends Action...(ActionMapping am,HttpServletRequest req)
{
ActionErrors ae=new ActionErrors tag - Struts
Struts tag I am new to struts,
I have created a demo struts application in netbean,
Can any body please tell me what are the steps to add new tags to any jsp
Hi I want import txt fayl java.please say me...
Hi,
Please clarify your problem! how i get answer of my question which is asked by me for few minutes ago.....rply
Struts - Struts
Struts hi
can anyone tell me how can i implement session tracking in struts?
please it,s urgent........... session tracking? you mean... for later use in in any other jsp or servlet(action class) until session
hi
for such a programme... plz help me... i hope for a speedy reply from ur side...hi sir i've a project on railway reservation... i need to connect netbeans and mysql with the help of jdbc driver... then i need to design the frame
hi!
hi!
struts
technologies like servlets, jsp,and struts.
i am doing one struts application where i...struts hi
Before asking question, i would like to thank you... into the database could you please give me one example on this where i i | http://roseindia.net/tutorialhelp/comment/2557 | CC-MAIN-2014-10 | refinedweb | 867 | 76.62 |
Vbscript sqlserver työt
...-BackupAction "Log" -BackupFile "sqlserversharetest[kirjaudu nähdäksesi URL:n]" -ServerInstance "sqlserver1" Backup-SqlDatabase -Database "Databasenumber2" -BackupFile "sqlserver1sharetest[kirjaudu nähdäksesi URL:n]" -ServerInstance "sqlserver1" Backup-SqlDatabase -Database "Databasenu...
i am looking for VBScript developer write few small scripts.
Rekisteröidy tai kirjaudu sisään nähdäksesi tiedot.
New Web Project SRS Preparation and design Arctechture and dat...in given estimated timelines (the project will be design and develop in asp.net, c#, sqlserver (with mvc or 3 tier) or in PHP, MYSQL) Database design and application artechture design experience will be preferabale. database will be sqlserver or my sql thanks very much
...welcome, user reset /recovery password and changes user profile. 3. Configuration such like SMTP account, menu configuration 4. All configuration should be store in database sqlserver using ado.net. 5. All source code should be in raw project and not dependency with other library such like dll etc. If the the library call other namespace or dll should be
We Want to transfer Azure APP based on (ASP.NET and Azure DB) to Local Server with (IIS + SQLSERVER) Environment already setup need to configure connection
I have a project ready and now i need to make some changes which include the .net code part , sqlserver and Some of UI. Nothing so complicated. Just need to manage things.
...to the website based on the collected data. Technologies to be used - Javascript - Web Services (PHP, .Net, Node, any different server side language) - Database (SqlServer / MariaDB / MySQL / Any DB engine that serves the needs) Benefits This topic will help to understand the psychology of users visiting our website and services. A better
I need someone to help me connect to a remote SqlServer DB using jdts jar and Android Studio.
Windows SQL PL/SQL tanto para Oracle y SQLServer
...ayude a automatizar el proceso de transferencia de bodegas. Son 11 tablas que estan en una base de datos SQLSERVER, y debe estar realizado en java o php, adicional debe permitir al precionar un boton que se conecte a la base de datos SQLSERVER y ejecute 2 procedimientos almacenados que ya se encuentran realizados. La base de datos esta lista.
...expert on telnet so we will give mockup screens to her/him so s/he will develop application.. We
I Need a
We need to convert DTS Packages with VBScript ActiveX Tasks to Visual Basic 2008 to run as an SSIS Integration Services Script Task. Convert the contents of "[kirjaudu nähdäksesi URL:n]" to Visual Basic 2008 to run in [kirjaudu nähdäksesi URL:n]"
Necesito algunos cambios en un sitio web magento existente, quisiera contactar con personas con experiencia en .NET para realizar mejoras en tienda online sincronizada con SQLServer y programada en .NET Podemos debatir cualquier detalle a través del chat.
.. : [kirjaudu nähdäksesi URL:n]
1). 3+ yrs Experience in Asp.Net Web Application Development using .Net fw3.5 and above. 2). WebForms Development 3). Ajax W...Development Experience on AspNet MVC 2). jQuery, Knockoutjs, Angularjs or other JavaScript MVVM 3). Extensively worked on Database programming preferably Oracle and/or SQLServer with Stored Procedures, Advanced SQL Queries
Need a SQLServer DBA who can help with designing tables for a web application. Would not expect that more than 16 hours would be needed. A little trouble understanding one to many relations and especially how to understand and resolve many to many relationships....
Senior .NET Web Developer C# we are looking for a .NET programmer to the purchase of a package of 150 hours/month. Technologies: C# Linq Entity Framework .NET....NET Web Developer C# we are looking for a .NET programmer to the purchase of a package of 150 hours/month. Technologies: C# Linq Entity Framework .NET WEB SQLServer
Senior Java Programmer Spring boot we are looking for a Java programmer to the purchase of a package of 150 hours/month. Technologies: * Spring Boot - Sp...Java programmer to the purchase of a package of 150 hours/month. Technologies: * Spring Boot - Spring security - Database Migration * Groovy on Grails * SQLServer
I would like someone that can modify this script so that I can import a csv file list of external accounts under each user account. Here is the link to the script that needs to be modified. [kirjaudu nähdäksesi URL:n] It needs to specify all these values [kirjaudu nähdäksesi URL:n] Please let me know if you have any questions. Looking to do this as soon as possible....
...com webforms) - precisa ser responsivo com uso de bootstrap (tema gratuito) - qq pluging utilizado e todo o fonte deve ser entregue - utilizar banco gratuito (mysql ou sqlserver express) - visualmente parecido com o buscape Funcionalidades: - opção de logar para consultar a pontuação do cliente e atualizar os dados cadastrais - site de venda de
Create a Web system for vehicle management. Around 25 screens need to be developed to create this application. As an input to wor...the macro view of deliveries. Ex: Activity name, Delivery date and responsible Technologies: Required: Spring Boot, Database Migration, Spring Security, Tomcat 7 and SQLServer. Desirable: Angular with TypeScript
...submit with your application (with confidential details omitted). o Need someone who is fluent in
...desarrollar una plataforma web SAAS (cloud) junto con una APP para Android y unirlas a tecnología RFID para el sector farmacéutico. Hay datos que habrán de extraerse de un SQLserver local (mysql). Es una pequeña aplicación de 5 tablas para realizar inventario. Buscamos que el programador sea de Barcelona o cerca. Requisitos posibles: • C# ...
e commerce website using php, magento, sqlserver, mysql, ssms
...
...funcionalidad para biometrico digital persona 4500, este es un video que me encontre por la red de como funciona y es la funcionalidad que buscamos.
...solution on local and it compiled without any error but some warnings.) + I think this project can work with PostgreSQL or SqlServer. I have little experience with SQL server but if you prefer and suggest PostgreSQL is better than SqlServer, you can use it while installing. Its your choice. TL;DR Install the project from Github to Windows Server. The.
Job Description Exp : 2-5 yrs (Angular 2
I would like to get data of the pro e-sport matches that are being played. I am primarly interested in...
Se precisa un desarrollador/a Wordpress ...Maquetación Web (HTML y CSS), PHP, WordPress y MySQL Se valorarán: Muy positivamente conocimientos de SEO Positivamente conocimientos de HTML5, CSS3, jQuery, ASP.NET, SQLServer Imprescindible: Confidencialidad, compromiso y sobretodo ser capaz de respondernos en unos tiempos más o menos lógicos.
...make mirroring of 9 tables from 3 servers (3 to 3) in 9 tables on 1 server. Sources: 3 servers on different ip addresses Windows Server 2008 r2 Enterprise sp1 Microsoft SQLServer 2012 Microsoft SQL Server Management Studio 11.0.3153.0 Purpose: Debian Linux 4.2.0-23-generic x86_64 DB: MariaDB based on MySQL 5.7.1. The table structure is identical | https://www.fi.freelancer.com/job-search/vbscript-sqlserver/3/ | CC-MAIN-2018-51 | refinedweb | 1,139 | 58.08 |
Parse Trees
With the implementation of our tree data structure complete, we now look at an example of how a tree can be used to solve some real problems. In this section we will look at parse trees. Parse trees can be used to represent real-world constructions like sentences or mathematical expressions.
The diagram below shows the hierarchical structure of a simple sentence. Representing a sentence as a tree structure allows us to work with the individual parts of the sentence by using subtrees.
We can also represent a mathematical expression such as as a parse tree, as shown below.
We have already looked at fully parenthesized expressions, so what do we know about this expression? We know that multiplication has a higher precedence than either addition or subtraction. Because of the parentheses, we know that before we can do the multiplication we must evaluate the parenthesized addition and subtraction expressions. The hierarchy of the tree helps us understand the order of evaluation for the whole expression. Before we can evaluate the top-level multiplication, we must evaluate the addition and the subtraction in the subtrees. The addition, which is the left subtree, evaluates to 10. The subtraction, which is the right subtree, evaluates to 3. Using the hierarchical structure of trees, we can simply replace an entire subtree with one node once we have evaluated the expressions in the children. Applying this replacement procedure gives us the simplified tree shown below.
In the rest of this section we are going to examine parse trees in more detail. In particular we will look at how to build a parse tree from a fully parenthesized mathematical expression, and how to evaluate the expression stored in a parse tree.
The first step in building a parse tree is to break up the expression string into a list of tokens. There are four different kinds of tokens to consider: left parentheses, right parentheses, operators, and operands. We know that whenever we read a left parenthesis we are starting a new expression, and hence we should create a new tree to correspond to that expression. Conversely, whenever we read a right parenthesis, we have finished an expression. We also know that operands are going to be leaf nodes and children of their operators. Finally, we know that every operator is going to have both a left and a right child.
Using the information from above we can define four rules as follows:
- If the current token is a
'(', add a new node as the left child of the current node, and descend to the left child.
- If the current token is in the list
['+','-','/','*'], set the root value of the current node to the operator represented by the current token. Add a new node as the right child of the current node and descend to the right child.
- If the current token is a number, set the root value of the current node to the number and return to the parent.
- If the current token is a
')', go to the parent of the current node.
Before writing the Python code, let’s look at an example of the rules
outlined above in action. We will use the expression . We
will parse this expression into the following list of character tokens
['(', '3', '+', '(', '4', '*', '5' ,')',')']. Initially we will
start out with a parse tree that consists of an empty root node.
The figures below illustrate the structure and contents
of the parse tree, as each new token is processed.
Using the above, let’s walk through the example step by step:
- Create an empty tree.
- Read ( as the first token. By rule 1, create a new node as the left child of the root. Make the current node this new child.
- Read 3 as the next token. By rule 3, set the root value of the current node to 3 and go back up the tree to the parent.
- Read + as the next token. By rule 2, set the root value of the current node to + and add a new node as the right child. The new right child becomes the current node.
- Read a ( as the next token. By rule 1, create a new node as the left child of the current node. The new left child becomes the current node.
- Read a 4 as the next token. By rule 3, set the value of the current node to 4. Make the parent of 4 the current node.
- Read * as the next token. By rule 2, set the root value of the current node to * and create a new right child. The new right child becomes the current node.
- Read 5 as the next token. By rule 3, set the root value of the current node to 5. Make the parent of 5 the current node.
- Read ) as the next token. By rule 4 we make the parent of * the current node.
- Read ) as the next token. By rule 4 we make the parent of + the current node. At this point there is no parent for + so we are done.
From the example above, it is clear that we need to keep track of the current node as well as the parent of the current node. A simple solution to keeping track of parents as we traverse the tree is to use a stack. Whenever we want to descend to a child of the current node, we first push the current node on the stack. When we want to return to the parent of the current node, we pop the parent off the stack.
Using the rules described above, along with the stack and binary tree abstract data types, we are now ready to write a Python function to create a parse tree. The code for our parse tree builder is presented below.
import operator OPERATORS = { '+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.truediv } LEFT_PAREN = '(' RIGHT_PAREN = ')' def build_parse_tree(expression): tree = {} stack = [tree] node = tree for token in expression: if token == LEFT_PAREN: node['left'] = {} stack.append(node) node = node['left'] elif token == RIGHT_PAREN: node = stack.pop() elif token in OPERATORS.keys(): node['val'] = token node['right'] = {} stack.append(node) node = node['right'] else: node['val'] = int(token) parent = stack.pop() node = parent return tree
The four rules for building a parse tree are coded as the four clauses
of the
if statement above. In each case you can see that the code implements
the rule.
Now that we have built a parse tree, we can write a function to evaluate it, returning the numerical result. To write this function, we will make use of the hierarchical nature of the tree to write an algorithm that evaluates a parse tree by recursively evaluating each subtree.
A natural base case for recursive algorithms that operate on trees is to check
for a leaf node. In a parse tree, the leaf nodes will always be operands.
Since numerical objects like integers and floating points require no further
interpretation, the
evaluate function can simply return the value stored in
the leaf node. The recursive step that moves the function toward the base case
is to call
evaluate on both the left and the right children of the current
node. The recursive call effectively moves us down the tree, toward a leaf
node.
To put the results of the two recursive calls together, we can simply apply the operator stored in the parent node to the results returned from evaluating both children. In the example from above we see that the two children of the root evaluate to themselves, namely 10 and 3. Applying the multiplication operator gives us a final result of 30.
The code for a recursive
evaluate function is shown below. First, we obtain
references to the left and the right children of the current node. If both the
left and right children evaluate to
None, then we know that the current node
is really a leaf node. If the current node is not a leaf node, look up the
operator in the current node and apply it to the results from recursively
evaluating the left and right children.
To implement the arithmetic, we use a dictionary with the keys
'+', '-', '*', and
'/'. The values stored in the dictionary are
functions from Python’s operator module. The operator module provides us
with the functional versions of many commonly used operators. When we
look up an operator in the dictionary, the corresponding function object
is retrieved. Since the retrieved object is a function, we can call it
in the usual way
function(param1, param2). So the lookup
OPERATORS['+'](2, 2) is equivalent to
operator.add(2, 2).
def evaluate(tree): try: operate = OPERATORS[tree['val']] return operate(evaluate(tree['left']), evaluate(tree['right'])) except KeyError: # no left or no right, so is a leaf - our base case return tree['val']
Finally, we will trace the
evaluate function on the parse tree we created
above. When we first call
evaluate, we pass the root of the entire tree as
the parameter
parse_tree. Then since the left and right children exist, we
look up the operator in the root of the tree, which is
'+', and which maps
to the
operator.add function. As usual for a Python function call, the first
thing Python does is to evaluate the parameters that are passed to the
function. In this case both parameters are recursive function calls to our
evaluate function. Using left-to-right evaluation, the first recursive call
goes to the left. In the first recursive call the
evaluate function is given
the left subtree. We find that the node has no left or right children, so we
are in a leaf node. When we are in a leaf node we just return the value stored
in the leaf node as the result of the evaluation. In this case we return the
integer 3.
At this point we have one parameter evaluated for our top-level call to
operator.add. But we are not done yet. Continuing the left-to-right
evaluation of the parameters, we now make a recursive call to evaluate
the right child of the root. We find that the node has both a left and a
right child so we look up the operator stored in this node,
'*', and
call this function using the left and right children as the parameters.
At this point you can see that both recursive calls will be to leaf
nodes, which will evaluate to the integers four and five respectively.
With the two parameters evaluated, we return the result of
operator.mul(4, 5). At this point we have evaluated the operands for
the top level
'+' operator and all that is left to do is finish the
call to
operator.add(3, 20). The result of the evaluation of the entire
expression tree for is 23. | https://bradfieldcs.com/algos/trees/parse-trees/ | CC-MAIN-2018-26 | refinedweb | 1,803 | 72.36 |
In this post, I am going to discuss a few tips on customizing debugging windows. This may be very helpful during debugging of an application. While debugging, you may want to simplify debug window or you may want to clean up all the unnecessary fields that are not important during debugging from debugging windows. Here are a few tips for customization of debug window.
Use DebuggerBrowsable attribute to customize the debugging windows:
DebuggerBrowsable
Use DebuggerDisplay attribute to customize the debugging display.
DebuggerDisplay
To use the above attributes, you have to use System.Diagnostics namespace.
System.Diagnostics
If you want to customize the view on debugger window for any properties during debugging, you can easily do it using DebuggerBrowsable attributes. You can apply these attributes for any properties, fields or for Indexer. DebuggerBrowsable attributes constructor takes DebuggerBrowsableState as argument. DebuggerBrowsableState is used to provide the instruction to debugger about how it is going to be displayed in the debugger window.
DebuggerBrowsableState
We can provide three states for DebuggerBrowsable attributes:
You can read the complete definition of these DebuggerBrowsableState at MSDN.
Now I am going to demonstrate the use of this DebuggerBrowsable attributes and DebuggerBrowsableState using an example.
Before starting, let’s consider having the following code block:
namespace DebuggerDemo
{
/// <span class="code-SummaryComment"><summary>
</span>
Now, first let’s see how the normal debugging window behaves. Just put a breakpoint at the end of main method and try to explore the debugging window. You will get a debugging window as the below picture, which is the expected debugging window view.
main
In the above picture, you can see that we are having 6 student objects and each one having a different value. As Addresses is a different class and used as properties with multiple value, hence it is in the collapsed mode.
Now, I want to see all the addresses along with all other properties with expanded mode and also want to hide the Marks properties. To achieve the above requirement, we have to add DebuggerBrowsable attributes for the Marks and Addresses properties in the Student class.
Marks
Now if you put the breakpoint in the same location and explore the debugging window, you will find the debugging window view as the below picture:
So, from the above picture, you can easily identify the changes in the debugging window view.
Here is the second tip. By using DebuggerDisplay attributes, you can define how a class or field will be displayed in the debugger window. Using DebuggerDisplay, you can change the debugger window message and variables that you want to display.
DebuggerDisplay
If you consider the above code sample and debug the application, by default, you will get the below snaps of debugging:
Here, for each student object, you are getting NameSpace.ClassName as display message by default. Now we can customize the display using DebuggerDisplay attributes. DebuggerDisplay attributes constructors take display name as arguments or you can pass the named parameter that you want to display over there.
student
NameSpace.ClassName
After making the above changes, if you run the same code you will find that custom display message with proper value of parameter that you have given in debuggerdisplay attributes.
debuggerdisplay
While using DebuggerDisplay, you have to make sure that you are giving the proper field name as argument within { }. Otherwise, the above tips will help you in customizing debugging windows.
Filed under: General, Tips and Tricks, Visual Studio
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Quoted From MSDN hope).
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/105590/Few-Tips-on-Customizing-Debugging-Window-View-in-V.aspx | CC-MAIN-2015-40 | refinedweb | 621 | 52.29 |
Simple implementation of archive writing to own ACF format binary file. More...
#include <CFileWriteArchive.h>
Simple implementation of archive writing to own ACF format binary file.
This imlementation is very fast and efficient and should be used if any standarized file format is needed.
Definition at line 24 of file CFileWriteArchive.h.
Definition at line 29 of file CFileWriteArchive.h.
Definition at line 30 of file CFileWriteArchive.h.
Contructor.
Begin of archive tag.
Reimplemented from iser::CBinaryWriteArchiveBase.
End of archive tag.
This method should be allways called after BeginTag is successfull called. If skipping of tag contains is supported, this will skip to the end of tag while archive reading. Otherwise you have to read contains of archive completely.
Reimplemented from iser::CBinaryWriteArchiveBase.
Force internal stream object to flush.
Return
true if the archive is valid (e.g.
the file medium can be accessed)
Definition at line 85 of file CFileWriteArchive.h.
Check if skiping to the end of tag on EndTag is supported.
Reimplemented from iser::CArchiveBase.
Process binary data block.
Implements iser::IArchive.
© 2007-2017 Witold Gantzke and Kirill Lepskiy | http://ilena.org/TechnicalDocs/Acf/classifile_1_1_c_file_write_archive.html | CC-MAIN-2018-30 | refinedweb | 182 | 62.75 |
# Effcee

Effcee is a C++ library for stateful pattern matching of strings, inspired by LLVM's FileCheck command.
## Example

The following is from `examples/main.cc`:
```cpp
#include <iostream>
#include <sstream>

#include "effcee/effcee.h"

// Checks standard input against the list of checks provided as command line
// arguments.
//
// Example:
//    cat <<EOF >sample_data.txt
//    Bees
//    Make
//    Delicious Honey
//    EOF
//    effcee-example <sample_data.txt "CHECK: Bees" "CHECK-NOT:Sting" "CHECK: Honey"
int main(int argc, char* argv[]) {
  // Read the command arguments as a list of check rules.
  std::ostringstream checks_stream;
  for (int i = 1; i < argc; ++i) {
    checks_stream << argv[i] << "\n";
  }

  // Read stdin as the input to match.
  std::stringstream input_stream;
  std::cin >> input_stream.rdbuf();

  // Attempt to match.  The input and checks arguments can be provided as
  // std::string or pointer to char.
  auto result = effcee::Match(input_stream.str(), checks_stream.str(),
                              effcee::Options().SetChecksName("checks"));

  // Successful match result converts to true.
  if (result) {
    std::cout << "The input matched your check list!" << std::endl;
  } else {
    // Otherwise, you can get a status code and a detailed message.
    switch (result.status()) {
      case effcee::Result::Status::NoRules:
        std::cout << "error: Expected check rules as command line arguments\n";
        break;
      case effcee::Result::Status::Fail:
        std::cout << "The input failed to match your check rules:\n";
        break;
      default:
        break;
    }
    std::cout << result.message() << std::endl;
    return 1;
  }
  return 0;
}
```
For more examples, see the matching tests in `effcee/match_test.cc`.
## Status

Effcee is mature enough to be relied upon by third party projects, but could be improved.
What works:
What is left to do:

- Shorthand definitions for regular expressions: if `%%` appears where a regular expression is expected, then it expands to the regular expression for a local identifier in LLVM assembly language, i.e. `%[-a-zA-Z$._][-a-zA-Z$._0-9]*`. This enables you to write precise tests with less fuss.
What is left to do, but lower priority:
## Licensing and contributing

Effcee is licensed under terms of the Apache 2.0 license. If you are interested in contributing to this project, please see `CONTRIBUTING.md`.
This is not an official Google product (experimental or otherwise), it is just code that happens to be owned by Google. That may change if Effcee gains contributions from others. See the `CONTRIBUTING.md` file for more information. See also the `AUTHORS` and `CONTRIBUTORS` files.
effcee/ : library source code, and tests
third_party/: third party open source packages, downloaded separately
examples/: example programs
Effcee depends on the RE2 regular expression library.
Effcee tests depend on Googletest and Python 3.
In the following sections,
$SOURCE_DIR is the directory containing the Effcee source code.
git clone $SOURCE_DIR cd $SOURCE_DIR/third_party git clone git clone cd $SOURCE_DIR/
Note: There are two other ways to manage third party sources:
googletestand
re2. They will be automatically downloaded by Bazel during build. Bazel will suggest adding
sha256attributes to each repository rule to get hermetic builds (these notices are safe to ignore if you are not interested in hermetic builds).
googletestprojects before adding Effcee.
Ensure you have the requisite tools -- see the tools subsection below.
Decide where to place the build output. In the following steps, we'll call it
$BUILD_DIR. Any new directory should work. We recommend building outside the source tree, but it is also common to build in a (new) subdirectory of
$SOURCE_DIR, such as
$SOURCE_DIR/build.
4a) Build and test with Ninja on Linux or Windows:
cd $BUILD_DIR cmake -GNinja -DCMAKE_BUILD_TYPE={Debug|Release|RelWithDebInfo} $SOURCE_DIR ninja ctest
4b) Or build and test with MSVC on Windows:
cd $BUILD_DIR cmake $SOURCE_DIR cmake --build . --config {Release|Debug|MinSizeRel|RelWithDebInfo} ctest -C {Release|Debug|MinSizeRel|RelWithDebInfo}
4c) Or build with MinGW on Linux for Windows: (Skip building threaded unit tests due to Googletest bug 606)
cd $BUILD_DIR cmake -GNinja -DCMAKE_BUILD_TYPE={Debug|Release|RelWithDebInfo} $SOURCE_DIR \ -DCMAKE_TOOLCHAIN_FILE=$SOURCE_DIR/cmake/linux-mingw-toolchain.cmake \ -Dgtest_disable_pthreads=ON ninja
4d) Or build with Bazel on Linux:
cd $SOURCE_DIR bazel build -c opt :all
After a successful build, you should have a
libeffcee library under the
$BUILD_DIR/effcee/ directory (or
$SOURCE_DIR/bazel-bin when building with Bazel).
The default behavior on MSVC is to link with the static CRT. If you would like to change this behavior
-DEFFCEE_ENABLE_SHARED_CRT may be passed on the cmake configure line.
By default, Effcee registers two tests with
ctest:
effcee-test: All library tests, based on Googletest.
effcee-example: Executes the example executable with sample inputs.
Running
ctest without arguments will run the tests for Effcee as well as for RE2.
You can disable Effcee's tests by using
-DEFFCEE_BUILD_TESTING=OFF at configuration time:
cmake -GNinja -DEFFCEE_BUILD_TESTING=OFF ...
The RE2 tests run much longer, so if you're working on Effcee alone, we suggest limiting ctest to tests with prefix
effcee:
ctest -R effcee
Alternately, you can turn off RE2 tests entirely by using
-DRE2_BUILD_TESTING=OFF at configuration time:
cmake -GNinja -DRE2_BUILD_TESTING=OFF ...
For building, testing, and profiling Effcee, the following tools should be installed regardless of your OS:
On Linux, if cross compiling to Windows: - MinGW: A GCC-based cross compiler targeting Windows so that generated executables use the Microsoft C runtime libraries.
On Windows, the following tools should be installed and available on your path:
diff.
Third party source locations:
EFFCEE_GOOGLETEST_DIR: Location of
googletestsources, if not under
third_party.
EFFCEE_RE2_DIR: Location of
re2sources, if not under
third_party.
EFFCEE_THIRD_PARTY_ROOT_DIR: Alternate location for
googletestand
re2subdirectories. This is used if the sources are not located under the
third_partydirectory, and if the previous two variables are not set.
Compilation options:
DISABLE_RTTI. Disable runtime type information. Default is enabled.
DISABLE_EXCEPTIONS. Disable exceptions. Default is enabled.
EFFCEE_ENABLE_SHARED_CRT. See above.
Controlling samples and tests:
EFFCEE_BUILD_SAMPLES. Should Effcee examples be built? Defaults to
ON.
EFFCEE_BUILD_TESTING. Should Effcee tests be built? Defaults to
ON.
RE2_BUILD_TESTING. Should RE2 tests be built? Defaults to
ON.
We track bugs using GitHub -- click on the “Issues” button on the project's GitHub page. | https://android.googlesource.com/platform/external/effcee/+/291037172dcb79318a7d0214a194e985c9ad0d7b | CC-MAIN-2020-34 | refinedweb | 1,019 | 58.08 |
descriptive information about an Fargate profile.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
describe-fargate-profile --cluster-name <value> --fargate-profile-name <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--cluster-name (string)
The name of the Amazon EKS cluster associated with the Fargate profile.
--fargate-profile-name (string)
The name of the Fargate profile.
fargateProfile -> (structure)
The full description of your Fargate profile.
fargateProfileName -> (string)The name of the Fargate profile.
fargateProfileArn -> (string)The full Amazon Resource Name (ARN) of the Fargate profile.
clusterName -> (string)The name of the Amazon EKS cluster that the Fargate profile belongs to.
createdAt -> (timestamp)The Unix epoch timestamp in seconds for when the Fargate profile was created.
podExecutionRoleArn -> (string)The Amazon Resource Name (ARN) of the pod execution role to use for pods that match the selectors in the Fargate profile. For more information, see Pod Execution Role in the Amazon EKS User Guide .
subnets -> (list)
The IDs of subnets to launch pods into.
(string)
selectors -> (list)
The selectors to match for pods to use this Fargate profile.
(structure)
An object representing an Fargate profile selector.
namespace -> (string)The Kubernetes namespace that the selector should match.
labels -> (map)
The Kubernetes labels that the selector should match. A pod must contain all of the labels that are specified in the selector for it to be considered a match.
key -> (string)
value -> (string)
status -> (string)The current status of the Fargate profile.
tags -> (map)
The metadata applied to the Fargate profile to assist with categorization and organization. Each tag consists of a key and an optional value. You define both. Fargate profile tags do not propagate to any other resources associated with the Fargate profile, such as the pods that are scheduled with it.
key -> (string)
value -> (string) | https://docs.aws.amazon.com/cli/latest/reference/eks/describe-fargate-profile.html | CC-MAIN-2021-49 | refinedweb | 300 | 50.63 |
Device independent layout for QML
This article needs to be updated: If you found this article useful, please fix the problems below then delete the {{ArticleNeedsUpdate}} template from the article to remove this warning.
Reasons: hamishwillee (21 Feb 2012)
Article provides only part of the layout/scalability/orientation story. Suggest it should be extended to include additional information from the linked SeeAlso section. I've also added some Comments in the talk page below.
QML components often need a width and height property, but setting explicit values causes issues when attempting to use the same code on different platforms. For example. a QML application that runs on a Nokia N900 may be designed for the 800 by 400 pixel screen, but the same application on a desktop may be used at 800 by 600 pixels, or it may be resized to be larger or smaller. The solution is to assign a size to the top-level item in your qml file and make all subsequent sizes relative to the main one. The root size should be made big enough so that the item, when viewed by itself in the qmlviewer for testing, is large enough and properly proportioned to be useful. But if all the other sizes are dependent on the root size, the item can be resized with impunity.
Take the following component (saved as ColorRectangle.qml, which creates a colored rectangle with a word in it:
import Qt 4.7
Item {
width: 250 //these are the only explicit sizes set
height: 250 //all others are relative
property alias rectColor: rect.color
property alias rectText: rectText.text
Rectangle {
id: rect
color: "red"
anchors.fill: parent
Text {
id: rectText
anchors.centerIn: parent
text: "RED"
font.pixelSize: parent.height * 0.25
}
}
}
If this is viewed in qmlviewer, it shows up as a red square. Note that it can be resized, and the text grows and shrinks relative to the overall size.
Now we use this in another qml file called Sizing.qml which is saved in the same directory with ColorRectangle.qml:
import Qt 4.7
Item {
width: 800 //these are the only explicit sizes set
height: 600 //all others are relative
Grid {
width: parent.width
height: parent.height
columns: 2
ColorRectangle {
rectColor: "red"
rectText: "RED"
width: parent.width / 2 //our component is resized
height: parent.height / 2
}
ColorRectangle {
rectColor: "green"
rectText: "GREEN"
width: parent.width / 2
height: parent.height / 2
}
ColorRectangle {
rectColor: "orange"
rectText: "ORANGE"
width: parent.width / 2
height: parent.height / 2
}
ColorRectangle {
rectColor: "purple"
rectText: "PURPLE"
width: parent.width / 2
height: parent.height / 2
}
}
Rectangle {
width: parent.width * 0.1
height: width
anchors.centerIn: parent
color: "gray"
radius: height/10
}
}
When we run this in qmlviewer we see that we have resized our ColorRectangle objects to be one quarter of the total size, and the text size grows with the rectangle size. We can resize this window (it only lets us grow larger than the defined size, not smaller) and all the objects resize cleanly. This will look good at nearly any common aspect ratio or size combination, and because we made everything relative there is no further work to do in resizing.
We have also added an additional rectangle that sits on top of the others and is centered. This has its height set to be equal to its width, and no matter how we resize our window, it always looks square. Also, the radius property has been designated as a tenth of the "height" property, so that as it resizes the roundness of the corner will look the same.
And object sizes can be related to more than just the parents--objects can refer to one another, as long as you are careful not to make the references circular. Object A can have a size of parent.width / 3; Object B can have a size of A.width; Object C can then have a size of parent.width - A.width - B.width. You can even set a property (e.g property int buttonWidth: main.width/10 and set all of your button objects to have width: buttonWidth. However it is done, the key is to avoid using real numbers anywhere but in the root element.
One additional note: the sizes can be divided or multiplied, and whole numbers or decimals can be used to get the precise look you want. | http://developer.nokia.com/community/wiki/index.php?title=Device_independent_layout_for_QML&oldid=175339 | CC-MAIN-2014-35 | refinedweb | 724 | 65.42 |
Qt/QML 5.6.0, Android virtual keyboard + AA_EnableHighDpiScaling problem
Normally, when you have a TextField and you select it for text input, the Android virtual keyboard pops up, and the text field control is magically raised above the keyboard so you can see what you are typing. This works fine, until you enable AA_EnableHighDpiScaling. In this case, the control is covered and you can't see what you are typing. Is there a workaround for this? Btw: I tried it with Qt labs controls, same thing happens.
Here is a minimal example:
main.cpp
(); }
main.qml
import QtQuick 2.6 import QtQuick.Controls 1.5 import QtQuick.Dialogs 1.2 ApplicationWindow { visible: true width: 640 height: 480 title: qsTr("Hello World") TextField { width:parent.width anchors.bottom: parent.bottom } }
I found a simple workaround (it doesn't quite "fix it" because the app works a bit differently, but at least the input areas are visible when the keyboard is visible in hdpi scaling mode):
in AndroidManifest.xml, add in your activity section:
<activity android:
I guess android is selecting the "adjustPan" mode for some reason, and it just doesn't workout right in some cases. | https://forum.qt.io/topic/66457/qt-qml-5-6-0-android-virtual-keyboard-aa_enablehighdpiscaling-problem/2 | CC-MAIN-2018-30 | refinedweb | 196 | 56.45 |
How to sort Python list with date
sort dataframe by date python
sorted python
datetime python
python sort datetime strings
python sort timestamps
sort datetime column python
sort list of dates java
I have a list in Python like this
myList = [' Google 2018-07-10', ' Apple Inc 2018-07-11', ' Microsoft 2018-07-12']
i want to sort this list by the dates
Here's a method that should work in a more general case:
from dateutil.parser import parse myList = [ ' Google 2018-07-10', ' Apple Inc 2018-07-11', 'Foo 2017-07-13', ' Microsoft 2018-07-12', '2015-07-15 Whatever' ] dct = {parse(v, fuzzy=True): v for v in myList} print([dct[k] for k in sorted(dct, reverse=True)]) print([dct[k] for k in sorted(dct)])
This way you won't be forced to have dates at the end of the list strings, output:
[' Microsoft 2018-07-12', ' Apple Inc 2018-07-11', ' Google 2018-07-10', 'Foo 2017-07-13', '2015-07-15 Whatever'] ['2015-07-15 Whatever', 'Foo 2017-07-13', ' Google 2018-07-10', ' Apple Inc 2018-07-11', ' Microsoft 2018-07-12']
How do I sort a list of datetime or date objects?, Given a list of dates in string format, write a Python program to sort the list of dates in ascending order. Examples: Input : dates = [“24 Jul 2017”, “25 Jul 2017”, “11 To sort a Python date string list using the sort function, you'll have to convert the dates in objects and apply the sort on them. For this you can use the key named attribute of the sort function and provide it a lambda that creates a datetime object for each date and compares them based on this date object.
Using
sorted with
lambda in
key
Ex:
myList = [' Google 2018-07-10', ' Apple Inc 2018-07-11', ' Microsoft 2018-07-12'] print( sorted(myList, key= lambda x: x.split()[-1], reverse=True) ) print( sorted(myList, key= lambda x: x.split()[-1]) )
Output:
[' Microsoft 2018-07-12', ' Apple Inc 2018-07-11', ' Google 2018-07-10'] [' Google 2018-07-10', ' Apple Inc 2018-07-11', ' Microsoft 2018-07-12']
Python: How to Sort a List? (The Right Way), Use list. sort() to sort a list of datetime objects in-place. yesterday = date. today() - timedelta(days=1) today = date. today() tomorrow = date. today() + timedelta(days=1) date_list =[today, tomorrow, yesterday] print(date_list) date_list. sort() print(date_list) In Python, we have the datetime module which makes date based comparison easier. The datetime.strptime() function is used to convert a given string into datetime object. It accepts two arguments: date (string) and format (used to specify the format. for eg: %Y is used for specifying year) and returns a datetime object.
You can split each string, take the last part, and sort by this part:
myList = [ ' Apple Inc 2018-07-11', ' Google 2018-07-10', ' Microsoft 2018-07-12' ] sorted(myList, key=lambda s: s.split()[-1])
Output:
[' Google 2018-07-10', ' Apple Inc 2018-07-11', ' Microsoft 2018-07-12']
Python, Python sort list of dates. In the next example, we sort a list of dates. sort_date.py. #!/usr/bin/env python3 from datetime import datetime values Python List sort() The sort() method sorts the elements of a given list. The sort() method sorts the elements of a given list in a specific order - Ascending or Descending.
You can also sort the list by applying
datetime.strptime() to
key:
>>> from datetime import datetime >>> myList = [' Google 2018-07-10', ' Apple Inc 2018-07-11', ' Microsoft 2018-07-12'] >>> sorted(myList, key=lambda x: datetime.strptime(x.split()[-1], '%Y-%m-%d')) [' Google 2018-07-10', ' Apple Inc 2018-07-11', ' Microsoft 2018-07-12']
Note: This might be over complicating it slightly, since ISO formats dates, and sorts string dates perfectly fine, as shown in the other answers. Using
strptime() is to just ensure that the dates are sorted by correct date format.
How to sort a list of datetime objects in Python, Sorted datetime objects with Python. gistfile1.py. # The trick is to pass timedelta object as a sort key and not to use cmp() function. import datetime. The sort () method sorts the list ascending by default. You can also make a function to decide the sorting criteria (s). list .sort (reverse=True|False, key=myFunc) Parameter Values. Optional. reverse=True will sort the list descending. Default is reverse=False. Optional. A function to specify the sorting criteria (s) Sort the list descending:
Python sort list - sorting list elements in Python, Python lists have a built-in list.sort() method that modifies the list in-place. There is also a sorted() built-in function that builds a new sorted list You have a python list and you want to sort the items it contains. Basically, you can either use sort or sorted to achieve what you want. The difference between sort and sorted is that sort is a list method that modifies the list in place whereas sorted is a built-in function that creates a new list without touching the original one.
Sorted datetime objects with Python · GitHub, If you have a little function to return those values for a given date string then you can compare the numeric values to sort them. In fact, there's no reason to return Javascript sort array of objects in reverse chronological order. javascript,arrays,sorting. As PM 77-1 suggests, consider using the built–in Array.prototype.sort with Date objects. Presumably you want to sort them on one of start or end: jobs.sort(function(a, b) { return new Date(a.ys, a.ms-1) - new Date(b.ys, b.ms-1); })
Sorting HOW TO, This easily sorts the list by datetime which was cool but I wasn't sure what the whole key and lambda thing was all about. Key Parameter. Referencing the How To Python list sort() The sort function can be used to sort a list in ascending, descending or user defined order. To sort the list in ascending order. This will sort the given list in ascending order. This function can be used to sort list of integers, floating point number, string and others.
- Have you tried something?
- yes, i've tried to sort dictionary before appending but my dictionary's key values are always changing so i couldn't figure it out with list
- thank you! but could you please explain me how python handle random list data in to dictionary like this? | https://thetopsites.net/article/51652979.shtml | CC-MAIN-2021-25 | refinedweb | 1,087 | 70.53 |
go to bug id or search bugs for
Description:
------------
(I choose "Reflection related" as "Package affected" because that was the closest thing I could see but this really more of a core language thing.)
PHP already has a function to create class aliases via class_alias():
class Foo { }
class_alias('Foo', 'Bar');
$x = new Bar;
var_dump($x); // -> object(Foo)#1 (0) { }
despite the relative ease with which a lot of classes can be aliased using inheritance anyway:
class Foo { }
class Bar extends Foo { }
$x = new Bar; // has all features of Foo
(Granted inheritance isn't exactly the same as aliasing but functionally equivalent in many cases.)
By comparison creating two functions of the same name requires you to repeat the argument list and can potentially make mistakes in the function body that would not be caught until the secondary function is called:
function addThree($a, $b, $c) { return $a + $b + $c; }
function add3($a, $b, $c) {
return addThee($a, $b, $c); // mistake not caught until add3 called
}
A predefined function such as `function_alias(string $orig, string $alias)` would simplify the above to:
function addThree($a, $b, $c) { return $a + $b + $c; }
function_alias('addThree', 'add3');
and allow errors to be caught earlier:
function_alias('addThee', 'add3'); // trigger error immediately
Additionally if the argument list of addThree were to change with `function_alias()` it only has to be changed in one place instead of 2.
I program in PHP using a modern functional style and make liberal use of freestanding functions i.e. not class members. To be able to simply alias them would be very useful and convenient to anyone who likes to program in this style which I believe will become more popular.
Add a Patch
Add a Pull Request
I would like to have this kind of feature, but not with alias functions. IMO, we should use namespace to handle userland class/function name aliasing.
BTW, we can do function aliasing via namespace. I would like to have ability to import name into root namespace. i.e. Replace module defined names with user defined one. This feature can be used like "monkey patch" in ruby.
We may be better to soft deprecate class_alias. i.e. Deprecation by document only.
Namespace can be used for aliasing and it's recommended way. The manual page should contain link to namespaced aliasing and recommend it rather than class_alias() at least.
Please however note that namespace aliases are file-local while class_alias() and the hypothetical function_alias() are not. This is a significant limitation of namespace aliases. | https://bugs.php.net/bug.php?id=73437 | CC-MAIN-2019-43 | refinedweb | 419 | 57.3 |
Take 40% off React Hooks in Action by entering fcclarsen3 into the discount code box at checkout at manning.com.
Figure 1 shows a basic illustration of React’s job: it should use the current state to render the UI. If the state changes, React should re-render the UI. The illustration shows a name in a friendly message. When the name value changes, React updates the UI to show the new name in its message. We usually want the state and UI to always be in sync (although we might choose to delay synchronization during state transitions – when fetching the latest data, for example).
Figure 1. When you change a value in a component, React should update the UI.
React provides a small number of functions, or hooks, to enable it to track values in your components and keep the state and UI in sync. For single values it gives us the useState hook and that’s the hook we’ll explore in this article. We’ll look at how to call the hook, what it returns, and how we use it to update the state, triggering React to update the UI. It’s not just a matter of documenting the useState API – you can go to the official React docs for that – we’ll use the discussion of the useState hook to help us better understand what function components are and how they work. To that end, we finish the article with a review of some key concepts.
A bookings manager
Your fun but professional company has a number of resources that can be booked by staff: meeting rooms, AV equipment, technician time, table football and even party supplies. One day, the boss asks you to create a component for the company network that lets staff select a resource to book in a booking application. The component should display a list of resources, or bookables, filtered by group, and highlight the one currently selected. When a user selects a bookable, its details can be displayed, as shown in figure 2.
Figure 2. The Bookables component displays a list of bookable items filtered by group and the details of the selected bookable.
You can find the full code examples for the bookings example app on GitHub, with branches set up for each evolution of the code. Each listing for the example app in this article includes the name of the branch to check out, linked to the GitHub repo. For example, if you've cloned the repo, to get the code for the first branch, enter the command:
git checkout 301-hard-coded
Adding a database file
Our application will eventually need or generate a few different types of data, including users and bookings. We'll manage all of the data in a single JSON file, db.json. For now, we just need some bookables to show in a list, so the initial data file isn't too complicated, as you can see in listing 1.
Branch: 301-hard-coded, File: /src/db.json
Listing 1. The bookings app data
{
  "bookables": [                                                     #A
    {
      "id": 1,
      "group": "Rooms",
      "title": "Meeting Room",
      "notes": "The one with the big table and interactive screen."
    },
    {                                                                #B
      "id": 2,                                                       #B
      "group": "Rooms",                                              #B
      "title": "Lecture Hall",                                       #B
      "notes": "For more formal 'sage-on-the-stage' presentations"   #B
    },                                                               #B
    {
      "id": 3,
      "group": "Rooms",
      "title": "Games Room",
      "notes": "Table tennis, table football, pinball!"
    },
    {
      "id": 4,                                                       #C
      "group": "Rooms",                                              #C
      "title": "Lounge",                                             #C
      "notes": "A relaxing place to hang out."                       #C
    },
    {
      "id": 5,
      "group": "Kit",
      "title": "Projector",
      "notes": "Portable but powerful. Keep it with the case."
    },
    {
      "id": 6,
      "group": "Kit",
      "title": "Wireless mics",
      "notes": "Really handy but don't forget to switch them off."
    }
  ]
}
#A Assign an array of bookables data to the bookables property
#B Specify each bookable as an object
#C Give id, group, title and notes properties to each bookable
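The Bookables component will only ever show one group of bookables at a time, so it's worth seeing how a group filter behaves on this data. Here's a minimal plain-JavaScript sketch — the data is trimmed to the id, group and title properties, with the notes omitted for brevity:

```javascript
// The bookables data from db.json, trimmed to the properties we need here.
const bookables = [
  { id: 1, group: "Rooms", title: "Meeting Room" },
  { id: 2, group: "Rooms", title: "Lecture Hall" },
  { id: 3, group: "Rooms", title: "Games Room" },
  { id: 4, group: "Rooms", title: "Lounge" },
  { id: 5, group: "Kit", title: "Projector" },
  { id: 6, group: "Kit", title: "Wireless mics" }
];

// Keep only the bookables whose group matches the one we want to show.
const group = "Rooms";
const bookablesInGroup = bookables.filter(b => b.group === group);

console.log(bookablesInGroup.map(b => b.title));
// → [ 'Meeting Room', 'Lecture Hall', 'Games Room', 'Lounge' ]
```

Switching the group variable to "Kit" would leave just the projector and the wireless mics in the filtered array.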
The bookables are stored in an array of bookable objects, assigned to the
bookables property. Each bookable has
id,
group,
title and
notes properties. The data in the book’s code repo has slightly longer notes but the structure is the same.
Storing, using and setting values with useState
Your React applications look after some state: values that are shown in the user interface, or that help manage what is shown. The state may include posts on a forum, comments for those posts and whether the comments are shown or not, for example. When users interact with the app, they change its state. They may load more posts, toggle whether comments are visible or add their own comments. React is there to make sure the state and the UI are in sync. When the state changes, React needs to run the components that use that state. The components return their UI using the latest state values. React then compares the newly returned UI with the existing UI and efficiently updates the DOM as necessary.
Some state is shared across the application, some by a few components and some is managed locally by a component itself. If components are just functions, how can they persist their state across renders? Are their variables not lost when they finish executing? And how does React know when the variables change? If React is faithfully trying to match the state and the UI, it definitely needs to know about changes to the state, right?
The simplest answer to the problem of persisting state across calls to your components and to the need to keep React in the loop when you change a component’s state is the useState hook. The useState hook is a function that enlists React’s help in managing state values. When you call the useState hook, it returns both the latest state value and a function for updating the value that keeps React in the loop and lets it do its syncy business.
Calling useState returns a value and an updater function
We want to alert React that a value used within a component has changed so it can re-run the component and update the UI if necessary. Just updating the variable directly won’t do. We need a way of changing that value, some kind of updater function, that triggers React to call the component with the new value and get the updated UI, as shown in figure 3.
Figure 3. Rather than changing a value directly, we call an updater function. The updater function changes the value and React updates the display with the recalculated UI from the component.
In fact, to avoid our component state value disappearing when the component code finishes running, we can get React to manage the value for us. That’s what the useState hook is for. Every time React calls our component to get hold of its UI, the component can ask React for the latest state value and for a function to update the value. The component can use the value when generating its UI and use the updater function when changing the value, for example in response to a user clicking an item in a list.
Calling
useState returns a value and its updater function as an array with two elements, as shown in figure 4.
Figure 4. The useState function returns an array with two elements: a value and an updater function.
You could assign the returned array to a variable, and then access the two elements individually, by index, like this:
const selectedRoomArray = useState();          #A
const selectedRoom = selectedRoomArray[0];     #B
const setSelectedRoom = selectedRoomArray[1];  #C
#A The useState function returns an array
#B The first element is the value
#C The second element is the function for updating the value
But, it’s more common to use array destructuring and assign the returned elements to variables in one step:
const [ selectedRoom, setSelectedRoom ] = useState();
Array destructuring lets us assign elements in an array to variables of our choosing. The names selectedRoom and setSelectedRoom are arbitrary and our choice, although it's common to start the variable name for the second element, the updater function, with set. The following would work just as well:
const [ myRoom, updateMyRoom ] = useState();
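Array destructuring is plain JavaScript, not something special to React. To see the pattern in isolation, here's a sketch with a made-up stand-in for useState — fakeUseState is an invented name, not part of React — that returns the same [value, updater] shape:

```javascript
// A made-up stand-in for useState (NOT the real hook) that returns
// the same two-element [value, updater] array shape.
function fakeUseState () {
  const value = "Lecture Hall";
  const updater = newValue => `updating to ${newValue}`;
  return [value, updater];
}

// Array destructuring assigns by position, so the names are our choice.
const [selected, setSelected] = fakeUseState();
const [myRoom, updateMyRoom] = fakeUseState(); // works just as well

console.log(selected);                  // → Lecture Hall
console.log(setSelected("Games Room")); // → updating to Games Room
console.log(myRoom === selected);       // → true
```

Whatever names we pick, the first variable always receives the value and the second always receives the updater function.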
If you want to set an initial value for the variable, pass the initial value as an argument to the useState function. When React first runs your component, useState will return the two-element array as usual but will assign the initial value to the first element of the array, as shown in figure 5.
Figure 5. When the component first runs, React assigns the initial value you pass to useState to the ‘selected’ variable.
The first time the following line of code is executed within a component, React returns the value "Lecture Hall" as the first element in the array. The code assigns that value to the selected variable.
const [ selected, setSelected ] = useState("Lecture Hall");
Let’s get the
Bookables component to use the useState hook to ask React to manage the value of the selected item’s index. We’ll pass it
1 as the initial index. You should see the Lecture Hall highlighted when the
Bookables component first appears on the screen, as shown in figure 6.
Figure 6. The Bookables component with Lecture Hall selected.
Listing 2 shows the code for the component. It includes an onClick event handler that uses the updater function assigned to setBookableIndex to change the selected index when a user clicks a bookable.
Branch: 303-set-index, File: /src/components/Bookables.js
Listing 2. Triggering an update when changing the selected room
import React, { useState } from "react";                        #A
import {bookables} from "../db.json";

export default function Bookables () {
  const group = "Rooms";
  const bookablesInGroup = bookables.filter(b => b.group === group);
  const [bookableIndex, setBookableIndex] = useState(1);        #B

  return (
    <ul>
      {bookablesInGroup.map((b, i) => (
        <li
          key={b.title}
          className={i === bookableIndex ? "selected" : null}   #C
          onClick={() => setBookableIndex(i)}                   #D
        >
          {b.title}
        </li>
      ))}
    </ul>
  );
}
#A Import the useState hook
#B Call useState and assign the returned state value and updater function to variables
#C Use the state value when generating the UI
#D Use the updater function to change the state value
React runs the Bookables component code, returning the value for bookableIndex from the call to useState. The component then uses that value when generating the UI, to set the correct className attribute for each li element. When a user clicks on a bookable, the onClick event handler uses the updater function, setBookableIndex, to tell React to update the value it's managing. If the value has changed, React knows it'll need a new version of the UI. So, React runs the Bookables code again, assigning the updated state value to bookableIndex, letting the component generate the updated UI. React can then compare the newly generated UI to the old version and decide how to update the display efficiently.
With useState, React is now listening. I don't feel so lonely anymore. It's living up to its promise of keeping the state in sync with the UI. The Bookables component describes the UI for a particular state and provides a way for users to change the state. React then does its magic, checking if the new UI is different from the old (diffing), batching and scheduling updates, deciding on an efficient way to update DOM elements and then doing the deed and reaching out to the DOM on our behalf. We fixate on the state, React does its diffing and updates the DOM.
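If it still seems mysterious that a plain function can "remember" a value between calls, a toy sketch can make the idea concrete. The following is NOT how React is actually implemented — miniUseState, render, Bookable and clickHandler are all invented names for illustration — but it shows the two key moves: the state value lives outside the component function, and the updater triggers a re-render.

```javascript
// A toy sketch (not React's real implementation): the state value
// lives outside the component function, so it survives each call.
let storedValue;
let initialized = false;
let latestUI; // stands in for the rendered DOM

function miniUseState (initialValue) {
  if (!initialized) {
    storedValue = initialValue; // initial value used on the first call only
    initialized = true;
  }
  const setValue = newValue => {
    storedValue = newValue;
    render(); // changing state triggers a re-render
  };
  return [storedValue, setValue];
}

let clickHandler;

function Bookable () {
  const [index, setIndex] = miniUseState(1);
  clickHandler = () => setIndex(3); // like the li's onClick handler
  latestUI = `Selected bookable index: ${index}`;
}

function render () {
  Bookable();
}

render();              // first render uses the initial value, 1
console.log(latestUI); // → Selected bookable index: 1

clickHandler();        // "click" the fourth bookable
console.log(latestUI); // → Selected bookable index: 3
```

Calling the updater both stores the new value and re-runs the component, so the UI string is rebuilt from the latest state — the same state-then-UI cycle React manages for us, minus the diffing, batching and DOM work.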
In listing 2 we passed an initial value of 1 to useState. A user clicking on a different bookable replaces that value with another number. But what if we want to store something more complicated, like an object, as state? In that case, we need to be a bit more careful when updating the state. Let’s see why.
Calling the updater function replaces the previous state value
If you're coming from the class-based approach to component building in React then you'll be used to state being an object with different properties for different state values. Moving to function components, you may try and replicate that state-as-an-object approach. It may feel more natural to have a single state object and have new state updates merge with the existing state. But the useState hook is easy to use and easy to call multiple times, once for each state value you want React to monitor. It's worth getting used to separate calls to useState for each state property. If you really need to work with objects as state values, you should be aware of how setState as a function component updater function is different from this.setState you'd use with a class component. In this section we take a brief look at updating the state of an object in the two types of components.
The class component approach
With classes, you would set up the state as an object in the constructor, (or as a static property on the class):
class Bookables extends React.Component {
  constructor (props) {
    super(props);
    this.state = {
      bookableIndex: 1,
      group: "Rooms"
    };
  }
}
To update the state, in an event handler say, you would call this.setState, passing an object with any changes you want to make:

handleClick (index) {
  this.setState({ bookableIndex: index });
}
React would merge the object you passed to setState with the existing state. In the example above, it would update the bookableIndex property but leave the group property alone, as shown in figure 7.
Figure 7. In a class component, calling the updater function (this.setState) merges the new properties with the existing state object.
The function component approach
In contrast, for the new hooks approach, the updater function replaces the previous state value with the value you pass to the function. Now, that’s straightforward if you have simple state values, like this:
const [bookableIndex, setBookableIndex] = useState(1);

setBookableIndex(3); // React replaces the value 1 with 3.
But, if you’ve decided to store JavaScript objects in state, you’ll need to tread carefully. The updater function will replace the old object entirely. Say you initialize the state like this:
function Bookables () {
  const [state, setState] = useState({
    bookableIndex: 1,
    group: "Rooms"
  });
}
If you call the updater function, setState, with just the changed bookableIndex property:

function handleClick (index) {
  setState({ bookableIndex: index });
}
then you'll lose the group property. The old state object is replaced by the new one, as shown in figure 8.
Figure 8. In a function component, calling an updater function (returned by useState) replaces the old state value with whatever you pass to the updater function.
So, if you really need to use an object with the useState hook, you’ll need to copy across all the properties from the old object when you set a new property value:
function handleClick (index) {
  setState({ ...state, bookableIndex: index });
}
Notice how the spread operator, ...state, is used in the snippet above to copy all of the properties from the old state to the new. In fact, to ensure you have the latest state when setting new values based on old, you can pass a function as the argument to the updater function, like this:

function handleClick (index) {
  setState(state => {
    return {
      ...state,
      bookableIndex: index
    };
  });
}
React will pass in the latest state as the first argument.
Reviewing some function component concepts
At this point, our Bookables component is very simple. But there are already some fundamental concepts at work, concepts that underpin our understanding of function components and React hooks. Having a strong grasp of these concepts will make future discussions and your expert use of hooks much easier. In particular, here are five key concepts:
- Components are functions that accept props and return a description of their UI.
- React invokes the components. As functions, the components run their code, and end.
- Some variables may persist within closures created by event handlers. Others are destroyed when the function ends.
- We can use hooks to ask React to manage values for us. React can pass components the latest values and updater functions for those values.
- By using the updater functions, we let React know of changing values. It can re-run the components to get the latest description of the UI.
In order to discuss concepts with clarity and precision, from time to time we’ll take stock of the key words and objects we’ve encountered so far. Table 1 lists and describes some of the terms we’ve come across:
The component cycle diagram in figure 9 shows some of the steps involved when our Bookables component runs and a user clicks a bookable. Its accompanying table discusses each step.
Figure 9. Stepping through the key moments when using useState
That’s all for this article. If you want to see more of the book, you can check it out on our browser-based liveBook platform here. | https://freecontent.manning.com/managing-component-state-with-the-usestate-hook/ | CC-MAIN-2021-49 | refinedweb | 2,779 | 60.65 |
1. Crawl a Facebook group for feed periodically.
2. Parse out the Feeds for a particular person.
3. Tweet to me when that happens.
For #1 and #3, I have already done that before with Twython and AWS Lambda in this post. So I just need to figure out how to crawl Facebook group. Here were the steps I took:
1. Create a Facebook App ID and get App secret. This is pretty easy and self-explanatory.
2. Use the Facebook Graph API Page and Documentation to find the right URI to get the feed that you want. This was the most time consuming as Facebook changes access rights from version to version, and there are differences between App Token and User Token. What I finally found was using the User Token for version 2.2 (latest was 2.7) was what I needed.
Here is the curl example. You can get the group ID from going directly to the Facebook group and looking at the URI. The feed limit is just to speed up the response:
○ → curl -i -X GET \
> "<facebook group ID>?fields=feed.limit(10)&access_token=<token>"
3. The user token is only good for about an hour; you can make it a 60-day long-term token. Here was the Facebook link, but I find this instruction to be much easier.
4. Now that I have the feed, I just need to parse it out for message, creation time, and other fields that I wanted and fill in what I did before for Twitter bot and upload to Lambda. One thing that I did wrong was previously using update_status which spammed all of my followers, so I switched to direct_message. Here is the code:
import requests, pprint, json, re, datetime
from twython import Twython
from twython.exceptions import TwythonError

with open('credentials.json') as f:
    credentials = json.loads(f.read())

client = Twython(credentials["consumer_key"],
                 credentials["consumer_secret"],
                 credentials["access_token_key"],
                 credentials["access_token_secret"])

url = "<group>?fields=feed.limit(50)&access_token=<long term access token>"

def lambda_handler(event, context):
    r = requests.get(url)
    for i in r.json()['feed']['data']:
        if i['from']['id'] == '<some user ID>':
            if re.search('shirt', i['message']):
                name = (i['from']['name'])
                #print(i['from']['id'])
                createdTime = (i['created_time'])
                message = (i['message'])
                print("Tweeting..")
                text = <your text>
                client.send_direct_message(screen_name="<some user>", text = text)
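Step 3's long-lived token exchange can be sketched as building the documented exchange URL. This is only a sketch: the endpoint and parameter names follow Facebook's fb_exchange_token flow as documented at the time, and the credential values below are placeholders.

```python
# Build the long-lived-token exchange URL (sketch only; APP_ID,
# APP_SECRET and SHORT_TOKEN are placeholders you must supply).
def long_lived_token_url(app_id, app_secret, short_token):
    base = "https://graph.facebook.com/oauth/access_token"
    return (f"{base}?grant_type=fb_exchange_token"
            f"&client_id={app_id}"
            f"&client_secret={app_secret}"
            f"&fb_exchange_token={short_token}")

print(long_lived_token_url("APP_ID", "APP_SECRET", "SHORT_TOKEN"))
```

You would GET this URL (e.g. with requests) and read the new token out of the response.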
Thanks for reading. Happy Coding!
great | http://blog.pythonicneteng.com/2016/10/combine-facebook-api-with-twitter-bot.html | CC-MAIN-2017-47 | refinedweb | 395 | 68.16 |
soundplayer_play_sound_async()
Play a system sound and do not wait for completion.
Synopsis:
#include <bps/soundplayer.h>
BPS_API int soundplayer_play_sound_async(const char *name, char **id)
Since:
BlackBerry 10.2.0
Arguments:
- name
The name of the system sound.
- id
If not NULL, the ID used in the play sound request will be returned in id. The caller must free this buffer using bps_free(). This same ID will be delivered in the corresponding SOUNDPLAYER_INFO event.
Library: libbps (For the qcc command, use the -l bps option to link against this library)
Description:
The soundplayer_play_sound_async() function plays the specified system sound and returns immediately. A SOUNDPLAYER_INFO event will be delivered when the sound is finished playing.
Returns:
BPS_SUCCESS upon success, BPS_FAILURE with errno set otherwise.
Last modified: 2014-05-14
lp:charms/trusty/apache-hadoop-hdfs-secondary
Created by Kevin W Monroe on 2015-06-01 and last modified on 2015-10-07
- Get this branch:
- bzr branch lp:charms/trusty/apache-hadoop-hdfs-secondary
Members of Big Data Charmers can upload to this branch.
Branch merges
Related bugs
Related blueprints
Branch information
- Owner:
- Big Data Charmers
- Status:
- Mature
Recent revisions
- 70. By Cory Johns on 2015-10-07
Get Hadoop binaries to S3 and cleanup tests to favor and improve bundle tests
- 69. By Kevin W Monroe on 2015-09-15
[merge] merge bigdata-dev r79..81 into bigdata-charmers
- 68. By Kevin W Monroe on 2015-08-24
[merge] merge bigdata-dev r69..r78 into bigdata-charmers
- 67. By Kevin W Monroe on 2015-07-24
updated resources to use lp:git vs lp:bzr
- 66. By Kevin W Monroe on 2015-06-29
bundle resources into charm for ease of install; add extended status messages; use updated java-installer.sh that ensures java is on the path
- 65. By Kevin W Monroe on 2015-06-18
remove namespace refs from readmes now that we are promulgated
- 64. By Kevin W Monroe on 2015-06-01
remove dev references for production
- 63. By Kevin W Monroe on 2015-05-29
reference plugin instead of client in the docs
- 62. By Kevin W Monroe on 2015-05-29
update DEV-README to reflect correct relation data; reference plugin instead of client
- 61. By Kevin W Monroe on 2015-05-28
update jujubigdata so NN/RM wait for their process before providing relation data
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later) | https://code.launchpad.net/~bigdata-charmers/charms/trusty/apache-hadoop-hdfs-secondary/trunk | CC-MAIN-2017-13 | refinedweb | 287 | 65.73 |
Sean_Seanston 880, posted March 15, 2010

Shouldn't be topic-worthy I know, but I've searched around and still can't solve my problem. Maybe I'm doing something small wrong, but it seems to look just like other example code from what I can tell; maybe there's something different with the context... I want to sort a vector of objects representing enemy spawns by their y variable. I have a World class with this function:

bool sortEnemies( const EnemySpawnList& e1, const EnemySpawnList& e2 ){
    return ( e1.getY() < e2.getY() );
}

Where getY() returns a float. Then I call this in another function of the World class:

std::sort( enemySpawnLists.begin(), enemySpawnLists.end(), sortEnemies );

Where enemySpawnLists is a vector of EnemySpawnList objects. The compiler gives me:

error C2662: 'EnemySpawnList::getY' : cannot convert 'this' pointer from 'const EnemySpawnList' to 'EnemySpawnList &'

Which I think may be something to do with using those functions that return a value rather than a value itself, since I messed around and that seemed to get rid of it. Is that right? That seems weird... hmm... Also I get:

error C3867: 'World::sortEnemies': function call missing argument list; use '&World::sortEnemies' to create a pointer to member

Now I've tried replacing the sortEnemies function etc. with what looked to me like the same code on cplusplus.com with simple ints and I still got errors. Any ideas? :/
Thanks
you are the best
Did not have to wait for long. Asked for it in a different blog a few days back.
I hope you find the post useful!
I believe so. Things are getting deeper here.
Will we get recursive LSTM MODEL for multi step forecasting soon?
Will eagerly wait for that blog.
Thanks
Maybe.
Sir,
Hope to see that soon.
Thanks a lot for this post. I had been trying to make this for my thesis since September, with no good results. But I'm having trouble: I'm not able to compile. Maybe you or someone who reads this is able to tell me why this happens. I'm getting the following error when running the code:
The TensorFlow library wasn’t compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn’t compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn’t compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn’t compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn’t compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn’t compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Obviously it has something to do with TensorFlow (I have read about this problem and I think it's because it's not installed from source, but I have no idea how to fix it).
Thank you in advance.
These are warnings that you can ignore.
Sir,
Can we say that multiple output strategy ( avoiding 1.direct, 2. Recursive, 3.direct recursive hybrid strategies) have been used here ?
Am I right ?
I think the LSTM has implemented a direct strategy.
Hi,Jason,
Your article is very useful! I have a problem: if the data series is three-dimensional, the 2nd column is the input data and the 3rd column is the forecast data (both include the train and test data). Can they be run through the "difference" and "transform" steps?
Thank you very much!
Great question.
You may want to only make the prediction variable stationary. Consider perform three tests:
– Model as-is
– Model with output variable stationary
– Model with all variables stationary (if others are non-stationary)
I have discovered how to do it by asking some people. The object series is actually a Pandas Series. It’s a vector of information, with a named index. Your dataset, however, contains two fields of information, in addition to the time series index, which makes it a DataFrame. This is the reason why the tutorial code breaks with your data.
To pass your entire dataset to MinMaxScaler, just run difference() on both columns and pass in the transformed vectors for scaling. MinMaxScaler accepts an n-dimensional DataFrame object:
ncol = 2
diff_df = pd.concat([difference(df[i], 1) for i in range(1,ncol+1)], axis=1)
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_values = scaler.fit_transform(diff_df)
So, with this, we can use as many variables as we want. But now I have a big doubt.
When the transform or dataset into a supervised learning problem, we have a distribution in columns as shown in
I mean, for a 2 variables dataset as yours, we can set, for example, this values:
n_lags=1
n_seq=2
so we will have a supervised dataset like this:
var1(t-1) var2(t-1) var1(t) var2 (t) var1(t+1) var2 (t+1)
so, if we want to train the ANN to forecast var2 (which is the target we want to predict) with the var1 as input and the previous values of var2 also as input, we have to separate them and here is where my doubt begins.
In the part of the code:
def fit_lstm(train, n_lag, n_seq, n_batch, nb_epoch, n_neurons):
# reshape training into [samples, timesteps, features]
X, y = train[:, 0:n_lag], train[:, n_lag:]
X = X.reshape(X.shape[0], 1, X.shape[1])
I think that if we want to define X, we should use:
X=train[:,0:n_lag*n_vars]
this means we are selecting this as X from the previous example:
var1(t-1) var2(t-1)
(number of lags*number of variables), so: X=train[:,0:1*2]=train[:,0:2]
but…
Y=train[:,n_lag*n_vars:] is the vector of ¿targets?
the problem is that, on this way, we are selecting this as targets:
var1(t) var2(t) var1(t+1) var2(t+1)
so we are including var1 (which we don’t have the aim to forecast, just use as input).
I would like to know if there is any solution to solve this in order to use the variable 1,2…n-1 just as input but not forecasting it.
Hope this is clear :/
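One way to sketch jvr's column-selection idea in NumPy: take all lag columns (both variables) as X, and pick only the var2 columns from the forecast part as y. The layout and numbers below are made-up illustrations of the 2-variable supervised frame described above, not code from the post.

```python
import numpy as np

n_lag, n_seq, n_vars = 1, 2, 2  # 1 lag, 2-step forecast, 2 variables
# supervised columns: var1(t-1) var2(t-1) var1(t) var2(t) var1(t+1) var2(t+1)
train = np.array([[1, 10, 2, 20, 3, 30],
                  [2, 20, 3, 30, 4, 40]], dtype=float)

X = train[:, :n_lag * n_vars]             # all lag columns, both variables
y = train[:, n_lag * n_vars + 1::n_vars]  # var2 only: offset 1, step n_vars
print(X)  # lag inputs for both variables
print(y)  # only var2(t), var2(t+1)
```

The slice `n_lag * n_vars + 1::n_vars` starts at var2's first forecast column and steps over the interleaved var1 columns.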
Thanks for the previous clarification. I have a doubt in relation to the section "fit network" in the code. I'm having some trouble trying to plot the training graph (validation vs. training) in order to see whether the network is overfitted, but due to the "model.reset_states()" statement, I can only save the last loss and val_loss from the history object. Is there any way to solve this?
thank you in advance 🙂
I reply to myself, in case someone else is also interested.
Just create 2 lists (or 1, but I see it more clearly this way) and return them from the function. Then, outside, just plot them. I'm sorry for the question; maybe the answer is obvious, but I'm starting with Python and I'm not a programmer.
# fit network
loss = list()
val_loss = list()
for i in range(nb_epoch):
    history = model.fit(X, y, epochs=1, batch_size=n_batch, shuffle=True, validation_split=val_split)
    eqm = history.history['loss']
    eqm_val = history.history['val_loss']
    loss.append(eqm)
    val_loss.append(eqm_val)
    model.reset_states()
return model, loss, val_loss

# fit model
model, loss, val_loss = fit_lstm(train, n_lag, n_seq, n_batch, n_epochs, n_neurons)

pyplot.figure()
pyplot.plot(loss)
pyplot.plot(val_loss)
pyplot.title('cross validation')
pyplot.ylabel('MSE')
pyplot.xlabel('epoch')
pyplot.legend(['training', 'test'], loc='upper left')
pyplot.show()
Nice to see you got there jvr, well done.
History is returned when calling model.fit().
We are only fitting one epoch at a time, so you can retrieve and accumulate performance each epoch in the epoch loop then do something with the data (save/graph/return it) at the end of the loop.
Does that help?
It does help, thank you.
Now I'm trying to find a way to make the training process faster and reduce RMSE, but it's pretty difficult (the idea is to make results better than in the NARx model implemented in the Matlab Neural Toolbox, but the results and computational time are hard to overcome).
LSTMs often need to be trained longer than you think and can greatly benefit from regularization.
Hi,
Thanks for the great tutorial, I’m wondering if you can help me clarify the reason you have
model.reset_states()
(line 83)
when fitting the model, I was able to achieve similar results without the line as well.
Thanks!
It clears the internal state of the LSTM.
I have tried experimenting with and without model.reset_states(), using another dataset.
I am doing multistep prediction for 6-10 steps, I am able to get better results without model.reset_states().
Am i doing something wrong, or it completely depends on dataset to dataset.
Thanks in advance.
It completely depends on the dataset and the model.
Thank you so much. 🙂
Thanks for the quick reply Jason :-). I’ve seen other places where reset is done by using callbacks parameter in model.fit.
class ResetStatesCallback(Callback):
    def __init__(self):
        self.counter = 0

    def on_batch_begin(self, batch, logs={}):
        if self.counter % max_len == 0:
            self.model.reset_states()
        self.counter += 1
Then the callback is used as follows:

model.fit(X, y, epochs=1, batch_size=1, verbose=2,
          shuffle=False, callbacks=[ResetStatesCallback()])
The ResetStatesCallback snippet was obtained from:
Please let me know what you think.
Thanks!
Yes, there are many ways to implement the reset. Use what works best for your application.
Hi Jason, great post, and I have some questions:
1. in your fit_lstm function, you reset each epoch state, why?
2. why do you iterate each epoch yourself, instead of using model.fit(X, y, epochs)?
thx Jason
# fit an LSTM network to training data
# fit network
for i in range(nb_epoch):
    model.fit(X, y, epochs=1, batch_size=n_batch, verbose=0, shuffle=False)
    model.reset_states()
return model
The end of the epoch is the end of the sequence and the internal state should not carry over to the start of the sequence on the next epoch.
I run the epochs manually to give fine grained control over when resets occur (by default they occur at the end of each batch).
I’d like to clarify line 99 in the LSTM example:
----- plot_forecasts(series, forecasts, n_test+2)
Is the n_test + 2 == n_test + n_lag - n_seq?
Thanks,
J
I’d also like to know why using n_test + 2
I thought it should be n_test + 2 == n_test+n_seq-1 (regardless of n_seq). It would be great if someone could clarify that.
M, you are right. Otherwise the RMS is incorrectly calculated and plotting is not aligned.
Hi jason,
When I applied your code into a 22-year daily time series, I find out that the LSTM forecast result is similar to persistence one, i.e. the red line is just a horizontal bar. I’m sure I did not mess those two methods, I wonder what cause this?
My key configure as follows:
n_lag = 1
n_seq = 3
n_test = 365*3
and my series length is 8035.
You will need to tune the model to your problem.
Thanks to your tutorial, I’ve been tuning the parameters such as numbers of epochs and neurons these days. However, I noticed that you mentioned the grid search method to get appropriate parameters, could you please explain how to implement it into LSTM? I’m confused about your examples on some other tutorial which has a model class, seems unfamiliar to me.
See this example on how to grid search with LSTMs manually:
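A minimal, framework-agnostic sketch of such a manual grid search follows. The scoring function below is a toy stand-in (an assumption, not the tutorial's code) for "fit the LSTM with these settings and return the test RMSE"; the search itself just tries every combination and keeps the best.

```python
from itertools import product

def grid_search(evaluate_config, param_grid):
    """Try every combination and return (best_params, best_score), minimizing."""
    best_params, best_score = None, float("inf")
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate_config(params)  # e.g. fit the LSTM, return test RMSE
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# toy scorer standing in for "train the model and report RMSE"
def toy_score(p):
    return (p["n_neurons"] - 2) ** 2 + p["n_epochs"]

best, score = grid_search(toy_score, {"n_neurons": [1, 2, 3], "n_epochs": [1, 2]})
print(best, score)
```

Swapping `toy_score` for a function that trains and evaluates the real model gives the manual grid search discussed above.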
Thanks, I’ve just finished one test. What does it mean if error oscillates violently with epochs increasing instead of steady diminishing? Can I tune the model better, or LSTM is incapable of this time series?
You may need a larger model (more layers and or more neurons).
Jason,
Thank you for these tutorials. These are the best tutorials on the web. One question: what is the best way to forecast the last two values?
Thank you
Thanks MM.
No one can tell you the “best” way to do anything in applied machine learning, you must discover it through trial and error on your specific problem.
Jason,
Understood. Let me re-phrase the question. In a practical application, one would be interested in forecasting the last data point, i.e. in the shampoo dataset, “3-12”. How would you suggest doing that?
Fit your model to all of the data then call predict() passing whatever lag inputs your model requires.
Jason,
Should the line that starts the offset point in plot_forecasts() be
off_s = len(series) - n_test + i + 1
not
off_s = len(series) - n_test + i - 1
Hi Jason,
Thanks for your excellent tutorials!
I have followed a couple of your articles about LSTM and did learn a lot, but here is a question in my mind: can I introduce some interference elements in the model? For example for shampoo sale problem, there may be some data about holiday sales, or sales data after an incident happens. If I want to make prediction for sales after those incidents, what can I do?
What’s more, I noticed that you will parse date/time with a parser, but you did not really introduce time feature into the model. For example I want to make prediction for next Monday or next January, how can I feed time feature?
Thanks!
Yes, see this post for ideas on adding additional features:
Thanks for clarification.
I have two more specific questions:
1) In inverse_transform, why index = len(series) - n_test + i - 1?
2) In fit_lstm, you said “reshape training into [samples, timesteps, features]”, but I think the code in line 74 is a little different from your format:
73 X, y = train[:, 0:n_lag], train[:, n_lag:]
74 X = X.reshape(X.shape[0], 1, X.shape[1])
In line 74, I think it should be X = X.reshape(X.shape[0], X.shape[1], 1)
Hi Michael,
Yes, the offset finds one step prior to the forecast in the original time series. I use this motif throughout the tutorial.
In the very next line I say: “We will fix time steps at 1, so this change is straightforward.”
Hi Jason,
I would like to know how to do short term and long term prediction with minimum number of models?
For example, I have a 12-step input and 12-step output model A, and a 12-step input and 1-step output model B, would model A gives better prediction for next first time step than model B?
What’s more, if we have 1-step input and 1-step output model, it is more error prone to long term prediction.
if we have multi-step input and 1-step output mode it is still more more error prone long term. So how to regard the long term and short term prediction?
I would recommend developing and evaluating each model for the different uses cases. LSTMs are quite resistant to assumptions and rules of thumb I find in practice.
Hello, thanks for your tutorial
If my prediction model is three time series a, b, c, I would like to use a, b, c to predict the future a, how can I build my LSTM model.
thank you very much!
Each of a, b, and c would be input features. Remember, the shape or dimensions of input data is [samples, timesteps, features].
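A small NumPy sketch of that shape, with made-up series (none of this is from the post): three parallel series a, b, c are stacked as features, windowed into [samples, timesteps, features], with the future values of `a` as the target.

```python
import numpy as np

# three parallel series; `a` is the target, b and c are supporting inputs
a = np.arange(10, dtype=float)
b = a * 2
c = a * 3

timesteps = 3
data = np.stack([a, b, c], axis=1)  # (10, 3): rows are time steps, cols are features
X = np.array([data[i:i + timesteps] for i in range(len(data) - timesteps)])
y = a[timesteps:]  # next value of `a` after each 3-step window

print(X.shape)  # [samples, timesteps, features]
print(y.shape)
```

An LSTM taking this X would use input_shape=(timesteps, 3) while predicting only `a`.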
Does stationarizing data really help the LSTM? If so, what is the intuition behind that? I mean, I can understand that for ARIMA-like methods, but why for LSTM’s?
Yes in my experience, namely because it is a simpler prediction problem.
I would suggest trying a few different “views” of your sequence and see what is easiest to model / gets the best model skill.
Hi Jason,
I want to train a model with the following input size: [6000, 4, 2] ([samples, timestamps, features])
For example, I want to predict shampoo’s sale in next two years. If I have other feature like economy index of every year, can I concatenate sale data and index data in the above format? So my input will be a 3d vector. How should I modify the model to train?
I always get such error: ValueError: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (6000, 2, 2).
The error comes from this line: model.fit(X, y, epochs=1, batch_size=n_batch, verbose=0, shuffle=False). Can you provide some advices? Thanks!
Reshape your data to be [6000, 4, 2]
Update the input shape of the network to be (4,2)
Adjust the length of the output sequence you want to predict.
sir,
To make one forecast with an LSTM, if we write
oneforecast = forecast_lstm(model, X, n_batch)
it says: undefined X
what should be the value of X? we know the model and n_batch value?
would you help?
X would be the input sequence required to make a prediction, e.g. lag obs.
sir,
what if I want to tell the model to learn from train data (23 samples here) and want to forecast only 3 steps forward (Jan, Feb, Mar). I want to avoid persistence model in this case and only require 3 step direct strategy. hope you got that.
any help would be grateful.
train (past data) = forecast (Jan, Feb, Mar)
Perhaps I misunderstand, but this is the model presented in the tutorial. It predicts 3 time steps ahead.
Here, if I would like to make only one forecast for 3 steps (Jan, Feb, Mar), what do I have to change? I do not need the rest of the months (Apr, May, Jun, Jul, Aug, ..., Dec): just one prediction or forecast for 3 steps.
hope you got me
Pass in only what is required to make the prediction for those 3 months.
sir,
Will you be kind enough to simplify a little bit more?
I did not get it.
I am getting an error while parsing the date at time of loading the data from csv file.
The error is:
ValueError: time data ‘1901-Jan’ does not match format ‘%Y-%m’
Anyone please help me to resolve this issue.
I’m sorry to hear that. Confirm you have copied the code exactly and the data file does not have any extra footer information.
hi
I have so this problem
i have downloaded the dataset from the link in the text
i think this error has occured because the data of our csv file is not in correct format!
can anyone give us the dataset plz???
Here is the raw data ready to go:
Sir,
I have the same issue. How can I fix the parser to resolve this error?
@Jason,
Data file doesn’t have any footer and i had simply copy paste the code but dateparser throwing the error. I have no idea why it is behaving strange.
Sorry, I don’t have any good ideas. It may be a Python environment issue?
Hi Jason,
Great explanation again. I have a doubt about this piece of code:
Why do you pass the parameter “n_seq” to the function if it has no use inside the function?
Good point, thanks.
Hi,
How would I go about forecasting a complete month (assuming I have daily data)?
Assuming I have around 5 years of data, roughly 1.8k data points, to train.
I would like to use one year of past data to forecast the whole of next month.
To do this should I change the way this model is trained?
Is my understanding correct that this model tries to predict the next value by only using current value?
Yes, frame the data so that it predicts a month, then train the model.
The model can take as input whatever you wish, e.g. a sequence of the last month or year.
Hey, thanks for the reply.
This post really helped me.
Now the next question is how do we enhance this to consider exogenous variables while forecasting?
If I simply add exogenous variable values at this step:
train, test = supervised_values[0:-n_test], supervised_values[-n_test:], (and obviously make appropriately changes to batch_input_shape in model fit.)
Would it help improve predictions?
What is the correct way of adding independent variables.
I have gone through this post of yours.
It was helpful, but how do I do this using neural networks with LSTM?
Can you please point me in the right direction?
Additional features can be provided directly to the model as new features.
See this post on framing the problem, then reshape the results:
Hi Jason, thanks for writing up such detailed explanations.
I am using an LSTM layer for a time series prediction problem.
Everything works fine except for when I try to use the inverse_transform to undo the scaling of my data. I get the following error:
ValueError: Input contains NaN, infinity or a value too large for dtype(‘float64’).
Not really sure how I can get past this problem. Could you please help me with this ?
It looks like you are trying to perform an inverse transform on NaN values.
Perhaps try some print statements to help track down where the NaN values are coming from.
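For example, a small helper (names assumed; not from the tutorial) to audit each array for NaNs so the culprit stands out:

```python
import numpy as np

def report_nan(name, arr):
    """Print and return the NaN count for one array."""
    arr = np.asarray(arr, dtype=float)
    n = int(np.isnan(arr).sum())
    print(f"{name}: {n} NaN values out of {arr.size}")
    return n

# call this on every array on the path into and out of the model,
# e.g. the scaled training data and the raw forecasts (names assumed)
report_nan("train_scaled", [0.1, 0.5, 0.9])
report_nan("forecasts", [0.2, float("nan")])
```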
Thank you for the reply. Yes, there are some NaN values in my predictions. Does that indicate a badly trained model ?
Your model might be receiving NaN as input, check that.
It may be making NaN predictions with good input, in which case it might have had trouble during training. There are methods like gradient clipping that can address this.
Figure out which case it is first though.
Thanks ! My inputs do not have any NaN. Will check out gradient clipping.
Let me know how you go Kiran.
Hi Jason,
When I try step-by-step forecasting, i.e. forecast 1 point and then feed it back as data to forecast the next point, my predictions become constant after just 2 steps, sometimes from the beginning itself.
In detail there. Can you say why this is happening? And which forecast method is usually better: step-by-step or window-type forecasts?
Also can you comment on when can ARIMA/ linear models perform better than netowrks/RNN?
Using predictions as input is bad as the errors will compound. Only do this if you cannot get access to the real observations.
If your model has a linear relationship it will be better to model it with a linear model with ARIMA, the model will train faster and be simpler.
But that is how ARIMA models predict right?
They do point by point forecast. And from my results ARIMA(or STL ARIMA or even XGBOOST) is doing pretty well when compared to RNN. 🙁
But i haven’t considered stationarity and outlier treatment and I see that RNN performs pathetically when the data is non stationary/has outliers.
Is this expected? I have read that RNN should take care of stationarity automatically?
Also, will our results be bad if we do first order differencing even when there is no stationarity in the data?
And as for normalization, is it possible that for some cases RNN does well without normalizing?
When is normalization usually recommended? When standard deviation is huge?
I have found RNNs to not perform well on autoregression problems, and they do better with more data prep (e.g. removing anything systematic). See this post:
Generally, don’t difference if you don’t need to, but test everything to be sure.
Standardization if the distribution is Gaussian, normalization otherwise. RNNs like LSTMs need good data scaling, MLPs less so in this age of relu.
Oh then a hybrid model using residuals from ARIMA for RNN should work well 🙂 ?
The residuals will not have any seasonal components.(even scaling should be well taken care of)
Or here also do you expect MLPs to work better?
It is hard to know for sure, I recommend using experiments to collect data to know for sure, rather than guessing.
I think there is an issue with inverse differencing while forecasting for multistep.(to deal with non stationary data)
This example is adding previously forecasted(and inverse differenced) value to the currently forecasted value.Isn’t this method wrong when we have 30 points to forecast as it keeps adding up the results and hence the output will continuously increase.
Below is the output I got.
Instead should I just add the last known real observation to all the forecasted values? I dont suppose that would work either.
It could be an issue for long lead times, as the errors will compound.
If real obs are available to use for inverse differencing, you won’t need to make a forecast for such a long lead time and the issue is moot.
Consider contrasting model skill with and without differencing, at least as a starting point.
Hi, thank you for your helpful tutorial.
I have a question regarding a seq to seq timeseries forcasting problem with multi-step lstm.
I have created a supervised dataset of (t-1), (t-2), (t-3)…, (t-look_back) and (t+1), (t+2), (t+3)…, (t+look_ahead) and our goal is to forcast look_ahead timesteps.
We have tried your complete example code of doing a dense(look_ahead) last layer but received not so good results. This was done using both a stateful and non-stateful network.
We then tried using Dense(1) and then repeatvector(look_ahead), and we get the same (around average) value for all the look_ahead timesteps. This was done using a non-stateful network.
Then I created a stepwise prediction where look_ahead = 1 always. The prediction for t+2 is then based on the history of (t+1)(t)(t-1)… This has given me better results, but only tried for non-stateful network.
My questions are:
– Is it possible to use repeatvector with non-stateful networks? Or must network be stateful? Do you have any idea why my predictions are all the same value?
– What do network you recommend for this type or problem? Stateful or non stateful, seq to seq or stepwise prediction?
Thanks in advance!
Sandra
Very nice work Sandra, thanks for sharing.
The RepeatVector is only for the Encoder-Decoder architecture to ensure that each time step in the output sequence has access the entire fixed-width encoding vector from the Encoder. It is not related to stateful or stateless models.
I would develop a simple MLP baseline with a vector output and challenge all LSTM architectures to beat it. I would look at a vector output on a simple LSTM and a seq2seq model. I would also try the recursive model (feed outputs as inputs for repeating a one step forecast).
It sounds like you’re trying all the right things.
Now, with all of that being said, LSTMs may not be very good at simple autoregression problems. I often find MLPs out perform LSTMs on autoregression. See this post:
I hope that helps, let me know how you go.
Hi Jason,
Thanks for your tutorials. I’m trying to learn ML and your webpage is very useful!
I’m a bit confuse with the inverse_difference function. Specifically with the last_ob that I need to pass.
Let’s say I have the following:
Raw Data difference scaled Forecasted values
raw_val1=.4
raw_val2=.35 -.05 -.045 [0.80048585, 0.59788215, -0.13518856]
raw_val3=.29 -.06 -.054 [0.65341175, 0.37566081, -0.14706305]
raw_val4=.28 -.01 -.009 [[0.563694, -0.09381149, 0.03976132]
When passing the last_ob to the inverse_difference function which observation do I need to pass to the function, raw_val2 or raw_val1?
My hunch is that I need to pass raw_val2. Is that correct?
Also, in your example, in the line:
forecasts = inverse_transform(series, forecasts, scaler, n_test+2)
What’s the reason of this n_test+2?
Thanks in advance!
Oscar
Hi Jason,
Great work.
I had a question. When reshaping X for lstm (samples,timesteps,features) why did you model the problem as timesteps=1 and features=X.shape[1]. Shouldn’t it be timesteps = lag window size
and the output dense layer have the size of horizon_window. This will give much better results in my opinion.
Here is a link which will make my question more clear:
I model the problem with no timesteps and lots of features (multiple obs at the same time).
I found that if you frame the problem with multiple time steps for multiple features, performance was worse. Basically, we are using the LSTM as an MLP type network here.
LSTMs are not great at autoregression, but this post was the most requested I’ve ever had.
More on LSTM suitability here:
So Jason,
Correct me if I am wrong but the whole point of RNN+LSTM learning over time(hidden states depending on past values) goes moot here.
Essentially, this is just an autoregressive neural network. There is no storage of states over time.
Yes, there is no BPTT because we are only feeding in one time step.
You can add more history, but results will be worse. It turns out that LSTMs are poor at autoregression:
Nevertheless, I get a lot of people asking how to do it, so here it is.
Hi, I try to use this example to identify the shape switch an angle , its useful to use this tutorial and how I can test the model I train it,
Regards,
Hanen
Hi there – I love your blog and these tutorials! They’re really helpful.
I have been studying both this tutorial and this one:.
I have applied both codes to a simple dataset I’m working with (date, ROI%). Both codes run fine with my data, but I’m having a problem that has me completely stumped:
With this code, I’m able to actually forecast the future ROI%. With the other, it does a lot better at modeling the past data, but I can’t figure out how to get it to forecast the future. Both codes have elements I need, but I can’t seem to figure out how to bring them together.
Any insight would be awesome! Thank you!
What is the problem exactly?
Jason, first of all, I would like to thank you for the work you’ve done. It has been tremendously helpful.
I have a question and seeking your expert opinion.
How to handle a time series data set with multiple and variable granularity input of each time step. for instance, consider the dataset like below:
Date | Area | Product category | Orders | Revenue | Cost
so, in this case, there would be multiple records for a single day aggregated on date and this is the granularity I want.
How should this kind of data be handled, since these features will contribute to the Revenue and Orders?
You could standardize the data and feed it into one model or build separate models and combine their predictions.
Try a few methods and see what works best for your problem.
I am using this framework for my first shot at an LSTM network for monitoring network response times. The data I’m working with currently is randomly generated by simulating API calls. What I’m seeing is the LSTM seems to always predict a return to what looks like the mean of the data. Is this a function of the data being stochastic?
Separate question: since LSTM’s have a memory component built into the neurons, what are the advantages/disadvantages of using a larger n_in/n_lag than 1?
THe problem might be too hard for your model, perhaps tune the LSTM or try another algorithm?
A key benefit of LSTMs is that they the lag can extend much longer than other methods, e.g. hundreds of time steps. This means you are modeling something like:
yhat = f(t-1, …, t-500)
And the model can reproduce something it saw 500 time steps ago if needed.
Thanks. I am playing with some toy data now just to make sure I’m understanding how this works.
I am able to model a cosine wave very nicely with a 5 neuron, 100 epoch training run against np.cos(range(100)) split into 80/20 training set. This is with the scaling, but without the difference. I feed in 10 inputs, and get 30 outputs.
Does calling model.predict change the model? I am calling repeatedly with the same 10 inputs and am seeing a different result each time. It looks like the predicted wave cycles through different amplitudes.
Ah ok, I got it. Since stateful is on, I would need to do an explicit reset_states between predictions. Makes sense, I think! Stateful was useful for training, but since I won’t be “online learning” and since I feed the network lag in the features, I should not rely on state for predictions.
Nice work!
Yes, generally scaling is important, but if your cosine wave values are in [0,1] then you’re good.
I have a simple question. Trying to set up an a different toy problem, with data generated as y=x over 800 points (holding out the next 200 as validation). No matter how many layers, neurons, epochs that I train over, the results tend to be a that predictions start out fairly close to the line for lower values, but it diverges quickly and and approaches some fixed y=400 for higher values.
Do you have any ideas why this would happen?
May be error accumulating. You’re giving the LSTM a hard time.
Can I get your input on this issue I’m having? I would really like to make sure that I’m not implementing incorrectly. If there are network parameters I need to do, I can go through that exercise. But, I am not feeling confident about what I am on the right path with this problem.
Hi, there is a problem with the code. when doing data processing, i.e. calculate difference and min max scale. you should not use all data. in more real situation, you can only do this to train data. since you have no idea about test data.
So I changed the code, cut the last 12 month as test. then only use 24 months data for difference, min max scale, fit the model and predict for month 25, 26, 27.
Then I continue to use 25 months data for difference, min max scale, fit the model and predict for month 26, 27, 28.
…
The final result is worse than baseline.!
Correct, this is a simplification I implemented to keep the tutorial short and understandable.
Hi Jason, I was able to get slightly better results with a custom loss function (weighted mse)
def weighted_mse(yTrue,yPred):
ones = K.ones_like(yTrue[0,:])
idx = K.cumsum(ones)
return K.mean((1/idx)*K.square(yTrue-yPred))
credit goes to Daniel Möller on Stack Overflow as I was not able to figure out the tensor modification steps on my own and he responded to my question there
Nice one! Thanks for sharing.
What is the point of the “train” data set as parameter in this function if it is not used?
Thanks
Yep, looks like its not used. You can probably remove it.
Hello, It is very useful tutorial. I am starter for the python and programming. May I convert input of model into 4 or more than one variable? and change the n_batch into other number not 1?
Sure.
But ,When I change the n_batch size, the model does not work. By the way, you said manually to epoch of model, would you tell me the how to do it?
Hi Jason,
thanks a lot for your tutorials on LSTMs.
Do you have a suggestion how to model the network for a multivariate multi-step forecast? I read your articles about multivariate and multi-step forecast, but combining both seems to be more tricky as the output of the dense layer gets a higher dimension.
In words of your example here: if I want to forecast not only shampoo but also toothpaste sales T time steps ahead, how can I achieve the forecast to have the dimension 2xT? Is there an alternative to the dense layer?
I see. You could have two neurons in the output layer of your network, as easy as that.
Thanks for this great tutorial. Do you think this technique is applicable on the case of a many-to-many prediction?
A toy scenario: Imagine a machine with has 5 tuning knobs [x1, x2, x3, x4, x5] and as a result we can read 2 values [y, z] as a response to a change of any of the knobs.
I am wondering if I can use LSTM to predict y and z at with a single model instead of building one model for y and another for z? I am planning to follow this tutorial but I will love to hear what you think about it.
Yes, LSTMs can easily be configured to support multiple input series and output a vector or parallel series.
For example of taking multiple series as input, see this post:
Hi Jason, thank you very much for this tutorial. I am just starting with LSTM and your series on LSTM is greatly valuable.
A question about multi-output forecasting: how to deal with a multi-output when plotting the true data versus the predicted data.
Let’s say I have a model to forecast the next 10 steps (t, t+1…,t+9).
Using the observation at time:
–> t=0, the model will give a forecast for t =1,2,3,4,5,6,7,8,9,10
and similarly, at
–> t=1, a forecast will be outpout for t=2,3,4,5,6,7,8,9,10,11
etc…
There is overlap in the timestep for the forecast from t=0 and from t=1. For example, if I want to know the value at t=2, should I use the forecast from t=1 or from t=0, or a weighted average of the forecast?
May be using only the forecast from t=1 enough, because it already includes the history of the time series (i.e it already includes the observation at t=0).
I’m not sure I follow. Perhaps you might be better off starting with linear models then move to an LSTM to lift skill on a framing/problem that is already working:
The:
return datetime.strptime(‘190’+x, ‘%Y-%m’)
gives me:
ValueError: time data ‘1901/1’ does not match format ‘%Y-%m’
Thanks in advance
Perhaps confirm that you downloaded the dataset in CSV format.
So you don’t actually need to split the data into test and training sets because you don’t use the training set in this code. So this then becomes an unsupervised problem?
No, it is a supervised learning model.
We use walk-forward validation. Learn more about it here:
my mistake, I was look at just the multi-step persistence model. Thanks!
No problem.
sorry i am confuse about the function inverse_transform why you use n_test+2 in the function but not n_test? | https://machinelearningmastery.com/multi-step-time-series-forecasting-long-short-term-memory-networks-python/ | CC-MAIN-2017-43 | refinedweb | 6,336 | 73.98 |
Due to the large screen sizes, we do not need (or want) to zoom/pan to form fields. We also do not need the next/prev buttons to appear. Tablet keyboards have a "Tab" button to make moving between fields easier.
We still want the combobox UI and the form suggestion bubble.
We should try to trigger this based on screen size. We are using >800 px in a different tablet UI bug.
(In reply to comment #0)
> We still want the combobox UI and the form suggestion bubble.
If you want the form suggestion bubble this is another good reason to move it out of FormHelperUI (bug 648026)
Created attachment 532162 [details] [diff] [review]
Patch
Created attachment 532163 [details] [diff] [review]
Patch
Oups, left some debug code (the previous version was always disabled)
Comment on attachment 532163 [details] [diff] [review]
Patch
>+ // Dynamically enabled/disabled the form helper if needed
>+ let mode = Services.prefs.getIntPref("formhelper.mode");
>+ let state = (mode == 2) ? (window.innerWidth <= 480) : !!mode;
I think we want to use a physical length. We are using 124 mm in a different place as a tablet trigger.
See for getting the DPI
if my math is right, this should work:
let dpmm = DPI / 25.4;
let state = (mode == 2 ? ((window.innerWidth / dpmm) <= 124) : !!mode);
>+ Services.prefs.setBoolPref("formhelper.enabled", state);
Instead of using "formhelper.enabled" can we just move this into the "enabled" getter? and make it memoized
> case "FormAssist:Hide":
>- this.enabled ? this.hide()
>- : SelectHelperUI.hide();
>+ if (this.enabled)
>+ this.hide();
>+ else {
>+ SelectHelperUI.hide();
>+ ContentPopupHelper.popup = null;
>+ }
Use { } around the "if" part
r- for the nits
Created attachment 532610 [details] [diff] [review]
Patch v0.2
(In reply to comment #4)
> Instead of using "formhelper.enabled" can we just move this into the
> "enabled" getter? and make it memoized
forms.js use formhelper.enabled too but can't access window.innerWidth (since it use a fake viewport)
Comment on attachment 532610 [details] [diff] [review]
Patch v0.2
>diff --git a/mobile/chrome/content/Util.js b/mobile/chrome/content/Util.js
> return (!appInfo || appInfo.getService(Ci.nsIXULRuntime).processType == Ci.nsIXULRuntime.PROCESS_TYPE_DEFAULT);
> },
>
>+ isTabletSized: function isTablet() {
I kinda prefer "isTablet"
>diff --git a/mobile/chrome/content/common-ui.js b/mobile/chrome/content/common-ui.js
>+ // Dynamically enabled/disabled the form helper if needed depending on
>+ // the size of the screen
>+ let mode = Services.prefs.getIntPref("formhelper.mode");
>+
>+ // See the tablet_panel_minwidth from mobile/themes/core/defines.inc
>+ let tablet_panel_minwidth = 124;
>+ let dpmm = Util.getWindowUtils(window).displayDPI / 25.4;
You can remove this code, right? You're using Utils.isTablet()
r+ with the nits fixed
We could use a test for this. Running it on phones would at least let us know the FormHelper is active for small screens.
Can someone having a tablet, please, verify this ?
Verified Ideos s7 - Mozilla/5.0 (Android; Linux armv7l; rv:6.0a1) Gecko/20110523 Firefox/6.0a1 Fennec/6.0a1 ID:20110523042031 | https://bugzilla.mozilla.org/show_bug.cgi?id=656373 | CC-MAIN-2016-26 | refinedweb | 486 | 61.02 |
Using Azure App Services to Integrate with Salesforce
- Posted in:
- salesforce
- integration
- azure
Microsoft recently announced Azure App Service, a new Azure service which integrates Web Apps, Mobile Apps, Logic Apps and API Apps in one service. Scott Guthrie’s blog has a very good article that explains this new service.
In this article I will talk about how to use Azure App Services to integrate with Salesforce. For this example I will develop a small solution based on a real-world scenario. I will assume the reader is more familiar with Microsoft's technology, so I will spend more time explaining the details about Salesforce.
The Scenario
The scenario is the following: suppose our client is an insurance company that uses Salesforce as its CRM. Insureds call the Call Center department to request assistance depending on the policy coverage they have contracted, for example: assistance on the road (for automotive policies), or a visit from a professional (e.g. a plumber or locksmith) at home (for home insurance policies). All assistance requests are handled by third-party service providers (car shops, locksmiths, car towing companies, etc.). The idea is that service requests are created in the CRM, and then third-party service providers can take these requests and mark them as completed once the service has been delivered. Ideally, external providers would have access to the CRM to see all the pending requests and make updates directly in the system. However, in our case it won't be that easy: there are many service providers and we don't want to buy Salesforce licenses for them (Salesforce licenses are not exactly cheap), and besides, we don't want external users to have access to our CRM. You could create a Community for this and be done with it, but in this case, and for the sake of illustration, I will instead show how to develop a small web portal for service providers which we will synchronize with the CRM using Azure App Services.
Our solution will have the following flow:
- Call Center users will receive assistance requests from insureds. These requests will be entered in the CRM (Salesforce)
- At regular intervals, the Logic App will get from the CRM all pending requests using a Salesforce Connector. These requests will be sent to the database of the service provider web portal using a SQL Server connector
- New requests will appear in the service provider portal
- Service providers will take requests and deliver them. Once delivered, they will change the request’s status to completed
- At regular intervals the Logic App will get all completed requests and synchronize them back to the CRM using a Salesforce Connector
- Finally, CRM users will be able to see all the services completed by the external providers
In the diagram, we can see that we will require a bidirectional synchronization between Salesforce and the web app we will create.
It is worth noting that the goal of this article is to describe how to use Azure App Service to do integrations with Salesforce. The solution we will develop here is not intended to be complete, but rather a simple one that lets us see the most important aspects to consider when doing this kind of integration. A complete solution would probably have a better state workflow associated with requests, notifications (email, SMS and/or push) of new assignments to third-party providers, notifications to insureds about the status of their requests, etc. To keep things simple, and the article reasonably short, we will focus on the most important aspects and leave some of the details of a real solution as homework.
Requirements
In order to follow along and create the solution we will need:
- An Azure subscription. If you don't have one you can get a free trial from the Azure web site
- A Salesforce development environment. You can get a free Salesforce Developer instance from the Salesforce Developer’s web site. When you register you will get an email with information about how to connect to the instance.
- Visual Studio 2013 Community Edition (Update 4 at the time of this writing). You can download it free from the Visual Studio web site. You will also need the Azure SDK (version 2.5.1 at the time of this writing).
Development of the Salesforce Application
The first thing we’ll do is creating the necessary objects in our Salesforce developer instance. If you requested a new development instance and is the first time you get into Salesforce you will directly enter the Salesforce setup area. From here we can customize the system and create new objects. In our case we want an object called Assistance.
From the menu on the left, we will go to the "Build" section, then "Create", and then "Objects". You can also use the search box at the top and enter a keyword to filter the options available in the menu. On the "Custom Objects" screen click the "New Custom Object" button. On the screen to create the object, fill in the required data as shown in the following figure. For the rest of the information leave the default values.
Notice the “Object Name” is Assistance. Every object in Salesforce has an API Name, which is the name that we will use to identify the object in development. As a rule, Salesforce adds the __c suffix to the name of all custom objects. So, the real name of our newly created object is Assistance__c.
Click on the "Save" button to create the object. In Salesforce, objects are immediately available. We will proceed to create some fields to capture the data required for an assistance. On the same screen of the object, in the "Custom Fields & Relationships" section, click the "New" button to launch the new field wizard. We will first create the relationship between assistances and contacts: a contact can have many assistances and an assistance belongs to a single contact. Salesforce comes out-of-the-box with an object called Contact (here the name doesn't have the __c suffix, as this is a standard object and not a custom object; all out-of-the-box objects are standard objects). To create the relationship between assistance and contact we need to create a field of type "Lookup Relationship", so we will choose this option on the wizard screen and click "Next". On the next step, in the "Related To" field choose the Contact object. On the next step specify Contact as the name of the field, and leave the default values for the rest.
Click on "Next", leave the default options for the rest of the steps, and finish the wizard by clicking "Save & New" to restart it and create a new field. We will repeat the wizard to create all the fields described below:
After you finish creating the fields, they should look as follows:
Note that, as with the name of the custom object, the names of custom fields also have the __c suffix. Every object in Salesforce, custom or standard, has two fields called Id and Name. The first one is the primary key of the object, and the second is a text field that works as the descriptor for the record. Neither field has the __c suffix, as they are considered standard fields (even on custom objects).
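Just for illustration (you won't need this for the steps that follow), these API names are what you would reference when querying the object, for example with SOQL from the Developer Console. The field names below are assumptions based on the field labels used in this article (Contact, Date, Status, Provider); check the API names shown on your object's detail page:

```sql
SELECT Id, Name, Contact__c, Date__c, Status__c, Provider__c
FROM Assistance__c
WHERE Status__c = 'New'
```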
Test the Application by Creating a Few Records
We already have our object for assistances. To test it we will create a new assistance. From the top menu choose the "Contacts" tab to see a list of recent contacts (if it is the first time we use this tab, we will not have any recent contacts and the list will be empty). To see the list of all contacts, from the "View" dropdown select the "All Contacts" list and click the "Go" button. Select any of the contacts in the list to see his/her details. At the end of the form we should see the "Assistances" section. Click on the "New Assistance" button to create a new assistance record. Notice how some of the fields are automatically filled based on default values: the Contact field gets filled since we created the record from a contact record, the Date field defaults to today's date, and the Status is set to New as it is the first value defined in the picklist. Enter a name and description and save the assistance.
Create one or two more assistances so we have enough data for the rest of the tests that will follow.
If we go back to the contact record we should see the requests we just created. However, in the list we only see the name of each request. To see additional fields we need to modify the form. At the top of the page there is a link called "Edit Layout" which opens the form editor.
On the edit layout screen, look at the end for the list of assistances and hover the mouse over the title of the list until a wrench icon appears. When you click this icon you will see a screen where you can select the fields you want to display in the list. Select the fields Date, Status and Provider and then click the "OK" button. To save the layout click the "Save" button at the top of the layout editor, returning to the contact record. You will now be able to see more details in the list of assistances:
Development of the External Service Provider’s Portal
We will now proceed with the creation of the provider’s portal. For this we will create an MVC application using Visual Studio, and later publish it in Azure as a Web App with a SQL Server database.
Open Visual Studio and create a new project of type “ASP.Net Web Application”. I have called mine ProvidersPortal. Use the MVC project template and choose “Individual User Accounts” as the authentication method. Once the project is created we will enable Entity Framework Migrations. From the Package Manager console enter the following commands:
enable-migrations add-migration Initial update-database
Next, we will create a new entity to represent assistances. In the Models directory, add a new class and call the file Assistance.cs and overwrite the content of the file with the following code:
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Web;

namespace ProvidersPortal.Models
{
    public class Assistance
    {
        [MaxLength(20)]
        [Key]
        public string Id { get; set; }

        [MaxLength(80)]
        public string Name { get; set; }

        public string Description { get; set; }

        [MaxLength(50)]
        public string Provider { get; set; }

        [MaxLength(20)]
        public string Status { get; set; }

        public DateTime? Date { get; set; }

        public bool? Synced { get; set; }
    }
}
To expose this object to Entity Framework we need to add a property for it to the context class. To keep things simple we will use the same database used by ASP.Net Identity. Open the file Models\IdentityModels.cs and add the highlighted line to the ApplicationDbContext class:
public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection", throwIfV1Schema: false)
    {
    }

    public static ApplicationDbContext Create()
    {
        return new ApplicationDbContext();
    }

    public DbSet<Assistance> Assistances { get; set; }
}
Next, add a new migration for the new object. In the Package Manager console enter the following commands:
add-migration Assistance
update-database
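With the Assistances table in place, here is a quick sketch (hypothetical code, not one of the tutorial steps) of how the Assistances property of the context can be used to read and update requests with LINQ:

```csharp
// Hypothetical sketch: query pending requests and flag them as synced.
// Assumes: using System.Linq; using ProvidersPortal.Models;
using (var db = new ApplicationDbContext())
{
    var pending = db.Assistances
                    .Where(a => a.Status == "New" && a.Synced != true)
                    .ToList();

    foreach (var assistance in pending)
    {
        assistance.Synced = true;
    }

    db.SaveChanges(); // persist the flag updates
}
```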
Now we need a controller and views for the assistance object. In the Controllers folder, add a new controller of type "MVC 5 Controller with views, using Entity Framework". Fill in the data as shown in the following figure:
Visual Studio will create (scaffold) files for the controller and the views for the typical CRUD operations. By default the controller is created allowing anonymous access, and in our case we want only authenticated users (external service providers) to have access to the assistance requests. Open the controller file, Controllers\AssistancesController.cs, and add the Authorize attribute to it:
[Authorize]
public class AssistancesController : Controller
Now all that is left is to add a menu option to access the assistances. Open the file Views\Shared\_Layout.cshtml and add a new option to the menu, as shown in the following highlighted code:
<ul class="nav navbar-nav">
    <li>@Html.ActionLink("Home", "Index", "Home")</li>
    <li>@Html.ActionLink("Assistances", "Index", "Assistances")</li>
    <li>@Html.ActionLink("About", "About", "Home")</li>
    <li>@Html.ActionLink("Contact", "Contact", "Home")</li>
</ul>
Compile and execute the application. After registering you should be able to see the assistances screen (which will be empty at the moment):
Publishing of the Portal as a Web App
Now we’ll proceed to publishing the portal as a Web App in Azure. We will let Visual Studio do all the work for us. In the “Solution Explorer”, on the project node, right click and choose “Publish…”. Select “Microsoft Azure Web Apps” as the target. Visual Studio will ask you for credentials to connect to your Azure subscription. Click on the “New…” button to create a new application. I have called mine ProvidersPortal, and to keep things well organized I’ve decided to create a App Service plan as well as a new resource group for this application. We also need a database, so select the option to create a new database server. Click the “Create” button to let Visual Studio create the app in Azure.
Write down the name of the App Service plan and the resource group; you will need these when you create the rest of the applications (Logic Apps and API Apps), as they need to be in the same plan and group so they can see each other. In my case, both the plan and the group are called ProvidersPortal.
Once the web application is created in Azure, the wizard will continue with the publishing. In the “Settings” section make sure you select “Execute Code First Migrations (runs on application start)” in order to provision the Azure database with the schema required by our application.
Finally, click the "Publish" button; when the wizard finishes it will open the provider's portal running in the cloud! Register a user and browse to the Assistances menu item so Entity Framework creates the database schema for the app.
Integration using a Logic App
We already have our two applications executing independently. Now comes the fun part: the integration of both applications using a Logic App in Azure. We will use the Azure portal to do all the steps that follow.
First, we will create the API Apps that will allow us to connect to Salesforce and SQL Server.
Creation of API Apps
We need two API Apps, one for Salesforce and the other for SQL Server. Both apps will be created from the Azure Marketplace.
Salesforce API App
On the Azure portal, click on “New”, select “Web + Mobile” and then “Azure Marketplace”. Select the “API Apps” category. On the search field enter “salesforce” and press enter. From the search results click on “Salesforce Connector”. A blade will open with the details of the connector and with links to its documentation. Click on the “Create” button. A new blade will open with the properties of the connector. From here select the “Package Settings” section. In this section you can specify the Salesfoce objects you would like to access. Remove the objects that appear by default and enter the name of the object we created in Salesforce: Assistance__c (remember the __c suffix, which is part of the name):
In the section “App Service Plan” select the plan created before when publishing the web application. The service plan must be the same so all the parts can work together. In my case the service plan is called ProvidersPortal. Click on “Create” and let Azure provision the API App. After a minute approximately you will see a notification telling the app is ready.
By the way, if you want to modify the connector configuration parameters later, you need to do it from the Host of the API App (you have a link to the host on the connector details blade)
SQL Server API App
As we did before, from the Azure Marketplace search for “sql” and in the search results click on “Microsoft SQL Connector”. A blade will open with the connector details and a link to its documentation. Click the “Create” button to open a new blade with the configuration of the connector. From here select “Package Settings” section. In this section you can specify the connection details to the database. If you have doubts about what are the right values to put here you can look at the connection string that was created in the publishing profile of the web project when we published it to Azure (in Visual Studio, in Solution Explorer, go to Properties\PublishProfiles\<projectname>.pubxml). For the “Tables” specify Assistances. Make sure to select the same service plan we’ve been using. This is how the properties look in my case:
Click “Create” and let Azure provision the API App. After a minute approximately you will see a notification telling the app is ready. As with the previous API app, if you need to modify the configuration of the connector (because they changed or because you made a mistake) you will have to do it from the Host of the API App.
Creation of the Logic App
Let’s stop here for a moment and analyze what we want to accomplish. We want a bidirectional synchronization between Salesforce and the SQL Server database.
For the synchronization from Salesforce to SQL Server we want to do the following:
- First we find all those assistances that are in Salesforce which have not been synced and with a status of “New”
- For each one of these assistances we get its details
- For each one of these assistances we will add a new record in the provider portal’s database
- If the insertion is successful we mark the assistance as synced in Salesforce
For the synchronization from SQL Server to Salesforce we want to do the following:
- We will look for all the assistance records that have been completed and have not been synced yet
- For each one of these records we will update Salesforce
- If the update is successful we will mark the record as synced in the portal
Since (for the moment) the logic inside Logic Apps is lineal (the current step always depends on the previous) we will need to create two Logic Apps, one for each direction of the synchronization.
Note: in this case we will use a boolean field to know what records need to be synced from one system to the other. In a real application this is probably not the best since each record will be synced only once between systems. A better approach would be to base the synchronization on a timestamp field that has the last modification date made to a record.
Synchronization from Salesforce to SQL Server
We will now proceed to create the synchronization between both applications. For this we will create a Logic App and use the connectors created before. On the Azure portal, click on “New”, then on “Web + Mobile” and select “Logic App”. A new blade will open where you can enter the name of the app and select the service plan. Again, make sure to select the same service plan specified in the connectors. I called my logic app ProvidersPortalLogic and selected the ProvidersPortal plan. Click on “Create” to provision the application. After a minute approximately you will see a notification telling the app is ready.
Open the application and select the section “triggers and actions” to show the canvas to edit the actions for the app. For now, and to be able to develop and test our app, click on the option “Run this logic manually” which will allow us to run the application on demand as we develop it. On the right section of the edition canvas you should see, among others, the two API applications we created in previous steps: the Salesforce and the SQL Server connectors. If you don’t see the connectors you probably didn’t select the appropriate service plans to be all the same.
Let’s start by adding the first step. On the “API Apps” panel click on the Salesforce connector. This will add a step to the logic. You need to authorize, i.e. provide the connection credentials, the connector so it can connect to the Salesforce instance. Click on the “Authorize” button, a screen requesting Salesforce credentials will appear. Enter your credentials and when asked if you want to give access to the Azure Logic App say yes. If everything went ok the connector will get the metadata from Salesforce and will present you with some options:
It is possible that you get an error saying the the metadata has not been generate (“Error fetching swagger api definition”). You could try remove the step (by clicking on the wrench icon in the upper right menu of the step and selecting “Delete this action”) and try again. If you still get the same error then you can do the following trick: change the editor to the the code view by clicking on the “Code View” icon
and remove the Salesforce subscription (the one highlighted on the following image) and then try to add the step again.
If the above trick doesn’t fix the problem the you probably have an error in the configuration of the API App (the connector).
From the step actions click on three dots “…” to get more actions. Select the “Execute Query” action. This action allows you to execute a SOQL query against Salesforce. Enter the following query:
select Id from Assistance__c where Synced__c = false and Status__c = 'New'
We want the id of the records that have not yet been synchronized and with a status of “New”. When we validate the step we should see something like this.
When this step gets executed we will get all the assistance records that satisfy the criteria, and they will be stored in a property called result. Next we will add the second step: get the details of each assistance.
Add a new step by clicking on the Salesforce connector in the “API Apps” panel. Select “Get_Assistance__c” as the action. This action will get all the fields of a particular record. We want this step to be executed by each of the records obtained in the previous step. To achieve this, on the step menu, click on the wrench icon and select “Repeat over a list”. A new field “Repeat” will appear allowing you to specify the list to use to get the records. We will use an expression to get the list. Expressions are specified by adding the @ character as a prefix. If we want to get the list of records from the previous step we will use the following expression:
@body(‘salesforceconnector’).result.records
The expression means the following: get the body of the step called salesforceconnector and search for a property called result.records. The body is formatted as JSON and the result property is an object which contains an array property called records, containing all the records returned by the query.
By default, the name of each step is the name of the connector plus an index, so for example, the first step of the Salesforce connector is called salesforceconnector, the second step is called salesforceconnector0, the third is salesforceconnector1, and so on.
In the field “Record Id” we will enter the id of the current record in the loop. To get the current record inside the loop we will use the @repeatItem() expression. So, if we want the id of the record we will use:
@repeatItem().Id
The step should look like this:
Note: we could not add this second step to the logic and instead get the required data by specifying it on the query of the first step, but I’ve done it this way to illustrate the concepts.
Now that we have the data for each record the third step is to insert the record in the database of the web application. For this we will use the SQL Server connector. On the “API Apps” panel click on “Microsoft SQL Connector”. This will add a new step to the logic. Select the action “Insert into Assistances (JSON)”. The connector will ask for parameters for each field in the table. As with the previous step, we want this insertion to be made for each record in the list so click on the wrench icon and select “Repeat over a list”. In this case we get the list using the following expression:
@actions('salesforceconnector0').outputs.repeatItems
salesforceconnector0 is the previous step, from which we want to take the outputs property representing the output of the step (this output is actually many records since the step was also executed for a list). From this output we want the repeatItems property, which is an array with the result of each record it got in the loop. To get a specific field we will use again the expression @repeatItem():
@repeatItem().outputs.body.Id
The above expression would get the field Id from the Salesforce record. If this looks complex to you the best thing is for you to execute the application every time you add a step and analyze the output of each step to see the JSON produced. This will give you an idea on how to get the values you need.
We will repeat the above expression for each field of the table. Remember once again that, because we’re dealing with custom fields, you need to add the __c suffix to the field name, except for the Id and Name fields which are standard fields (even though they belong to a custom object). Fill the fields using the following table:
** The date field should have the following expression: @repeatItem().outputs.body.Date__c, however this value was giving me errors when trying to insert the record in SQL Server, something related with the date format. After trying unsuccessfully with various formats I decided to leave it blank at the moment and later find a solution.
We will leave fields Provider and Synced blank as these will be filled in the provider’s portal. The step should look as follows:
The last step would be to update the record in Salesforce and mark it as synced so the record would not be considered in subsequent runs of the logic app. Add a new step to the logic by clicking on the Salesforce connector in the “API Apps” panel. For this step select the action “Update Assistance__c”. Make the step execute for a list (select “Repeat over a list” from the step menu). The parameters for fields “Repeat” and “Record Id” are the same as in the previous step. Set the Synced field to true.
We have our logic app ready! Make sure to save changes if you haven’t done it yet and close the editor (you could get a message asking if you want to discard your changes even if you have saved them, say yes, your changes are already saved). Execute the application by clicking on the run icon
. If everything works ok you should be able to see the results of the execution:
You should also be able to see the assistances you created in Salesforce in the provider’s portal:
And records in Salesforce should be marked as synced:
With this we have finished the first part of our solution.
Synchronizing from SQL Server to Salesforce
For the second part of the solution we will create a new Logic App. The steps are the same as the ones we did for the previous app. I’ll just mention that I will call this app SalesforceLogic. Remember to assign the same service plan. To execute this app on demand make sure you select the “Run this logic manually”.
Let’s start with the first step. Add a SQL Server connector to the app and select the action “Select from Assistances (JSON)”. Leave “Fields“ blank to get all the fields of the table. In the “Where” add the following condition:
Status = 'Completed' and Synced is null
The step should look like this:
Now add a second step by clicking on the Salesforce connector. Since this is a new logic app you need to authorize it against the Salesforce instance again. Select the action “Update Assistance__c”. We want to execute the update in Salesforce for each one of the records retrieved in the previous step, so we need to execute this step for a list: select the “Repeat over a list” from the step menu. For the list we will use the body of the previous step with the expression @body(‘microsoftsqlconnector’), and for the “Record Id” use the expression @repeatItem().Id. We need to update the fields Provider__c and Status__c in Salesforce by using the expressions @repeatItem().Provider and @repeatItem().Status, respectively. The step should look as follows:
Finally, we will add a third step using the SQL Server connector to update the Synced field in the portal’s database so it will not be considered when the logic app is executed again. For this we will make this step run for the same list of the first step: @body(‘microsoftsqlconnector’) and set the Synced field to true. This step requires a “Where” condition which needs to be dynamic using the following expression:
@concat('Id = %27',repeatItem().Id,'%27')
Here the trick is to use the URL codification of the ' character (simple quote) using the code '%27'. The reason to do it like this is because we cannot use a simple quote inside a text (even though I have tried different escape sequences). This will produce a condition such as this: Id = 'xxxxxxx'.
We have our logic ready! Make sure to save changes and close the editor.
On the provider’s portal select some of the assistances and fill the Provider field and change the Status to Completed.
Execute the application by clicking on the run icon
. If everything works ok you should be able to see the records in Salesforce updated:
Also you should see records marked as synced in the portal’s database:
Automatic Execution
So far we have executed both Logic Apps manually on demand. On a real application we will execute both applications automatically at regular intervals. For this we need to uncheck the option “Run this logic manually” and use some application with a trigger. The Salesforce connector doesn’t provide any trigger, but the SQL Server connector does. In both cases one option is to use the “Recurrence” application, which will allow use to automatically execute the application using a predefined time interval. The only inconvenience of this is that, if you use the free plan, the minimum time interval you can specify is one hour (one of the reasons why we chose to run the logic on demand for this article).
Conclusions
As we’ve seen, Azure App Services is a powerful tool. With a great amount of connectors (which is rapidly growing) and with the escalation possibilities that offers Azure, it is very easy to create solutions such as the one created in this article. Definitely a tool that any Software Architect should have in her tool belt when designing integrations solutions.
It is true that some areas could be improved. Documentation lacks some details, and there are some minor errors when designing the application logic, but I’m pretty sure this will evolve and get fixed (if not already) as time goes by. Also, the logic of an App Logic is very lineal, and although you can get around some scenarios, this limits some other scenarios, but again, I’m pretty sure this will change later as they will add more options to it.
So, things can only get better, so I will be following this service and see how it evolves from here.
Resources
- Try Azure for free:
- Creation of a Salesforce Developer account:
- Documentation and videos of Azure App Services:
- Documentation of Salesforce connector:
- List and documentation of Azure App Service connectors:
- List of functions used in expressions:
You can also see this article in Spanish in the Microsoft MSDN blog for Spain:
So, apparently this doesn't work anymore? The salesforce connector won't show up as available to create a logic app. Only the ones with triggers for creation/modified records are there.Wanderlei Santos | http://bloggiovannimodica.azurewebsites.net/post/using-azure-app-services-to-integrate-with-salesforce | CC-MAIN-2020-45 | refinedweb | 5,423 | 60.45 |
I am writing tests for a module Child which inherits from Parent, and have run into a oddity which I can't grasp.
These modules are part of a larger framework and it is pretty difficult to strip it down to a non-working snippet of code.
However these are the facts before the crime:
my $o = Child->new;
print ref($o); #Child
print $o->isa('Child'); # 1
print $o->isa('Parent'); # 1
print join(':', @Child::ISA) # Parent
my $method = 'testSign';
my $code = $o->can($method);
print $code; # CODE(0x20d5b94)
[download]
Either of
$o->$method()
[download]
$o->$code()
[download]
emits the following error
Undefined subroutine &ClientChild::testSign called at program.t line nnn.
The testSign method is only defined in the Parent class.
So whats going on, to me it looks like inheritance does not occur even though the prerequsites seem fulfilled.
Hints, ptrs, wild shots in the dark greatly appreciated!
perl -v #AS-635
[download]
update: Thanks adrianh & ysth for spotting my transcription error.
Sort of Solved:Added a method Fubar to Parent. And tested with that, and now it worked. So obviously there was something special with the testSign name. A little searching revealed the testSign was referenced in Child like so
my %_init_mems = (
Password => \&testPassword,
Sign => \&testSign,
Name => \&testNameX2,
Titel => \&testStrX,
);
[download]
I'm not sure but does this act as a declaration for sub testSign in Child namespace ala ysth's and Ovid's suggestions?
But in that case the testSign in Child ought to have been invoked as a method instead ?
$ perl -we'sub foo; my $obj = bless {}, "main"; $obj->foo()'
Undefined subroutine &main::foo called at -e line 1.
[download]
$ perl -we'my $obj = bless {}, "main"; $obj->foo()'
Can't locate object method "foo" via package "main" at -e line 1.
[download]
You might have to do $code->($o) to make the coderef
WinXp/AS635
Yeah, $o->$method and the other two variations work in other programs. It also works well in the test script I wrote to exercise the "feature" ;-/
Is there any chance that you have more than one definition of the base or child class? If so, could you possibly be using a module different from the one you think you are using?
Also, how is testSign being called? If it's called with function syntax instead of method syntax, you'll get the error described.
Cheers,
Ovid
New address of my CGI Course.
Several good ideas!
Calling testSign directly as a subroutine works
print &Parent::testSign() # 1 eq OK
[download]
Showing that the subroutine/method is defined in Parent, but the method invocation does not dispatch to Parent.
No AUTOLOAD
No declarations
print join(':', @Parent::ISA) # aka empty
[download]
$code = $obj->can("testSign");
[download]
Undefined subroutine &Client::testSign called at program.t line nnn.
Client::testSign?
Undefined subroutine &Child::testSign called at program.t line nnn.
Can't locate object method "junk" via package PACKAGE_NAME
Couldn't agree more with you, this is I think the real indicator of a serious problem as I AM calling with different variations of method invocation "->"
The error message you indicate would be more normal, showing signs of dispatch mechanism not finding the requested method]
$code->();
do?
Same error as above.
Also, what happens when you call testSign directly?
$o->testSign();
[download]
Vert, very, very strange!
Agreed!
And no the direct call, dies with the same error.
Although I have no clue what is wrong, nor do I have anything to offer as a solution, for which I am sorry, I must say that this is a very well-presented question. That they were all this well. | http://www.perlmonks.org/index.pl/?node_id=379356 | CC-MAIN-2018-17 | refinedweb | 603 | 71.75 |
Java encryption voicepekerjaan [log masuk untuk melihat URL] Then Throw New InvalidOperationException("Private key is not loaded") End If Dim num As Integer = [l...
..
...performs deploys on target servers with Authentication solution (encryption) another solution you can recommend. See this link [log masuk untuk melihat URL] Need a simple Authenticated secrete solution that is explained, you can have each of the 3 or for steps do java version, hostname and ifconfig on the target servers. Is $10
..: [log masuk untuk melihat [log masuk untuk melihat
Sila Dafter atau Log masuk untuk melihat butir experiance java
Sila Dafter atau Log masuk untuk melihat butiran.
Hi, I have some issue with the code. Your job would be to fix it, and would implement some algorithm. Details will be shared later. Having encryption knowledge is helpful Thanks!
.. IPv4 forwarding
Hi, I have a Clearkey DRM Key server built, I need to further test and work on some encryption, I am looking for someone that has this experience, to look at this project.
...Camera with Text Recognition (example: [log masuk untuk melihat URL]) GPS Fingerprint Network Encryption Local DB with autosync to cloud storage Background Services Maps eGov / eID userauthentication / signature processing through this API: [log masuk untuk melihat
...Objective C = iOS + Java = Android --> ready in two weeks. The main customization's must be: - Upgrade security Many security features, which include i.e.: --> Use TLS connectivity for handshake --> Use VPN connection for additional security --> Use ZRTP encryption protocol on VoIP (voice and video) communication --> Use text encryption for mess...
.. [log masuk untuk melihat URL]; import [log masuk untuk melihat URL]; import [log masuk untuk melihat URL]; import [log masuk untuk melihat URL]; import [log masuk untuk melihat URL]; public class Java_AES_Cipher { private static int CIPHER_KEY_LEN = 32; private static String CIPHER_NAME = "AES/CBC/PKCS5PADDING"; public stat...
Project Theme/ Project Title: FRAPPE: DETECTING MALICIOUS FACEBOOK APPLICATIONS Overv...etc has however led to the increase in privacy and security concerns. Main Deliverable: Java or Python programming for penetration testing Secondary Deliverable: Theoretical studies of password protection, data & app security and understanding of data encryption
...
encryption of videos - no cut, copy or paste of videos
...quantum resistance encryption as layer such as [log masuk untuk melih got access in the server. The app must NOT store any encryption in the phone
Hi, I have some issue with the code. Your job would be to fix it, and would implement some algorithm. Details will be shared later. Having encryption knowledge is helpful Thanks!
I am looking for help transmitting orders from a website into a POS system. The order is transmitted via an encrypted email. We have not been able to eliminate the errors.
I need someone to change the login from my panel to use my IPBforum’s database. I already have a script that uses its encryption and such so it should be easier. Additionally I will need a form in that panel where you enter a username (located in the forum’s database) and it will set two values to NULL | https://www.my.freelancer.com/job-search/java-encryption-voice/ | CC-MAIN-2018-47 | refinedweb | 505 | 54.12 |
import random
random.seed(1)
import matplotlib.pyplot as plt
from collections import defaultdict
import scipy
import scipy.optimize
import numpy as np

values = [0, 1]
ans = defaultdict(int)
for N in range(100000):
    tot = 0
    for i in range(1000):
        tot += random.choice(values)
    ans[tot] += 1

start, end = 350, 650
x = np.arange(start, end)
y = []
for i in range(start, end):
    if i in ans:
        y.append(ans[i])
    else:
        if i > start:
            y.append(y[-1])
        else:
            y.append(0)

n = len(x)
mean = 500.
sigma = 10.

def gaus(x, a, sigma):
    return a * scipy.exp(-(x - mean)**2 / (2 * sigma**2))

popt, pcov = scipy.optimize.curve_fit(gaus, x, y, p0=[1.0, sigma])
print popt

plt.plot(x, y, 'b+:', label='data')
plt.plot(x, gaus(x, *popt), 'ro:', label='fit')
plt.legend()
plt.show()

PMC points out that a slightly easier way of getting the same value would be to use an equation: variance = n(1-p)p, where n = 1000 and p is 0.5. The standard deviation is then the square root of this (i.e. sqrt of 250), 15.8.
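As a quick sanity check on PMC's shortcut, the arithmetic can be done in a couple of lines (a minimal sketch using the same values as above, n = 1000 fair-coin flips):

```python
import math

# Binomial variance for n independent flips with success probability p
n, p = 1000, 0.5
variance = n * p * (1 - p)     # = 250 for a fair coin
sd = math.sqrt(variance)

print(variance, round(sd, 1))  # 250.0 15.8
```

This agrees with the sigma fitted by curve_fit above to within the simulation noise.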
However, the proper way of handling the original question is a power calculation. Typically power calculations are associated with calculating the necessary sample size. Here I have a fixed sample size (n=1000) and I want to figure out, given that the alternative hypothesis is true (biased coin), what number of heads must I observe to be 99.9% sure (power=0.999) that it would show up as significant at the 0.1% level (alpha=0.001). The y = 0.5 below refers to the proportion of heads expected under the null hypothesis (unbiased coin). Since n=1000, we can use the normal approximation to the binomial distribution.
import math
import scipy.stats

def qnorm(p):
    return scipy.stats.norm.ppf(p)

def solvequadratic(a, b, c):
    return [(-b + math.sqrt((b**2) - (4*(a*c)))) / (2.*a),
            (-b - math.sqrt((b**2) - (4*(a*c)))) / (2.*a)]

if __name__ == "__main__":
    alpha = 0.001 # Significance level
    gamma = 0.999 # Power
    qgamma = qnorm(gamma)
    qalpha = qnorm(1. - (alpha/2.))
    p = (qalpha + qgamma)**2
    n = 1000
    y = 0.5
    a = 2*n + p
    b = (2*p - 4*n)*y - 2*p # Note that if y is 0.5, then b = -2*n-p (i.e. -a)
    c = -2*p*y + (2*n + p)*y*y
    print solvequadratic(a, b, c)

...and the answer? 642. Thanks to ALC for help with this, though all errors are my own.
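The same quadratic can be reproduced on Python 3 without scipy, because the standard library's statistics.NormalDist supplies the normal quantile function. This is a sketch whose variable names simply mirror the code above:

```python
import math
from statistics import NormalDist  # Python 3.8+

alpha = 0.001              # significance level (two-sided)
gamma = 0.999              # desired power
q = NormalDist().inv_cdf   # stdlib equivalent of qnorm

p = (q(1 - alpha / 2) + q(gamma)) ** 2
n, y = 1000, 0.5           # number of flips, null proportion of heads
a = 2 * n + p
b = (2 * p - 4 * n) * y - 2 * p
c = -2 * p * y + (2 * n + p) * y * y

disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
heads_needed = math.ceil(max(roots) * n)
print(heads_needed)        # 642
```

The two roots are symmetric about 0.5 (roughly 0.641 and 0.359), reflecting the two-sided test: a coin biased far enough in either direction would be detected.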
Notes: (added 04/12/2015)
Both the power and the significance level are conditional probabilities:
- The significance level (alpha) is the probability of rejecting the null hypothesis given that it is true, i.e. prob(rejecting null hypothesis | null hypothesis is correct).
- The power (gamma) is the probability of detecting a difference, if indeed the difference does exist, i.e. prob(rejecting null hypothesis | null hypothesis is incorrect).
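Both conditional probabilities can be checked empirically. The sketch below estimates the significance level by simulating many experiments with a fair coin (so the null hypothesis is true) and counting how often it is wrongly rejected; the critical region of roughly ±3.29 standard deviations corresponds to alpha = 0.001, and the seed and trial count are illustrative:

```python
import math
import random

random.seed(7)

n = 1000
z = 3.2905                  # two-sided critical z for alpha = 0.001
sd0 = math.sqrt(n * 0.25)   # sd of the head count under the null (fair coin)
lo, hi = n / 2 - z * sd0, n / 2 + z * sd0

trials = 10000
rejections = 0
for _ in range(trials):
    heads = bin(random.getrandbits(n)).count("1")  # n fair coin flips
    if heads < lo or heads > hi:
        rejections += 1

alpha_hat = rejections / trials
print(alpha_hat)            # should land close to 0.001
```

Running the same loop with a biased coin instead would estimate the power, the other conditional probability.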
How John Got 15x Improvement Without Really Trying
By rchrd on Nov 17, 2011
The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.
How I Got 15x Improvement Without Really Trying
John Feo, Sun Microsystems
Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.
Introduction
Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible.
Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.
Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA.
Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.
Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.
Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.
Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive.
Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.
Table 1 — Area of improvement and performance gains of 10 codes
The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.
Optimizing cache performance
Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do.
When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 6 out of the 10 codes studied here benefited from such high level optimizations.
Array Accesses
The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing
do I = 0, 1010, delta_x
   IM = I - delta_x
   IP = I + delta_x
   do J = 5, 995, delta_x
      JM = J - delta_x
      JP = J + delta_x
      T1 = CA1(IP, J) + CA1(I, JP)
      T2 = CA1(IM, J) + CA1(I, JM)
      S1 = T1 + T2 - 4 * CA1(I, J)
      CA(I, J) = CA1(I, J) + D * S1
   end do
end do
In code 2, the culprit is conditionals
do I = 1, N
   do J = 1, N
      If (IFLAG(I,J) .EQ. 0) then
         T1 = Value(I, J-1)
         T2 = Value(I-1, J)
         T3 = Value(I, J)
         T4 = Value(I+1, J)
         T5 = Value(I, J+1)
         Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
         Delta = ABS(T3 - Value(I,J))
         If (Delta .GT. MaxDelta) MaxDelta = Delta
      endif
   enddo
enddo
I fixed both programs by inverting the loops by hand.
Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.
Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is
L1: for i    L2: for i    L3: for i
    for l        for l        for j
    for k        for j        for k
    for j        for k        for l
So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache.
Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.
Array Strides
When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes
do j = 1, GZ
   do i = 1, GZ
      T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
      T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
      S1 = T1 + T4 - 4 * CA1(i+0, j+0)
      CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
   enddo
enddo
where CA and CA1 are compressed arrays of size GZ.
Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.
Data reuse
In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For processors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).
In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4,
do J = 1, GZ-2, 2
   do I = 1, GZ-2, 2
      T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
      T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
      T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
      T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
      T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
      T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
      T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
      S1 = T1 + T4 - 4 * CA1(i+0, j+0)
      S2 = T2 + T5 - 4 * CA1(i+1, j+0)
      S3 = T3 + T6 - 4 * CA1(i+0, j+1)
      S4 = T4 + T7 - 4 * CA1(i+1, j+1)
      CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
      CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
      CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
      CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
   enddo
enddo
The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before
for (k = 0; k < NK[u]; k++) {
    sum = 0.0;
    for (y = 0; y < NY; y++) {
        sum += W[y][u][k] * delta[y];
    }
    backprop[i++] = sum;
}
and after code
for (k = 0; k < KK - 8; k += 8) {
    sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
    sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
    for (y = 0; y < NY; y++) {
        sum0 += W[y][0][k+0] * delta[y];
        sum1 += W[y][0][k+1] * delta[y];
        sum2 += W[y][0][k+2] * delta[y];
        sum3 += W[y][0][k+3] * delta[y];
        sum4 += W[y][0][k+4] * delta[y];
        sum5 += W[y][0][k+5] * delta[y];
        sum6 += W[y][0][k+6] * delta[y];
        sum7 += W[y][0][k+7] * delta[y];
    }
    backprop[k+0] = sum0;
    backprop[k+1] = sum1;
    backprop[k+2] = sum2;
    backprop[k+3] = sum3;
    backprop[k+4] = sum4;
    backprop[k+5] = sum5;
    backprop[k+6] = sum6;
    backprop[k+7] = sum7;
}
for one of the loops unrolled 8 times.
Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.
Reducing instruction count
Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.
The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.
Memory operations
The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.
Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3
for (y = 0; y < NY; y++) {
    i = 0;
    for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += delta[y] * I1[i++];
        }
    }
}
Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. In reality, dW and delta do not overlap in memory, so I rewrote the loop as
for (y = 0; y < NY; y++) {
    i = 0;
    Dy = delta[y];
    for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += Dy * I1[i++];
        }
    }
}
Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays
#define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])
The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define
a0 = MAT4D(a,q,0,j,k)
before the loop and then replace all instances of
*MAT4D(a,q,i,j,k) in the loop with
a0[i]
A similar problem appears in code 6, a Fortran program. The key loop in this program is
do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do
The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.
Data operations
Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 ≤ i < N, 0 ≤ j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.
for (i = 0; i < N; i += 8) {
    for (j = 0; j < M; j++) {
        sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
        sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
        for (k = 0; k < K; k++) {
            sum0 += A[i+0][k] * B[j][k];
            sum1 += A[i+1][k] * B[j][k];
            sum2 += A[i+2][k] * B[j][k];
            sum3 += A[i+3][k] * B[j][k];
            sum4 += A[i+4][k] * B[j][k];
            sum5 += A[i+5][k] * B[j][k];
            sum6 += A[i+6][k] * B[j][k];
            sum7 += A[i+7][k] * B[j][k];
        }
        C[i+0][j] = sum0;
        C[i+1][j] = sum1;
        C[i+2][j] = sum2;
        C[i+3][j] = sum3;
        C[i+4][j] = sum4;
        C[i+5][j] = sum5;
        C[i+6][j] = sum6;
        C[i+7][j] = sum7;
    }
}
This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.
In code 5, we have the data version of the index optimization in code 6. Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.
The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index
for (j = 0; j < N; j++) {
    for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4] * r, 2.0))
                           : low * pow(R / PRM[6], PRM[5]);
        < rest of loop omitted >
    }
}
Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array.
for (i = 0; i < M; i++) {
    r = i * hrmax;
    TEMP[i] = pow(r, PRM[3]);
}
[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is
for (j = 0; j < N; j++) {
    R = rig[j] / 1000.;
    tmp1 = kcoeff * par[2] * beta[j] * par[4];
    tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
    tmp3 = 1.0 + (par[4] * par[4] * R * R);
    tmp4 = par[6] * par[6] / tmp2;
    tmp5 = R * R / tmp3;
    tmp6 = pow(R / par[6], par[5]);
    if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp5;
    } else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp4 * tmp6;
    } else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp5;
    } else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
    }
    for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
    }
}
Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.
Copy operations
Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.
Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code.
Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.
The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.
Optimizing loop structures
Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals or isolate them in their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet
MaxDelta = 0.0
do J = 1, N
   do I = 1, M
      < code omitted >
      Delta = abs(OldValue - NewValue)
      if (Delta > MaxDelta) MaxDelta = Delta
   enddo
enddo
if (MaxDelta .gt. 0.001) goto 200
Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as
MaxDelta = .false.
do J = 1, N
   do I = 1, M
      < code omitted >
      Delta = abs(OldValue - NewValue)
      MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
   enddo
enddo
if (MaxDelta) goto 200
thereby, eliminating the conditional expression from the inner loop.
A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops.
As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays. I found such an occasion in code 10 where I split the loop
into two disjoint loops:

do i = 1, n
   do j = 1, m
      < first group of computations >
   end do
end do

do i = 1, n
   do j = 1, m
      < second group of computations >
   end do
end do
Conclusions
Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers.
Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques that would make their programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books and manuals already available, and are unlikely to do so in the future.
Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization.
I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.
About the Author
John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.
Hello. I've just returned from ELC2007 and I haven't read all posts in this thread yet, but I want to comment on this function.

> In AppArmor, we are interested in pathnames relative to the namespace root.
> This is the same as d_path() except for the root where the search ends. Add
> a function for computing the namespace-relative path.

Yes. You came to the same conclusion as TOMOYO Linux does. Linux uses pathnames relative to the namespace root. You do this using d_path()'s way, but some extensions are needed if you want to use d_namespace_path() for access control/auditing purposes.

In Linux, all characters other than NULL can be used in a pathname. This means that you can't assume that whitespaces are delimiters. For example, when you process entries in "Access ..... granted/rejected\n" format (where ..... is a pathname and \n is a carriage return, like "Access /bin/ls granted\n"), an entry "Access /bin/ls granted\nAccess /bin/cat granted\n" can be produced if ..... is "/bin/ls granted\nAccess /bin/cat". Processing such an entry will produce a wrong result.

Also, you want wildcards (usually "*") when doing pathname comparison, but there are files whose names contain wildcard characters (for example, "/usr/share/guile/1.6/ice-9/and-let*.scm" in CentOS 4.4). You need to escape them so that you can tell whether "*" indicates a literal "*" or a wildcard.

Also, in non-English regions, characters that are out of the ASCII printable range are included in pathnames (for example, files created via Samba from Windows clients). Some programs can't handle characters that have the MSB set, so you may want to represent all characters without using the MSB.

It may be OK if you use d_namespace_path() for processing a userland configuration file, but it is not OK if you use it for processing a kernel configuration file. The kernel has to be able to handle any characters. So, you may want a customized version of d_namespace_path()?
Generates books based on other books using nltk
Project description
bookgen
A python library using nltk to analyse two books and generate a new one.
Installation
pip install bookgen
Usage
from bookgen import BookGen

book = BookGen("word_base_book.txt", "sentence_base_book.txt")
# book.download() will download the nltk extras required, only needed once
print(book.run())
Explanation
BookGen will parse word classes from the first specified book, looking like this:
{"NOUN": ["Mountain", "Valley"], "VERB": ["take", "went"]}
These are sorted by the nltk universal tagset.
The second book serves as sentence base. It will be parsed into a list of word types that represent the whole book.
["NOUN", "VERB", "PREP", "NOUN", "CONJ", "VERB", "."]
Then, it generates a list of words from the words of the first book based on the second book.
["Nathan", "went", "to", "Valley", "and", "peed", "."]
This is joined with some capitalization fixes and returned.
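The pipeline above can be sketched in pure Python with toy data standing in for the nltk-tagged books (the function and variable names here are illustrative, not bookgen's actual internals):

```python
import random

# Toy stand-ins for the two parsing steps described above.
word_classes = {
    "NOUN": ["Nathan", "Valley"], "VERB": ["went", "peed"],
    "PREP": ["to"], "CONJ": ["and"], ".": ["."],
}
skeleton = ["NOUN", "VERB", "PREP", "NOUN", "CONJ", "VERB", "."]

def generate(word_classes, skeleton, seed=0):
    rng = random.Random(seed)
    # For each tag in the sentence skeleton, pick a word of that class.
    words = [rng.choice(word_classes[tag]) for tag in skeleton]
    text = " ".join(words)
    # Capitalization/punctuation fixes: capitalize the first word and
    # attach sentence-final punctuation to the preceding word.
    text = text[0].upper() + text[1:]
    return text.replace(" .", ".")

print(generate(word_classes, skeleton))
```

The real library builds word_classes and skeleton with nltk's universal tagset instead of hard-coded toy data.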
On Sat, Jan 8, 2011 at 7:06 PM, Ron Adam rrr@ronadam.com wrote:
> On 01/06/2011 09:28 PM, Nick Coghlan wrote:
>> My original suggestion was along those lines, but I've come to the
>> conclusion that it isn't sufficiently granular - when existing code
>> tinkers with "__module__" it tends to do it at the object level rather
>> than by modifying __name__ in the module globals.
>
> What do you mean by tinkers with "__module__" ?
>
> Do you have an example where/when that is needed?
>>> from inspect import getsource
>>> from functools import partial
>>> partial.__module__
'functools'
>>> getsource(partial)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/inspect.py", line 689, in getsource
    lines, lnum = getsourcelines(object)
  File "/usr/lib/python2.6/inspect.py", line 678, in getsourcelines
    lines, lnum = findsource(object)
  File "/usr/lib/python2.6/inspect.py", line 552, in findsource
    raise IOError('could not find class definition')
IOError: could not find class definition
partial is actually implemented in C in the _functools module, hence the failure of the getsource call. However, it officially lives in functools for pickling purposes (other implementations aren't obliged to provide _functools at all), so __module__ is adjusted appropriately.
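The payoff of the override is serialisation portability. A quick demonstration (runs under CPython 3, where partial instances are picklable):

```python
import functools
import pickle

# partial is implemented in C in _functools, but its __module__ reports
# "functools" -- the official, portable location.
assert functools.partial.__module__ == "functools"

# pickle records that portable name in the serialized bytes, so another
# implementation only needs a functools module, not a _functools one.
p = functools.partial(int, "101", base=2)
data = pickle.dumps(p)
assert b"functools" in data

# Round-tripping works because unpickling imports functools.partial by name.
assert pickle.loads(data)() == 5
```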
The other examples I have been using are the _datetime C acceleration module and the unittest pseudo-package.
> If __import_name__ is going to match __module__ everywhere else, why not
> just call it __module__ everywhere?
Because the module level attributes for identifying the module don't serve the same purpose as the attributes identifying where functions and classes are defined. That said, calling it "__module__" would probably work, and make the naming logic a bit more intuitive. The precedent for that attribute name to refer to a string rather than a module object was set a long time ago, after all.
> Would __package__ be changed in any way?
To look for __module__ before checking __name__? No, since doing that would make it unnecessarily difficult to use relative imports inside pseudo-packages.
> So we will have: __package__, __module__, __import_name__, __impl_name__,
> and if you also include __file__ and __path__, that makes six different
> attributes for describing where something came from.
>
> I don't know about you, but this bothers me a bit. :-/
It bothers me a lot, since I probably could have avoided at least some of it by expanding the scope of PEP 366. However, it does help to split them out into the different contexts and look at how each of them are used, since it makes it clear that there are a lot of attributes because there is a fair bit of information that is used in different ways.
Module level attributes relating to location in the external environment:

  __file__: typically refers to a source file, but is not required to (see PEP 302)
  __path__: package attribute used to identify the directory (or directories) searched for submodules
  __loader__: PEP 302 loader reference (may not exist for ordinary filesystem imports)
  __cached__: if it exists, refers to a compiled bytecode file (see PEP 3149)
It is important to understand that ever since PEP 302, there is no loader independent mapping between any of these external environment related attributes and the module namespace. Some Python standard library code (i.e. multiprocessing) currently assumes such a mapping exists and it is broken on windows right now as a direct result of that incorrect assumption (other code explicitly disclaims support for PEP 302 loaded modules and only works with actual files and directories).
Module level attributes relating to location within the module namespace:

  __name__: actual name of the current module in the current interpreter instance. Best choice for introspection of the current interpreter.
  __module__ (new): "official" portable name for the module contents (components should never include leading underscores). Best choice for information that should be portable to other interpreters (e.g. for pickling and other serialisation formats).
  __package__: optional attribute used specifically to control handling of relative imports. May be explicitly set (e.g. by runpy); otherwise implicitly set to "__name__.rpartition('.')[0]" by the first relative import.
Most of the time, __name__ is consistent across all 3 use cases, in which case __package__ and __import_name__ are redundant. However, when __name__ is wrong for some reason (e.g. including an implementation detail, or adjusted to "__main__" for execution as a script), then __package__ allows relative imports to be fixed, while __import_name__ will allow pickling and other operations that should hide implementation details to be fixed.
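A concrete illustration (my example, not from the thread) of why serialisation cares about the "official" name: pickle records obj.__module__ and re-imports that module on load, so a name that only resolves in one interpreter breaks the pickle. The "fake_pkg" module below is a throwaway stand-in built just for the demonstration:

```python
# Sketch: pickle stores the class's __module__ and re-imports it on load.
import pickle
import sys
import types

# Build a throwaway module to act as the "official" location.
mod = types.ModuleType("fake_pkg")

class Point:
    pass

Point.__module__ = "fake_pkg"    # the portable name pickle will record
Point.__qualname__ = "Point"
mod.Point = Point
sys.modules["fake_pkg"] = mod

restored = pickle.loads(pickle.dumps(Point()))
print(type(restored).__module__)         # fake_pkg

# Once the "official" module can no longer be imported, pickling fails:
del sys.modules["fake_pkg"]
try:
    pickle.dumps(Point())
except pickle.PicklingError:
    print("pickling broke once the official name stopped resolving")
```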
Object level attributes relating to the location of class and function definitions:

  __module__ (updated): refers to __module__ from the originating module (if defined), and to __name__ otherwise
  __impl_module__ (new): refers to __name__ from the originating module
Looking at that write-up, I do quite like the idea of reusing __module__ for the new module level attribute.
The basic problem is that __module__ currently tries to serve two masters:

  1. introspection: reporting the module where the object is actually defined in the current interpreter, and
  2. serialisation: reporting the "official" portable name that pickle and other tools should use to locate the object again in another interpreter.
Currently, the default behaviour of the interpreter is to support use case 1 and break use case 2 if any objects are defined in a different module from where they claim to live (e.g. see the pickle compatibility breakage with the 3.2 unittest implementation layout changes). The only tool currently available to module authors is to override __module__ (as functools.partial and the datetime acceleration module do), which is correct for use case 2, but breaks use case 1 (leading to misleading error messages in the C acceleration module case, and breaking otherwise valid introspection in the unittest case).
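For instance (my example, not from the thread), functools.partial is one of the C-accelerated objects that overrides __module__ in exactly this way — it is really defined in the C module "_functools", but advertises the stable "functools" name so pickles stay portable:

```python
# The status quo: the C accelerator hides where the object really lives.
import functools
import _functools

print(functools.partial.__module__)              # functools
print(_functools.partial is functools.partial)   # True: one object, two names

# The price paid for portable pickling: introspection can no longer tell
# you the implementation module -- the proposed __impl_module__ attribute
# is intended to carry that answer.
```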
My proposed changes will: a) make overriding __module__ significantly easier to do b) allow the introspection use cases access to the information they need so they can do the right thing when confronted with an overridden __module__ attribute
Does this fit some of problems you are thinking of where the granularity may matter?
It would take two functions to do this. One to create the virtual module, and another to pre-load its initial objects. For those objects, the loader would set obj.__module__ to the virtual module name, and also set obj.__original_module__ to the original module name. These would only be seen on objects in virtual modules. A lookup on obj.__module__ will tell you it's in a virtual module. Then a lookup with obj.__original_module__ would give you the actual location info it came from.
That adds a lot of complexity though - far simpler to define a new __impl_module__ attribute on every object, retroactively fixing introspection of existing code that adjusts __module__ to make pickling work properly across different versions and implementations.
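A hypothetical sketch of the lookup rule introspection tools could apply under that proposal — nothing here exists in the stdlib; __impl_module__ is the name from the thread:

```python
# Hypothetical: prefer the implementation module when present, fall back
# to the portable __module__ name otherwise.
def where_defined(obj):
    return getattr(obj, "__impl_module__", None) or getattr(obj, "__module__", None)

import functools

# Today __impl_module__ doesn't exist, so we get the portable name; under
# the proposal this would report "_functools" for the C implementation.
print(where_defined(functools.partial))   # functools
```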
By doing it that way, most people will never need to know how these things work or even see them, i.e. it's advanced/expert Python foo. ;-)
Most people will never need to care or worry about the difference between __module__ and __impl_module__ either - it will be hidden inside libraries like inspect, pydoc and pickle.
Any way, I hope this gives you some ideas, I know you can figure out the details much better than I can.
Yeah, the idea of reusing the __module__ attribute name at the top level is an excellent one.
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia | https://mail.python.org/archives/list/python-ideas@python.org/message/FCYVGIVJWFDJEKCBH2FZAVT2P4NWQ7BJ/ | CC-MAIN-2021-04 | refinedweb | 1,191 | 52.6 |
    from fastai.vision.all import *
We're going to use the MNIST training code from the official PyTorch examples, slightly reformatted for space, updated from AdaDelta to AdamW, and converted from a script to a module. There's a lot of code, so we've put it into migrating_pytorch.py!
from migrating_pytorch import *
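The imported module contains the usual hand-written training loop. A generic sketch of its shape (not the exact code from migrating_pytorch.py) looks like this:

```python
import torch
import torch.nn.functional as F

def train_one_epoch(model, loader, opt, device="cpu"):
    # The manual loop fastai's Learner replaces: forward pass, loss,
    # backward pass, optimizer step, gradient reset -- for every batch.
    model.train()
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)
        loss = F.nll_loss(model(xb), yb)
        loss.backward()
        opt.step()
        opt.zero_grad()
```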
We can entirely replace the custom training loop with fastai's. That means you can get rid of train(), test(), and the epoch loop in the original code, and replace it all with just this:
    data = DataLoaders(train_loader, test_loader)
    learn = Learner(data, Net(), loss_func=F.nll_loss, opt_func=Adam,
                    metrics=accuracy, cbs=CudaCallback)
We also added CudaCallback to have the model and data moved to the GPU for us. Alternatively, you can use the fastai DataLoader, which provides a superset of the functionality of PyTorch's (with the same API), and can handle moving data to the GPU for us (see migrating_ignite.ipynb for an example of this approach).
fastai supports many schedulers. We recommend fitting with 1cycle training:
    learn.fit_one_cycle(epochs, lr)
As you can see, migrating from pure PyTorch allows you to remove a lot of code, and doesn't require you to change any of your existing data pipelines, optimizers, loss functions, models, etc.
Once you've made this change, you can then benefit from fastai's rich set of callbacks, transforms, visualizations, and so forth.
Note that fastai. | https://docs.fast.ai/migrating_pytorch | CC-MAIN-2020-50 | refinedweb | 234 | 53.41 |