I have an ASP.NET web service that I connect to using jQuery from my ASP.NET website. Everything works perfectly if the user accesses my website with www. But if the user uses only domain.com (without www) I get this error:
Hi All, I have a WCF REST service which receives an image file as a byte array. It works fine if the client is .NET. If the client is an iPhone (Objective-C), it reports the following error: "The incoming message has an unexpected message format 'Raw'. The expected message formats for the operation are 'Xml', 'Json'. This can be because a WebContentTypeMapper has not been configured on the binding. See the documentation of WebContentTypeMapper for more details." The service is hosted on a secure site in IIS. Please help me resolve the issue. My method is:
public bool ImageData(byte[] imgfile)
{
    Stream imgStream = new MemoryStream(imgfile, 0, imgfile.Length);
    Bitmap bmpImage = new Bitmap(imgStream);
    bmpImage.Save(@"C:\1.jpeg", System.Drawing.Imaging.ImageFormat.Jpeg);
    return true;
}
We have developed a web service and deployed it in our virtual directory. It works fine.
But the problem is on the other end: while they are accessing our database through the web service, this error comes up:
soap fault: Server was unable to process request. ---> Object reference not set to an instance of an object
I have tested the example shown on MSDN for accessing a SharePoint List Web Service.
The build is OK; however, I get the following error inside the try block:
Client found response content type of 'text/html; charset=utf-8', but expected 'text/xml'.
Any ideas?
I'm attempting to query the OrganizationData.svc web service from a SharePoint EventReceiver but don't seem to have the appropriate credentials. The thing is, if I attempt to access it via a console app, my credentials work fine. I expect that since the EventReceiver is making the call in scenario 2, the identity is that of the process running SharePoint, but for some reason the credentials provided don't work. Is there an approach that works? Thanks
The wss_minimaltrust.config change I made was
<IPermission class="WebPermission" version="1" Unrestricted="true">
</IPermission> as per
Scenario 1: Console App
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace OrgDataCo
struts2 Autocompleter
struts2 Autocompleter hi,
I am working through the Auto Completer example on RoseIndia.net,
but an error occurred:
"No tag "autocompleter""
Autocompleter bugs
Autocompleter bugs when I type in the combobox, nothing happens...
Hi Friend,
Please visit the following link:
Struts Autocomplete Example
Thanks
struts2
struts2 how do I read a properties file in a JSP of Struts2?
Hi... see the complete example at Read the Key-Value of a Property File in Java.
Thanks
I want the properties file values in the JSP, not in the action class... joshuajava.wordpress.com/2008/12/27/creating-custom-components-with-struts-2/
I made an example... the body if the name attribute is present; how do I do that?
public class
How to get the request scope values? - Struts
How to get the request scope values? Get value in Struts
Captcha in Struts2 Application
Everything worked fine. The only thing I didn't get is the captcha.jpg image. How do I get the image?
Thanks in advance,
Raju Rudru.
Hi,
Here is an example... Captcha in Struts2 Application: Hi,
I am working with the code
Generating dynamic fields in struts2
Generating dynamic fields in struts2 Hi,
I want to generate a web page which should have some Struts 2 tags in a group and a "[+]" button.
How do I read those field values in the controller? Please provide me some example.
how to prepopulate data in struts2 - Struts
how to prepopulate data in struts2 I wanted to show data from the database using Struts.
Struts2 connection pooling - Struts
Struts2 connection pooling Dear friends,
how do I set up connection pooling in Struts2?
dwr with struts2 - Struts
dwr with struts2 Can you help me with how to use DWR with Struts2?
question
session.setAttribute("id", id);
session.setAttribute("name", name);
I didn't get this to work properly; I could get only the last result from the ResultSet. Please correct this:
how do I set a group of ResultSet values into the session?
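The bug described above is the classic pattern of assigning to the same variable on every pass of the loop, so only the final row survives. A minimal sketch of the fix, collecting every row into a List and storing that list in the session once; the rows array stands in for a real ResultSet, and all names here are made up:

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Collect one column from every row instead of overwriting a variable;
    // in real code the rows would come from rs.next()/rs.getString(...)
    public static List<String> collectIds(String[][] rows) {
        List<String> ids = new ArrayList<>();
        for (String[] row : rows) {
            ids.add(row[0]); // keep every row, not just the last one
        }
        return ids;
    }

    public static void main(String[] args) {
        String[][] rows = { {"100", "john"}, {"101", "mary"}, {"102", "alex"} };
        List<String> ids = collectIds(rows);
        // Then store the whole list once: session.setAttribute("ids", ids);
        System.out.println(ids);
    }
}
```

The session then holds the complete group of values under a single attribute name.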
Struts2 UI - Struts
Struts2 UI Can you please provide me with some examples of how to do a multi-column layout in JSP (using STRUTS2) ? Thanks
Struts session question
Struts session question how do I set and get a session in Struts 1.3? Please help. Thanks.
Struts2 Actions
However, with Struts 2 actions you can get different return types other than... any object parameter. Here one important
issue arises: how do you get access...
form values in java script - Struts
form values in java script how do I get form values in JavaScript functions with Struts 1?
Struts2 Tiles Example
The following are the steps for the Struts tiles plugin:
1. ...
<!DOCTYPE struts PUBLIC
"-//Apache Software Foundation//DTD Struts Configuration...
<struts>
<package name="default" extends=...
Struts2 and Hibernate Sir/Madam,
can we use the iterator tag in Struts to fetch database values and show them on a form? If yes, then how?
Struts2 ajax validation example.
In this example, you will see how to validate a login through Ajax in struts2.
1 - index.jsp
<html>
<%@ taglib uri="/struts-tags" prefix="s" %>
...
Struts2 Tags
... index.jsp to the address bar.
Auto Completer Example
The autocompleter... Struts2 Tags
Apache Struts is an open-source framework used to develop Java web applications.
Struts2 Internationalization Hi
How to use i18n functionality for indian languages in struts2 ?
I am able to use french and english but none... the following links:
how to display the values of one list in other upon clicking a button in struts2
how to display the values of one list in another upon clicking a button in struts2 Hello friends, I am new to struts2. Please can anyone guide me in struts2?
I have a problem: I have to display the values of one list in another.
how do I get the values from dynamically generated textboxes in java? I... need to get and update these textbox values into both tables (Xray, CTScan)... each textbox corresponds to the data.
I want to get data from the textboxes (generated
How to get a values - JSP-Servlet
How to get a values Dear sir,
I have a form with multipart/form-data as follows; I am getting a null value apart from the attached file, so how do I get the other values?
To
From
Subject
themes in struts2
themes in struts2 I want to create themes in Struts2. Can anyone tell me the step-by-step procedure for creating them, with example links?
thanks in advance
Java Get Example
...what the get method is and how to use the get method in Java; this example is going...
In the example given below we will learn how to get the name of a particular class...
How to get IP Example
This example shows...
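A small sketch of the class-name lookup mentioned above, using getClass().getName(); the helper name classNameOf is made up:

```java
public class Main {
    // getClass() returns the runtime class of any object;
    // getName() yields its fully qualified name
    public static String classNameOf(Object o) {
        return o.getClass().getName();
    }

    public static void main(String[] args) {
        System.out.println(classNameOf("hello")); // java.lang.String
    }
}
```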
Auto Completer Example
In this section, we are going to describe the autocompleter tag. The autocompleter tag always... options shown in the dropdown
list. The autocompleter tag generates two input...
Integrate Hibernate to struts2.
Integrate Hibernate to struts2. How do I integrate Struts and Hibernate?
Java Swing Set And Get Values
In this tutorial we will learn how to set and get values using setter and getter methods. This example explains how... a simple example which will demonstrate how to use
setters and getters.
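The setter/getter pattern described above can be sketched with a plain bean (no Swing is needed for the core idea); the Person class and its field are made up for illustration:

```java
public class Main {
    // A plain bean: private field, public getter and setter
    public static class Person {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Person p = new Person();
        p.setName("Raju");                // setter stores the value
        System.out.println(p.getName());  // getter reads it back, e.g. into a text field
    }
}
```

In a Swing form the same pair would be backed by calls like textField.setText(...) and textField.getText().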
Deployment of your example - Struts
Deployment of your example In your Struts2 tutorial, can you show how you would package it as a WAR so it can be deployed into JBoss? There seem to be a lot of unnecessary files in it, or am I mistaken?
question for "get method"
question for "get method" when I want to write a "get" method for a name or any char or string, how do I write the syntax? And what does it return? - Framework
struts2 I am a beginner with struts2. I tried the HelloWorld program from... is:
how do I resolve this issue? Please help me. Thanks. Hi,
Please download the example code from
How to get Keys and Values from HashMap in Java?
How to get Keys and Values from HashMap in Java? An example program of iterating... how to iterate the keys, get the values, and print them on the console. How to get keys... how to iterate through the keys and get the values. So, let's see how to iterate.
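A minimal sketch of iterating a HashMap's keys and values via entrySet(); the map contents and the sumValues helper are made up:

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // entrySet() gives key and value together, avoiding a second lookup per key
    public static int sumValues(Map<String, Integer> map) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            System.out.println(e.getKey() + " = " + e.getValue());
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("one", 1);
        map.put("two", 2);
        System.out.println("sum: " + sumValues(map));
    }
}
```

When only the keys are needed, map.keySet() works the same way; note that HashMap iteration order is unspecified.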
question
question how do I retrieve data for a particular user from the database after login, using a JSP + Hibernate + Struts configuration?
Please go through the following links:
struts2 properties file
struts2 properties file How do I set up a properties file in Struts 2?
Struts 2 Format Examples
Struts 2 Tutorial
Get Month Name Example
In this example we describe code that helps you understand how to get a month's name. For this we have a class...
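One way to get a month's name, as the section above describes, is java.text.DateFormatSymbols, where January is index 0 (the zero-based convention of java.util.Calendar); the helper name monthName is made up:

```java
import java.text.DateFormatSymbols;
import java.util.Locale;

public class Main {
    // getMonths() returns the localized month names; January is index 0
    public static String monthName(int zeroBasedMonth) {
        return new DateFormatSymbols(Locale.ENGLISH).getMonths()[zeroBasedMonth];
    }

    public static void main(String[] args) {
        System.out.println(monthName(0)); // January
    }
}
```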
...how to develop a simple Hello World example in Struts 2.1.8. You will also learn how...
Struts 2.1.8 Hello World Example
...example using the latest Struts 2.1.8. We will use the properties files
struts2 - Framework
struts2 thanks ranikanta,
I downloaded the example from the link below. I am... is:
I have been trying for 4 days but didn't get any result. Please help me.
Thanks
struts2 - Struts
struts2 hello, I am trying to create a Struts 2 application that
allows you to upload and download files from your server. It has been challenging for me; can someone help? Hi Friend,
Please visit the following
Get All Keys and Values of the Properties files in Java
...how to get all the keys and their values from properties files in Java. Java provides... to be inserted in the
properties files.
Here, you will get all the keys and values.
Struts2 Spring Hibernate integration How do I integrate Struts2, Spring 2.5, and Hibernate 3 in a web application project? Could anyone give some example?
struts - Struts
struts how do I handle multiple submit buttons in a single JSP page of a Struts application? Hi friend,
Code to help in solving the problem:
In the code below there are two submit buttons with their values
question
question Good morning,
how do I get the month name for January in MySQL?
get values from Excel to database
get values from Excel to database hi, I want to insert values from an Excel file into a database, whatever fields and contents are there in the Excel file... express 2005. How can I do this with Java code?
struts2 excel downloads
struts2 excel downloads hi friend,
how do I set Timestamp (date + time) values into Excel sheet cells?
question
question good afternoon,
I want to check whether the username column values of the employee table are also present in the attendance table; for example, username john is also in attendance. Please give me MySQL + JSP code.
How to get the correct value by calculating double values....
How to get the correct value by calculating double values.... Hello Sir,
I have a method in which I am getting the wrong... and values like 59,142 etc.
Here I am getting the wrong output for the same.
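Wrong-looking results from double arithmetic usually come from binary floating point representation, not from the calculation itself. A hedged sketch of one common remedy, rounding through BigDecimal (the round2 helper and the sample values are made up):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Main {
    // Round a double to two decimal places; BigDecimal.valueOf goes
    // through the double's shortest decimal string form, which matches
    // what users expect to see
    public static String round2(double value) {
        return BigDecimal.valueOf(value).setScale(2, RoundingMode.HALF_UP).toString();
    }

    public static void main(String[] args) {
        double drift = 0.1 + 0.2;           // binary doubles cannot hold 0.1 exactly
        BigDecimal exact = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(drift);          // shows the floating point drift
        System.out.println(exact);          // 0.3 exactly
        System.out.println(round2(drift));  // 0.30
    }
}
```

For money or other exact decimal work, doing all arithmetic in BigDecimal built from Strings avoids the drift entirely.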
question
question how do I get each day's date and time into the datevalue column of a table automatically? + mysql
question
question how do I get each day's date and time into the datevalue column of a table automatically?
question
question how do I get each day's date and time into the datevalue column of a table automatically? + mysql + jsp
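On the Java side, the current date and time can be produced in the text form a MySQL DATETIME column expects using java.time; note that inside MySQL itself, a TIMESTAMP column declared with DEFAULT CURRENT_TIMESTAMP fills in automatically without any application code. The stamp helper is made up:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class Main {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Format a date-time as "yyyy-MM-dd HH:mm:ss"; when inserting via
    // JDBC you would normally bind a java.sql.Timestamp instead
    public static String stamp(LocalDateTime t) {
        return FMT.format(t);
    }

    public static void main(String[] args) {
        System.out.println(stamp(LocalDateTime.now()));
    }
}
```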
question
question good afternoon,
how do I send values retrieved from the database to a user-interface text box via response.sendRedirect? Or is there any option to redirect selected values from the database back to the same user interface the user was on? Please
question
question I am using the following but couldn't get it working correctly. Please give me the correct one; I used GregorianCalendar.
if (hour <= 13...,morning) values(?,?,curdate(),?,'L',curtime
question
question SELECT username, emp_id from attendance where date='today'
I couldn't get this code working; it shows rows 0 - 3, but I have a column with values john 100 2011-06-28 11:59:36.
I don't know what the actual problem is. Please
HANA Native, Create Database Artifacts Using Core Data Services (CDS)
You will learn
- How to use core data services to create simple database entities
- How to define database-agnostic artifacts in the persistence module
The Cloud Application Programming model utilizes core data services to define artifacts in the database module. Because this model is meant to be database-agnostic – i.e., work with any database – it does not allow you to leverage features that are specific to SAP HANA. For this reason, you will create two tables that do not require any advanced data types.
In the
db module, you will find a file called
data-model.cds. Right-click on it and choose Rename.
Use the following name:
interactions.cds
Double-click on the file to open it.
Replace the default contents with the following:
namespace app.interactions;
using { Country } from '@sap/cds/common';

type BusinessKey : String(10);
type SDate : DateTime;
type LText : String(1024);

entity Interactions_Header {
  key ID : Integer;
  ITEMS : Association to many ...
}
What is going on?
You are declaring two entities with an association to each other. The design-time artifacts declared in this file will be converted to run-time, physical artifacts in the database. In this example, the entities will become tables.
Locate the other
cds file (this file may be called
cat-service.cds or
my-service.cds, depending on the IDE) in the
srv folder and rename it.
Use the following name:
interaction_srv.cds
Double-click to open it and replace the existing content with the following:
using app.interactions from '../db/interactions';

service CatalogService {
  entity Interactions_Header @readonly as projection on interactions.Interactions_Header;
  entity Interactions_Items @readonly as projection on interactions.Interactions_Items;
}
Click Save all.
What is going on?
You are declaring services to expose the database entities you declared in the previous step.
Open the package.json file in the root of your project. Add a section to the
cds configuration to include the following:
"hana": {
  "deploy-format": "hdbtable"
}
Right-click on the CDS declaration of the services and choose Build > Build CDS.
Look at the console to see the progress. You can scroll up to see what has been built.
If you pay attention to the build log in the console, you will see the
CDS artifacts were converted to
hdbtable and
hdbview artifacts. You will find those artifacts in a new folder under
src called
gen.
You will now convert those SAP HANA-specific CDS files into runtime objects (tables). Right-click on the database module and choose Build.
or for the HANA Cloud trial:
You can also check the resources in your space using the resource manager in SAP Web IDE:
You can find a similar example and further context on Core Data and Services in this explanatory video
You can now check the generated tables and views in the database explorer. Right-click on the database module and select Open HDI Container.
Once open, navigate to the
Tables section and double-click on the
Header table.
Note the name of the table matches the generated
hdbtable artifacts..
Download the header file and the items file into your local file system.
Right-click again on the header table and choose Import Data.
Browse for the
Header file and click Step 2.
Keep the default table mapping and click Step 3.
Click Show Import Summary.
And then Import into Database.
You will see confirmation that 3 records have imported successfully.
Repeat the process with the
Items.csv file into the
Items table.
You can now check the data loaded into the tables. Right-click on the
Items table and click Generate Select Statement.
Add the following WHERE clause to the SELECT statement and execute it to complete the validation below.
where "LOGTEXT" like '%happy%'; | https://developers.sap.com/tutorials/xsa-cap-create-database-cds.html | CC-MAIN-2020-40 | refinedweb | 612 | 58.99 |
Yes. In fact, it makes it easier because you can create a kernel option for your scheduler, and instead of replacing the NATA code you can conditionalize it. The way that works is you add your option name to /usr/src/sys/conf/options and specify an option header file to include, such as this:

DISKSCHED_THACKER opt_disksched.h

Then in the NATA driver you do:

#include "opt_disksched.h"
....
#ifdef DISKSCHED_THACKER
... calls to your API ...
#else
... original nata queueing code ...
#endif

This allows people to select which scheduler they want to use when they compile their kernel.

:> :> This would allow you to test your scheduler with a vkernel by also having
:> the VKD driver call it (/usr/src/sys/dev/virtual/disk).
:
:Thanks for those tips!
:Nirmal

'man vkernel' on a DragonFly box (e.g. on your leaf account). It is specific to DragonFly. Other BSDs have to use a machine emulator. Using a machine emulator works too, it just isn't as convenient. Basically, whenever you make major modifications to a kernel, simply rebooting into that kernel can result in a very long development cycle because any bug in your code is likely to crash the kernel (and bugs in filesystem/block-device related code can result in a corrupt disk, as well). So it is best to do all major testing of such code in a virtual environment before trying it out on a kernel running on real hardware.

-Matt
Matthew Dillon <dillon@backplane.com>
We usually come across these terms: Process, Application Domain or App Domain, Assemblies. It is good to have a basic knowledge of these terms and how each one is related to each other.
Process
A process is an operating system concept, and it is the smallest unit of isolation provided by the Windows OS. Every application or piece of code that runs in Windows runs within the boundary of a process. Process isolation makes sure that the failure of one process does not affect the functioning of another. It is also necessary to ensure that code running in one application cannot adversely affect other applications.
When you run an application, Windows creates a process for the application with a specific process id and other attributes. Each process is allocated with necessary memory and set of resources.
Every Windows process contains at least one thread, which takes care of the application's execution. A process can have many threads; they speed up execution and give more responsiveness, but a process that contains a single primary thread of execution is considered to be more thread-safe.
In Figure 1, you can see how the processes running on the machine are listed. Each process has a name, an ID, a description, etc., and each process can be uniquely identified by its process ID.
The System.Diagnostics namespace provides a number of classes to deal with processes, including the Process class. Figure 1
Let us try a C# application to demonstrate working with processes. The code in Listing 1 explains two things: how to start a process, and how to kill/terminate a process.
Listing 1
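As a rough cross-language illustration of the two operations the listing covers (starting a process and then killing it), here is a sketch using Java's ProcessBuilder rather than the article's .NET Process class; the sleep command assumes a Unix-like system, and the startAndKill helper is made up:

```java
import java.util.concurrent.TimeUnit;

public class Main {
    // Start a child process, ask it to terminate, and report whether
    // it actually exited; exceptions are rethrown unchecked
    public static boolean startAndKill(String... command) {
        try {
            Process child = new ProcessBuilder(command).start();
            child.destroy();                     // request termination
            child.waitFor(5, TimeUnit.SECONDS);  // wait briefly for exit
            return !child.isAlive();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Launch a long-running process, then kill it
        System.out.println("terminated: " + startAndKill("sleep", "30"));
    }
}
```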
There are many other useful properties available on the Process class; please check them out.
Application Domain
An application domain, or AppDomain, is one of the most powerful features of the .NET Framework. An AppDomain can be considered a lightweight process. The application domain is a .NET concept, whereas the process is an operating-system concept. Both have many characteristics in common. A process can contain multiple AppDomains, and each one provides a unit of isolation within that process. Most importantly, only applications written in .NET (in other words, only managed applications) have application domains.
Application domains provide isolation between code running in different AppDomains. An AppDomain is a logical container for code and data, just like a process, and has a separate memory space and access to resources. An AppDomain also serves as a boundary, like a process does, to prevent any accidental or illegal attempt to access the data of an object in one running application from another.
The System.AppDomain class provides ways to deal with application domains. It provides methods to create a new application domain, unload a domain from memory, etc.
Why do we need the AppDomain when we have the Process?
This is the first question which came to my mind while I first heard about application domains.
A process is much heavier and more expensive to create and maintain, so a server would have a tough time managing many processes. When Microsoft developed the .NET Framework, they introduced the concept of the application domain, and it is an integral part of the framework.
While AppDomains still offer many of the features that a Windows process offers, they are more efficient: multiple assemblies can be run in separate application domains without the overhead of launching separate processes. An AppDomain is relatively cheap to create and has relatively little overhead to maintain compared with a process.
Think about a server which hosts hundreds of applications. Before the application domain was introduced, each running application created one process. AppDomains are a great advantage for ISPs (Internet service providers) who host hundreds of applications, because each application can be contained in an application domain and a process can contain many such AppDomains, thus providing a lot of cost saving.
In short, any process running a managed application will have at least one app domain in it. Since AppDomain is a .NET concept, any process running unmanaged code will not have any application domain.
Figure 2 will help you understand the concept better. Figure 2
Process A runs managed code with one application domain while Process B runs managed code has three application domains. Note that Process C which runs unmanaged code has no application domain.
Code and data are safely isolated by the boundary provided by the AppDomain. If two AppDomains want to communicate with each other or pass objects, .NET techniques such as Remoting or Web Services should be used.
Application Domain Example
Create a Console Application
I am using Visual Studio 2017 for this demo.
File -> New -> Project
From the left pane, Visual C# -> Windows classic desktop
On the right pane, choose Console App.
Alternatively, you can type console app in the search box which is located at the top right corner of the window and choose the type of solution you want to create.
There will be Console App (.NET Framework) and Console App (.NET Core).
If you are going to use the application only on Windows, choose the .NET Framework one.
.NET Core project will help you create a command-line application that can work on .NET Core on Windows, Linux and MacOS.
For our purpose, .NET framework one is enough.
There is a Visual C# version and a Visual Basic version for the above two types of application, choose the right one based on which language you are going to use.
I am going to use C# one instead of VB one.
Name the project as AppDomainSample and click OK. Figure 3
Add the code shown in Listing 2 to the Program class and build the application.
I will show you how to create an application domain.
Listing 2
You can see in Figure 4 that an application domain is created by default and the code is running under that AppDomain. Figure 4
To create a new app domain, just add the following line of code:
AppDomain.CreateDomain("NewAppDomain");
It will create an application domain with the name NewAppDomain.
Assemblies
As per Microsoft: "An assembly is a collection of types and resources that forms a logical unit of functionality. All types in the .NET Framework must exist in assemblies; the common language runtime does not support types outside of assemblies. An assembly is the unit at which security permissions are requested and granted."
In simpler terms, when you create a console application, a Windows Forms application, a class library, or another type of application in .NET, you are creating an assembly. An assembly can be an .exe or a .dll: a console application produces an .exe, while a class library produces a .dll.
When you add a reference to another project or a .dll in your application, you are loading an assembly into your project. This is static assembly binding, since you know which assembly is to be loaded before compilation. So it is easy for you to create objects of the classes in those assemblies and to use their methods.
There are situations when you will have to load an assembly dynamically at run time. The System.Reflection.Assembly class provides different ways to load assemblies: Assembly.Load(), Assembly.LoadFrom(), and Assembly.LoadFile().
Use these methods to load an assembly, into the current application domain. Assembly Loading Example
I am going to create another console application.
This time I am leaving the default name as it is.
This will create a new Console Application with name ConsoleApp1. Figure 5
I just added a line of code to the Main method and added GetAssemblyName() as a public method.
Listing 3
Creating an object of a class/type in the dynamically loaded assembly, and dynamically invoking a method with Reflection
Now go back to the AppDomainSample project and add the below code to the Main method.
Listing 4
You need to add a reference to the System.Reflection namespace.
The assembly ConsoleApp1.exe will be loaded into the same application domain that AppDomainSample is running in.
You can verify this by looking at the output of the program, as shown in Figure 6.
The assembly is loaded into the current AppDomain even when you load assemblies statically, i.e., by adding a reference to your project before compilation. Figure 6
Dynamically invoking a static method with Reflection
For invoking a static method, you only need to make a few changes to the code used for invoking an instance method.
You don't need to create an instance of the class.
In this example, you can comment out the line:
While invoking the method, you don't need to pass the object as a parameter, so pass it as null.
So the code will change like:
As we have seen, assemblies are loaded into the current application domain. This may create memory issues when there are large assemblies to load or the number of assemblies increases.
So it is better to unload unwanted assemblies once they have executed.
You can use the Unload method of the AppDomain class to unload the contents (all assemblies) of an application domain.
But once an assembly is loaded into an application domain, you cannot unload that assembly alone from the AppDomain.
To overcome this, we can create a new application domain and load the new assemblies there. Then, after the new assemblies have executed, we can safely unload the new AppDomain.
This approach has two benefits,
Create a new application domain.
NewAppDomain is the name of the AppDomain.
This will load the assembly ConsoleApp1 into the NewAppDomain application domain and start executing it.
After you have finished executing the new assembly, you can unload it using:
To test whether ConsoleApp1 is loaded into the new application domain, go to the ConsoleApp1 project and add the following two lines to the Main method.
Now build the ConsoleApp1 project, then build and run the AppDomainSample project. Figure 7
You can see that the ConsoleApp1 assembly is loaded into the NewAppDomain application domain we created.
The source code is attached to this article.
You can try the other methods available on the Assembly, AppDomain, and Process classes. References
View All | https://www.c-sharpcorner.com/article/understanding-process-application-domain-and-assemblies/ | CC-MAIN-2020-34 | refinedweb | 1,670 | 56.76 |
Uhhhh... So I can't reproduce the issue anymore. It seems 'clean product' wasn't enough to clean my build folder, I had to use a key combination to expose another menu option for 'clean build...
Yeah, this is just extracted out of the section of code that I've been poking and pulling strings on to try and get more info out of. I realize that it does nothing useful in its current state, but...
Yeah I'm completely baffled. Test code, with class names changed:
void ClassX::method() {
MyClass inst;
cout << inst.nodes.size() << endl;
MyClass::test();
}
After poking around a lot more, it seems things are broken right off the bat.
At risk of sounding really dumb:
Do I explicitly need to define constructors that chain down the inheritance...
@cyberfish -
a) Xcode, so I guess LLVM. I haven't been able to repro yet with a smaller program.
b) It's a non-pointer member variable of an instance which is declared as a non-pointer static...
Blah. Yeah it looks like something is definitely broken. empty() returns false, size returns 0, begin() returns a reference to address 0x01, end() returns address 0, and resize(1) gives another...
Given a std::vector<MyClass> named myVec:
// Loop version 1
for (int i = 0; i < myVec.size(); ++i) {
// Do something with myVec[i]
}
// Loop version 2
for...
#include <type_traits>
#include <iostream>
#include <vector>
struct IWriter {};
template<typename T> struct foo {};
template<typename T> class foo2 {};
struct BaseBar {};
struct DerivedBar:...
I was imprecise :) Plat A = iOS, which is fine; Plat B = Android, where app assets are exposed via a read interface on AAssetManager. I could write my own streambuf or whatnot, but this seemed like...
Thanks Elysia,
>>Use the stream operators to ensure it can be read and written properly.
Is there something preventing read/write from operating correctly? I wrote it in this way because I need to...
Many thanks!
So it looks to me like the vector variant isn't actually a template specialization but a... templated overload of the template function?
I'm trying to write some naive binary serialization code and wanted to cut down on repetition of logic for serializing/deserializing nested vectors or other STL containers to reduce the chance of...
There's a nice article about it here, posted Feb. this year:
Nullable<T> vs null
It looks like the C# compiler does special-case this type.
Abachler, IIRC raw sockets on unix allow you access to the ethernet frame level, e.g. before even the MAC address is put on the packet. Wouldn't that indicate level-2 access? This seems suspect,...
Also look into dyndns.org.
Very useful if you don't like memorizing your IP address every time it changes.
LowlyIntern, have you followed up on Codeplug's suggestion?
Good luck.
>>UINT C_Control::StartCryptoPortThread( LPVOID pParam )
Is this declared as a static function? If so, you should have nothing to worry about, because static functions act essentially the same way...
*shrug*
>and that PortThread[3] is a valid value?
Not meaning to be nitpicky, but have you put a breakpoint in the constructor to ensure that the thread creation is not failing? Also, you'll want...
PortThread[3]= AfxBeginThread(StartCryptoPortThread, &Port[3],THREAD_PRIORITY_NORMAL,0,CREATE_SUSPENDED);
You should make sure this is being called from a function somewhere, and you are 100% sure...
So the main issue is scalability then (limited by number of samplers)? Does that mean a Gaussian would be equivalent (except floating point textures) if there were sufficient samplers to cover all...
I did some experimentation finally. I didn't really get any definitive information on what Kawase Bloom actually is, but it seems to me that the general idea is doing multiple passes of almost any...
I still don't know what was wrong. But, I created a new project, and rewrote the damn thing while testing it line by line, and it worked. *shrug*
You could also do a 2D game using 3D graphics. That would let you focus more on the graphics and flashy effects, and less on the actual game logic.
Heh thanks Bubba, I understood this much already ;)
zacs:
GL_SRC_ALPHA, GL_ONE should theoretically be (Cs * As) + Cd, right? In which case I'm pretty sure it should work.
My test code is the...
--- Nicolas Maisonneuve <n.maisonneuve@hotPop.com> wrote:
> hi,
> i would like to code a transformer that is a combination of multiple transformers configured...
> (we check the URI namespace of the generated document, and we configure the xsl, the index
> base, the xmldb base.. see the following sample)
There is work toward this, search on the dev list for "Virtual Pipeline Components".
I do not believe this has been implemented yet, but would be glad to be proved wrong.
In the mean time, you can define a map:resource that contains your transforms.
Then you can call the resource where you wish to have the compound transform performed.
It gives you the same effect of modularity, just with a different syntax.
--Tim Larson
We have recently started the development of a new application for a new client. One of the features requires the user to be able to select words in an input that, under the hood, uses the Select2 tagging system.
Writing a test to validate that this feature works would look something like this:
create_tags_in_database
sign_in user
visit a page
When I type "ta" in my tag_list field
And I choose the tag "tag1" in my tag_list field
And press save
Then my record tag_list should contain "tag1"
Since we are using Rails 5.1 and the latest rspec-rails, we are now able to use system tests with Rspec. And since this particular test relies on JavaScript they are a very good candidate.
While trying to get this test to work, I encountered a few quirks, mainly my created records not being available in my system tests and trying to select something inside select2.
Let’s dive into how to solve them.
Prerequisite
First, you will need to have the latest rspec-rails and capybara versions.
You will need to install 2 new gems in your :test group
gem "chromedriver-helper", group: :test
gem 'selenium-webdriver', group: :test
Now we will create a way to load different drivers, so when we’re running system tests that won’t require JavaScript we will use the fastest rack server, and when we need javascript we will use Selenium chrome headless. Create a new file in spec/support/system/driver.rb:
# spec/support/system/driver.rb
RSpec.configure do |config|
  config.before(:each, type: :system) do
    driven_by :rack_test
  end

  config.before(:each, type: :system, js: true) do
    ActiveRecord::Base.establish_connection
    driven_by :selenium_chrome_headless
  end
end
A new system test file would now look something like this:
require 'rails_helper'

RSpec.describe "Creating a post with tags", type: :system do
end
And if you need JavaScript to be enabled, you will just need to add js: true to the describe block.
require 'rails_helper'

RSpec.describe "Creating a post with tags", type: :system, js: true do
end
If at that stage you get complaints like "cannot find :selenium_chrome_headless", please make sure you have the latest capybara version.
And, if you’re using Devise, make sure that you add this line to your rails_spec.rb config:
config.include Devise::Test::IntegrationHelpers, type: :system
But, I can’t view my record on the page!
If you’re using Puma, and you were writing a test like this…
require 'rails_helper'

RSpec.describe "Creating a post with tags", type: :system, js: true do
  context "visiting the new post page" do
    it "shows me a list of tags" do
      Tag.create(name: "tag1")

      visit new_post_page

      expect(page).to have_content("tag1")
    end
  end
end
… then there is a big chance that your test will fail. The reason is that the Tag record is created in another thread.
To remedy that, you need to ensure that Puma starts in 0 workers mode and 1 thread only.
Our current Puma config looks like this:
# config/puma.rb
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))
threads_count = Integer(ENV.fetch("MAX_THREADS", 2))
threads(threads_count, threads_count)

preload_app!

rackup DefaultRackup
environment ENV.fetch("RACK_ENV", "development")

on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See:
  ActiveRecord::Base.establish_connection
end
You now need to create a support/puma.rb file:
ENV["WEB_CONCURRENCY"] = "0"
ENV["MAX_THREADS"] = "1"
Now, you should have a green test!
Let’s select a tag with Select2 in Capybara
Select2 hijacks your normal form select[multiple=true] and creates a fake input driven by JavaScript. So I have created a small helper and put it in support/helpers/select2_choose_tag.rb
def select2_choose_tag(field_class, options = {})
  within(".form-group.#{field_class}") do
    first('.select2-container', minimum: 1).click
    first('.select2-search__field').send_keys(options.fetch(:choose, ""), :enter)
  end
end
Now your final test can look like this:
require 'rails_helper'

RSpec.describe "Creating a post with tags", type: :system, js: true do
  context "visiting the new post page" do
    it "shows me a list of tags" do
      Tag.create(name: "tag1")

      visit new_post_page

      fill_in "Title", with: "Rspec & System tests are fun"
      select2_choose_tag "tag_list", choose: "tag1"
      click_on "save"

      expect(Post.last.tag_list).to eql(["tag1"])
    end
  end
end
Closing thoughts
System tests are now the default in Rails, and the RSpec team recommends that you move your feature tests to system tests. Doing so is fairly easy. You will just need to amend the block to include type: :system and change feature for describe.
You also need to make sure to only load js: true when you really need JavaScript and allow for non-JavaScript dependent tests to remain as snappy as possible. | https://www.cookieshq.co.uk/posts/rspec-system-test-javascript-and-select2-recipes | CC-MAIN-2022-21 | refinedweb | 771 | 65.42 |
Internet profiling

Info
- Publication number: US6839680B1
- Application number: US09410151
- Authority: US
- Grant status: Grant
- Prior art keywords: web, category, user, proreach
1. Field of the Invention
The present invention relates to the analysis of the behavior and interests of users of online networks, and more particularly to the analysis and modeling of user's interests for users of the Internet and World Wide Web.
2. Background of the Invention
In any market, customer behavior is important. This is true of traditional retail businesses, where there are well developed mechanisms for determining customer's interests. In brick-and-mortar businesses, the customers of the business can be observed by watching those customers walk through a store. Customer behavior can also be observed by tracking their purchases (e.g., through credit card purchases.) Customer observation is, in fact, an important technique used by many retail businesses. It is so important that major databases of customer behavior exist and are in continuous usage. For example, many supermarket chains have vast databases of customer behavior. Analysis of the data in such databases can be used for many purposes (e.g., inventory control, product placement, new product analysis).
Understanding customer behavior is also necessary for electronic commerce, but the techniques of observing the customer in this medium are necessarily different. The way that customers interact with an e-commerce web site is radically different from the experience of walking into a business in person and making a purchase, but many things remain the same. When Web visitors browse a web site, sometimes they buy, and sometimes they do not. Businesses are very interested in knowing why visitors buy and why they don't. So these new electronic merchants want to understand their prospects and their customers. These businesses must observe their web visitors. This observation leads to the need for modeling the interests of customers over time, the need for managing the tremendous amount of data that such modeling would entail, and the need for categorizing web content to providing for meaningful models of user interests.
Conventionally, observation of users in online systems has typically involved using user-provided information about users' interests, such as surveys or forms that allow the user to identify the categories of information that are important to them. Examples of this approach include the various customizable home pages offered by search portals such as Yahoo and Excite. In these portals, users can select various predefined categories of interest, and relevant news and related data is then provided to the user. If, however, the user's interests change over time, the user must manually change the specified categories of interest; this is not done automatically. These sites also allow users to specify their interests with simple keywords, but again, if the interests change, the user must manually change these keywords.
Other web sites more systematically track user behavior in terms of clickthroughs and page views, and then assemble information about these activities. As the user's activity changes on this particular web site, the assembled information is updated. This approach, while capturing some aspects of change in user behavior, is typically limited to identifying interests relative to a single web site. User behavior on other web sites does not affect the particular site's assembled information, even though such remote behavior may most accurately express the user's interests. More particularly, the analysis of user behavior is typically limited to the particular Internet domain of the server that tracks the usage. User activity at another domain is not tracked.
Further, the assembled information on such a server only expresses the user's interest without respect to potential future or past interests. That is, it does not model changing user interests over time. However, it is the change in user interest over time that is of significant value to web marketers and others attempting to deliver content to web visitors.
The present invention overcomes the limitations in the prior art by providing a system and methodology, and various software products that tracks user activity across multiple domains, and from such activity develops a time based model that describes the user's interests over time. The changing user interests are also used to determine each user's membership in any number of defined user groups. Each user's time based model of interests and group memberships forms a detailed profile of the Internet activity that can be used to market information and products to the user, to customize web content dynamically, or for other marketing purposes.
Thus the present invention fulfills an important need: to identify web visitors and understand their interests over time. The present invention, sometimes referred to herein as “ProReach” or “ProReach system” is a software system that tracks and analyzes web visitors on the World Wide Web. In short, it helps turn web visitors into web customers. The present invention has the following features and aspects.
First, the present invention can identify and monitor a web visitor as he visits a web site. Of course, on the Internet there are many web sites, and there would be many web visitors. Whether two web sites or thousands of web sites are involved, or there are millions of web visitors, the present invention provides a system which can identify many visitors across many web sites. Thus, in this aspect, the present invention identifies each visitor to a web site, with unique identification information. This allows the visitor to be consistently identified, during both multiple visits to the same web site, and during visits to other web sites.
ProReach combines data from many web activities to get a more complete picture of a web visitor. ProReach is able to combine the data from these different web sites because the visitor identification process works across the web. This simply means that when a web visitor goes from place to place on the world wide web, ProReach can typically identify the web visitor repeatedly and consistently. More specifically, in contrast to other web tracking products, the ProReach System collects data on both the web server and the web client. ProReach does the latter by providing downloadable software that web clients can install on their systems. Once installed, this software tracks the web user's actions from his machine. Each time he visits a web site, his actions are recorded. Periodically, a compact version of this data is uploaded to ProReach, and then distributed to other web sites which maintain profiles and user group information relative to the user.
Accordingly, the user's activity at each web site is monitored to identify items of web content with which the user interacts, such as page views, purchases, and so forth. The monitoring may be done by the web server itself, or by the client side software. This monitoring includes identifying each item of web content, such as with its URL or URI, along with information about how long the user viewed the content. This is beneficial because web activities that take longer, such as reading a web page, reflect a higher level of interest by the user. The data of a user's specific interaction with an item of content is stored in a web event record. (Certain web activities, e.g., simple, fast clickthroughs, may not be tracked in a web event record because they do not usefully reflect a user's interest.) This process of identifying web visitors and monitoring the web content they interact with occurs automatically and continuously. Over time then, a large number of web event records will be generated resulting from the activities of many web users at many web sites.
Once data about a web visitor's visit to a web site is gathered, this data is not yet in a form that is particularly helpful for making business decisions. For example, it is not particularly helpful to know that some web visitor has viewed hundreds of web pages at a dozen web sites. Rather, it is more useful to understand what kinds of things the web visitor looked at: Motorcycles? Cosmetics? News? Technical information? Music CDs? Books?
Ideally, every document on the World Wide Web would be associated with a description that would describe briefly what that document was about.
That is, this description would categorize that document, much in the way in which books are categorized in a library. Such an ideal is not going to become a reality any time soon, if ever. So there needs to be a way to automatically categorize the documents that a web visitor sees. This categorization technique should be robust, accurate and maintainable.
The ProReach system provides just this capability. It uses a content recognition engine to do this. A content recognition engine is a software component that can take a document and a set of categories and compute how closely the document matches up with these categories. Using the content recognition engine, the ProReach system can categorize various kinds of web documents, and provide a ranked list of categories, including hierarchical categories that pertain to the document. The basic idea is that the content recognition engine evaluates some number of categories that may or may not match up with a given document. The content recognition engine tests the document and returns a score as to how closely it matches with each category. During this process, the document gets tested against many categories, so the resulting categorization is really a vector of categorization scores. Each categorization score of that vector shows how well that document matches up with a given category, such as sports, news or computers.
Accordingly, each web event record is processed to determine its relevance to various defined categories. The categories are maintained in a category tree which covers a wide range of categories and topics. Preferably the web content is scored with respect to each category to indicate the degree to which the content may be said to be about that category. This categorization takes place automatically, without requiring action by a webmaster or system administrator.
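As a rough illustration of this scoring idea, here is a minimal sketch (not from the patent itself) in which each category is a hand-built term dictionary and a document's score for a category is the fraction of that category's terms appearing in the document. The category names and term lists are invented for the example; a real content recognition engine would use far richer models.

```python
# Toy categorizer: score a document against several category dictionaries.
# Categories and terms below are illustrative assumptions, not patent data.

def categorize(text, category_terms):
    """Return a dict mapping category name -> score in [0, 1]."""
    words = set(text.lower().split())
    scores = {}
    for category, terms in category_terms.items():
        hits = sum(1 for t in terms if t in words)
        scores[category] = hits / len(terms) if terms else 0.0
    return scores

categories = {
    "sports": ["soccer", "golf", "score", "team", "match"],
    "computers": ["software", "server", "web", "database", "code"],
}

doc = "The web server stores match results for every soccer team"
print(categorize(doc, categories))  # sports scores higher than computers here
```

The result is the "vector of categorization scores" described above: one score per category, which downstream steps can rank or threshold.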
The categories themselves used as part of the categorization process are part of the data that are provided to the content recognition engine. ProReach preferably provides turnkey categories, allowing the system to categorize web content as soon as ProReach is installed and running on a particular web site. In one embodiment, the turnkey categories are provided from a central host system that is in communication with a particular local ProReach system installation. The host ProReach system provides a comprehensive set of categories that target the practical information needs of e-businesses, and it provides sample data for these categories.
As an optional capability, ProReach system users can modify categories, or create their own. In this way, a web site using the ProReach system can categorize the viewing habits of its prospects and customers in a custom fashion. They can create new kinds of categories. This customization is optional. They are not required to do this. ProReach is a turnkey system that is customizable. It is not a system that requires customization to be used. ProReach also provides other tools to assist in the process of category creation and maintenance.
The data about a web visitor's activities is valuable, but ProReach can distill more meaning from this data. Electronic commerce decision makers are interested in the psychographic and demographic profile of the user. They do not want every single detail of the user's activities, but rather a summary of the user's interests which is abstracted from the details of the user's activities. It therefore becomes very desirable that all the detailed data of the user's activities can be compressed into a highly meaningful summary. Accordingly, the present invention further processes this information to develop detailed Internet profiles of each user, and of different user groups and categories of information.
The ProReach system of the present invention creates summaries of a web visitor's activities via a process of web activity aggregation. Through this process, the ProReach system automatically takes the previous history of a visitor's activities and integrates this with data collected from new visits. This process of taking new visits and integrating them with previous visits is performed on an as-needed basis. In this way, the profile of a web visitor is always kept up to date, reflecting that web visitor's interests.
More specifically, ProReach aggregates web visitors' web activity data on three dimensions: who they are (identity), what they did (content categorization) and when they did it (time). This process is called dimensional combining. Along these three dimensions, ProReach provides sophisticated, statistically based aggregation.
Another strength of the ProReach system is its flexible approach to aggregating a visitor's activities. Different kinds of e-commerce businesses will want to summarize their visitor's activities in different ways. This is because different companies have different needs for understanding the nature of their customers. Accordingly, aggregation may be tuned to the needs of a particular business.
Hence the ProReach system provides excellent aggregation capabilities that can then be tuned by ProReach system administrators. It allows parameters to be set that control the aggregation process. Power and flexibility are combined. These parameters control what information is maintained and the amount of storage allowed for its maintenance.
In this aspect of the invention then, the web event records accumulated at a given web server are first aggregated into a set of aggregated results for each web user at the site, preferably on a periodic, fixed basis, such as a daily basis. Thus, a user may visit a particular web site several times a day, each time generating dozens of individual web event records. The same is true for many different users. Accordingly, for each user, the web event records are combined to collect all of the categorization information for that user together. In addition, the category score information in each web event record is processed to reflect the duration of the web activity. This processing results in a set of category weights.
The combined category weighting information for the collected period, such as a day, describes in detail the user's degree of interest across a number of categories. However, further processing is beneficial to obtain a more summarized model of the user's interests. Thus, from the weighted category information various statistical measures are derived such as the mean category weight over the period, maximum and minimum weights, standard deviation, and the like. In addition, a trend pattern is also extracted which describes whether the user's interest in the category is increasing, decreasing, or constant, or some combination of these, over the time period. This summarized representation of the category weights for the time period can be stored, and best captures the changes in the user's interest, across a number of categories, over the time period. As a result, the underlying raw data of the web event records can then be deleted, so that storage efficiency is achieved.
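The daily aggregation step described above can be sketched as follows. This is a hedged toy version: the record layout, the weight function (category score scaled by seconds viewed), and the choice of summary statistics are illustrative assumptions, since the text does not fix a concrete schema.

```python
# Sketch of daily aggregation: web event records -> duration-weighted
# category weights -> per-(user, category) statistical summary.
from collections import defaultdict
from statistics import mean, pstdev

# (user, category, score, seconds_viewed): toy web event records for one day
events = [
    ("u1", "sports", 0.8, 120),
    ("u1", "sports", 0.5, 60),
    ("u1", "news",   0.9, 30),
    ("u2", "sports", 0.4, 300),
]

# 1. transform category scores into weights (score scaled by duration)
# 2. collate the weights by (user, category)
weights = defaultdict(list)
for user, category, score, seconds in events:
    weights[(user, category)].append(score * seconds)

# 3. summarize each (user, category) series; the raw events can then be dropped
summary = {
    key: {"mean": mean(ws), "max": max(ws), "min": min(ws), "std": pstdev(ws)}
    for key, ws in weights.items()
}
print(summary[("u1", "sports")])
```

Only the compact summary needs to be retained, which is the storage-efficiency point the passage makes.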
First, the period information may be aggregated for each user with respect to each of the categories across a longer time period. For example, the daily aggregated information for a user may be further aggregated for a week's time period, a month, a quarter, a year and so forth. This forms what is termed a user-category complex, wherein the statistical information for a single category from many different days is combined by an aggregation function. One exemplary aggregation function is the mean, and thus the mean of the category weights for this particular category over the time period is obtained, along with a trend pattern and other statistical measures.
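The "trend pattern" mentioned here could, for instance, be extracted by fitting a line to a user's daily mean weights for one category and classifying the slope. This is only one plausible realization; the threshold below is an assumed tuning parameter, not something the text specifies.

```python
# Classify a series of daily mean category weights as increasing,
# decreasing, or constant using a least-squares slope.
def trend(daily_means, threshold=0.05):
    n = len(daily_means)
    xs = range(n)
    x_bar = sum(xs) / n
    y_bar = sum(daily_means) / n
    # least-squares slope of weight vs. day index
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, daily_means))
    den = sum((x - x_bar) ** 2 for x in xs)
    slope = num / den
    if slope > threshold:
        return "increasing"
    if slope < -threshold:
        return "decreasing"
    return "constant"

print(trend([0.2, 0.3, 0.5, 0.6]))  # interest rising over four days
```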
Second, dimensional combining may be used to form category complexes. A category complex summarizes a large number of users' interests in a particular category over a selected time period. This complex describes the level of interest, over time, for a population of users in a particular category.
Another type of dimensional combining now makes use of the user-category complexes. First, the many user-category complexes for an individual user may be combined for a selected time period, to form an aggregated view of the user's overall interests. That is, the category information from many different categories is aggregated and describes the user's interests overall.
Additionally, the user-category complexes may be combined for an individual category and across selected users who form a user group, to create user group-category complexes. The user group members are selected by meeting certain membership tests based on their category interests and optionally demographics. This gives a summary of the user group's interest in that category over time.
The user complexes can be further combined into user group complexes to describe overall group interest across all categories. Finally, the group complexes may be aggregated to form an overall total complex which describes the total population's interest across all categories for the selected time period.
In addition to the various complexes that may be aggregated, individual profiles of the users can be further augmented with the user group information. A number of user groups may be defined, each having particular membership criteria. Marketers can define groups of users that share interests, buying propensities or demographics. The criteria are preferably based on a user having (or not having) particular levels or ranges of category weights for one or more categories. A user may be a member of multiple user groups. The group membership is automatically updated, as users interact with web content over time, and as their interests change as expressed by the changing levels of category weights. The ProReach system will automatically classify a user into the right user groups based on his or her profile. If the definition of the user groups changes, then the ProReach system will automatically re-classify users into the right user groups. Similarly, as the interests of users change, they will automatically be put into the right visitor segments based on their new interests. In this way, a marketer has immediate access to market segments on demand, and can swiftly apply electronic sales campaigns.
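A minimal sketch of this group-classification step might look like the following, where each group is defined by ranges of category weights and a user is re-classified from the current profile on demand. Group names and thresholds are invented examples, not from the patent.

```python
# Classify users into groups based on category-weight range criteria.

def matches(profile, criteria):
    """True if every (category -> (lo, hi)) range in criteria is satisfied."""
    return all(lo <= profile.get(cat, 0.0) <= hi
               for cat, (lo, hi) in criteria.items())

def classify(profile, groups):
    """Return the names of all groups whose criteria the profile meets."""
    return [name for name, criteria in groups.items() if matches(profile, criteria)]

groups = {
    "sports_fans": {"sports": (0.5, 1.0)},
    "tech_buyers": {"computers": (0.6, 1.0), "shopping": (0.3, 1.0)},
}

profile = {"sports": 0.7, "computers": 0.9, "shopping": 0.2}
print(classify(profile, groups))  # shopping weight too low for tech_buyers
```

Re-running classify with an updated profile (or updated group definitions) yields the automatic re-classification the passage describes.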
The visitor profile information that ProReach systems generate can be retained for the sole use and benefit of the web site that created it. It is also possible for ProReach systems to share their user profile information. To facilitate this sharing, ProReach provides a centralized service that helps ProReach systems define policies for the transfer of information between each other. For ProReach customers that want a deeper relationship with each other, the present invention provides for alliances. An alliance is a group of ProReach systems who have decided to contribute their user profiles into a database of profiles. All members of the alliance contribute profiles, and all members of the alliance benefit by getting a degree of access to the alliance profiles. In particular, alliances are useful to vertical markets where companies may want to work together on the world wide web. Such groups of businesses may benefit from combining their information, but they need the infrastructure to facilitate this sharing, regulate it and make it safe. ProReach provides this enabling infrastructure. In an alliance, each member contributes visitor profiles created for visitors to the member's web sites. These contributed profiles are aggregated together in a database of profiles maintained by the alliance. All members of the alliance get controlled access to these profiles. A system of sharing rules controls this whole sharing process, so that companies only share selected information. ProReach supports the formation of multiple alliances. A ProReach-enabled system can belong to more than one alliance.
A very large amount of visitor activity data will be generated by web sites using ProReach systems. The existence of this data raises privacy concerns. It also raises issues about how ProReach systems share data amongst themselves. ProReach has an architecture that addresses privacy concerns. ProReach ensures the privacy of web visitors via what it calls an identity firewall. The purpose of an identity firewall is to establish a boundary. Inside the boundary of the identity firewall, the identity of a web visitor is accessible to authorized personnel or processes. Other personal information is also available, such as e-mail address, home address and age.
Outside the boundary of the identity firewall, no data is provided that could be used to identify a web visitor. Instead, any person or process requesting information outside an identity firewall, only gets an opaque visitor identifier. The ProReach System that issues the opaque visitor identifier can use it to uniquely identify the web visitor. Hence, an opaque visitor identifier is an externalizable reference to ProReach visitors.
A person or process with an opaque visitor identifier can present the opaque visitor identifier to that ProReach System. The ProReach System can then map that opaque visitor identifier back to the actual visitor. Using this method, it is possible for a web marketer, for example, to be given a large amount of information about the interests of a web visitor but the marketer doesn't know the visitor's identity or contact information. The web marketer is simply given an opaque visitor identifier (or a set of such identifiers). The marketer gets the data he needs, but the privacy of the visitor's data is maintained. So outside the identity firewall, a web visitor being tracked by ProReach is anonymous.
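One plausible way to realize such opaque visitor identifiers is an HMAC over the internal visitor id, with the reverse mapping kept only inside the identity firewall. The text does not mandate any particular construction, so this is purely an illustrative sketch; the key and truncation length are assumptions.

```python
# Sketch of opaque visitor identifiers: outside the identity firewall only
# an HMAC-derived token circulates; the issuing system keeps the reverse
# mapping so it alone can resolve the token back to the visitor.
import hmac
import hashlib

SECRET = b"firewall-key"   # held only inside the identity firewall
_reverse = {}              # opaque id -> internal visitor id

def opaque_id(visitor_id):
    """Issue a stable, non-reversible token for external use."""
    oid = hmac.new(SECRET, visitor_id.encode(), hashlib.sha256).hexdigest()[:16]
    _reverse[oid] = visitor_id
    return oid

def resolve(oid):
    """Map a token back to the visitor; callable only inside the firewall,
    e.g. by the Visitor Contact Service when delivering a message."""
    return _reverse[oid]

oid = opaque_id("visitor-42")
print(oid, "->", resolve(oid))
```

A marketer holding only the token can request contact with the visitor, while the identity and e-mail address stay behind the firewall.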
The web marketer may have the ProReach system contact the web visitor on his behalf using IPro's Visitor Contact Service. Given an opaque visitor identifier and a message, the Visitor Contact Service looks up the e-mail address (or other necessary information). It then sends the message to the web visitor. The web marketer gets his message delivered to the web visitor, but the web marketer does not know the web visitor's identity.
Identity firewalls can be flexibly configured. They can be configured so that the identity firewall encloses a single ProReach System. They can be configured so that an identity firewall encloses a group of ProReach systems. The latter configuration would make sense when there are multiple ProReach servers working as a group (e.g., for a portal with multiple servers) and data should be shared between the servers.
- I. WEB EVENTS AND AGGREGATION
- A. WEB EVENT RECORDS
- II. OVERVIEW OF PROREACH SYSTEM ARCHITECTURE
- A. GLOBAL SERVICES
- III. BASIC SYSTEM PROCESSING
- A. PROREACH FUNCTIONAL OVERVIEW
- B. CATEGORY DISCOVERY AND MAINTENANCE
- 1. Category Discovery
- 2. Category Maintenance
- IV. PROREACH SYSTEMS WITH ALLIANCES
- V. AGGREGATION
- A. AGGREGATING DAILY WEB EVENTS
- 1. Transform Category Scores to Weights
- 2. Restructure Web Event Records to Collate Category Weights by User
- 3. Create Category Interest Time Model Information
- B. DIMENSIONAL COMBINING
- C. USER GROUP SYSTEM
- D. DAILY AGGREGATION
- E. AFFINITY GROUP MANAGER
- F. THE UPDATE OBJECT
- G. SCHEDULER
- H. EVENT DISPATCHER
- I. PROFILE SYSTEM
- J. AQL SYSTEM
- 1. AQL Language
- 2. AQL Interpreter
- VI. CATEGORIES AND CATEGORIZATION
- A. OVERVIEW OF CATEGORIZATION
- B. CATEGORIES AND HIERARCHIES ORGANIZE DATA
- 1. Building and Maintaining Category Hierarchies
- C. CATEGORY NAMES AND ID'S
- 1. Default Unalterable User Category Structure
- 2. Similarities and Differences Between Categories and Groups
- D. USING SOURCE OR LOCATION IN CATEGORIZATION
- E. THE CONTENT CATEGORY LIFECYCLE: FORMATION, TUNING, AND CHANGE
- 1. The Standard Category Tree and Additions by ProReach System Administrators
- a) Adding Categories At ProReach Systems
- b) Updating the Standard Category Tree
- c) Building the Standard Category Tree
- d) Discovery, Refinement, and Editing of Categories
- F. CATEGORIZATION MODEL OF THE CONTENT RECOGNITION ENGINE
- 1. Category Creation
- 2. Document Categorization
- 3. Multiple Dictionary Categorization
- 4. Category Cache
- VII. GLOBAL SERVICES
- A. GLOBAL IDENTIFIER SERVICE
- 1. Requests For GIDs
- 2. Individual Identification via PIDs
- B. GLOBAL UPLOAD SERVICE
- C. GLOBAL CLIENT MANAGEMENT SERVICE
- D. YELLOW PAGES
- E. GLOBAL EXCHANGE POLICY
- VIII. PROREACH CLIENT SIDE WEB USAGE DATA COLLECTION
- A. WEB ACTIVITY MONITORING
- B. PROREACH CLIENT WEB USAGE DATA FILTRATION AND AGGREGATION
- 1. Time-based consolidation
- a) Adjust web event record time stamps
- b) Ignore short-term activities
- c) Aggregate Web activities
- 2. Other Filtration of Data
- 3. Privacy Control
- C. FILTRATION BASED ON PRIVACY SETTINGS (USER MODIFIABLE)
- 1. URL pattern-based filtration
- 2. Keyword-based filtration
- D. DEFAULT PRIVACY-RELATED FILTRATION
- E. PROREACH CLIENT DATA UPLOAD
- 1. ProReach client upload queue
- 2. ProReach Upload Stream and Upload Record
- 3. Data upload
- a) Web Event Record upload
- b) Homepage URL upload
- 4. Upload time and upload stages
- a) Pre-upload stage
- b) Upload stage
- c) Post-upload stage
- 5. ProReach Upload Service and upload
- IX. CONTENT TARGETING
- A. ACCESS TO PROFILE BY A CGI
- 1. Access to page Metadata by CGI
- a) Handling dynamic content categorization of multipart pages at runtime
I. Web Events and Aggregation
Referring now to
Finally, the web event 101 includes information that identifies one or more categories 105 into which the web content visited by the web visitor falls, and a measure of the user's interest in each of those categories. The categories used to describe the web content preferably form a hierarchy, with parent categories (e.g., “Sports”) having multiple child categories (e.g., “Soccer” and “Golf”).
These three pieces of data are used to model the basic idea that a user viewing or interacting with an item of web content is expressing an “interest” in whatever category or categories that web content is about. The longer the visitor views or interacts with the content, the greater the visitor's interest is presumed to be (other factors may also be used to scale the level of interest, such as the type of interaction, e.g., a simple viewing of a page versus a purchase).
This measure of a user's interest in a category at a particular time or over a particular duration is expressed as a weight. A weight is a function of the amount of time spent by the visitor interacting with an item of web content and the degree to which the category is deemed to describe the content. In a preferred embodiment where a number of categories are available, a web event includes a weight for each category. This reflects the fact that a given item of web content may relate to many different categories in different degrees.
To provide a meaningful scale of interpretation of these weights, and hence a level of interest in a category, the weights are scaled to a standard unit called an interaction unit. An interaction unit is interpreted to mean 1 minute of attention paid by a user to an item of content. By scaling web events using interaction units, it becomes possible to meaningfully compare the interests of any variety of different users and categories of web content.
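The scaling of viewing time and category scores into interaction-unit weights might be sketched as follows. This is a minimal illustration, not the specification's own method: the function names and the multiplicative combination of time and score are assumptions.

```python
def interaction_units(seconds_viewed):
    """Scale raw viewing time to interaction units (1 unit = 1 minute of attention)."""
    return seconds_viewed / 60.0

def category_weights(seconds_viewed, category_scores):
    """Compute a per-category weight for one web event.

    category_scores maps category name -> degree (0.0-1.0) to which the
    category is deemed to describe the content (assumed representation).
    A web event carries one weight per category, since content may relate
    to many categories in different degrees.
    """
    units = interaction_units(seconds_viewed)
    return {cat: units * score for cat, score in category_scores.items()}

# A 3-minute visit to a page that is mostly about soccer:
weights = category_weights(180, {"Sports": 0.9, "Soccer": 0.8, "News": 0.1})
```

Because every weight is expressed in the same interaction units, weights from different users, pages, and sites can be meaningfully compared and summed.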
These three types of information are collected for each item of web content viewed by a web visitor at a particular web site, and by extension by multiple different visitors across many web sites. For example, as the visitor moves from one web page to another on a given web site, a web event is generated which encapsulates the information identifying the visitor, the category description of the page, and the amount of time spent by the visitor on the page. As the same visitor visits different web sites, they are identified and web events which capture the category of content and time spent viewing such content are generated.
In themselves, web events are merely individual data items, and do not directly describe the overall patterns of interest of any individual user or group of users, or patterns relative to categories or time. This level of abstraction is provided by a second aspect of the present invention, aggregation. Most generally, aggregation is the process of summarizing the weights of different groups of web events to establish patterns of interest. Generally, web events can be combined with respect to time periods, individual users, groups of users, categories or groups of categories, or any combination of these. When considered together, there are six different ways to combine web events:
- 1) Combine all web events between two dates: This combination approach combines web events related to all categories and all users over a given time period to provide a model of the global interests of the population of users.
- 2) Combine all web events for a category between two dates: This combination combines the web events for a specific category (or group of categories) for all users over a given time period to provide a model of the overall level and pattern of interest in the specified category across the population of users.
- 3) Combine web events for a user and a category between two dates: This combination combines the web events of a specific user and a specific category over a time period to provide a model of the user's level and pattern of interest in the specified category.
- 4) Combine web events for a user group and a category between two dates: This combination provides a model of the group's interest in the specified category.
- 5) Combine web events for a specific user between two dates, across all categories: This combination provides a description of the overall distribution of a user's interests across all categories, showing whether the user is narrowly interested in one or a few of the categories maintained at a web site, or broadly interested in many of them.
- 6) Combine web events for a user group between two dates, across all categories.
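All six combinations above reduce to filtering web events along some subset of dimensions and summing their weights. A minimal sketch, assuming a simple dictionary representation for web events (the field names are illustrative):

```python
from datetime import date

def combine(events, start, end, user=None, users=None, category=None):
    """Sum the weights of web events selected along the given dimensions.

    Each event is assumed to be a dict:
        {"user": str, "category": str, "start": date, "weight": float}
    An event is selected if its start date falls within [start, end].
    Leaving user/users/category as None leaves that dimension unconstrained.
    """
    total = 0.0
    for e in events:
        if not (start <= e["start"] <= end):
            continue  # outside the selected time period
        if user is not None and e["user"] != user:
            continue  # combination restricted to one user
        if users is not None and e["user"] not in users:
            continue  # combination restricted to a user group
        if category is not None and e["category"] != category:
            continue  # combination restricted to one category
        total += e["weight"]
    return total
```

Combination (1) is `combine(events, start, end)` with no other arguments; (3) supplies both `user=` and `category=`; (4) supplies `users=` and `category=`; and so on for the rest.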
In one embodiment, when performing these various types of combinations, the events selected during a given time period are those which start during the time period, even if they end after the selected time period.
We call the process of combining web events in these various ways “dimensional combining”, since there are six “dimensions” in the data along which web events may be combined. These possible combinations can be used to provide an analysis of any user's or group's interest in any category or categories over any time period. Referring now to
In
First, in Level 1, the daily aggregates can be combined per (3) above into “UC” or User-Category complexes 203, or per (2) above into individual “C” or Category complexes 205. Note that a single daily aggregated result 101 may contribute to both kinds of complexes; that is, the results of a particular web visitor's web activity contribute both to the Category complexes 205 for all categories affected by that visitor's activity, and to that user's specific User-Category complexes 203 describing that user's level of interest in the various categories.
Next, in Level 2, the individual UC complexes can themselves be combined. First, per (4) above, the UC complexes for the users who form a user group can be combined into a “GC” or Group-Category complex 206. This complex 206 describes the group's interest in the particular category. Second, per (5) above, all of the User-Category complexes for a particular user can be combined to form a single “U” or User complex 207, summarizing the user's interests across all of the categories. The User complex 207 is particularly useful for gauging the breadth or narrowness of a user's interests. For example, a web site may have a limited number of categories of content. For one user of this web site, the user complex 207 may show a high level of interest in just one or two categories, whereas another user's user complex 207 may show a high level of interest in a majority of the categories; this second user is likely to be more valuable to the web site for purposes of marketing or other value-driven activities.
Next, in Level 3, the complexes 207 for individual users can be combined per (6) into a “G” or Group complex 209 across all categories.
Finally, in Level 4, the complexes 209 from the many groups can be combined per (1) above into a Total complex 211, describing the interests of all users across all categories.
This web event modeling and aggregation framework provides many advantageous features. First, it allows a system administrator (or a member of a ProReach system) to arbitrarily select the time period over which any of these aggregations is performed, to obtain broader or narrower analyses of the time pattern of the users' interests. This is useful for identifying very short-term interest trends or longer-term trends in users' interests. Second, because each level of aggregation fully captures the information of the level below it, the underlying web event data may be selectively discarded to improve storage efficiency. For example, web events for categories which have a very low level of interest (identified by a low weight) may be discarded after their data has been summarized into UC or C data. Web events with greater weight may be stored longer to allow them to be used for further analysis or marketing.
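The selective-discard policy described here might look like the following sketch; the record layout and the idea of a configurable weight threshold are assumptions for illustration.

```python
def prune_summarized_events(events, weight_threshold):
    """Retention policy sketch: after raw web events have been rolled up
    into UC or C summaries, low-weight events may be discarded to save
    storage, while heavier events are retained for further analysis or
    marketing. Each event is assumed to be a dict with a "weight" key.
    """
    return [e for e in events if e["weight"] >= weight_threshold]
```

Because each aggregation level fully captures the level below, pruning the raw events loses no information already summarized into the complexes.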
A. Web Event Records
When a web visitor performs a web activity, such as viewing the contents of a uniform resource locator, or clicking on a submit button that initiates a web transaction, this web activity is recorded by client-side or server-side trackers, which record this web event. The data of each web event is stored in a web event record. Web event records are then aggregated into the daily aggregated results 101, and from there into the various complexes. The basic features of a web event record are as follows:
The web event records may be generated by either the web client 108 or the web server 102. If generated on the web client 108, the corresponding web event records would be as follows (note that the user ID and category score information is not shown here).
Note 1: When a URL is captured, the current time is stored in the Start-time timestamp field of the web event record. The difference between the current time and the time in the timestamp of the previous record is calculated and stored in the previous record's “duration” field.
Note 2: Duration may or may not equal (End-time − Start-time). This is because there may be other events between the earliest download at this URL and the last download. For example, there is a gap of 2 minutes between visits to <URL B> and <URL C>. The “duration” in the activity table shows the actual time a user spends browsing a particular URL, while the “duration” in the web event record is an approximation of that time. Where the web event record is created by the web client 108, the client software can only approximate the real “duration” by taking the Start-time of the next URL as the End-time of the current URL. There is no way for the software to know about idle gaps in between URL visits without user intervention. Where the web event record is generated by the web server 102 that is tracking the user, the duration can likewise be estimated.
Note 3: Here too, the duration for <URL E> can only be calculated by the web client 108 as 13 min 50 sec (10:30:00 − 10:16:10 = 00:13:50). The web client 108 will not know of the idle time after the access to <URL E>. However, the web client 108 (or the web server 102) may keep a pre-set maximum time for the duration of a single URL access, for example, 5 minutes. This normalizes the “duration” factor so that no single URL access can have an abnormally large “duration”. A user may be tied up with other activities for a while between two URL accesses, and this may result in some abnormally large duration numbers, which would incorrectly affect the user's Web usage pattern and profile. Note that the cumulative duration, however, is not limited to that maximum duration number. For example, the duration for <URL A> is an aggregation of two separate URL accesses; therefore, it is not confined to the 5-minute limitation.
Note 4: Activities 5, 7, and 8 were not included in the total duration of any web event, since they were filtered out for being too short a period of time. This is done to help reduce the data collection requirements, and because such short-duration views are not likely to be indicative of the user's actual interests.
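The approximation, capping, and filtering rules in Notes 2 through 4 might be sketched as follows. The 5-minute per-access cap and the minimum-duration threshold are illustrative values, and the final access in the log is omitted since its duration cannot yet be approximated:

```python
MAX_ACCESS_SECONDS = 5 * 60   # per-access cap (Note 3); cumulative duration is uncapped
MIN_ACCESS_SECONDS = 5        # accesses shorter than this are filtered out (Note 4)

def build_event_records(activities):
    """activities: list of (url, start_seconds) pairs in visit order.

    The duration of each access is approximated as the gap to the next
    access (the client cannot observe idle time without user intervention),
    capped per access. Very short accesses are dropped. Repeat accesses to
    the same URL accumulate duration, which may exceed the per-access cap.
    Returns {url: total_duration_seconds}.
    """
    durations = {}
    for (url, start), (_next_url, next_start) in zip(activities, activities[1:]):
        gap = min(next_start - start, MAX_ACCESS_SECONDS)
        if gap < MIN_ACCESS_SECONDS:
            continue  # too short to indicate real interest
        durations[url] = durations.get(url, 0) + gap
    return durations
```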
In the next sections we describe the architecture and functionality of a system which records web events and provides the various capabilities to aggregate data as described.
II. Overview of ProReach System Architecture
The present invention may be embodied in a system which we call “ProReach”. We begin with a very high-level overview of the ProReach architecture, and describe the high-level components involved. The ProReach system analyzes the data in order to evaluate the web visitor, and creates or updates a profile of the web visitor. The resulting profile of the user (or other profiles that are affected by the user's visits) can be used for marketing purposes, for page composition, or for driving banner ads.
The various ProReach systems make use of ProReach Global Services 110. These global services 110 perform various tasks that are best centralized for purposes of efficiency and integrity of information. These global services 110, which are further discussed below, include identification of web visitors, maintenance and distribution of standardized categories to the various systems 100, and mechanisms for exchanging information between systems 100.
Referring now to
More particularly, each spoke 202 is dedicated to collecting and categorizing the visitor data from a web server 102. Once the data is collected from the web server 102, it is partially processed on the spoke 202. The partially processed data is then moved from the spoke 202 to the hub 204. At the hub 204, the data is aggregated and further analyzed to produce up-to-date visitor profiles. Note that data from the same web visitor might stream in from different spokes 202, in which case the hub 204 aggregates this data into the appropriate user profile.
ProReach is architected so that most ProReach services are within company firewalls. Web servers 102 themselves are outside the firewall. A typical ProReach configuration including a ProReach system 100 for a single web server is depicted in FIG. 5. Here, the ProReach-enabled web server 102 is outside the firewall 206. A ProReach spoke 202 is connected to the web server 102, with communication taking place using server-side plug-ins, such as Java servlets. The ProReach spoke 202 itself is connected to a ProReach hub 204, as previously described. In
ProReach also works across web firewalls 206. For example, suppose a company had two web servers 102, each with its own domain name and firewall 206. It might be desirable to track all the web visitors at these web sites. In this case, a different configuration of ProReach is used, in which one of the spokes 202 is attached to a local hub 204, while the other spoke 202 is remote and behind another firewall 206. The ability of ProReach to work across firewalls is desirable, particularly when web sites belonging to different organizations or companies are to be grouped together as a logical unit, with the data of their web visitors shared.
A. Global Services
In one embodiment, ProReach provides a number of global services 112. These services are provided by a master host system and server, such as may be provided by an overall provider of ProReach systems 100. The global services are shown in FIG. 6.
Global Identifier Service 502. This global service allocates global identifiers (GIDs) and provides other functionality related to visitor identification. A GID is used to globally identify a web visitor, so that the visitor's web events and other usage data can be properly collated when received from many different ProReach systems 100 or ProReach-enabled web clients 108.
Global Category Tree Service 504. This global service maintains and distributes a standard collection of categories. This allows the different ProReach systems 100 to use a common set of categories for describing and categorizing web content. In this manner, interest information from many different web sites can be measured and evaluated against a common framework of categories.
Global Upload Service 506. This global service works with the client tracking software to receive uploaded web activity data from the various ProReach-enabled web clients 108. This global service then distributes this web activity data to the appropriate ProReach systems 100.
Global Client Management Service 508. This global service helps manage ProReach-enabled web clients 108 by keeping a list of all such clients and maintaining this list (e.g., adding new ProReach-enabled web clients 108 and deleting those no longer in operation).
Global Yellow Pages 510. This global service maintains an LDAP directory of ProReach systems 100.
Global Exchange Policy Service 512. This global service allows an individual ProReach system 100 to describe the business rules under which it will exchange web visitor information with other ProReach-enabled systems 100.
III. Basic System Processing
ProReach's job is to capture user data, subject it to analysis and produce a visitor profile summary for any individual visitor or groups of visitors collectively. The visitor profile summary describes the interests of that given web visitor or group. There are many different processes involved in producing this web profile summary. These generally are as follows:
- tracking web visitor activity on the web server;
- tracking web visitor activity from the web client;
- categorizing the documents that the web visitor views and determining their weights;
- aggregating web events by time, by user and by category;
- identifying the same web visitor when he visits different web sites;
- aggregating the data collected at different web sites for the same web visitor, so that a global profile of the web visitor results;
- category discovery and maintenance.
In the first of the next two sections, we will walk through some of ProReach's key application processes. Following that section, we will look at category discovery and optimization.
A. ProReach Functional Overview
In this section, we describe the basic processing steps that take place, in order to show how data flows through a basic ProReach system 100. We will also view in more detail the structural features of a ProReach system 100.
Because we want to concentrate on these basic processing steps, we will make some simplifications and only explore a specific scenario: one where the ProReach-enabled web server 102 only tracks web visits based on cookies resident on web clients 106. So while ProReach also tracks web visitors based on their login name and other information, this tracking is not shown below. We also assume here that the web client 106 allows cookies, which is true for most web clients.
In general, the overall process of tracking web activity is as follows:
- A web client 106 visits a ProReach-enabled site 100.
- The ProReach-enabled web server 102 redirects the web client 106 to a global service web server 112. This web server 112 is responsible for allocating global identifiers (GID) that identify web visitors. Web visitors are identified as specifically as is possible. Sometimes the identification pinpoints the actual person; sometimes it can only identify the web client 106 being used.
- The global service web server 112 redirects the web client 106 back to the original ProReach-enabled web site 100 with extra data that identifies the web visitor.
- The ProReach-enabled web server 102 takes this identifier and logs the web hit on a log. The entry on the log contains this identifier.
- The web server 102 reads from this log of web hits and sends the data to a ProReach spoke 202. Processing of each entry on this log begins on the spoke 202. The category of the web pages viewed by the visitor is computed. At this point the ProReach system 100 has determined who has accessed the web page and what the content of the web page is about.
- Over time, a visitor's repeat visits to a web site 100 will result in a history of web events associated with that web visitor. ProReach manages this data by subjecting it to an aggregation process. This process keeps the data as compact as possible while retaining useful analytical properties. In particular, the aggregation process summarizes web events into more generalized descriptions of web activity, including summaries across users and/or categories.
- After the aggregation step is completed, a profiling step takes place. This profiling step identifies the interests of a web visitor. The result is a web visitor profile summary of his or her interests.
The above steps demonstrate the basic processing steps used to track, categorize, and aggregate web visitor data. The result of these steps is a database of web visitor profiles that can be explored by web marketers, as well as being used for other purposes (selecting banner ads, personalizing content or services). A web marketer can then explore the population of his web visitors using query tools.
These steps will now be explored in detail in the remainder of this section.
Referring to
We begin our processing with a visit from a web client 106. The web client 106 accesses 701 a web page hosted by the web server 102. The Logger 702 requests a GID for the web client. To get this GID, the Logger 702 makes a request to the global identifier service 602 of the global ProReach service 112. This request is initiated by redirecting 703 the web client to a ProReach web server that is part of ProReach global services 612, via the HTTP protocol. In
If the request does not include the ProReach cookie, and hence the web client does not have a GID, then a new GID is generated by the global identifier service 612. This GID is guaranteed to be globally unique. The GID that the global service has computed is then returned 707 to the ProReach-enabled web server 102 via web redirection. The actual GID is encoded in the URL, so that the ProReach-enabled web client 106 can receive 705 this URL and extract the GID from it, storing the GID in a cookie. Other information is also encoded in the URL so that the web client 106 will be sent back to the page the visitor originally requested.
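The redirect-based GID allocation might be sketched as follows. This is an illustration only: the query-parameter name, the use of UUIDs for uniqueness, and the function names are assumptions, since the text describes the mechanism only at the protocol level.

```python
import uuid
from urllib.parse import urlencode, parse_qs, urlparse

def handle_gid_request(original_url, existing_gid=None):
    """Global identifier service sketch: reuse the GID from the ProReach
    cookie if the request carried one, otherwise allocate a new globally
    unique GID. Returns the GID and a redirect URL that sends the client
    back to the page it originally requested, with the GID encoded in it.
    """
    gid = existing_gid or uuid.uuid4().hex
    redirect_url = original_url + "?" + urlencode({"gid": gid})
    return gid, redirect_url

def extract_gid(redirect_url):
    """Client-side sketch: pull the GID out of the redirect URL so it can
    be stored in a cookie for subsequent visits."""
    return parse_qs(urlparse(redirect_url).query)["gid"][0]
```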
If a web visitor has configured their browser not to accept cookies, the global identifier service 602 can detect this, and will still allocate a GID for this web visitor which is returned via the redirect as a GID in the usual way. However, the value of this GID tells the ProReach-enabled web server 102 not to try and issue a session cookie and to log the events of this web visitor as an unknown or anonymous user.
In
As shown in
Once the data reaches the spoke 202, it is pre-processed 706 for inclusion in the Visitor Log 708. The preprocessing turns the data, no matter its specific format, into web events of the standard form (e.g., an object representation of that data).
The Event Queue 710 monitors this log 708, and when new web event data is available, it fetches the data and sorts the web entries by GID. The Event Queue 710 then calls on the Event Processor 712 to process each web event in the log 708. The Event Processor 712 ensures that the web event is categorized by making a request to the categorizer 714. It is possible that the web page has already been categorized, and that this categorization information has been entered into the Page Metadata 716. Prior categorization occurs because ProReach spiders web sites in order to categorize their web pages as early as possible, so as to avoid doing categorization at runtime. However, since some web sites produce web content dynamically, ProReach cannot pre-categorize all web pages, and must be prepared to categorize web pages on a just-in-time basis.
If the URL visited by the web visitor has already been categorized, then this data can be fetched from the Page Metadata cache 716. If not, the categorizer 714 makes calls on a content recognition engine 718. The content recognition engine 718 manages a database of categories. Each category represents some kind of topic, such as “sports” or “news.” A web page can be matched against any number of categories. The matched categories describe what a web page is about, and provide a means by which the visitor's interests can be identified.
The content recognition engine 718 provides a score for a number of categories, each score measuring the degree to which the page may be said to be about the category. Preferably, a score is provided by the content recognition engine 718 for each category in the category database; alternatively a score is provided only for a selected number of top scoring categories (e.g., top 10 highest scoring categories).
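The cache-first categorization path could be sketched like this; the callback-style interface to the content recognition engine and the in-memory dictionary cache are assumptions made for the sake of a runnable example.

```python
def categorize(url, fetch_scores, page_metadata, top_n=10):
    """Return category scores for a page.

    Consults the Page Metadata cache first (pre-categorized pages from
    spidering), and falls back to the content recognition engine
    (fetch_scores) for dynamically generated pages, caching only the
    top-scoring categories as one embodiment described in the text does.
    """
    if url in page_metadata:
        return page_metadata[url]          # already categorized
    scores = fetch_scores(url)             # just-in-time categorization
    top = dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n])
    page_metadata[url] = top               # update the cache for next time
    return top
```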
When the content recognition engine 718 completes its categorization process for a given web event, it updates the Page Metadata cache 716 for the web event to include a list of the scored categories and their respective scores. Once the cache is updated, the categories of the web event and their respective scores are returned to the Event Processor 712. The Event Processor 712 modifies the web event record to include the results of the categorization for that web event. Alternatively, the categorization information may be stored separately from the web event, and accessed from the web event by some other means, such as a URL. Once the web event record has been categorized, the web event is ready to be sent off to the next stage of processing, which takes place on the ProReach hub 204. More generally, the categorized web events are streamed from the ProReach spoke 202 or spokes to the hub 204.
In
The hub 204 maintains a database 720 of web profiles. Each profile in this database 720 is uniquely identified by a GID. In each web profile, the web events of the web visitor are maintained by category. An exemplary web profile will describe an individual's (or group's) interest in each of a number of categories included in the category database.
The ProReach hub 204 takes newly categorized web events and integrates this data with the data of an existing web profile; this updates the profile of the visitor with the most current information about their interests, as captured in the web events generated from their web activity. If a web profile does not exist for the web visitor, then one is created.
The first step of this aggregation process is to fetch the needed web profile from the database 720, using the web visitor's GID to select the web profile. When a web event record or a set of event records is aggregated, the records are processed in groups where each web event has the same GID.
Once the web profile for a GID is retrieved, the Aggregator System 724 performs an aggregation operation for all categories of documents that this web visitor has accessed. In one preferred embodiment, a threshold value for updating category weights is established, and only those categories for which the document scored higher than the threshold are updated.
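The threshold-gated update of a profile's per-category weights might look like this sketch; the field layout, the default threshold, and the multiplicative contribution are illustrative assumptions.

```python
def update_profile(profile, document_scores, weight, threshold=0.1):
    """Add one event's contribution to the per-category weights of a web
    profile, updating only those categories for which the document scored
    higher than the threshold (one preferred embodiment in the text).

    profile: {category: accumulated_weight}
    document_scores: {category: score in 0.0-1.0}
    weight: the event's weight in interaction units
    """
    for category, score in document_scores.items():
        if score <= threshold:
            continue  # low-scoring categories are skipped
        profile[category] = profile.get(category, 0.0) + weight * score
    return profile
```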
Generally, the aggregator 724 updates the various user, group, and category summaries as described with respect to FIG. 2. Each of these summaries is held in its own web event record, which identifies both the user, user group, or category to which it applies, and the appropriate aggregated weight values. Because of this approach, ProReach can retain large amounts of visitor data at lower cost, and this data is of higher quality because it is designed to support the kinds of operations needed by web marketers, that is, analysis of user interests and trends.
When the aggregation process is completed, the next step is to update the visitor's profile. Profiling 726 is a task that identifies the interests of a web visitor. To understand how this works, we first explore a brief example. Suppose there is a web marketer who wants to identify “sports enthusiasts” visiting the web site. The web marketer first defines what he means by “Sports Enthusiasts”. There are many ways that this term could be defined:
- Absolute Interest Magnitude Definition: A sports enthusiast is someone who looks at sports-related web pages at least twenty times every year;
- Relative Interest Frequency Definition: A sports enthusiast is someone who looks at sports-related web pages more frequently than he looks at other web pages. For example, a sports enthusiast is someone who, out of every 100 web pages viewed, tends to look at at least ten sports-related web pages.
- Comparative Interest Frequency Definition: A sports enthusiast is someone who looks at sports-related documents much more often than other web visitors do.
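Each of the three candidate definitions can be expressed as a simple predicate over aggregated counts or weights. A minimal sketch, with illustrative thresholds (the text's own example values where given, assumed ones elsewhere):

```python
def absolute_interest(category_views_per_year, min_views=20):
    """Absolute Interest Magnitude: at least min_views sports-related
    page views every year (20 in the text's example)."""
    return category_views_per_year >= min_views

def relative_interest(category_views, total_views, min_fraction=0.10):
    """Relative Interest Frequency: category views as a fraction of all
    views (at least 10 in 100 in the text's example)."""
    return total_views > 0 and category_views / total_views >= min_fraction

def comparative_interest(user_rate, population_rate, factor=2.0):
    """Comparative Interest Frequency: views the category much more often
    than other visitors do (the factor is an assumed threshold)."""
    return user_rate >= factor * population_rate
```

Logically, each of these is the kind of query (predicate) that the profiler applies against the profile database to decide whether a visitor has the interest.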
Each of these three candidate definitions for the term Sports Enthusiast describes the interest as a function of the weight or weights of a “sports” category or categories, as determined from the web activity of the user.
Any of these types of definitions (or others) may be used to define an interest with respect to any set of categories. Logically, an interest may be understood as a query, such as one uses in SQL, against the profile database 720 that determines whether a web visitor does or does not have that interest. The query can be defined to evaluate the weights of any combination of categories. With ProReach, a web marketer can name and define such interests using a simple query tool, such as a query-by-example tool, that operates on the database 720 via database agent 728.
Once an interest is defined, the new interest is added into a given ProReach system 100 and activated. Once an interest is activated, it is the responsibility of the profiler 726 to take each interest and test whether a given web visitor has that interest. When profiling takes place, each activated interest is applied to the web visitor's data to determine if the visitor has that interest. The result is a profile which identifies which interests are applicable to the visitor.
For example, imagine that there were five active interests in the database 720, such as Sports Enthusiast, Conservative, Hobbyist, Recent Divorcee, and Planning For Retirement, each of which has been previously defined by a set of criteria, such as described above, with respect to various categories. Thus, the Conservative interest may be defined by a relative frequency of accessing pages which are categorized in categories deemed to be associated with conservative ideas or beliefs; the Recent Divorcee interest may be defined by a comparative frequency (to identify the most current behaviors) of viewing web content related to divorce attorneys.
Such a set of interests is stored in the database 720 and applied by the profiler 726 to a web visitor's data. The query associated with each interest is applied (as a predicate), and the result of this predicate evaluation is a boolean value. From this processing, a set of results would follow, for example:
Note that here, the results are Boolean values, indicating whether or not the visitor had the interest. In an alternative embodiment using fuzzy set membership, each interest result may be expressed as a measure of the degree to which the user has the interest (e.g., a scaled value between 0.0 and 1.0).
Based on a result such as this example, the web profile of this web visitor is then updated 723. Preferably, a web profile summary record in the database 720 lists the interests of the web visitor. In one embodiment, the web profile summary record contains an interests field which lists the interests of the web visitor, as determined by the profiler 726. After profiling completes, this interests field is updated. Each interest is associated with an interest identifier, and so it is actually a sequence of integers that is assigned to this interests field, such as
- {101,321,19}
For example, if the SportsEnthusiast interest has an ID of 101, and the Conservative interest has an ID of 321, and the PlanningForRetirement interest has an ID of 19, then this means the same thing as:
- {SportsEnthusiast, Conservative, PlanningForRetirement}.
Each such interest ID thus concisely identifies an interest for that web visitor.
Interests are useful because they help categorize web visitors. However, interests are distinct from categories in several ways. First, interests describe users or groups of users, whereas categories describe web content. Second, interests are formed from combinations of multiple factors, including category scoring of visited web content, demographics, and the like, and thus interests are not easily constrained to the hierarchical parent-child relationships typified by the categories of the content recognition engine 718.
As ProReach profiles web visitors, it computes the interests of each web visitor, and then recomputes them as needed. When this computation is performed, the updated profile summary is then stored 722 back in the database 720 via database agent 728. The result is an updated web profile, with all the data relating to categories, and with all the interests of that web visitor updated as well.
Other ProReach tools, such as the query tools, can use this data to quickly pinpoint groups of ProReach web visitors. For example, a query can be made to identify all web visitors who are both “sports enthusiasts” and “conservative.” Alternatively, a query could be made to identify all web visitors who are “sports enthusiasts” but who are not “conservative.”
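Queries of this kind reduce to set operations over the interest-ID fields of the stored profile summaries. The following sketch is illustrative; the profile layout is an assumption.

```python
def query(profiles, include, exclude=frozenset()):
    """Return visitor IDs whose interest-ID set contains every ID in
    `include` and none of the IDs in `exclude`."""
    return [vid for vid, interests in profiles.items()
            if include <= interests and not (exclude & interests)]

# 101 = SportsEnthusiast, 321 = Conservative (IDs from the example above)
profiles = {"v1": {101, 321}, "v2": {101}, "v3": {321, 19}}
print(query(profiles, include={101, 321}))            # ['v1']
print(query(profiles, include={101}, exclude={321}))  # ['v2']
```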
At this point, we have shown how interests are defined and how profiles are updated to reflect the web visitor's current set of interests.
ProReach has many other capabilities, such as the tracking of web activities from the web client and the exchange of web profile data between ProReach systems. It supports facilities that help web marketers identify and contact prospects. It supports advanced categorization techniques that allow businesses in vertical markets to create categories suited to their business. It also supports categorization techniques that automate the process of developing and maintaining categories.
B. Category Discovery And Maintenance
This section introduces ProReach's processes for category discovery and category maintenance. We will describe these processes by example.
1. Category Discovery
Suppose a ProReach system 100 has the following categories for computer peripherals, as managed by its content recognition engine 718:
The Storage Device category is the parent category for the other categories. First, it should be noted that the total number of documents in the subcategories is 430, whereas there are 500 documents categorized as Storage Device documents. This suggests that there is some other category in these documents that is related to storage, but which is distinct from the existing subcategories.
The category discovery process uses statistical analysis to look for the hidden categories in some existing category. As will be further described below, category discovery identifies categories based on frequency and relationships between words appearing in a set of documents. In the example above, this category discovery process might find that many storage documents were about DVDs. It would then identify “DVDs” as a potential new category. In one embodiment, the category discovery process does not automatically create a new category. Instead, any category change suggested by the category discovery process is checked and confirmed by an operator. This interaction with the operator is desirable for a number of reasons. First of all, the category discovery process may make many valuable suggestions, but it may not always be right. Some degree of human guidance is useful to ensure that only meaningful categories get added.
Suppose in the above case that the operator confirmed that a new DVD category should be added. Once confirmation is given, the rest of the process is automatic; the category can then be used immediately by the content recognition engine 718 to categorize documents. Existing documents may also be re-evaluated to determine their category score.
One issue in applying the category discovery process is determining when a search for new categories should take place. In one embodiment, a search for new categories takes place when any of the following are true:
- There are a large number of documents categorized within a given category (e.g., more than a predetermined number or percentage of all categorized documents); or
- There are signs of a missing category (e.g., parent category having more than a predetermined number or percentage of documents relative to its subcategories); or
- There are a large number of web visitors accessing the documents with a given category (e.g., more than a predetermined number or percentage of visitors within a selected time period).
Also some branches of the category tree will likely exhibit more volatility over time (e.g., high technology). Hence, the historic volatility of that section of the category tree may also be a factor.
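The trigger conditions listed above can be expressed as a simple check. The specific thresholds below are assumptions; the text only calls for predetermined numbers or percentages.

```python
def needs_discovery(category, total_docs, visitors_in_period,
                    doc_frac=0.10, gap_frac=0.10, visitor_threshold=10_000):
    """Hypothetical triggers for running category discovery; the threshold
    values are assumptions for illustration."""
    many_docs = category["doc_count"] > doc_frac * total_docs
    # sign of a missing category: parent holds many more docs than its subcategories
    gap = category["doc_count"] - sum(category["subcategory_doc_counts"])
    missing_subcategory = gap > gap_frac * category["doc_count"]
    many_visitors = visitors_in_period > visitor_threshold
    return many_docs or missing_subcategory or many_visitors

# the Storage Device example: 500 parent docs vs. 430 in subcategories
storage = {"doc_count": 500, "subcategory_doc_counts": [430]}
print(needs_discovery(storage, total_docs=10_000, visitors_in_period=2_000))  # True
```

Here the 70-document gap between the parent and its subcategories trips the missing-category trigger, matching the DVD example in the text.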
2. Category Maintenance
Category discovery pertains to discovering new categories. Category maintenance pertains to maintaining and improving existing categories. As with category discovery, the process of category maintenance is preferably an advisory process, which suggests changes to the categories. It does not execute those changes unless confirmation is given; alternatively, the changes may be automatically implemented.
In particular, category maintenance provides suggestions for:
- Removing a category; and
- Altering the training documents related to a category.
Like category discovery, category maintenance involves statistical analysis. For example, a suggestion to remove a category might be made if there are very few web pages concerning this topic and there are very few people looking at such documents. Few documents and few viewers of them suggests that the category is a candidate for deletion.
For example, training documents are selected based on scoring; if the category scores are below a threshold, the training documents are reselected. Categories are removed when the keywords associated with the category are not scoring sufficiently high.
To create a category:
- Select the category;
- Select training documents;
- Score the training documents, to generate keywords;
- Apply human judgment as to whether the keywords are reflective of the category.
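The steps above can be sketched as a small workflow. The keyword scorer below is a toy word-frequency stand-in, not the content recognition engine, and the `approve` callback stands in for the human-judgment step; both are assumptions.

```python
from collections import Counter

def create_category(name, training_docs, approve):
    """Sketch of the category-creation steps listed above. The word-frequency
    scorer is a toy stand-in for the real scoring engine; `approve` models
    the human confirmation gate."""
    words = Counter(w for doc in training_docs for w in doc.lower().split())
    keywords = [w for w, _ in words.most_common(5)]   # score docs -> keywords
    if not approve(name, keywords):                   # human judgment gate
        return None
    return {"name": name, "keywords": keywords}

category = create_category("DVD", ["dvd storage media", "dvd drive"],
                           approve=lambda name, kw: True)
print(category["keywords"][0])  # 'dvd'
```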
IV. ProReach Systems With Alliances
In existing systems, companies that might benefit from the sharing of visitor profile information are reluctant to do so for several reasons. There is no infrastructure to facilitate this sharing, so sharing the information would require a huge initial outlay of software support. There are also ownership and use issues with respect to the profile information itself: which companies own the profile information, and who decides?
In the present invention, alliances are a means of facilitating the sharing of profile information between businesses, and of overcoming these barriers to sharing. By doing so, ProReach enables business-to-business sharing of data that is mutually beneficial to the business parties. In many cases, alliances are formed to service the businesses clustered around some vertical market. For example, there might be an alliance for pharmaceuticals, or there might be an alliance for oil-related businesses. Referring to
Of course, the same web visitor may visit multiple ProReach systems 100 that are members of the same alliance 700. When different local hubs send profiles for the same web visitor, the alliance 700 can take these separate local profiles and assemble them together into a single alliance profile for that web visitor. Using the GID, the alliance can easily compute which profiles belong to the same web visitor, and correctly merge the information in these profiles to avoid duplication.
In exchange for providing their local profile information to the alliance, the members of the alliance 700 get some degree of access to the alliance profiles. A ProReach system 100 can be a full access, limited access, or minimum access member of an alliance 800. The responsibilities and rewards of each membership level vary.
A full access member gets the maximum allowed access to vertical profiles. Full access members must also provide a maximum amount of information from their local profiles.
A limited access member gets a moderate degree of access. It must provide a moderate amount of information from its local profiles.
A minimum access member gets the least amount of access to vertical profiles. It is required to provide a minimal amount of profile information from its local profiles.
Participation in a vertical alliance allows each member controlled access to the jointly produced alliance profiles. Rewards and responsibilities are rationalized through the small number of membership levels. Memberships have to specify what categories of information they will provide, in what volume, and for what kind of web visitor. Hence this scheme provides a credible incentive for individual ProReach systems 100 to participate in various alliances.
ProReach systems 100 benefit from being members of an alliance by having access to the alliance profiles of the web visitors. Because the alliance profiles are aggregated over multiple web sites and ProReach systems 100, they provide a more accurate and comprehensive assessment of the interests of the web visitor. This in turn allows a given ProReach system 100 to more accurately target web content to the web visitor when the visitor visits the ProReach system 100 that is an alliance member.
V. Aggregation
In this section we describe in detail one embodiment of the process by which web events are aggregated by aggregation system 724 in conjunction with the aggregation queue 722. The aggregation queue 722 stores a set of web event records that have not yet been processed. These records are added to the queue 722 by the event processor 712 on the spoke 204, in the order in which they are received, that is, as they come in from one or more spokes. Overall, the queue will store the web events generated by many different users over some time period.
Referring to
Referring now to
The Daily Aggregation System comprises a Handler object 920, a Calculus object 922, a Parser object 924, and an Aggregator object 926. The aggregation queue 722 is also best understood as being an entry point to the Daily Aggregation System 724 (and was illustrated separately in
An Event Dispatcher 930 monitors all the activities within all the services of the Aggregation System, and fires events to whoever is interested in listening to them. The Event Dispatcher is not part of the services within the Aggregation System. It simply monitors and observes all the activities going on inside the Aggregation System, like a camera.
The Daily Query object 932 is part of the Daily Aggregation System and is responsible for all queries concerning daily aggregates. The Daily Query object handles all types of queries regarding interests of users, as described above, including defining interests, and identifying users having particular interests (on daily basis). Queries are processed by a query language interpreter 944, which uses a query language 946. The handler 920 exports the interface of the Daily Aggregation System, and manages the remaining components of the daily aggregation service during the daily aggregation process of packets of web events.
The Combiner 938 is part of the Dimensional Aggregation System and is responsible for doing dimensional aggregation as scheduled by a member of ProReach. More particularly, the Combiner 938 is responsible for the dimensional combining of the daily aggregated web events (or of the complexes) into higher level summaries (e.g., across times, users, groups, and categories), such as illustrated in Levels 1-4 of
The update object 940 is responsible for updating the Daily Aggregate whenever the Daily Aggregation System processes a packet of web events.
The database 720 stores the aggregated information from the web events in a number of different tables. These are as follows:
User Table: This table stores information identifying and describing each user. The fields of this table include: userID, last name, and first name. This table is indexed by userID.
UserID Contact Table: This table contains the following columns regarding the contact address: userID, address, address2, city, state_prov, zipcode, country, and e-mail.
Demographic Table: This table contains demographic information about users. It contains the following columns: userID, gender, age, education, job.
Members Table: This table contains information about the members of the ProReach System, that is, the people (or companies) that have an account with the ProReach System. This table contains the following columns: ID#, lastname, firstname, e-mail, login, password, URL, and account_type. The URL represents the domain name of the web site owned by the member. If the member does not own a web site, the URL column will be empty. The account_type represents the type of account the member has. According to this type, the member will have access to certain services, while other services might be denied.
Categories Table: This table stores all of the categories used by the content recognition engine 718. The table includes the fields: categoryID, category name, and parent categoryID. The table is indexed by categoryID, and secondary indices on name and parent. The parent categoryID is used to construct a hierarchy of categories, and is further used to aggregate low level category information into higher categories.
Daily Aggregate Table: Each row in this table stores daily aggregate objects for a specific user-category combination that occurred on a given day. This information corresponds to the data at Level 0 of the Aggregation Tree shown in FIG. 2. The fields include: userID, categoryID, weight, Deviation, Day, and Trend.
Deviation stores a standard deviation of the category weight over the given time period for the specified (by category ID) category.
Day stores a date or day number.
Trend stores a string or encoded value that describes the shape or slope of a curve of the user's interest of the time period. For example, and as will be further explained below, the trend may describe the curve as “increasing then decreasing”, or as “constant then increasing”.
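The Trend encoding can be illustrated by labeling each adjacent pair of weights and collapsing repeats. The exact encoding is not specified in the text, so this is an assumed scheme that merely reproduces the example labels.

```python
def trend_label(weights):
    """Label each adjacent pair of weights (increasing / decreasing / constant)
    and collapse consecutive repeats into a phrase describing the curve."""
    steps = []
    for a, b in zip(weights, weights[1:]):
        step = "increasing" if b > a else "decreasing" if b < a else "constant"
        if not steps or steps[-1] != step:
            steps.append(step)
    return " then ".join(steps)

print(trend_label([10, 20, 30, 25]))  # increasing then decreasing
print(trend_label([5, 5, 9]))         # constant then increasing
```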
User Group Table: This table identifies each of the user groups, along with their size and a description of what the user group is about, or what are the rules for defining membership. The fields include: user groupID, group name, description, and size. Size indicates the number of group members.
Criterion Table: This table stores the rules which may be used to define various membership tests for any of the user groups. It is used in conjunction with the User Group Criterion Table, below. The fields include:
Criterion ID: identifies the rule number.
CategoryID: identifies the category to which the criterion is applied.
Minimum: defines the minimum weight a user must have to satisfy the rule.
Maximum: defines the maximum weight that satisfies the rule.
Negation: specifies whether satisfying the rule results in group inclusion or exclusion.
Example: Assume that a rule had minimum=20 and maximum=80 and that negation=“No.” This membership rule means:
“for a user to satisfy the membership test, his/her weight for the category must be between 20 and 80”
If negation=Yes, then this means that the weight must not be between 20 and 80 in order to be a member of the group for this rule.
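A membership rule from the Criterion Table can thus be evaluated as follows; treating the bounds as inclusive is an assumption, since the text says only "between 20 and 80".

```python
def satisfies(weight, minimum, maximum, negation):
    """Evaluate one Criterion Table rule against a user's category weight.
    Inclusive bounds are an assumption for this sketch."""
    inside = minimum <= weight <= maximum
    return (not inside) if negation else inside

print(satisfies(50, 20, 80, negation=False))  # True: weight is between 20 and 80
print(satisfies(50, 20, 80, negation=True))   # False: negation inverts the test
```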
User Group Criterion Table: This table associates each user group with one or more of the membership rules defined in the criterion table. The fields include: user group ID and criterion ID.
Maintained Categories Table: This table contains the set of categories for which information (such as weight, user groups, profiles, and so forth) will be maintained. The fields include: Category ID, CurrentValue, Permanent, LowInterested, MediumInterested, HighInterested, and VeryHighInterested.
This table allows the system administrator or a marketer to choose which categories will be maintained and which categories will be disregarded. This choice can be either absolute or dynamic. In the absolute case, the marketer simply chooses a collection of categories once and for all, and information is maintained only about these categories. In the dynamic case, the marketer considers all categories on the same footing and gives each category a certain rank in the CurrentValue field. The CurrentValue rank can change dynamically according to how many users are interested in the category. If, for example, the CurrentValue drops under a certain level, then the category will be disregarded and removed from the table. If a new category acquires a degree of importance, then it can be added to the table. This is the dynamic case.
The marketer can even combine both the dynamic and absolute cases. For example, the marketer can choose a certain number of categories to be Permanent (a Boolean flag), and other categories to be dynamic rather than permanent. The permanent categories will always stay in the table, and information related to them (through user groups, profiles, etc.) will always be maintained. The dynamic categories are categories that can be removed from this table whenever their CurrentValue is under a certain level. The threshold is preferably defined by a configuration file for the aggregation system 724 or by a system administrator.
The other columns of the table, LowInterested, MediumInterested, HighInterested, and VeryHighInterested, contain the number of users whose interest in the category is low, medium, high, and very high, as determined by their weights. In one embodiment, these interest groupings are associated with weight quartiles: if the weight is between 1 and 24, the interest is low (hence the user is counted under "LowInterested"); if the weight is between 25 and 49, the interest is medium; if the weight is between 50 and 74, the interest is high; and if the weight is between 75 and 100, the interest is very high.
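The quartile mapping for these counter columns can be written directly from the ranges given in the text:

```python
def interest_bucket(weight):
    """Map a category weight (1-100) to the counter column it increments,
    using the quartile boundaries given in the text."""
    if weight <= 24:
        return "LowInterested"
    if weight <= 49:
        return "MediumInterested"
    if weight <= 74:
        return "HighInterested"
    return "VeryHighInterested"

print(interest_bucket(24), interest_bucket(75))  # LowInterested VeryHighInterested
```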
Maintained Users Table: This table lists all of the users for which profiles will be maintained. The fields include user ID, Rank, and HotCategoryID. The Rank field is a value that can change according to the importance of the user. If this value is under a certain level (e.g., below the 100th or 1000th rank), the user will be removed from the table and no profile will be maintained on this user. If, however, a new user becomes very important, then this user will be added to this table and a profile will be maintained for the user.
HotCategoryID identifies the category which has the highest category weight for this user.
Profile Table: This table describes each user's profile in terms of which user groups the user is a member. The fields include: user ID, user group ID, Member Since, Membership Ended, Current Member, and Last Update.
Member Since: identifies the date that the user first became a member of the user group. A user can be a member of many user groups, and this membership is also dynamic and changes over time. The profile table keeps a history record of user group membership. For every user group, the profile table indicates when the user first became a member (Member Since), whether he/she is still a member (Current Member), and when the membership ended (Membership Ended). From this history record of changes between different user groups, one can derive behaviors and patterns that can be used to predict user reactions in the future, and use this information for marketing purposes.
User-Category Complex Table: This table stores the data for the UC (User-Category) complexes 203 described for FIG. 2. The fields include: user ID, category ID, weight, deviation, weight against categories, weight against population, trend, from and to.
User ID and category ID define the respective user-category combination.
Weight: describes the average weight of the user's interest in the category specified by category ID.
Deviation: the standard deviation for this average.
Weight against categories: stores a measure of how important the specified category is for the user relative to other categories. In one embodiment, the value of WeightAgainstCategories is the percentage of the totaled category weights for the specified category. That is, WeightAgainstCategories for category j is equal to the weight of category j divided by the sum of all category weights, and then multiplied by 100 to create a percentage (though a raw decimal value may also be used).
Weight against population: stores a measure of how important the specified category is for the user relative to all other users. In one embodiment, the value of WeightAgainstPopulation is the percentage of the totaled category weights for the specified category relative to all other users. That is, WeightAgainstPopulation for category j and user k is equal to the weight of category j for user k divided by the sum of category weights for category j for all users, and then multiplied by 100 to create a percentage (though a raw decimal value may also be used).
Trend: describes the shape or slope of the user's interest in the category over the time period defined by From and To.
From and To: define the earliest and latest start time of web activity used to generate this complex.
User Complex Table: This table stores the contents of the U (User) complexes 205. The fields include user ID, weight, deviation, trend, from and to, and categories Count. Since a user complex summarizes the user's interest over many categories, Categories Count tracks the number of categories that interest the user. This number is also the number of children of the user complex object in the aggregation tree.
The Categories Count value is used in incremental updating of the weights. When a new user-category complex 207 is formed (i.e., a new child of a user-complex) with a new weight w, then the new weight of the User complex is incremented as follows:
new weight (UComplex)=([categoriesCount*old weight(UComplex)]+w)/(categoriesCount+1)
Category Complex Table: This table stores the data for the C (Category) complexes 205 described in FIG. 2. The fields include: category ID, Weight, Deviation, Trend, From, and To. As this complex summarizes over multiple users, the weight and deviation are with respect to all users over the time period defined by From and To.
Group Category Complex Table: This table stores the contents of the GC (Group Category) complexes 207. The fields include user group ID, category ID, weight, deviation, trend, from and to, and users Count. Users Count tracks the number of users in this group with respect to the selected category.
Group Complex Table: This table stores the contents of Group complexes 209, that is group summaries across all categories. The fields include user group ID, Weight, Deviation, Trend, From and To, and user Count.
The user count is used to update the weight for a group during incremental aggregation as follows:
new weight(GComplex)=((usersCount*old weight(GComplex))+w)/(usersCount+1)
where w is the weight of the new added member to the user group.
Total Complex Table: Finally, this table stores the overall Total complex 211. Every row corresponds to a total complex 211 for a defined period of time. The fields include: Start Date, LengthDays, LengthWeeks, LengthMonths, LengthYears, weight, deviation, trend, and usergroup Count. The various length fields define the time interval over which the aggregation is performed for a particular complex. The user group count contains the total number of user groups over which the total is aggregated. As with the other counts, this is used during incremental aggregation:
new weight(TComplex)=((usergroupCount*old weight(TComplex))+w)/(usergroupCount+1)
where w is the weight of a new user group complex 209 being added to the total complex.
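The incremental updates for the User, Group, and Total complexes all use the same running-mean formula; a direct transcription:

```python
def incremental_weight(old_weight, count, w):
    """Running-mean update used at each level of the aggregation tree:
    new = (count * old + w) / (count + 1), where count is categoriesCount,
    usersCount, or usergroupCount depending on the complex."""
    return (count * old_weight + w) / (count + 1)

# a user complex averaging 60 over 3 categories gains a new
# user-category child with weight 80:
print(incremental_weight(60, 3, 80))  # 65.0
```

This avoids re-reading all children of a complex when one new child is added; only the stored count and the old average are needed.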
We now describe the process of aggregating web events.
A. Aggregating Daily Web Events
The scheduler 934 is responsible for initiating various processes for aggregating web events into aggregated information for various periods of time. On at least a daily basis, the scheduler 934 invokes the handler 920 to aggregate web events from the aggregation queue 722 into daily aggregated events, as shown in Level 0 of FIG. 2. The handler 920 requests and receives a set of web events from the aggregation queue 722 for a given day. The queue 722 keeps track of which events have been retrieved, and provides, in response to a handler request, those events which have not been processed, assembling the events that correspond to the desired day.
The Aggregation System does the combining using two subsystems. A first subsystem is responsible for generating the daily aggregates from the web events (the web events are called user hits in the terminology of the Aggregation System). The second subsystem is responsible for generating the higher level of aggregation (aggregation over weeks, months, quarters, or years, across categories, across users, across user groups), that is the dimensional combining.
The Daily Aggregation Service operates as follows:
1. The Handler object takes a packet of web events from the Aggregation Queue.
2. The Handler sends the packet to the Calculus object to compute the weights of the web events and to scale them from 0 to 100.
Let's give a very simple example. Suppose that the packet contains only two web events A and B. Web event A contains only one category C1 with a score 200 and a duration 4 minutes. Web event B contains one category C2 with a score 300 and duration 2 minutes. First, the Calculus object computes the weight for the category C1 in the web event A:
weight(C1)=score(C1)*duration=200*4=800.
Since there are no other categories in web event A, we go to the next web event B to compute the weight for the category C2 (in the second web event B):
weight(C2)=score(C2)*duration=300*2=600
Since there are no other categories in web event B, we have finished computing the weights. Now we need to scale the numbers we have just computed, namely 800 and 600. Scaling consists of replacing 800 by:
[800/(800+600)]*100=57.14%
- and replacing 600 by:
[600/(800+600)]*100=42.86%
Now, if the userID in web event A and in web event B are the same, and category C1 and category C2 are also the same, then the Aggregator object will average the two weights:
(57.14%+42.86%)/2
and keep the average. If the two web events A and B have different userIDs or different categories, then we do not average, and we keep the two weights 57.14% and 42.86%.
In any case, inside the DailyAggregate object, every pair (userID, category) has only one number between 0 and 100 (a percentage number) that we call the weight of the pair (userID, category). If (within a single packet of web events) one (userID, category) pair has many percentage numbers (i.e. many weights), then we average them (this is done by the Aggregator object when the Parser gives the hash map to the Aggregator, as described next).
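The worked example above (weights of 800 and 600, scaled to percentages of the packet total, with duplicate (userID, category) pairs averaged) can be reproduced as follows; the packet record layout is an assumption.

```python
def scale_packet(events):
    """weight = score * duration for each event's category, scale all
    weights in the packet to percentages, then average any duplicate
    (userID, category) pairs, as the Aggregator does."""
    weights = [(e["user"], e["category"], e["score"] * e["duration"])
               for e in events]
    total = sum(w for _, _, w in weights)
    collected = {}
    for user, category, w in weights:
        collected.setdefault((user, category), []).append(100.0 * w / total)
    return {pair: sum(ws) / len(ws) for pair, ws in collected.items()}

packet = [{"user": "u1", "category": "C1", "score": 200, "duration": 4},
          {"user": "u2", "category": "C2", "score": 300, "duration": 2}]
for pair, w in scale_packet(packet).items():
    print(pair, round(w, 2))  # ('u1', 'C1') 57.14 then ('u2', 'C2') 42.86
```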
3. The Calculus object returns the packet (of web events, where the scores are now weights that are scaled) to the Handler object, and the Handler gives it to the Parser object. The Parser object transforms the data structure of the packet (from a vector to a hash map) and gives the hash map to the Aggregator object.
4. The Aggregator object computes certain quantities such as the mean, the deviation, the trend, and the time interval (from, to). The Aggregator object uses the services of the Calculus object to compute these quantities. After computing these quantities, the Aggregator object calls the update methods of the Update object. The Update object has many methods (that all start with the word update). Every method has its special purpose: for example, the method updateDailyAggregate( ) will update the values in the DailyAggregate object using incremental aggregation from the new hash map that was produced by the Aggregator. The method updateUCComplex( ) updates the values of all UCComplex objects using incremental aggregation from what has changed in level 0 of the aggregation tree, etc. That is, the dimensional aggregation is automatically done (incrementally) just after the Aggregator finishes processing one packet of web events.
So the Update object provides data access between the two systems, the Daily Aggregation System and the Dimensional Aggregation System. Whenever the Daily Aggregation System finishes processing a packet of web events, the Update object starts the Dimensional Aggregation (incrementally) based on what has changed at level 0 of the aggregation tree due to the processing of a new packet of web events.
There is another aspect of the dimensional aggregation that is scheduled. We have just said that the dimensional aggregation starts automatically (and incrementally) each time the daily aggregation system finishes processing a single packet of web events. Let us explain why we also use a scheduled dimensional aggregation:
When the ProReach System is running, it will have some members. A member is a person or a company that has an account with the central ProReach System. Let's say User A is a member. User A will have a login name, a password, and an ID number that is assigned to User A by the ProReach System (when he subscribed for the first time). When User A wants to use the services offered by the ProReach System, he first goes to the web page of the central ProReach System and logs in using his login name and password. Once he logs in, he can use the services. Here is a short list of the services that he can use:
- a. Issue queries (on the web page); the answers to the queries will show on the web page.
- Queries can be on profiles, user groups, on interest for some categories, etc.
- b. Create a user group and set the membership rules that must be satisfied for a user to be added to the user group User A has created. User A can schedule when to update the members of each user group, when to add new members, and how long he would like to keep each user group in the database.
- c. If User A owns a web site, he can have the web traffic of his web site sent to the central ProReach system, so that ProReach can do aggregation for the web events of his site and keep the results of the analysis in ProReach's database, ready for him to query at any time.
These are only examples of the services that can be offered by the ProReach System through the web. Each service has a certain fee. There are different types of accounts. Some accounts provide users with a certain set of services, and other accounts may provide users with a larger set of services. For example, consider the case of a person (or company) that owns a web site and uses the last service in the list above (that is, service c). Such a person has the right to choose when to do dimensional aggregation (for the web events of his/her web site) and for what time interval. Such a person can schedule these tasks from his/her account. This is what we call the scheduled dimensional aggregation tasks. This is different from the dimensional aggregation that is done automatically each time the Daily Aggregation System finishes processing a single packet of web events.
1. Transform Category Scores to Weights
The handler 920 first invokes the math package 922 to transform the category scores in each web event 900 (within a single packet of web events) into duration adjusted scores. This step normalizes the scores, and removes the need to separately store both the category scores and the duration of the event. Normalization further allows different web events to be compared as to their overall significance with respect to any category or user.
The Calculus object 922 operates as follows to support this function. As noted, each web event 900 includes a vector of categories and scores. The Calculus object 922 processes each web event 900 in turn (inside a packet of web events). For each category in the category vector of a single web event 900, the math package 922 scales each category score by the duration of the web event, and with respect to all other category scores for that web event. In one embodiment, the scaling process is as follows:
First, the Calculus object 922 adjusts each score by the duration of the web event and the type of the web event:
NewScore=Score*Duration*type
where NewScore is the adjusted category score (which we will call the weight after it is scaled from 0 to 100), Score is the original category score, Duration is the time between the start time and end time (or the duration value if directly provided; if it is not provided, the duration's default value is 1 minute), and type is a number that depends on the type of the web event. For example, if the web event is a transaction, the type would be higher than for just a clickthrough or a page view. The type of a page view is higher than the type of a clickthrough.
Next, the Calculus object 922 scales the adjusted scores relative to all of the adjusted scores:

Weight_i=100*NewScore_i/(NewScore_1+NewScore_2+ . . . +NewScore_n)

where n is the number of categories (all the categories inside the packet of web events; a packet of web events might contain 10 web events, and each web event might contain 20 categories, so the total number of categories might be 200), and i iterates over each category.
The result of this process is that each web event 900 now contains a list of weights in place of the original category scores. The weights succinctly describe the significance of the category with respect to all other categories for that particular web event; more particularly, the weights describe each category's score as a percentage of all of the time-adjusted scores.
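A minimal sketch of this scaling step follows; the class and method names are illustrative, not from the source.

```java
public class WeightScaler {
    // Adjust each category score by the event's duration and type factor
    // (NewScore = Score * Duration * type), then scale each adjusted score
    // to a weight from 0 to 100, i.e. its percentage of all adjusted scores.
    public static double[] toWeights(double[] scores, double duration, double type) {
        double[] adjusted = new double[scores.length];
        double total = 0.0;
        for (int i = 0; i < scores.length; i++) {
            adjusted[i] = scores[i] * duration * type;
            total += adjusted[i];
        }
        double[] weights = new double[scores.length];
        for (int i = 0; i < scores.length; i++) {
            weights[i] = 100.0 * adjusted[i] / total;
        }
        return weights;
    }
}
```

With scores {1, 3}, a duration of 2, and type 1, the adjusted scores are {2, 6}, so the resulting weights are {25, 75}.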
2. Restructure Web Event Records to Collate Category Weights by User
The handler 920 next calls the parser 924, and passes in the updated packet of web events 900. The parser 924 restructures the packet for input into the Aggregator object 926. More particularly, the parser 924 collates the category weights of a number of web event records 900 first by user, and then by category.
Referring to
Let us explain the task of the Parser object by a very simple example. Suppose that the packet of web events contains only 5 web events that we may call, for example, we1, we2, we3, we4, and we5 (we is an abbreviation for Web Event). Assume that the first, third and last web events (we1, we3, we5) all have the same userID (let's call this userID Jack). Assume further that a category C exists inside the three web events we1, we3, we5. We have three weights for the pair (Jack, C): w1, w3, w5. The first weight, w1, is the weight of the category C inside the first web event we1:
w1=weight(Jack, C) inside web event we1
The second weight w3 is the weight of the same category C for the same user Jack, but inside the third web event we3:
w3=weight(Jack, C) inside web event we3
The third weight w5 is the weight of the same category C for the same user Jack but inside the last web event we5 of the packet:
w5=weight(Jack, C) inside web event we5
The Parser object associates the sequence (w1, w3, w5) to the pair (Jack, C). The sequence (w1, w3, w5) is a sequence of weights for different instants of time, and it represents a curve (a function of time that measures the interest of the user Jack in the category C). This function is given only by this sequence (w1, w3, w5), and is thus a discrete function. Ideally, we would like to have a continuous function, because a continuous function can show us clearly what the shape of the graph is. If we know the shape of this graph (as a curve), then we know how the interest of Jack in the category C is changing with time. Since the sequence (w1, w3, w5) represents a discrete function and not a continuous function, we apply the rules of Probability theory to this discrete function in order to get some information about it.
The first thing we do with this discrete function is to compute what in Probability theory is called the expectation of the random variable. In our case, this expectation is simply the average of the weights in the sequence (w1, w3, w5). This average is called the mean, and it is computed by the Aggregator object (with the help of the Calculus object). The second thing the Aggregator does is to compute the “error”, or what Probability theory calls the variance of the random variable. This “error” is called the deviation. The third thing that the Aggregator object does is to determine roughly the shape of the graph of the discrete function represented by the values (w1, w3, w5). Is it the shape of an increasing curve, a decreasing curve, or some combination of the two? The shape of this curve is called the trend. Once this is done, the Aggregator object associates the data (mean, deviation, trend) to the pair (Jack, C) in some data structure (such as a hash map or a hash table). The Aggregator does all this for every pair (user, category).
When the Aggregator finishes the processing, the result (which is a hash map, hash table, or the like) forms an object that we call DailyAggregate. Therefore, a Daily Aggregate is an object that contains many pairs (user, category), and for every pair (user, category) there is associated data of the form (mean, deviation, trend). There is also a time stamp, which is the time interval that was covered by the packet of web events.
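The mean and deviation computed for each (user, category) weight sequence can be sketched as follows; the class and method names are illustrative.

```java
public class AggregateStats {
    // The expectation of a weight sequence such as (w1, w3, w5) for (Jack, C):
    // simply the average of the weights.
    public static double mean(double[] weights) {
        double sum = 0.0;
        for (double w : weights) sum += w;
        return sum / weights.length;
    }

    // The "error" (deviation): the square root of the variance of the
    // weight sequence around its mean.
    public static double deviation(double[] weights) {
        double m = mean(weights);
        double var = 0.0;
        for (double w : weights) var += (w - m) * (w - m);
        return Math.sqrt(var / weights.length);
    }
}
```

For example, the sequence (10, 20, 30) has mean 20, and a constant sequence has deviation 0.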
In conclusion, the Daily Aggregation System processes a single packet of web events, and produces a result object that we call DailyAggregate.
When the Daily Aggregation System finishes processing a packet of web events (by producing a DailyAggregate object), it goes again to the Aggregation Queue to pick up another packet of web events. The Daily Aggregation System keeps processing web events from the Aggregation Queue by packets.
Now assume that we start the Daily Aggregation Service for the first time. The Daily Aggregation System goes to the Aggregation Queue and picks up the first packet of web events (packet1). After processing packet1, it produces an object (called a daily aggregate, or just an aggregate for short). Let us call this aggregate agg1. Now the Daily Aggregation System goes again to the Aggregation Queue, takes the second packet of web events (packet2), and processes it. After processing packet2, it produces a second aggregate, that we can call agg2, for example. This aggregate agg2 is merged with agg1 to form only one aggregate object that we can call agg12, for example. After the fusion, the aggregates agg1 and agg2 both cease to exist, and only the aggregate agg12 exists in the database. This fusion between agg1 and agg2 is an incremental aggregation that is carried out by the Update object (through its updateDailyAggregate( ) method). The new aggregate object agg12 represents the outcome of processing a single packet of web events that is the union of the first two packets, packet1 and packet2.
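One plausible form of this fusion, for the mean of a single (user, category) pair, weights each aggregate's mean by the number of web events it summarizes. The exact formula used by updateDailyAggregate( ) is not given in the text, so the following is an assumption for illustration only.

```java
public class IncrementalUpdate {
    // Merge the means of two aggregates (e.g. agg1 and agg2 into agg12) for
    // the same (user, category) pair; n1 and n2 are the event counts behind
    // each mean. This weighted average is an assumed fusion formula.
    public static double mergedMean(double mean1, int n1, double mean2, int n2) {
        return (mean1 * n1 + mean2 * n2) / (n1 + n2);
    }
}
```

For example, merging a mean of 10 over 2 events with a mean of 40 over 1 event yields (20 + 40) / 3 = 20.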
Daily Aggregate objects (or aggregates for short) are the data at level 0 of the Aggregation Tree illustrated in FIG. 2. Each day is represented by a single Daily Aggregate object.
The result is that for a given user associated with a number of web event records (as will typically occur during a visit to a web site, perhaps generating 20 to 100 or more web events), the category weights from the many different records are collected and collated in a single category hash table 1100, so that for each category, all of the weights and start times are packaged together. This allows all of the relevant information about the user's web activity during the day the web event records were collected to be easily accessed from a single data source.
3. Create Category Interest Time Model Information
The result of the prior step is one user-category table 1100 for each user that appeared on the web server 102 on the day being processed. With each of these user-category hash tables 1100, the handler 920 next calls the aggregation engine 926. The aggregation engine 926 processes these tables into category interest time model information for each user. The summarized information describes the particular user's interests in the various categories over the day for the collected web event records. The aggregation engine 926 operates as follows on each received user-category hash table:
First, for each category table 1100 the aggregation engine 926 sorts the category's weight list 1102 by the start times. The aggregation engine 926 preferably does this by calling a sorting routine in the math package 922. The result is a set of data points, essentially a curve, which describes the user's level of interest in the category over the time period from the earliest start time to the latest start time.
The goal at this next stage is then to capture each category interest curve 1200 mathematically, and eliminate the need to store the underlying weight and time data of the weight list. More particularly, for each category, the aggregation engine 926 determines the expected value of the category interest curve 1200 over the time period (e.g., one day). In one embodiment, the aggregation engine 926 determines the mean weight and the standard deviation of the weights in the category for the time period. The mean weight is simply the total of all weights in the weight list 1102 for the category divided by the number of weights, which will be the number of web events for this user during the time period. The standard deviation is computed normally. Again, these computations are preferably performed by the math package 922, as requested by the aggregation engine 926.
The aggregation engine 926 then creates a trend description for the category interest curve. The trend description describes the changes in the user's level of interest in the category over the time period represented by the curve. Preferably, this trend description is a string description (or its coded equivalent).
To obtain this trend in one embodiment, the aggregation engine 926 first takes the difference between the weight at the earliest start time and the mean weight. This describes whether the curve is increasing, decreasing, or constant relative to the earliest start time. Next, the aggregation engine 926 takes the difference between the mean weight and the weight at the latest start time, and again determines whether the curve is decreasing, increasing, or constant. Thus, there are nine possible trends:
- 1. Increasing, decreasing
- 2. Increasing, constant
- 3. Increasing, increasing
- 4. Constant, decreasing
- 5. Constant, constant
- 6. Constant, increasing
- 7. Decreasing, decreasing
- 8. Decreasing, constant
- 9. Decreasing, increasing.
The aggregation engine 926 determines the appropriate time trend, and stores information for this time trend for the category. The stored information may be the strings themselves (“increasing,” “constant,” and “decreasing”), or code value for these (e.g., 1=increasing, and so forth). Obviously, more than three times/two segments can be selected to result in more complex time trend descriptions.
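The two-segment classification described above can be sketched as follows; the class and method names are illustrative.

```java
public class TrendClassifier {
    // Direction of one segment of the curve, from value a to value b.
    private static String segment(double a, double b) {
        if (b > a) return "increasing";
        if (b < a) return "decreasing";
        return "constant";
    }

    // Two-segment trend: earliest weight vs. the mean, then the mean vs.
    // the latest weight, yielding one of the nine trends listed above.
    public static String trend(double earliest, double mean, double latest) {
        return segment(earliest, mean) + ", " + segment(mean, latest);
    }
}
```

For example, a curve that rises to the mean and then falls below it is classified "increasing, decreasing".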
The aggregation engine 926 may apply other methods to determine the time trend of the category interest curve. In another embodiment, the aggregation engine 926 selects a number of sample times in the interval, including a point at or near the earliest start time, a point at or near the latest start time, and a number of times between these two times. Then, beginning with the first selected time, the aggregation engine 926 determines whether the curve is increasing, decreasing, or constant up to the next selected time, and assigns a string or code equivalent to that portion of the curve. For example, in one embodiment, three times are selected: the earliest start time, the middle start time, and the last start time. With these three times, there are two curve segments, and the aggregation engine 926 determines whether the curve is increasing, decreasing, or constant in each segment.
In yet another embodiment, the aggregation engine 926 determines the time trend by identifying the times at which the slope of the category interest curve changes from positive to negative, and storing both the start time and the appropriate descriptive information about the time period being described.
With the time trend information, the aggregation engine 926 now has a complete description of the user's category interest for the given day. More specifically, it can store the following category time pattern model for subsequent use:
{User ID, Category ID, Mean Category Weight, Category Weight Standard Deviation, From, To, Trend}
where “From” is the earliest start time, and “To” is the latest start time in the sorted weight list 1102, and Trend is the description of the curve changes (either string or encoded).
The underlying category weight information from the raw web events can now be deleted, and the category time pattern model stored in the database 720 in the User-Category table. This process is repeated for each category weight list in the user-category hash table 1100.
B. Dimensional Combining.
The combiner 938 is the component that is responsible for combining the daily aggregated information into the summarized complex information of the various complexes of the Aggregation Tree. The dimensional aggregation tasks carried out by the Combiner object correspond to scheduled tasks made by some members. The automatic (incremental) dimensional aggregation that occurs all the time is carried out by the Update object.
Referring again to
Generally, each of the aggregate complexes in
In one embodiment, the aggregation function is the average weight value. Other embodiments use different aggregation functions, and preferably the aggregation function can be selected on demand. Thus, for clarity of explanation, we will refer to the aggregation function generally and provide specific examples using an average weight aggregation function.
In Level 1, there are two types of aggregated data: User-Category complexes 203, and Category complexes 205. A Category complex 205 is computed by an aggregation function of the category weight for all users and a particular category over the selected time period, such as a week, month, quarter, etc. The category ID of the desired category, and the start and end dates, are passed into the combiner 938. The combiner 938 retrieves the appropriate category interest time models from the database 720, by providing the category ID and time period, and obtaining the matching records from the User-Category table. The category weight means for the retrieved records are then processed by the aggregation function to produce the final value for the complex. If the aggregation function is the average function, the mean weight is the sum of the weights taken over the number of days being aggregated divided by this number of days. The resulting aggregated weight value is stored in a new record in the Category Complex table, along with the category ID, deviation, trend, and From and To dates. For this complex, the trend is determined by whether the aggregated weight value has increased, decreased, or is constant relative to a prior value.
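For the average aggregation function described here, the complex's weight and trend can be sketched as follows; the class name is illustrative.

```java
public class CategoryComplex {
    // Average aggregation function: the sum of the daily mean weights over
    // the days being aggregated, divided by the number of days.
    public static double aggregateWeight(double[] dailyMeanWeights) {
        double sum = 0.0;
        for (double w : dailyMeanWeights) sum += w;
        return sum / dailyMeanWeights.length;
    }

    // The complex's trend: whether the aggregated weight has increased,
    // decreased, or stayed constant relative to a prior value.
    public static String trend(double aggregated, double prior) {
        if (aggregated > prior) return "increasing";
        if (aggregated < prior) return "decreasing";
        return "constant";
    }
}
```

For example, daily mean weights of 10, 20, and 30 aggregate to 20; if the prior value was lower, the trend is "increasing".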
For the User-Category complex 203, the process is similar, but restricted to a particular user for the given time period. The result is stored in the User Category Complex table.
In Level 2, there are the Group-Category complexes 205 and the User complexes 207. To obtain a Group-Category complex 205, the combiner 938 retrieves from the User-Category complex table all of the User-Category complexes 205 for a specified user group. User group membership information is stored in the database in the profile table, which identifies for each user ID the groups that the user is a member of. Given the group ID, then, the combiner 938 can identify the users in this group, and then retrieve the User-Category complexes 205 for each of these users. The weights of the retrieved complexes are then aggregated by the appropriate aggregation function, and the result stored in the Group-Category Complex table.
To create a User Complex 207 for a specific user, the combiner 938 retrieves the User Category complexes from the User-Category Complex table given the user's userID and a desired From and To interval, and aggregates their weights. The result is stored in the User Complex table.
In Level 3 there are Group Complexes 209. To create a Group complex the combiner 938 retrieves all of the User complexes 207 from the User-Complex table, using the user group ID for the desired user group, and a desired From and To interval. The result is stored in the Group Complex table. Preferably, when retrieving user complexes 207 for a given group, the combiner 938 queries the User Group Criterion table and verifies that each user is currently a member of the desired user group, and includes only those users who are members at the time the aggregation occurs.
Finally, the Total Complex 211 is shown in Level 4 of FIG. 2. To create this complex, the combiner 938 retrieves all available Group Complexes 209 for a specified time interval from the Group Complex table and aggregates their weights. The result is stored in the Total Complex table.
As noted, in one embodiment the aggregation function for weight is an average function, and thus, for any desired complex, the weight value is the average of the weight values of complexes that contribute to the desired complex.
More particularly, the aggregation service stores a configuration file which defines, for each type of complex, the aggregation function to be used for that complex. In addition, the configuration file stores for each complex a lifetime value that defines how long the complex is to be stored in the database before being deleted.
C. User Group System
The user group manager 936 is responsible for defining and maintaining the user groups, and for responding to queries about the membership of users in particular groups. As explained above, each user group has one or more membership rules, which are stored in the criterion table. The user group manager 936 provides the following functions:
Get List of User Groups: returns the list of user groups from the user group table.
Get Group Size(User Group): returns the size of the specified user group.
Get Which Group User Belongs To(User): returns a list of groups of which the specified user is a member.
Get Group Description(User Group): returns the description of the specified user group from the user group table.
Get Users of Group(User Group): returns the list of users currently members of this user group by reviewing the profile table.
Add User to Group(User, User Group): tests whether the specified user meets the membership rule(s) for the specified group; if so the user is added to the group in the profile table.
This function is also executed whenever a new user is added to the user table; the user group manager 936 tests the new user against each of the existing defined groups in the user group criterion table, and updates the profile table for each user group for which the user satisfies the membership rules.
Remove User from Group(User, User Group): removes the specified user from the specified user group in the profile table.
Define Membership Rule(Category, Minimum, Maximum, Negation): adds a new membership rule to the criterion table. For example, to define a category of “Auto Racing Enthusiasts” a criterion may be defined as:
AUTO_RACING_GROUP=user.category(auto racing)> 80
meaning that the weight in an “Auto Racing” category for a particular user is greater than 80.
Thus the call would pass in the “auto racing” category, minimum=80, no maximum, and negation=No.
Delete Rule(Criterion): Removes the specified membership rule from the criterion table.
Define Rule for Group(User Group, Criterion): Adds the specified criterion to the specified user group in the user group criterion table.
Delete Rule from Group(User Group, Criterion): Removes the specified criterion from the specified group in the user group criterion table.
Any of the foregoing functions can be scheduled with the scheduler 934 to be performed on a periodic basis for automatically updating the users and the user group tables.
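A sketch of how a membership rule of the form (Category, Minimum, Maximum, Negation) might be tested against a user's category weight follows; the exact range semantics (strictly above the minimum, at most the maximum) are an assumption.

```java
public class MembershipRule {
    public final String category;
    public final double minimum;
    public final double maximum;
    public final boolean negation;

    public MembershipRule(String category, double minimum, double maximum, boolean negation) {
        this.category = category;
        this.minimum = minimum;
        this.maximum = maximum;
        this.negation = negation;
    }

    // True if the user's weight for this rule's category satisfies the
    // criterion: strictly above the minimum, at most the maximum, and
    // inverted when negation is set.
    public boolean test(double weight) {
        boolean inRange = weight > minimum && weight <= maximum;
        return negation ? !inRange : inRange;
    }
}
```

For the AUTO_RACING_GROUP example, a rule with minimum 80 accepts a user whose "auto racing" weight is 85 and rejects one whose weight is 50.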
D. Daily Aggregation
The DailyQuery object 932 (part of the Daily Aggregation System) is responsible for responding to queries about user interest levels as expressed in the various category weights for the daily aggregates. Each day is represented by a single DailyAggregate object. The DailyQuery object allows one to acquire all kinds of information about these daily aggregate objects, such as which day they correspond to, which users appear there, which of those users are the most active, which categories appear there, and which of those categories are the most important (the category or categories with the highest weights for a user).
E. Affinity Group Manager
The affinity group manager 936 is responsible for identifying groups of users that are related to each other. An affinity group is defined by criteria related to interests and other customer profile information (such as from legacy databases) combined by Boolean logic. For example, using age, income, and education demographics, one could define an affinity group “yuppie sportsters” by the following membership qualification:
age<=35 AND (income> 60,000 OR education>=undergraduate) AND interest(sports)> 1.5
In this case, legacy data would be combined with relative interest ProReach data. The affinity group “yuppie sportsters” could then be queried in the same way that regular user groups can be queried. In this case, the calculation of group membership is an expensive operation, so an affinity group has a recalculateMembership( ) command and keeps track of its last recalculation.
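The “yuppie sportsters” qualification above can be sketched as a Boolean predicate; the class name and the numeric education encoding (2 = undergraduate) are assumptions for illustration.

```java
public class YuppieSportsters {
    // age <= 35 AND (income > 60,000 OR education >= undergraduate)
    //            AND interest(sports) > 1.5
    public static boolean qualifies(int age, double income, int educationLevel, double sportsInterest) {
        return age <= 35
                && (income > 60000 || educationLevel >= 2)
                && sportsInterest > 1.5;
    }
}
```

A 30-year-old with income 70,000 and sports interest 2.0 qualifies; a 40-year-old with the same income and interest does not.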
Once an affinity group is created, the event records for individual users aggregate into the affinity group, but the affinity group itself does not aggregate into other groups or complexes. Thus, it becomes more usable after having remained defined during several aggregation cycles, but administrators are free to remove it.
The affinity group manager 936 provides the following functions:
1. Automatic creation of affinity-groups, as well as marketer-custom-made affinity-groups
2. Automatic adding/removing users to/from the affinity-groups.
3. Methods for inquiring and manipulating the affinity-groups. These include:
getListOfAffinityGroups: returns the list of all the affinity-groups.
howManyUsersIn(AffinityGroup group): returns the number of users in the specified affinity-group.
toWhichAffinityGroupsBelong(String user): returns a list of all the affinity-groups to which the specified user belongs.
getUsersIn(AffinityGroup group): returns a list of all the users in the specified affinity-group.
add(AffinityGroup, user): This adds the specified user to the specified affinity-group.
remove(AffinityGroup, user): This removes the specified user from the specified affinity-group.
F. The Update object
The update object 940 is responsible for incrementally updating the daily aggregate and for updating the complexes of the Aggregation Tree as described with respect to FIG. 2. Incremental updating occurs each time the Daily Aggregation System finishes processing a single packet of web events. The incremental update is applied to each complex that is affected, starting with Level 1 complexes, and continuing up the aggregation tree. The formulas for incremental updating are specified above with respect to the various complex tables. This incremental update is done automatically and all the time (each time the daily aggregation system finishes processing a packet). This is different from the task carried out by the Combiner object. The Combiner object does dimensional aggregation upon the request of a member (for certain specific objects). The Update object is part of the Dimensional Aggregation System. The Update object is a door between the Daily Aggregation System and the Dimensional Aggregation System.
G. Scheduler
The scheduler 934 is responsible for scheduling and executing various tasks related to the maintenance of the database 720. The scheduler 934 can execute any of the following tasks on a user-defined periodic basis:
1. For any given category, aggregation over users and over a time interval (the category being fixed during the aggregation). The result of this aggregation is a category complex.
2. For any given user and category, aggregation over a time interval (the user and the category are both being fixed during the aggregation). The result of this aggregation is a user-category complex.
3. For any given category and user group, aggregation over users in the given user group and over a time interval (the category and the user group are both being fixed during the aggregation). The result of this aggregation is a group category complex.
4. For any given user, aggregation over all categories and over a time interval (the user being fixed during the aggregation). The result of this aggregation is a user complex.
5. For any given user group, aggregation over the users in the given user group, over all categories, and over a time interval (the user group being fixed during the aggregation). The result of this aggregation is a group complex.
6. Aggregation over-all user groups, over all categories, and over a time interval.
The result of this aggregation is a total complex 211, representing the total aggregation of all the web activity.
7. Deletion of the daily results.
8. Deletion of category complex objects.
9. Deletion of user category complex objects.
10. Deletion of group category complex objects.
11. Deletion of user complex objects.
12. Deletion of group complex objects.
13. Deletion of the total complex object.
14. The frequency for picking up web event records from the aggregation queue. The frequency can be scheduled, so that the handler picks up an event record every 15 minutes, or every hour, or every minute, and so forth.
Each of these tasks is identified by its corresponding task number within the scheduler 934. To schedule a task, the scheduler provides the following function:
Schedule(task, startTime, maxDuration, frequency, timeInterval): Task identifies one of the above tasks by number. StartTime identifies a time at which the task is executed. MaxDuration specifies the maximum amount of time for the task to take to complete. If the task is not completed after the maximum duration has elapsed, then the process is stopped. TimeInterval is a time interval over which the task should execute, such as day, week, month, etc. Frequency is the number of times the task should run in the defined time interval.
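One way to read the (frequency, timeInterval) pair is as a fixed period between consecutive runs; this interpretation, and the class name, are assumptions for illustration.

```java
public class SchedulePeriod {
    // If a task should run `frequency` times within a time interval of
    // `timeIntervalMillis` milliseconds, the period between consecutive
    // runs is the interval divided by the frequency.
    public static long periodMillis(long timeIntervalMillis, int frequency) {
        return timeIntervalMillis / frequency;
    }
}
```

For example, a task run 24 times per day (86,400,000 ms) executes once per hour (3,600,000 ms).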
H. Event Dispatcher
The event dispatcher 930 provides for event driven management of the aggregation service, and particularly for management of the various complex tables, user tables, and category tables in the database. The event dispatcher 930 can dispatch the following events:
1. CComplexBeginEvent: This event is fired by the event dispatcher 930 at the start of the aggregation over users and over many days (in order to produce a CComplex object).
2. CComplexEndEvent: This event is fired by the event dispatcher 930 at the end of the aggregation over users (i.e. after a CComplex object is constructed).
3. UCComplexBeginEvent: This event is fired by the event dispatcher 930 at the start of the aggregation over daily results (in order to produce a UCComplex object).
4. UCComplexEndEvent: This event is fired by the event dispatcher 930 at the end of the aggregation over daily results (i.e. after a UCComplex object is constructed).
5. GCComplexBeginEvent: This event is fired by the event dispatcher 930 at the start of the aggregation over UCComplex objects (in order to produce a GCComplex object).
6. GCComplexEndEvent: This event is fired by the event dispatcher 930 at the end of the aggregation over UCComplex objects (i.e. after a GCComplex object is constructed).
7. UComplexBeginEvent: This event is fired by the event dispatcher 930 at the start of the aggregation over UCComplex objects (in order to produce a UComplex object).
8. UComplexEndEvent: This event is fired by the event dispatcher 930 at the end of the aggregation over UCComplex objects (i.e. after a UComplex object is constructed).
9. GComplexBeginEvent: This event is fired by the event dispatcher 930 at the start of the aggregation over UComplex objects (in order to produce a GComplex object).
10. GComplexEndEvent: This event is fired by the event dispatcher 930 at the end of the aggregation over UComplex objects (i.e. after a GComplex object is constructed).
11. TComplexBeginEvent: This event is fired by the event dispatcher 930 at the start of the aggregation over GComplex objects (in order to produce the TComplex object).
12. TComplexEndEvent: This event is fired by the event dispatcher 930 at the end of the aggregation over GComplex objects (i.e. after the TComplex object is constructed).
13. UserGroupAddEvent: This event is fired by the event dispatcher 930 whenever a user becomes a member of a user group (i.e. whenever a user is added to a user group).
14. UserGroupRemoveEvent: This event is fired by the event dispatcher 930 whenever a member of a user group is removed from the user group.
15. UserGroupCreatedEvent: This event is fired by the event dispatcher 930 whenever a new user group is created.
16. UserGroupDeletedEvent: This event is fired by the event dispatcher 930 whenever a user group is deleted.
17. UserGroupTestBeginEvent: This event is fired by the event dispatcher 930 whenever the user group manager starts testing whether the members of a user group still satisfy the user group membership test or not.
18. UserGroupTestEndEvent: This event is fired by the event dispatcher 930 whenever the com.fujitsu.proreach.agg.UserGroupManager class finishes the user group membership testing.
19. CComplexDeletedEvent: This event is fired by the event dispatcher 930 whenever a CComplex object is deleted.
20. UCComplexDeletedEvent: This event is fired by the event dispatcher 930 whenever a UCComplex object is deleted.
21. GCComplexDeletedEvent: This event is fired by the event dispatcher 930 whenever a GCComplex object is deleted.
22. UComplexDeletedEvent: This event is fired by the event dispatcher 930 whenever a UComplex object is deleted.
23. GComplexDeletedEvent: This event is fired by the event dispatcher 930 whenever a GComplex object is deleted.
24. TComplexDeletedEvent: This event is fired by the event dispatcher 930 whenever the TComplex object is deleted.
25. DailyResultCreatedEvent: This event is fired by the event dispatcher 930 whenever a daily result is created.
26. DailyResultsDeletedEvent: This event object is fired by the event dispatcher 930 whenever the daily results are deleted.
The event dispatcher 930 can dispatch these events to any of the other components of the aggregation service to allow such components to appropriately respond to the event. For example, the update manager may respond to a DailyResultCreatedEvent to perform an incremental update of the appropriate complexes.
I. Profile System
The Profile System 955 provides an object called Profile Query that is responsible for all queries about profiles. The service also includes a Profile Manager object that is responsible for the management of profiles. Such management includes, for example, profile sharing: say that a member A maintains profiles for his/her web site within the central ProReach System database. Another member B would like to have some of these profiles (more specifically, those profiles that show a high interest in electronics). Member B does not own these profiles, but nevertheless, member B would like to receive some of them. Handling such requests and keeping records of what profiles were shared is all done by the Profile Manager object.
More particularly, the Profile Query is responsible for handling queries about user profiles. The Profile Query receives a query specifying a user's ID, retrieves from the profile table the user's group membership information, and retrieves from the user-category table the user's interest information in the categories (e.g., weights, deviations, or trend information). The Query object constructs from the retrieved information a user profile. The user profile includes at least one of the following items: a current user group list of the user groups of which the user is currently a member; a group change history list, which identifies the groups of which the user is a new member over some time period, and from which groups the user has been dropped as a member; and a list of the top N categories of interest, based on the category weight, such as the top 10 categories of interest. The category list may be further refined to include only categories which show an increasing trend, so as to predict the user's future interests for marketing purposes.
In a preferred embodiment, two types of user profiles are maintained, local and global. A local user profile is maintained at each ProReach enabled web site 100 using web event information that is gathered at the site from user visits there. The global user profiles are maintained by the host system 103 or the global server 112, and are created from the local user profiles for each user.
J. AQL System
The UserGroup Manager 936, the Daily Query 932, and the Profiler 726 objects need a mechanism by which system administrators (and various members of the ProReach System) can form queries about users' interests, categories, groups and so forth. In one embodiment this mechanism is provided by a flexible query language called Aggregation Query Language (AQL), which is processed by the AQL system 944 to form query objects that are executed by the various managers.
1. AQL Language
AQL is a predicate query language, which means that it is a language based mainly on predicates alone; there are no data type declarations. Every predicate has a certain number of arguments (its arity) and fixed data types that its arguments are supposed to have. When a predicate is used in a query, the data types of the predicate's arguments are implicitly assumed, so there is no need to declare the data types of the variables. AQL has the following features:
1. A rich collection of primitive data types and primitive predicates.
2. The possibility of constructing new predicates from old or primitive ones, and very simple syntax for doing it.
3. A very simple syntax for constructing queries, using predicates.
4. A simple interface between a marketer and the predicates, so that the marketer does not need to learn the query language.
There are two kinds of statements in AQL (Aggregation Query Language):
1. A Query statement (a statement which inquires some information).
2. A Predicate definition statement (a statement which constructs a new predicate).
A query statement has the following form:
\query x, y, . . . , z [P(x, y, . . . , z)]
This statement means that we are interested in all tuples (x, y, . . . , z) such that the sentence P(x, y, . . . , z) is true. For example, if P(x) means “the user x is very interested in Fishing”, then the query \query x [P(x)] will return all the users that are very interested in “Fishing”.
More formally, the syntax of a query statement always starts with the keyword \query followed by an identifier (possibly many identifiers separated by commas) and then a predicate.
A predicate can be either a composite predicate or a built-in predicate. A built-in predicate is a predicate that is already provided by the aggregation service. A composite predicate is a predicate that one can build by combining built-in predicates with logical connectors (conjunction, disjunction, negation, etc.). One can also build a composite predicate by combining other composite predicates. In conclusion, a composite predicate is a predicate that is built by the marketer, while a built-in predicate is a predicate that already exists and is ready to use (already provided by the aggregation service). When we use the word predicate, this can be either a built-in predicate or a composite predicate. The syntax for writing predicates is as follows:
A composite predicate can either be a conjunction, a disjunction, or a negation as follows:
If the predicates are separated by commas, then it is a conjunction. For example, the following sentence represents the conjunction of three predicates P, Q and R: [P, Q, R]
If the predicates are separated by a colon, then it is a disjunction. The following sentence represents the disjunction of three predicates P, Q, and R:
[P:Q:R]
If the predicate is enclosed by curly braces, then it is a negation. The following sentence represents the negation of the predicate P: {P}
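The three connectors can be pictured as combinators over boolean-valued predicate functions. The following is a hypothetical Python sketch; the names conj, disj, and neg and the sample predicates are illustrative only and not part of AQL:

```python
# Minimal sketch of AQL's three logical connectors as Python combinators.
# Predicates are modeled as functions returning True/False.

def conj(*preds):
    """[P, Q, R] -- true when every predicate is true."""
    return lambda *args: all(p(*args) for p in preds)

def disj(*preds):
    """[P:Q:R] -- true when at least one predicate is true."""
    return lambda *args: any(p(*args) for p in preds)

def neg(pred):
    """{P} -- true when the predicate is false."""
    return lambda *args: not pred(*args)

# Invented example predicates: P(x) = "x is even", Q(x) = "x > 10"
P = lambda x: x % 2 == 0
Q = lambda x: x > 10
K = conj(P, Q)   # the composite predicate [P, Q]
```

Here K(12) holds because 12 is both even and greater than 10, while K(8) does not.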
One can build a new predicate from existing (i.e. primitive or already defined) predicates, by composing two predicates or more via these logical connectors for conjunction, disjunction, and negation. To define a new predicate, one uses a predicate definition statement, as follows:
\predicate identifier predicate
Let's give an example: Suppose we have a predicate P(x) that means “the category x interests more than half of the population”, and a predicate Q(y) that means “The user y has interest in medicine” and a third predicate R(y, z) that means “the user y is strongly interested in the category z alone”. We can build a new predicate K(y, x) as follows:
\predicate K(y, x) [P(x), Q(y), R(y, x)]
Now we can use the new predicate K(y, x) to make a query like this:
\query y, x [K(y, x)]
This query will return all users y and categories x such that the user y has interest in Medicine and is strongly interested in the category x alone and the category x interests more than half of the population.
In AQL, we can express a quantified statement (i.e. a statement with a logical quantifier). Suppose we have a predicate P(x, y) that means “the user x has a medium interest in the category y”. And we would like to express a sentence such as: “There exists a category for which user x has a medium interest”. In Predicate calculus, this is done via the existential quantifier:
∃y P(x, y)
In AQL this can be written as follows: P(x,X)
The upper-case letter X always means that it is a quantified variable. If we make the following query:
\query z [P(z,X)]
it will return all users z having a medium interest in some category.
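One way to picture how such an existentially quantified query might be evaluated over finite domains is the following sketch; the user names, category names, and interest data are invented for illustration:

```python
# Sketch of evaluating \query z [P(z, X)] over finite domains, where
# P(x, y) means "user x has a medium interest in category y" and the
# upper-case X is existentially quantified over categories.

users = {"alice", "bob", "carol"}
categories = {"fishing", "golf", "chess"}

# Sample data: the pairs for which P holds.
medium_interest = {("alice", "fishing"), ("bob", "golf"), ("bob", "chess")}

def P(x, y):
    return (x, y) in medium_interest

# The query result: all users having a medium interest in SOME category.
result = {z for z in users if any(P(z, y) for y in categories)}
```

With this data the query returns alice and bob but not carol, who has no medium interest in any category.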
AQL can also express the universal quantifier. According to the rules of first order logic, the universal quantifier can be expressed by combining the negation and the existential quantifier. For example, suppose we would like to express this sentence:
- “for every category, the interest of the user z is higher than 70%”.
We will define a new predicate P(z) that tells us that the user z is interested in every category with an interest that is always higher than 70%, whatever the category is.
Suppose we have a predicate Q(x, y) that means:
“the interest of the user x in the category y is higher than 70%”
We can express the predicate P in terms of the predicate Q as follows:
\predicate R(x, y) [{Q(x,y)}]
\predicate P(z) [{R(z,X)}]
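The double-negation construction above can be checked with a short sketch; the category names and interest values are invented for illustration:

```python
# Sketch of the universal quantifier built from negation plus the
# existential quantifier, mirroring:
#   \predicate R(x, y) [{Q(x,y)}]
#   \predicate P(z)    [{R(z,X)}]

categories = {"fishing", "golf"}
interest = {("alice", "fishing"): 0.9, ("alice", "golf"): 0.8,
            ("bob", "fishing"): 0.95, ("bob", "golf"): 0.4}

def Q(x, y):
    """The interest of user x in category y is higher than 70%."""
    return interest.get((x, y), 0.0) > 0.7

def R(x, y):
    """{Q(x, y)}: the negation of Q."""
    return not Q(x, y)

def P(z):
    """{R(z, X)}: there is NO category for which Q(z, y) fails."""
    return not any(R(z, y) for y in categories)
```

With this data, P("alice") holds because alice exceeds 70% interest in every category, while P("bob") fails because of bob's 40% interest in golf.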
2. AQL Interpreter
The AQL system 944 includes an interpreter that is responsible for interpreting the AQL language into executable objects (e.g., Java objects) and returning the results. The components of the interpreter include a Statement Analyzer, a Predicate Definition Processor, a Recorder, a Tree Builder, a Factory, a Predicate Tree Builder, a Predicate Builder, and an Evaluator.
Given an AQL statement, the first component that gets the statement is the Statement Analyzer component. This component simply determines what kind of statement it is: a query statement or a predicate definition statement. If the statement turns out to be a query statement, then the Statement Analyzer sends the predicate part of the statement to the Tree Builder component. The Tree Builder component builds a tree from the predicate part of the statement. For example, suppose that the original statement was a query statement of the form:
\query x, y [[[P(x), Q(y)]: [R(x, y), P(x)]], Q(y) ]
The predicate part of the above statement is the string that starts with the first bracket “[” and ends with the last bracket “]”.
The tree that the Tree Builder will construct from the above query statement is the following:
Once this tree is constructed by the Tree Builder component, the Factory component constructs a predicate object for each leaf of the tree (i.e., for R(x,y), P(x), and Q(y)). Then the Predicate Tree Builder replaces every leaf of the tree with the corresponding predicate object that was constructed by the Factory component. The Predicate Builder component constructs a predicate object for the whole tree. The Evaluator component takes the predicate object constructed by the Predicate Builder component, supplies the arguments for it and evaluates it, and gets the results of the query statement to the requesting entity. For example, the Evaluator may return its results to the UserGroup Manager object or Profiler object or Daily Query object or AggQuery object depending on the type of the query and which object should handle that query.
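As a rough illustration of these stages, the following sketch writes out by hand the tree that the Tree Builder would produce for the example query, maps the leaf predicates to callables (standing in for the Factory step), and walks the tree (standing in for the Evaluator). The node tags and the sample predicate definitions are invented:

```python
# Hand-built tree for the predicate part of
#   \query x, y [[[P(x), Q(y)] : [R(x, y), P(x)]], Q(y)]
# Leaf nodes name a built-in predicate and its argument variables.
tree = ("and",
        ("or",
         ("and", ("leaf", "P", ("x",)), ("leaf", "Q", ("y",))),
         ("and", ("leaf", "R", ("x", "y")), ("leaf", "P", ("x",)))),
        ("leaf", "Q", ("y",)))

# Factory step: map predicate names to callables (sample definitions).
built_in_predicates = {
    "P": lambda x: x > 0,
    "Q": lambda y: y % 2 == 0,
    "R": lambda x, y: x < y,
}

def evaluate(node, env):
    """Evaluator: walk the tree with a variable-binding environment."""
    tag = node[0]
    if tag == "leaf":
        _, name, variables = node
        return built_in_predicates[name](*(env[v] for v in variables))
    if tag == "and":
        return all(evaluate(child, env) for child in node[1:])
    if tag == "or":
        return any(evaluate(child, env) for child in node[1:])
    raise ValueError(f"unknown node tag: {tag}")
```

For example, the binding x=1, y=2 satisfies the whole predicate, while x=-1, y=3 does not.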
As noted above, the Statement Analyzer component first determines the type of the statement (a query statement or a predicate definition statement). If the statement turns out to be a predicate definition statement rather than a query statement, then the Statement Analyzer hands the statement to the Predicate Definition Processor component. This component takes the predicate part of the statement and gives it to the Tree Builder component, to the Factory component, to the Predicate Tree Builder component, and then to the Predicate Builder. Then the Predicate Definition Processor gets the predicate object constructed by the Predicate Builder component. The Predicate Definition Processor component gives the predicate object to the Recorder component together with the identifier part of the predicate definition statement. The Recorder component puts the pair (identifier, predicate object) in the main HashTable of the interpreter, where it is stored for use in subsequent queries.
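The Recorder step might be pictured as follows; a plain dict stands in for the interpreter's main HashTable, and the sample predicate object is invented:

```python
# Sketch of the Recorder: a defined predicate is stored under its
# identifier so it can be reused by subsequent queries.

predicate_table = {}   # stands in for the interpreter's main HashTable

def record(identifier, predicate_obj):
    """Store the (identifier, predicate object) pair."""
    predicate_table[identifier] = predicate_obj

# Recording a hypothetical \predicate K(y, x) [...] definition,
# then looking K up again for use in a later query:
record("K", lambda y, x: y > 0 and x > y)
K = predicate_table["K"]
```

A later query statement that mentions K would resolve it through the same table lookup.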
VI. Categories and Categorization
A. Overview of Categorization
When a web visitor engages in activity, such as looking at a web page, a ProReach system analyzes the activity by determining what has happened, i.e., who has done what and when. This section explains how ProReach identifies who and what, namely by categorization. In an alternative embodiment, an additional dimension for categorization is applied: determining where an activity takes place, such as indicating at what company website or division activity occurred.
To categorize documents and other web content, ProReach's content recognition engine 718 builds category “patterns” from sample documents and categorizes documents based on which category's pattern(s) they best match. In one embodiment, the content recognition engine 718 is based on an available engine from Autonomy, Inc. of San Mateo, Calif. The content recognition engine does linguistic analysis on a document to identify keywords.
The content recognition engine includes a library of categories related to e-commerce. These are organized hierarchically to better approach how users might think about web related content. ProReach also provides the content recognition engine with an architecture for adding, refining, and editing categories, both semi-automatically and by human administration.
ProReach includes a standard category tree that system administrators may extend in their areas of expertise or heavy traffic. As documents are categorized and their usage is recorded, ProReach builds two Bayesian networks that describe the probabilistic relationships between categories. First, an inheritance tree helps improve the hierarchical category structure and streamline categorization performance. Second, a relationship network is built by both automated and human-driven data mining to document how categories co-occur. Understanding these relationships can be of significant benefit to marketers. By integrating selected additions to the standard category tree, it is anticipated that this tree will become an increasingly accurate measure of the content that system administrators use in their web sites.
As described above, all web event records are weighted, as are aggregated complexes of web events, such as user, user groups and category complexes. This weighting optimizes all calculations for relevance to ProReach system owners. For each combination of a content category and a user group, an aggregate complex models the web traffic for this combination.
In one embodiment, each ProReach system 100 has a user group called “systemEveryone”, which, in combination with a particular content category, describes the behavior of all visitors to a given ProReach system 100 with respect to the specific category; this is embodied as a Group Category complex, where the Group is systemEveryone. Similarly, a content category “everything” summarizes all of the categories and is used with each user in a user category complex 203 to describe the interests of a particular user with respect to all content categories. Thus, categorization serves as a method for grouping data for further analysis. More globally, central ProReach administration may use the group “everyone” and content category “everything” for all categories and all users known anywhere. Aggregate complexes using these “global” categories may be downloaded by systems as desired.
B. Categories and Hierarchies Organize Data
In the preferred embodiment, all content categories fall into strict hierarchies. Each hierarchy has a root: all users are included in the “everyone” user group, and all content is included in the “everything” category. Any category may subsume child categories, which are children only of that one parent. Classifying an event (or a user) of the parent category into one of the child categories provides additional data. Child categories must be justified by their utility in providing valuable information, and they must be meaningful to humans.
Child categories must be different, conceptually as well as in web traffic patterns, from each other and from their parent. In particular, child categories should be easy to distinguish computationally. Child categories are distinguished from each other based on a weighting derived from the amount of visitor views of documents in the categories. Categories which are too “light”, i.e., have insufficient traffic to exist on their own, are “folded in” to their parent category, with their weighting information aggregated with that of their parent category. The weighting of categories depends in part on how system administrators choose to weight individual web pages and other documents.
The level of detail stored by ProReach for a category can be regulated by setting global options. When these options are adjusted to lower storage, data are compressed both by storing fewer details about time patterns, and by folding smaller categories into parent categories.
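The folding of light categories into their parents could be sketched as follows; the traffic threshold, the category names, and the weight values are all invented for illustration:

```python
# Sketch of "folding in" light categories: a category whose traffic
# weight falls below an assumed threshold is removed and its weight
# is aggregated into its parent category.

MIN_WEIGHT = 100  # assumed threshold below which a category folds in

def fold_light_categories(weights, parent_of, min_weight=MIN_WEIGHT):
    """weights: {category: traffic weight}; parent_of: {child: parent}."""
    folded = dict(weights)
    for cat, parent in parent_of.items():
        if folded.get(cat, 0) < min_weight and parent in folded:
            folded[parent] += folded.pop(cat)
    return folded

# Invented example: "curling" is too light and folds into "sports".
weights = {"sports": 500, "curling": 30, "football": 400}
parent_of = {"curling": "sports", "football": "sports"}
folded = fold_light_categories(weights, parent_of)
```

After folding, "curling" disappears and its 30 units of weight are added to "sports", while "football" survives on its own.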
1. Building and Maintaining Category Hierarchies
As data patterns change, existing categories must be adjusted and new categories created. A category's usefulness is preferably measured by its distinguishability from others. The present invention handles category discovery and maintenance by documenting event records for categories. When one unsubdivided category becomes too heavy, four things happen:
- Sample documents from the growing category are collected by statistical sample.
- Key phrases are identified from sample documents.
- An algorithm searches for features (such as key phrases) to identify one or more new subcategories.
- Central ProReach administrators are alerted to the new subcategories so as to approve or disapprove of the inclusion.
The second and third steps are performed automatically by the content recognition engine 718, which determines the appropriate groupings of documents and suggests potential category names. A human administrator may accept the suggestion, or adjust the category based on refinements to the automated suggestion. For example, the human may choose different representative documents for a category and may choose descriptive names for new categories. The new categories then become part of the standard ProReach distribution and are available for download by ProReach systems, which will subsequently build event records covering the new categories if they have sufficient traffic in this area.
The categories used by a ProReach system are formed from a combination of strict hierarchies and pseudo-hierarchies. A strict hierarchy is defined as a directed tree-like structure with single inheritance: each node (except the root) has exactly one parent, so that each child of a parent is a child with 100% probability. The tree structure implies that a given child is never its own ancestor (such as parent's parent's parent) and that there are never two different paths from a child to an ancestor. This structure is clear and convenient to work with. However, strict hierarchies often fail to capture the actual, more complex relationships between the categories that documents or users may be associated with. Strict hierarchies also fail to account for uncertainty, that is, indeterminacy of which categories or groups a particular document or user belongs to.
Pseudo-hierarchies remedy these deficiencies. A pseudo-hierarchy still maintains parent-child relationships, but allows for a document or user to partially belong to multiple categories. For example, a document about “dogs” may belong 60% to a “pet” category and 30% to a “mammal” category. In one embodiment of the present invention, these pseudo-hierarchies are treated as Bayesian networks, to model the probability of classifying documents into content categories, or users into user groups. In this case, there would be one node per category. Say that the relationship between the “sports” category and its child category “football” is (30%, 85%). By this we mean that if we knew only that a given document had been classified as “sports”, there would be a 30% chance that the document would also be classified as “football”, and conversely that a document classified as “football” would with 85% probability also be classified as “sports”. In particular, “football” may have another parent category.
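Under this interpretation, the probability pair for such an edge could be estimated from observed document classifications. The following sketch uses invented sample data:

```python
# Sketch of estimating the (parent->child, child->parent) probability
# pair for a pseudo-hierarchy edge, e.g. ("sports", "football") = (30%, 85%),
# from observed per-document category sets.

def edge_probabilities(doc_categories, parent, child):
    """doc_categories: list of category sets, one per document."""
    in_parent = [cats for cats in doc_categories if parent in cats]
    in_child = [cats for cats in doc_categories if child in cats]
    p_child_given_parent = sum(child in c for c in in_parent) / len(in_parent)
    p_parent_given_child = sum(parent in c for c in in_child) / len(in_child)
    return p_child_given_parent, p_parent_given_child

# Invented sample classifications for four documents:
docs = [{"sports", "football"}, {"sports"}, {"football"}, {"sports"}]
p_cf, p_fc = edge_probabilities(docs, "sports", "football")
```

With this sample, a third of the "sports" documents are also "football", and half of the "football" documents are also "sports".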
ProReach combines the two approaches of hierarchies and pseudo-hierarchies by initially modeling content categorization on a strict hierarchy, even though the actual performance of the content categorization engine is pseudo-hierarchical. In accordance with this doctrine, we consider web traffic that occurs within child categories as also occurring in parent categories.
Simultaneously with this external point of view, ProReach collects statistics on how parent and child categories relate to each other, including the probability that one category is classified into the other category.
C. Category Names and ID's
Categories used in a ProReach system 100 may be created by independent and unrelated companies and organizations. It is essential that categories named by independent entities do not have identical names. More immediately, one would not want a ProReach system 100 to name two of its own categories the same way. Such name collisions could cause considerable confusion and lead to processing errors.
Since alphabetical names are intended primarily for human consumption, and since actual category discrimination is based on the underlying category ID's (both for users and for content), the two identifiers use different approaches. For example, it is easier to enforce uniqueness of category identifiers by encoding in them information that is difficult to duplicate accidentally. On the other hand, textual names must be as brief as possible to convey their meaning. It may make sense to allow for locale-specific rendering of category names.
To enforce unequivocal naming for ID's and to encourage this for text names, each ProReach system 100 carries a unique identifier and a unique text string, which are determined at the time of system installation. Whenever the originating location of a category is uncertain, this identifier or string must be prepended to the local category ID or name, respectively. Thus, if a ProReach system with the unique identifier “4Q5f4” at SportsWorld were to define a category “Xj542” called “Football”, this category would be treated as:
ID: 4Q5f4.Xj542
name:SportsWorld.Football
In cases where this is clear from context, the prefix “SportsWorld” may be taken as a default and either hidden or encoded by color when viewed by users. In the likely case that ProReach had already defined a “Football” category, such as “H730”, a ProReach administrator at SportsWorld would have received a warning message when attempting to name a local category the same as a standard category. If we assume that the central ProReach system at the central system has its own unique identifier, e.g., ID B345, then the central system's corresponding category would be seen as:
ID: B345.H730
name:Central.Football
Note: in these examples, identifiers of systems (like B345 or SportsWorld) are called prefixes.
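This prefixing might be sketched as a small helper; the function name qualify is illustrative only:

```python
# Sketch of qualifying a local category ID or name with the system's
# unique prefix, as in "4Q5f4.Xj542" or "SportsWorld.Football".

def qualify(system_prefix, local_value):
    """Prepend the system prefix unless it is already present."""
    if local_value.startswith(system_prefix + "."):
        return local_value
    return f"{system_prefix}.{local_value}"
```

For example, qualify("4Q5f4", "Xj542") yields the fully qualified ID "4Q5f4.Xj542", and an already-qualified name passes through unchanged.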
During the update process, ProReach systems 100 exchange their information with the central system. Depending on their policy, they supply more or less event record information to the central system, which in turn provides upgrade information combined from all ProReach systems 100 and administration at the central system. New categories are added at appropriate places in the hierarchy, and in cases where the category refinement at a ProReach system 100 overlaps substantially with that at the central system, new categories are listed. ProReach systems 100 are given the chance to fold some of their specialized categories into those that the central system has added to the standard category tree (see that section below.)
1. Default Unalterable User Category Structure
To facilitate communication between different ProReach systems 100, an initially sparse set of user groups is provided. All ProReach systems 100 share these user groups near the top of their hierarchies, and allow for the inclusion of additional new groups and subgroups. As with content categories (discussed later in this chapter), this is a standard structure, as illustrated in FIG. 13.
First, notice that the user group “global.everyone” is the only category built from data collected at the central system. All other categories are specific to each system 100 (indicated by the second level of user groups denominated “system1.everyone” and so forth). Thus, for example, for the company SportWorld, one would substitute “SportWorld” for “system1”. Remember that these names are merely descriptive, and actual category identifiers are system-assigned numbers.
The categories “everyone” and “global.everyone” are the only ones for which the central system tracks information. There is a separate system.everyone-rooted subhierarchy for each ProReach system 100. As discussed in the section on Aggregation, below, during a system update, a system 100 submits information for its system.everyone to the central system, which responds by sending back information about the central system.everyone. In this way, categories from many different ProReach systems 100 are kept up-to-date.
The categories “anonymous”, “cookie”, and “registered” are respectively for customers who are unidentified, known by the cookie they have allowed ProReach to store, or who have completed a full registration, usually including such demographics as name and address.
2. Similarities and Differences Between Categories and Groups
Administrators may wish to add subcategories of either kind (users or content), detailing their vertical specializations. These would always be added to one of the existing categories. A system administrator may add categories under his own system's naming convention, i.e. in their own namespace. There are also important differences between the two types of categories. These are highlighted in the table below. These differences will become clearer when content categories are discussed later in this section.
Differences between User Groups and Content Categories
D. Using Source or Location in Categorization
Source is another dimension similar to that captured by user groups. For example, the company SportWorld would be very interested in knowing how much its clients visit the competing website SportsOnline.com. If both SportWorld and SportsOnline.com were ProReach systems 100, they could become quite dissatisfied both with each other and with the central system administration if their competitor was able to use ProReach to spy on their customer's behavior at their site. On the other hand, it should matter to SportWorld whether customer activity (say on football) is at their website or somewhere else.
To balance these concerns, ProReach keeps track of the source of events in a way similar to its handling of user groups and content categories, but only distinguishes between inside and outside of a given company at any ProReach system. This means that for any user and category, a system (like SportWorld) may have two extended event records: one for activity within the company, and one for all other activity. An extended event record behaves internally almost like the event records introduced in the next section, except that there is the additional parameter of source used to index extended event records. The central system keeps track of more than two sources, differentiating between different systems and between their “inside” and “outside” sources.
E. The Content Category Lifecycle: Formation, Tuning, And Change
1. The Standard Category Tree and Additions by ProReach System Administrators
Referring to
Individual ProReach systems 100 are not allowed to modify any of these standard categories. More generally, ProReach systems 100 are only allowed to modify categories 1302 under their own system, namely having the prefix assigned to their system. If they attempt to delete “Standard” categories, this will only be a virtual deletion. In other words, the category will be invisible to them, and any classification they see will not descend into the categories they have made invisible.
a) Adding Categories at ProReach Systems
An administrator of a ProReach system 100 can manually 1408 add new subcategories of existing categories to the local category tree 1402 by creating a set of sample documents and instructing ProReach to use them to create a new category. The categories are preferably added in response to user activity 1404 indicating that certain documents are experiencing significant usage, which may indicate the need to further subcategorize the content in the category in which these documents are categorized. ProReach will first categorize them under the old tree 1402 to determine the parent of the new category. If the parent is not the one intended, this may serve as an indication that either the old parent category does not perform well, or the sample documents do not fit where the administrator intended. In particular, the sample documents may not all belong in a single category, in which case perhaps only a subset or altogether different documents should be used to train the new category.
ProReach monitors category editing activity along with which categories are involved. These data are stored locally and transmitted during upgrades, so if several systems have administrators who attempt similar additions, this indicates which categories to reexamine.
By successively adding categories, a particular ProReach system 100 may accumulate a specialized hierarchy 1402 in its own area of expertise. Since new categories may only be added as subcategories of existing ones, each new category will have an ancestor in the standard category tree. Thus even if the standard category tree never expands in this particular area, event records in these categories contribute to the totals in ancestor categories that are meaningful to every other ProReach system 100.
Specialized expansions of the category tree are particularly interesting to the central system 103, because these capture expertise and leverage the companies' specialized experience. As the standard category tree is expanded to include the new third-party subcategories with the heaviest traffic, the standard category tree will be able to reflect content increasingly accurately.
The standard category tree will not become too big for companies to use, because each ProReach system 100 keeps only the level of detail relevant to its own business. Each category that is too light will be considered only as folded into its parent category.
b) Updating the Standard Category Tree
The central system 103 improves its standard category tree based on incoming data and practical experience. These improvements lead to continual upgrades to the standard interest tree. Each change carries a time stamp, so that ProReach-enabled sites may download only those upgrades they have not already incorporated.
As part of the update, ProReach systems 100 provide summary information about traffic on their own systems. The degree of information collected in this way from businesses may vary. However, the data is preferably designed in such a way as to be unobtrusive and not to disclose either information about individual customers or an accurate financial picture of a company. Instead, only summary event record and category performance statistics will be shared. This will foster a symbiotic relationship between the central system 103 and other ProReach systems 100, allowing each to build more precise models of their own data.
c) Building the Standard Category Tree
The ProReach standard category tree 1400 preferably has approximately eight hundred categories. These categories range from cosmetics, sports, board games, stamps, cars, trucks, books, health, real estate, travel and so forth. The standard category tree 1400 is hierarchically structured. The categories are implemented in a database table of categories, each of whose entries contain a field that identifies the parent category.
ProReach constructs its initial standard category tree 1400 based on trees at leading web portal sites, such as AltaVista. These sites have already built categories that are validated by their continuous traffic. ProReach uses a spidering system that collects pages from these sites and builds up a categorization engine trained on the pages linked from the categories. Several tens of thousands of categories are available from leading portal sites. Spidering starts from the top down and increases knowledge of categories over time.
Categories are revised periodically, since their content may change. It should be noted that many categories may be limited to topics of current interest such as daily news. These highly dynamic categories are recalculated quite often to stay current.
d) Discovery, Refinement, and Editing of Categories
Categories added at ProReach systems 100 do not interfere with standard categories because they are always added as descendants of the standard categories. However, ProReach system administrators have arbitrary freedom to refine standard categories by adding their own child categories. Over time, both the central system 103 and owners of ProReach systems 100 may choose to add categories to the tree 1400. These are always added as children of existing categories and are thus considered to define a specialized subset of their parent category. In addition to manual addition of categories, those categories with heavy traffic are candidates to be split into smaller, more specific pieces. In order to do this, they store a statistical sample of distilled documents, which can then be categorized into separate subcategories by administrators.
Performance of categories is always measurable, and serves as a basic means to drive (or inhibit) specialization, as appropriate. If a category does not perform well, that information is stored as a warning signal, which leads to monitoring and possible re-training of the category. Refinement is driven strictly by traffic in the standard category tree. Given a high level of traffic 1404, a statistical sample of documents is collected, generating candidate specialized subcategories. After testing, these are added to the standard category tree 1400.
It may also occur that a category gradually loses traffic. If this happens at a ProReach system 100, it de-activates the category and redirects related traffic to the parent category. If the global category performance is found at the central system to be so small that the category is not worth maintaining, the history of the category is archived for possible later revival, and the category is simply folded in to its parent in the standard category tree 1400.
When a category is modified, it may not categorize its original target documents perfectly. As a result, a new category ID is generated (possibly with the same name) and event records for the old pattern are converted to event records for the new category. To make this work, the old category is assigned to redirect its event records to the new category, along with a number indicating what fraction of old content would be classified into the new category. By default, one minus that fraction would be classified into the parent of the old category. If a parent category has changed, the children should be redirected to have the new category as a parent.
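The redirection fraction could be applied as in this sketch; the function name and record shape are invented for illustration:

```python
# Sketch of redirecting an old category's event records after
# retraining: a stored fraction of the old records goes to the new
# category, and the remainder (one minus that fraction) goes to the
# old category's parent by default.

def redirect_records(count, fraction_to_new):
    """Split an old category's record count between the new category
    and the old category's parent."""
    to_new = count * fraction_to_new
    to_parent = count * (1.0 - fraction_to_new)
    return to_new, to_parent
```

For example, with a redirect fraction of 0.8, 100 old records would be converted into 80 records for the new category and 20 for the parent.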
F. Categorization Model of the Content Recognition Engine The content recognition engine 718 is able to train categories on training documents so that any other document can be scored against any category. This means that for any document and any trained category, the content recognition engine 718 outputs:
score(document, category)
which ranges from 0 to 1,000,000 (or other suitable maximum), with higher scores hopefully indicating a better fit of the document in the category. (1,000,000 is used as a maximum instead of 1.0 to allow for storage of high-precision results as long integers instead of floating point values.)
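As a minimal sketch of this calibration, a raw match fraction can be mapped onto the integer range as follows; the helper name `to_score` is illustrative and does not appear in the system itself:

```python
MAX_SCORE = 1_000_000

def to_score(match_fraction: float) -> int:
    """Map a raw match fraction in [0.0, 1.0] to the integer score
    range [0, 1,000,000], so results can be stored as long integers
    rather than floating point values."""
    # Clamp to guard against rounding outside the valid range.
    return max(0, min(MAX_SCORE, round(match_fraction * MAX_SCORE)))
```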
Many web pages are visited frequently by ProReach system users. It would be inefficient to categorize each document each time it is viewed by a user. Thus, one optimization strategy is to store, rather than recompute, category weights whenever possible. This can be accomplished by two means. For pages on a local ProReach system 100, category identifications are stored inside the page as metadata. Alternately, frequently visited pages' categorizations may be cached in the page metadata cache 716. When ProReach sees a record of a visit to a URL, it first checks the cache 716 and then searches for metadata. Only if neither of these yields a categorization are the other procedures here followed.
1. Category Creation
The first step in creating a category is identifying a representative set of documents. Documents for a category are selected by the system administrator or by experts in the category's subject and categorized by the content recognition engine 718; the quality of the results is then tested on real-world documents by an administrator or other content expert, who validates the categorization results. If the categorization produced is judged good by the expert, then a good set of representative documents was used. Otherwise, the set of representative documents should be altered and the testing process repeated.
When testing consistently produces good results, the category is done. The set of documents used to train a category is the category's prototype. Using statistical methods, the content recognition engine 718 analyzes the set of representative documents and produces a category pattern. This pattern consists of weighted key phrases, which are stored in a category-defining database table. Each key phrase is a group of words extracted from a sample document and stemmed to standard word forms. For example, a document about football might contain both the terms football players and football player. In this case, both of these would be considered equivalent, and the singular form would be stored as a key phrase in the category pattern.
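The phrase-equivalence step above might be sketched as follows; the crude trailing-"s" stemmer here is a stand-in for whatever real stemming the engine uses, and all names are illustrative:

```python
def normalize_phrase(phrase: str) -> str:
    """Reduce a phrase to a standard form: lowercase each word and
    strip a trailing plural 's' (a crude stand-in for real stemming)."""
    words = []
    for word in phrase.lower().split():
        if word.endswith("s") and len(word) > 3:
            word = word[:-1]
        words.append(word)
    return " ".join(words)

def build_pattern(phrase_weights: dict) -> dict:
    """Collapse equivalent phrases ('football players', 'football player')
    into one stemmed key phrase, summing their weights."""
    pattern = {}
    for phrase, weight in phrase_weights.items():
        stem = normalize_phrase(phrase)
        pattern[stem] = pattern.get(stem, 0) + weight
    return pattern
```

Here the singular, lowercased form is what survives into the category pattern, matching the football example above.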
2. Document Categorization
Once such a pattern exists, the content recognition engine 718 can compare any document to that pattern and compute how closely that document fits the pattern. When a document is categorized, it is first processed by separating its text into phrases. Linguistic analysis and information theoretic processing then identify the phrases most likely to be important in the document. For example, words like “and”, “I”, and “or” occur too frequently to distinguish meaning in documents, and are discarded from further consideration. Some of the remaining phrases are identified as key phrases and are weighted in proportion to how much they are thought to define the meaning of the document.
The key phrases derived from the document are then looked up in the category-defining database table and matched against stored category patterns. Only those patterns that contain any of the document's key phrases are considered further as candidates for the document's category. Suppose that only four categories' patterns match any of these key phrases. Then the document's score in each of these categories is computed as shown in FIG. 15.
A document might match a pattern 90%, for example, or it might be a 50% match. This match is called a score, and is calibrated to range from 0 to 1,000,000. The highest possible score, 1,000,000, is given when a document perfectly matches a predetermined number or percentage of the key phrases stored for a pattern. A score of 0 means that no match has been found, which occurs for those patterns that were eliminated prior to the step discussed above. In general, the score is calculated by summation of matches between a pattern's key phrases and a document's key phrases. As described above, the set of category scores is a category vector 908 of pattern matching results, one result for each category pattern with a positive match. For categories where there is no pattern match, the category vector stores a 0 for the category.
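The summation-based scoring just described could be sketched as follows, assuming a pattern is stored as weighted key phrases and a document contributes a set of key phrases; the calibration rule (fraction of total pattern weight matched) is an illustrative simplification:

```python
def score_document(doc_key_phrases: set, pattern: dict,
                   max_score: int = 1_000_000) -> int:
    """Score a document against a category pattern by summing the
    weights of the pattern's key phrases that also appear in the
    document, then calibrating into the 0..1,000,000 range."""
    total = sum(pattern.values())   # a perfect match covers every pattern phrase
    if total == 0:
        return 0
    matched = sum(w for phrase, w in pattern.items() if phrase in doc_key_phrases)
    return matched * max_score // total
```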
In one preferred embodiment, the score given a particular category can be a function of category score given to any of the category's subcategories. This results in a “composite score” for the parent category. For example, if “ECOMMERCE” is a subcategory of “BUSINESS” and if a document scores high on ECOMMERCE and low on BUSINESS, the content recognition engine 718 may increase the score for the BUSINESS category. This approach preserves the hierarchical relationship of the categories, and overcomes the counter-intuitive instances in which a document scores high in a subcategory but low in the parent category.
This approach may be implemented as follows: If a parent category has subcategories, the score of that parent category will be the higher of: its own score, or the average of its score and the score of the highest-scoring subcategory. Hence assume B, C, D are subcategories of A, and a document has the following raw category scores: A=300000, B=700000, C=10000, D=200000. In this case the composite score for A would be 500000, which is the average of 300000 and 700000 (the maximum subcategory score, contributed by subcategory B). Those of skill in the art will appreciate that there are various ways to augment a parent category's score by variations of this approach. Thus, in general, composite scoring is a function f such that f(parent category score, scores of subcategories) yields a composite score for the parent category.
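The parent/subcategory rule above can be sketched directly; integer division stands in for whatever rounding the system actually uses:

```python
def composite_score(parent_score: int, subcategory_scores: list) -> int:
    """Composite score per the rule above: the higher of the parent's
    own score, or the average of its score and the score of its
    best-scoring subcategory."""
    if not subcategory_scores:
        return parent_score
    best_child = max(subcategory_scores)
    return max(parent_score, (parent_score + best_child) // 2)
```

With the example from the text, A=300000 with children scoring 700000, 10000 and 200000 yields the composite 500000.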
3. Multiple Dictionary Categorization
The ProReach systems can be tuned to their particular environment by splitting categories across multiple category tables. In one embodiment, this is done with various category dictionaries, each covering different sets of categories; the dictionaries may be implemented as different category tables in the database 720. A given category may be present in one or more dictionaries.
ProReach first categorizes the document using a first dictionary. In most cases, this will determine the final category for the document. Suppose for the sake of example that the chosen category is an uncommon parent category for a whole branch of the standard tree 1400, stored mostly within a second, different dictionary. In this case, a further classification occurs using the second dictionary; suppose further that the highly unusual situation arises where yet another categorization, in a third dictionary, is needed in order to obtain the finest possible detail. In this example, it turns out that the root category in dictionary No. 3 is a better match for the document than any of its descendants, so the third step (using dictionary No. 3) merely confirms the previous classification. Depending on time constraints, the second and third steps may be turned off. In that case, the first step would still have provided useful partial information. If many categorizations descend three steps deep, periodic optimizations will tend to redistribute categories between dictionaries in a way that lessens the likelihood of this descent.
Each dictionary operation is a database table query followed by a small amount of processing. A three-dictionary classification therefore takes approximately three times as long as a classification that completes inside dictionary No. 1. The reason for the more complex structure is that it relieves the performance limitations associated with large database searches and excessively large collections of categories. In particular, it combines the high precision of many categories with a low expected processing time.
For ProReach systems 100 with low traffic, one dictionary is likely sufficient, since data constraints do not justify storage of the finest level of detail. For larger systems, however, poorly performing categories can be delegated to secondary, more specific dictionaries. These secondary dictionaries also store detail in areas infrequently used.
This optimization seeks to maximize the event record weight classified completely within the first dictionary. This is optimized automatically for each system 100 based on current event record history. To do this, the heaviest categories are stored in the first dictionary. As in the example, one secondary dictionary might store subcategories of the category returned by the first pass, which are then used to determine further detail.
In one embodiment, this approach may be implemented as follows: Provide a tree of categories such that there is a parent-child relationship between categories, as in the standard category tree 1400. As in the category tree, each category has either zero or one parent. The category with no parent is known as the root. Let there be a threshold T such that T is some integer between zero and one million inclusive (this range should be identical to the range of the scores).
Next, define a queue Q of categories. Add the root of the category tree to the ordered queue Q.
Select a document D to be categorized. Let R be a vector of category/score pairs, such as the category vector 908. That is, each element in the vector is a record consisting of a category and a score.
While the queue Q is not empty do the following:
1. Pop a category C from the queue.
2. Retrieve S, the set of subcategories of C.
3. Let V be a vector of category/score pairs P that result from categorizing document D with the set of categories in S.
4. Add the elements of the vector V to the vector R.
5. For each category/score pair P in V, add the P.category to Q if and only if P.score>=T
This approach provides a descent through a tree of categories that is controlled by how well a document scores against a parent category. If the score against the parent category is too low (i.e. lower than the threshold), then categorization of the subcategories of that parent category does not occur.
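The queued descent in steps 1 through 5 above can be sketched as follows; `tree`, `score`, and the argument names are illustrative stand-ins for the category tree and the content recognition engine:

```python
from collections import deque

def categorize_with_descent(document, root, tree, score, threshold):
    """Threshold-controlled descent through a category tree.

    `tree` maps each category to a list of its subcategories, `score`
    is a callable score(document, category) -> 0..1,000,000, and
    `threshold` is T in the description above. Returns the vector R
    of (category, score) pairs."""
    results = []                                  # the vector R
    queue = deque([root])                         # Q, seeded with the root
    while queue:
        category = queue.popleft()                # step 1: pop C
        for sub in tree.get(category, []):        # step 2: subcategories S of C
            s = score(document, sub)              # step 3: score D against S
            results.append((sub, s))              # step 4: extend R
            if s >= threshold:                    # step 5: descend only when the
                queue.append(sub)                 #   match is at least T
    return results
```

Note that subcategories of a low-scoring parent are never scored at all, which is what bounds the work.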
4. Category Cache
ProReach preferably uses a caching subsystem that associates documents resident on a ProReach system 100 with their categorizations. This avoids re-categorizing documents, unless the documents have been changed.
More specifically, ProReach maintains two caches. One cache is the page metadata cache 716, which is persistent and is stored in a database 720. The other cache is main-memory resident. On an as-needed basis, data from the database cache is brought into the main-memory cache. Items can also be ejected from the main-memory cache because of resource limits (e.g., main memory, CPU utilization). The database cache may be stored as a relation of documents, timestamps and their categorizations. Use of the page metadata cache 716 is as follows.
Given a document, a search is made for the document in the memory cache. If it is not there, a check is made to see if the document is in the metadata cache 716. If it is, an item representing that information is loaded from the database into the memory cache. If there is no cached item, even on the metadata cache, then the document has not been categorized. It is then categorized, and eventually the categorization will be flushed back to the database. (Flushing updates from the memory cache to the database is done as a background process).
If a cached item is found for a document, then the cached data is ignored if the timestamp on the document is more recent than the timestamp on the cached data. If the document is considered to have changed, based on its timestamp, then the document is re-categorized.
Certain optimizations may also be made to this cache over time. In particular, highly dynamic data may cause the cache to churn through unnecessary re-categorization attempts. Such wasted work may be avoided by keeping a counter on each cached item, and updating the counter each time the cached item is changed. If more than a predetermined number of changes occur (within some prescribed time period), it is probably reasonable to infer that the document is dynamic in its content and it should be considered uncacheable.
To this effect, a cached item could have an “UNCACHEABLE” field on it. Once a cached item has this field set, the cache manager will immediately stop looking for this item on the database, and it will not try to maintain it in the memory cache either.
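A minimal sketch of the two-tier cache with an uncacheable marker follows; dicts stand in for the memory cache and the database-backed metadata cache 716, the change threshold is an assumed value, and flushing is done synchronously for brevity rather than as a background process:

```python
MAX_CHANGES = 5   # hypothetical churn threshold before an item is uncacheable

class CategoryCache:
    """Sketch of the memory-over-database cache described above."""

    def __init__(self, categorize):
        self.categorize = categorize  # fallback: run the recognition engine
        self.memory = {}              # url -> cached item
        self.database = {}            # stands in for the metadata cache 716

    def lookup(self, url, doc_timestamp):
        item = self.memory.get(url)
        if item is None:
            item = self.database.get(url)    # memory miss: try the database tier
            if item is not None:
                self.memory[url] = item
        if item is not None and item["uncacheable"]:
            return self.categorize(url)      # dynamic page: always recompute
        if item is not None and item["timestamp"] >= doc_timestamp:
            return item["categories"]        # cached data is still fresh
        # Not cached, or the document changed since caching: re-categorize.
        changes = item["changes"] + 1 if item else 0
        item = {"timestamp": doc_timestamp,
                "categories": self.categorize(url),
                "changes": changes,
                "uncacheable": changes > MAX_CHANGES}
        self.memory[url] = item
        self.database[url] = item            # flush (synchronous here)
        return item["categories"]
```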
Recently, when web sites want to customize web page content for users, they have tended to store specific parameters in cookies rather than in the parameters passed in URLs (and passed to CGI scripts). Therefore, ProReach attempts to detect this practice and label such URLs as UNCACHEABLE.
Clients of the cache subsystem may want to aggressively populate the cache. Typically, this will be done by spidering some set of documents and running their corresponding uniform resource locators through the content recognition engine. Such spidering can be run once or periodically. It is quite possible in many systems that almost all documents will have an entry in the cache subsystem. This will reduce the computational cost and delay of runtime categorization.
VII. Global Services
ProReach provides a set of global services via the global services server 112. These global services are global in the sense that they are run via the Internet as a centralized set of services available to all ProReach systems 100 and ProReach-enabled web clients 108. One capability of these global services is the allocation of global identifiers that are used to identify web visitors, but these global services also provide many other capabilities.
There are six global services. They are as follows:
- Global Identifier Service
- Global Upload Service
- Global Client Management Service
- Yellow Pages Service
- Global Exchange Policy Service
- Global Aggregation Service
A. Global Identifier Service
In ProReach, it is always the goal to identify a web client as accurately as possible. To this end, a number of modern techniques are used by the global identifier service 602 to identify web clients. First, each web visitor (or web client) will be represented by a unique identifier, such as a 128-bit value.
In many cases, a web visitor cannot be personally identified. Instead, we can only identify the machine on which the web visitor was using his or her web browser. Sometimes we can only approximately identify the machine, because we can only identify the web browser via examination of cookies held by that web browser. If a single computer could only use a single web browser, then the one-to-one correspondence between the computer and the web browser would allow a more precise identification. However, a user on a single machine might have multiple (N) web browsers, and thus would be treated as N distinct web visitors. It is also the case that multiple individuals could use the same web browser (or web browsers). In this case, we would be unable to detect the different individual persons using the same web browser, and would treat this set of individuals using the same browser as a single web visitor.
In other cases, a web visitor can be individually identified. To draw attention to this distinction, we have two kinds of 128-bit identifiers.
- GIDs: Global IDs identify computers using cookies with those GIDs in the cookie.
- PIDs: Person IDs identify individual web visitors based on their login name and other demographic data.
As just stated, GIDs and PIDs are both 128 bits; to distinguish between these two types of IDs, the first bit of a GID is always set to zero and the first bit of a PID is always set to one. Hence, GIDs and PIDs are easily distinguished from each other.
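The one-bit distinction can be sketched as follows, treating IDs as 128-bit integers; taking the "first bit" to be the most significant bit is an assumption, and the helper names are illustrative:

```python
ID_BITS = 128

def is_pid(identifier: int) -> bool:
    """The first (most significant) bit distinguishes the two ID types:
    0 for a GID (a machine), 1 for a PID (a person)."""
    return (identifier >> (ID_BITS - 1)) & 1 == 1

def make_pid(low_bits: int) -> int:
    """Set the top bit to mark a 127-bit value as a PID."""
    return low_bits | (1 << (ID_BITS - 1))
```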
The Global Identifier Service 602 plays an important role in allocating or computing GIDs and PIDs. The “clients” of this aspect of the global services server 112 are other web servers, particularly ProReach-enabled web servers 102. These ProReach-enabled web servers 102 may need the assistance of the Global Identifier Service 602 in order to identify a web visitor, be this a computer needing a GID or a person needing a PID. We refer to the ProReach-enabled web servers 102 that make requests for identification as identifier requestors. These identifier requestors make identification requests to the global identifier service 612. Each such identification request will be one of two kinds: an anonymous identification request or an individual identification request. The handling of each kind of request is described below.
1. Requests For GIDs.
A ProReach-enabled web server 102 needing a GID to identify a web client 106 makes a request to the Global Identifier Service 612. The protocol used is HTTP-based in order for the Global Identifier Service 612 (acting as a web server) to gain access to ProReach cookies. The process flow for this request was previously described with respect to
The ProReach-enabled web server 102 cannot examine this ProReach cookie directly because the HTTP protocol only allows a web server to look at its own cookies. Since ProReach-enabled web servers 102 do not belong to the ProReach domain, but to their own domains, they do not have access to ProReach cookies. This fact explains why ProReach-enabled web servers depend on a global service, running under the ProReach domain, to get access to the GID stored in the ProReach cookie (if any).
When a web client 106 contacts a ProReach-enabled web server, the ProReach-enabled web server uses the HTTP protocol to redirect the web client 106 to the global services server 112. However, the global service server 112 must be able to redirect the web client 106 back to its web server 102. This is done by web server 102 redirecting the web client to the global services server 112 via a URL that contains callback information. In particular, the URL contains the domain of the web server 102, and it contains some other data.
The exact format of the URL-encoded request might be something like what is shown below:
where identifies the domain of the requesting web server 102.
The web client 106 receives this URL as part of a redirection request. The web client then automatically goes to this URL, and carries the ProReach cookie with it. The global identifier service 602 takes this request and extracts the request identifier and the name of the web server. It checks for the ProReach cookie. If one is there, it extracts the GID. If one is not there, it generates a GID, and creates a ProReach cookie with the GID embedded in it. This GID is guaranteed to be unique across all systems. That cookie with the GID is then stored back on the web client, so it will be there for next time. Also, a check is made to verify that the cookie was in fact accepted; it is important enough to warrant a check rather than assuming the client accepted it.
After this, the ProReach web server does a web redirect back to the originating “client” web server. So two web redirections are involved to make this scheme work. This second web redirection goes in the opposite direction of the first, and this time the URL to which the web client is redirected contains the GID obtained.
Suppose the 128 bit GID, in octal notation, is 123456787012345677, then the result message for the ProReach-enabled web server 102 might be something like this:
The format used above for encoding the information is merely suggestive. The originating web server 102 can then take this metadata and associate the incoming request with a GID; it can then associate this GID with any kind of HTTP session it uses.
The global identifier service 612 also maintains another table called the GIDHID table. This table has two columns: a HID column and a GID column. A HID is an identifier that uniquely identifies a ProReach system 100, specifically it is a hub ID. For example:
Each time ProReach returns a GID to a ProReach system 100, it ensures that there is a row in this table with the HID of the requesting ProReach system and the returned GID. If the row already exists, no change is needed. If the row does not exist (e.g., for a newly created GID, or for a GID of a web client new to the server 102), it is inserted. Note that this is a many-to-many relationship. Each HID can be related to many GIDs. Each GID can be related to many HIDs. Note for example that GID 023231787012345677 is associated with two hubs, 391 and 421, meaning that this web client 106 has been used when visiting both hubs.
Using the GIDHID table, it is simple to form SQL-like queries that can compute what hubs a web visitor visited. It is also simple to compute the web visitors that visited a given hub. It is also simple to compute the web visitors that visited two different hubs.
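Such SQL-like queries against the GIDHID table might look like the following sketch, using an in-memory SQLite table populated with illustrative rows (the GID values are examples only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gidhid (hid INTEGER, gid TEXT)")
conn.executemany("INSERT INTO gidhid VALUES (?, ?)", [
    (391, "023231787012345677"),   # illustrative rows only
    (421, "023231787012345677"),
    (391, "123456787012345677"),
])

# The hubs a given web visitor has visited:
hubs = [row[0] for row in conn.execute(
    "SELECT hid FROM gidhid WHERE gid = ? ORDER BY hid",
    ("023231787012345677",))]

# The web visitors seen at both of two different hubs:
both = [row[0] for row in conn.execute(
    "SELECT gid FROM gidhid WHERE hid = 391 "
    "INTERSECT "
    "SELECT gid FROM gidhid WHERE hid = 421")]
```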
2. Individual Identification via PIDs
It can often be difficult to uniquely identify an individual. For example, two distinct people can have the same exact name and same date of birth; conversely, a person might go by her maiden name when she works professionally and by her married name otherwise, and yet these “two” people with different names are the same person. Accordingly, to determine whether two web visitors are in fact the same person, we compare the demographic data of the two web visitors and determine, through some set of comparison rules, whether this demographic data identifies the same person or not. Such a conclusion is a judgment that will depend both on the quality and quantity of the demographic data and the comparison rules.
We call the demographic data of an individual a dossier. The actual data in such a dossier can vary, but will typically include standard demographic attributes such as name, date of birth, sex, country of residence, and country of national origin. A dossier might also include attributes for primary e-mail address, all known e-mail addresses, work phone number, home phone number, cell phone number, names of friends, university attended, name of spouse, education level, religion, occupation, hobbies, sports interests, favorite kinds of music, favorite kinds of books, favorite web sites, favorite web pages, etc. Because it is hard to anticipate all possible attributes that should be stored in a dossier, a dossier may also be implemented simply as a hashtable, so that an attribute name is used as a key and its value is stored under that key in the hashtable.
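A dossier-as-hashtable might be sketched as a plain dictionary; all attribute names and values here are illustrative:

```python
# A dossier as an open-ended hashtable: attribute names are keys, so new
# demographic attributes can be added without changing any schema.
dossier = {
    "name": "Jane Example",            # illustrative values only
    "country_of_residence": "US",
    "emails": ["jane@example.com"],    # multi-valued attributes hold lists
}
dossier["occupation"] = "engineer"     # an unanticipated attribute, added later
```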
Requests for identifying an individual via a PID are called individual identification requests. An individual identification request contains some set of demographic information (e.g., name, date of birth, sex and occupation) selected from a dossier associated with the PID. Thus, for each PID, ProReach maintains a dossier in a dossier table. An example of a dossier table is:
The columns given here are suggestive only. For example, the table definition below does not account for the fact that the same person might have multiple e-mail addresses or physical addresses, though this is easily accommodated by providing multiple email address fields for each PID. By the same token, additional fields for other demographic attributes may be easily provided.
Using a dossier table the Global Identifier Service 612 maintains a database of such web visitor dossiers. Each row represents a dossier of a particular web visitor.
When a visitor visits the web server 103, the server determines if the visitor has visited before. Typically, this is done by requesting a name and password from the web visitor. Using the name and password, a check is made to see if a registered user is known with this name and password. If so, then a PID for this user will have already been obtained. It will have been obtained via the following method.
During the registration process, demographic data from the user being registered is collected. Typically, this is done by having a user fill out a form with this information on some web based form. This demographic data for this registered user can be used to create a dossier.
The dossier of the user being registered at the web site is then shipped to the Global Identifier Service where this dossier can be matched against all the other dossiers in the dossier database. The actual matching rules by which it is determined if a dossier matches up with an existing dossier are specified by the systems administrator, and for example, may be embodied in an expert system that has rules that determine whether two dossiers do or do not represent the same person. If a matching dossier is found, the PID associated with that dossier is the PID for this newly registered user, and this PID is returned to the web server 102.
If no matching dossier is found, then a new PID is created, and a visitor dossier is created for this web client. This visitor dossier will contain the PID, the name, the e-mail address and other available metadata. This dossier is then added to the dossier table of visitor dossiers. The new PID is then returned to the ProReach-enabled web server 102 as the result of the identification request.
If a dossier match occurs, the new dossier (in the identification request) may contain information absent in the existing dossier. When this occurs, this new information is added to the existing dossier, so as to improve the likelihood of matches in the future.
An alternative embodiment is to never return PIDs to web servers. Instead, unique identifiers called RIDs could be returned to the web servers. An RID could be an integer or other string. Together, a web server's HID (its hub identifier) and RID form a compound key that uniquely identifies a PID on the global services server. The keys are stored in an HIDRID table maintained on the global services server. Note also that a HID and PID uniquely identify a RID.
Each time a PID request is fulfilled, a unique HID and RID is returned to the ProReach-enabled server. This is done as follows. The PID computed and the HID of the requesting hub are used to select a RID from the HIDRID table. If there is no such PID and HID combination in the table, then a unique RID value is generated for the combination and stored in the table. The RID must be unique in the sense that the HID and RID columns form a compound key. Finally, the selected (or dynamically generated) RID is returned as the result of the PID request. The sample HIDRID table below illustrates this relationship:
An advantage of this approach is that there is a level of indirection between the RIDs and the PIDs. This level of indirection allows dossier matching mistakes to be corrected. For example, suppose it is discovered that the PIDs 0232310000345677 and 7652317870345644 actually represent the same individual. This error can be fixed by adjusting the HIDRID table to replace one of the PIDs with the other, so that both HID-RID associations have the same PID. For example, the PID column of the second row may be updated so that it now has the value 0232310000345677, as follows:
This change will now ensure that if the web visitor at hub 184 with RID 343242 is compared with the web visitor at hub 100 with RID 444343, they will be identified as the same individual.
The global identifier service provides a service that takes two such HID/RID pairs and returns true if they relate to the same PID in the HIDRID table. Otherwise it returns false.
Note that this level of indirection can also be used to fix dossier matching mistakes where two actually distinct web visitors were erroneously matched, via dossier matching, as the same person. Again, as in the above example, the mistake can be fixed in the HID/RID table. The two or more rows that have the same PID would be altered so that their PID columns were distinct. In addition, new dossiers for the new PIDs would be created in the dossier table.
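The indirection fix described above, together with the pair-comparison service, might be sketched as follows, with the HIDRID table modeled as a dictionary keyed by (HID, RID); the function names are illustrative:

```python
def merge_pids(hidrid: dict, keep: str, drop: str) -> None:
    """Fix a dossier-matching mistake where two PIDs turn out to be the
    same individual: point every (HID, RID) row at the surviving PID."""
    for key, pid in hidrid.items():
        if pid == drop:
            hidrid[key] = keep

def same_person(hidrid: dict, pair_a: tuple, pair_b: tuple) -> bool:
    """The lookup service described above: do two HID/RID pairs resolve
    to the same PID in the HIDRID table?"""
    pid_a = hidrid.get(pair_a)
    return pid_a is not None and pid_a == hidrid.get(pair_b)
```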
B. Global Upload Service
The Global Upload Service 606 enables ProReach-enabled web clients 108 to upload their web activities. In response to received data, the service sends an acknowledgement to the ProReach client 108 when an upload is completed successfully.
In addition, the Global Upload Service 606 has the responsibility for distributing this data to the appropriate ProReach systems 100. The Global Upload Service enables a ProReach system 100 to become a subscriber to web visitor data. It also allows ProReach systems to stop being subscribers to web visitor data. Each system 100 can subscribe to the uploaded data of specific web visitors. To do so, the service 606 provides a list of GIDs to a system 100; the system returns the GIDs of the visitors that it wants to subscribe to.
When a web client uploads its web activity data using the Global Upload Service, the Global Upload Service determines which systems 100 subscribe to this visitor's data. The service notifies each subscribing ProReach system 100 that it has data waiting for it. This notification is sent to a Receive Client Data Service of each such subscriber ProReach system 100. Once notified of the waiting data, each ProReach system 100 must retrieve the data within a reasonable period of time (e.g., 24-72 hours). If the data is not retrieved, it is deleted.
To manage delivery of uploaded data, the Global Upload Service 606 creates a package including the uploaded data and a recipient list. The list identifies by HID those ProReach systems 100 that are subscribers, and includes a timestamp. When the current time advances beyond the timestamp, the uploaded data expires and is deleted.
In addition, when a subscriber retrieves the uploaded data, that subscriber is removed from the recipient list. When all subscribers are removed from the list, the data is discarded, as it has been delivered to all the recipients. Of course, if recipients fail to pick up their data, it will be discarded anyway when it expires.
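A sketch of such a package, combining the recipient list with the expiry timestamp, might look like this; the class and method names are illustrative, and the 72-hour default follows the example window above:

```python
import time

class UploadPackage:
    """Sketch of the delivery package: uploaded data plus a recipient
    list of subscriber HIDs and an expiry timestamp."""

    def __init__(self, data, subscriber_hids, ttl_seconds=72 * 3600):
        self.data = data
        self.recipients = set(subscriber_hids)
        self.expires_at = time.time() + ttl_seconds

    def retrieve(self, hid):
        """A subscriber picks up its copy and leaves the recipient list."""
        self.recipients.discard(hid)
        return self.data

    def deletable(self, now=None):
        """Discard once everyone has retrieved the data, or once it expires."""
        now = time.time() if now is None else now
        return not self.recipients or now > self.expires_at
```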
C. Global Client Management Service
ProReach tracks web clients with client-side software that monitors the web user's activities. Periodically, the collected data is uploaded to ProReach, as described above.
To provide this facility, users can download the ProReach client software to install on their computers. The global client management service 608 also maintains a list of those client computers (identified by GID) that have downloaded the client software. When the software is installed, the client 108 transmits a confirmation to the service 608, along with the client's GID. When a confirmation is received, the GID provided with it is added to the list of GIDs. Using the received GID from the installation, and an email address in the dossier, it is possible to contact any web client that has installed the client-side tracking software.
If the client-side tracking software is uninstalled, the uninstaller sends an uninstall message to this service along with the associated GID. This GID is then removed from the list of GIDs with client-side tracking enabled.
D. Yellow Pages
This service 610 maintains a database of the ProReach systems 100. Every ProReach system 100 is registered by the yellow pages service 610 and listed in this database. The database includes for each ProReach system 100:
- The name of the ProReach system.
- IP address and port of hub, and a list of the supported domains.
- Contact information for the ProReach system, including an e-mail address of the system's administrator, so that e-mail can be sent to the person responsible for the ProReach system.
- A unique ProReach system ID (e.g., the HID) that uniquely identifies that ProReach system.
- An indication of whether the listing is private, protected, or public. A listing is private if it cannot be seen by anyone else (except ProReach Global Services). A listing is protected if it can only be seen by ProReach systems that share a common ProReach alliance 800. A listing is public if it can be seen by any ProReach system 100. The default is private.
- A list of the alliances 800 that the ProReach system is a member of.
A ProReach system 100 can only add, delete, or modify its own entry. A ProReach system 100 can read the entry of any public listing, and any protected listing of a system in the same alliance as itself.
The service 610 provides the ability to add, delete, and update an entry, and to make an entry public, private, or protected. The service further enables systems 100 to join or leave alliances, and provides lookup functions by company name, domain, or alliance. Finally, the service 610 provides functions to create an alliance, list all alliances, and list the members of an alliance.
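The visibility rules above amount to a small access-control check. A sketch (Python; the dict shapes and field names are assumptions, not the patent's schema):

```python
def can_view(listing, viewer):
    """Yellow-pages visibility rule: owners always see their own entry;
    public entries are visible to all; protected entries are visible only
    within a shared alliance; private entries are visible to the owner
    (and ProReach Global Services) only."""
    if viewer["hid"] == listing["hid"]:
        return True
    vis = listing["visibility"]
    if vis == "public":
        return True
    if vis == "protected":
        # visible only if the two systems share at least one alliance
        return bool(set(listing["alliances"]) & set(viewer["alliances"]))
    return False  # private
```

For example, a protected listing in the "retail" alliance is visible to another member of "retail" but not to an outsider.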
E. Global Exchange Policy
Each ProReach system 100 can define an exchange policy. An exchange policy serves two related but distinct purposes. First, the exchange policy describes a demographic statement. A demographic statement explains what kind of visitors visit the ProReach system: the number of visitors, their kinds of interests, their frequency of visits, and the kinds of web visitors they are. The information in a demographic statement is the responsibility of the individual ProReach system that makes the statement. A demographic statement can be used by others as a way to evaluate that ProReach system, for example when one ProReach system is considering a sharing relationship with another ProReach system. Second, the exchange policy enables trading of anonymous user group and category complexes, and user profiles. The policy can identify one or more specific users, user groups, or complexes as being available for trading. This information is anonymous, as the profiles and complexes do not contain information that can be used by the recipient to personally identify any individual user. A collection of such information is described in an information resource, which may be associated with keywords to allow other systems to more readily search for and identify the resource. An information resource may also contain one or more exclusions, which describe information (e.g., profiles, categories, groups, or complexes) that will not be traded.
For example, one ProReach-enabled site could have a SportsCustomer resource and another ProReach-enabled site could have a WomensClothing resource. These two ProReach systems could agree to make an exchange, such that the profile data of both groups is transmitted to the other ProReach system, either on a one-time basis or periodically. The data in these exchange policies makes it simpler for these ProReach systems to find each other and do some trading. The transmission of this data preferably does not include customer contact information, so that the anonymity of the web visitor is preserved across systems. However, even with this restriction, the information is still useful, because each ProReach system's database of profile information is increased.
For example, suppose that via this exchange one site gets profile information on a web visitor associated with GID9834232122, and suppose that this site has never been visited by that web visitor.
Now suppose that the web visitor with GID9834232122 visits this site. While this web visitor is new to the site, the ProReach-enabled web site already has information about the visitor: it received the profile from its exchange partner.
An exchange policy can also specify a just-in-time sharing policy. A just-in-time sharing policy indicates that profile information for a specific GID can be requested explicitly. Such requests are useful because, as a new visitor arrives at a ProReach-enabled web site, the web site can welcome the web visitor and, in the background, request profile information related to this GID from its exchange partners.
Accordingly, the global exchange policy service 612 enables ProReach systems 100 to create, delete and modify an exchange policy. Creation includes defining the information resources that the system 100 is willing to trade. The service further enables methods to create, delete and modify an information resource for an exchange policy. The service 612 then maintains a database of the listed exchange policies, and allows searching of the database by keyword, category, user group, or user GID.
Global profiles are maintained in much the same way as they are on the individual ProReach systems. However, unlike the local, system-specific profiles, the global profiles only track user interest in the categories of the standard category tree 1400. It is anticipated that this database will be quite large, and thus a high-performance, scalable database is desired. In a preferred embodiment, an Oracle8i database is used, so that any Java processing can be executed inside the actual database server.
VIII. ProReach Client Side Web Usage Data Collection
A. Web Activity Monitoring
As described above, certain web clients are ProReach enabled by including client-side software that tracks their web activity. This activity needs to be recorded only for web activity that arises on web servers 110 that are not ProReach enabled and thus do not have the ability to track web activity directly.
This activity is recorded in web event records and then uploaded to the global upload service. In one embodiment, the activity is captured by monitoring the browser during operation. One method uses browser APIs to monitor browser events and communicate with the browser, when the browser has API support for external applications. Another possible method uses a low-level Windows API or service, such as Windows Hooks, to monitor the browser's window events.
For monitoring Microsoft Internet Explorer browsers, we prefer to use a Browser Helper Object (BHO) to attach to Internet Explorer, which has a COM-based object model. A BHO is a COM in-process server registered under a certain registry key. Upon startup, Explorer looks up that key and loads all the objects whose CLSIDs are stored there. The BHO is tied to the browser's main window: each new instance of a browser window has its own BHO associated with it, and a BHO is unloaded when its browser window is destroyed. A BHO can receive notifications about Explorer OLE-COM events. There are a total of 18 different events a browser window can fire. By monitoring events such as DownloadComplete, NavigateComplete2, OnStatusBar, etc., a BHO can determine what document has been downloaded in a browser window.
Netscape browsers provide an API called NCAPI (Netscape Client API). NCAPI has two major parts: one part uses OLE, the other uses DDEML (Dynamic Data Exchange Management Library). The one of interest to ProReach client-side tracking is DDEML. Just like BHO in Explorer, an application can use NCAPI's DDEML to communicate with Netscape browsers and get notifications when certain browser activities happen. Unlike BHO, an NCAPI DDEML program is an external application, and it is tied to a Netscape process, not just a browser window. One instance of an NCAPI DDEML program can monitor all Web activities in all browser windows associated with a Netscape browser process.
B. ProReach Client Web Usage Data Filtration and Aggregation
- 1. Time-based consolidation
First, given the rapidity with which users view and move between web content, it is likely that many web events are not useful to record. Second, because the web clients 108 are not time synchronized, the recorded times in the records will not be consistent between clients. There are various mechanisms to handle these issues.
- a) Adjust web event record time stamps
Every client machine has different clock settings, so it is meaningless to record the time of the user's Web activity based on the client machine's clock alone. The ProReach client software needs to adjust the time stamp of each user Web activity against a global reference time. This adjustment is done before the web event record is uploaded:
1) ProReach client software first queries the ProReach Global Upload Service for the server's GMT reference time.
2) ProReach client software then calculates the difference in GMT time between the client machine and the ProReach server. This difference is TD.
3) ProReach client software adjusts the time stamp in each entry of web event record by adding this TD to the time stamp.
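The three steps above can be sketched as follows (Python for illustration; the record shape and field names are assumptions):

```python
def clock_offset(server_gmt, client_gmt):
    """TD: the difference between the server's GMT reference time and the
    client's clock, both as Unix timestamps."""
    return server_gmt - client_gmt

def adjust_records(records, td):
    """Rewrite each web event record's timestamp into server time by
    adding TD. Records are dicts with a 'ts' field (illustrative)."""
    return [dict(r, ts=r["ts"] + td) for r in records]
```

A client whose clock runs 30 seconds behind the server gets TD = +30, so every local timestamp moves forward by 30 seconds before upload.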
- b) Ignore short-term activities
If a web activity lasts for a very short time, for example, less than 10 seconds, ProReach will not record it in a web event record. This may happen while a user is using the browser's back/forward button to search for a previously visited URL or when a user is navigating through links.
- c) Aggregate Web activities
As mentioned before, multiple occurrences of the same Web activity are aggregated. This aggregation is done on the fly, while the URL is being captured by the ProReach client software. To speed up computation, the ProReach client software uses a hash table to store the WURs.
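Steps b) and c) together amount to a small filter-and-aggregate pass. A sketch (Python; the 10-second cutoff follows the text, while the record shape and names are illustrative):

```python
MIN_DURATION = 10  # seconds; shorter visits are treated as navigation noise

def aggregate(events):
    """events: iterable of (url, duration_seconds) observations.
    Returns {url: {"visits": n, "total_duration": s}}, using a dict as
    the hash table the text mentions."""
    table = {}
    for url, duration in events:
        if duration < MIN_DURATION:
            continue                      # b) ignore short-term activity
        rec = table.setdefault(url, {"visits": 0, "total_duration": 0})
        rec["visits"] += 1                # c) aggregate repeat visits
        rec["total_duration"] += duration
    return table
```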
2. Other Filtration of Data
To further limit the data collected, the client 108 also filters out and does not store web event records for accesses to the user's home page. However, the user's homepage may be stored in the user's profile to provide additional demographic or other interest information about the user.
As noted, when the client 108 is visiting a ProReach enabled web server 102, there is no need for the client 108 to capture web events. Accordingly, whenever the client 108 observes URLs for web servers 102, or domains served by such servers, it does not store the web activity data.
3. Privacy Control
ProReach client 108 users agree to use ProReach client software based on “informed consent.” ProReach system provides an explicit privacy statement to potential users before they become ProReach client software users, so that users will know that their activity is being tracked and recorded. The ProReach client software contains a user-modifiable control mechanism and a default control mechanism. The default control mechanism addresses the control of common privacy related issues that can be applied to all users. These mechanisms allow the user to filter web activity data from being recorded according to user preference.
C. Filtration based on privacy settings (User modifiable)
ProReach client software supports configurable user privacy preferences and at least two types of filtration based on user privacy settings: URL pattern-based filtration and keyword-based filtration.
1. URL pattern-based filtration
ProReach client software allows users to set patterns for the URLs they do not want recorded and shared with a ProReach system 100. A URL pattern can be a complete URL, the domain part of a URL, or part of a URL with wild-card characters. Examples of URL patterns include:
- 1) A complete URL:
- 2) A partial URL:
- 3) The domain part of a URL:
- 4) Wild-card pattern:*xyz*
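One plausible reading of these matching rules, sketched in Python (the precise precedence of full-URL, prefix, domain, and wild-card matching is not fixed by the text):

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

def blocked(url, patterns):
    """True if the URL matches any user-supplied privacy pattern: a full
    URL, a URL prefix, the domain part of a URL, or a wild-card pattern
    such as *xyz*."""
    url_l = url.lower()
    host = urlparse(url).netloc.lower()
    for p in patterns:
        p = p.lower()
        if "*" in p:                       # 4) wild-card pattern
            if fnmatch(url_l, p):
                return True
        elif url_l == p or url_l.startswith(p) or host == p:
            return True                    # 1) complete, 2) partial, 3) domain
    return False
```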
2. Keyword-based filtration
Users can specify a list of keywords as part of their privacy preference settings. The ProReach client software matches the content of the captured URL against the keywords, and if any keyword matches, the URL is not recorded in a web event record. Keyword matching includes single words, multiple single words, and phrases. In one embodiment, to reduce the overhead of this process on the user's computer, by default the client 108 only performs the keyword match on the document title and the HTML “keyword” <meta> tag. Alternatively, keyword matching over the entire document content is provided as a user-selectable option.
In one embodiment, the ProReach client software provides standard keyword templates for its users. Each template is based on a specific category or categories from the ProReach standard category tree. Users also have the option to add more keywords to a specific template. Again, when keywords from the template match a page of web content, the URL is not recorded.
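A minimal sketch of the default title/meta matching (Python; the case-insensitive word-boundary rule is my simplification of "single word, multiple single word, and phrases"):

```python
import re

def keyword_blocked(title, meta_keywords, keywords):
    """True if any privacy keyword (single word or phrase) appears in the
    document title or the HTML keyword <meta> tag."""
    haystack = f"{title} {meta_keywords}".lower()
    for kw in keywords:
        # match the whole word or phrase on word boundaries
        if re.search(r"\b" + re.escape(kw.lower()) + r"\b", haystack):
            return True
    return False
```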
D. Default privacy-related filtration
ProReach client software supports a default policy on privacy-related accesses to the user's Web activity data. One privacy-related activity is the user login process. Many Web sites use a simple HTML form-based login, and the user login information is sent to a CGI program by an HTTP “GET” request. In such cases, the user's login data are all included in the URL, and the ProReach client software can capture all of those data. In its simplest form, the login data may not even be encrypted before being sent from the user's Web browser. If the ProReach client software treated such URLs without discrimination and sent them in their entirety to the ProReach system, it might inadvertently disclose private information. Any person who has control of a ProReach system could then get access to many people's very private information, such as bank account numbers, social security numbers, etc. Accordingly, the ProReach client software makes it a default policy to filter and strip off the login data contained in the URL. For example, if user Joe is trying to log in to XYZ bank's online service via a browser, the URL may look like:
In this case, the ProReach client software either strips off the sub-string in the URL after “?” or ignores the entire URL completely.
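The default policy reduces to a one-line transformation (Python sketch; the function name and flag are illustrative):

```python
def scrub_url(url, drop_entirely=False):
    """Default privacy policy for form logins submitted via GET: either
    strip everything after '?' or ignore the URL completely."""
    if "?" not in url:
        return url
    return None if drop_entirely else url.split("?", 1)[0]
```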
E. ProReach Client Data Upload
1. ProReach client upload queue
ProReach client software maintains an upload queue, implemented using the file system of the client computer's operating system. Each item in the upload queue is a file. The file name has a fixed portion and a variable portion; the variable portion is a number, and the ProReach client software maintains a counter for this queue number. For example, files can be named ProReach1.WER, ProReach2.WER, ProReach3.HOM, etc. “WER” means the upload item is a list of web event records, while “HOM” means the upload item is the URL of the user's browser startup page. The counter is reset to 0 when the queue is empty. The upload queue has a pre-set size and is FIFO (First In, First Out). If the upload queue is full and new data needs to be inserted, the first item in the queue is discarded. The upload queue size is made large enough (500K, for instance) that no data is discarded before it is uploaded; data is discarded only after a successful upload or after some number of repeated upload attempts.
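A sketch of the queue behavior (Python; the real queue is files on disk, but here items are kept in memory with the same naming scheme and FIFO discard rule):

```python
from collections import deque

class UploadQueue:
    """FIFO upload queue sketch with the ProReachN.WER/.HOM naming scheme
    and a byte-size cap; oldest items are discarded when the cap is hit."""
    def __init__(self, max_bytes=500 * 1024):
        self.items = deque()              # (name, payload) pairs
        self.max_bytes = max_bytes
        self.counter = 0

    def _size(self):
        return sum(len(p) for _, p in self.items)

    def push(self, payload, kind="WER"):  # "WER" events, "HOM" homepage URL
        self.counter += 1
        name = f"ProReach{self.counter}.{kind}"
        self.items.append((name, payload))
        while self._size() > self.max_bytes:
            self.items.popleft()          # queue full: discard oldest first
        if not self.items:
            self.counter = 0              # counter resets on an empty queue
        return name
```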
2. ProReach Upload Stream and Upload Record
A ProReach upload stream represents the data uploaded in one upload session, and can be composed of several upload records. The upload stream has a head and a data part. The head marks the beginning of the upload stream and contains the ProReach Global ID of the user and the number of upload records contained in the stream. The data portion contains one or more upload records, each corresponding to an upload item in the upload queue. There are two types of upload records: web event records and HOM records. Each upload record also has a head and a data part: the head marks the beginning of the record, and the data is the actual upload data. The head of the upload record contains the head divider, the name of the upload queue item for this record, the upload record number, the length of the data (excluding head and record dividers), and the number of records in the data portion. The heads of both the upload stream and the upload record have fixed lengths; the web event records and the HOM records have variable lengths. The ProReach client software uses a non-printing character as the record divider.
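The framing can be sketched as follows (Python; the divider byte and field order are illustrative, and the per-record "records in data" count is omitted for brevity; the text only fixes which fields exist):

```python
DIV = "\x1e"  # a non-printing character used as the divider

def build_stream(gid, items):
    """Serialize an upload stream: a head (GID + record count) followed
    by one upload record per queue item, each with its own small head
    (name, record number, data length) and then the data."""
    parts = [f"{gid}{DIV}{len(items)}"]
    for num, (name, payload) in enumerate(items, start=1):
        head = f"{DIV}{name}{DIV}{num}{DIV}{len(payload)}{DIV}"
        parts.append(head + payload)
    return "".join(parts)
```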
3. Data upload
- a) Web Event Record upload
ProReach client software uploads the captured web events at pre-configured time intervals. This interval is pre-determined and preferably cannot be reset by the ProReach client user; the preferred interval is between 15 and 30 minutes.
- b) Homepage URL upload
This upload is an infrequently scheduled task, since it is unlikely that a user will change the startup page daily or weekly. Each time the ProReach client software is started, it checks whether the user's browser startup page has changed. If it has, the ProReach client software inserts a “HOM” upload record into the upload queue. It performs this operation only if the startup page is a Web page designated with the “http” protocol; it does not do this if the startup page is a local file.
4. Upload time and upload stages
Let's discuss ProReach client software operation related to data upload in three stages: pre-upload, upload, and post-upload. Upload is needed only if the web event records in memory are not empty or the upload queue is not empty. There are two conditions for uploading:
- 1) On a pre-set interval, when the user is connected to the Internet and the web event record in memory is not empty or the upload queue is not empty.
- 2) When a new browser process is started and the upload queue is not empty.
- a) Pre-upload stage
Before uploading a web event record:
- 1) Adjust time stamps.
- 2) Add the current web event record in memory to the ProReach upload queue. In addition to doing this at the pre-set upload times, the ProReach client software needs to add the in-memory web event record to the upload queue when it exits.
- b) Upload stage
ProReach client software always uploads data from the upload queue. The ProReach client software has an “upload threshold”: the maximum amount of data that can be uploaded during each upload. During ProReach client software initialization, this threshold is calculated from the client computer's modem speed; it is desirable to limit each upload task to no more than 5 seconds. For example, if a client has a 14.4K modem, the upload threshold will be (14.4K/8)*5 = 9K bytes. At each upload time, the ProReach client software checks the sizes of the items in the upload queue and uploads data up to the threshold. As an example, assume there are three items in the upload queue: item 1 is 1K, item 2 is 6K, and item 3 is 5K. Only items 1 and 2 will be uploaded in the current upload; item 3 is left for the next upload. If any upload item is larger than the upload threshold, it is divided into smaller items before the actual upload. If a user has a fast network connection, the threshold will be larger; the user's network connection speed is detected by the ProReach client software during initialization.
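The threshold arithmetic and the FIFO item-selection rule can be sketched directly (Python; the function names are mine):

```python
def upload_threshold(bits_per_second, seconds=5):
    """Bytes that fit in one upload burst: link speed divided by 8 bits
    per byte, times the 5-second budget (14.4 kbit/s -> 9000 bytes)."""
    return (bits_per_second // 8) * seconds

def select_items(sizes, threshold):
    """Greedily take queue items in FIFO order until the next one would
    exceed the threshold; oversized items would be split in practice."""
    taken, used = [], 0
    for i, size in enumerate(sizes):
        if used + size > threshold:
            break
        taken.append(i)
        used += size
    return taken
```

With the 1K/6K/5K example from the text and a 9K threshold, only the first two items fit.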
- c) Post-upload stage
After the upload, the ProReach client software has to wait for acknowledgment from the ProReach Global Upload Service on uploaded data before it can discard the uploaded data. If there are no acknowledgments, the same items in the upload queue could be uploaded repeatedly until acknowledgments are received. Since there is a limit on the size of the upload queue, items uploaded previously without acknowledgments will be discarded eventually. However, if that happens, it usually means there are some serious problems with either the network or the ProReach Global Upload Service.
5. ProReach Upload Service and upload
As mentioned in previous sections, ProReach client software has to wait for acknowledgment from the Upload Service before it can discard upload items in the upload queue. The ProReach Global ID in the header of the upload stream tells the Upload Service where it comes from and what user the uploaded data is associated with. The Upload Service will check information contained in the Upload Stream header and the Upload Record headers to make sure all data are received successfully. The Upload Service will then send an Acknowledgment Record to the ProReach client to note it has successfully received the upload stream. The Acknowledgment Record contains a header and the data. The header contains a number that represents the number of names contained in the data part of the acknowledgment. The data part is a string with names of received upload items; the names are separated by “,”. After the ProReach client software has received the acknowledgment record, it deletes upload queue items whose names match the names in the acknowledgment record.
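The acknowledgment handling can be sketched as follows (Python; the "count:names" wire layout is an assumption — the text only fixes that the header carries a count and the data carries comma-separated names):

```python
def parse_ack(ack):
    """Acknowledgment record: a count in the header, then the
    comma-separated names of the upload items received."""
    count_str, _, names_str = ack.partition(":")
    names = [n for n in names_str.split(",") if n]
    assert int(count_str) == len(names), "corrupt acknowledgment"
    return names

def apply_ack(queue_names, ack):
    """Delete every queue item whose name appears in the acknowledgment;
    unacknowledged items stay queued and are re-uploaded later."""
    acked = set(parse_ack(ack))
    return [n for n in queue_names if n not in acked]
```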
This client data upload can be done via HTTP. In this case, the Global Upload Service resides on a Web server 112, or must otherwise be able to handle the HTTP protocol, and the ProReach client software is implemented as an HTTP client (agent). The ProReach upload stream is sent as an HTTP POST request. A timeout is set for the ProReach client to wait for the Upload Service HTTP server's reply to that POST request; if the ProReach client does not get a reply within the timeout, the upload data stream is resent later.
IX. Content Targeting
One of the features of ProReach is enabling targeted content delivery for web visitors. The services running on the web server that deliver this targeted content need a mechanism to access the profile of the current web visitor, or to access the category information about a given page the web visitor has selected. ProReach makes this possible by exposing APIs for Java, C, or Perl to access the ProReach data on visitor profiles and page categorization.
There are two scenarios where a dynamic web server process would need to access the ProReach data at runtime from a CGI or filter/module:
A. Access to Profile by a CGI
Each ProReach server 102 maintains a database of visitor profiles for each visitor that has ever visited a site within this ProReach hub's network; this is the profile table of the database 720. In one Java implementation of this database 720, a visitor profile object is composed of a vector of interests that indicates the categorization of this web visitor's activities. This Java-based visitor profile also contains several methods for accessing string-valued data, such as the web visitor's real name and postal address, which may be utilized in targeting web advertising to this visitor. For instance, it would not be useful to show a web visitor an advertisement for an auto transmission shop that does not exist in the region where the web visitor lives.
We have described above the process of uniquely identifying the web visitor via the GID using the HTTP protocol redirect functionality and cookies. If a ProReach-enabled system 100 wants to enable targeted content delivery, it can use a similar method to get the profile for the web visitor.
Some web sites may wish to access Profile data from a Java Servlet or application, and in this case an API is provided. Some examples of access to the Java API are listed below:
VisitorProfile joeUser = new VisitorProfile(ProReachGID); // constructor for a visitor profile, takes the GID as input

for (int i = 0; i < joeUser.interestvec.length; i++) {
    /* Each profile contains a vector of interest names and integer values,
       called interestvec here. This loop prints all of the interest names
       and values for this web visitor. */
    System.out.println("interest " + joeUser.interestvec[i].get_name()
        + " score is " + joeUser.interestvec[i].get_value());
}

int interest_value = joeUser.interestvec[interest_index].get_value(); // get the interest value given the index
String interest_name = joeUser.interestvec[interest_index].get_name(); // get the interest name given the index

Identity joesData = new Identity(ProReachGID); // constructor for the demographic portion of the profile
joesData = joeUser.identity; // alternatively, get the identity out of the profile
String firstname = joesData.firstname; // first name from the demographic portion of the profile
String lastname = joesData.lastname; // last name from the demographic portion of the profile
String email = joesData.email; // email from the demographic portion of the profile
String address1 = joesData.address1; // address from the demographic portion of the profile
String day_phone = joesData.day_phone; // phone from the demographic portion of the profile
B. Access to Page Metadata by a CGI
The ProReach server maintains a database of categorizations for every page of the site, called Page Metadata 716. The method described above for using the HTTP protocol to access profiles on the ProReach Spoke can also be used to efficiently access page metadata. This solution for getting the metadata about a page at runtime only works if a mapping exists between all of the possible URLs of the site and their categorizations. This mapping is created by the Page Content Spider, a tool used by the web master to pre-categorize all of the web pages on the site before it goes into production. The Page Metadata Service can then use this data to service requests for page categorizations from the ProReach-enabled web server (see Chapter 14 for more information on the Page Metadata Service).
Some web sites have a single entry point for all page requests that come into their web server, such as an IIS filter, an Apache module, or a servlet. If such an architecture already exists on the ProReach customer's web site, or can be implemented on the ProReach-enabled web site, we can take advantage of it to optimize the site's access to page metadata. A web developer may design a filter, module, or servlet that first reads the entire mapping into main memory, and then indexes into this in-memory structure to access a page's metadata in the fastest way at runtime.
In the Java language, the PageIndex object could be derived from a hash object. The PageIndex object returns a Vector of category scores for each valid URL object used to index into it:
PageIndex pageIndex = new PageIndex(siteIdentifier); // constructor for the page metadata object
Vector cat = pageIndex.get_value(url); // retrieve the categorization for the page given the URL
Vector catByIndex = getCatFromPageIndex(index); // retrieve the categorization of a page given its index
Below is a static method to perform the same task in Java in the case where the CGI only needs the category vector for a single page:
Vector cat = getCatFromUrl(url); // a static method call to get the categorization for one URL
- a) Handling dynamic content categorization of multipart pages at runtime
The above solutions for server-side content targeting and page classification require that each URL requested from the server has been pre-categorized. Another embodiment provides a solution to web site developers who build pages from many component documents, and cannot or do not wish to categorize all of the possible permutations used to form the composite documents.
To implement this feature, we provide a function such as getCategoryFromComponents(A, B, C, etc.). Here A, B, and C are documents that are subcomponents of a page and have been pre-categorized and stored in the Page Metadata. The system administrators of the ProReach site then instrument the site CGIs that compose pages from components to make the above ProReach API call for each component. This provides the capability to determine at runtime the composite categorization derived from the component categorizations.
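Under stated assumptions (the patent does not fix the combination arithmetic), the composite categorization could be computed like this (Python; averaging per-category scores is one plausible rule):

```python
def category_from_components(*component_cats):
    """Combine pre-computed component categorizations into one score
    vector for the composite page. Each argument is a {category: score}
    mapping for one subcomponent; scores are averaged per category."""
    combined = {}
    for cats in component_cats:
        for category, score in cats.items():
            combined[category] = combined.get(category, 0.0) + score
    n = len(component_cats)
    return {c: s / n for c, s in combined.items()}
```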
Claims (2)
NewScore_i = CategoryScore_i * Duration_i * Constant
Question
mod_wsgi configuration with apache2
hello there
I'm running the Python Bottle framework with Apache2 via mod_wsgi handling.
The problem I have is that my website (IP address) still displays the 500 error page,
without showing the HTML pages rendered by the WSGI script!
This is the app.wsgi file (the error is not in this file, I'm sure):
import sys, os, bottle
sys.path.append('/var/www/apabtl/')
import hello
application = bottle.default_app()
- hello is my Bottle script (which returns and renders the HTML pages)
This is my 000-default.conf:
ServerName 147.178.331.204
ServerAdmin YU@gmail.com
ServerRoot "/var/www/apabtl"
DocumentRoot "/var/www/apabtl"
#ServerName 147.178.331.204
WSGIDaemonProcess apabtl threads=5
WSGIScriptAlias / /var/www/apabtl/app.wsgi
<Directory /var/www/apabtl>
    <Files app.wsgi>
        Require all granted
    </Files>
</Directory>
- apabtl is my application.
please can you help me (:
Hi @say090u,
Do you by any chance have any errors in your logs? From the look of your configuration I assume your application is misbehaving. Do you know if your application works in Python without Apache?
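One stdlib-only way to answer that last question is to drive the WSGI callable directly, with no Apache or network involved. The stand-in app below plays the role of what `bottle.default_app()` returns in app.wsgi (names are illustrative; swap in your real `application`):

```python
from wsgiref.util import setup_testing_defaults

# Stand-in WSGI callable; with Bottle this would be the object produced
# by `application = bottle.default_app()` exactly as in app.wsgi.
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

def get(app, path):
    """Call the WSGI callable directly: build a minimal environ, capture
    the status line, and collect the response body."""
    environ = {"PATH_INFO": path}
    setup_testing_defaults(environ)       # fills in the required WSGI keys
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    body = b"".join(app(environ, start_response))
    return captured["status"], body
```

If this works in plain Python but Apache still returns 500, the bug is on the Apache/mod_wsgi side, and /var/log/apache2/error.log should say what failed (often a bad sys.path or a failing `import hello`).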
Discussion in 'BlackHat Lounge' started by jonross, Oct 27, 2015.
Is he? Did he die?
He is dead
He's alive at that particular moment, though
RIP Glenn..........
I don't think so.
If you saw the part after the guy shot himself, it's possible that he could have fallen on top of Glenn and distracted the zombies.
In earlier seasons, the group would cover themselves in blood to "reek of death", which would cause the zombies to ignore them. If that suicidal guy was aware of that, then that scene could have been an act of redemption.
Glenn wasn't on the Talking Dead, which is peculiar as they always feature the dead main character on there.
Usually when a main character dies, they die in an "eventful" way and not by the coward who got the Everybody Hates Chris guy killed.
Supposedly on the Talking Dead (because I don't watch that), they weren't really referring to Glenn as deceased nor was he featured as a deceased character.
Glenn's character plays a vital role that can't be replaced in the comics a little bit later.
The death scene is ambiguous; you didn't really see him die, you just saw him crying with zombie hands tearing into "someone". The angle of the camera suggests that this was on purpose.
There's a photo taken from a bit later in the season showing Glenn with the group being filmed during a scene. I'll see if I can find it and post it here.
Glenn still has protagonist plot armour that's nearly comparable to Daryl's and they only "killed" him to build up hype and get the old viewers to return (much like Family Guy did with Brian not too long ago).
Edit: Look at this. Unless Glenn suddenly developed huge pecs, it looks like the suicidal guy is on top of him. The colour of Nicholas' shirt can also be seen before they tear into him (Glenn's shirt was brown).
They gave Tyrese an entire episode when he died.
I assume Glenn would get a much longer build up if he actually died.
I don't think he is! He is still alive!
That was totally Nicholas on top of him getting torn to pieces. I'm going to assume that he survives the same way they did in the one episode where they covered themselves in blood. He will likely die this season, but probably not in that crappy fashion.
What a bummer. He survived.
Nearly a month later... And upperminds and I were right.
That protagonist plot armour is thick.
I hate you guys.
He is still alive, I'm sure
Why does this make so much sense?
Which part are you referring to?
We called that shit, Zwielicht.
The last time I tore into a recently deceased person blood didn't squirt like that, you need a heart to do that shit.
Yeah, but that wasn't blood. If you look in the episode, Nicholas is clearly sipping on a Grande Peppermint Mocha (no whip) right before shooting himself in the head. That thing that looks like a blood spurt is most likely just spillage from the deadbros tearing into the controversial red cup.
no no
ok, now I can read this thread. Ya bastages.
For some reason I find morgan kinda annoying.
Separate names with a comma. | https://www.blackhatworld.com/seo/is-glenn-dead.797691/ | CC-MAIN-2017-51 | refinedweb | 570 | 83.15 |
;
Let's look at both pass by value and pass by reference to make sure we understand the difference. Here is a little example that illustrates how parameters are passed by value. Here's what that looks like:
int a = 5; int b = 9; swap(a,b); // main pgm function call ... void swap(int x, int y) // pass by value { int temp; temp = x; x = y; y = temp; return; }Simple, direct - right? Well yes, but the values of a and b in the main program have not been changed! If that is what you really wanted to do, you should have used pass by reference.
To do this, you need to change
the function prototype and header to
void swap(int & x, int & y)
Here is what the whole program looks like.
//Filename: swap.cpp #include <string.h> #include <iostream> using namespace std; void swap (int& x, int& y); int main () { int a = 5; int b = 9; cout << "This program exchanges 2 values using reference parameters." << endl; cout << "Values before the exchange:" << endl; cout << "a= " << a << " b= " << b << endl; swap(a, b); // code that calls the function cout << "Values after the exchange:" << endl; cout << "a= " << a << " b= " << b << endl; } // function for passing by reference void swap (int& x, int& y) { int temp; temp = x; x = y; y = temp; return; } // end swap
Now, when the function is executed, the values of a and b will be changed in the main program.
(Click to download swap.cpp)Let us reimplement swap with pointers. This is how pass by reference was done in traditional C (before C++)
// function for passing by reference using pointers void swap (int* x, int* y) { int temp; temp = *x; *x = *y; *y = temp; return; } // end swap
Why do we have to put the * in front of x and y when we are doing assignment?
Note that our call from main will be:
swap (&a, &b);
It may seem like we have added a level of complexity to pass by reference, but trust me, this information will help you if you ever work with traditional C functions (found in CS330).
Next, we will look at how pointers are used with C++ structures.
struct Student // define the structure { string name; gurus (or nullptr) after you delete the associated dynamic data space. e.g.
tmpPtr = NULL; //or tmpPtr = nullptr weightPtr = NULL; //or weightPtr = nullptr
Segmentation fault - core dump.This is a really ugly way of punting you and your program out of computer memory. Unix gurus emacs, you can search for a particular string (such as the name of a pointer) by entering: C-s search_string | http://www.cs.uregina.ca/Links/class-info/115/10-pointers/ | CC-MAIN-2017-43 | refinedweb | 435 | 74.32 |
Introduction to MetPy
Introduction to MetPy
Unidata Python Workshop
Questions¶
- What is MetPy?
- How is MetPy structured?
- How are units handled in MetPy?
Objectives¶
What is MetPy?¶
MetPy is a modern meteorological open-source toolkit for Python. It is a maintained project of Unidata to serve the academic meteorological community. MetPy consists of three major areas of functionality:
Plots¶
As meteorologists, we have many field specific plots that we make. Some of these, such as the Skew-T Log-p require non-standard axes and are difficult to plot in most plotting software. In MetPy we've baked in a lot of this specialized functionality to help you get your plots made and get back to doing science. We will go over making different kinds of plots during the workshop.
Calculations¶
Meteorology also has a common set of calculations that everyone ends up programming themselves. This is error-prone and a huge duplication of work! MetPy contains a set of well tested calculations that is continually growing in an effort to be at feature parity with other legacy packages such as GEMPAK.
File I/O¶
Finally, there are a number of odd file formats in the meteorological community. MetPy has incorporated a set of readers to help you deal with file formats that you may encounter during your research.
Units and MetPy¶
In order for us to discuss any of the functionality of MetPy, we first need to understand how units are inherently a part of MetPy and how to use them within this library.
Early in our scientific careers we all learn about the importance of paying attention to units in our calculations. Unit conversions can still get the best of us and have caused more than one major technical disaster, including the crash and complete loss of the $327 million Mars Climate Orbiter.
In MetPy, we use the pint library and a custom unit registry to help prevent unit mistakes in calculations. That means that every quantity you pass to MetPy should have units attached, just like if you were doing the calculation on paper! Attaching units is easy:
# Import the MetPy unit registry from metpy.units import units
length = 10.4 * units.inches width = 20 * units.meters print(length, width)
10.4 inch 20 meter
Don't forget that you can use tab completion to see what units are available! Just about every imaginable quantity is there, but if you find one that isn't, we're happy to talk about adding it.
While it may seem like a lot of trouble, let's compute the area of a rectangle defined by our length and width variables above. Without units attached, you'd need to remember to perform a unit conversion before multiplying or you would end up with an area in inch-meters and likely forget about it. With units attached, the units are tracked for you.
area = length * width print(area)
208.0 inch * meter
That's great, now we have an area, but it is not in a very useful unit still. Units can be converted using the
.to() method. While you won't see m$^2$ in the units list, we can parse complex/compound units as strings:
area.to('m^2')
- Create a variable named speed with a value of 25 knots.
- Create a variable named time with a value of 1 fortnight.
- Calculate how many furlongs you would travel in time at speed.
# YOUR CODE GOES HERE
# %load solutions/distance.py # Cell content replaced by load magic replacement. speed = 25 * units.knots time = 1 * units.fortnight distance = speed * time print(distance.to('furlongs'))
77332.22424242426 furlong
Temperature¶
Temperature units are actually relatively tricky (more like absolutely tricky as you'll see). Temperature is a non-multiplicative unit - they are in a system with a reference point. That means that not only is there a scaling factor, but also an offset. This makes the math and unit book-keeping a little more complex. Imagine adding 10 degrees Celsius to 100 degrees Celsius. Is the answer 110 degrees Celsius or 383.15 degrees Celsius (283.15 K + 373.15 K)? That's why there are delta degrees units in the unit registry for offset units. For more examples and explanation you can watch MetPy Monday #13.
Let's take a look at how this works and fails:
We would expect this to fail because we cannot add two offset units (and it does fail as an "Ambiguous operation with offset unit").
10 * units.degC + 5 * units.degC
On the other hand, we can subtract two offset quantities and get a delta:
10 * units.degC - 5 * units.degC
We can add a delta to an offset unit as well:
25 * units.degC + 5 * units.delta_degF
Absolute temperature scales like Kelvin and Rankine do not have an offset and therefore can be used in addition/subtraction without the need for a delta verion of the unit.
273 * units.kelvin + 10 * units.kelvin
273 * units.kelvin - 10 * units.kelvin
# 12 UTC temperature temp_initial = 20 * units.degC temp_initial
Maybe the surface temperature increased by 5 degrees Celsius so far today - is this a temperature of 5 degC, or a temperature change of 5 degC? We subconsciously know that its a delta of 5 degC, but often write it as just adding two temperatures together, when it really is:
temperature + delta(temperature)
# New 18 UTC temperature temp_new = temp_initial + 5 * units.delta_degC temp_new
# YOUR CODE GOES HERE
# %load solutions/temperature_change.py # Cell content replaced by load magic replacement. temperature_change_rate = -2.3 * units.delta_degF / (10 * units.minutes) temperature = 25 * units.degC dt = 1.5 * units.hours print(temperature + temperature_change_rate * dt)
13.5 degree_Celsius
MetPy Constants¶
Another common place that problems creep into scientific code is the value of constants. Can you reproduce someone else's computations from their paper? Probably not unless you know the value of all of their constants. Was the radius of the earth 6000 km, 6300km, 6371 km, or was it actually latitude dependent?
MetPy has a set of constants that can be easily accessed and make your calculations reproducible. You can view a full table in the docs, look at the module docstring with
metpy.constants? or checkout what's available with tab completion.
import metpy.constants as mpconst
mpconst.earth_avg_radius
mpconst.dry_air_molecular_weight
You may also notice in the table that most constants have a short name as well that can be used:
mpconst.Re
mpconst.Md
MetPy Calculations¶
MetPy also encompasses a set of calculations that are common in meteorology (with the goal of have all of the functionality of legacy software like GEMPAK and more). The calculations documentation has a complete list of the calculations in MetPy.
We'll scratch the surface and show off a few simple calculations here, but will be using many during the workshop.
import metpy.calc as mpcalc import numpy as np
# Make some fake data for us to work with np.random.seed(19990503) # So we all have the same data u = np.random.randint(0, 15, 10) * units('m/s') v = np.random.randint(0, 15, 10) * units('m/s') print(u) print(v)
[14.0 2.0 12.0 5.0 3.0 5.0 14.0 8.0 9.0 10.0] meter / second [6.0 10.0 7.0 11.0 10.0 13.0 2.0 3.0 5.0 0.0] meter / second
Let's use the
wind_direction function from MetPy to calculate wind direction from these values. Remember you can look at the docstring or the website for help.
direction = mpcalc.wind_direction(u, v) print(direction)
[246.80140948635182 191.30993247402023 239.74356283647072 204.44395478041653 196.69924423399362 201.03751102542182 261.86989764584405 249.44395478041653 240.94539590092285 270.0] degree
- Calculate the wind speed using the wind_speed function.
- Print the wind speed in m/s and mph.
# YOUR CODE GOES HERE
# %load solutions/wind_speed.py # Cell content replaced by load magic replacement. speed = mpcalc.wind_speed(u, v) print(speed) print(speed.to('mph'))
[15.231546211727817 10.198039027185569 13.892443989449804 12.083045973594572 10.44030650891055 13.92838827718412 14.142135623730951 8.54400374531753 10.295630140987 10.0] meter / second [34.0719985051177 22.81236360769857 31.076512145333307 27.029004056895516 23.354300529953807 31.156917227058244 31.63505642387918 19.11239205734952 23.030668711943 22.36936292054402] mile_per_hour
As one final demonstration, we will calculation the dewpoint given the temperature and relative humidity:
mpcalc.dewpoint_from_relative_humidity(25 * units.degC, 75 * units.percent) | https://unidata.github.io/python-training/workshop/Metpy_Introduction/introduction-to-metpy/ | CC-MAIN-2021-25 | refinedweb | 1,396 | 67.15 |
More than one year ago, two colleagues from my company went to a conference and came back talking about the Selenium test framework, which seemed very attractive at first sight. But we soon realized that it is very difficult to maintain and refactor, which matters a great deal in an agile environment. After working on the Selenium tests for a project, I came up with the idea of using abstract object modules to add another tier to the Selenium framework. The framework was written in Java, and later a couple of colleagues from other teams tried to use it, but found it quite difficult because it lacked expressiveness. Recently, I rewrote the framework in Groovy to make it much easier to use and added a lot of functionality that would be difficult to implement without the dynamic nature of Groovy. I made it open source so that more people can use it and I can improve it with your suggestions and comments.
Tellurium is developed for anyone who needs to write Selenium tests, including developers, QA people, or anyone who knows XPath or HTML markup. Tellurium includes a DSL executor, so you can write your tests in pure DSL, but by writing Java test cases you gain the ability to modularize your code.
The Tellurium framework is mainly written in Groovy and the code can be compiled into a jar file. You can include the jar file in your lib directory and write your JUnit tests by extending TelluriumJavaTestCase if you want all your test code in one Java class, or by extending BaseTelluriumJavaTestCase if your test code spans multiple files that you want to put in a test suite.
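As a hedged sketch, a UI module can be defined in Groovy and then driven from a JUnit test. All class names, uids, locator attributes, and the doSearch helper below are illustrative, not taken from the Tellurium distribution:

```groovy
// Sketch: a UI module defined by extending DslContext.
// The uids and locators are assumptions for illustration only.
class GoogleSearchModule extends DslContext {
    public void defineUi() {
        ui.Container(uid: "google_start_page", clocator: [tag: "td"]) {
            InputBox(uid: "searchbox", clocator: [title: "Google Search"])
            SubmitButton(uid: "googlesearch", clocator: [name: "btnG"])
        }
    }

    // a helper method that drives the module
    def doSearch(String term) {
        // actions refer to objects by UiID, i.e. "module_uid.object_uid"
        type "google_start_page.searchbox", term
        click "google_start_page.googlesearch"
    }
}
```

A JUnit test extending TelluriumJavaTestCase would then instantiate the module, call defineUi() once (for example in a @BeforeClass method), and invoke doSearch() from its @Test methods.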
The second way is to write Tellurium tests in Groovy: users can use TelluriumGroovyTestCase for a single test and TelluriumSuiteGroovyTestCase for a test suite.
The other way is to write DSL scripts directly; that is to say, your tests will be pure DSL scripts. This is really good for non-developers or QA people. The DslScriptExecutor can be used to run the .dsl files.
Tellurium comes with an embedded Selenium server, so you do not need to set up an external Selenium server. You can still use an external Selenium server by changing the settings in the configuration file, TelluriumConfig.groovy.
If your tests are written in Java, just run them like JUnit test cases or test suites. If your test code is written in pure DSL, use "DslScriptExecutor dsl_file" to run it.
There are a lot of ways, such as Selenium IDE. For Firefox, the best way is to use the XPather plugin, which shows the XPath of an element when you click on it in the page. You may need to refactor the XPath a bit to make it simpler and more robust. Another good Firefox plugin is Web Developer, which lets you look at the DOM at run time.
Tellurium is tested under Firefox, and it should support other browsers since it is built on top of the Selenium framework.
Not really. As long as the locators you specify are correct at run time, you can define UI objects from different pages in one class, which is fine for pure DSL. But for Java test cases, I would suggest you write the UI modules of a page as a separate class to make them easier to maintain and refactor.
Selenium tests are themselves functional or integration tests. Although Tellurium uses JUnit, it is still functional or integration testing. You can put a lot of test cases into a test suite, and the test suite can serve as your UAT test. Furthermore, you can also create data-driven tests using TelluriumDataDrivenTest.
Sure, Tellurium has a lot of pre-defined UI objects, which will handle all the actions and data automatically for you. For example, you can define a template for a table element, and the framework will automatically locate the table[i][j] element and support the different actions on it.
The object to locator mapping (OLM) framework is available from version 0.3.0 and consists of several parts. The first maps a UI id in the format "search1.inputbox1" to the actual object you defined in your object module. The second automatically maps the parameters you provide for a UI to its actual locator: for example, you can say the UI object has the tag "input" and certain other attributes, and Tellurium will try to create the XPath for you to make your life easier. The third utilizes the group object concept to locate the UI module in the DOM at run time.
Since Tellurium requires Groovy support, you can use Eclipse, NetBeans, or IntelliJ with a Groovy plugin if you want to work on the Tellurium source code. IntelliJ is recommended because its Groovy plugin is excellent. If you only need to use Tellurium as a jar file, you can use any IDE that supports Java.
Make sure your web browser is in your environment path. If you still cannot make it work, check the settings in the embedded server section of the configuration file TelluriumConfig.groovy. Alternatively, change the file SeleniumConnector.groovy, although this is not really recommended. Change the following line:
sel = new DefaultSelenium("localhost", port, "*chrome", baseURL);
First, you need to create your UI object Groovy class by extending class UiObject, or Container if it is a container-type object. Then, create your UI object builder by extending class UiObjectBuilder. Finally, register your UI builder for your UI object by calling the
public void registerBuilder(String uiObjectName, UiObjectBuilder builder)
method in class TelluriumFramework. You can also register your builder in class UiObjectBuilderRegistry if you work on the Tellurium source code directly.
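For instance, a hypothetical "Icon" object might be wired up like this (IconBuilder is assumed to be the builder class you wrote in the previous step, and how you obtain the TelluriumFramework instance depends on your setup):

```groovy
// Sketch: register the builder so that Icon(uid: ..., clocator: [...])
// becomes available in UI module definitions
tellurium.registerBuilder("Icon", new IconBuilder())
```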
From Tellurium 0.4.0 on, a global configuration file TelluriumConfig.groovy is provided for users to customize Tellurium. You can also define your own UI object in this file as follows,
uiobject{
builder{
Icon="org.tellurium.builder.IconBuilder"
}
}
That is to say, you create the UI object and its builder, and then in the configuration file specify the UI object name and the full class name of its builder. Note that this feature is included in Tellurium 0.5.0; please check the SVN trunk for details.
Tellurium provides an Ant build script. You may need to change some settings in the build.properties file to match your environment, for example the javahome and javac.compiler settings. Then, in the project root directory, run command:
ant clean
to clean up old build and run
ant dist
to generate a new artifact, which can be found in the dist directory.
Run
ant compile-test
to compile test code.
To use a composite locator, use "clocator"; its value is a map, i.e., [key1: value1, key2: value2, ...] in Groovy. The defined keys include "header", "tag", "text", "trailer", and "position", all of which are optional. If a UI object has a fixed tag defined in its object class, you do not need to include "tag".
You may have additional attributes; define them in the same way as the predefined keys, for example [value: "Tellurium home"].
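For example, a text input might be described like this (the uid and attribute values are illustrative):

```groovy
// "tag" can be omitted when the UI object defines a fixed default tag;
// extra keys (here "type" and "name") are matched as element attributes
InputBox(uid: "username", clocator: [tag: "input", type: "text", name: "j_username"])
```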
First make sure the UI object is a Container or another class that extends Container. Then specify group: "true", which turns on group locating. That is all you need to do; group locating is performed by Tellurium automatically at run time. Currently, the UI object can only use the information provided in its children's composite locators; children with base locators are ignored.
I suggest you use the group locating option only at the top-level object. Once the top-level object is found, finding its descendants is not a big deal.
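As a sketch, a login form might turn on group locating like this (uids and attributes are illustrative):

```groovy
// With group: "true", Tellurium uses the children's composite locators
// collectively to pin down the whole container in the DOM at run time
ui.Container(uid: "loginForm", clocator: [tag: "form"], group: "true") {
    InputBox(uid: "user", clocator: [name: "j_username"])
    InputBox(uid: "pass", clocator: [type: "password", name: "j_password"])
    SubmitButton(uid: "login", clocator: [value: "Login"])
}
```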
I wrote Selenium tests that way for a whole project one year ago, and I know how painful it is. There are drawbacks to using a properties file because you have to specify the runtime locators, which is painful for developers and may not be robust unless you are an XPath expert. The Tellurium framework can utilize the relationships between UI objects to automatically generate the runtime XPath for you. It also exploits the group locating concept to co-locate a group of UI objects. In the future, more advanced features and tools will be developed so that developers do not need to manually find the locators for UI elements; they will simply map their JSP or other markup files to Tellurium UI definitions, or use tools to do that automatically.
The DSL is in the domain of Selenium testing, which is what makes it a domain-specific language.
Not really. If you write your tests as pure DSL, you only need to know the DSL syntax. If you write your tests in Java, you only need to define your UI modules in a Groovy file by extending the DslContext class; since Groovy accepts all Java syntax, you can in fact write Java code in the Groovy file that defines the UI modules. The actual TestCase or TestSuite classes are already Java code. As long as you know Java, you know how to write Tellurium tests.
Tellurium is like a newborn and it needs your support and contribution. We welcome contributions in many different ways, for example,
For very active users in the Tellurium user group, we will consider you Tellurium contributors and provide you with an open source license for IntelliJ IDEA. A Tellurium contributor can be nominated by a Tellurium team member as a new team member; the team members then vote on whether to accept the contributor. If most existing members agree, the contributor becomes a project member and can contribute more to Tellurium.
Our project team is open to new members, and we will recruit them from Tellurium contributors. But if you think you are exceptional and want to contribute to the Tellurium code immediately, please contact Jian Fang (John.Jian.Fang@gmail.com) or other existing Tellurium members.
AOST was the project's prototype name; Tellurium is the official project name. Tellurium means a lot to us. First, it means the project is fully functional and no longer a prototype. Second, the project has become a team project. Third, the project targets automated testing, not just Selenium tests.
Data-driven testing means the testing flow is specified by the input data file, and the test framework works as a driving engine: it reads the input data, binds the data, chooses the test specified in the input file, runs the test, and compares the expected results with the actual results. This is a totally new way of writing tests.
Tellurium provides an expressive way to describe input file formats: describe the format of a line of data with "fs.FieldSet", define a test with "defineTest", bind your variables to the fields you defined in a field set with "bind", and use "compareResult" to compare the actual result with the expected one. For input files, Tellurium supports the pipe-delimited format; use "loadData" to read input data from a file, or "useData" to use a String defined in your test script as the input data.
TelluriumDataDrivenModule inherits from DslContext; you define UI modules, FieldSets, and tests there, but not the testing flow. TelluriumDataDrivenTest is the actual testing class: it can read multiple test modules using the "includeModule" command and then use "step", "stepOver", and "stepToEnd" to control the testing flow. Please see the introduction wiki page for more details.
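Putting the pieces together, a data-driven setup might be sketched as follows. The field names, test name, variable names, and file path are all illustrative assumptions, not from the Tellurium distribution:

```groovy
// In a module class extending TelluriumDataDrivenModule:
fs.FieldSet(name: "search", description: "one line of search input") {
    Test(value: "searchTest")      // which defined test this line drives
    Field(name: "keyword")
    Field(name: "expected")
}

defineTest("searchTest") {
    def keyword  = bind("search.keyword")
    def expected = bind("search.expected")
    // ... drive the UI with keyword and read back the actual result ...
    compareResult(expected, actual)
}

// In a test class extending TelluriumDataDrivenTest:
//   includeModule SearchModule.class
//   loadData "data/search_input.txt"   // pipe-delimited input file
//   stepToEnd()
```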
Tellurium provides a configuration file, TelluriumConfig.groovy, which you should put in the path from which you run the tests. In this file you can change the settings for Tellurium, for example which port the embedded Selenium server uses, which browser you want to run Tellurium tests in, and the output format of your data-driven tests.
This is a known Selenium issue. A work-around is to install Firefox version 2.0.16 or change the browser type to iehta. You can make this change in the TelluriumConfig.groovy file located in the Tellurium root project folder.
That is a great question. The problem is not in XPath itself, but in the way you use it.
1) "//input[@name='btnG' and @type='submit']" is a very clear XPath expression. Let us look at the more general case. Usually, Selenium IDE or XPather creates XPath like
"//div/table[@id='something']/div[2]/div[3]/div[1]/div[6]"
See any problem with that? It is not robust. Along the path div -> table -> div -> div -> div -> div, if anything changes, your XPath is no longer valid. For example, if you add additional UI elements and the XPath changes to
"//div[2]/table[@id='something']/div[3]/div[3]/div[1]/div[6]"
you have to keep updating the XPath. Tellurium focuses more on element attributes than on the XPath itself, and can adapt to such changes to some degree.
More importantly, Tellurium uses the group locating concept to exploit information from a group of UI elements to locate them in the DOM. In most cases, the group of elements themselves is enough to determine their locations in the DOM; that is to say, a UI element's location does not depend on any parent or grandparent elements. For example, in the case above, if you can use the group locating concept to find locators for the following UI elements,
"div[3]/div[1]/div[6]"
directly, then they do not depend on the portion "div[2]/table[@id='something']/div[3]", and your UI elements can survive any changes in that portion. Note that in Tellurium you will most likely not use the format "div[3]/div[1]/div[6]" at all. 2) The syntax of
selenium.type("//input[@title='Google Search']", input)
selenium.click("//input[@name='btnG' and @type='submit']")
...
selenium.type("//input[@title='Google Search']", input)
selenium.click("//input[@name='btnG' and @type='submit']")
...
selenium.type("//input[@title='Google Search']", input)
selenium.click("//input[@name='btnG' and @type='submit']")
...
everywhere is really ugly to users, especially if someone needs to take over your code. In Tellurium, a UiID is used, and it is very clear to users what you are acting on.
click "google_start_page.googlesearch"
3) The test script created by Selenium IDE is a mess of actions, not modularized. Other people may take quite some time to figure out what the script actually does, and it is quite difficult to refactor and reuse. Even if the UI is unchanged, there are data dependencies, and in most practical tests you simply cannot just "record and replay".
In Tellurium, once you have defined a UI module, for example the Google search module, you can always reuse it and write as many test cases as you like.
4) Selenium is cool and the idea is brilliant, but it seems to me it is really for low-level testing: it only focuses on one element at a time and does not have the whole UI module in mind. That is why we need another tier on top of it, so that you can write UI-module-oriented testing scripts rather than locator-oriented ones. Tellurium is one of the frameworks designed for this purpose.
5) As mentioned in 4), Selenium is quite low level, and it is really difficult to handle more complicated UI components such as a data grid. Tellurium can handle them pretty easily; please see our test scripts for the Tellurium project web site.
Tellurium still focuses on UI modules at this point, and all the DSLs are UI-module related. It seems to me that RSpec is a bit more abstract than the current Tellurium UI module, since it specifies what the correct behavior of the web UI should be. We do have RSpec in mind, and we hope to add similar abstract-level testing support to Tellurium later on.
TestNG is supported in Tellurium; please check the code on the SVN trunk. The test case BaseTelluriumJavaTestCase can be extended for both JUnit 4 and TestNG: TelluriumJavaTestCase is for JUnit 4 and TelluriumTestNGTestCase is for TestNG. For TestNG, you can use the following annotations:
@BeforeSuite
@AfterSuite
@BeforeTest
@AfterTest
@BeforeGroups
@AfterGroups
@BeforeClass
@AfterClass
@BeforeMethod
@AfterMethod
@DataProvider
@Parameters
@Test
For details, please see the TestNG documentation.
Tellurium includes IntelliJ project files in its code base. To make Tellurium work properly in IntelliJ, first check whether you have installed the IntelliJ Groovy plugin, JetGroovy: open the IDE settings and click on Plugins to see all installed plugins. If JetGroovy is not on the list, click the "Available" tab and install it. To check your Groovy setting, open the Project Settings; under Platform Settings you will see the Global Libraries item, and clicking on it should show GROOVY. If you do not see it, something is wrong and you need to configure Groovy for IntelliJ. Once you get this right, you should be all set. IntelliJ has excellent Groovy support, and you can debug Groovy code just like Java code.
Tellurium DSL scripts are actually Groovy scripts written in DSL syntax. Thus, Tellurium DSL scripts support all assertions in JUnit 3.8, which GroovyTestCase extends.
But Tellurium data-driven testing scripts are a bit different. Usually, you should use:
compareResult expected, actual
and it in turn calls
assertEquals(expected, actual)
This is because a DDT script has to be general enough for different input data. If you want to use your own assertions, Tellurium provides the capability for that: use a Groovy closure to replace the default assertEquals. For example, in your DDT DSL script, you can overwrite the default behaviour using
compareResult(expected, actual){
assertNotNull(expected)
assertNotNull(actual)
assertTrue(expected.size() == actual.size())
}
This brings up one interesting question: "why should I put assertions inside compareResult, rather than anywhere in the script?"
The answer is that you can put assertions anywhere in the DDT script, but the behaviour when an assertion fails is different.
If you put assertions inside compareResult and an assertion fails, the AssertionFailedError is captured and that comparison fails, but the rest of the script inside the test continues executing. If you put assertions outside of compareResult, the AssertionFailedError leads to the failure of the current test: the exception is recorded, the current test stops, and the next test takes over and executes.
Tellurium is built on top of Selenium and uses Selenium RC 0.9.2, the latest stable version. Selenium RC 1.0 is still in beta; once it becomes stable, Tellurium will start to support it.
Be aware that Container, TextBox, List, and some other UI objects in Tellurium do not have default tags. The reason is that they are abstract UI objects that can represent multiple actual UIs, so you can use any tag for them. You can also overwrite the default tag of a UI object by specifying the "tag" attribute in the composite locator, provided the replacement makes sense. If no existing UI object satisfies your needs, you can even define your own UI object.
The steps to use a remote Selenium server in Tellurium are as follows.
First, run the Selenium server on the remote machine, say 192.168.1.106:
java -jar selenium-server.jar -port 4444
for more Selenium server options, use the following command:
java -jar selenium-server.jar --help
Then, you should modify the TelluriumConfig.groovy as follows,
tellurium{
//embedded selenium server configuration
embeddedserver {
//port number
port = "4444"
//whether to use multiple windows
useMultiWindows = false
//whether to run the embedded selenium server. If false, you need to manually set up a selenium server
runInternally = false
}
//the configuration for the connector that connects the selenium client to the selenium server
connector{
//selenium server host
//please change the host if you run the Selenium server remotely
serverHost = "192.168.1.106"
//server port number the client needs to connect
port = "4444"
//base URL
baseUrl = ""
//Browser setting, valid options are
// *firefox [absolute path]
// *iexplore [absolute path]
// *chrome
browser = "*iehta"
}
......
}
That is to say, you should disable the embedded selenium server by specifying
runInternally = false
and specify the remote selenium server host as
serverHost = "192.168.1.106"
After that, you can run the tests just as with the embedded Selenium server, but be aware that there is some performance degradation, i.e., the tests run slower, with a remote Selenium server.
Container is most like an abstract object: it can be any type of UI object that can hold other UI objects. The UI objects inside a Container are fixed once it is defined, and inner objects can be referred to directly as "container_uid.object_uid". Be aware that Tellurium Container-type objects can hold any UI objects, including other container-type objects, so nested UIs can be constructed this way.
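For instance, containers can nest, and an inner object is referenced by chaining uids (all uids and locators below are illustrative):

```groovy
// A nested container: "page" holds "header", which holds a link
ui.Container(uid: "page", clocator: [tag: "div", id: "main"]) {
    Container(uid: "header", clocator: [tag: "div"]) {
        UrlLink(uid: "home", clocator: [text: "Home"])
    }
}
// the link is then addressed as:  click "page.header.home"
```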
Table and List are both Container-type UI objects designed mainly for UI objects of dynamic size. For example, Table can be used to model a data grid, whose size is not fixed and is dynamic at run time. For this purpose, the UI objects inside the table are used as templates, and how they are applied depends entirely on their UIDs.
For a table, the UID of its inner objects follows a set of formats, from most specific to most general. At runtime, the following precedence rules apply if any of them are defined:
1) > 2) > 3) > 4)
In this way, you can always define various templates for your table.
Once the templates are defined and you use table[i][j] to refer the inner object, Tellurium will automatically apply the above rules and find the actual UI object for you. If no templates can be found, Tellurium will use default UI object TextBox.
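The precedence idea can be pictured as a most-specific-match lookup. The sketch below is illustrative only, in plain Java rather than Tellurium's actual implementation, and the candidate key strings are assumptions for illustration: the resolver tries the most specific template first and falls back step by step to the table-wide default.

```java
import java.util.HashMap;
import java.util.Map;

public class TemplateLookup {

    // Returns the UI object type registered for the most specific
    // matching template key, or the documented default "TextBox".
    static String resolve(Map<String, String> templates, int row, int col) {
        String[] candidates = {
            "row: " + row + ", column: " + col,  // exact cell
            "row: " + row + ", column: *",       // whole row
            "row: *, column: " + col,            // whole column
            "all",                               // table-wide template
        };
        for (String key : candidates) {
            if (templates.containsKey(key)) {
                return templates.get(key);
            }
        }
        return "TextBox";  // Tellurium's documented fallback type
    }

    public static void main(String[] args) {
        Map<String, String> templates = new HashMap<>();
        templates.put("row: *, column: 1", "TextBox");
        templates.put("row: *, column: 3", "List");
        templates.put("all", "UrlLink");
        System.out.println(resolve(templates, 5, 3));  // List
        System.out.println(resolve(templates, 5, 2));  // UrlLink
    }
}
```

With this picture in mind, `table[i][j]` simply asks the resolver which template (if any) governs that cell.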
One such good example is the data grid of Tellurium downloads page:
ui.Table(uid: "downloadResult", clocator: [id: "resultstable", class: "results"], group: "true"){
//define table elements
//for the border column
TextBox(uid: "row: *, column: 1", clocator: [:])
//the summary + labels column consists of a list of UrlLinks
List(uid: "row:*, column: 3", clocator: [:]){
UrlLink(uid: "all", clocator: [:])
}
//For the rest, just UrlLink
UrlLink(uid: "all", clocator: [:])
}
List is similar to Table, but it is one-dimensional. As a result, its UID uses the corresponding formats, and the precedence rule is 1) > 2).
This is for Linux: the preference "browser.preferences.instantApply" is set to true by default. You can point Firefox to "about:config" and change the option to false. After that, you will see the "OK" button.
Most media players support the Ogg format; if you cannot find a media player for it, you can download the free media player VLC at
You can specify the profile in Tellurium Configuration file TelluriumConfig.groovy as follows,
embeddedserver {
    //port number
    port = "4444"
    //whether to use multiple windows
    useMultiWindows = false
    //whether to run the embedded selenium server. If false, you need to manually set up a selenium server
    runInternally = true
    //profile location
    profile = ""
}
TelluriumConfig.groovy acts as a global settings file. If you do not want to change it manually, BaseTelluriumJavaTestCase provides two methods for you to override the default settings,
public static void setCustomConfig(boolean runInternally, int port, String browser,
        boolean useMultiWindows, String profileLocation)

public static void setCustomConfig(boolean runInternally, int port, String browser,
        boolean useMultiWindows, String profileLocation, String serverHost)
As a result, if you want to use custom settings for a specific test class, you can do it in the following way, taking the Google test case as an example,
public class GoogleStartPageJavaTestCase extends TelluriumJavaTestCase {

    static {
        setCustomConfig(true, 5555, "*chrome", true, null);
    }

    ...
}
The "Include" syntax in Ui module definition can be used for this purpose. You can put frequently used UI modules into a base class, for example,
public class BaseUiModule extends DslContext {
    public void defineBaseUi() {
        ui.Container(uid: "Search")

        ui.Container(uid: "GoogleBooksList", clocator: [tag: "table", id: "hp_table"], group: "true") {
            TextBox(uid: "category", clocator: [tag: "div", class: "sub_cat_title"])
            List(uid: "subcategory", clocator: [tag: "div", class: "sub_cat_section"], separator: "p") {
                UrlLink(uid: "all", clocator: [:])
            }
        }
    }
}
Then you can extend this base Ui module as follows,
public class ExtendUiModule extends BaseUiModule {
    public void defineUi() {
        defineBaseUi()

        ui.Container(uid: "Google", clocator: [tag: "table"]) {
            Include(ref: "SearchModule")
            Container(uid: "Options", clocator: [tag: "td", position: "3"], group: "true") {
                UrlLink(uid: "LanguageTools", clocator: [tag: "a", text: "Language Tools"])
                UrlLink(uid: "SearchPreferences", clocator: [tag: "a", text: "Search Preferences"])
                UrlLink(uid: "AdvancedSearch", clocator: [tag: "a", text: "Advanced Search"])
            }
        }

        ui.Container(uid: "Test", clocator: [tag: "div"]) {
            Include(uid: "newcategory", ref: "GoogleBooksList.category")
            Include(uid: "secondcategory", ref: "GoogleBooksList.category")
            Include(uid: "newsubcategory", ref: "GoogleBooksList.subcategory")
        }
    }
}
Note that the "Include" must have the ref attribute to refer to the element it wants to include. You can still specify the uid for the object (if you do not need a different uid, you do not need the uid), if the object uid is not equal to the original one, Tellurium will clone a new object for you so that you can have multiple objects with different uids.
The StandardTable is designed for this purpose and it has the following format
table
    thead
        tr
            td
            ...
            td
    tbody
        tr
            td
            ...
            td
    ...
    tbody        (multiple tbodies)
        tr
            td
            ...
            td
    ...
    tfoot
        tr
            td
            ...
            td
For a StandardTable, you can specify UI templates for different tbodies. For Example:
ui.StandardTable(uid: "table", clocator: [id: "std"]) {
UrlLink(uid: "header: 2", clocator: [text: "%%Filename"])
UrlLink(uid: "header: 3", clocator: [text: "%%Uploaded"])
UrlLink(uid: "header: 4", clocator: [text: "%%Size"])
TextBox(uid: "header: all", clocator: [:])
Selector(uid: "tbody: 1, row:1, column: 3", clocator: [name: "can"])
SubmitButton(uid: "tbody: 1, row:1, column:4", clocator: [value: "Search", name: "btn"])
InputBox(uid: "tbody: 1, row:2, column:3", clocator: [name: "words"])
InputBox(uid: "tbody: 2, row:2, column:3", clocator: [name: "without"])
InputBox(uid: "tbody: 2, row:*, column:1", clocator: [name: "labels"])
TextBox(uid: "foot: all", clocator: [tag: "td"])
}
There are three methods in DslContext for you to select different XPath Library,
public void useDefaultXPathLibrary()
public void useJavascriptXPathLibrary()
public void useAjaxsltXPathLibrary()
The default one is the same as the "Ajaxslt" one. To use the faster XPath library, please call useJavascriptXPathLibrary().
For example, in the test case file,
protected static NewGoogleStartPage ngsp;

@BeforeClass
public static void initUi() {
    ngsp = new NewGoogleStartPage();
    ngsp.defineUi();
    ngsp.useJavascriptXPathLibrary();
}
You should escape the "." or other jQuery reserved characters.
For example, use "dateOfBirth.\\month" for "dateOfBirth.month" as the ID.
add FAQs
John - What is the url for the Tellurium google group? Thanks
It is changed to
since we changed project name. Sorry for the inconvenience.
Thanks.
Solution for Programming Exercise 4.6
This page contains a sample solution to one of the exercises from Introduction to Programming Using Java.
Exercise 4.6:
For this exercise, you will do something even more interesting with the Mosaic class that was discussed in Section 4.6. (Again, don't forget that you will need Mosaic.java and MosaicPanel.java to compile and run your program.)
I will call the program RandomConvert, since the basic operation is to convert one square to be the same color as a neighboring square. An outline for the main program is easy:
Open a mosaic window
Fill the mosaic with random colors
while the window is open:
    Select one of the rectangles at random
    Convert the color of one of that rectangle's neighbors
    Short delay
We have already seen a subroutine for filling the mosaic with random color, in Subsection 4.6.2. I will also write a subroutine to do the second step in the while loop. There is some question about what it means to "select one of the rectangles at random." A rectangle in the mosaic is specified by a row number and a column number. We can select a random rectangle by choosing a row number and a column number at random. Assuming that ROWS and COLUMNS are constants that give the number of rows and the number of columns, we can do that by saying
int randomRow = (int)(ROWS * Math.random());
int randomColumn = (int)(COLUMNS * Math.random());
where I have declared each variable and initialized it in one step, as discussed in Subsection 4.7.1. For the "convert" subroutine to do its work, we will have to tell it which rectangle has been selected, so randomRow and randomColumn will be parameters to that subroutine. The code for the program's main() routine becomes:
Mosaic.open(ROWS, COLUMNS, SQUARE_SIZE, SQUARE_SIZE);
fillWithRandomColors();
while (Mosaic.isOpen()) {
    int randomRow = (int)(ROWS * Math.random());
    int randomColumn = (int)(COLUMNS * Math.random());
    convertRandomNeighbor(randomRow, randomColumn);
    Mosaic.delay(1);
}
All that remains is to write the convertRandomNeighbor() subroutine. This routine should pick a random neighbor of a given rectangle and change its color. A rectangle in the mosaic has four neighbors, above, below, to the left, and to the right. We can pick one at random by selecting a random integer less than four and using that integer to decide which neighbor to select. We have a problem, though, if the rectangle is on the edge of the mosaic. For example, if the rectangle is in the top row, then there is no neighbor above that rectangle in the mosaic. One solution to this problem is to wrap around to the bottom of the mosaic and use a square from the bottom row as the neighbor. Essentially, we think of the top of the mosaic as connected to the bottom and the left edge as connected to the right. We have seen something like this in Subsection 4.6.3, in the randomMove() subroutine. The convertRandomNeighbor() code can use some basic ideas from randomMove(). Here is a version of convertRandomNeighbor() that would work:
static void convertRandomNeighbor(int row, int col) {

    /* Choose a random direction, and get the row and column
     * numbers of the neighbor that lies in that direction. */

    int neighborRow;     // row number of selected neighbor
    int neighborColumn;  // column number of selected neighbor

    int directionNum = (int)(4*Math.random());  // random direction

    switch (directionNum) {
    case 0: // Choose neighbor above.
        neighborColumn = col;    // Neighbor is in the same column.
        neighborRow = row - 1;   // Subtract 1 to get neighbor's row number.
        if (neighborRow < 0)     // Neighbor's row number is outside the mosaic.
            neighborRow = ROWS - 1;  // So wrap around to bottom of the mosaic.
        break;
    case 1: // Choose neighbor to the right.
        neighborRow = row;           // Same row.
        neighborColumn = col + 1;    // Column to the right.
        if (neighborColumn >= COLUMNS)   // Outside the mosaic?
            neighborColumn = 0;          // Wrap around to the left edge.
        break;
    case 2: // Choose neighbor below.
        neighborColumn = col;
        neighborRow = row + 1;
        if (neighborRow >= ROWS)
            neighborRow = 0;
        break;
    default: // Choose neighbor to the left.
        neighborRow = row;
        neighborColumn = col - 1;
        if (neighborColumn < 0)
            neighborColumn = COLUMNS - 1;
        break;
    }

    /* Get the color components for position (row,col) */
    int red = Mosaic.getRed(row,col);
    int green = Mosaic.getGreen(row,col);
    int blue = Mosaic.getBlue(row,col);

    /* Change the color of the neighbor to color of the original square. */
    Mosaic.setColor(neighborRow,neighborColumn,red,green,blue);
}
Note the use of a default case at the end of the switch statement. Saying "case 3" will not work here, because the computer would not be able to verify that values have definitely been assigned to neighborRow and neighborColumn.
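To see the definite-assignment point in isolation, here is a minimal stand-alone sketch (hypothetical code, not part of the exercise): with `default` the compiler can prove the variable is always assigned, but replacing it with `case 3` makes javac reject the method.

```java
public class DefiniteAssignment {

    static int pick(int directionNum) {
        int neighborRow;  // deliberately has no initializer
        switch (directionNum) {
        case 0: neighborRow = 1; break;
        case 1: neighborRow = 2; break;
        case 2: neighborRow = 3; break;
        default: neighborRow = 4; break;  // writing "case 3:" here instead makes
                                          // javac report "variable neighborRow
                                          // might not have been initialized"
        }
        return neighborRow;  // legal only because every path assigns it
    }

    public static void main(String[] args) {
        System.out.println(pick(0) + "," + pick(3));  // prints 1,4
    }
}
```

The compiler does not know that `directionNum` can only be 0..3, so without a `default` arm it must assume the switch may fall through with the variable still unassigned.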
In my program, I actually used a different algorithm that requires somewhat less code. My algorithm goes like this:
Get the color components for the rectangle at position (row,col).
Modify the value of row or col to point to a neighboring rectangle.
Set the color of position (row,col).
This is a little tricky, since the variables row and col are used both for getting the color and for setting it. But by the time row and col are used for setting the color, they are referring to a different rectangle. You can see my version of convertRandomNeighbor in the full source code listing below.
In the end, I made one change in the program. I found that I wanted the process to go more quickly, but leaving out the delay altogether made it much too fast. My solution was to do several conversions each time through the while loop in the main routine, instead of just one. I added a BATCH_SIZE constant to say how many conversions to do at a time, and I added a for loop in the main program to implement the multiple conversions. A BATCH_SIZE of 10 or even 100 gives a much more satisfactory animation.
Here's a screenshot from my program after it was allowed to run for a while:
/**
 * This program fills a mosaic with random colors.  It then enters
 * a loop in which it randomly selects one of the squares in the
 * mosaic, then randomly selects one of the four neighbors of that
 * square and converts the selected neighbor to the color of the
 * originally selected square.  The effect is to gradually build
 * up larger patches of uniform color.  The animation continues
 * until the user closes the window.  This program depends on
 * the non-standard classes Mosaic and MosaicCanvas.
 */
public class RandomConvert {

    final static int ROWS = 40;         // Number of rows in the mosaic.
    final static int COLUMNS = 40;      // Number of columns in the mosaic.
    final static int SQUARE_SIZE = 10;  // Size of each square in the mosaic.
    final static int DELAY = 1;         // Millisecond delay after each convert.
    final static int BATCH_SIZE = 10;   // The number of squares tested/converted at a time.

    /**
     * The main() routine opens the mosaic window, then enters into
     * a loop in which it repeatedly converts the color of one square.
     * The loop ends when the user closes the mosaic window.
     */
    public static void main(String[] args) {
        Mosaic.setUse3DEffect(false);
        Mosaic.open(ROWS, COLUMNS, SQUARE_SIZE, SQUARE_SIZE);
        fillWithRandomColors();
        while (Mosaic.isOpen()) {
            for (int i = 0; i < BATCH_SIZE; i++) {
                int randomRow = (int)(ROWS * Math.random());
                int randomColumn = (int)(COLUMNS * Math.random());
                convertRandomNeighbor(randomRow, randomColumn);
            }
            Mosaic.delay(DELAY);
        }
    }

    /**
     * Set each square in the mosaic to be a randomly selected color.
     */
    static void fillWithRandomColors() {
        for (int row = 0; row < ROWS; row++) {
            for (int col = 0; col < COLUMNS; col++) {
                int r = (int)(256*Math.random());
                int g = (int)(256*Math.random());
                int b = (int)(256*Math.random());
                Mosaic.setColor(row,col,r,g,b);
            }
        }
    }

    /**
     * Select one of the neighbors of the square at position (row,column) in
     * the mosaic.  Change the color at position (row, column) to match the
     * color of the selected neighbor.  The neighbors of a square are the
     * squares above, below, to the left, and to the right of the square.
     * For squares on the edge of the mosaic, wrap around to the opposite
     * edge.
     */
    static void convertRandomNeighbor(int row, int col) {

        /* Get the color components for position (row,col) */
        int red = Mosaic.getRed(row,col);
        int green = Mosaic.getGreen(row,col);
        int blue = Mosaic.getBlue(row,col);

        /* Choose a random direction, and change the value of row
         * or col to refer to the neighbor that lies in that direction. */
        int directionNum = (int)(4*Math.random());
        switch (directionNum) {
        case 0: // Choose neighbor above.
            row--;                // Move row number one row up.
            if (row < 0)          // row number is outside the mosaic.
                row = ROWS - 1;   // Wrap around to bottom of the mosaic.
            break;
        case 1: // Choose neighbor to the right.
            col++;
            if (col >= COLUMNS)
                col = 0;
            break;
        case 2: // Choose neighbor below.
            row++;
            if (row >= ROWS)
                row = 0;
            break;
        case 3: // Choose neighbor to the left.
            col--;
            if (col < 0)
                col = COLUMNS - 1;
            break;
        }

        /* Change the color of the neighbor to color of the original square. */
        Mosaic.setColor(row,col,red,green,blue);
    }

}
Create TensorFlow.js Project with Parcel31 Mar 2019
Creating a web application is often troublesome due to the complexity of the frameworks and the ecosystem. There are a bunch of build systems to launch a project with. We also need to be familiar with the differences between the programming languages used by web applications (e.g. JavaScript, AltJS, TypeScript). This can be overwhelming, especially for those who are not familiar with the latest web technologies, since the landscape changes so quickly.
Parcel is a tool that bundles all the assets an application needs into one package. You can use it immediately without writing any configuration. I found Parcel very useful for creating a TensorFlow.js application quickly: it takes care of TypeScript compilation, dependency resolution, and package bundling on my behalf. In this article, I'm going to show how to create a TensorFlow.js application in a few minutes using Parcel.
First, you need to create the application directory in your machine. Then as you may usually do, you prepare the npm package in the directory.
$ mkdir myapp
$ cd myapp
$ npm init -y
It will create an initial package.json. Please make sure to add TensorFlow.js as a dependency as follows. The pre-trained models of TensorFlow.js may also be useful.
"dependencies": { "@tensorflow/tfjs": "^1.0.3", "@tensorflow-models/mobilenet": "^1.0.0", "parcel": "^1.12.3" }
You may need to ensure the dependencies are installed properly by running npm i.
Then let’s write the source code of the application. The structure of application looks like as follows.
$ tree . . ├── package.json └── src ├── cat.jpg ├── index.html └── index.ts 1 directory, 3 files
The following shows index.html and index.ts. One of the great things about Parcel is that it automatically detects the resources used in the application. In this case, cat.jpg and index.ts are compiled and bundled into the artifact directory, dist.
<script src="index.ts"></script> <div style='display: flex'> <img id="img" src="cat.jpg"/> </div>
import * as mobilenet from '@tensorflow-models/mobilenet';

async function run(img: HTMLImageElement) {
  // Load the MobileNetV2 model.
  const model = await mobilenet.load(2, 1.0);
  // Classify the image.
  const predictions = await model.classify(img);
  console.log('Predictions');
  console.log(predictions);
}

// Ensure the image is loaded before classifying it.
window.onload = (e) => {
  const img = document.getElementById('img') as HTMLImageElement;
  run(img);
}
Surprisingly, all things are already prepared to start the application. Let’s run the application with Parcel development server.
$ npx parcel src/index.html --open
This command automatically builds the package and opens the web browser. Writing a deep learning application itself is not always easy. You may want to focus on the core of the application, such as improving the accuracy of the model or collecting datasets. Parcel can be a tool that helps you boost your creativity in building web applications leveraged by deep learning.
If you want to learn more about client side deep learning, Deep Learning in the Browser
is a good introduction to get used to the deep learning framework written in JavaScript or TypeScript and bootstrap your own web application with deep learning.
Thanks! | https://www.lewuathe.com/create-tensorflow.js-project-with-parcel.html | CC-MAIN-2021-49 | refinedweb | 529 | 52.76 |
Hello All

I have an issue. I created 2 java programs, a.java and b.java

content of a.java:

package test;
import test1.*;

public class ATEST {
    public static void main (String[] args) {
        BTEST.call();
    }
}

content of b.java:

package test1;

public class BTEST {
    String name;
    void call() {
        . . .
    }
}

Getting: call() is not public in BTEST; cannot be accessed by outside package

Question is b.java must be declared in this fashion. I need to add access modifiers so that call() and any constructors are the only things that are accessible from other packages. Any other method can only be accessed from the BTEST class.

Your kind assistance would be greatly appreciated.

Thanks,
The Ozman
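One hedged sketch of the access control being asked about (names kept from the question; the `package test1;` declaration is omitted here so the snippet is self-contained, and the method is called on an instance rather than statically): only `call()` and the constructor are public, everything else stays package-private or private.

```java
// Sketch of a revised b.java; in the real file the first line
// would be "package test1;".
public class BTEST {
    private String name = "";   // private: hidden from every other class

    public BTEST() { }          // public constructor: usable from any package

    public void call() {        // public: the only method visible outside
        helper();
    }

    void helper() {             // package-private: visible only within test1
        name = "called";
    }
}
```

With this change, ATEST would compile by writing `new test1.BTEST().call();` since `call()` is an instance method, not a static one.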
This chapter walks you through a simple example of using the Java Native Interface. We will write a Java application that calls a C function to print "Hello World!".
Figure 2.1 illustrates the process for using JDK or Java 2 SDK releases to write a simple Java application that calls a C function to print "Hello World!". The process consists of the following steps:
1. Create a class (HelloWorld.java) that declares the native method.
2. Use javac to compile the HelloWorld source file, resulting in the class file HelloWorld.class. The javac compiler is supplied with JDK or Java 2 SDK releases.
3. Use javah -jni to generate a C header file (HelloWorld.h) containing the function prototype for the native method implementation. The javah tool is provided with JDK or Java 2 SDK releases.
4. Write the C implementation (HelloWorld.c) of the native method.
5. Compile the C implementation into a native library, creating HelloWorld.dll or libHelloWorld.so. Use the C compiler and linker available on the host environment.
6. Run the HelloWorld program using the java runtime interpreter. Both the class file (HelloWorld.class) and the native library (HelloWorld.dll or libHelloWorld.so) are loaded at runtime.
The remainder of this chapter explains these steps in detail.
You begin by writing the following program in the Java programming language. The program defines a class named HelloWorld that contains a native method, print.

class HelloWorld {
    private native void print();

    public static void main(String[] args) {
        new HelloWorld().print();
    }

    static {
        System.loadLibrary("HelloWorld");
    }
}
The HelloWorld class definition begins with the declaration of the main method, which instantiates the HelloWorld class and invokes the print native method.
There are two differences between the declaration of a native method such as print and the declaration of regular methods in the Java programming language. A native method declaration must contain the native modifier. The native modifier indicates that this method is implemented in another language. Also, the native method declaration is terminated with a semicolon, the statement terminator symbol, because there is no implementation for native methods in the class itself. We will implement the print method in a separate C file.
Before the native method print can be called, the native library that implements print must be loaded. In this case, we load the native library in the static initializer of the HelloWorld class. The Java virtual machine automatically runs the static initializer before invoking any methods in the HelloWorld class, thus ensuring that the native library is loaded before the print native method is called.
We define a main method to be able to run the HelloWorld class. HelloWorld.main calls the native method print.
System.loadLibrary takes a library name, locates a native library that corresponds to that name, and loads the native library into the application. We will discuss the exact loading process later in the book. For now, simply remember that in order for System.loadLibrary("HelloWorld") to succeed, we need to create a native library called HelloWorld.dll on Win32, or libHelloWorld.so on Solaris.
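If you are ever unsure which file name your platform expects, the standard System.mapLibraryName method reports the mapping the library loader will use (this helper is part of the standard java.lang.System API, not something specific to this example):

```java
public class MapName {
    public static void main(String[] args) {
        // Prints "HelloWorld.dll" on Win32 and "libHelloWorld.so" on
        // Solaris/Linux; other platforms use their own conventions.
        System.out.println(System.mapLibraryName("HelloWorld"));
    }
}
```

This is the same name derivation that System.loadLibrary("HelloWorld") performs internally before searching the library path.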
Compiling the HelloWorld Class
After you have defined the HelloWorld class, save the source code in a file called HelloWorld.java. Then compile the source file using the javac compiler that comes with the JDK or Java 2 SDK release:

javac HelloWorld.java

This command will generate a HelloWorld.class file in the current directory.
Next we will use the javah tool to generate a JNI-style header file that is useful when implementing the native method in C. You can run javah on the HelloWorld class as follows:

javah -jni HelloWorld

The name of the header file is the class name with ".h" appended to the end of it. The command shown above generates a file named HelloWorld.h. We will not list the generated header file in its entirety here. The most important part of the header file is the function prototype for Java_HelloWorld_print, which is the C function that implements the HelloWorld.print native method:
JNIEXPORT void JNICALL Java_HelloWorld_print (JNIEnv *, jobject);
Ignore the JNIEXPORT and JNICALL macros for now. You may have noticed that the C implementation of the native method accepts two arguments even though the corresponding declaration of the native method accepts no arguments. The first argument for every native method implementation is a JNIEnv interface pointer. The second argument is a reference to the HelloWorld object itself (sort of like the "this" pointer in C++). We will discuss how to use the JNIEnv interface pointer and the jobject arguments later in this book, but this simple example ignores both arguments. The following C program, in the file HelloWorld.c, implements the native method:
#include <jni.h>
#include <stdio.h>
#include "HelloWorld.h"

JNIEXPORT void JNICALL
Java_HelloWorld_print(JNIEnv *env, jobject obj)
{
    printf("Hello World!\n");
    return;
}
The implementation of this native method is straightforward. It uses the printf function to display the string "Hello World!" and then returns. As mentioned before, both arguments, the JNIEnv pointer and the reference to the object, are ignored.
The C program includes three header files:

- jni.h -- This header file provides information the native code needs to call JNI functions. When writing native methods, you must always include this file in your C or C++ source files.
- stdio.h -- The code snippet above also includes stdio.h because it uses the printf function.
- HelloWorld.h -- The header file that you generated using javah. It includes the C/C++ prototype for the Java_HelloWorld_print function.
Remember that when you created the HelloWorld class in the HelloWorld.java file, you included a line of code that loaded a native library into the program:

System.loadLibrary("HelloWorld");

Now that all the necessary C code is written, you need to compile HelloWorld.c and build this native library. Different operating systems support different ways to build native libraries. On Solaris and Win32 you need to put in the include paths that reflect the setup on your own machine. For example (the exact paths vary by installation), the build command is typically something like:

cc -G -I/java/include -I/java/include/solaris HelloWorld.c -o libHelloWorld.so

on Solaris, or with the Microsoft Visual C++ compiler on Win32:

cl -Ic:\java\include -Ic:\java\include\win32 -MD -LD HelloWorld.c -FeHelloWorld.dll
At this point, you have the two components ready to run the program. The class file (HelloWorld.class) calls a native method, and the native library (HelloWorld.dll or libHelloWorld.so) implements the native method.
Because the HelloWorld class contains its own main method, you can run the program with:

java HelloWorld

If the Java runtime cannot locate the native library, make sure the library search path includes the current directory. In the Bourne shell:

LD_LIBRARY_PATH=.
export LD_LIBRARY_PATH

The equivalent command in the C shell (csh or tcsh) is as follows:

setenv LD_LIBRARY_PATH .
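When the library path is wrong, the virtual machine throws an UnsatisfiedLinkError at load time. A small sketch (not part of the book's example) that probes for the library instead of letting the error terminate the program:

```java
public class LoadCheck {

    // Returns true if the named native library can be loaded,
    // false if the loader cannot find it on the search path.
    static boolean tryLoad(String name) {
        try {
            System.loadLibrary(name);
            return true;
        } catch (UnsatisfiedLinkError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (!tryLoad("HelloWorld")) {
            System.out.println(
                "libHelloWorld not found: check LD_LIBRARY_PATH "
                + "or the java.library.path system property");
        }
    }
}
```

The same check can help distinguish a missing library from a mismatched function name inside a library that did load.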
Please send any comments or corrections to jni@java.sun.com | http://java.sun.com/docs/books/jni/html/start.html | crawl-002 | refinedweb | 953 | 58.99 |
- : Parser Combinators
Tue, 2009-02-03, 17:08
Daniel Swe wrote:
> But how would you proceed? Which parts do I need to change for it to
> accept an underscore as a letter?
You are using StandardTokenParsers which is just:
class StandardTokenParsers extends StdTokenParsers {
type Tokens = StdTokens
val lexical = new StdLexical
}
You'd have to replace this with a class that uses something else as
lexical that has a different token parser from
class StdLexical extends Lexical with StdTokens {
// see `token' in `Scanners'
def token: Parser[Token] =
( letter ~ rep( letter | digit ) ^^ { case first ~ rest => processIdent(first :: rest mkString "") }
...
(letter | '_') ~ rep (letter | '_' | digit) ^^ {...})
might be the minimal change. But be careful that with this definition, both
aʹ and aʼ are valid (and different) identifiers, while a' isn't.
And of course, scala.util.parsing.combinator.lexical.Lexical parses Chars, which means you are limited to the BMP. So yes, Japanese works; cuneiform or Ugaritic won't.
I can't reproduce your \t problem, so no help there.
- Florian. | http://www.scala-lang.org/old/node/751 | CC-MAIN-2014-52 | refinedweb | 167 | 55.84 |
Opened 11 years ago
Closed 4 years ago
#10899 closed New feature (wontfix)
easier manipulation of sessions by test client
Description
Creating and modifying sessions is painful for the test client...
class SimpleTest(TestCase):
    def test_stuff(self):
        # ugly and long create of session if session doesn't exist
        from django.conf import settings
        engine = import_module(settings.SESSION_ENGINE)
        store = engine.SessionStore()
        store.save()
        # we need to make load() work, or the cookie is worthless
        self.cookies[settings.SESSION_COOKIE_NAME] = store.session_key

        # ugly and long set of session
        session = self.client.session
        session['foo'] = 33
        session.save()
        # pretty and faster
        self.client.session['foo'] = 33

        # ugly and long pop of session
        session = self.client.session
        val = session.pop('foo')
        session.save()
        # pretty and faster
        val = self.client.session.pop('foo')
The attached patch makes the "pretty and faster" possible.
It's faster because every session get doesn't have to go
to the database. The pretty code fails before the patch
because each fetch of self.client.session creates a new
SessionStore with a small scope, not lasting long enough
to be saved.
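A minimal model of what the patch does, using stand-in classes (Django is not imported here; FakeSessionStore only mimics the save() surface of a real session backend): the client caches one store and returns it from the session property, so repeated accesses hit the same object instead of creating short-lived stores.

```python
class FakeSessionStore(dict):
    """Stand-in for a session backend; only mimics the save() surface."""
    def save(self):
        pass  # a real backend would persist to the database here


class DemoClient:
    """Sketch of the patched test client: one cached session store."""
    def __init__(self):
        self._session = None

    @property
    def session(self):
        # Build the store once and keep returning the same object,
        # instead of creating a fresh store on every attribute access.
        if self._session is None:
            self._session = FakeSessionStore()
        return self._session


client = DemoClient()
client.session['foo'] = 33                 # "pretty and faster" style works
assert client.session.pop('foo') == 33
assert client.session is client.session    # same cached object every time
```

Because the store survives between attribute accesses, mutations made through one access are still visible on the next, which is exactly what the "pretty" code in the description relies on.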
Attachments (7)
Change History (44)
Changed 11 years ago by
comment:1 Changed 11 years ago by
Changed 11 years ago by
comment:2 Changed 11 years ago by
comment:3 Changed 11 years ago by
comment:4 Changed 11 years ago by
Seems like you got the wrong ticket number, Russell.
As an aside, here's how I access the session in a very session backend agnostic way:
from django.test import TestCase
from django.test.client import ClientHandler

class SessionHandler(ClientHandler):
    def get_response(self, request):
        response = super(SessionHandler, self).get_response(request)
        response.session = request.session.copy()
        return response

class SessionTestCase(TestCase):
    def _pre_setup(self):
        super(SessionTestCase, self)._pre_setup()
        self.client.handler = SessionHandler()

class FooTestCase(SessionTestCase):
    def test_session_exists(self):
        resp = self.client.get("/")
        self.assert_(hasattr(resp, "session"))
        self.assert_("my_session_key" in resp.session)
        self.assertEqual(resp.session["my_session_key"], "Hello world!")
As you can see, it'd be very easy for the original
ClientHandler class to just copy over the session to the response object.
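That one-line change can be sketched with simplified stand-ins (the classes below only mimic the shape of a request handler and response; they are not Django's real API):

```python
class FakeRequest:
    """Stand-in for an HttpRequest carrying a session."""
    def __init__(self):
        self.session = {"my_session_key": "Hello world!"}


class FakeResponse:
    """Stand-in for an HttpResponse."""
    pass


class BaseHandler:
    def get_response(self, request):
        return FakeResponse()


class SessionCopyingHandler(BaseHandler):
    def get_response(self, request):
        response = super().get_response(request)
        # The one-line addition: expose the session on the response
        # so tests can inspect it after the request has finished.
        response.session = dict(request.session)
        return response


resp = SessionCopyingHandler().get_response(FakeRequest())
assert resp.session["my_session_key"] == "Hello world!"
```

The copy matters: by the time the test looks at the response, the request object may be gone, so the handler snapshots the session rather than keeping a live reference.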
comment:5 Changed 11 years ago by
Good job - I scratched my head for a bit figuring out why my session tests were failing (due to my implementation of the obvious "pretty and faster" code).
Hint: you don't need the _failure method for the property - just don't provide a set or del method for the property, and Python will raise an AttributeError if someone tries to.
I'd leave out your
ClientHandler changes from this ticket - open that in a new ticket if you want, but no point in making this one any more complex.
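To illustrate the hint about properties, a plain-Python sketch (independent of Django): defining only a getter already makes assignment raise AttributeError, so no explicit failure setter is required.

```python
class ReadOnlySessionClient:
    @property
    def session(self):
        # Getter only: no setter or deleter is defined.
        return {"user": "alice"}


client = ReadOnlySessionClient()
assert client.session["user"] == "alice"

try:
    client.session = {}          # assignment hits the missing setter
    raised = False
except AttributeError:
    raised = True
assert raised                    # no explicit _failure method required
```

Python's property descriptor rejects the assignment itself, which keeps the class body shorter and the intent clearer than a hand-written failing setter.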
comment:6 Changed 11 years ago by
comment:7 Changed 10 years ago by
comment:8 Changed 10 years ago by
Smiley Chris,
I attached a new patch that applies cleanly to trunk as of [14099]. It includes tests and passes the Django test suite. It doesn't have docs yet. That's because when I went to write them I noticed that this functionality is already provided in a fairly straightforward manner, with a caveat:
See.
The caveat is that the docs don't explain that the session has to already be created, by calling a view or another manner, else it returns a dictionary object rather than a SessionStore. That means if you run the provided code sample in a test case you get an attribute error:
AttributeError: 'dict' object has no attribute 'save'
Could you give a decision on what to do with this next?
- Go with the requested behavior of this patch and add the needed docs. It's a slight improvement over the existing way of setting session variables, although it's less explicit because the save behavior is happening behind the scenes.
- Keep the documented behavior but add the disclaimer that the session needs to be created by calling a view with the client before editing the session.
- Open a different ticket for the existing caveat to be fixed and change the test client to always return a session store when 'django.contrib.sessions' is installed by creating one if it doesn't exist.
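Until one of these options is picked, tests have to work around the dict fallback themselves, typically by making a request first. A stand-alone model of the current behaviour (FakeClient and FakeStore are stand-ins for illustration, not Django classes):

```python
class FakeStore(dict):
    def save(self):
        pass  # a real store would write to the session backend


class FakeClient:
    """Mimics the documented behaviour: a plain dict until a session exists."""
    def __init__(self):
        self._store = None

    def get(self, url):
        # Visiting any view creates the session as a side effect.
        self._store = FakeStore()

    @property
    def session(self):
        return self._store if self._store is not None else {}


client = FakeClient()
assert not hasattr(client.session, "save")   # dict fallback: save() would crash

client.get("/")                              # create the session first
session = client.session
session["foo"] = 33
session.save()                               # now safe
assert client.session["foo"] == 33
```

This is exactly the sequence the AttributeError above comes from when the initial request is skipped.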
The changes I made are also on Github:
Thanks.
Preston
Changed 10 years ago by
Patch with tests
comment:9 Changed 10 years ago by
I think the patch is on the right track. The currently documented behavior needs to continue to work for backwards compatibility purposes, but I don't see the proposed behavior affecting that.
I have two comments on the patch:
Firstly, can the call to _session_save() be pushed further down the stack into request()? That would remove some code duplication, and I can't see any obvious reason why it wouldn't work the same (feel free to prove me wrong!)
Secondly, there is some loss of cookie properties from the login method. This needs to be checked to make sure there aren't any backwards incompatible changes to the way the login cookie is being set during testing.
Regarding docs -- you're correct that this new behavior is less explicit, so explaining exactly how and when the session is persisted will be an important thing to explain well.
comment:10 Changed 9 years ago by
comment:11 Changed 9 years ago by
The patch does not apply to current trunk - do you have an up to date patch?
Changed 9 years ago by
comment:12 Changed 9 years ago by
Attached is an updated patch against r16086. I'd appreciate if somebody could review this.
My questions are:
1) Are the doc's I added enough? What needs to be said on this? Should the old example be removed?
2) Since the session is being cached on the Client, problems arise with functions that change the session key. This patch makes sure the session key is updated both when a view is called and when the login/logout Client methods are called. Are there any other functions that change the session key I need to be aware of?
Thanks.
comment:13 Changed 9 years ago by
comment:14 Changed 9 years ago by
Thanks for the updated patch!
With the patch, I ran the test suite (using the supplied test_sqlite settings). There are a few tests that check the number of queries, so a failure is no surprise. But the strange thing is: I get 9 queries instead of 1. Why so many? And how do we want to deal with the problem that the patch might break users' test suite? I have no idea what the policy is, but IMHO it should be expected that the number of SQL queries can change with a new release.
But in any case, your patch needs to modify the affected tests in the Django test suite.
Here's the test run:
> ./tests/runtests.py --settings=test_sqlite test_utils Creating test database for alias 'default'... Creating test database for alias 'other'... ..F.F... ====================================================================== FAIL: test_with_client (regressiontests.test_utils.tests.AssertNumQueriesContextManagerTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/mir/kunden/django/workspace/django/tests/regressiontests/test_utils/tests.py", line 80, in test_with_client self.client.get("/test_utils/get_person/%s/" % person.pk) File "/home/mir/kunden/django/workspace/links/django/test/testcases.py", line 234, in __exit__ executed, self.num AssertionError: 9 queries executed, 1 expected ====================================================================== FAIL: test_assert_num_queries_with_client (regressiontests.test_utils.tests.AssertNumQueriesTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/mir/kunden/django/workspace/django/tests/regressiontests/test_utils/tests.py", line 38, in test_assert_num_queries_with_client "/test_utils/get_person/%s/" % person.pk File "/home/mir/kunden/django/workspace/links/django/test/testcases.py", line 537, in assertNumQueries func(*args, **kwargs) File "/home/mir/kunden/django/workspace/links/django/test/testcases.py", line 234, in __exit__ executed, self.num AssertionError: 9 queries executed, 1 expected ---------------------------------------------------------------------- Ran 8 tests in 0.025s FAILED (failures=2)
I have another rather formal suggestion regarding your patch: Your patch changes the number of blank lines between functions in a few places. It shouldn't, first because 2 blank lines are standard in all Django files, second you should not touch code places that are not really related to the ticket, this can get a real hassle when merging in other changes. Would you like to clean this up by yourself?
Re documentation: It is a bit short but might be sufficient. I don't like the wording, but I'm no native speaker and I guess the final committer will polish it up anyway. But ... I think the preceding paragraph does not apply after your patch, does it?
To modify the session and then save it, it must be stored in a variable first (because a new ``SessionStore`` is created every time this property is accessed):: def test_something(self): session = self.client.session session['somekey'] = 'test' session.save()
Changed 9 years ago by
Updated ticket to not produce extra queries
comment:15 Changed 9 years ago by
Whoops. Let's try this again.
comment:16 Changed 9 years ago by
Thanks for reviewing that mirs.
comment:17 Changed 9 years ago by
The test suits passes now and it looks fine, but I need a bit more time.
comment:18 Changed 9 years ago by
Any progress on this guys?
Changed 9 years ago by
Updated patch
comment:19 Changed 9 years ago by
comment:20 Changed 9 years ago by
comment:21 Changed 9 years ago by
Tests failing:
FAIL: test_group_permission_performance (regressiontests.admin_views.tests.GroupAdminTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/admin/Projects/django/django-git/tests/regressiontests/admin_views/tests.py", line 2983, in test_group_permission_performance self.assertEqual(response.status_code, 200) File "/Users/admin/Projects/django/django-git/django/test/testcases.py", line 246, in __exit__ executed, self.num AssertionError: 8 != 6 : 8 queries executed, 6 expected ====================================================================== FAIL: test_user_permission_performance (regressiontests.admin_views.tests.UserAdminTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/admin/Projects/django/django-git/tests/regressiontests/admin_views/tests.py", line 2952, in test_user_permission_performance self.assertEqual(response.status_code, 200) File "/Users/admin/Projects/django/django-git/django/test/testcases.py", line 246, in __exit__ executed, self.num AssertionError: 9 != 7 : 9 queries executed, 7 expected ---------------------------------------------------------------------- Ran 4228 tests in 340.354s FAILED (failures=2, skipped=69, expected failures=3)
comment:22 Changed 9 years ago by
comment:23 Changed 9 years ago by
I was looking into these test failures in the recent NC sprint. From what I could tell the two additional queries are related to a session read and write now that the test sessions work more like the real sessions. Starting a new project, creating a superuser and viewing the superuser in the admin shows 9 queries using django-debug-toolbar rather than 7 asserted in the test case.
comment:24 Changed 8 years ago by
comment:25 Changed 8 years ago by
comment:26 Changed 7 years ago by
comment:27 Changed 7 years ago by
What's the progress on this?
comment:28 Changed 7 years ago by
comment:29 Changed 7 years ago by
Fixed in:
Thanks prestontimmons for the patch. The 2 extra queries in the failing tests are from the read and
write of the session object to the database.
Changed 7 years ago by
comment:30 Changed 7 years ago by
While reviewing the patch together I made a few changes, see attachement.
I don't understand why the block added at line 415 is necessary; the session middleware should take care of saving the session.
If this piece of code is actually useful, could you explain why? Thank you!
comment:31 Changed 7 years ago by
There's been some work on some parts of the code that overlap with this issue.
I'm not really sure anymore as to how to resolve this so it would be nice if someone with more insight
would take a look.
Also I'm not sure if the patch is the best solution because of the part on line 415, as Aymeric mentioned.
It is questionable in the sense that it's there to enable *direct session manipulation* (as outlined in the tests in the pull request)
but the part that checks if the session was modified, and therefore saves it, is the middleware. The test client
goes through the middleware and there should be a way to modify it to enable direct session manipulation.
comment:32 Changed 7 years ago by
comment:33 Changed 7 years ago by
comment:34 Changed 7 years ago by
Hi guys,
This makes unit testing of views impossible when they use session variables, unless creating a view that insert data into the session but this is a ugly and dangerous hack.
Are you planning to fix this issue or is it considered as a no fix like :
comment:35 Changed 6 years ago by
In light of #21357, I think this ticket should be closed as wontfix.
- It's no longer necessary to manually import and instantiate the session engine.
- The proposed patches here modify the test client to automatically call save on the session object if it's modified. That's better handled explicitly or by the session middleware.
comment:36 Changed 4 years ago by
As prestontimmons said above, this seems much easier now since the engine does not need to be instantiated. Also, it is documented that the session must be stored in a variable:
So maybe this ticket can be closed?
Here is an updated patch... | https://code.djangoproject.com/ticket/10899 | CC-MAIN-2020-29 | refinedweb | 2,226 | 64.3 |
Hello,
if the user log out from viewpoint, I want to execute a method in my portlet. Is there a way how can I do it? If the user open the portlet, the portlet opens a connection to the db over the class TeradataDataSourceFactory. If the user go out from the portlet, I want to close the connection. I Know I can close and open the connection by execute one or a group of sql statements. I cannot add a connection pool by starting the server, because the properties of the connection will be set over viewpoint.
I know I can use the a javascript method by removing the portlet, but this is not the solution that I´m searching for.
There are 2 situations in which I would recommend that you close your connection to the database. The first is when the user logs out, and the second is when the user removes the portlet from his page. Viewpoint's SQL Scratchpad portlet deals with this problem in the exact same fashion.
To handle the situation where the user logs out of Viewpoint, you should write a Java class that implements the HttpSessionListener interface. This class should close the connection to the database in the sessionDestroyed method. This class then needs to be registered in the web.xml file as a listener.
public class PortletSessionListener implements HttpSessionListener {
public void sessionCreated(HttpSessionEvent event) {
// Do nothing
}
public void sessionDestroyed(HttpSessionEvent event) {
// Retrieve and close your connection here
}
}
<!-- Fragment to add to web.xml -->
<listener>
<listener-class>com.teradata.viewpoint.portlets.myportlet.listener.PortletSessionListener</listener-class>
</listener>
The second situation to handle is when the user removes the portlet from the page. You will want to register a custom callback function in JavaScript that will be triggered when the portlet is removed. This callback function should make a call to a URL in your portlet which can be responsible for closing the connection to the database. Adding the server side code to expose this URL is something you should already be familiar with, so I will exclude any code samples for that here. You would want to add some code similar to the following to your summary.jsp page.
<script type="text/javascript">
TDPortalManager.onPortletRemove('${context}', function() {
// Make an AJAX call here to your disconnect URL here
});
</script>
Hope that helps!
Thank you for your reply.
I used the first suggestion and for spring context I used this:
AfbManager afbManager = null;
HttpSession session = sessionEvent.getSession();
ApplicationContext ctx =
WebApplicationContextUtils.
getWebApplicationContext(session.getServletContext());
afbManager = (AfbManager) ctx.getBean("afbManager");
Another question:
How can I read from the ControllerContext the Session. I need the sessionId...
You have the HttpSession object. You can get the session ID by calling getId() on this object.
Thank you for the reply.
How can I get the PortletSession ID? | http://community.teradata.com/t5/Viewpoint/Log-out-action-method/td-p/51443 | CC-MAIN-2018-17 | refinedweb | 468 | 58.28 |
changeset: 2727:5a3018702f8b
tag: tip
user: Kris Maglione <kris_AT_suckless.org>
date: Tue Jun 15 12:21:35 2010 -0400
files: alternative_wmiircs/python/pygmi/fs.py
description:
[pygmi] Make sure Ctl#ctl strings are unicode before joining them. Fixes issue #194.
diff -r 96ef87fb9d23 -r 5a3018702f8b alternative_wmiircs/python/pygmi/fs.py
--- a/alternative_wmiircs/python/pygmi/fs.py Mon Jun 14 10:46:46 2010 -0400
+++ b/alternative_wmiircs/python/pygmi/fs.py Tue Jun 15 12:21:35 2010 -0400
@@ -65,7 +65,7 @@
"""
Arguments are joined by ascii spaces and written to the ctl file.
"""
- client.awrite(self.ctl_path, ' '.join(args))
+ client.awrite(self.ctl_path, u' '.join(map(unicode, args)))
def __getitem__(self, key):
for line in self.ctl_lines():
Received on Tue Jun 15 2010 - 16:21:43 UTC
This archive was generated by hypermail 2.2.0 : Tue Jun 15 2010 - 16:24:04 UTC | https://lists.suckless.org/hackers/1006/2820.html | CC-MAIN-2022-33 | refinedweb | 145 | 52.97 |
Some of the most interesting algorithms in Computer Science are involving trees. They are simple and often leverage recursion. For instance, pre-order traversal of a tree, of any complexity, can be written as follows:
void preorder(tree t) { if(t == NULL) return; printf("%d ", t->val); preorder(t->left); preorder(t->right); }
For a hobby project, I was faced with an interesting problem of converting a flat representation of a tree into a nested data structure. A flat representation of a tree looks like this:
0 0 1 1 2 3 2 1
Each number refers to the nesting level within a tree. After conversion to a nested structure, it should look as follows (square brackets is the Python syntax for a list):
[ 0, 0, [ 1, 1, [ 2, [ 3 ], 2], 1]]
I expected this algorithm to be fairly easy to find, but I didn’t have much success with Google. So, as any self respecting programmer would, I rolled up my sleeves and wrote a Python implementation:
def treeify(cs): cur = 0 tree = [] stack = [tree] for c in cs: if c['level'] > cur: l = [c] stack[-1].append(l) stack.append(l) elif c['level'] < cur: while 1: stack.pop() cur = stack[-1][0] if c['level'] == cur['level']: break stack[-1].append(c) else: stack[-1].append(c) cur = c['level'] return tree
I have tried to make the best use of Python lists by treating them like lists in LISP. Basically, both languages treat variables as references to actual data. So the lists are nothing but lists of references. This leads to some very useful properties. For e.g. in the above code:
cs: Flat tree structure. This is slightly different from our previous example. It is a list with elements which are hash tables or dictionaries of the form {‘level’: 0, …}. This is better if your node contains other data that can be easily stored in a dictionary.
tree: Nested tree structure. This is what we would finally return back
stack: A stack of trees. Top of the stack is the lowermost node of the tree where the next item of the same level can be added as its child. Next element is the stack is its parent node and so on.
Go ahead and read the source for a mind bending experience! | http://arunrocks.com/treeify_-_converting_tree_data_structures/ | CC-MAIN-2015-18 | refinedweb | 387 | 71.95 |
Object-oriented programmingEdit
Programming languages and stylesEdit
programming language language!programming programming style object-oriented programming functional programming procedural programming programming!object-oriented programming!functional programming!procedural
There are many programming languages in the world, and almost as many programming define what object-oriented programming is, but here are some of its characteristics:
itemize
Object definitions (classes) usually correspond to relevant real-world objects. For example, in Chapter deck, the creation of the Deck class was a step toward object-oriented programming.
The majority of methods are object methods (the kind you invoke on an object) rather than class methods (the kind you just invoke). So far all the methods we have written have been class methods. In this chapter we will write some object methods.
The language feature most associated with object-oriented programming is inheritance. I will cover inheritance later in this chapter.
itemize
inheritance
Recently object-oriented programming has become quite popular, and there are people who claim that it is superior to other styles in various ways. I hope that by exposing you to a variety of styles I have given you the tools you need to understand and evaluate these claims.
Object and class methodsEdit
object method method!object class method method!class static
There are two types of methods in Java, called class methods and object methods. So far, every method we have written has been a class method. Class methods are identified by the keyword static in the first line. Any method that does not have the keyword static is an object method.
Although we have not written any object methods, we have invoked some. Whenever you invoke a method ``on an object, it's an object method. For example, drawOval is an object method we invoked on g, which is a Graphics object. Also, the methods we invoked on Strings in Chapter strings were object methods.
Graphics class!Graphics
Anything that can be written as a class method can also be written as an object method, and vice versa. Sometimes it is just more natural to use one or the other. For reasons that will be clear soon, object methods are often shorter than the corresponding class methods.
The current objectEdit
current object object!current this
When you invoke a method on an object, that object becomes the current object. Inside the method, you can refer to the instance variables of the current object by name, without having to specify the name of the object.
constructor
Also, you can refer to the current object using the keyword this. We have already seen this used in constructors. In fact, you can think of constructors as being a special kind of object method.
Complex numbersEdit
complex number Complex class!Complex arithmetic!complex
As a running example for the rest of this chapter we will consider a class definition for complex numbers. Complex numbers are useful for many branches of mathematics and engineering, and many computations are performed using complex arithmetic. A complex number is the sum of a real part and an imaginary part, and is usually written in the form , where is the real part, is the imaginary part, and represents the square root of -1. Thus, .
The following is a class definition for a new object type called Complex:
verbatim class Complex
// instance variables double real, imag;
// constructor public Complex () this.real = 0.0; this.imag = 0.0;
// constructor public Complex (double real, double imag) this.real = real; this.imag = imag;
verbatim
There should be nothing surprising here. The instance variables are two.
instance variable variable!instance constructor
In main, or anywhere else we want to create Complex objects, we have the option of creating the object and then setting the instance variables, or doing both at the same time:
verbatim
Complex x = new Complex (); x.real = 1.0; x.imag = 2.0; Complex y = new Complex (3.0, 4.0);
verbatim
A function on Complex numbersEdit
operator!Complex method!function pure function
Let's look at some of the operations we might want to perform on complex numbers. The absolute value of a complex number is defined to be . The abs method is a pure function that computes the absolute value. Written as a class method, it looks like this:
verbatim
// class method public static double abs (Complex c) return Math.sqrt (c.real * c.real + c.imag * c.imag);
verbatim
This version of abs calculates the absolute value of c, the Complex object it receives as a parameter. The next version of abs is an object method; it calculates the absolute value of the current object (the object the method was invoked on). Thus, it does not receive any parameters:
verbatim
// object method public double abs () return Math.sqrt (real*real + imag*imag);
verbatim:
verbatim
// object method public double abs () return Math.sqrt (this.real * this.real + this.imag * this.imag);
verbatim
But that would be longer and not really any clearer. To invoke this method, we invoke it on an object, for example
verbatim
Complex y = new Complex (3.0, 4.0); double result = y.abs();
verbatim
Another function on Complex numbersEdit
Another operation we might want to perform on complex numbers is addition. You can add complex numbers by adding the real parts and adding the imaginary parts. Written as a class method, that looks like:
verbatim
public static Complex add (Complex a, Complex b) return new Complex (a.real + b.real, a.imag + b.imag);
verbatim
To invoke this method, we would pass both operands as arguments:
verbatim
Complex sum = add (x, y);
verbatim
Written as an object method, it would take only one argument, which it would add to the current object:
verbatim
public Complex add (Complex b) return new Complex (real + b.real, imag + b.imag);
verbatim.
dot notation
verbatim
Complex sum = x.add (y);
verbatim
From these examples you can see that the current object (this) can take the place of one of the parameters. For this reason, the current object is sometimes called an implicit parameter.
A modifier
modifier method!modifier
As yet another example, we'll look at conjugate, which is a modifier method that transforms a Complex number into its complex conjugate. The complex conjugate of is .
As a class method, this looks like:
verbatim
public static void conjugate (Complex c) c.imag = -c.imag;
verbatim
As an object method, it looks like
verbatim
public void conjugate () imag = -imag;
verbatim. It just seems odd to invoke the method on one of the operands and pass the other as an argument.
On the other hand, simple operations that apply to a single object can be written most concisely as object methods (even if they take some additional arguments).
The toString methodEdit
toString method!toString
There are two object methods that are common to many object types: toString and equals. toString converts the object to some reasonable string representation that can be printed. equals is used to compare objects.
When you print an object using print or println, Java checks to see whether you have provided an object method named toString, and if so it invokes it. If not, it invokes a default version of toString that produces the output described in Section printobject.
Here is what toString might look like for the Complex class:
verbatim
public String toString () return real + " + " + imag + "i";
verbatim
The return type for toString is String, naturally, and it takes no parameters. You can invoke toString in the usual way:
verbatim
Complex x = new Complex (1.0, 2.0); String s = x.toString ();
verbatim
or you can invoke it indirectly through print:
verbatim
System.out.println (x);
verbatim
Whenever you pass an object to print or println, Java invokes the toString method on that object and prints the result. In this case, the output is 1.0 + 2.0i.
This version of toString does not look good if the imaginary part is negative. As an exercise, fix it.
The equals methodEdit
equals method!equals
When you use the == operator to compare two objects, what you are really asking is, ``Are these two things the same object? That is, do both objects refer to the same location in memory.
For many types, that is not the appropriate definition of equality. For example, two complex numbers are equal if their real parts are equal and their imaginary parts are equal.
type!object
When you create a new object type, you can provide your own definition of equality by providing an object method called equals. For the Complex class, this looks like:
verbatim
public boolean equals (Complex b) return (real == b.real && imag == b.imag);
verbatim
By convention, equals is always an object method. The return type has to be boolean.
The documentation of equals in the Object class provides some guidelines you should keep in mind when you make up your own definition of equality:
quote
The equals method implements an equivalence relation:
equality identity
itemize.
For any reference value x, x.equals(null) should return false.
itemize
quote
The definition of equals I provided satisfies all these conditions except one. Which one? As an exercise, fix it.
Invoking one object method from another
method!invoking
As you might expect, it is legal and common to invoke one object method from another. For example, to normalize a complex number, you divide through (both parts) by the absolute value. It may not be obvious why this is useful, but it is.
Let's write the method normalize as an object method, and let's make it a modifier.
verbatim
public void normalize () double d = this.abs(); real = real/d; imag = imag/d;
verbatim
The first line finds the absolute value of the current object by invoking abs on the current object. In this case I named the current object explicitly, but I could have left it out. If you invoke one object method within another, Java assumes that you are invoking it on the current object.
As an exercise, rewrite normalize as a pure function. Then rewrite it as a class method.
Oddities and errorsEdit
method!object method!class overloading
If you have both object methods and class methods in the same class definition, it is easy to get confused. A common way to organize a class definition.
static
Now that we know what the keyword static means, you have probably figured out that main is a class method, which means that there is no ``current object when it is invoked.
current object this instance variable variable!instance
Since there is no current object in a class method, it is an error to use the keyword this. If you try, you might get an error message like: ``Undefined.
InheritanceEdit
inheritance
The language feature that is most often associated with object-oriented programming is inheritance. Inheritance is the ability to define a new class that is a modified version of a previously-defined.
Drawable rectanglesEdit
Rectangle class!Rectangle drawable definition looks like this:
verbatim import java.awt.*;
class DrawableRectangle extends Rectangle
public void draw (Graphics g) g.drawRect (x, y, width, height);
verbatim
Yes, that's really all there is in the whole class definition. The first line imports the java.awt package, which is where Rectangle and Graphics are defined.
AWT import statement!import
The next line indicates that DrawableRectangle inherits from Rectangle. The keyword extends is used to identify the parent class.
The rest is the definition of the draw method, which refers to the instance variables x, y, width and height. It might seem odd to refer to instance variables that don't appear in this class definition, but remember that they are inherited from the parent class.
To create and draw a DrawableRectangle, you could use the following:
verbatim
public static void draw
(Graphics g, int x, int y, int width, int height)
DrawableRectangle dr = new DrawableRectangle (); dr.x = 10; dr.y = 10; dr.width = 200; dr.height = 200; dr.draw (g);
verbat.
constructor
We can set the instance variables of dr and invoke methods on it in the usual way. When we invoke draw, Java invokes the method we defined in DrawableRectangle. If we invoked grow or some other Rectangle method on dr, Java would know to use the method defined in the parent class.
The class hierarchyEdit
class hierarchy Object parent class class!parent, Slate extends Frame (see Appendix slate),.
Object-oriented designEdit
object-oriented design
Inheritance is a powerful feature. Some programs that would be complicated without inheritance can be written concisely and simply with it. Also, inheritance can facilitate code reuse, since you can customize the behavior of build-in classes without having to modify them.
On the other hand, inheritance can make programs difficult to read, since it is sometimes not clear, when a method is invoked, where to find the definition. For example, one of the methods you can invoke on a Slate is getBounds. Can you find the documentation for getBounds? It turns out that getBounds is defined in the parent of the parent of the parent of the parent of Slate.
Also, many of the things that can be done using inheritance can be done almost as elegantly (or more so) without it.
GlossaryEdit
description
.
object method class method current object this implicit explicit | http://en.m.wikibooks.org/wiki/The_Way_of_the_Java/Object-oriented_programming | CC-MAIN-2014-15 | refinedweb | 2,204 | 57.16 |
Open source, low-cost robotic quadruped for reinforcement learning
I've assembled version 3! There are a couple of bugs :/ and room for many improvements, but it's usable for testing some of the control hardware & software.
The servos are driven by an Adafruit I2C PWM driver (PCA9685) connected to a Raspberry PI over I2C.
Here a simple range of motion test of all the servos:
I've worked on the shoulder/torso assembly and made some refinements to the legs and shoulder. I've replaced the 4mm smooth shaft that had to be cut with M4 socket screws and lock nuts. This requires a bit less effort and is cheaper.
The shoulder joint is M6 threaded rod that will have to be cut with a hacksaw. The shoulder to hip rods are M8 threaded and will also have to be cut from 1m stock.
The mechanical parts (rods, screws, lock nuts) have been ordered from RS.
I'm getting the 3mm plexiglass today and having it laser cut:
Except I just realised I forgot something :/
Next update will be when I've assembled this version...
I've made some progress in writing code for converting from the Autodesk Fusion design to Mujoco for physics simulation.
Here's the Fusion design:
and the Mujoco simulation:
The Fusion design has to follow some restrictions and naming convention for conversion, but I'm confident I can proceed to a more complex leg design, then a simulation of all 4 legs using OpenAI reinforcement learning.
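To give a feel for the conversion target, here is a rough sketch of the kind of MJCF the converter emits for a single two-link leg. The body names, sizes, and joint ranges below are placeholders for illustration, not the values the Fusion script actually exports:

```python
# Build a minimal MJCF model for one two-link leg (placeholder dimensions).
import xml.etree.ElementTree as ET

def leg_mjcf(upper_len=0.10, lower_len=0.12):
    mujoco = ET.Element("mujoco", model="leg")
    world = ET.SubElement(mujoco, "worldbody")

    # Upper leg hangs from a hinge "hip" joint.
    upper = ET.SubElement(world, "body", name="upper_leg", pos="0 0 0.3")
    ET.SubElement(upper, "joint", name="hip", type="hinge",
                  axis="0 1 0", range="-45 45")
    ET.SubElement(upper, "geom", type="capsule",
                  fromto=f"0 0 0 0 0 -{upper_len}", size="0.01")

    # Lower leg is a child body attached at the knee.
    lower = ET.SubElement(upper, "body", name="lower_leg",
                          pos=f"0 0 -{upper_len}")
    ET.SubElement(lower, "joint", name="knee", type="hinge",
                  axis="0 1 0", range="0 90")
    ET.SubElement(lower, "geom", type="capsule",
                  fromto=f"0 0 0 0 0 -{lower_len}", size="0.01")

    return ET.tostring(mujoco, encoding="unicode")

print(leg_mjcf())
```

The real converter walks the Fusion joint tree and emits one nested `<body>` per component, which is why the naming convention in the design matters.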
This implies that I will be able to modify the design in a high-level tool like Fusion and automatically do the reinforcement learning for walking. Because Fusion is parametric, it also implies I can write code to vary the design within certain parameter ranges (e.g. leg upper and lower leg lengths), and optimise the design using the simulation output.
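The optimisation loop I have in mind looks something like the following. Note that `walking_distance()` here is a stand-in toy objective — in practice that score would come from running the trained policy in a Mujoco rollout of the regenerated model:

```python
# Sketch of the design-optimisation loop: vary the parametric leg lengths,
# regenerate the model, and score each variant.

def walking_distance(upper_len, lower_len):
    # Placeholder objective, NOT a real simulation result: prefers legs of
    # similar length summing to roughly 0.25 m.
    return -abs(upper_len + lower_len - 0.25) - abs(upper_len - lower_len)

def grid_search(lengths):
    """Exhaustively score every (upper, lower) pair and keep the best."""
    best, best_score = None, float("-inf")
    for upper in lengths:
        for lower in lengths:
            score = walking_distance(upper, lower)
            if score > best_score:
                best, best_score = (upper, lower), score
    return best, best_score

lengths = [0.08, 0.10, 0.125, 0.15]
best, score = grid_search(lengths)
print(best, score)
```

Grid search is the dumbest possible strategy, but since each evaluation is an RL training run it may be worth swapping in something sample-efficient like Bayesian optimisation later.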
I programmed a basic 4-phase gait:
#include <Arduino.h>
#include <Servo.h>

Servo upper;
Servo lower;

int upperfrom = 25;
int upperto = 125;
int lowerfrom = 55;
int lowerto = 125;

void setup() {
  upper.attach(9);
  lower.attach(10);
  upper.write(90);
  lower.write(90);
  delay(1000);
  upper.write(upperfrom);
  lower.write(lowerfrom);
  delay(1000);
}

int phase = 0;
int tick_counter = 0;
int ticks_per_phase[] = {100, 50, 50, 50}; // Fast
// int ticks_per_phase[] = {50, 20, 20, 20}; // Slow
int upper_from_per_phase[] = {25, 75, 125, 75};
int lower_from_per_phase[] = {80, 55, 125, 125};

void loop() {
  // Linearly interpolate each joint from this phase's angle to the next
  // phase's angle over ticks_per_phase[phase] ticks of 10 ms each.
  int upper_from = upper_from_per_phase[phase];
  int upper_to = upper_from_per_phase[(phase + 1) % 4];
  int upper_pos = (upper_to - upper_from) * tick_counter / ticks_per_phase[phase] + upper_from;

  int lower_from = lower_from_per_phase[phase];
  int lower_to = lower_from_per_phase[(phase + 1) % 4];
  int lower_pos = (lower_to - lower_from) * tick_counter / ticks_per_phase[phase] + lower_from;

  upper.write(upper_pos);
  lower.write(lower_pos);

  tick_counter += 1;
  if (tick_counter == ticks_per_phase[phase]) {
    phase = (phase + 1) % 4;
    tick_counter = 0;
  }
  delay(10);
}
You'll have to turn your head upside-down...
After the last update I put some effort into the linkages. I bought a bunch of different hobby ones to see how they might work:
The one on the right, where you can make your own length link, seemed ideal.
But I realised there's a problem. Version 1 had both servos in the "shoulder", but that severely limited the range of motion of the lower leg. I designed it like this initially to reduce the mass of the leg. It might still work but I need to carefully do a simulation to ensure proper range of motion.
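A quick way to check the reachable workspace before committing to a layout is a two-link forward-kinematics sweep. The link lengths and joint limits below are guesses, not the real dimensions:

```python
import math

# Two-link planar forward kinematics for the leg: hip at the origin,
# angles measured from straight down. Lengths/limits are placeholders.
def foot_position(hip_deg, knee_deg, upper=0.10, lower=0.12):
    hip = math.radians(hip_deg)
    knee = math.radians(knee_deg)
    kx = upper * math.sin(hip)          # knee position
    ky = -upper * math.cos(hip)
    fx = kx + lower * math.sin(hip + knee)  # foot position
    fy = ky - lower * math.cos(hip + knee)
    return fx, fy

def workspace(hip_range=(-45, 45), knee_range=(0, 90), step=5):
    """Sample foot positions over the full joint range."""
    pts = []
    for h in range(hip_range[0], hip_range[1] + 1, step):
        for k in range(knee_range[0], knee_range[1] + 1, step):
            pts.append(foot_position(h, k))
    return pts

pts = workspace()
xs, ys = zip(*pts)
print(f"x span: {min(xs):.3f}..{max(xs):.3f}  y span: {min(ys):.3f}..{max(ys):.3f}")
```

Plotting the sampled points shows immediately whether a given servo placement can still reach the stride the gait needs, which is much cheaper than finding out after laser cutting.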
So I moved the 2nd servo to the top leg in Fusion. I also added links with ball joint, and experimented with the "Motion Study" feature.
Sadly this seems to be pretty buggy... :/
With the new servo position, I updated the laser cutting drawings and got the parts back today.
Assembled. Eyeballed how long the top link should be:
Repeated for bottom link.
Now it's assembled and ready for Arduino:
Close-ups:
Looks decent. Hopefully I can now hook up an Arduino and experiment with controlling the leg. Easter weekend project...
TL;DR - I built the first version of a leg:
I do want to make this into a kit, but anyone should be able to gather the components and make one themselves with the help of a makerspace.
I've been using Autodesk Fusion for the first time to design this robot, it's taken a bit of getting used to but I do like the parametric design capabilities (I'm used to using Rhino). Version one of the robot looks something like this:
So we have 3 degrees of freedom for each leg, 12 in total. Servos are relatively cheap compared to torque motors ($15 vs $3000) so I'm going with servos for now.
Some people who have built similar projects actuate the legs with the servo directly. I'm not convinced by this. I'm also not convinced that using a linkage is the right approach. Each has its advantages and disadvantages. In essence though, I don't like putting the servo under strain perpendicular to the servo axis, so in this design the leg segments have their own joints based on a metal shaft (more on this later).
I had the prototype parts lasercut from acrylic (NB the thinking at the moment is to use CNC routed 3mm carbon fibre sheets, NOT 3mm acrylic sheets, but I have some acrylic handy and it's cheap so I'm using that for now)
First set of parts:
And the first problem:
I didn't take into account the insertion angle of the servo :/
Took out the file and started filing... and here's the two servos embedded:
You can see I used 25mm standoffs. The nice thing here is the servos are 28mm from bottom to where they meet the acrylic, which is 3mm thick. So they fit snugly against the opposite layer:
Slick!
Then I hit my 2nd snag :/
In the one corner, a 4mm Precision Round Bar with a hardness of HRC60-62.
In the other corner, a small hacksaw.
Hacksaw lost badly. It didn't even make a scratch!
What to do.... I went to the hardware store to find a 4mm wooden dowel to use temporarily... but got a 5mm one instead. Fail.
Then I thought about something.... I have a little drill that also has tiny grinding discs. Which I've never used... Will it work?
Sparks! Here's to buying unnecessary tools. Got some safety glasses and set to work.
Shout-out to Zortrax who supplies a set of safety glasses with their M200 3D printer! That machine is fantastic. Highly recommended.
The world's tiniest angle grinder did the trick. One 4mm×44mm shaft:
I'm using 4mm shaft collars for the joint. Nothing fancy at this point. No bearing(s):
3rd issue (3rd? I've lost count):
The 4th standoff support is too close to the joint. I removed it. I don't actually think it's necessary since the shaft itself is fairly rigid.
Femur attached:
I have no idea how to create the link between the servo horn and the femur yet. Ball joint? It needs to be rigid...
Tibia:
Front and right views:
I'm pretty satisfied with this, considering it's the very first iteration. As soon as I've made a plan with the linkages I'll put an Arduino onto it for a bit of a demo.
Please add questions and suggestions if you have any!
Woof.
This is really an AI project, not a hardware project.
What I'm trying to do is create an economical physical robot by utilising simulation as far as possible. Building a hardware prototype, then working on the control system, then iterating on the hardware, etc. etc. is a very laborious, expensive process.
The good people at OpenAI (and others), are using simulated environments to train and develop robots orders of magnitude more cheaply (as far as labour is concerned). Using a simulated environment allows you to iterate very rapidly, and conduct experiments in faster than real-time. Here's an example of an OpenAI robot being developed:
I want to build a robot dog, but my budget is very close to zero (relatively speaking, compared to a company like Boston Dynamics), and I'm interested in using machine learning to achieve this goal.
I'm using OpenAI Gym as a starting point, together with Tensorflow and Mujoco. Mujoco is a physical simulation environment, designed for simulating robotics.
Essentially, I need to create a simulated environment that is close enough to the real world physical properties of the mechanics, actuators and sensors of the robot so the controls mechanisms can be transplanted to a real robot.
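One common trick for making that transplant survivable is domain randomization: train against many slightly-wrong copies of the simulated physics so the controller tolerates the real one. A minimal sketch — the parameter names here are illustrative, not taken from the actual model:

```python
import random

def randomized_params(base, jitter=0.1):
    # Perturb each simulated physical constant by up to +/-10% per episode
    # so the learned controller cannot overfit to one exact physics model.
    return {k: v * random.uniform(1 - jitter, 1 + jitter) for k, v in base.items()}

base = {"leg_mass_kg": 0.12, "servo_torque_nm": 1.96, "friction": 0.8}
episode_params = [randomized_params(base) for _ in range(3)]
for p in episode_params:
    print(p)
```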
Lucky for me, there is a good starting point in OpenAI Gym — half a cheetah. I found the best listed solution in the OpenAI gym, written by a certain generous Mr. Pat Coady:
Using his code and OpenAI Half Cheetah I trained this:
Then improved the mechanical parameters and the reward:
Added more legs:
Created a body similar to the Netflix Black Mirror Metalhead bot:
Improved the mechanical and reward parameters:
More improvements:
And now I have a reasonable model. Bridging the gap between the simulated model and an actual mechanical one is going to be tricky. It would require
It's going to be quite a challenge.
If you're interested in the code, all the code is on Github:
Sorry for the lack of details, please ask questions if you have any :)
hi,
they are servos with feedback: where did you find them?
From a local supplier, it was one of these two (can't remember exactly):
Servo Large 20kg.cm with Feedback
Servo Large 25kg.cm with Feedback
re ben
with what you've already done it will come soon ;)
will it use a npu ?
will you make it to handle tools ?
What's an NPU?
No plans to handle tools yet, this is going to be hard enough as it is.
Neural Processing Unit
what is hard?
wait for the 99$ pine64 with npu to play with nn
with it a bot will learn offline
Re NPU: Aha. Interesting. Maybe!
Re "hard": I'm more concerned with making it walk/jog/run/ avoid obstacles/navigate bad terrain than I am with it handling tools.
I assume that the machine learning is based around genetic algorithms, how detailed is the simulator? If you can model the prototype including weight distribution, center of mass, etc, you could theoretically create the programming for it in a simulator and then port it to your prototype. I'd imagine it would have to be near perfect.
Nope, the machine learning is not based on genetic algorithms, it's based on reinforcement learning.
The simulator is very detailed; Mujoco was designed specifically for robotics, amongst other things.
Yes the idea is to get it as close as possible in simulation, but it's still going to be hard to model things like slight variations in geometry and real-world issues like joints that can deflect, bearings that aren't perfect, etc. We'll see...
Best of luck. I hope your project turns out well. I'm going to read up on this reinforcement learning.
Is the project active? | https://hackaday.io/project/79440-openk9 | CC-MAIN-2019-35 | refinedweb | 1,857 | 63.8 |
See Also
allowDeleting
Specifies whether a user can delete rows. It is called for each data row when defined as a function.
The following code allows a user to delete only even data rows:
jQuery
$(function(){
    $("#dataGridContainer").dxDataGrid({
        // ...
        editing: {
            allowDeleting: function(e) {
                return e.row.rowIndex % 2 == 1;
            }
        }
    });
});
Angular
import { DxDataGridModule } from "devextreme-angular";
// ...
export class AppComponent {
    allowDeleting(e) {
        return e.row.rowIndex % 2 == 1;
    }
}
@NgModule({
    imports: [
        // ...
        DxDataGridModule
    ],
    // ...
})
<dx-data-grid ... > <dxo-editing [allowDeleting]="allowDeleting"> </dxo-editing> </dx-data-grid>
See Also
allowUpdating
Specifies whether a user can update rows. It is called for each data row when defined as a function.
See an example in the allowDeleting option.
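Since allowUpdating accepts the same kind of callback as allowDeleting, the predicate can be written and exercised on its own. A sketch — plain Node-runnable, outside the widget configuration:

```javascript
// Predicate to pass as editing: { allowUpdating: allowUpdating }.
// Mirrors the allowDeleting example: only rows with an odd rowIndex are editable.
function allowUpdating(e) {
    return e.row.rowIndex % 2 === 1;
}

console.log(allowUpdating({ row: { rowIndex: 3 } })); // true
console.log(allowUpdating({ row: { rowIndex: 2 } })); // false
```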
See Also

- template; instead, use a column's editCellTemplate
- readOnly; instead, use allowEditing
- editorType; instead, use onEditorPreparing
- any event handler (options whose name starts with "on..."); instead, handle the editorPreparing or editorPrepared event to customize the form editors.
Also, the colCount option defaults to 2, but it can be redefined.
If you need to customize an individual form item, use the formItem object.
The popup always contains a form whose items are used for editing. Use the form option to customize the form items.
refreshMode
The following table shows the operations that are performed after saving changes in different modes:
** - Set repaintChangesOnly to true to repaint only elements whose data changed.
*** - Set remoteOperations to false and cacheEnabled to true to avoid data reloading.
When the refreshMode is "reshape" or "repaint", the server should respond to the insert or update request by sending back the data item saved in the database. See the DataGridWebApiController tab in the CRUD Operations demo for an example of the server-side implementation. The InsertOrder and UpdateOrder actions illustrate this case.
Use the GridEditRefreshMode enum to specify this option when the widget is used as an ASP.NET MVC Control. This enum accepts the following values: Full, Reshape, and Repaint.
useIcons
Specifies whether the editing column uses icons instead of links.
If you have technical questions, please create a support ticket in the DevExpress Support Center.
We appreciate your feedback. | https://js.devexpress.com/Documentation/18_2/ApiReference/UI_Widgets/dxDataGrid/Configuration/editing/ | CC-MAIN-2022-05 | refinedweb | 353 | 50.94 |
#include <assert.h>
#include <float.h>
#include <inttypes.h>
#include <math.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include "nvim/api/buffer.h"
#include "nvim/api/extmark.h"
#include "nvim/api/private/defs.h"
#include "nvim/ascii.h"
#include "nvim/buffer.h"
#include "nvim/buffer_updates.h"
#include "nvim/change.h"
#include "nvim/charset.h"
#include "nvim/cursor.h"
#include "nvim/decoration.h"
#include "nvim/diff.h"
#include "nvim/digraph.h"
#include "nvim/edit.h"
#include "nvim/fileio.h"
#include "nvim/fold.h"
#include "nvim/garray.h"
#include "nvim/getchar.h"
#include "nvim/highlight.h"
#include "nvim/highlight_group.h"
#include "nvim/indent.h"
#include "nvim/input.h"
#include "nvim/log.h"
#include "nvim/ops.h"
#include "nvim/option.h"
#include "nvim/os/input.h"
#include "nvim/os/os.h"
#include "nvim/os/shell.h"
#include "nvim/os/time.h"
#include "nvim/os_unix.h"
#include "nvim/path.h"
#include "nvim/plines.h"
#include "nvim/quickfix.h"
#include "nvim/regexp.h"
#include "nvim/screen.h"
#include "nvim/search.h"
#include "nvim/spell.h"
#include "nvim/strings.h"
#include "nvim/syntax.h"
#include "nvim/tag.h"
#include "nvim/ui.h"
#include "nvim/undo.h"
#include "nvim/vim.h"
#include "nvim/window.h"
Case matching style to use for :substitute.
Append output redirection for the given file to the end of the buffer
In an argument, search for a language specifier in the form "@xx". Changes the "@" to NUL if found, and returns a pointer to "xx".
Check if it is allowed to overwrite a file. If b_flags has BF_NOTEDITED, BF_NEW or BF_READERR, check for overwriting current file. May set eap->forceit if a dialog says it's OK to overwrite.
Closes any open windows for inccommand preview buffer.
":ascii" and "ga" implementation
Handle the ":!cmd" command. Also for ":r !cmd" and ":w !cmd" Bangs in the argument are replaced with the previously entered command. Remember the argument.
start editing a new file
:move command - move lines line1-line2 to line dest
Call a shell to execute a command. When "cmd" is NULL start an interactive shell.
Give message for number of substitutions. Can also be used after a ":global" command.
":wall", ":wqall" and ":xall": Write all changed files (and exit).
write current buffer to file 'eap->arg' if 'eap->append' is TRUE, append to the file
if *eap->arg == NUL write to current file
":left", ":center" and ":right": align text.
":insert" and ":append", also used by ":change"
":change"
":copy"
":exusage"
":file[!] [fname]".
Execute a global command of the form:
g/pattern/X : execute X on all lines where pattern matches v/pattern/X : execute X on all lines where pattern does not match
where 'X' is an EX command
The command character (as well as the trailing slash) is optional, and is assumed to be 'p' if missing.
This is implemented in two passes: first we scan the file for the pattern and set a mark for each line that (not) matches. Secondly we execute the command for each line that has a mark. This is required because after deleting lines we do not know where to search for the next match.
":help": open a read-only window on a help file
":helpclose": Close one help window
":helptags"
List v:oldfiles in a nice way.
":retab".
":sort".
:substitute command
If 'inccommand' is empty: calls do_sub(). If 'inccommand' is set: shows a "live" preview, then removes the changes from undo history.
":update".
":viusage"
Handle ":wnext", ":wNext" and ":wprevious" commands.
":write" and ":saveas".
Find all help tags matching "arg", sort them and return in matches[], with the number of matches in num_matches. The matches will be sorted with a "best" match algorithm. When "keep_lang" is true try keeping the language of the current buffer.
After reading a help file: May cleanup a help buffer when syntax highlighting is not used.
Try to abandon the current file and edit a new or existing file.
Execute cmd on lines marked with ml_setmarked().
Return a heuristic indicating how well the given string matches. The smaller the number, the better the match. This is the order of priorities, from best match to worst match:
Create a shell command from a command string, input redirection file and output redirection file.
Check the 'write' option.
Set up for a tagpreview.
Print a text line. Also in silent mode ("ex -s").
Skip over the pattern argument of ":vimgrep /pat/[g][j]". Put the start of the pattern in "*s", unless "s" is NULL.
Get old substitute replacement string
Tries to enter to an existing window of given buffer. If no existing buffer is found, creates a new split.
Set substitute string and timestamp
sub must be in allocated memory. It is not copied.
Platform Updates: Operation Developer Love
by Ian Wilkes - January 4, 2012 at 6:30pm
This past week, we provided an update on our Facebook Platform SDKs, a retrospective of Platform changes in 2011 and launched the following changes:
Stories In Timeline App Tab
Stories published from an app using Feed dialog or Graph API will now show in the app's tab on the user's timeline. This is consistent with Open Graph actions published from an app showing in the app's tab.
Improved Insights for Web Sites.
Breaking Changes effective this week
- Requests 2.0 Efficient: The Request 2.0 Efficient migration will be enabled for all apps over the next few days. Please make sure to update your apps now to respect the updated request ID format to avoid any disruption of service. On January 15th this setting will be set to enabled for all apps. For more information please see the Requests docs.
- FB.Canvas.getPageInfo: The getPageInfo method now requires a callback function and no longer returns a value synchronously. For more information on this change please see this blog post.
Upcoming Breaking Changes on February 1, 2012
- Removing canvas_name field from application object: We will be deprecating the canvas_name field in favor of namespace field on the application object. See this blog post for more information.
- Removing App Profile Pages: We will be deleting all App Profile Pages and redirecting all traffic directly to the App. See this blog post for more information.
Bug activity from 12/27 to 1/3
- 131 bugs were reported
- 28 bugs were reproducible and accepted (after duplicates removed)
- 19 bugs were by design
- 12 bugs were fixed
- 56 bugs were duplicate, invalid, or need more information
Reported bugs fixed between 12/27 and 1/3
- Show Stream for Like Box shows only checkins
- max-width of photo incorrectly stated in Graph API doc
- Insights not updating
- Oauth Validation error throws Invalid signed request with Facebook C# SDK
- Broken cross-reference links in Advanced Topics -> FQL -> profile
- migrating old comments
- date_format not applied on all feed story properties
- Feed Dialog Produces Captcha Screens and Always Fails
- Friend's name for test account is missing
- Call to me/friends with a test user no longer returns name,first_name and middle_name fields
- Cannot set custom stream privacy from FB.ui feed dialog
- Requests 2.0 lose the "data" parameter on mobile browsers
Activity on facebook.stackoverflow.com this week
- 133 questions asked
- 47 answered, 35% answered rate
- 74 replied, 56% reply rate | https://developers.facebook.com/blog/post/625/ | CC-MAIN-2014-15 | refinedweb | 421 | 58.52 |
Hello,
It's been a long time since I've done some polymorphic programming and now I'm struggling to get my head round it again.
Here's what I've got...
an interface
public interface INote
{
    string note { get; set; }
}
I've 2 options derived from this
public class Disciplinary : INote
{
    public string note { get; set; }
}

public class Absence : INote
{
    public string note { get; set; }
}
Now I've got the concept of saving. So, I new up a global saver which I figured would act like a delegate in order to call the right class to save it, but sadly it doesn't work... or rather I'm in a pickle
public class GlobalSaver
{
    public void saveMe(INote note)
    {
    }
}

public interface businessRules
{
    void Save(INote note);
}

public class AbsenceSaver : businessRules
{
    public void Save(INote note)
    {
    }
}

public class DisciplinarySaver : businessRules
{
    public void Save(INote note)
    {
    }
}
And then called from main
Absence a = new Absence();
GlobalSaver saver = new GlobalSaver();
saver.saveMe(a);
What I'm after is newing up a absence or disciplinary and then saving it without having to either do a switch or a cast in order to know which save routine to call. Surely it should know which one to call based on the object? | https://www.daniweb.com/programming/software-development/threads/451659/confused-polymorhpic-question | CC-MAIN-2018-43 | refinedweb | 204 | 56.63 |
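One way to get there without a switch or cast — a sketch of the usual fix, not code from the thread: put Save on the interface itself, so the runtime type of the note picks the right implementation via virtual dispatch.

```csharp
using System;

public interface INote
{
    string Note { get; set; }
    void Save();
}

public class Absence : INote
{
    public string Note { get; set; }
    public void Save() => Console.WriteLine("Absence saved");
}

public class Disciplinary : INote
{
    public string Note { get; set; }
    public void Save() => Console.WriteLine("Disciplinary saved");
}

public class GlobalSaver
{
    // No switch, no cast: virtual dispatch selects the right Save().
    public void SaveMe(INote note) => note.Save();
}

public static class Demo
{
    public static void Main()
    {
        var saver = new GlobalSaver();
        saver.SaveMe(new Absence());
        saver.SaveMe(new Disciplinary());
    }
}
```

If the save logic must live outside the note classes (as with the AbsenceSaver/DisciplinarySaver pair), the equivalent trick is a dictionary mapping note Type to a businessRules instance, looked up once in GlobalSaver.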
Fills an array with sample data from the clip.
The samples are floats ranging from -1.0f to 1.0f. The sample count is determined by the length of the float array.
Use the offsetSamples parameter to start the read from a specific position in the clip. If the read length from the offset is longer than the clip length, the read will wrap around
and read the remaining samples from the start of the clip.
Note that with compressed audio files, the sample data can only be retrieved when the Load Type is set to Decompress on Load in the audio importer. If this is not the case then the array will be returned with zeroes for all the sample values.
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    void Start()
    {
        AudioSource aud = GetComponent<AudioSource>();
        float[] samples = new float[aud.clip.samples * aud.clip.channels];
        aud.clip.GetData(samples, 0);
        int i = 0;
        while (i < samples.Length)
        {
            samples[i] = samples[i] * 0.5F;
            ++i;
        }
        aud.clip.SetData(samples, 0);
    }
}
This is the mail archive of the cygwin@cygwin.com mailing list for the Cygwin project.
On Fri, 14 Nov 2003, Thomas Hammer wrote:

> Hi.
>
> I run the latest version of cygwin, with gzip, on WindowsXP SP1 and have a
> problem with gzip.

Please see <> for guidelines on how to report Cygwin problems.

> The command
>
> $ cat binaryfile.bin | gzip -c > bin.gz
>
> produces an invalid .gz file:
>
> $ gunzip bin.gz
> gunzip: bin.gz: invalid compressed data--crc error
> gunzip: bin.gz: invalid compressed data--length error
>
> If I do
>
> gzip binaryfile.bin
>
> everything works as expected - i.e. the resulting .gz file is valid.
>
> If I do this instead:
>
> cat binaryfile.bin > acopy.bin
>
> The two .bin files are identical.
>
> My gzip version:
>
> $ gzip --version
> ASMV
> Written by Jean-loup Gailly.
>
> My conclusion is that gzip's handling of stdout is broken in some way.

I bet you have a text mount. Please *attach* the output of "cygcheck -svr", as per <> - this will confirm the guess.

> The reason I report this problem, instead of just avoiding it by not using
> the -c option for gzip, is that I discovered the problem while running make
> dist on a Makefile.in created by automake. And it's automake that creates
> the gzip command line. I'll send a patch to the automake people and ask them
> to fix this - but since it's really gzip that is broken, I think it should
> be fixed here too.
>
> I think you could fix this by setting the mode for the filehandle for stdout
> to O_BINARY, i.e. something like this:
>
> #if defined (_WIN32)
> setmode(FileHandle, O_BINARY); /* Make sure it is in binary mode. */
> #endif

FYI, Cygwin doesn't define _WIN32. Besides, the above is not a proper patch.

Igor

> It would be nice if you could look into it.
>
> Best Regards,
> Thomas Hammer
> hammer@sim:
Installing the new Visual Studio 2012, I was surprised to find no "C" project template, only C++. Googling yielded no C-only templates and few concrete examples; it seems C has fallen to the wayside... even though the compiler still supports it. So a method of creating a C project from the C++ template is provided.
Also discussed is how to set up this project to use a Windows version of Flex and Bison - tools for lexical analysis and parsing of data streams.
Lex and Yacc originated in the UNIX world long ago. Yacc stands for 'yet another compiler compiler', and Lex is a 'lexical analyzer'. Lex reads a file or data stream and breaks it up into its component pieces, which it calls tokens. Tokens may be numbers, keywords, strings, punctuation, etc. Yacc commonly receives this stream of tokens, structures them according to rules, then looks for matching patterns. If a match is found, some action is taken. These two tools together make it possible to parse an input stream and act on it, much more cleanly and effectively than writing hundreds of lines of traditional C code. The Lex and Bison syntax is very powerful and very compact.
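To make "patterns plus actions" concrete, a Lex/Flex input file is just regular expressions paired with C actions. An illustrative sketch — not one of the files used later in this article:

```
%%
[0-9]+      printf("NUMBER(%s)\n", yytext);
[A-Za-z_]+  printf("WORD(%s)\n", yytext);
[ \t\n]     ;  /* skip whitespace */
.           printf("PUNCT(%c)\n", *yytext);
%%
```

Each rule fires when its pattern matches the longest prefix of the remaining input; `yytext` holds the matched text. Bison grammars follow the same pattern-action shape, but over token streams instead of characters.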
Today, the GNU equivalents of Lex and Yacc are called Flex and Bison (heh!) They are still only Unix platform tools, however. Nothing quite like these have ever made it to the Win32 world. A few ports of these were made over the years, but they required lots of GNU support libraries and were not updated regularly at all. (The last Flex port I found is almost a decade old!) Of course, it was too old to work with the example used.
But a few developers over at SourceForge have created a package called WinFlexBison. This is a compact, all-inclusive port of modern Flex and Bison for Win32, and is maintained regularly. This is the package we will use.
As a side note, WinFlexBison can output C, C++, and Java code, but only C is used here.
To create a Win32 "C" project in Visual Studio 2012:
1. Create a new project. Pick Win32 C++ Console. Give it a descriptive name. Next.
2. Uncheck the box for "use precompiled headers" and check the box for "create empty project." Click Finish.
3. Right-click "Source Files", click "Add" then "New Item." Call it main.c. Click it and copy this text into it:
#include <stdio.h>
int main(void)
{
printf("Press ENTER to close. ");
getchar();
return 0;
}
To use Flex and Bison in Windows:
1. Get WinFlexBison from and extract somewhere meaningful, then add that path to the system path. If you need help with the system path, see this and this. Close any command prompts.
2. Create these new files in the project: Expression.c/.h, Lexer.l/.c/.h, Parser.y/.c/.h
3. Copy the file data from into these files (some will remain empty.) Incorporate the pause code from earlier into the end of the real main(). Be sure to SAVE all of these files as data is entered, we will need it for the next step.
4. Open a command prompt, navigate to your .Y and .L files, and run these commands:

win_bison parser.y
win_flex --wincompat lexer.l
No error messages from Flex or Bison? Good. Can also test other settings this way. This example liked having the --wincompat option turned on.
5. Try to build the project. It will probably reveal a few warnings. You will need to work through these in a minute.
Now to automate the building of the .l and .y files:
1. Right-click Parser.y -> Properties -> "Item Type" to "Custom Build Tool."
2. Click "Custom Build Tool" on the left, and type "c:\your-path-to-winflexbison\win_bison parser.y" as the command, substituting the real path.
3. Change the description to "BISON Custom Build Tool."
4. Set "Outputs" to "parser.h,parser.c"
Repeat for Lexer.l, but with "yourpath\win_flex --wincompat lexer.l" as the command, etc.
Hit F5... Voila! Now go get them bugs. You should end up with this:
If you follow this guide verbatim (and the example at Wikipedia hasn't changed), there should be two "errors." One, "yyerror" will be defined twice. This is because this function appears in both Lexer.l and Parser.y. The solution is to change one of them to just:
int yyerror(SExpression **expression, yyscan_t scanner, const char *msg);
Two, getAST( ) needs to take a non-const argument:

SExpression *getAST(char *expr) // was const char, removed const
Revision 1.0 on 2013.9. | http://www.codeproject.com/Articles/652229/VS-C-project-w-Flex-Bison | CC-MAIN-2015-11 | refinedweb | 749 | 77.13 |
Generic Methods
It is possible to declare a generic method that uses one or more type parameters of its own. You can define generic methods in an ordinary or generic class. Methods within a generic type definition can also have independent type parameters. Generic methods are smarter and can figure out their parameter types from their usage context without having to be explicitly parameterized. (In reality, of course, it is the compiler that does this.) This is called type argument inference. In addition, if a generic method is static, it has no access to the generic type parameters of the class, so if it needs to use genericity it must be a generic method.
Like generic classes, generic methods have a parameter type declaration using the <> syntax. To define a generic method, you simply place a generic parameter list before the return value, like this:

static <U, V> void showUV(U u, V v)
This showUV( ) method has its own parameter type declaration that defines the type variables U and V. This method is a generic method and can appear in either a generic or nongeneric class. The scope of U and V is limited to the method showUV( ) and hides any definition of U and V in any enclosing generic class. As with generic classes, the type U or V or both can have bounds:

static <U, V extends Number> void showUV(U u, V v)
Unlike a generic class, it does not have to be instantiated with a specific parameter type for U and V before it is used. Instead, it infers the parameter type U and V from the type of its arguments, entry. This is called type argument inference. For example :
The showUV( ) calls look like normal method calls. For the calls to showUV( ) that use primitive types, autoboxing comes into play, automatically wrapping the primitive types in their associated objects. In fact, generic methods and autoboxing can eliminate some code that previously required hand conversion.
The following program declares a nongeneric class called GenMethDemo and a static generic method within that class called showUV( ). The showUV( ) method prints the U and V types. Notice that the type V is upper-bounded by Number. Thus, V must be Number or a subclass of it. Number is a superclass of all numeric classes, such as Integer, Float and Double. In the first call to showUV( ), the type of the first argument is String, which causes String to be substituted for U. The second argument is Integer, which makes Integer a substitute for V. In the second call, Integer and Double are used, and the types of U and V are replaced by Integer and Double.
Although type argument inference will be sufficient for most generic method calls, you can explicitly specify the type argument if needed. Notice the third call to showUV( ) in main( ). The type arguments are specified. Of course, in this case, there is nothing gained by specifying the type arguments. Furthermore, JDK 8 improved type inference as it relates to methods. As a result, there are fewer cases in which explicit type arguments are needed.
Program
class GenMethDemo {

    static <U, V extends Number> void showUV(U u, V v) {
        System.out.println("AnyType : " + u.getClass().getName());
        System.out.println("NumberType : " + v.getClass().getName());
    }
}

public class Javaapp {

    public static void main(String[] args) {
        GenMethDemo.showUV("JAVA", 11);
        GenMethDemo.showUV(50, 90.5);
        GenMethDemo.<Float, Double>showUV(50.5f, 7.5);
    }
}
Hi everyone,
I have a "This should be simple!" kind of problem. I need to do a little socket programming on a SunOS machine. I went back to an old school assignment I did years ago, cut-n-pasted that code, intending to basically cannibalize it for the program I need to write now. Trouble is, my compiler chokes on the socket-specific terms like getaddrinfo() or inet_ntop.
Here is the output when I try to compile:
Code:
bash-3.00$ g++ Main.cpp
Undefined first referenced
symbol in file
getaddrinfo /var/tmp//ccCpuGRv.o
freeaddrinfo /var/tmp//ccCpuGRv.o
inet_ntop /var/tmp//ccCpuGRv.o
gai_strerror /var/tmp//ccCpuGRv.o
ld: fatal: Symbol referencing errors. No output written to a.out
collect2: ld returned 1 exit status
bash-3.00$
I'll post the latest code I'm using, but I don't think my problem lies in the code. I've tried compiling my own code plus about a dozen examples I've found online. I *ALWAYS* get the same problem, this "undefined symbol" problem with some socket-specific terms. I've also tried alternating between g++ and gcc; both compilers essentially complain about the same "undefined symbol" problem.
(As a side note, I can copy the EXACT SAME CODE over to my school's Linux machine, and it compiles just fine there!)
So I'm thinking this is (A) a problem with the SunOS machine I'm working on, or (B) I am forgetting a library or something. I'm hoping it is not (A), as I'm not the system admin and don't have the power to install anything on this platform.
Has anyone seen something like this? I need to know what the problem is... and how do I get around it??? :(
Many thanks!
-Pete
Below is the code I am compiling in the above example. Full disclosure: This is an example I found online which I have been fiddling with. It compiles on the Linux machine.
Code:
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <iostream>
using namespace std;

int main(int argc, char **argv)
{
    // Usage of getaddrinfo()
    int status;
    struct addrinfo hints, *p;
    struct addrinfo *servinfo;          // will point to the result
    char ipstr[INET6_ADDRSTRLEN];

    memset(&hints, 0, sizeof hints);    // make sure the struct is empty
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;

    // Note: getaddrinfo() returns a non-zero error code on failure, not -1
    if ((status = getaddrinfo(NULL, "4321", &hints, &servinfo)) != 0)
    {
        fprintf(stderr, "getaddrinfo error: %s\n", gai_strerror(status));
        exit(1);
    }

    for (p = servinfo; p != NULL; p = p->ai_next)
    {
        struct in_addr *addr;
        if (p->ai_family == AF_INET)
        {
            struct sockaddr_in *ipv = (struct sockaddr_in *)p->ai_addr;
            addr = &(ipv->sin_addr);
        }
        else
        {
            struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)p->ai_addr;
            addr = (struct in_addr *) &(ipv6->sin6_addr);
        }
        inet_ntop(p->ai_family, addr, ipstr, sizeof ipstr);
    }
    cout << "Address:" << ipstr << endl;

    freeaddrinfo(servinfo);
    return 0;
}
Hi,
I am trying to build the Sync Framework Toolkit solution in Visual Studio 2012, but it does not build.
I do get errors in the WP7Client project - The type or namespace name 'DataAnnotations' does not exist in the namespace 'System.ComponentModel' (are you missing an assembly reference?) c:\D Drive\Win8\Moderen Apps\Microsoft Sync Framework Toolkit\C#\ClientCommon\IsolatedStorage\SyncErrorInfo.cs 17 29 WP7Client
Prashant Nagori
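For reference, that particular error usually means the client project lacks an assembly reference to System.ComponentModel.DataAnnotations. The sketch below shows the kind of element that ends up in the project file once the reference is added; the exact form depends on the project type and framework version, so treat it as illustrative only:

```xml
<ItemGroup>
  <!-- Assumes the assembly is available for the target framework -->
  <Reference Include="System.ComponentModel.DataAnnotations" />
</ItemGroup>
```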
I corrected the assembly references and now it builds, but I am not able to call the login service.
There is no documentation about how to deploy and consume the login service.
Could you please tell how to set up the login service ?
Thanks
Prashant | https://social.microsoft.com/Forums/en-US/07ce0e92-321b-4931-925c-2e67b7142583/the-released-solution-does-not-build-in-visual-studio-2012?forum=synclab | CC-MAIN-2020-50 | refinedweb | 106 | 59.3 |
This article will describe how to change the screen mode for the GBA and how to draw images in the bitmap modes 3, 4, and 5.
Let's begin.
[size="5"]Getting the Compiler Working
I hope you downloaded ALL the files needed for your operating system from the DevKitAdv site, as they will all be indispensable (except for maybe the C++ additions, but I like C++ and will probably use it in this article). Unzip all the files to the root directory, and you should have a new directory called DevKitAdv. Congratulations, you just installed DevKitAdv with everything you need.
Now open your friendly Notepad and type in some code. Save this code as anything you want with a .c extension (I'll use test.c for this example).

int main()
{
    return 0;
}

Yes, I know it doesn't do anything. This is just an example to get the compiler working. Also, make sure you hit return after the final brace or else your compiler will whine at you (for some reason there has to be a new line at the end of every file).
Now open up your text editor again and type in the following:
path=c:\devkitadv\bin
gcc -o test.elf test.c
objcopy -O binary test.elf test.bin

Save this file as Make.bat. Note that some of the information in the file might have to change depending on the name of your file and what drive DevKitAdvance is installed on.
Now double click on Make.bat. Wait for the program to end. Congratulations, you just wrote your first program. What the make file does is call gcc to create test.elf from test.c, then it calls objcopy to create test.bin from test.elf. You might want to read that a few times if you didn't get it. However, regardless of if you understand that or not, those are the only three lines you'll need to put in a make file to get your program to compile (although those lines may have to vary depending on how many source files you use and the names of these files). For example, if you were using two source files named test.c and input.c, you'd simply change the second line to:
gcc -o test.elf test.c input.c

I hope you understand, because things only get more complicated from here. :-)
Now that we know how to compile a simple program, let's move on.
[size="5"]Using What You Create
When you compile your program, two files should be created - a .elf and a .bin. You want to use the .bin. Simply run this .bin using your respective emulator to view your creations.
If you have a linker and a cart, use the linker (instructions are included and more info can be found at) to write the .bin to the cart. Push the cart in your GBA, turn the GBA on, and viola! Your creations are running on hardware!
[size="5"]Screen Modes, Among Other Things
As I said earlier, the GBA has 6 different screen modes to choose between. From the moment I said that, I bet you were wondering, "How do I change between them?" The answer lies in the REG_DISPCNT that is defined in gba.h.
However, you may be wondering where this elusive "gba.h" is. Well, the answer is quite simple: you don't have it. It's rather long, and its contents are going to be divulged throughout the course of all these articles, so I'm going to do what I don't like doing and just throw the entire thing at you. Get it in the attached resource file.
This wonderful file has pointers to sprite data, input data, screen data, and just about every piece of data you're ever going to mess with. You'll need it in every GBA project you create.
REG_DISPCNT is a 16 bit register at memory address 0x4000000. Each bit within this register controls something of the GBA. Here's a description (it matches the defines we'll write in a moment):

Bits 0-2: the screen mode (0 through 5)
Bit 3: Game Boy Color mode (read only)
Bit 4: which page is displayed (the backbuffer select)
Bit 5: allow the OAM to be updated during an HBlank
Bit 6: object (sprite) mapping mode (0 = 2D, 1 = 1D)
Bit 7: force a blank screen
Bits 8-11: enable backgrounds 0 through 3
Bit 12: enable objects (sprites)
Bits 13-14: enable windows 1 and 2
Bit 15: enable the object window
Handy for one 16 bit number, 'eh? Personally I probably would have rather remembered a bunch of different variable names than one variable name and the specifics on every bit in it, but oh well. Now that we know what each bit stands for, we need a way to change the values. This can be done by bit masking (the | operator) a series of numbers into the register. But who wants to remember a bunch of numbers when we can create a header file and use variable names instead? Let's do that.
//screenmodes.h
#ifndef __SCREENMODES__
#define __SCREENMODES__
#define SCREENMODE0 0x0 //Enable screen mode 0
#define SCREENMODE1 0x1 //Enable screen mode 1
#define SCREENMODE2 0x2 //Enable screen mode 2
#define SCREENMODE3 0x3 //Enable screen mode 3
#define SCREENMODE4 0x4 //Enable screen mode 4
#define SCREENMODE5 0x5 //Enable screen mode 5
#define BACKBUFFER 0x10 //Determine backbuffer
#define HBLANKOAM 0x20 //Update OAM during HBlank?
#define OBJMAP2D 0x0 //2D object (sprite) mapping
#define OBJMAP1D 0x40 //1D object (sprite) mapping
#define FORCEBLANK 0x80 //Force a blank
#define BG0ENABLE 0x100 //Enable background 0
#define BG1ENABLE 0x200 //Enable background 1
#define BG2ENABLE 0x400 //Enable background 2
#define BG3ENABLE 0x800 //Enable background 3
#define OBJENABLE 0x1000 //Enable sprites
#define WIN1ENABLE 0x2000 //Enable window 1
#define WIN2ENABLE 0x4000 //Enable window 2
#define WINOBJENABLE 0x8000 //Enable object window
#define SetMode(mode) (REG_DISPCNT = mode)
#endif

There we have our header file. Now if we wanted to set the screen mode to screen mode 3 and support sprites, we'd simply include this file in our main source file and say:

SetMode( SCREENMODE3 | OBJENABLE );

Naturally, this header file is terribly important, and I recommend including it in all your projects.
[size="5"]Drawing to the Screen
Drawing to the screen is remarkably easy now that we have the screen mode set up. The video memory, located at 0x6000000, is simply a linear array of numbers indicating color. Thus, all we have to do to put a pixel on the screen is write to the correct offset off this area of memory. What's more, we already have a #define in our gba.h telling us where the video memory is located (called VideoBuffer). One minor thing you should take note of is that in the 16bit color modes, colors are stored in the blue, green, red color format, so you'll want a macro to take RGB values and convert them to the appropriate 16bit number. Furthermore, we only actually use 15 bits - 5 bits per color component. Also, when you set the screen mode to a pixel mode, Background 2 must be turned on because that's where all the pixels are drawn.
Here's an example:

#include "gba.h"
#include "screenmodes.h"

u16* theVideoBuffer = (u16*)VideoBuffer;

#define RGB(r,g,b) (r+(g<<5)+(b<<10)) //Macro to build a color from its parts

int main()
{
    SetMode( SCREENMODE3 | BG2ENABLE );                //Set screen mode
    int x = 10, y = 10;                                //Pixel location
    theVideoBuffer[ x + y * 240 ] = RGB( 31, 31, 31 ); //Plot our pixel
    return 0;
}

All this example does is plot a white pixel at screen position 10,10. Very easy.
Mode 5 is nearly identical. Simply replace the SCREENMODE3 with SCREENMODE5 in the SetMode macro, and instead of theVideoBuffer[ x + y * 240 ] say theVideoBuffer[ x + y * 160 ] because the screen is only 160 pixels in width. If you want to use the second buffer for Mode 5, read on, because I'll describe double buffering for Mode 4 in just a bit. Double buffering is identical in both screen modes - the method of drawing to the buffer is slightly changed, though.
Mode 4 is slightly more complex since it is only an 8bit color mode and it utilizes the backbuffer. Naturally, change the SCREENMODE3 to SCREENMODE4 in SetMode (this should be self-explanatory, and I'm not going to note the change anymore). However, now we need a few more things.
Before we do any drawing with Mode 4, we need a palette. The palette is stored at 0x5000000 and is simply 256 16bit numbers representing all the possible colors. The memory is pointed to in gba.h, and the pointer is called BGPaletteMem.
To put a color in the palette we simply say

u16* theScreenPalette = (u16*)BGPaletteMem;
theScreenPalette[ position ] = RGB( red, green, blue );

Now mind you, I don't recommend you hard code all the palette entries. There are programs on the internet to get the palette from files and put it in an easy to use format. Furthermore, the color at position 0 should always be black (0,0,0) - this will be your transparent color.
Now we need to point to the area of memory where the backbuffer is stored (0x600A000). Guess what - this pointer is already in our gba.h. Behold, BackBuffer!

u16* theBackBuffer = (u16*)BackBuffer;

Now always write to the backbuffer instead of the video buffer. Then you can call the Flip function to switch between the two. Basically, the Flip function changes which part of video memory is being viewed and swaps the two pointers. Thus, you will always be viewing the buffer in the front and always be drawing to the buffer in the back. Here's the function:

void Flip()
{
    u16* temp = theVideoBuffer;   //Swap the pointers so drawing always targets the back
    theVideoBuffer = theBackBuffer;
    theBackBuffer = temp;

    if (REG_DISPCNT & BACKBUFFER)
    {
        REG_DISPCNT &= ~BACKBUFFER;   //Display the first page
    }
    else
    {
        REG_DISPCNT |= BACKBUFFER;    //Display the second page
    }
}

However, you only want to call this function one time in your main loop. That time is during the vertical blank. The vertical blank is a brief period when the GBA hardware isn't drawing anything. If you draw any other time, there is a possibility that images will be choppy and distorted because you're writing data at the same time the drawing is taking place. This effect is known as "Shearing."
Luckily, the function to wait for the vertical blank is very simple. All it does is get a pointer to the area that stores the position of the vertical drawing. Once this counter reaches 160 (the y resolution, and thus the end of the screen), the vertical blank is occurring. Nothing should happen until that vertical blank happens, so we trap ourselves in a loop. Note that in Mode 5, the y resolution is only 128, so you'll want to change your wait accordingly.

void WaitForVblank()
{
    #define ScanlineCounter *(volatile u16*)0x4000006
    while(ScanlineCounter<160){}
}

Take that all in? Good, because there's another pitfall to Mode 4. Although it is an 8bit mode, the GBA hardware was set up so you can only write 16 bits at a time. This basically means that you either have to do a little extra processing to take into account 2 adjacent pixels (which is slow and unrecommended) or draw two pixels of the same color at a time (which cuts your resolution and is also unrecommended).
Here's a "short" example of drawing a pixel to help you take all this in, because it's a lot of information all at once.

#include "gba.h"
#include "screenmodes.h"

u16* theVideoBuffer = (u16*)VideoBuffer;
u16* theBackBuffer = (u16*)BackBuffer;
u16* theScreenPalette = (u16*)BGPaletteMem;

#define RGB(r,g,b) (r+(g<<5)+(b<<10)) //Macro to build a color from its parts

void WaitForVblank()
{
    #define ScanlineCounter *(volatile u16*)0x4000006
    while(ScanlineCounter<160){}
}

void Flip()
{
    u16* temp = theVideoBuffer;   //Swap the pointers so we always draw to the back
    theVideoBuffer = theBackBuffer;
    theBackBuffer = temp;

    if (REG_DISPCNT & BACKBUFFER)
    {
        REG_DISPCNT &= ~BACKBUFFER;   //Display the first page
    }
    else
    {
        REG_DISPCNT |= BACKBUFFER;    //Display the second page
    }
}

int main()
{
    SetMode( SCREENMODE4 | BG2ENABLE );       //Set screen mode
    int x = 0, y = 0;                         //Coordinate for our left pixel

    theScreenPalette[1] = RGB(31,31,31);      //Add white to the palette
    theScreenPalette[2] = RGB(31,0,0);        //Add red to the palette

    u16 twoColors = (( 1 << 8 ) + 2);         //Left pixel = color 2 (red), right pixel = color 1 (white)
    theBackBuffer[ x + y * 120 ] = twoColors; //Write the two colors (120 u16 entries per row in Mode 4)

    WaitForVblank();                          //Wait for a vertical blank
    Flip();                                   //Flip the buffers
    return 0;
}

There you have it. What does that all build up to?
I don't recommend you use Mode 4 unless you really have to or you plan everything out meticulously, but I thought you should have the information in case you needed it.
[size="5"]Drawing a Pre-Made Image
Drawing an image made in another program is remarkably easy - I've practically given you all the information already.
First, get an image converter/creation program. With an image converter, take the .bmp or .gif or .pcx or whatever (depending on what the program uses), and convert the image to a standard C .h file. If your image uses a palette, this .h file should have both palette information and image information stored in separate arrays.
Now, all drawing the image consists of is reading from the arrays. Read from the palette array (if there is any) and put the data into the palette memory. To draw the image, simply run a loop that goes through your array and puts the pixels defined in the array at the desired location.
Naturally, these steps will be slightly different depending on what program you're using. However, being the nice guy that I am, I'm going to provide you with a quick example.
The program: Gfx2Gba v1.03 by Darren (read the readme for instructions)
The image: A 240*160, 8 bit image named gbatest.bmp
The command line: gfx2gba gbatest.bmp gbatest.h -8 -w 240
The code:

#include "gba.h"
#include "screenmodes.h"
#include "gbatest.h"

u16* theVideoBuffer = (u16*)VideoBuffer;
u16* theScreenPalette = (u16*)BGPaletteMem;

#define RGB(r,g,b) (r+(g<<5)+(b<<10)) //Macro to build a color from its parts

int main()
{
    SetMode( SCREENMODE4 | BG2ENABLE );

    //Copy the palette
    u16 i;
    for ( i = 0; i < 256; i++ )
        theScreenPalette[ i ] = gbatestPalette[ i ];

    //Cast a 16 bit pointer to our data so we can read/write 16 bits at a time easily
    u16* tempData = (u16*)gbatest;

    //Write the data
    //Note we're using 120 instead of 240 because we're writing 16 bits
    //(2 colors) at a time.
    u16 x, y;
    for ( x = 0; x < 120; x++ )
        for ( y = 0; y < 160; y++ )
            theVideoBuffer[ y * 120 + x ] = tempData[ y * 120 + x ];

    return 0;
}

And that's all! I didn't use the backbuffer, because I was just trying to make a point. I used Mode 4 so that I could stress the fact that you MUST write 16 bits at a time. If you were using Mode 3/16 bit color, you wouldn't need the tempData pointer. You would also change the 120s back to 240s.
In fact, that's the end of this article. By now, you should have bitmapped modes mastered, or at least you should have a relatively firm grasp on them.
[size="5"]Next Article
In the next article, I'm going to give you information on a rather easy topic - input. If you're lucky, I might even make a game demo to show off.
[size="5"]Acknowledgements
I would like to thank dovoto, as his tutorials have been my biggest source of information on GBA development since I started - check out his site. I'd also like to thank the guys in #gamedev and #gbadev on EFNet in IRC for all their help, and all the help they'll be giving me as I write these articles. Furthermore, the GBA development community sites are a great resource, and they are definitely worth your time.
ALLIED FOOD PRODUCTS: Capital Budgeting and Cash Flow Estimation
After seeing Snapple's success with non-cola soft drinks and learning of Coke's and Pepsi's interest, Allied Food Products has decided to consider an expansion of its own in the fruit juice business. The product being considered is fresh lemon juice.
Assume that you were recently hired as%, 45%, 15%, and 7%..
Question 1:
Use the data to finish the partially completed table, attached. Also calculate the NPV, IRR, and payback.
Question 2:
Explain why this project should, or should not, be accepted.
Question 3:
Explain and illustrate with two examples: how one would apply scenario and sensitivity analyses to this decision.
Solution Preview
Allied Food Products Capital Budgeting
1. See the attachment for the completed table. All cells are formulas, so you can see how each amount was calculated.
2. This project should be rejected because its net present value is negative. The project earns a 9.28% annualized return (IRR), assuming the returns can be reinvested at that rate. This is not enough to exceed the firm's cost of capital of 10%. Since the capital costs 10%, projects that do not even pay for the capital, let alone throw off profits in excess of the cost of capital, should be rejected. It would be better to ...
Solution Summary
Your tutorial includes completing the model, showing two other models with variable changes to illustrate sensitivity analysis, and a 429-word discussion of this project.
Remote Communication Made Easy
This article was contributed by Ken Reed (see the demo project for contact details).
Environment: VC6, NT4, Windows 2000 Professional, XP Professional
Now that networked computers are so common, it is becoming an increasingly frequent programming task to get a program running on one PC to talk to one on another (as in multi-player games, for example). There are lots of ways to do it, but they all seem to require so much hard work. So, starting off with a blank sheet of paper, how would I like to send something to another program?
some_file << "Here is some value " << some_value << "\n";
Seems easy enough. Why can't you do exactly the same to send something to another program? Well, if you write a little bit of code you can, and you'll find that code in the demo project in the file socket.cpp.
Behind the scenes it's all done with sockets. However, I'm not going to teach you how to use sockets (there are some good books on that subject). The idea is for me to hide all the detail away so you don't have to know how sockets work and you can just get on and get programs to talk to each other easily. Hopefully, the Socket class provided will let you do just that.
To use the socket class, you will need to include socket.h in your code and incorporate socket.cpp as a module. Having done that, you're ready to go and you'll find examples in the demo project. There is a little more to do than just streaming the things we want to send; we also have to do the equivalent of opening a file (for example, saying which computer and which program we want to talk to). Looking at the example demo project, we have two programs. One is a "server" that sits waiting for some other program to come and talk to it. When one does, it just writes the information coming from the other program into a window. The other program is the "client" that connects to the server and just sends it the time every second.
Ignoring all the Windows "plumbing" code, the server does the following:
string text;
Socket socket;
socket.bind(3333);
socket.listen();
while (true)
{
    socket >> text;
    if (text == "exit")
    {
        socket << "ok\n";
        socket.close();
        PostMessage(main_window, WM_CLOSE, 0, 0);
        break;
    }
    SetWindowText(static_text, text.c_str());
}
The server first creates a socket. Then it tells it which port number to use (this is the bind call). A port number is needed so that more than one program on the same computer can all use sockets (as long as each program uses a different port number). The port number can be any number not in use on your computer (and should be greater than 1024). To find out which ports are in use, issue the following command from a DOS prompt:
netstat -a
The ports in use are the numbers after the colon in the local address column.
Next, the server calls the listen function. This basically waits until some other program wants to talk. When it does, the listen call returns and we can start reading data from the remote program. This is done just by using the >> operator. In this example, we've read in a string. If the string is "exit", hen we close the socket and post a message to shut down the server (we also send back an "ok" to show that the communication isn't just one-way). If it isn't an exit command, we display the text in a window (the time in our demo example).
On the client side, the code is just as simple:
server.connect("localhost", 3333);
while (! shutdown)
{
    GetTimeFormat(0, 0, 0, 0, buffer, sizeof(buffer));
    SetWindowText(static_text, buffer);
    text = buffer;
    server << text << "\n";
    Sleep(1000);
}

string response;
server << "exit\n";
server >> response;
server.close();
The client issues a connect call saying which computer and which port (or program) it wants to connect to. In this case, we have used localhost to specify the local machine but it could be the computer name of anything on the network.
Having done that, it gets the time and sends it to the server. When shutting down, it sends an exit command to the server, asking it to shut down too (and reads back the response from the server although the demo program does nothing with it).
Note the end-of-line character ('\n') when writing to the socket. This is important because the standard library likes streamed data to be surrounded by white space. Programs can appear to hang if you don't provide it.
Try it out (the demo project was created using Visual Studio Version 6). Compile the client and server programs in the demo project and run the server. It will just display the text "Waiting". Then, start the client. Both windows will then display the time, counting up in seconds. Now, close the client window. The server window will also close (because the client told it to shut down).
If all you want to do is get programs to talk to each other, you can stop reading now. However, if you want to "get technical," read on.
Now, what are all those other things in the socket include file?
// This file needs -*- c++ -*- mode
I use emacs. If you do, too, you know what this means. If you don't use emacs, you don't care.
#include "exception.h"
C++ lets you handle errors with exceptions. Some people like them; some people don't. I do, so if I detect an error, I throw an exception (the exception class that I use is included in the demo project). I recommend you use exceptions. However, be aware that if an error occurs inside a streaming operation (<< or >>), the standard library will catch the thrown exception and quietly set the stream state to bad or fail (depending on the error). If you stream your data, you must use the standard library error test functions fail() and bad() and not rely on an exception being caught.
void bind (const int port); void close (); void connect (const char * const host, const int port); void listen ();
These have been covered in the example above; there's really not much more to say apart from the fact that if you give a port number of zero to the bind call, it will automatically allocate a free port number. If you want to find out what number has been allocated, use the following call:
int get_number ();
Of course, you still have to let the clients who want to talk to you know this number, but that is a little esoteric for this introductory article.
int bytes_read (bool reset_count = false);
int bytes_sent (bool reset_count = false);
You can read (and reset) the number of bytes sent and received over a socket. I used this when I wanted to display a progress bar while sending files over a socket. To move raw data in bulk, do a write_binary at the sender and a read_binary at the receiver. You won't get data transfer between PCs much faster than that.
void set_trace (const char * filename);
If things are not working as expected and you can't figure out what's going wrong, call set_trace() with a file name and the socket traffic will be logged to that file so you can inspect it.
If you need to use these functions, you are smart enough to look at the code to find out what they do. The intended user of the Socket class does not need to go down to this level.
private: Socket (const Socket & Socket); // No copying allowed
No copying of Sockets is allowed because I haven't written a copy constructor ... yet (and I'm not sure I want to). Why? The Socket class contains a buffer where it builds up a line of text to send. Taking a copy (passing the socket as a parameter to a function) and adding to the buffer (inside the function) and then reverting to the original buffer (returning from the function call) is just too error prone. I don't even want to think about it. Does this mean you can't pass a socket as a function parameter? No, it doesn't; just pass it by reference rather than by copy. For example:
void my_function(Socket & socket);
Other Odds and Ends
"What versions of Windows will it run under?" I've tried it on NT4, Windows 2000 Professional, and Windows XP Professional. In principle, it should work on anything from Windows 95 onwards, but I haven't been able to try it out on anything other than those three versions mentioned.
"What's the deal on licencing?" The socket and exception class are free software. You can use, modify, and redistribute the code as described by the GNU Public Licence (version 2). A copy of the licence is included in the demo project.
"What's in the demo project?" Ignoring the Visual Studio-generated files, you will find the following files in Socket_demo.zip:
\Contact.txt                   How to contact me
\Socket.html                   This article in CodeGuru HTML format
common\exception.h             Exception include file
common\exception.cpp           Exception implementation
common\socket.h                Socket include file
common\socket.cpp              Socket implementation
common\GNU_Public_Licence.txt  Software licence for the above two classes
client\client.cpp              Demonstration client program
server\server.cpp              Demonstration server program
Posted by johnhuke on 06/09/2011:
thank you!
Posted by Venelin on 11/27/2003:
Client generates an unhandled exception if started first.
Posted by dzhao on 11/11/2003:
Is your Socket object thread safe? Could your Socket object be shared safely by several threads?
Posted by Patrick on 11/07/2003: Please help - it won't compile for me!
Sorry, but it won't compile for me. I get an error saying that StdAfx.h is missing from the 'Common' folder.
Please help.
Thanks,
Patrick
Jan 07, 2008 12:42 AM | Vladan Strigo
I was thinking of sending Phil an email with this suggestion.. but as it hasn't been looked over from all angles (didn't use it in real code... just thinking out loud because of some testability issues I am having) didn't do it and had an idea to post it here to see what others think.
Ok... first to say that currently it's written as inherited from the default Controller, but my suggestion would actually be that this IS how the default controller looks (or better said, take a look at the bits and imagine that the rest of the Controller code is there :) and that this is not an inherited implementation of Controller, but rather the Controller itself)
A trully generic controller:
public class TestableController<TViewData, TTempData> : Controller
{
    private string _viewName;
    public string ViewName
    {
        get { return _viewName; }
    }

    private string _masterName;
    public string MasterName
    {
        get { return _masterName; }
    }

    private TViewData _viewData;
    public TViewData ViewData
    {
        get { return _viewData; }
    }

    private TTempData _tempData;
    public TTempData TempData
    {
        get { return _tempData; }
    }

    public virtual void RenderView(string viewName)
    {
        _viewName = viewName;
        base.RenderView(viewName);
    }

    public virtual void RenderView(string viewName, TViewData viewData)
    {
        _viewName = viewName;
        _viewData = viewData;
        base.RenderView(viewName, viewData);
    }

    public virtual void RenderView(string viewName, string masterName, TViewData viewData)
    {
        _viewName = viewName;
        _masterName = masterName;
        _viewData = viewData;
        base.RenderView(viewName, masterName, viewData);
    }
}
And the default generics-less implementation (to support the dictionary scenario which you can use now which is ok):
public class TestableController
    : TestableController<Dictionary<string, object>, Dictionary<string, object>>
{
}
What do you think - is this worthwhile suggesting, or not?
Jan 07, 2008 05:41 AM | CVertex
I like the TTempData type parameter idea, but I think the view should be specifying the view data type, not the controller.
It would be nice to have a generic RenderView though.
Something like RenderView<TViewData>(string viewName, TViewData data);
Jan 07, 2008 06:08 AM | ChadThiele
What problem would the generic renderview be addressing? You already have the capability to send typed data to the view. I'm just curious, perhaps there's something I'm overlooking.
Thanks!
Jan 07, 2008 08:16 AM | Vladan Strigo
Actually it's a both ways thing... if you want to be type-safe it needs to be specified on both view and controller.
Jan 07, 2008 08:31 AM | Vladan Strigo
Several things:
- The current controller is not type-safe - although your view declares <SomeType> for ViewData, in your controller you can basically *stuff* anything into the ViewData, as it accepts an object - it suffers from all the problems we had before generics (and it's not clear to developers what they should send down the pipe to the view)
- When I was trying to test the controller and the data sent down the pipe (ViewData) I had to both check if it had the typed data or if it had that data in the dictionary - this is not intention revealing, this is not clear, this is not good, it has too much noise... not good... You should be able to say:
List<Order> orders = controller.ViewData as List<Order>;
And that's it... this should be unambiguous - especially now that we have the means (= Generics)
- Why like this... because currently to get the Rendered View, ViewData and Master in a test you need to subclass that controller... yuck... with this approach you could save them upon RenderView and extract them through the properties (which don't exist for ViewName and MasterName) and again... get the type-safe ViewData which was sent down the pipe
Jan 07, 2008 08:38 AM | Vladan Strigo
That's why there is a non generic version of the Controller - which inherits the generic one, only implements it with the dictionaries - that is the correct way to handle this.
But then, as with the typed way, the intent is clear.
Jan 07, 2008 09:03 AM | Vladan Strigo
Ok... I've found why this is so (actually not sure how I didn't figure this out before). Each action can send its own type of ViewData; one action can use a specific ViewData type while another doesn't need one at all. That's why it was made both ways.
But still... this suffers from all the problems above... wonder what the MS guys think? Any plans for the next CTP which will address these concerns a bit (I already know that RenderView will be made public for testing purposes)?
Postfix operators are unary operators that work on a single variable; they can be used to increment or decrement a value by 1 (unless overloaded). There are 2 postfix operators in C++: ++ and --.
In the postfix notation (i.e., i++), the value of i is incremented, but the value of the expression is the original value of i. So basically the expression first yields the original value, and then the variable is incremented. For example,
#include<iostream>
using namespace std;

int main() {
   int j = 0, i = 10;

   // If we assign j to be i++, j will take i's current
   // value and i's value will be incremented by 1.
   j = i++;

   cout << j << ", " << i << "\n";
   return 0;
}
This will give the output −
10, 11 | https://www.tutorialspoint.com/What-are-postfix-operators-in-Cplusplus | CC-MAIN-2021-25 | refinedweb | 124 | 62.68 |
How MetaMask’s Latest Security Tool Can Protect Developers From Theft
Introduction
On Saturday February 20th 2021, as many as 50 smart contract developers let hackers into their computers. These were sophisticated computer users who were using their skills to build secure smart contracts for others. These weren't the first victims of this type of attack. By becoming more informed, and with a new tool from MetaMask called @lavamoat/allow-scripts, this attack may soon be the last of its kind.
This attack was possible because NomicLabs' HardHat, a library used for Ethereum smart contract development, was hit with a targeted phishing attack. The attack was a type of phishing known as 'typo squatting', which relies on users mis-typing or being redirected to a namespace that looks very similar to the original intended name. The most common example of this appears with domains, where phishers purchase a lookalike domain to a genuine, usually trusted website. Often, the webpage will look and feel legitimate, but act with malicious intent. Here at MetaMask, we're constantly at war with fake websites trying to impersonate us and siphon user credentials. It's a well known problem; however, this particular incident with HardHat caught our attention.
What Happened
The attack didn’t occur with a lookalike domain. Instead, the attacker registered a name on NPM, the primary trusted resource for open source javascript libraries. The name of the genuine package in question was
@nomiclabs/hardhat-waffle. The attacker registered the simpler
hardhat-waffle. This means the exploit relied upon users mistakenly typing
hardhat-waffle instead of
@nomiclabs/hardhat-waffle. Upon installation, the package would run a
postinstall script that uploaded the contents of
package.json,
/etc/hosts,
/etc/passwd and Kubernetes credential files (
~/.kube/config) to a remote server.
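The mechanism being abused here is ordinary npm behavior: any package may declare lifecycle scripts in its package.json, and the package manager runs them automatically at install time. A hypothetical manifest (not the actual attacker's; the collect.js name is invented) illustrating the shape:

```json
{
  "name": "some-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node ./collect.js"
  }
}
```

Nothing stops such a script from reading files on the developer's machine and posting them to a remote server, which is exactly what happened in this incident.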
This type of attack isn't new. In 2018, a Bitcoin wallet known as Copay was the victim of malicious code in a 3rd party package that stole users' Bitcoin & Ethereum keys. The HardHat situation differs in that the malicious code lived in a completely separate package, whereas the Copay incident occurred through the widely-used event-stream package. In both cases, the malicious actors targeted the projects' dependency chains. These cases directly exemplify the double-edged nature of open source distributed software.
How This Could’ve Been Avoided
From design to engineering and beyond, security is the core of everything we do at MetaMask. After all, MetaMask is a tool directly involved with people's money. For a couple of years now, we've been working on a tool called LavaMoat. LavaMoat is a set of tools that protects projects from malicious code in the software supply chain. For the purpose of this write-up, our primary focus will be on a recent tool we've created under LavaMoat called @lavamoat/allow-scripts.
@lavamoat/allow-scripts is a lightweight and simple tool that enables developers to explicitly allow the execution of npm lifecycle scripts such as preinstall and postinstall for a trusted package as needed. The philosophy is that 3rd party software doesn't automatically get special permissions to run in an unsafe environment like the command line; those permissions must be explicitly granted. This tool has the potential to drastically mitigate attacks like the HardHat incident. All it takes is a simple install and quick configuration.
If the affected developers who installed hardhat-waffle had first configured @lavamoat/allow-scripts on their projects, they would have been immune to its install script attack.
Usage
Increase the security of your project in 3 steps:
- Create a .yarnrc or .npmrc with the entry ignore-scripts true. This will prevent new modules you add from running arbitrary scripts!
- In your project directory, run yarn add -D @lavamoat/allow-scripts. This will allow you to selectively allow any modules that absolutely require scripts to run as part of their setup.
- Automatically generate a configuration by running the command yarn allow-scripts auto. This will automatically generate configuration in your package.json like below. You can customize it, or leave it as it is.
{
"lavamoat": {
"allowScripts": {
"keccak": true,
"core-js": false
}
}
}
From now on, yarn or npm install runs with lifecycle scripts disabled by default, and only permits them according to this policy. Any scripts from newly installed packages won't execute. You may either manually whitelist the new package in package.json, or run yarn allow-scripts auto again. Running this command will not overwrite the config; it will only add to it.
Conclusion
We’re working to maintain top notch security standards at MetaMask, which in turn benefits the entire open source javascript ecosystem. By using
@lavamoat/allow-scripts to your project, you can make yourself a little safer today. Let us know how it works, we’re eager to make it the best it can be. | https://medium.com/metamask/how-metamasks-latest-security-tool-could-protect-smart-contract-developers-from-theft-e12da346aa53?source=collection_home---4------4----------------------- | CC-MAIN-2021-31 | refinedweb | 795 | 55.95 |
What is the ternary operator (? X : Y) in C++?
The conditional operator (? :) is a ternary operator (it takes three operands). The conditional operator works as follows −
- The first operand is evaluated, and all of its side effects are completed before continuing. The first operand must be of integral or pointer type.
- If the first operand evaluates to true, the second operand is evaluated and its value becomes the result of the expression.
- Otherwise, the third operand is evaluated and its value becomes the result of the expression.
The full evaluation rules of the conditional operator are more complex than this; the steps above are just a quick intro. Conditional expressions have right-to-left associativity.
The following rules apply to the second and third operands −
- If both operands are of the same type, the result is of that type.
- If both operands are of type void, the common type is type void.
- If both operands are of the same user-defined type, the common type is that type.
- If the operands have different types and at least one of the operands has a user-defined type, then the language rules are used to determine the common type.
Example
#include <iostream>
using namespace std;

int main() {
   int i = 1, j = 2;
   cout << ( i > j ? i : j ) << " is greater." << endl;
}
Output
This will give the output −
2 is greater.
Hi,
Just need some assistance.
When an object is present on a webpage but not visible (as in you need to scroll down to see it), does katalon fail in this regard by default?
For some reason i thought katalon worked with the DOM???
If there's an element on a page but maybe not in view, just doing the click action should scroll for you. What issues are you running into? Share any error logs etc.
Hi @hpulsford,
I asked the question just as a general question in terms of how objects are interacted with.
If you look at the below screen shots, I am trying to get the text value from a label. The label is present but not visible as you need to scroll down to see the label.
Scenario 1:
Now, when I execute the getText method and the object is visible after executing the scroll, I get a value back (see the screenshot and the debug viewer value for the footerText variable).
When I execute the getText method and the object is not visible and I do not execute the scroll-to method, I get a blank value.
Scenario 2:
When a checkbox is present and not visible, I get an exception that the object is not interactable.
When the checkbox is present and visible, all seems well.
The same thing happens with other controls as well.
Create a custom keyword/package that does something like:
package com.my_company

public class utils {
   /**
    * @param selector (String) A DOM/CSS selector (not an XPath)
    */
   static void scrollIntoView(String selector) {
      String js = "document.querySelector('" + selector + "').scrollIntoView(true);"
      WebUI.executeJavaScript(js, null);
   }
}
In your test case, call it like this:
import static com.my_company.utils.*
...
scrollIntoView("#my_checkbox_id")
def static void scrollIntoView(TestObject to) {
   String path = to.getSelectorCollection().get(SelectorMethod.CSS)
   println path
   String js = "document.querySelector(" + path + ").scrollIntoView(true);"
   println js
   WebUI.executeJavaScript(js, null);
}
I use a test object.
Test object CSS: button.mainly.jq-order-approve
When I run it, it always says: Unable to execute JavaScript
javascript error: button is not defined
(Session info: chrome=76.0.3809.87)
Build info: version: ‘3.141.59’, revision: ‘e82be7d358’, time: ‘2018-11-14T08:25:53’
System info: host: ‘CNCDUW0218’, ip: ‘172.17.103.139’, os.name: ‘Windows 7’, os.arch: ‘amd64’, os.version: ‘6.1’, java.version: ‘1.8.0_181’
Driver info: com.kms.katalon.selenium.driver.CChromeDriver
Capabilities {acceptInsecureCerts: false, browserName: chrome, browserVersion: 76.0.3809.87, chrome: {chromedriverVersion: 76.0.3809.68 (420c9498db8ce…, userDataDir: C:\Users\cecichen\AppData\L…}, goog:chromeOptions: {debuggerAddress: localhost:57369}, javascriptEnabled: true, networkConnectionEnabled: false, pageLoadStrategy: normal, platform: XP, platformName: XP, proxy: Proxy(), setWindowRect: true, strictFileInteractability: false, timeouts: {implicit: 0, pageLoad: 300000, script: 30000}, unhandledPromptBehavior: dismiss and notify}
Session ID: f87077d75d9d18e625a8ef43e84f3961
I am not sure, but it looks like the CSS locator is wrong. Calling out to @Russ_Thomas
Yes, I prefer waiting for an element to be visible rather than waiting for an element to be present, as you said. And yes, you can also use the scrollToElement method before interacting with that element.
The CSS is fine if it carries the same classes as are present in the HTML document. A quick trip to the browser console to check it would prove that.
The error says the JS is broken - and it is - you (and I) missed the inner quotes:
String js = 'document.querySelector("' + path + '").scrollIntoView(true);'
Or, instead:
String path = '"' + to.getSelectorCollection().get(SelectorMethod.CSS) + '"'
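The difference is easiest to see in plain JavaScript string terms. Without the inner quotes, the selector is spliced in as bare code, so the browser tries to evaluate it as an expression; with them, it becomes a proper string argument. (The selector value is taken from the post above.)

```javascript
const path = "button.mainly.jq-order-approve";

// Broken: the selector is injected unquoted, so the browser tries to
// evaluate `button` as an identifier -> "button is not defined"
const broken = "document.querySelector(" + path + ").scrollIntoView(true);";

// Fixed: the selector ends up wrapped in quotes inside the generated snippet
const fixed = 'document.querySelector("' + path + '").scrollIntoView(true);';

console.log(broken);
console.log(fixed);
```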
i’s work now.
can you help this issue | https://forum.katalon.com/t/visible-vs-present-and-scrollto/33742 | CC-MAIN-2022-27 | refinedweb | 617 | 57.16 |
8 Replies
Oct 28, 2011 at 6:50 UTC
Dhcp or Static?
Oct 28, 2011 at 6:51 UTC
static ips for the clustered resources
Oct 28, 2011 at 8:28 UTC
When you use the IP address to connect, which address are you using? There are 3 addresses in play here: the 2 physical servers you have clustered, and a third 'virtual address' that the DNS servers use to identify the \\namespace\Share. That location is controlled by the cluster's load-balancing setting.
Oct 28, 2011 at 8:34 UTC
I'm using the 'virtual address', the one that is controlled by the cluster.
Oct 28, 2011 at 10:36 UTC
I just checked the Microsoft site for changes in how the cluster service works. It appears that the Server 2008 cluster service has changed its scope and only allows the \\clustername\share connection.
Edit: here is an explanation from a Microsoft engineer:
"Impact? Scripts or apps that use IP addresses as part of their UNC path to avoid, perhaps, the additional time it takes for DNS\WINS name resolution, will not work against W2K8 clusters. They will have to be changed to use the NN resource. And, OBTW, do not use CNAME records in DNS for a cluster NN, that will not work.
Thanks. Chuck Timon, Senior Escalation Engineer (SEE), Microsoft Corporation"
Oct 28, 2011 at 10:58 UTC
That's not a bug, that's a feature!
Oct 28, 2011 at 11:40 UTC
oh well, that makes it a lot harder!!! thanks for the help guys
Oct 29, 2011 at 3:56 UTC
Wow. That's dumb.
Dale, do you have a Technet link you can share with that? | https://community.spiceworks.com/topic/165699-clustered-file-shares | CC-MAIN-2016-44 | refinedweb | 295 | 77.87 |
Download HelloWorldApp.zip - 69.94 KB
Let's create a simple Hello World application for Windows Phone 7 and then dissect it.
First, go and grab the Windows Phone SDK 7.1 from here:- App Hub.
It is a free download and provides you all the tools that you need to
develop applications and games for Windows Phone 7 devices. After
installing it, open Visual Studio 2010 Express for Windows Phone.
From the Start Page, select New Project. This can also be done by
choosing New Project from the File menu or by using the shortcut key
combination: Ctrl+Shift+N.
A New Project dialog is brought up as shown below.
From the Installed Templates, choose Silverlight for Windows Phone and
then select the Windows Phone Application template. Type in the name of
the application: HelloWorldApp. Browse to the folder you want the
project to be stored in, and then click OK.
This displays a prompt as shown below. Since, we want to develop for the
latest version, i.e., Windows Phone 7.1 OS codenamed "Mango", so go
ahead and click OK.
Your screen should look like the one below. It consists of the design view and the code view of your application.
Now, bring up the Toolbox. To do this, select Other Windows from the
View menu and then select Toolbox. This contains a list of the controls
that can be used in your application.
Click and drag the Button control into the design area.
You can drag the button to any location in the design view. Position it in the center of the screen.
Next, let's change the text of the button. Select the Content attribute
from the Properties Window and change it to "Click Me!"(without the
double quotes). Then hit Enter and you will see the change reflected on
the button.
Similarly, add a Textblock control just below the button, and then
delete its Text content from the Properties Window. The Textblock now
turns into an empty rectangle.
Now, double click on the Button control in the design pane. This will open up another file called MainPage.xaml.cs
which will contain the main logic behind your application. Note that, a
button click event handler has been automatically created for you.
You just need to add a line of code in the method, so that it looks like this:
private void button1_Click(object sender, RoutedEventArgs e)
{
textBlock1.Text = "Hello World!!!"; //line to be added
}
Also, don't forget to change the application name, by selecting the
Title and then changing its Text property to HELLO WORLD APP just as you
had done for the Textblock.
Voila! Your app is ready. Click on the green arrow as shown above or Hit F5 to start your application.
This builds your application, creating a xap file. If you had done
everything as instructed above, then you would not see any error. The
Windows Phone emulator will start up automatically and then the package
will be deployed to it. Your application should start up automatically.
Click on the "Click Me!" button and you will see the message as shown
below.
How easy was it, huh???
Now, let's go behind the scenes to see what exactly happened when you created your first project.
After loading the project in Visual Studio, examine the Solution
Explorer window. You will find many files that the IDE has created for
you. We will examine them one by one.
The App.xaml and MainPage.xaml files are Extensible Application Markup Language (XAML) files whereas the App.xaml.cs and MainPage.xaml.cs
are C# code files. The two code files are actually "code-behind" files
associated with the two XAML files. They provide code in support of the
markup. We will examine these in detail a little later.
The other files are images that are used by the application. The
ApplicationIcon.png is the icon image that represents your application
in the phone's application list. The Background.png is the tile image of
your application. This appears when you pin the app to the Start
screen. The last image is SplashScreenImage.jpg which appears for a
brief moment when you start the application, during which it loads
content into memory.
The References section contains a list of libraries(assemblies) that
provide services and functionality that the application requires to
work. The Properties section contains three files: AppManifest.xml, AssemblyInfo.cs and WMAppManifest.xml.
Now, open the App.xaml.cs file. You will see a namespace definition that is the same as the project name and a class named App that derives from the Silverlight class Application. All Silverlight programs contain an App class that derives from Application. This is where operations like application-wide initialization, startup and shutdown are performed.
namespace HelloWorldApp
{
public partial class App : Application
{
public PhoneApplicationFrame RootFrame { get; private set; }
public App()
{
...
}
...
}
}
Next, look at App.xaml. You will recognize it as XML,
but more precisely it is a XAML file. You should use this file for
storing resources such as color schemes, gradient brushes and styles
that are used throughout the application. Notice that the root element
is Application, which is the same Silverlight class that we came across earlier. It contains four XML namespace declarations ('xmlns').
<Application
   x:Class="HelloWorldApp.App"
   xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
   xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
   xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
   xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone">
   ...
</Application>
The first XML namespace declaration is the standard namespace for
Silverlight which helps the compiler locate and identify Silverlight
classes such as Application itself. The second XML
namespace declaration is associated with XAML itself. This allows the
file to reference some elements and attributes that are a part of XAML
rather than specifically Silverlight. The prefix 'x' refers to XAML. The
last two are unique to the phone.
The App.xaml and App.xaml.cs files really define two halves of the same App class. During compilation, Visual Studio parses App.xaml and generates another code file App.g.cs. This generated file contains another partial definition of the App class. It contains a method named InitializeComponent that is called from the constructor in the App.xaml.cs file.
So, when the application is run, the App class creates an object of type PhoneApplicationFrame
and sets that object to its own RootVisual property. This frame is 480
pixels wide and 800 pixels tall and occupies the entire display surface
of the phone. The PhoneApplicationFrame object then navigates to an object called MainPage(kinda like a browser).
Now, examine MainPage.xaml.cs. It contains many using directives. The ones beginning with System.Windows are for the Silverlight classes. The Microsoft.Phone.Controls namespace contains extensions to Silverlight for the phone. The partial class named MainPage derives from the Silverlight class PhoneApplicationPage. This is the class that defines the visuals that you actually see on the screen when you run the application.
namespace HelloWorldApp
{
public partial class MainPage : PhoneApplicationPage
{
public MainPage()
{
InitializeComponent();
}
...
}
}
Open the MainPage.xaml file. The first four XML namespace declarations are the same as in App.xaml.
The 'd'(designer) and 'mc'(markup compatibility) namespace declarations
are for the benefit of XAML design programs. The compilation of the
program also generates another file named MainPage.g.cs that contains another partial class definition for MainPage with the InitializeComponent method called from the constructor in MainPage.xaml.cs.
<phone:PhoneApplicationPage
   x:Class="HelloWorldApp.MainPage"
   ...>
</phone:PhoneApplicationPage>
You will also see settings in MainPage.xaml for FontFamily, FontSize and Foreground that apply to the whole page. The body of the MainPage.xaml file contains several nested elements named Grid, StackPanel and TextBlock in a parent-child hierarchy.
Our simple application has only one page, called MainPage. This MainPage contains
a Grid, which contains a StackPanel(named 'TitlePanel') with a couple
of TextBlock elements, and another Grid(named 'ContentPanel'). The two
textblocks are for the application title and the page title.
<Grid x:Name="LayoutRoot" ...>
   <Grid.RowDefinitions>
      <RowDefinition Height="Auto"/>
      <RowDefinition Height="*"/>
   </Grid.RowDefinitions>
   <!-- TitlePanel contains the name of the application and page title -->
   <StackPanel x:Name="TitlePanel" ...>
      <TextBlock x:Name="ApplicationTitle" ... />
      <TextBlock x:Name="PageTitle" ... />
   </StackPanel>
   <!-- ContentPanel - place additional content here -->
   <Grid x:Name="ContentPanel" ...>
      <Button Content="Click Me!" Height="72" HorizontalAlignment="Left"
         Margin="146,143,0,0" Name="button1" VerticalAlignment="Top"
         Width="160" Click="button1_Click" />
      <TextBlock Height="59" HorizontalAlignment="Left" Margin="146,287,0,0"
         Name="textBlock1" Text="" VerticalAlignment="Top" Width="160" />
   </Grid>
</Grid>
When you dragged a Button control and a TextBlock control from the
Toolbox onto the design area, Visual Studio automatically added the
appropriate lines in this file. The properties of these controls can be
modified in the XAML itself. So, you could have set the Content property
of the Button to 'Click Me!' here, instead of going to the Properties
window.
Next, when you double-clicked on the button, you, or
rather, Visual Studio created a click event handler for the button and
redirected you to the C# code file, right into the method. There you
just wrote one line that modified the text property of the TextBlock.
On pressing F5, Visual Studio checked your project for errors, compiled it and then built the whole application into a xap file which is essentially a zip file containing all the assets that your program needs to run on a Windows Phone device. | http://www.codeproject.com/Articles/312963/Hello-World-Application-in-Windows-Phone-7?fid=1677932&df=90&mpp=10&sort=Position&spc=None&tid=4138528 | CC-MAIN-2014-42 | refinedweb | 1,546 | 59.19 |
Comparing Uglify and Closure in Babel/Rollup Javascript Build Environment
I’ve been experimenting with creating a build environment for a React project that uses Rollup and Babel. One of the choices you can make is how to minify the generated js. I compare using two methods of compacting: Uglify and Closure.
I have a project that uses React, which normally requires compilation with Babel (for JSX). I've been working on a build environment using an advanced JS packaging tool, Rollup. One of the nice things about Rollup is that once you have the environment set up, it's easy to add methods to optimize the generated JavaScript.
Two good methods are to run the code through either Uglify or Closure. Here is my rollup.config.js, with both enabled:
import babel from 'rollup-plugin-babel';
import commonjs from 'rollup-plugin-commonjs'
import nodeResolve from 'rollup-plugin-node-resolve'
import uglify from 'rollup-plugin-uglify'
import replace from 'rollup-plugin-replace'
import closure from 'rollup-plugin-closure-compiler-js';

export default {
   entry: 'status-react.jsx',
   dest: '../files/status-react.js',
   format: 'iife',
   plugins: [
      babel({ exclude: 'node_modules/**' }),
      closure(),
      nodeResolve({ jsnext: true }),
      commonjs({ include: 'node_modules/**' }),
      replace({ 'process.env.NODE_ENV': JSON.stringify('production') }),
      uglify({
         compress: { screw_ie8: true, warnings: false },
         output: { comments: false },
         sourceMap: false
      })
   ]
};
Uglify is a more standard minifier. It not only does standard minification, but some limited code rewriting like reducing the size of variable names.
Closure is a complete JS compiler. It does a complete analysis, removes dead code, and can do some pretty deep rewriting of your code.
Normally Closure would be much better at compressing JS because it can do much more dead code removal. Rollup does its own dead code removal already – so there is a lot less benefit to using Closure over Uglify in a rollup environment.
With Rollup it's pretty easy to try both, so I did a comparison using one of my projects.
In this table I compare the rollup compiled JS file, with no minification against using one or the other or both of the minification methods. I give the sizes for the file itself, and after gzip compression (since when transferred the file will likely be gzip compressed):
Essentially, using at least one minification method has big advantages, but the results are about the same whether you use one, the other, or both. There is a small advantage to using Closure, but it barely matters, and Closure adds a significant amount of compilation time.
I didn’t test performance of the generated code, so its possible that one of these methods has significantly better performance. At least for now, I’m going to be using just uglify, since it way simpler and faster to compile and generates code that is similar in size. | https://oroboro.com/comparing-uglify-closure-babelrollup-javascript-build-environment/ | CC-MAIN-2020-50 | refinedweb | 454 | 52.8 |
class Lab(Model):
responsible = ForeignKey(User)
This is a very simplified version of my Django model. Basically the problem is in the Django admin: when I want to edit or add a new Lab object, the drop-down list containing the User objects only displays the User.username value, which in my case is only numbers.
I want the drop-down list to display the User.last_name value instead.
How can I do this?
You have to define your own user choice field for lab admin.
from django import forms
from django.contrib import admin

class UserChoiceField(forms.ModelChoiceField):
    def label_from_instance(self, obj):
        return obj.last_name

# Now you have to hook this field up to lab admin.
class LabAdmin(admin.ModelAdmin):
    def formfield_for_foreignkey(self, db_field, request=None, **kwargs):
        if db_field.name == 'responsible':
            kwargs['form_class'] = UserChoiceField
        return super(LabAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs)
Something like that. I haven't tested it, so there may be some typos. Hope this helps!
The default User model in Django returns username as the object representation. If you want to change this, you have to override the __unicode__ method.
You can write a proxy model to extend the User model and then override the __unicode__ method to return the last name.
Something like this:
class MyUser(User):
    class Meta:
        proxy = True

    def __unicode__(self):
        return self.last_name

class Lab(Model):
    responsible = ForeignKey(MyUser)
Summary (1:10) with Jay McGavren
You've just built a Sinatra app completely from scratch. Let's review all the concepts we had to learn to get here.
Learning More
Sinatra has lots more functionality than we can cover in this course. You can learn more on the official Sinatra site.
Security
Because we include the contents of a text file into the show.erb page verbatim, malicious users could embed HTML code and even JavaScript into the page, which will be run when other users view the page. For example, try entering the following as a page's content:
<script>alert('boo');</script>
When another user views that page, a JavaScript alert dialog will appear. And if a malicious user can do that, they can do other nasty things as well.
We can prevent this by escaping any HTML code that appears in a page's content - replacing characters that would normally be treated as HTML markup with entities that are shown in the browser instead. For example, the above malicious code would look like this when it's escaped:
&lt;script&gt;alert('boo');&lt;/script&gt;
But it would look exactly like the original code when viewed in a browser. (It just wouldn't be executed or treated as markup.)
To escape any HTML that might appear in a string, we can call the Rack::Utils.escape_html method on that string. We can add a method at the top of the wiki.rb file that does this. There's a method in Rails named h that does this same thing, so we'll name this method h as well:
def h(string)
  Rack::Utils.escape_html(string)
end
The Rack::Utils library gets loaded when Sinatra does, so we don't need to require it or anything.
Now that our new h method is defined within wiki.rb, we can call it within the show.erb template. We can replace this line:
<p><%= @content %></p>
...with this:
<p><%= h @content %></p>
Restart the app, and try reloading the page that you embedded JavaScript code within. You won't get a dialog message anymore. Instead, the code will be visible exactly as it was entered in the page edit form.
It's generally a good idea to assume that users may enter malicious data into any form you provide to them. Escaping HTML is just one of many techniques developers use to limit the harm that can be done.
Project Ideas
Looking to practice what you've learned? Here are some project ideas.
- In the wiki app, add a list of all the available wiki pages. The Dir class from Ruby core has an each method that will let you get a list of all the files in the pages/ subdirectory; you can use those to build clickable links.
- See if you can replicate the guestbook app from this course's code challenges. Add a feature to view a list of all the signatures, then give users the ability to create, update, or delete signatures.
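A starting point for the first idea, listing page names with Dir, might look like this (the method name and the .txt extension are my assumptions about the course's file layout):

```ruby
# Sketch: collect wiki page names from text files in the pages/ subdirectory.
def page_names(dir = "pages")
  names = []
  Dir.new(dir).each do |entry|
    next if entry.start_with?(".")    # skip ".", "..", and hidden files
    names << File.basename(entry, ".txt")
  end
  names.sort
end
```

Each returned name could then be rendered in a template as a clickable link to that page's show route.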
K means vs K means++
In this article, we have investigated the distinction between K-means and K-means++ algorithms in detail. Both K-means and K-means++ are clustering methods which comes under unsupervised learning. The main difference between the two algorithms lies in:
- the selection of the centroids around which the clustering takes place
- K-means++ removes the main drawback of K-means, which is its dependence on the initialization of the centroids
centroids: A centroid is a point which we assume to be the center of the cluster.
cluster: A cluster is defined as a group or collection of data instances categorized together because of some similarities in their properties.
We will go through the K-means algorithm first, then explore its disadvantage and see how K-means++ resolves it.
K-means Algorithm
K-means is one of the most straightforward algorithm which is used to solve unsupervised clustering problems.
In these clustering problems we are given a dataset of instances and the dataset is defined with the help of some attributes. Each instance in the dataset has some relevant values corresponding to those attributes. Our goal is to categorize those instances into different clusters with the help of k-mean algorithm.
Algorithm:
- The first step involves the random initialization of k data points which are called means.
- In this step we cluster each data point to it's nearest mean and after that we update the mean of the current clusters. mean: is the average of a group of values.
- This cycle continues for a given number of repetitions and after that we have our final clusters.
K-means Algorithm Example:
In this example we will be taking a dataset having two attributes A and B and 10 instances.
A Values: 13 , 14 , 2 , 5 , 23 , 27 , 66 , 1 , 69 , 62
B Values: 9 , 29 , 15 , 24 , 70 , 71 , 45 , 42 , 22 , 10
Note: In K-means we always have to predefine the number of clusters we want. In this example we will be taking k=3.
from pandas import DataFrame import matplotlib.pyplot as plt from sklearn.cluster import KMeans points = {'A': [13,14,2,5,23,27,66,1,69,62], 'B': [9,29,15,24,70,71,45,42,22,10] } data = DataFrame(points,columns=['A','B']) kpoints = KMeans(n_clusters=3, init='random').fit(data) center = kpoints.cluster_centers_ print(center) plt.scatter(data['A'], data['B'], c= kpoints.labels_.astype(float), s=50, alpha=0.5) plt.scatter(center[:, 0], center[:, 1], c='black', s=50) plt.show()
The Centroids are as follows:
Scatter Plot Representation:
Drawback of K-means Algorithm
The main drawback of k-means algorithm is that it is very much dependent on the initialization of the centroids or the mean points.
In this way, if a centroid is introduced to be a "far away" point, it may very well wind up without any data point related with it and simultaneously more than one cluster may wind up connected with a solo centroid. Likewise, more than one centroids may be introduced into a similar group bringing about poor clustering.
Example of poor clustering:
How K-means++ Algorithm overcomes the drawback of k-mean Algorithm ?
K-means++ is the algorithm which is used to overcome the drawback posed by the k-means algorithm.
This algorithm guarantees a more intelligent introduction of the centroids and improves the nature of the clustering. Leaving the initialization of the mean points the k-means++ algorithm is more or less the same as the conventional k-means algorithm.
Algorithm:
- In the starting we have to select a random first centroid point from the given dataset.
- Now for every instance say 'i' in the dataset calculate the distance say 'x' from 'i' to the closest, previously chosen centroid.
- Select the following centroid from the dataset with the end goal that the likelihood of picking a point as centroid is corresponding to the distance from the closest, recently picked centroid.
- Last 2 steps are repeated until you get k mean points.
K-means++ Algorithm Example:
from pandas import DataFrame import matplotlib.pyplot as plt from sklearn.cluster import KMeans Data = {'x': [25,34,22,27,33,33,31,22,35,34,67,54,57,43,50,57,59,52,65,47,49,48,35,33,44,45,38,43,51,46,33,67,98,11,34,21], 'y': [79,51,53,78,59,74,73,57,69,75,51,32,40,47,53,36,35,58,59,50,25,20,14,12,20,5,29,27,8,7,45,21,67,44,33,45] } df = DataFrame(Data,columns=['x','y']) kmeans = KMeans(n_clusters=4, init='k-means++').fit(df) centroids = kmeans.cluster_centers_ print(centroids) plt.scatter(df['x'], df['y'], c= kmeans.labels_.astype(float), s=50, alpha=0.5) plt.scatter(centroids[:, 0], centroids[:, 1], c='red', s=50) plt.show()
Contrast between K-means and K-means++:
K-means++
K-means
By following the above system for introduction, we get centroids which are far away from each other. This expands the odds of at first getting centroids that lie in various clusters. | https://iq.opengenus.org/k-means-vs-k-means-p/ | CC-MAIN-2021-17 | refinedweb | 893 | 64.3 |
getsubopt()
Parse suboptions from a string
Synopsis:
#include <stdlib.h> int getsubopt( char** optionp, char* const* tokens, char** valuep );
Arguments:
- optionp
- The address of a pointer to the string of options that you want to parse. The function updates this pointer as it parses the options; see below.
- tokens
- A vector of possible tokens.
- valuep
- The address of a pointer that the function updates to point to the first character of a value that's associated with an option; see below.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:, at optionp contains only one suboption,,.
Returns:.
Examples:
The following code fragment shows how to process options to the mount(1M) command */ } . . . }
Classification:
Caveats:
During parsing, commas in the option input string are changed to null characters. White space in tokens or token-value pairs must be protected from the shell by quotes. | http://developer.blackberry.com/native/reference/bb10/com.qnx.doc.neutrino.lib_ref/topic/g/getsubopt.html | CC-MAIN-2013-20 | refinedweb | 155 | 57.57 |
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Microcontroller Programming » Random Number Generator
Hi at all,
i have studied several threads in this and other forums about this topic, but I was not able to figure it out.
All I want is a generator which gives me 4 different number between 1 and 4 everytime the loop runs.
How did I get that work?
I have tried it like that, but I am only getting the same numbers again.
// for NerdKits with ATmega168
#define F_CPU 14745600
#include <avr/io.h>
#include <inttypes.h>
#include <util/delay.h>
#include <stdio.h>
#include <stdlib.h>
#include "../libnerdkits/lcd.h"
int main(void) {
char test[4];
while(1) {
int i;
srand(0);
for(i = 0; i < 4; i++){
test[i]=rand()%4+1;
}
}
return 0;
}
Solving this in Visual Studio is easy, because I can use srand(time(0)) to generate nearly random numbers. But on the MCU it's much more difficult.
Maybe someone can help me to solve this.
Often srand is seeded using a timer. Sometimes a timer and a button press as in a menu might say press a button to continue and when the button is pressed, the current value of the timer is read and the random seed generated from that. That would eliminate the repetition you get from using the same seed number each time. Another method is to read a value from one of the ADC's onboard. This can be done with nothing attached to the ADC and the "noise" on the ADC channel can produce a good random seed.
Hope I gave you some ideas to think about.
Rick
Hi Rick,
i have read about the both possibilities you meantioned.
But I wasn't able to Figure Out how to use them.
Have you used One of this types in your Projects.
Maybe you have an example Code.
Thanks for helping.
An easy one would be using the "Noise" in the ADC (Analog to Digital Converter). You wouldn't need to use rand or srand at all.
Take the temperature sensor program. Set up a variable for your random number.
uint_t rnum=0;
Then, since you want a number between 1 and 4, this is only two bits. If you take two samples from the ADC, and take the least significant bit from each, shift one of them and put them into your number you'll have a pretty random number between zero and 3 add 1 and it's between 1 and 4. Now keep in mind, random means you may get the same number twice in a row.
To do this you could do something like:
rnum = 1 & adc_read();
rnum |= (1 & adc_read()<<1);
rnum ++;
Now this is untested code, and my mind may be a bit rusty, but I think this would make the least significant bit of rnum 1 if the LSB of the first ADC read was 1, and the 2nd bit of rnum a 1 if the LSB of the second ADC read was 1. Thus creating a psudo random number between 0 and 3 increment and now it's between 1 and 4.
I don't guarantee the code works as written, but it should point you in a direction.
Thanks for your Reply.
I am sure your mind is Not rusty.
Thanks for giving an example with Great Explanation.
I will try it when i am back home.
greetings Nino
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/2408/ | CC-MAIN-2020-05 | refinedweb | 589 | 82.65 |
An interview with Kent Tegels
by Douglas Reilly
Kent Tegels is the man behind the Enjoy Every Sandwich blog on the SqlJunkies.com web site. He is also an instructor for DevelopMentor, teaching courses on .NET and SQL Server, with emphasis on SQL Server 2005. He is one of the co-authors of Beginning ASP.NET Databases Using VB.NET, in which he uses his background as an instructor to provide explanations for programming data-driven web sites.
The following questions were answered by Kent via email.
Doug:One of the biggest issues among SQL Server developers is whether or not to use stored procedures. Where do you come down on this issue?
Kent:Absolutes are hard to come by in this business, so what works well in a given scenario varies. Conceptually I’m pro-stored procedures for the same reasons I’m pro-modular programming: Compiled code typically outperforms interpreted; there’s inherent value to layering security; and there is the general goodness of abstraction and encapsulation.
There are times when you need to get down to the bare metal and write the best performing code. Calling parameterized procedures means comparative overhead. Human cycles are more expensive and scarce than machine cycles. Spending hundreds or thousands of dollars to write "better" code that saves tens or hundreds of dollars in application utility seems economically insane.
Doug:Do you think the addition of LINQ (Language Integrated Query) will change the way developers create database-oriented applications?
Kent:Yes and no. One of the more pragmatic bosses I’ve had was fond of pointing out that things are "the same but different." As far as applications are instances of design patterns, LINQ doesn’t change much. But at the code level, you could write really different code to do the same work. Whereas today we have to work with classes that are first-order mappings of database concepts, LINQ gives us an implicit way of doing the same thing.
In ADO.NET, for example, we’re presented with the results of work as result sets mapped to datatables. DLINQ doesn’t do that. It gives us results as an enumerable sequence of typed objects. This second-order mapping is where LINQ’s real power lies. It enables programmers to leverage a small amount of knowledge and skill to a much greater datascape. Once one learns the LINQ pattern, implementing DLINQ and XLINQ can be understood and applied.
Doug:Why is VB.NET your favored .NET language?
Kent:Actually, the first .NET book I worked on was Data-Centric .NET Programming with C# for the original Wrox press, so my roots are in C#. What I like most about .NET is that it really means that language can be a lifestyle choice. While my first programming languages started with the letter A, it was C that had the greatest influence on me.
I wrote a lot of C/C++ for a number of years. Then I spent time with Java and Perl. Same but different again. Even with Classic VB, my code was more in the K&R style. So when I’m writing formal code for the Microsoft platform, I prefer C#. But when I’m tinkering or learning, I like VB.NET, mostly because I’m a sloppy typist with little patience. VB.NET enables those behaviors nicely <grin>.
Doug:What do you think of the interest in bringing back Microsoft support for VB6?
Kent:I suppose the politically correct answer is, I feel your pain, change is hard, blahibity, blahibity, blah. Straight up, though, I’d rather remove my own spleen with a hand grenade than go back. Get over it already, OK?
Doug:Why do you blog, and do you think blogging has changed the face of software development?
Kent:Another boss I had now signs his email with a quote from Aaron Copland: "…the composer who is frightened of losing his artistic integrity through contact with a mass audience is no longer aware of the meaning of the word art." It is no different for developers. The public performance of our art is our programs, but the back channel of critics and growth is the community.
Blogging is a powerful channel that has a deep influence on the art. But it is not the only one. There are also user groups, mailing lists, news groups, forums, instant messaging, code camps, instructor lead training, and so on. So you can’t trivialize it as "surface effect." All channels have an influence at many levels.
A more recent mentor of mine nailed it when he said there are two types of bloggers: reflectors and generators. Reflectors refer to the works of others and maybe add some value by commenting on the referred-to posts. Reflectors influence by amplifying the work of generators, who publish their works and ideas as exemplars for others. Nobody who blogs is exclusively one or the other.
I blog to generate new discussions that are propagated through the reflections of my readers. I still read more than 1,000 blogs a day and frequently point out things that should be reflected to the community. That’s not why I started, though. My original reason for blogging was to practice writing when I wasn’t writing books or articles. It takes daily practice to grow as a writer, just like you have to write code every day to grow as a developer.
My former boss, Ted Kooser, and my mentor, Dan Sullivan, have proven themselves right. I hope to achieve a similar degree of success by implementing the same patterns.
Doug:How did you start working on databases? What was your first database-related job?
Kent:They were one and the same. I was working my way through college at an insurance wholesaler. To achieve a greater degree of scale, the company needed systems for tracking recruiting, generating material shipment orders, and building market intelligence. At the time, our tool of choice was a program called Q&A. Considering how primitive the technology was, it was an amazing work: powerful query, excellent UX and agile reporting. And it didn’t crash every day.
Doug:Where in the development world do you think XML best fits? What have you used it for, in addition to web services?
Kent:I had to laugh when I first heard Don "DonXML" Demsak tell the story of how his nickname evolved, as I’m pretty sure folks were calling me "KentXML" behind my back for the same reason. XML is like a nuclear force in that it binds data particles together. I use it for almost everything in some way or another.
Databases have the same behavior. Same but different again. The big difference in XML is that it has lower infrastructure requirements in exchange for less scale under load. Databases are the exact opposite of that. The real touchstone is how much more you need to do and how quickly, and where you can do it. I think both are useful things with which developers should equip themselves.
If the question is where does XML belong in the database, that’s harder. The same principal applies: The more performance you need at greater scale, the better it is to normalize the XML into relational tables. The less of that you need to do, the more you should leave data sources such as XML alone and use as is in an XML data type instance. That’s unless you wind up doing nothing with the process, in which case it is best to store it as a compressed BLOB or even a network-addressable file reference.
Doug:Have you read any good database-related or general software development books lately?
Kent:I’ve read a couple of amazing books recently. The first is Roger Wolter’s Rational Guide to SQL Server 2005 Service Broker from Rational Press. When I met Roger, he earned my respect by being calm about, yet deeply experienced, in the topic. When I thought about all of the other responsibilities he had in guiding several teams for SQL Server 2005, and at the same time he was working on that book, respect quickly turned to awe.
The other book is Donald Farm’s SQL Server 2005 Integration Services, also from Rational Press. Donald’s work is an exemplar of having a good story to tell and telling it well.
I am also finding Passin’s The Explorers’ Guide to the Semantic Web hard to put down.
Another book has nothing to do with software development per se, except that it is written by a developer. I’m talking about Jeff Hawkin’s On Intelligence. Somewhere down the road that book is going to influence the next generation’s "Gang of Four" the way Alexander’s book on building architecture did. If it hasn’t already, that is.
Doug:What do you think about using VB.NET or C# for stored procedures, user-defined functions and triggers? What guidance can you give people looking to leverage the ability to use procedural code in SQL Server 2005?
Kent:Nothing makes my blood boil more than that topic! Anybody who works with the technology understands how wrong the "marketing spin" is that says we can do that, or more egregiously, how wrong the predictions proved to be of the demise of T-SQL. You can’t begin to use SQLCLR objects unless you have T-SQL, and that’s a good thing. The value in SQLCLR is in extending T-SQL, not in data access.
The best use of SQLCLR is to create assemblies leveraged as user-defined functions (UDFs). Doing some types of complex, procedural calculations is an obvious fit. The neatest use, though, is the ability to easily leverage parts of the .NET framework base class library and user libraries as alternatives to extended stored procedures.
A couple of examples come to mind. I once needed to do schematic validation of stored XML over and above what could be accomplished using XML schema collections. Being able to call System.XML.Schema quickly and easily was a simple thing when using SQLCLR. I was able to "nail jelly to a tree" using the technology.
In another case, I wrote a C# implementation of Huffman’s adaptive compression algorithm when I needed to store long text and XML instances in a database, but the data was opaque in terms of query. I could have written that code in T-SQL, but it would have been a lot harder for me. SQLCLR enabled me to be appropriately lazy.
Triggers have similar value but are more rarely needed since a trigger can efficiently use a function instead. Why tie up logic into a specialized use like a trigger instead of making it available for general use as a function?
CLR stored procedures are also less useful than first thought for the same reason. From a performance point of view, it is probably better to write the logic required in such a way that it calls UDFs from T-SQL. The difference between a stored procedure and a function is that the procedure persistently stores a pre-built execution tree and enables you to perform operations (select, insert, delete, update) where functions aren’t allowed to change the database state.
Stored procedures are acceptable for a few cases in which you have procedural generation or modification of row sets, or when you need to leverage a .NET framework function that alters the database state. Calling web services would be an example of that. Otherwise T-SQL is the better choice most of the time.
User-defined types give me the most concern. They are best when used to extend the existing T-SQL type system, like when you need to work with data containing complex numbers. There’s no other way to create rich data types with their own methods for operations. I become concerned when people insist that they should use this feature to represent business objects. The Java and Oracle folks went down that path before with dismal results, so I’m in no hurry to repeat those "learning opportunities."
The most curious creatures in the SQLCLR menagerie are user-defined aggregators. They have obvious uses over the data types you define as UDTs. There are probably use cases for them over normal data types, as well, like taking a limited set of strings and concatenating them into a single, comma-delimited instance.
If there’s a critical piece of advice I’d offer about all of this, it’s to make sure you fully understand what you can do with T-SQL and how to do that before getting "stupid drunk" on SQLCLR Kool-Aid. There’s a lot you can learn from the work of folks like Ken Henderson, Joe Celko and Kevin Kline.
Doug:As a developer teaches and writes more, it’s difficult to keep a hand in development. What percentage of your time is spent developing code for clients and what percentage is teaching and writing?
Kent:The only real-world code I get to enjoy lately comes from helping people through mailing lists, newsgroups, conferences and talks. The month before last, I worked more than 80 hours a week for many weeks writing production code. I suppose it is a one-third to two-thirds split of real work to teaching and writing.
When I’m teaching and writing, I write code too. I write code every day, usually for two or three hours. Like any art, you have to practice to get better. It’s just that there are no clients to bitch out for lousy specs when you’re writing code for yourself <grin>.
Doug:Long after HTML was created by Tim Berners-Lee, I independently "discovered" markup languages. Have you ever "invented" anything, only to discover much better prior art?
Kent:I’m sure we all have. Remember I mentioned that I created a UDF that implemented Huffman adaptive compression? I presented that at a local conference in June. The talk was well received, but somebody came up and asked why I went to all of the effort of using that code rather than using System.IO.Compression. Of course, I hadn’t a clue that this namespace existed at the time. Ouch.
Doug:Can you think of a cool tip or trick, especially in SQL Server 2005, that many database developers do not know about?
Kent:A couple. First, fire up SQL Server 2005’s Books Online and have a look at forced parameterization. While it has a number of constraints about when it gets used and how, it can be a good way to simplify messy performance problems caused by unfixable code. But for exactly the same reasons we talked about in your first question, it could introduce performance issues that are buggers to track down.
Second, there are three distributed management views that developers should know about and use to tweak their server-side code. The first is sys.dm_exec_query_stats, which can quickly help you find queries that have the longest execution times. You can use the information provided by this query to examine the query plan by looking at sys.dm_exec_query_plan, and the T-SQL text of the query by looking at sys.dm_exec_sql_text. This could be useful for regression testing of code changes as well.
Doug:I have to ask, is the title of your Blog, Enjoy Every Sandwich, a reference to the Warren Zevon CD?
Kent:Not literally, but yes. I’ve been a fan of Zevon’s work for at least 25 years. The title of that CD is a reference to a quote Zevon made during his final appearance on the Late Show with David Letterman a few weeks before he passed way.
To me, it has many meanings. We should enjoy every sandwich. Eating should be enjoyed. In fact, any meal shared with friends and loved ones, even sandwiches, should be enjoyed. It is also a statement about being present in each moment and aware of the world around you. A very Zen thought.
Do you know someone who deserves to be a Database Geek of the Week? Or perhaps that someone is you? Send me an email at editor@simple-talk.com and include "Database Geek of the Week suggestion" in the subject line. | https://www.simple-talk.com/opinion/geek-of-the-week/database-geek-of-the-week-kent-tegels/ | CC-MAIN-2014-15 | refinedweb | 2,733 | 65.32 |
Install the Azure SDK for Go
Welcome to the Azure SDK for Go! The SDK allows you to manage and interact with Azure services from your Go applications.
Get the Azure SDK for Go
The Azure SDK for Go is compatible with Go versions 1.8 and higher. For environments using Azure Stack Profiles, Go version 1.9 is the minimum requirement. If you need to install Go, follow the Go installation instructions.
You can download the Azure SDK for Go and its dependencies via
go get.
go get -u -d github.com/Azure/azure-sdk-for-go/...
Warning
Make sure that you capitalize
Azure in the URL. Doing otherwise can cause case-related import problems
when working with the SDK. You also need to capitalize
Azure in your import statements.
Some Azure services have their own Go SDK and aren't included in the core Azure SDK for Go package. The following table lists the services with their own SDKs and their package names. These packages are all considered to be in preview.
Vendor the Azure SDK for Go
The Azure SDK for Go may be vendored through dep. For stability reasons, vendoring is recommended. To use
dep
in your own project, add
github.com/Azure/azure-sdk-for-go to a
[[constraint]] section of your
Gopkg.toml. For example, to vendor on version
14.0.0, add the following entry:
[[constraint]] name = "github.com/Azure/azure-sdk-for-go" version = "14.0.0"
Include the Azure SDK for Go in your project
To use Azure services from your Go code, import any services you interact with and the required
autorest modules.
You get a complete list of the available modules from GoDoc for
available services and
AutoRest packages. The most common packages you need from
go-autorest
are:
Go packages and Azure services are versioned independently. The service versions are part of the module import path, underneath
the
services module. The full path for the module is the name of the service, followed by
the version in
YYYY-MM-DD format, followed by the service name again. For example, to import the
2017-03-30 version of the Compute service:
import "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-03-30/compute"
It's recommended that you use the latest version of a service when starting development and keep it consistent. Service requirements may change between versions that could break your code, even if there are no Go SDK updates during that time.
If you need a collective snapshot of services, you can also select a single profile version. Right now, the only locked profile is version
2017-03-09, which may not have the latest features of services. Profiles are located under the
profiles module, with their version in the
YYYY-MM-DD format.
Services are grouped under their profile version. For example, to import the Azure Resources management module from the
2017-03-09 profile:
import "github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/resources/mgmt/resources"
Warning
There are also
preview and
latest profiles available. Using them is not recommended. These profiles are rolling versions and service behavior may change at any time.
Next steps
To begin using the Azure SDK for Go, try out a quickstart.
- Deploy a virtual machine from a template
- Transfer objects to Azure Blob Storage with the Azure Blob SDK for Go
- Connect to Azure Database for PostgreSQL
If you want to get started with other services in the Go SDK immediately, take a look at some of the available sample code. | https://docs.microsoft.com/pl-pl/azure/developer/go/azure-sdk-install | CC-MAIN-2020-45 | refinedweb | 598 | 66.44 |
redirected to standard output.
- poll_connection
Whether to have a control connection to the process. This is used to transmit messages from the subprocess to the main process.
-.
- package
Whether to keep the environment of
funcwhen passing it to the other package. Possible values are:
FALSE: reset the environment to
.GlobalEnv. This is the default.
TRUE: keep the environment as is.
pkg: set the environment to the
pkgpackage namespace.
- ...
Extra arguments are passed to
processx::run().
Details
The
r() function from before 2.0.0 is called
r_copycat() now.
Value
Value of the evaluated expression.
Error handling
callr handles errors properly. If the child process throws an
error, then
callr throws an error with the same error message
in the main process.
The
error expert argument may be used to specify a different
behavior on error. The following values are possible:
erroris the default behavior: throw an error in the main process, with a prefix and the same error message as in the subprocess.
stackalso throws an error in the main process, but the error is of a special kind, class
callr_error, and it contains both the original error object, and the call stack of the child, as written out by
utils::dump.frames(). This is now deprecated, because the error thrown for
"error"has the same information..
callr uses parent errors, to keep the stacks of the main process and the subprocess(es) in the same error object._copycat(),
r_vanilla()
Aliases
- r
- r_safe
Examples
# NOT RUN { # Workspace is empty r(function() ls()) # library path is the same by default r(function() .libPaths()) .libPaths() # } | https://www.rdocumentation.org/packages/callr/versions/3.5.1/topics/r | CC-MAIN-2021-04 | refinedweb | 263 | 67.04 |
UNIX Message Queues vs. Sockets
#1 - zen29sky, 03-19-2007
If I use sockets for IPC, I can easily distribute my applications. UNIX message queues, by contrast, are local to the machine.
As I understand it, message queues still incur system-call overhead, just like socket calls.
What advantage does a UNIX message queue provide over a TCP or UDP socket, and when should each be used?
#2 - Perderabo (Administrator Emeritus), 03-20-2007
There are two flavors of message queues... the old System V version (msgget(), msgsnd(), etc.) and the newer POSIX version (mq_send(), mq_receive(), etc.). The POSIX version is newer and more efficient.
Someone must be listening on a socket or you can't use it. A message queue stores the data until some process reads it. This could be minutes or hours later. Also, you could have several processes take turns reading a message queue... like the tellers at a bank taking the next customer from a common queue. If the queue gets too long, add another teller at the bank, or another instance of a server process on your system.
#3 - Naanu, 03-21-2007
zen,
I doubt you could use select() with message queues, while you can do that for sockets. If you have multiple sockets and need to do event-based handling depending on which socket you receive on and what type of message you get, sockets are the way to go. But note that UDP is effectively reliable when you are pushing packets internally within the same system.
1. UNIX for Advanced & Expert Users: Performance calculation for Message Queues
i have a program(C++ Code) that sends/receives information through queue's (Uses MQ) Is there any UNIX/LINUX tool that calculates the load and performance time for the same. If not how do i design the program that calculates the performance time. i know that time.h can be used but it gives...
(2 Replies; discussion started by vkca)

2. Shell Programming and Scripting: Cleaning Message Queues
i have an application installed on AIX 5.3 and i have made a script that shutdown a proccesses that exceeded 10000kb of memory usage but i have a problem with cleaning the message queues of these proccesses after shutting them down. Is there any way to clean the message queues for this particular...
(8 Replies; discussion started by Portabello)

3. Programming: Persisting message queues to disk
Hi, I have searched the forums and could not find a relavant thread discussing my use case, hence the new post. Basically am trying to pass on work to dummy worker instances from controller which will pass on work to workers (client) To make use of host capacity, am planning to serialize...
(2 Replies; discussion started by matrixmadhan)

4. UNIX for Dummies Questions & Answers: message queues
can any body provide a tutorial that explains the concept of message queues in UNIX in great detail
(1 Reply; discussion started by asalman.qazi)

5. UNIX for Advanced & Expert Users: message queues
#include <sys/ipc.h> #include <sys/msg.h> int main() { int qid; int t; struct msgbuf mesg; qid=msgget(IPC_PRIVATE,IPC_CREAT); mesg.mtype=1L; mesg.mtext=1; t=msgsnd(qid,&mesg,1,0); printf("%d",t); } the program prints -1 as the result of msgsnd ,which means that msgsnd doesn't...
(1 Reply; discussion started by tolkki)

6. Programming: shared memory and message queues
Hi, According to my understanding.. When message queues are used, when a process post a message in the queue and if another process reads it from the queue then the queue will be empty unlike shared memory where n number of processess can access the shared memory and still the contents remain...
(2 Replies; discussion started by rvan)

7. Linux: maximun number of message queues
how to check the maximun number of message queues in current linux enviornment? is there any command ?
(4 Replies; discussion started by princelinux)

8. Solaris: rogue message queues solaris 9
We have message queues created from our ERP system to our tax system via an application api written by the ERP software vendor. Occasionally when a user does not gracefully exit the ERP application, the message queue hangs. After a few months, this becomes a problem as the queues are all used...
(2 Replies; discussion started by MizzGail)

9. UNIX for Dummies Questions & Answers: message queues
let 3 processes a, b and c are sharing msgs using msg queues.process 'a' sending msg to 'c' and in turn 'c' send sthat msg to 'b'.if something happens to c how can 'a' and 'b' know that 'c' is not available??????
(2 Replies; discussion started by sukaam)

10. Programming: Message queues
Hi all, I've been trying for hours to figure out how to turn my 2-program (one to send and one to receive) "chat system" using message queues, into a single program where each concurrent component (entity) will both send and receive messages. PLEASE give me a hand with this, I'm starting to...
(9 Replies; discussion started by mgchato)
I have an array of 1000 random 3D points & I am interested in the closest 10 points to any given point. In essence the same as this post.
I checked the 2 solutions offered by J.F. Sebastian, namely a brute force approach & a KD Tree approach.
Although both give me the same indices for the closest points, they give different results for the distances
import numpy as np
from scipy.spatial import KDTree
a = 100 * np.random.rand(1000,3)
point = a[np.random.randint(0, 1000)] # point chosen at random (valid indices are 0..999)
# KD Tree
tree = KDTree(a, leafsize=a.shape[0]+1)
dist_kd, ndx_kd = tree.query([point], k=10)
# Brute force
distances = ((a-point)**2).sum(axis=1) # compute distances
ndx = distances.argsort() # indirect sort
ndx_brt = ndx[:10]
dist_brt = distances[ndx[:10]]
# Output
print('KD Tree:')
print(ndx_kd)
print(dist_kd)
print('Brute force:')
print(ndx_brt)
print(dist_brt)
KD Tree:
[[838 860 595 684 554 396 793 197 652 330]]
[[ 0. 3.00931208 8.30596471 9.47709122 10.98784209
11.39555636 11.89088764 12.01566931 12.551557 12.77700426]]
Brute force:
[838 860 595 684 554 396 793 197 652 330]
[ 0. 9.05595922 68.9890498 89.81525793 120.73267386
129.8587047 141.3932089 144.37630888 157.54158301 163.25183793]
The KDTree query returns Euclidean distances, i.e. the square root of the squared distances the brute-force code sorts on. Since the square root is monotonic, the orderings (and hence the indices) agree; only the reported distance values differ.
Basically KDTree uses:
sqrt(x^2 + y^2 + z^2)
and the brute-force code uses:
x^2 + y^2 + z^2
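To confirm this numerically (NumPy only, so the snippet stands alone without scipy; the small arrays here are made up for illustration):

```python
import numpy as np

# Three sample points and a query point at the origin.
a = np.array([[0.0, 0.0, 0.0],
              [3.0, 4.0, 0.0],
              [1.0, 1.0, 1.0]])
point = np.array([0.0, 0.0, 0.0])

dist_sq = ((a - point) ** 2).sum(axis=1)  # brute force: squared distances
dist = np.sqrt(dist_sq)                   # Euclidean, what KDTree.query reports

# Same values as np.linalg.norm, and the same ordering as the squared
# distances, since sqrt is monotonic.
assert np.allclose(dist, np.linalg.norm(a - point, axis=1))
assert (dist_sq.argsort() == dist.argsort()).all()
```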
#include <sys/ioctl.h>
#include <linux/fs.h>
If a filesystem supports files sharing physical storage between multiple files, this ioctl(2) operation can be used to make some of the data in the src_fd file appear in the dest_fd file by sharing the underlying storage if the file data is identical ("deduplication"). Both files must reside within the same filesystem. This reduces storage consumption by allowing the filesystem to store one shared copy of the data. If a file write should occur to a shared region, the filesystem must ensure that the changes remain private to the file being written. This behavior is commonly referred to as "copy on write".
This ioctl performs the "compare and share if identical" operation on up to src_length bytes from file descriptor src_fd at offset src_offset. This information is conveyed in a structure of the following form:
Deduplication is atomic with regards to concurrent writes, so no locks need to be taken to obtain a consistent deduplicated copy.
The fields reserved1 and reserved2 must be zero.
Destinations for the deduplication operation are conveyed in the array at the end of the structure. The number of destinations is given in dest_count, and the destination information is conveyed in the following form:
Each deduplication operation targets src_length bytes in file descriptor dest_fd at offset logical_offset (named dest_offset in current kernel headers). The field reserved must be zero.
Upon successful completion of this ioctl, the number of bytes successfully deduplicated is returned in bytes_deduped and a status code for the deduplication operation is returned in status. The status code is set to 0 for success, a negative error code in case of error, or FILE_DEDUPE_RANGE_DIFFERS if the data did not match.
Error codes can be one of, but are not limited to, the following:
EXDEV
dest_fd and src_fd are not on the same mounted filesystem.
EISDIR
One of the files is a directory and the filesystem does not support shared regions in directories.
EBADF
src_fd is not open for reading; dest_fd is not open for writing or is open for append-only writes; or the filesystem which src_fd resides on does not support deduplication.
EPERM
dest_fd is immutable.
ETXTBSY
One of the files is a swap file. Swap files cannot share storage.
EOPNOTSUPP
This can appear if the filesystem does not support deduplicating either file descriptor.
This ioctl operation first appeared in Linux 4.5. It was previously known as BTRFS_IOC_FILE_EXTENT_SAME and was private to Btrfs.
Because a copy-on-write operation requires the allocation of new storage, the fallocate(2) operation may unshare shared blocks to guarantee that subsequent writes will not fail because of lack of disk space.
Some filesystems may limit the amount of data that can be deduplicated in a single call. | http://manpages.courier-mta.org/htmlman2/ioctl_fideduperange.2.html | CC-MAIN-2017-17 | refinedweb | 448 | 53.71 |
I just checked, and found that RT 1.0.7 does not put an
In-Reply-To header in the messages it sends (auto-reply, reply).
I want that enough to write the (probably trivial) patch myself
(I’m always grumbling about luser MS mudware that doesn’t put
them in), but if somebody’s already written the patch, I’ll take
it
Is there a patch to set auto-acks for all incoming mail (such
as “You have added the following correspondence to incident
ticket xxx”)? Danger with mail loops makes this non-trivial, of
course.
#include <std_disclaim.h> Lorens Kockum | https://forum.bestpractical.com/t/patches-in-reply-to-mail-ack/5569 | CC-MAIN-2018-51 | refinedweb | 102 | 70.13 |
I added the jQuery file to my project and added a WebMethod that waits a random time and returns true/false randomly:
public partial class _Default : System.Web.UI.Page
{
    ...
    public class MethodReturnedValue
    {
        public int Time { get; set; }
        public bool Success { get; set; }
    }

    [WebMethod(true)]
    public static MethodReturnedValue SomeMethod()
    {
        // "random" is a static Random field, elided above.
        MethodReturnedValue retVal = new MethodReturnedValue();
        retVal.Time = random.Next(5000);
        Thread.Sleep(retVal.Time);
        retVal.Success = (random.Next() % 2 == 0);
        return retVal;
    }
}
Than, I added few html elements:
<body>
    <form id="form1" runat="server">
        <!-- EnablePageMethods must be true for PageMethods.* calls to work. -->
        <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true" />
        <div>
            <a id="execute_page_method" href="">Click!</a>
            <img id="ajax_loading_image" src="ajax-loader.gif" alt="Ajax Loader" />
            <label id="message">hello</label>
        </div>
    </form>
</body>
and style
<style type="text/css">
    a { float: left; width: 30px; }
    img { float: left; width: 30px; margin-left: 10px; }
    label { float: left; margin-left: 10px; }
</style>
I want to activate the web method when clicking the anchor, than display a ajax loading image which I downloaded from here, and when the call returns change the color of the body and display a message according to the values in the result. This is the javascript code:
<script src="jquery-1.2.3.js" type="text/javascript"></script>
<script type="text/javascript">
$(document).ready(function() {
    // Initialization
    $("#ajax_loading_image").hide();

    // mousemove event
    $().mousemove(function(e) {
        window.status = e.pageX + ', ' + e.pageY;
    });

    // hook up the click event
    $("#execute_page_method").click(function() {
        $("#message").text("I'm working...");
        $("#ajax_loading_image").show("fast");
        $("#execute_page_method").hide("fast");

        // Call some page method...
        PageMethods.SomeMethod(function(result, userContext, methodName) {
            $("#ajax_loading_image").hide("slow");
            $("#execute_page_method").show("slow");
            if (result.Success == true) {
                $("body").css("background", "Green");
            } else {
                $("body").css("background", "Red");
            }
            $("#message").text("This took me " + result.Time + " milliseconds... ");
        });
        return false;
    });
});
</script>
As you can see, I hook up the click event of execute_page_method element, call the page method and handle the callback with anonymous function.
I Like jQuery since it makes the code very readable, and although i used only the very basic of it, It has many great features that will save you lots of work…
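As a couple of commenters note, the call above goes through the ASP.NET AJAX PageMethods proxy rather than jQuery itself. A hedged sketch of doing it with jQuery directly follows; the URL "Default.aspx/SomeMethod" and the ".d" unwrapping assume ASP.NET 3.5's JSON conventions and are not from the original post.

```javascript
// Sketch: calling the page method straight from jQuery (browser only):
//
// $.ajax({
//   type: "POST",
//   url: "Default.aspx/SomeMethod",            // hypothetical page + method
//   data: "{}",                                // empty JSON body
//   contentType: "application/json; charset=utf-8",
//   dataType: "text",
//   success: function (text) {
//     var result = unwrap(text);               // see below
//     alert(result.Success + " in " + result.Time + " ms");
//   }
// });

// ASP.NET 3.5 wraps JSON responses in a "d" property (a change from 2.0,
// which may explain version-specific differences commenters ran into).
// This helper returns the payload either way.
function unwrap(responseText) {
  var parsed = JSON.parse(responseText);
  return parsed && typeof parsed === "object" && "d" in parsed
    ? parsed.d
    : parsed;
}
```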
Kevin Deenanauth Said on Apr 30, 2008 :
Great work!
superjason Said on Apr 30, 2008 :
I don’t get it. What does jQuery have to do with calling the server method?
It looks like the line calling the server method is standard JavaScript, and you just did some fancy effects with jQuery.
Joel Said on Apr 30, 2008 :
The title is a little misleading, you’re not actually calling the web method with jQuery. In fact you’re doing everything with jQuery EXCEPT call the web method - still a nice example for people new to jQuery.
Shahar A Said on Apr 30, 2008 :
The call is in JavaScript, the purpose was to show a begginer example on how to hook the event and handle the result with the help of jQuery.
Darrell Said on May 1, 2008 :
I like jQuery but also like the asp.net ajax framework. Personally I would use one or the other on a website not both.
Daniel Said on May 2, 2008 :
Good presentation for those new to jQuery / ASP.NET AJAX (service proxy calls)! You might also like Rick Strahl’s recent posts on jQuery with WCF. Glad the post led you to something new.
vvvlad Said on Jul 28, 2008 :
Thanks for a great article!
One question…
How do you pass a parameter to the webmethod?
Amit Said on Jul 28, 2008 :
@ vvvlad
Like what?
Most parameters you will need are available to you at the code behind.
vvvlad Said on Jul 28, 2008 :
Hi,
I have already found a solution for my question.
You call your webmethod without any parameters from client side.
If you would define something like this:
[WebMethod(true)]
public static MethodReturnedValue SomeMethod(string someParam)
then you would call it this way:
PageMethods.SomeMethod(param, callback_success, callback_timeout, callback_error);
Kel Said on Nov 10, 2008 :
Are u using .NET 2.0 or 3.5? I’m using 2.0 and i get an error using your exact code.
MethodReturnedValue.Time.get’ must declare a body because it is not marked abstract or extern
Amit Said on Nov 11, 2008 :
@ Kerl
We are using ASP.NET 3.5 with Visual Studio 2008
What are your settings?
Kel Said on Nov 11, 2008 :
@Amit
ASP.NET 2.0 with VS ‘05. I’ve noticed alot of tuturials on how to do this but a majority of the sites don’t specify what version of the .NET framework they are using. It seems like something changed from the 2.0 to 3.5 frameworks in regards to the [WebMethods].
Jabber Said on Dec 18, 2008 :
I found a similar method for ext js at if anyone is interested. | http://www.dev102.com/2008/04/30/call-aspnet-webmethod-from-jquery/ | crawl-002 | refinedweb | 796 | 67.45 |
Post Syndicated from Ashcon Partovi
Today, we’re excited to announce Workers KV is entering general availability and is ready for production use!
What is Workers KV?
Workers KV is a highly distributed, eventually consistent, key-value store that spans Cloudflare’s global edge. It allows you to store billions of key-value pairs and read them with ultra-low latency anywhere in the world. Now you can build entire applications with the performance of a CDN static cache.
Why did we build it?
Workers is a platform that lets you run JavaScript on Cloudflare’s global edge of 175+ data centers. With only a few lines of code, you can route HTTP requests, modify responses, or even create new responses without an origin server.
// A Worker that handles a single redirect,
// such a humble beginning...
addEventListener("fetch", event => {
  event.respondWith(handleOneRedirect(event.request))
})

async function handleOneRedirect(request) {
  let url = new URL(request.url)
  let device = request.headers.get("CF-Device-Type")
  // If the device is mobile, add a prefix to the hostname.
  // (eg. example.com becomes mobile.example.com)
  if (device === "mobile") {
    url.hostname = "mobile." + url.hostname
    return Response.redirect(url, 302)
  }
  // Otherwise, send request to the original hostname.
  return await fetch(request)
}
Customers quickly came to us with use cases that required a way to store persistent data. Following our example above, it’s easy to handle a single redirect, but what if you want to handle billions of them? You would have to hard-code them into your Workers script, fit it all in under 1 MB, and re-deploy it every time you wanted to make a change — yikes! That’s why we built Workers KV.
// A Worker that can handle billions of redirects,
// now that's more like it!
addEventListener("fetch", event => {
  event.respondWith(handleBillionsOfRedirects(event.request))
})

async function handleBillionsOfRedirects(request) {
  let prefix = "/redirect"
  let url = new URL(request.url)
  // Check if the URL is a special redirect.
  // (eg. example.com/redirect/<random-hash>)
  if (url.pathname.startsWith(prefix)) {
    // REDIRECTS is a custom variable that you define,
    // it binds to a Workers KV "namespace." (aka. a storage bucket)
    let redirect = await REDIRECTS.get(url.pathname.replace(prefix, ""))
    if (redirect) {
      url.pathname = redirect
      return Response.redirect(url, 302)
    }
  }
  // Otherwise, send request to the original path.
  return await fetch(request)
}
With only a few changes from our previous example, we scaled from one redirect to billions. That's just a taste of what you can build with Workers KV.
How does it work?
Distributed data stores are often modeled using the CAP Theorem, which states that distributed systems can only pick between 2 out of the 3 following guarantees:
- Consistency – is my data the same everywhere?
- Availability – is my data accessible all the time?
- Partition tolerance – is my data stored in multiple locations?
Workers KV chooses availability and partition tolerance, which makes it eventually consistent. This also means that if a client writes to a key and that same client reads that same key, the values may be inconsistent for a short amount of time.
To help visualize this scenario, here’s a real-life example amongst three friends:
- Suppose Matthew, Michelle, and Lee are planning their weekly lunch.
- Matthew decides they’re going out for sushi.
- Matthew tells Michelle their sushi plans, Michelle agrees.
- Lee, not knowing the plans, tells Michelle they’re actually having pizza.
An hour later, Michelle and Lee are waiting at the pizza parlor while Matthew is sitting alone at the sushi restaurant — what went wrong? We can chalk this up to eventual consistency, because after waiting for a few minutes, Matthew looks at his updated calendar and eventually finds the new truth, they’re going out for pizza instead.
While it may take minutes in real-life, Workers KV is much faster. It can achieve global consistency in less than 60 seconds. Additionally, when a Worker writes to a key, then immediately reads that same key, it can expect the values to be consistent if both operations came from the same location.
When should I use it?
Now that you understand the benefits and tradeoffs of using eventual consistency, how do you determine if it’s the right storage solution for your application? Simply put, if you want global availability with ultra-fast reads, Workers KV is right for you.
However, if your application is frequently writing to the same key, there is an additional consideration. We call it “the Matthew question”: Are you okay with the Matthews of the world occasionally going to the wrong restaurant?
You can imagine use cases (like our redirect Worker example) where this doesn’t make any material difference. But if you decide to keep track of a user’s bank account balance, you would not want the possibility of two balances existing at once, since they could purchase something with money they’ve already spent.
What can I build with it?
A few examples of applications that have been built with KV are highlighted in our previous blog post. We also have some more in-depth code walkthroughs, including a recently published blog post on how to build an online To-do list with Workers KV.
What’s new since beta?
By far, our most common request was to make it easier to write data to Workers KV. That’s why we’re releasing three new ways to make that experience even better:
1. Bulk Writes
If you want to import your existing data into Workers KV, you don’t want to go through the hassle of sending an HTTP request for every key-value pair. That’s why we added a bulk endpoint to the Cloudflare API. Now you can upload up to 10,000 pairs (up to 100 MB of data) in a single PUT request.
curl " \
$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/bulk" \
  -X PUT \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  -H "X-Auth-Email: $CLOUDFLARE_AUTH_EMAIL" \
  -d '[
    {"key": "built_by", "value": "kyle, alex, charlie, andrew, and brett"},
    {"key": "reviewed_by", "value": "joaquin"},
    {"key": "approved_by", "value": "steve"}
  ]'
Let’s walk through an example use case: you want to off-load your website translation to Workers. Since you’re reading translation keys frequently and only occasionally updating them, this application works well with the eventual consistency model of Workers KV.
In this example, we hook into Crowdin, a popular platform to manage translation data. This Worker responds to a /translate endpoint, downloads all your translation keys, and bulk writes them to Workers KV so you can read it later on our edge:
addEventListener("fetch", event => {
  if (new URL(event.request.url).pathname === "/translate") {
    event.respondWith(uploadTranslations())
  }
})

async function uploadTranslations() {
  // Ask crowdin for all of our translations.
  var response = await fetch(
    "" +
    "/:ci_project_id/download/all.zip?key=:ci_secret_key")
  // If crowdin is responding, parse the response into
  // a single json with all of our translations.
  if (response.ok) {
    var translations = await zipToJson(response)
    return await bulkWrite(translations)
  }
  // Return the errored response from crowdin.
  return response
}

async function bulkWrite(keyValuePairs) {
  return fetch(
    "" +
    "/:cf_account_id/storage/kv/namespaces/:cf_namespace_id/bulk",
    {
      method: "PUT",
      headers: {
        "Content-Type": "application/json",
        "X-Auth-Key": ":cf_auth_key",
        "X-Auth-Email": ":cf_email"
      },
      body: JSON.stringify(keyValuePairs)
    }
  )
}

async function zipToJson(response) {
  // ... omitted for brevity ...
  // (eg.)
  return [
    {key: "hello.EN", value: "Hello World"},
    {key: "hello.ES", value: "Hola Mundo"}
  ]
}
Now, when you want to translate a page, all you have to do is read from Workers KV:
async function translate(keys, lang) {
  // You bind your translations namespace to the TRANSLATIONS variable.
  return Promise.all(keys.map(key => TRANSLATIONS.get(key + "." + lang)))
}
2. Expiring Keys
By default, key-value pairs stored in Workers KV last forever. However, sometimes you want your data to auto-delete after a certain amount of time. That's why we're introducing the expiration and expirationTtl options for write operations.
// Key expires 60 seconds from now.
NAMESPACE.put("myKey", "myValue", {expirationTtl: 60})

// Key expires at the given UNIX timestamp (seconds since the epoch).
NAMESPACE.put("myKey", "myValue", {expiration: 1247788800})
# You can also set keys to expire from the Cloudflare API.
curl " \
$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/ \
values/$KEY?expiration_ttl=$EXPIRATION_IN_SECONDS" \
  -X PUT \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  -H "X-Auth-Email: $CLOUDFLARE_AUTH_EMAIL" \
  -d "$VALUE"
Let’s say you want to block users that have been flagged as inappropriate from your website, but only for a week. With an expiring key, you can set the expire time and not have to worry about deleting it later.
In this example, we assume users and IP addresses are one and the same. If your application has authentication, you could use access tokens as the key identifier.
addEventListener("fetch", event => {
  var url = new URL(event.request.url)
  // An internal API that blocks a new user IP.
  // (eg. example.com/block/1.2.3.4)
  if (url.pathname.startsWith("/block")) {
    var ip = url.pathname.split("/").pop()
    event.respondWith(blockIp(ip))
  } else {
    // Other requests check if the IP is blocked.
    event.respondWith(handleRequest(event.request))
  }
})

async function blockIp(ip) {
  // Values are allowed to be empty in KV,
  // we don't need to store any extra information anyway.
  await BLOCKED.put(ip, "", {expirationTtl: 60*60*24*7})
  return new Response("ok")
}

async function handleRequest(request) {
  var ip = request.headers.get("CF-Connecting-IP")
  if (ip) {
    // get() returns null for a missing key; the stored value here is "",
    // which is falsy, so compare against null explicitly.
    var blocked = await BLOCKED.get(ip)
    // If we detect an IP and it's blocked, respond with a 403 error.
    if (blocked !== null) {
      return new Response("You are blocked!", {status: 403})
    }
  }
  // Otherwise, passthrough the original request.
  return fetch(request)
}
3. Larger Values
We’ve increased our size limit on values from
64 kB to
2 MB. This is quite useful if you need to store buffer-based or file data in Workers KV.
Consider this scenario: you want to let your users upload their favorite GIF to their profile without having to store these GIFs as binaries in your database or managing another cloud storage bucket.
Workers KV is a great fit for this use case! You can create a Workers KV namespace for your users’ GIFs that is fast and reliable wherever your customers are located.
In this example, users upload a link to their favorite GIF, then a Worker downloads it and stores it to Workers KV.
addEventListener("fetch", event => {
  var url = new URL(event.request.url)
  var arg = url.pathname.split("/").pop()
  // User sends a URI encoded link to the GIF they wish to upload.
  // (eg. example.com/api/upload_gif/<encoded-uri>)
  if (url.pathname.startsWith("/api/upload_gif")) {
    event.respondWith(uploadGif(arg))
  // Profile contains link to view the GIF.
  // (eg. example.com/api/view_gif/<username>)
  } else if (url.pathname.startsWith("/api/view_gif")) {
    event.respondWith(getGif(arg))
  }
})

async function uploadGif(url) {
  // Fetch the GIF from the Internet.
  var gif = await fetch(decodeURIComponent(url))
  var buffer = await gif.arrayBuffer()
  // Upload the GIF as a buffer to Workers KV.
  // ("user" is assumed to come from your authentication layer, not shown.)
  await GIFS.put(user.name, buffer)
  return gif
}

async function getGif(username) {
  var gif = await GIFS.get(username, "arrayBuffer")
  // If the user has set one, respond with the GIF.
  if (gif) {
    return new Response(gif, {headers: {"Content-Type": "image/gif"}})
  } else {
    return new Response("User has no GIF!", {status: 404})
  }
}
Lastly, we want to thank all of our beta customers. It was your valuable feedback that led us to develop these changes to Workers KV. Make sure to stay in touch with us, we’re always looking ahead for what’s next and we love hearing from you!
We’re also ready to announce our GA pricing. If you’re one of our Enterprise customers, your pricing obviously remains unchanged.
- $0.50 / GB of data stored, 1 GB included
- $0.50 / million reads, 10 million included
- $5 / million write, list, and delete operations, 1 million included
During the beta period, we learned customers don’t want to just read values at our edge, they want to write values from our edge too. Since there is high demand for these edge operations, which are more costly, we have started charging non-read operations per month.
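A rough way to read the price list above (this sketch assumes the included allowances are simply deducted before the per-unit rates apply; check the billing documentation for the authoritative calculation):

```javascript
// Estimate a monthly Workers KV bill from the GA prices:
//   $0.50 / GB stored        (1 GB included)
//   $0.50 / million reads    (10 million included)
//   $5    / million writes, lists, and deletes (1 million included)
function estimateMonthlyCost(gbStored, reads, writes) {
  const over = (used, included) => Math.max(0, used - included);
  return (
    over(gbStored, 1) * 0.5 +          // storage overage
    (over(reads, 10e6) / 1e6) * 0.5 +  // read overage
    (over(writes, 1e6) / 1e6) * 5      // write/list/delete overage
  );
}

// e.g. 3 GB stored, 20M reads, 2M writes:
// 2 * $0.50 + 10 * $0.50 + 1 * $5 = $11
```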
Limits
As mentioned earlier, we increased our value size limit from 64 kB to 2 MB. We've also removed our cap on the number of keys per namespace; it's now unlimited. Here are our GA limits:
- Up to 20 namespaces per account, each with unlimited keys
- Keys of up to 512 bytes and values of up to 2 MB
- Unlimited writes per second for different keys
- One write per second for the same key
- Unlimited reads per second per key
Try it out now!
Now open to all customers, you can start using Workers KV today from your Cloudflare dashboard under the Workers tab. You can also look at our updated documentation.
We’re really excited to see what you all can build with Workers KV! | https://noise.getoto.net/tag/bash/ | CC-MAIN-2021-31 | refinedweb | 2,102 | 57.27 |
On 6 Mar 2000, Ingo Ruhnke wrote:
> just another little script-fu. It creates a menu entry under
> "Xtns/Script-Fu/Tuxrcare/New Level" and on click it creates a new
> level from a given height and width with usefull default colors and
> the correct layer names.
Great! I modified it slightly to work under Gimp-1.0 (1.0 doesn't have
FG-IMAGE-FILL). It's on the web.
Here's an idea: I bet a level design tutorial would be useful for some
people. Does anybody feel like writing one?
Thanks,
Jasmin
On Mon, 6 Mar 2000, Steffen Sobiech wrote:
> Am Mon, 06 Mdr 2000 schrieben Sie:
> >
> > Look good. It worked on my system (a heavily patched RedHat 6.0 with
> > KDE 1.1.1), though there were a few small problems: it gives an error
> > message "ERROR: KFM is not running", even though it is. It also won't
> > let me change the installation directory (though perhaps that's on
> > purpose so that you don't have to write a ~/.tuxracer file).
>
> Yes, it was done on purpose...
> The problem is that the .tuxracer file needs to be in every user's
> home directory. Something like this would work with system wide
> defaults. Maybe I can have KDE start a script instead of the actual
> binary. That script could create a new ~/.tuxracer file if it doesn't
> already exist...
Ah yes, I hadn't thought about that. I think I'll add an (optional)
system-wide configuration file (/etc/tuxracer) where default values would be
obtained.
> > Thanks a lot! I'll place it on the web site today.
> Thank you, too!
You're very welcome.
Cheers,
Jasmin
P.S. Taking this back on-list...
I finished setting up the CVS repositories. There are two modules,
tuxracer and tuxracer-data. If would encourage level designers to use
CVS with their levels, if possible. You don't have to be religious about
it, but it will make file sharing easier.
It would also be nice if a few people could stay up to date on the CVS
version; that would help catch bugs like the one in 0.11. :-) I'll try
to post a message here after I do a commit of any significance.
Steve, I've added you to the developers list. Ingo, you don't seem to be
registered on SourceForge. If you send me your userid I'll add you too.
If anybody else would like write access to the CVS tree, let me know;
likewise if anybody would like extra/fewer privileges.
Other tidbits: I added Ingo's Script-Fu script to the web page, as well
as Steffen's GUI installer.
Cheers,
Jasmin
I introduced a silly bug into 0.11, so I've released Tux Racer 0.11.1.
If Tux Racer 0.11 coredumped on you, this should fix it.
Thanks,
Jasmin
On Mon, 6 Mar 2000, Sten Eriksson wrote:
> With "tuxracer-0.11.tar.gz" I get a Segmentation fault.
> Below is a patch that fixes this
Thanks very much. It figures that the very last change I did before 0.11 is
the culprit. :-) I'll have 0.11.1 out in a few minutes.
Cheers,
Jasmin
With "tuxracer-0.11.tar.gz" I get a Segmentation fault.
Below is a patch that fixes this
-- cut here --
*** string_util.c Mon Mar 6 10:10:36 2000
--- string_util.c.org Fri Feb 25 02:56:59 2000
***************
*** 34,45 ****
! if ((s1 != NULL) && (s2 != NULL)) {
! s1c = string_copy( s1 );
! s2c = string_copy( s2 );
! string_to_lower( s1c );
! string_to_lower( s2c );
! retval = strcmp( s1c, s2c );
! free( s1c );
! free( s2c );
! return retval;
! } else {
! return (s1 == s2);
! }
--- 34,41 ----
! s1c = string_copy( s1 );
! s2c = string_copy( s2 );
! string_to_lower( s1c );
! string_to_lower( s2c );
! retval = strcmp( s1c, s2c );
! free( s1c );
! free( s2c );
! return retval;
-- cut here --
-----------------------------------------------------------------
Sten Eriksson ! E-mail: sten.eriksson@...
UDAC AB / Datorhotellet ! Tel, work: +46 18 471 78 20
Box 174 ! Tel, mob: +46 70 542 47 03
SE-751 04 Uppsala ! Tel, fax: +46 18 51 66 00
SWEDEN !
-----------------------------------------------------------------
I've released Tux Racer 0.11. This release introduces several changes
for the benefit of course designers; they can now customize lighting,
fog, and particle colour to their liking.
The biggest change to gameplay is that courses can now be played in
"mirrored" mode, for added variety.
Two new user-contributed courses are also included.
All of the configuration variables in ~/.tuxracer are now documented in
the README file.
Several other changes and bug fixes were performed; please see the
ChangeLog.
Enjoy!
Jasmin
On 6 Mar 2000, Ingo Ruhnke wrote:
> >. %-)
:-) That patch made it into 0.11, so you might just want to look at that
instead.
Cheers,
Jasmin
On 6 Mar 2000, Ingo Ruhnke wrote:
> "Ingo's Speedway" sounds nice :-), have at the moment no idea for a
> better name, so lets use it.
Ok, I just released it in the 0.11 release.
Cheers,
Jasmin
On 6 Mar 2000, Ingo Ruhnke wrote:
> I just finished a little script-fu script. It takes an multi layer
> image and saves it as a set of rgb images, using the layer names as
> filename (elev, trees, terrain).
> So a level can be created as an .xcf file, which makes it much easier
> to maintain the level, than having three different files.
That's great! I'd been meaning to write something like that. I'll put it on
the site tomorrow; I'm sure people will find it very helpful.
> To install the script, simply copy it to: ~/.gimp-1.1/scripts/ and
> restart Gimp.
BTW, I also created a version that works with 1.0.
Thanks!
Jasmin
Jasmin Patry <jfpatry@...> writes:
>>
>
> Wow!! I love it!
Thanks.
> * It's easy in a few spots to avoid the paths and canyons that you've
> created; for example, I find it faster to climb on the large flat
> areas above the rest of the track (about halfway down). Sticking trees/rocks
> up there would discourage that.
Yep, didn't notice that before. I placed some new trees to stop that.
> The curvy icy bridge thing near the beginning is also easy to
> avoid by hitting it dead on and jumping it -- though that's kind
> of fun. :-)
That wasn't intentional when I created it, but it makes it fun and gives
the level a lot more speed; I think we should leave it that way.
> * A few trees are very close to the edge of the course on the right hand
> side, and because of a bug in the heightmap interpolation code (I
> think), they're suspended in mid air. You might want to move them in
> a bit until I get that fixed.
Ok, I'll fix that.
> * The paths in the course.tcl file should be relative; the file is
> interpreted with the CWD in the course's directory.
Yep, correct. I only used that to have the levels in my home dir
(symlinked to /usr/...); I didn't notice the ~/.tuxracer file, where I
can configure that.
> I'd like to include it in tuxracer-data-0.10.1; is that OK?
Sure.
>. %-)
--
ICQ: 59461927 |
Ingo Ruhnke <grumbel@...> |
------------------------------------------------------------------------+
Jasmin Patry <jfpatry@...> writes:
>>
>
> I'd like to get 0.10.2 releases of tuxracer and tuxracer-data out today.
> I've added the ability to specify the name and author of the course,
> which is displayed on the start screen.
> Can you suggest a name for your course? The best I can come up with
> is "Ingo's Speedway". :-)
"Ingo's Speedway" sounds nice :-), have at the moment no idea for a
better name, so lets use it.
--
ICQ: 59461927 |
Ingo Ruhnke <grumbel@...> |
------------------------------------------------------------------------+ | http://sourceforge.net/p/tuxracer/mailman/tuxracer-devel/?viewmonth=200003&viewday=6 | CC-MAIN-2015-40 | refinedweb | 1,284 | 86.5 |
Count of Distinct Groups of Strings Formed after Performing an Equivalent Operation
Introduction
As the title of this article suggests, we will be discussing a problem based on the famous data structure: Strings. A string is a sequence of characters; for example, "abc" is a string containing the characters 'a', 'b', and 'c'.
The String class offers various methods that simplify programming. In Java, a string is immutable, meaning it cannot be changed once created.
We will discuss a problem where we have to count the distinct groups of strings formed after checking equivalence between pairs of strings.
Without any delays, let’s move to our problem statement.
Problem Statement
We are given an array of strings, and we aim to find the number of distinct groups formed after performing an equivalent operation.
Now, what do we mean by an equivalent operation?
Two strings are said to be equivalent if they have at least one character in common. Moreover, if another string shares a character with any string in a group of equivalent strings, that string also belongs to the group.
Example:
Input: {“ab”, “bc”, “abc”}
Output: 1
Explanation:
The strings "ab" and "bc" have the character 'b' in common, the strings "bc" and "abc" have 'b' and 'c' in common, and the strings "ab" and "abc" have 'a' and 'b' in common, so "ab", "bc", and "abc" are all equivalent and form a single group.
Therefore, the number of distinct groups of strings is 1.
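As an illustrative aside (the helper name is ours, not from the article), the pairwise equivalence check is just a set intersection; a Python sketch:

```python
def equivalent(a, b):
    """Two strings are equivalent if they share at least one character."""
    return bool(set(a) & set(b))
```

For the example above, equivalent("ab", "bc") and equivalent("bc", "abc") are both true, so all three strings chain into one group.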
Approach
We can use the approach of Disjoint Set Union (DSU) to solve this question. We will treat each of the 26 lowercase characters as a node of a graph and union all the characters that appear in the same string. Two strings that share a character then end up in the same component, so the answer is simply the number of connected components among the characters that occur in at least one string.
Refer to the below implementation of the above approach.
Implementation
import java.util.*;

public class Main {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        String[] a = new String[n];
        for (int i = 0; i < n; i++) {
            a[i] = sc.next();
        }
        System.out.println(solve(a, n));
    }

    // Union the characters of each string, then count the connected
    // components among the characters that actually occur.
    static int solve(String[] a, int n) {
        DSU dsu = new DSU(26);
        boolean[] present = new boolean[26];
        for (int i = 0; i < n; i++) {
            int first = a[i].charAt(0) - 'a';
            for (int j = 0; j < a[i].length(); j++) {
                int c = a[i].charAt(j) - 'a';
                present[c] = true;
                dsu.join(first, c);
            }
        }
        int cnt = 0;
        for (int c = 0; c < 26; c++) {
            if (present[c] && dsu.findPar(c) == c) {
                cnt++;
            }
        }
        return cnt;
    }
}

class DSU {
    int[] par;
    int[] size;

    DSU(int n) {
        par = new int[n];
        size = new int[n];
        Arrays.fill(size, 1);
        for (int i = 0; i < n; i++) par[i] = i;
    }

    int findPar(int x) {
        if (x == par[x]) return x;
        return par[x] = findPar(par[x]); // path compression
    }

    boolean join(int u, int v) {
        int fu = findPar(u);
        int fv = findPar(v);
        if (fu == fv) return false;
        // Union by size: attach the smaller tree under the larger one.
        if (size[fu] < size[fv]) { int t = fu; fu = fv; fv = t; }
        par[fv] = fu;
        size[fu] += size[fv];
        return true;
    }
}
Input
3
ab bc abc
Output
1
FAQs
- What is the role of Disjoint Set in data structures?
A disjoint set is used to find the number of connected components in a graph, or to merge nodes that satisfy a particular condition.
- What is the time complexity to solve this problem?
The time complexity to solve this problem is O(N * log(N)), where N is the total number of characters across all strings.
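A minimal Python sketch of the whole idea (illustrative, with our own names; the article's own solution is the Java program above): a union-find with path compression and union by size, applied to the 26 characters.

```python
class DSU:
    """Minimal disjoint-set union with path compression and union by size."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path compression: point x directly at its root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False
        # Union by size: attach the smaller tree under the larger one.
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        self.size[ru] += self.size[rv]
        return True


def count_groups(strings):
    """Count groups of equivalent strings by unioning their characters."""
    dsu = DSU(26)
    present = [False] * 26
    for s in strings:
        first = ord(s[0]) - ord('a')
        for ch in s:
            c = ord(ch) - ord('a')
            present[c] = True
            dsu.union(first, c)
    # One group per connected component among the characters that occur.
    return sum(1 for c in range(26) if present[c] and dsu.find(c) == c)
```

For the article's example, count_groups(["ab", "bc", "abc"]) returns 1, while disjoint inputs such as ["a", "b"] give 2.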
Key Takeaways
This blog has covered a problem based on Strings in the Java language, where we have to count the number of distinct groups of strings after checking the equivalence between them.
You can check out this blog for guidance on mastering Graph Algorithms.
You can use CodeStudio for various DSA questions typically asked in interviews for more practice. It will help you in mastering efficient coding techniques. | https://www.codingninjas.com/codestudio/library/count-of-distinct-groups-of-strings-formed-after-performing-an-equivalent-operation | CC-MAIN-2022-27 | refinedweb | 637 | 70.84 |
I had to write a program that would analyze a large amount of data. In fact, too much data to actually analyze all of. So I resorted to random sampling of the data, but even so, it was going to take a long time. For various reasons, the simplistic program I started with would stop running, and I’d lose the progress I made on crunching through the mountain of data.
You’d think I would have started with a restartable program so that I wouldn’t have to worry about interruptions, but I guess I’m not that smart, so I had to get there iteratively.
The result worked well, and for the next time I need a program that can pick up where it left off and make progress against an unreasonable goal, here’s the skeleton of what I ended up with:
import os, os.path, random, shutil, sys
import cPickle as pickle
class Work(object):
"""The state of the computation so far."""
def __init__(self):
self.items = []
self.results = Something_To_Hold_Results()
def initialize(self):
self.items = Get_All_The_Possible_Items()
random.shuffle(self.items)
def do_work(self, nitems):
for _ in xrange(nitems):
item = self.items.pop()
Process_An_Item_And_Update_Results(item)
Display_Results_So_Far()
def main(argv):
pname = "work.pickle"
bname = "work.pickle.bak"
if os.path.exists(pname):
# A pickle exists! Restore the Work from
# it so we can make progress.
with open(pname, 'rb') as pfile:
work = pickle.load(pfile)
else:
# This must be the first time we've been run.
# Start from the beginning.
work = Work()
work.initialize()
while True:
# Process 25 items, then checkpoint our progress.
work.do_work(25)
if os.path.exists(pname):
# Move the old pickle so we can't lose it.
shutil.move(pname, bname)
with open(pname, 'wb') as pfile:
pickle.dump(work, pfile, -1)
if __name__ == '__main__':
main(sys.argv[1:])
The “methods” in the Strange_Camel_Case are pseudo-code where the actual particulars would get filled in. The Work object is pickled every once in a while, and when the program starts, it reconstitutes the Work object from the pickle so that it can pick up where it left off.
The program will run forever, and display results every so often. I just let it keep running until it seemed like the random sampling had gotten me good convergence on the extrapolation to the truth. Another use of this skeleton might need a real end condition.
For a bit of extra robustness, you may want to write the new pickle before you move the old one: that way, if something goes wrong with the serialisation, the last known good pickle will still be in place.
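A sketch of that safer ordering (hypothetical helper name; unlike the Python 2 code above, this uses Python 3's `os.replace`, which is an atomic rename on POSIX):

```python
import os
import pickle
import shutil


def checkpoint(work, pname="work.pickle", bname="work.pickle.bak"):
    """Write the new pickle BEFORE touching the old one, so a failed
    serialization can never destroy the last known-good checkpoint."""
    tmp = pname + ".new"
    with open(tmp, "wb") as pfile:
        pickle.dump(work, pfile, -1)   # if this raises, pname is untouched
    if os.path.exists(pname):
        shutil.move(pname, bname)      # keep a copy of the previous pickle
    os.replace(tmp, pname)             # atomically swap the new pickle in
```

The main loop would then call checkpoint(work) instead of the dump/move sequence in the skeleton.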
You might also want to keep a copy of the pickle file before you replace it.
I've had situations like the above where running out of memory causes the pickle to get corrupted during write...
Chaps, poo happening requires `cp work.pickle.bak work.pickle`. Anything more is over-engineering, surely?
A question: How to extend this to allow parallel workers? Perhaps one pickle object per worker, or one pickle object representing the state of many workers?
@Nick: I see that your extra step would leave the pickle in the proper place. I was willing to rename the backup file by hand if something really bad happened.
@Chris: I do keep a copy of the pickle, or are you talking about something else?
@Bill: The simplest thing for parallel workers is for them to somehow segment the population of items. For example, each worker when started is given a number N: 0, 1, 2, 3. Then they only work on items whose id is N mod 4. Each worker writes their own pickle with their number in the file name, and a separate program knows how to read the pickles and combine them together. This isn't very sophisticated, but is simple.
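That mod-N partitioning can be sketched as follows (hypothetical names; each worker writes its own pickle, and a separate program reads them all and merges the results):

```python
import pickle


def my_items(all_ids, worker_num, num_workers):
    """Each worker N (0 .. num_workers-1) owns the items whose id is
    congruent to N modulo num_workers."""
    return [i for i in all_ids if i % num_workers == worker_num]


def combine(num_workers, name_pattern="work-%d.pickle"):
    """The separate combiner program: read every worker's pickle and
    merge the per-worker results into one list."""
    merged = []
    for n in range(num_workers):
        with open(name_pattern % n, "rb") as pfile:
            merged.append(pickle.load(pfile))
    return merged
```

The partition is complete and disjoint: every id lands in exactly one worker's subset.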
Re Ned's response to Bill: the map/reduce/rereduce pattern seems to fit.
If the data are so huge that you can't process all of them, doesn't loading all the data into Work, and the frequent pickling, take too much time (and memory)?
@Yannick: I'm not loading all the data, I'm loading all the ids of the data. Each work item involves reading data from the db, reading associated files on disk, etc.
My god just use a queue? Beanstalkd, bam now you have a binlog and can have multiple workers on multiple computers. Or use GAE and essentially the same thing wih defer. Tricky though!! I like beanstalkd best though. Also turtles
Good Day,
I try send mail via script, but I get error.
def SendEmail(Message): try: useSSL = 'True' schema = "" mConfig = Sys.OleObject["CDO.Configuration"] mConfig.Fields.Item[schema + "sendusing"] = 2 mConfig.Fields.Item[schema + "smtpserver"] = outlook.office365.com mConfig.Fields.Item[schema + "smtpserverport"] = 587 mConfig.Fields.Item[schema + "sendusername"] = MyEmailAddress mConfig.Fields.Item[schema + "sendpassword"] = MyEmailPassword mConfig.Fields.Item[schema + "smtpauthenticate"] = 1 mConfig.Fields.Item[schema + "STARTTLS"] = True mConfig.Fields.Item[schema + "smtpconnectiontimeout"] = 30; mConfig.Fields.Update() mMessage = Sys.OleObject["CDO.Message"] mMessage.Configuration = mConfig mMessage.From = MyEmailAddress mMessage.To = MyEmailAddress mMessage.Subject = 'TestComplete Result' mMessage.HTMLBody = Message mMessage.Send(); except Exception as exp: Log.Error('E-mail cannot be sent', str(exp)) return False Log.Message('Message was successfully sent') return True def test (): SendEmail('Hi')
Wht is wrong in my code?
Solved! Go to Solution.
Good day,
After days and days, I have find a solution for this problem.
The CDO do not support TLS therefor this problem apear in my Case. I use another codes from this Link
maybe that is better to attache that codes in TC's help. 🙂
what is the error?
Any reason you are not using the Builtin.SendMail( ) function in Testcomplete instead of using the CDO objects?
Also without sharing the error, it is difficult to help
Cheers
Lino
@anupamchampati @LinoTadros Many thanks for your quick reply
('The transport could not connect to the server.', 0, 'CDO.Message.1', '', 0, -2147220973)
It would be helpful to know what the error is you get and on which line.
Just as a rough guess, though, if this is a copy and paste of your actual code the problem is with
mConfig.Fields.Item[schema + "smtpserver"] = outlook.office365.com
That needs to be set as a string. Change to
mConfig.Fields.Item[schema + "smtpserver"] = "outlook.office365.com"
That is not a TestComplete issue. You have a problem on the machine you are executing this code from regarding the CDO COM object registration, or Firewall.
To eliminate the possible issues, try to execute the code from your Python IDE of choice and you will probably get the same error.
If you have Visual Studio, you can try similar code in C# to send a message with CDO
Once you are successful using the CDO object in another product, TestComplete will work.
You can even use Excel with VBScript to test the CDO object
Currently I am pretty sure it is the COM server for CDO or the Firewall that is cause the connection not to work
Cheers
-Lino
I must using the STARTTLS.
@tristaanogre many thanks for your Tip,
I change it with "outlook.office365.com"
and agin not working
the error ist:
('The transport could not connect to the server.', 0, 'CDO.Message.1', '', 0, -2147220973)
There are other things in that code that I'm assuming are variables. MyEmailAddress and MyEmailPassword for example. The assumption is that they are being populated somewhere. Please check to make sure you have those values correct.
@tristaanogre Thanks,
I check that, that is correct. So, Maybe the case of this Error is our Firewall and hosting department. I will try this code from another infrastructure. | https://community.smartbear.com/t5/TestComplete-Desktop-Testing/send-mail-via-script/m-p/193624/highlight/true | CC-MAIN-2020-40 | refinedweb | 527 | 61.53 |
In this post, you will discover how you can use the scikit-learn grid search capability to tune the hyperparameters of Keras deep learning models.
After reading this post you will know:
- How to wrap Keras models for use in scikit-learn and how to use grid search.
- How to grid search common neural network parameters such as learning rate, dropout rate, epochs and number of neurons.
- How to define your own hyperparameter tuning experiments on your own projects.
Let’s get started.
- Update Nov/2016: Fixed minor issue in displaying grid search results in code examples.
How to Grid Search Hyperparameters for Deep Learning Models in Python With Keras
Photo by 3V Photo, some rights reserved.
Overview
In this post I want to show you both how you can use the scikit-learn grid search capability and give you a suite of examples that you can copy-and-paste into your own project as a starting point.
Below is a list of the topics we are going to cover:
- How to use Keras models in scikit-learn.
- How to use grid search in scikit-learn.
- How to tune batch size and training epochs.
- How to tune optimization algorithms.
- How to tune learning rate and momentum.
- How to tune network weight initialization.
- How to tune activation functions.
- How to tune dropout regularization.
- How to tune the number of neurons in the hidden layer.
How to Use Keras Models in scikit-learn
Keras models can be used in scikit-learn by wrapping them with the KerasClassifier or KerasRegressor class. To use these wrappers, you must define a function that creates and returns your Keras model, then pass this function to the build_fn argument when constructing the KerasClassifier class.
For example:
model = KerasClassifier(build_fn=create_model)
The constructor for the KerasClassifier class can take default arguments that are passed on to the calls to model.fit(), such as the number of epochs and the batch size.
For example:
model = KerasClassifier(build_fn=create_model, nb_epoch=10)
The constructor for the KerasClassifier class can also take new arguments that can be passed to your custom create_model() function. These new arguments must also be defined in the signature of your create_model() function with default parameters.
For example:
model = KerasClassifier(build_fn=create_model, dropout_rate=0.2)
You can learn more about the scikit-learn wrapper in Keras API documentation.
How to Use Grid Search in scikit-learn
Grid search is a model hyperparameter optimization technique.
In scikit-learn this technique is provided in the GridSearchCV class.
When constructing this class you must provide a dictionary of hyperparameters to evaluate in the param_grid argument. This is a map of the model parameter name and an array of values to try.
By default, accuracy is the score that is optimized. The grid search process will then construct and evaluate one model for each combination of parameters. Cross validation is used to evaluate each individual model, and the default of 3-fold cross validation is used, although this can be overridden by specifying the cv argument to the GridSearchCV constructor.
Below is an example of defining a simple grid search:
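The grid definition is just a dictionary mapping parameter names to candidate values, and the search enumerates their Cartesian product. A dependency-free sketch (illustrative names; the actual GridSearchCV call is shown as a comment because it needs a fitted-model wrapper):

```python
import itertools


def grid_combinations(param_grid):
    """Enumerate every parameter combination, as GridSearchCV does
    internally before fitting one model per combination."""
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        yield dict(zip(keys, values))


# The grid itself is just a dict of parameter name -> candidate values:
param_grid = dict(batch_size=[10, 20, 40], nb_epoch=[10, 50])
# A real search would then be:
#   grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
#   grid_result = grid.fit(X, Y)
```

Three batch sizes times two epoch counts means six candidate models, each evaluated with cross validation.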
Once completed, you can access the outcome of the grid search in the result object returned from grid.fit(). The best_score_ member provides access to the best score observed during the optimization procedure and the best_params_ describes the combination of parameters that achieved the best results.
You can learn more about the GridSearchCV class in the scikit-learn API documentation.
Problem Description
Now that we know how to use Keras models with scikit-learn and how to use grid search in scikit-learn, let’s look at a bunch of examples.
All examples will be demonstrated on a small standard machine learning dataset called the Pima Indians onset of diabetes classification dataset. This is a small dataset with all numerical attributes that is easy to work with.
- Download the dataset and place it in your current working directory with the name pima-indians-diabetes.csv.
As we proceed through the examples in this post, we will aggregate the best parameters. This is not the best way to grid search because parameters can interact, but it is good for demonstration purposes.
Note on Parallelizing Grid Search
All examples are configured to use parallelism (n_jobs=-1).
If you get an error like the one below:
Kill the process and change the code to not perform the grid search in parallel by setting n_jobs=1.
How to Tune Batch Size and Number of Epochs
In this first simple example, we look at tuning the batch size and number of epochs used when fitting the network.
The batch size in iterative gradient descent is the number of patterns shown to the network before the weights are updated. It is also an optimization in the training of the network, defining how many patterns to read at a time and keep in memory.
The number of epochs is the number of times that the entire training dataset is shown to the network during training. Some networks are sensitive to the batch size, such as LSTM recurrent neural networks and Convolutional Neural Networks.
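The relationship between dataset size, batch size, and weight updates can be sketched as follows (illustrative helper; the 768-row figure is the Pima Indians dataset used below):

```python
import math


def updates_per_epoch(n_samples, batch_size):
    """With mini-batch gradient descent, the weights are updated once per
    batch, so fewer (larger) batches mean fewer updates per epoch."""
    return math.ceil(n_samples / batch_size)
```

With 768 training rows, a batch size of 10 gives 77 weight updates per epoch, while a batch size of 100 gives only 8.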
Here we will evaluate a suite of different mini batch sizes from 10 to 100 in steps of 20.
The full code listing is provided below.
Running this example produces the following output.
We can see that the batch size of 20 and 100 epochs achieved the best result of about 68% accuracy.
How to Tune the Training Optimization Algorithm
Keras offers a suite of different state-of-the-art optimization algorithms.
In this example, we tune the optimization algorithm used to train the network, each with default parameters.
This is an odd example, because often you will choose one approach a priori and instead focus on tuning its parameters on your problem (e.g. see the next example).
Here we will evaluate the suite of optimization algorithms supported by the Keras API.
The full code listing is provided below.
Running this example produces the following output.
The results suggest that the ADAM optimization algorithm is the best with a score of about 70% accuracy.
How to Tune Learning Rate and Momentum
It is common to pre-select an optimization algorithm to train your network and tune its parameters.
By far the most common optimization algorithm is plain old Stochastic Gradient Descent (SGD) because it is so well understood. In this example, we will look at optimizing the SGD learning rate and momentum parameters.
Learning rate controls how much to update the weight at the end of each batch and the momentum controls how much to let the previous update influence the current weight update.
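The update rule being tuned here can be sketched in scalar form (illustrative, not Keras's implementation):

```python
def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD update: the velocity term carries over a fraction
    (momentum) of the previous update on top of the new gradient
    step (scaled by the learning rate lr)."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```

Repeated gradients in the same direction make the velocity, and therefore the step size, grow, which is how momentum accelerates learning.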
We will try a suite of small standard learning rates and a momentum values from 0.2 to 0.8 in steps of 0.2, as well as 0.9 (because it can be a popular value in practice).
Generally, it is a good idea to also include the number of epochs in an optimization like this as there is a dependency between the amount of learning per batch (learning rate), the number of updates per epoch (batch size) and the number of epochs.
The full code listing is provided below.
Running this example produces the following output.
We can see that SGD is relatively not very good on this problem; nevertheless, the best results were achieved using a learning rate of 0.01 and a momentum of 0.0, with an accuracy of about 68%.
How to Tune Network Weight Initialization
Neural network weight initialization used to be simple: use small random values.
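That old default can be sketched as follows (illustrative helper; "uniform" is roughly what Keras calls this scheme):

```python
import random


def init_uniform(n_in, n_out, scale=0.05):
    """'Small random values': an n_in x n_out weight matrix drawn
    uniformly from [-scale, scale]."""
    return [[random.uniform(-scale, scale) for _ in range(n_out)]
            for _ in range(n_in)]
```

The small scale keeps early activations in the responsive region of the activation function, which is why this simple scheme worked as long as it did.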
Now there is a suite of different techniques to choose from. Keras provides a laundry list.
In this example, we will look at tuning the selection of network weight initialization by evaluating all of the available techniques.
We will use the same weight initialization method on each layer. Ideally, it may be better to use different weight initialization schemes according to the activation function used on each layer. In the example below we use rectifier for the hidden layer. We use sigmoid for the output layer because the predictions are binary.
The full code listing is provided below.
Running this example produces the following output.
We can see that the best results were achieved with a uniform weight initialization scheme achieving a performance of about 72%.
How to Tune the Neuron Activation Function
The activation function controls the non-linearity of individual neurons and when to fire.
Generally, the rectifier activation function is the most popular, but it used to be the sigmoid and the tanh functions and these functions may still be more suitable for different problems.
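For reference, the three activations mentioned can be sketched in plain Python (illustrative scalar forms):

```python
import math


def relu(x):
    """Rectifier: zero for negative inputs, identity otherwise."""
    return max(0.0, x)


def sigmoid(x):
    """Logistic sigmoid: squashes input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))


def tanh(x):
    """Hyperbolic tangent: squashes input into (-1, 1)."""
    return math.tanh(x)
```

The bounded outputs of sigmoid and tanh explain why input scaling matters more for them than for the rectifier.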
In this example, we will evaluate the suite of different activation functions available in Keras. We will only use these functions in the hidden layer, as we require a sigmoid activation function in the output for the binary classification problem.
Generally, it is a good idea to prepare data to the range of the different transfer functions, which we will not do in this case.
The full code listing is provided below.
Running this example produces the following output.
Surprisingly (to me at least), the ‘linear’ activation function achieved the best results with an accuracy of about 72%.
How to Tune Dropout Regularization
In this example, we will look at tuning the dropout rate for regularization in an effort to limit overfitting and improve the model’s ability to generalize.
To get good results, dropout is best combined with a weight constraint such as the max norm constraint.
For more on using dropout in deep learning models with Keras see the post:
This involves fitting both the dropout percentage and the weight constraint. We will try dropout percentages between 0.0 and 0.9 (1.0 does not make sense) and maxnorm weight constraint values between 0 and 5.
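The two mechanisms being tuned here, inverted dropout and the max-norm constraint, can be sketched as follows (illustrative, not Keras's implementation):

```python
import random


def dropout(values, rate):
    """Inverted dropout: zero each activation with probability `rate`
    and scale survivors by 1/(1-rate) to preserve the expected sum."""
    keep = 1.0 - rate
    return [0.0 if random.random() < rate else v / keep for v in values]


def maxnorm(weights, c=4.0):
    """Max-norm constraint: rescale the weight vector whenever its
    L2 norm exceeds c."""
    norm = sum(w * w for w in weights) ** 0.5
    if norm <= c:
        return list(weights)
    return [w * c / norm for w in weights]
```

The constraint keeps the scaled-up surviving weights from growing without bound, which is why dropout and max-norm work well together.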
The full code listing is provided below.
Running this example produces the following output.
We can see that the dropout rate of 0.2 (20%) and the maxnorm weight constraint of 4 resulted in the best accuracy of about 72%.
How to Tune the Number of Neurons in the Hidden Layer
The number of neurons in a layer is an important parameter to tune. Generally the number of neurons in a layer controls the representational capacity of the network, at least at that point in the topology.
Also, generally, a large enough single layer network can approximate any other neural network, at least in theory.
In this example, we will look at tuning the number of neurons in a single hidden layer. We will try values from 1 to 30 in steps of 5.
A larger network requires more training and at least the batch size and number of epochs should ideally be optimized with the number of neurons.
The full code listing is provided below.
Running this example produces the following output.
We can see that the best results were achieved with a network with 5 neurons in the hidden layer with an accuracy of about 71%.
Tips for Hyperparameter Optimization
This section lists some handy tips to consider when tuning hyperparameters of your neural network.
- k-fold Cross Validation. You can see that the results from the examples in this post show some variance. A default cross-validation of 3 was used, but perhaps k=5 or k=10 would be more stable. Carefully choose your cross validation configuration to ensure your results are stable.
- Review the Whole Grid. Do not just focus on the best result, review the whole grid of results and look for trends to support configuration decisions.
- Parallelize. Use all your cores if you can, neural networks are slow to train and we often want to try a lot of different parameters. Consider spinning up a lot of AWS instances.
- Use a Sample of Your Dataset. Because networks are slow to train, try training them on a smaller sample of your training dataset, just to get an idea of general directions of parameters rather than optimal configurations.
- Start with Coarse Grids. Start with coarse-grained grids and zoom into finer grained grids once you can narrow the scope.
- Do not Transfer Results. Results are generally problem specific. Try to avoid favorite configurations on each new problem that you see. It is unlikely that optimal results you discover on one problem will transfer to your next project. Instead look for broader trends like number of layers or relationships between parameters.
- Reproducibility is a Problem. Although we set the seed for the random number generator in NumPy, the results are not 100% reproducible. There is more to reproducibility when grid searching wrapped Keras models than is presented in this post.
Summary
In this post, you discovered how you can tune the hyperparameters of your deep learning networks in Python using Keras and scikit-learn.
Specifically, you learned:
- How to wrap Keras models for use in scikit-learn and how to use grid search.
- How to grid search a suite of different standard neural network parameters for Keras models.
- How to design your own hyperparameter optimization experiments.
Do you have any experience tuning hyperparameters of large neural networks? Please share your stories below.
Do you have any questions about hyperparameter optimization?
As always, excellent post. I've been doing some hyper-parameter optimization by hand, but I'll definitely give Grid Search a try.
Is it possible to set up a different threshold for sigmoid output in Keras? Rather than using 0.5, I was thinking of trying 0.7 or 0.8.
Thanks Yanbo.
I don’t think so, but you could implement your own activation function and do anything you wish.
My question is related to this thread. How to get the probablities as the output? I dont want the class output. I read for a regression problem that no activation function is needed in the output layer. Similiar implementation will get me the probabilities ?? or the output will exceed 0 and 1??
Hi Shudhan, you can use a sigmoid activation and treat the outputs like probabilities (they will be in the range of 0-1).
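A sketch of applying a custom decision threshold to those probabilities (illustrative helper, applied outside of Keras after prediction):

```python
def classify(probabilities, threshold=0.5):
    """Turn sigmoid outputs into class labels with an adjustable cut-off."""
    return [1 if p >= threshold else 0 for p in probabilities]
```

Raising the threshold (e.g. to 0.7, as asked above) trades recall for precision on the positive class without touching the model itself.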
Sounds awesome! Will this grid search method use the full CPU (which can be 8/16 cores)?
It can if you set n_jobs=-1
Hi,
Great post,
Can I use this tips on CNNs in keras as well?
Thanks!
They can be a start, but remember it is a good idea to use a repeating structure in a large CNN and you will need to tune the number of filters and pool size.
Hi Jason, First of all great post! I applied this by dividing the data into train and test and used train dataset for grid fit. Plan was to capture best parameters in train and apply them on test to see accuracy. But it seems grid.fit and model.fit applied with same parameters on same dataset (in this case train) give different accuracy results. Any idea why this happens. I can share the code if it helps.
You will see small variation in the performance of a neural net with the same parameters from run to run. This is because of the stochastic nature of the technique and how very hard it is to fix the random number seed successfully in python/numpy/theano.
You will also see small variation due to the data used to train the method.
Generally, you could use all of your data to grid search to try to reduce the second type of variation (slower). You could store results and use statistical significance tests to compare populations of results to see if differences are significant to sort out the first type or variation.
I hope that helps.
Hi, I think this is the best tutorial I've ever found on the web… Thanks for sharing… Is it possible to use these tips on LSTM, BiLSTM, CNN-LSTM?
Thanks Vinay, I’m glad it’s useful.
Absolutely, you could use these tactics on other algorithm types.
Best place to learn the tuning.. my question – is it good to follow the order you mentioned to tune the parameters? I know the most significant parameters should be tuned first
Thanks. The order is a good start. It is best to focus on areas where you think you will get the biggest improvement first – which is often the structure of the network (layers and neurons).
When I am using the categorical_crossentropy loss function and running the grid search with n_jobs more than 1, it throws the error "cannot pickle object class", but the same thing works fine with binary_crossentropy. Can you tell me if I am making any mistake in my code:
def create_model(optimizer=’adam’):
# create model
model = Sequential()
model.add(Dense(30, input_dim=59, init=’normal’, activation=’relu’))
model.add(Dense(15, init=’normal’, activation=’sigmoid’))
model.add(Dense(3, init=’normal’, activation=’sigmoid’))
# Compile model
model.compile(loss=’categorical_crossentropy’, optimizer=optimizer, metrics=[‘accuracy’])
return model
# Create Keras Classifier
print “——————— Running Grid Search on Keras Classifier for epochs and batch ——————”
clf = model = KerasClassifier(build_fn = create_model, verbose=0)
param_grid = {“batch_size”:range(10, 30, 10), “nb_epoch”:range(50, 150, 50)}
optimizer = [‘SGD’, ‘RMSprop’, ‘Adagrad’, ‘Adadelta’, ‘Adam’, ‘Adamax’, ‘Nadam’]
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=4)
grid_result = grid.fit(x_train, y_train)
print(“Best: %f using %s” % (grid_result.best_score_, grid_result.best_params_))
Strange Satheesh, I have not seen that before.
Let me know if you figure it out.
excellent post, thanks. It’s been very helpful to get me started on hyperparameterisation.
One thing I haven’t been able to do yet is to grid search over parameters which are not proper to the NN but to the trainign set. For example, I can fine-tune the input_dim parameter by creating a function generator which takes care of creating the function that will create the model, like this:
# fp_subset is a subset of columns of my whole training set.
create_basic_ANN_model = kt.ANN_model_gen( # defined elsewhere
input_dim=len(fp_subset), output_dim=1, layers_num=2, layers_sizes=[len(fp_subset)/5, len(fp_subset)/10, ],
loss=’mean_squared_error’, optimizer=’adadelta’, metrics=[‘mean_squared_error’, ‘mean_absolute_error’]
)
model = KerasRegressor(build_fn=create_basic_ANN_model, verbose=1)
# define the grid search parameters
batch_size = [10, 100]
epochs = [5, 10]
param_grid = dict(batch_size=batch_size, nb_epoch=epochs)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, cv=7)
grid_results = grid.fit(trX, trY)
this works but only as a for loop over the different fp_subset, which I must define manually.
I could easily pick the best out of every run, but it would be great if I could fold them all inside one big grid definition and fit, so as to automatically pick the best one.
However, until now I haven't been able to figure out a way to do that.
If the wrapper function is useful to anyone, I can post a generalised version here.
Good question.
You might just need to us a loop around the whole lot for different projections/views of your training data.
Thanks. I ended up coding my own for loop, saving the results of each grid in a dict, sorting the hash by the perofrmance metrics, and picking the best model.
Now, the next question is: How do I save the model’s architecture and weights to a .json .hdf5 file? I know how to do that for a simple model. But how do I extract the best model out of the gridsearch results?
Well done.
No need. Once you know the parameters, you can use them to train a new standalone model on all of your training data and start making predictions.
I may have found a way. How about this?
```python
best_model = grid_result.best_estimator_.model
best_model_file_path = 'your_pick_here'
model2json = best_model.to_json()
with open(best_model_file_path + '.json', 'w') as json_file:
    json_file.write(model2json)
best_model.save_weights(best_model_file_path + '.h5')
```
Hi Jason, I think this is the very best deep learning tutorial on the web. Thanks for your work. I have a question: how can heuristic algorithms be used to optimize hyperparameters for deep learning models in Python with Keras — algorithms like the genetic algorithm, particle swarm optimization, the cuckoo search algorithm, etc.? If the idea could be experimented with, could you give an example?
Thanks for your support volador.
You could search the hyperparameter space using a stochastic optimization algorithm like a genetic algorithm and use the mean performance as the cost function or fitness function. I don’t have a worked example, but it would be relatively easy to set up.
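To make that concrete, here is a minimal genetic-algorithm-style sketch. Everything below is illustrative: `fitness` is a hypothetical stand-in for "train the model with these hyperparameters and return the mean cross-validation score" (in a real experiment it would call `cross_val_score` on a `KerasClassifier`), and the parameter names `lr` and `neurons` are arbitrary.

```python
import random

# Hypothetical stand-in for "fit the network and return mean CV accuracy".
# The peak is deliberately placed at lr=0.01, neurons=10.
def fitness(params):
    return 1.0 - abs(params["lr"] - 0.01) - abs(params["neurons"] - 10) / 100.0

def mutate(params):
    # Perturb one candidate: scale the learning rate, nudge the neuron count.
    child = dict(params)
    child["lr"] = max(1e-4, child["lr"] * random.choice([0.5, 1.0, 2.0]))
    child["neurons"] = max(1, child["neurons"] + random.choice([-2, 0, 2]))
    return child

def evolve(pop_size=10, generations=5, seed=1):
    random.seed(seed)
    pop = [{"lr": random.choice([0.001, 0.01, 0.1]),
            "neurons": random.randint(1, 20)} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection: keep the top half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best)
```

Swapping the toy `fitness` for a real "fit and score" routine is the only change needed to apply this to a Keras model.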
Hi Jason, very helpful intro into gridsearch for Keras. I have used your guidance in my code, but rather than using the default ‘accuracy’ to be optimized, my model requires a specific evaluation function to be optimized. You hint at this possibility in the introduction, but there is no example of it. I have followed the SciKit-learn documentation, but I fail to come up with the correct syntax.
I have posted my question at StackOverflow, but since it is quite specific, it requires understanding of SciKit-learn in combination with Keras.
Perhaps you can have a look? I think it would nicely extend your tutorial.
Thanks, Jan
Sorry Jan, I have not used a custom scoring function before.
Here are a list of built-in scoring functions:
Here is help on defining your own scoring function:
Let me know how you go.
Yup, same sources as I referenced in my post at Stackoverflow.
Excellent. Good luck Jan.
Good tutorial again Jason… keep up the good work!
Thanks Anthony.
Hi Jason
First off, thank you for the tutorial. It’s very helpful.
I was also hoping you would assist on how to adapt the keras grid search to stateful lstms as discussed in
I’ve coded the following:
```python
# create model
model = KerasRegressor(build_fn=create_model, nb_epoch=1, batch_size=bats,
                       verbose=2, shuffle=False)

# define the grid search parameters
h1n = [5, 10]  # number of hidden neurons
param_grid = dict(h1n=h1n)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=5)
for i in range(100):
    grid.fit(trainX, trainY)
    grid.reset_states()
```
Is grid.reset_states() correct? Or would you suggest creating a function callback to reset states?
Thanks,
Great question.
With stateful LSTMs we must control the resetting of states after each epoch. The sklearn framework does not expose this capability to us – at least that is how it looks to me off the cuff.
I think you may have to grid search stateful LSTM params manually with a ton of for loops. Sorry.
If you discover something different, let me know, i.e. there may be a back door into the sklearn grid search functionality through which we can inject our own custom epoch handling.
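A fully manual grid search with plain for loops might look like the sketch below. `evaluate_config` is hypothetical: in a real run it would build a stateful LSTM, fit epoch by epoch with `shuffle=False`, call `model.reset_states()` after each epoch, and return a validation error. Here it is a toy function so the loop structure is clear.

```python
from itertools import product

# Hypothetical scoring function standing in for "train a stateful LSTM
# with these settings and return validation RMSE" (lower is better).
def evaluate_config(neurons, batch_size, epochs):
    return abs(neurons - 10) * 0.01 + abs(batch_size - 1) * 0.02 + 1.0 / epochs

grid = {"neurons": [5, 10], "batch_size": [1, 2], "epochs": [50, 100]}

best_score, best_params = None, None
# Exhaustively evaluate every combination in the grid.
for neurons, batch_size, epochs in product(*grid.values()):
    score = evaluate_config(neurons, batch_size, epochs)
    if best_score is None or score < best_score:
        best_score = score
        best_params = dict(zip(grid, (neurons, batch_size, epochs)))

print(best_params, best_score)
```

The advantage of doing it manually is total control: the body of the loop can reset states, use walk-forward validation, or average repeated runs — none of which the sklearn wrapper offers.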
Hi Jason
Thanks a lot for this and all the other great tutorials!
I tried to combine this gridsearch/keras approach with a pipeline. It works if I tune nb_epoch or batch_size, but I get an error if I try to tune the optimizer or something else in the keras building function (I did not forget to include the variable as an argument):
```python
def keras_model(optimizer='adam'):
    model = Sequential()
    model.add(Dense(80, input_dim=79, init='normal'))
    model.add(Activation('relu'))
    model.add(Dense(1, init='normal'))
    model.add(Activation('linear'))
    model.compile(optimizer=optimizer, loss='mse')
    return model

kRegressor = KerasRegressor(build_fn=keras_model, nb_epoch=500, batch_size=10, verbose=0)
estimators = []
estimators.append(('imputer', preprocessing.Imputer(strategy='mean')))
estimators.append(('scaler', preprocessing.StandardScaler()))
estimators.append(('kerasR', kRegressor))
pipeline = Pipeline(estimators)
param_grid = dict(kerasR__optimizer=['adam', 'rmsprop'])
grid = GridSearchCV(pipeline, param_grid, cv=5, scoring='neg_mean_squared_error')
```
Do you know this problem?
Thanks, Thomas
Thanks Thomas. I’ve not seen this issue.
I think we’re starting to push the poor Keras sklearn wrapper to the limit.
Maybe the next step is to build out a few functions to do manual grid searching across network configs.
Great resource!
Any thoughts on how to get the “history” objects out of grid search? It could be beneficial to plot the loss and accuracy to see when a model starts to flatten out.
Not sure off the cuff Jimi, perhaps repeat the run standalone for the top performing configuration.
Thanks for the post. Can we optimize the number of hidden layers as well on top of number of neurons in each layers?
Thanks
Yes, it just may be very time consuming depending on the size of the dataset and the number of layers/nodes involved.
Try it on some small datasets from the UCI ML Repo.
Thanks. Would you mind looking at below code?
```python
def create_model(neurons1=1, neurons2=1):
    # create model
    model = Sequential()
    model.add(Dense(neurons1, input_dim=8))
    model.add(Dense(neurons2))
    model.add(Dense(1, init='uniform', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

# define the grid search parameters
neurons1 = [1, 3, 5, 7]
neurons2 = [0, 1, 2]
param_grid = dict(neurons1=neurons1, neurons2=neurons2)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
grid_result = grid.fit(X, Y)
```
This code runs without error (I excluded certain X, Y parts for brevity), but when I run grid.fit(X, Y) it gives an AssertionError.
I’d appreciate it if you could show me where I went wrong.
Update: it worked when I deleted 0 from neurons2. Thanks.
Excellent, glad to hear it.
A Dense() with a value of 0 neurons might blow up. Try removing the 0 from your neurons2 array.
A good debug strategy is to cut code back to the minimum, make it work, then add complexity. Here, try searching a grid of 1 and 1 neurons, make it all work, then expand the grid you search.
Let me know how you go.
I kept getting error messages, so I tried big for loops that scan all possible combinations of layer counts, neuron counts, and other optimization settings within defined limits. It is very time-consuming code, but I could not figure out how to adjust the layer structure and the other optimization parameters in the same code using GridSearch. If you could provide code for that on your blog one day, it would be much appreciated. Thanks.
I’ll try to find the time.
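In the meantime, one way to enumerate variable-depth architectures with plain loops is sketched below. The helper generates every architecture up to a maximum depth from a set of candidate layer sizes; each resulting list could then be passed to a build function that adds one Dense layer per entry. The function name and grid values here are illustrative, not from the tutorial.

```python
from itertools import product

def layer_configs(max_layers, sizes):
    """Yield every architecture with 1..max_layers hidden layers,
    where each layer may take any of the candidate sizes."""
    for depth in range(1, max_layers + 1):
        for combo in product(sizes, repeat=depth):
            yield list(combo)

configs = list(layer_configs(max_layers=2, sizes=[8, 16]))
print(configs)  # -> [[8], [16], [8, 8], [8, 16], [16, 8], [16, 16]]
```

Each `configs` entry (e.g. `[16, 8]`) could then drive a hypothetical `create_model(layer_sizes)` that loops over the list calling `model.add(Dense(size))`, so layer count and layer sizes are searched together.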
Hi Jason,
Many thanks for this awesome tutorial !
I’m glad you found it useful Rajneesh.
Hi Jason,
Great tutorial! I’m running into a slight issue. I tried running this on my own variation of the code and got the following error:
TypeError: get_params() got an unexpected keyword argument 'deep'
I copied and pasted your code using the given data set and got the same error. The code is showing an error on the grid_result = grid.fit(X, Y) line. I looked through the other comments and didn’t see anyone with the same issue. Do you know where this could be coming from?
Thanks for your help!
same issue here,
great tutorial, life saver.
Hi Andy, sorry to hear that.
Is this happening with a specific example or with all of them?
Are you able to check your version of Python/sklearn/keras/tf/theano?
UPDATE:
I can confirm the first example still works fine with Python 2.7, sklearn 0.18.1, Keras 1.2.0 and TensorFlow 0.12.1.
The only differences are I am running Python 3.5 and Keras 1.2.1. The example I ran previously was the grid search for the number of neurons in a layer. But I just ran the first example and got the same error.
Do you think the issue is due to the next version of Python? If so, what should my next steps be?
Thanks for your help and quick response!
It’s a bug in Keras 1.2.1. You can either downgrade to 1.2.0 or get the code from their github (where they already fixed it).
Yes, I have a write up of the problem and available fixes here:
Thank you so much for your help!
Jason,
Can you use early_stopping to decide n_epoch?
Yes, that is a good method to find a generalized model.
Hi Jason,
Really great article. I am a big fan of your blog and your books. Can you please explain your following statement?
“A default cross-validation of 3 was used, but perhaps k=5 or k=10 would be more stable. Carefully choose your cross validation configuration to ensure your results are stable.”
I didn’t see anywhere cross-validation being used.
Hi Jayant,
Grid search uses k-fold cross-validation to evaluate the performance of each combination of parameters on unseen data.
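For readers unfamiliar with k-fold cross-validation, the index bookkeeping can be sketched in a few lines of plain Python (in practice sklearn's KFold does exactly this for you):

```python
def kfold_indices(n, k):
    """Split range(n) into k consecutive folds; yield (train, test) index lists."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(kfold_indices(10, 5))
print(folds[0])  # first fold: train on indices 2..9, test on [0, 1]
```

Grid search trains one model per fold per parameter combination and averages the k test scores, which is why `best_score_` is an estimate of performance on unseen data rather than training accuracy.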
Hi Jason,
thanks for this awesome tutorial !
I have two questions: 1. In model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']), accuracy is used to evaluate results. But GridSearchCV also has a scoring parameter; if I set scoring='f1', which one is used to evaluate the results of the grid search? 2. How can I set two evaluation metrics, e.g. 'accuracy' and 'f1', for evaluating the results of the grid search?
Hi Jing,
You can set the “scoring” argument for GridSearchCV with a string of the performance measure to use, or the name of your own scoring function. You can learn about this argument here:
You can see a full list of supported scoring measures here:
As far as I know you can only grid search using a single measure.
Thank you so much for your help!
I find that no matter what scoring parameter is used in GridSearchCV, metrics in model.compile must be ['accuracy'], otherwise the program gives: ValueError: The model is not configured to compute accuracy. You should pass metrics=["accuracy"] to the model.compile() method. So, if I set:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring='recall')
then grid_result.best_score_ = 0.72. My question is: is 0.72 accuracy or recall? Thank you!
Hi Jing,
When using GridSearchCV with Keras, I would suggest not specifying any metrics when compiling your Keras model.
I would suggest only setting the “scoring” argument on the GridSearchCV. I would expect the metric reported by GridSearchCV to be the one that you specified.
I hope that helps.
Great blog post. Love it. You are awesome Jason. I have one question about GridSearchCV. As far as I understand, the cross-validation already takes place in there; that’s why we do not need any k-fold anymore.
But with this technique we would have no validation set, correct? E.g. with the default value of 3, each round would use 2 folds for training and 1 fold for testing.
That means with k-fold as well as with GridSearchCV there is no requirement to create a validation set anymore?
Thanks
Hi Dan,
Yes, GridSearchCV performs cross validation and you must specify the number of folds. You can hold back a validation set to double check the parameters found by the search if you like. This is optional.
Thank you for the quick response Jason. Especially considering the huge amount of questions you get.
I’m here to help, if I can Dan.
What I’m missing in the tutorial is how to use the best params in the model with Keras. Do I pick the best parameters and call create_model again with those parameters, or can I call GridSearchCV’s predict function? (I will try it out for myself, but for completeness it would be good to have it in the tutorial as well.)
I see, but we don’t know the best parameters, we must search for them.
Hi, Jason. I am getting
```
/usr/local/lib/python2.7/dist-packages/keras/wrappers/scikit_learn.py in check_params(self=, params={'batch_size': 10, 'epochs': 10})
     80     legal_params += inspect.getargspec(fn)[0]
     81     legal_params = set(legal_params)
     82
     83     for params_name in params:
     84         if params_name not in legal_params:
---> 85             raise ValueError('{} is not a legal parameter'.format(params_name))
        params_name = 'epochs'
     86
     87     def get_params(self, _):
     88         """Gets parameters for this estimator.
     89

ValueError: epochs is not a legal parameter
```
It sounds like you need to upgrade to Keras v2.0 or higher.
Nice tutorial. I would like to optimize the number of hidden layers in the model. Can you please guide in this regard, thanks
Thanks Usman.
Consider exploring specific patterns, e.g. small-big-small, etc.
Do you know any way this could be possible using a network with multiple inputs?
Hi Jason, great to see posts like this – amazing job!
Just noticed: when you tune the optimisation algorithm, SGD performs at 34% accuracy. As no parameters are being passed to the SGD function, I’d assume it takes the default configuration, lr=0.01, momentum=0.0.
Later on, as you look for better configurations for SGD, the best result (68%) is found with {'learn_rate': 0.01, 'momentum': 0.0}.
It seems to me that these two experiments use exactly the same network configuration (including the same SGD parameters), yet their resulting accuracies differ significantly. Do you have any intuition as to why this may be happening?
Hi Daniel, yes great point.
Neural networks are stochastic and give different results when evaluated on the same data.
Ideally, each configuration would be evaluated using the average of multiple (30+) repeats.
This post might help:
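The repeated-evaluation idea can be sketched as follows. `evaluate_once` is a hypothetical stand-in for "fit the network and return test accuracy"; the random term mimics the run-to-run variance caused by random weight initialization.

```python
import random
import statistics

# Hypothetical stand-in for training and scoring one run of the network.
def evaluate_once(rng):
    return 0.70 + rng.gauss(0, 0.02)

def evaluate_repeated(n_repeats=30, seed=7):
    # Repeat the experiment and summarize with mean and standard deviation.
    rng = random.Random(seed)
    scores = [evaluate_once(rng) for _ in range(n_repeats)]
    return statistics.mean(scores), statistics.stdev(scores)

mean, std = evaluate_repeated()
print("accuracy: %.3f (+/- %.3f)" % (mean, std))
```

Reporting mean plus standard deviation over 30+ repeats makes a comparison between two configurations far more trustworthy than a single run of each.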
Hi Jason!
absolutely love your tutorial! But would you mind to give tutorial for how to tune the number of hidden layer?
Thanks
I have an example here:
Thank you so much Jason!
I’m glad it helped Pradanuari.
Hello Jason
I tried to use your idea on a similar problem but I am getting the error: AttributeError: 'NoneType' object has no attribute 'loss'.
It looks like the model does not define a loss function?
This is the error I get:
```
b\site-packages\keras-2.0.4-py3.5.egg\keras\wrappers\scikit_learn.py in fit(self=, x=memmap([[[ 0., 0., 0., ..., 0., 0., 0.], ..., 0., 0., ..., 0., 0., 0.]]], dtype=float32), y=array([[ 0., 0., 0., ..., 0., 0., 0.], ... 0.], [ 0., 0., 0., ..., 0., 1., 0.]]), **kwargs={})
    135         self.model = self.build_fn(
    136             **self.filter_sk_params(self.build_fn.__call__))
    137     else:
    138         self.model = self.build_fn(**self.filter_sk_params(self.build_fn))
    139
--> 140     loss_name = self.model.loss
        loss_name = undefined
        self.model.loss = undefined
    141     if hasattr(loss_name, '__name__'):
    142         loss_name = loss_name.__name__
    143     if loss_name == 'categorical_crossentropy' and len(y.shape) != 2:
    144         y = to_categorical(y)

AttributeError: 'NoneType' object has no attribute 'loss'
```
___________________________________________________________________________
Process finished with exit code 1
Regards
Ibrahim
Does the example in the blog post work on your system?
Ok, I think your code needs to be placed after
if __name__ == '__main__':
to work with multiprocessing…
But thanks for the post is great…
That was not needed on Linux or OS X when I tested it, but thanks for the tip.
Hello Jason!
I do the first step – tuning batch size and number of epochs – and get:
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
Best: 0.707031 using {'epochs': 100, 'batch_size': 40}
After that I do the same again and get:
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
Best: 0.688802 using {'epochs': 100, 'batch_size': 20}
And so on.
The problem is with grid_result.best_score_.
I expect that in the second step (for example, tuning the optimizer) I will get a better grid_result.best_score_ than in the first step (in the second step I use the grid_result.best_params_ from the first step). But that is not the case.
Tuning all hyperparameters at once takes a very long time.
How do I fix this?
Consider tuning different parameters, like network structure or number of input features.
Thanks a lot Jason!
Hello,
I’d like to have your opinion about a problem:
I have two loss function plots, with SGD and Adamax as optimizer with same learning rate.
Loss function of SGD looks like the red one, whereas Adamax’s looks like blue one.
I have better scores with Adamax on validation data. I’m confused about how to proceed, should I choose Adamax and play with learning rates a little more, or go on with SGD and somehow try to improve performance?
Thanks!
Explore both, but focus on the validation score of interest (e.g. accuracy, RMSE, etc.) over loss.
For example, you can get very low loss and get worse accuracy.
Thanks for your response! I experimented with different learning rates and found out a reasonable one, (good for both Adamax and SGD) and now I try to fix learning rate and optimizer and focus on other hyperparameters such as batch-size and number of neurons. Or would be better if I set those first?
Number of neurons will have a big effect along with learning rate.
Batch size will have a smaller effect and could be optimized last.
Thanks for this post!
One question – why not grid search over all the parameters together, rather than performing several grid searches and finding each parameter separately? Surely the results are not the same…
Great question,
In practice, the datasets are large and it can take a long time and require a lot of RAM.
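To make the cost concrete, the sketch below shows how quickly a joint grid explodes, and one cheap compromise: enumerate the full grid but evaluate only a random subset of combinations (random search). The parameter names and values here are arbitrary examples, not the article's grids.

```python
import random
from itertools import product

grid = {
    "batch_size": [10, 20, 40, 60, 80, 100],
    "epochs": [10, 50, 100],
    "optimizer": ["SGD", "Adam", "RMSprop"],
    "neurons": [1, 5, 10, 15, 20, 25, 30],
}

# Full Cartesian product: 6 * 3 * 3 * 7 = 378 models to train.
all_combos = [dict(zip(grid, values)) for values in product(*grid.values())]

# Random search: train only a fixed budget of randomly chosen combinations.
random.seed(0)
sample = random.sample(all_combos, 20)

print(len(all_combos), len(sample))
```

Each model may take minutes to hours to train, so 378 combinations (times k folds) is often infeasible, while a random budget of 20 frequently finds a near-best configuration.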
Hi Jason,
Excellent post!
It seems to me that if you use the entire training set during your cross-validation, then your cross-validation error is going to give you an optimistically biased estimate of your validation error. I think this is because when you train the final model on the entire dataset, the validation set you create to estimate test performance comes out of the training set.
My question is: assuming we have a lot of data, should we use perhaps only 50% of the training data for cross-validation for the hyperparameters, and then use the remaining 50% for fitting the final model (and a portion of that remaining 50% would be used for the validation set)? That way we wouldn’t be using the same data twice. I am assuming in this case that we would also have a separate test set.
Yes, it is a good idea to hold back a test set when tuning.
Thanks for your valuable post. I learned a lot from it.
When I wrote my code for grid search, I encountered a question:
I use fit_generator instead of fit in keras.
Is it possible to use grid search with fit_generator ?
I have some Merge layers in my deep learning model.
Hence, the input of the neural network is not a single matrix.
For example:
Suppose we have 1,000 samples:
Input = [Input1, Input2]
Input1 is a 1,000×3 matrix
Input2 is a 1,000×3×50×50 matrix (images)
When I use the fit in your post, there is a bug, because input1 and input2 don’t have the same dimensions. So I wonder whether fit_generator can work with grid search?
Thanks in advance!
Please ignore my previous reply.
I find an answer here:
Right now, the GridsearchCV using the scikit wrapper for network with multiple inputs is not available.
Hi Jason, thank you for your good tutorial on grid search with Keras. I followed your example with my own dataset and it ran. But when I used an autoencoder structure, instead of the sequential structure, to grid-search the parameters with my own data, it could not run. I don’t know the reason. Could you help me? Are there any differences between grid-searching a sequential model and grid-searching a Model-based (functional) structure?
The follows are my codes:
```python
from keras.models import Sequential
from keras.layers import Dense, Input
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
import numpy as np
from keras.optimizers import SGD, Adam, RMSprop, Adagrad
from keras.regularizers import l1, l2
from keras.models import Model
import pandas as pd
from keras.models import load_model

np.random.seed(2017)

def create_model(optimizer='rmsprop'):
    # encoder layers
    encoding_dim = 140
    input_img = Input(shape=(6,))
    encoded = Dense(300, activation='relu', W_regularizer=l1(0.01))(input_img)
    encoded = Dense(300, activation='relu', W_regularizer=l1(0.01))(encoded)
    encoded = Dense(300, activation='relu', W_regularizer=l1(0.01))(encoded)
    encoder_output = Dense(encoding_dim, activation='relu', W_regularizer=l1(0.01))(encoded)
    # decoder layers
    decoded = Dense(300, activation='relu', W_regularizer=l1(0.01))(encoder_output)
    decoded = Dense(300, activation='relu', W_regularizer=l1(0.01))(decoded)
    decoded = Dense(300, activation='relu', W_regularizer=l1(0.01))(decoded)
    decoded = Dense(6, activation='relu', W_regularizer=l1(0.01))(decoded)
    # construct the autoencoder model
    autoencoder = Model(input_img, decoded)
    # construct the encoder model for plotting
    encoder = Model(input_img, encoder_output)
    # Compile model
    autoencoder.compile(optimizer='RMSprop', loss='mean_squared_error', metrics=['accuracy'])
    return autoencoder
```
I’m surprised, I would not think the network architecture would make a difference.
Sorry, I have no good suggestions other than try to debug the cause of the fault.
The autoencoder.compile call is modified as follows:
```python
# Compile model
autoencoder.compile(optimizer=optimizer, loss='mean_squared_error', metrics=['accuracy'])
```
Can we do this for functional API as well ?
Perhaps, I have not done this.
Thanks for a great tutorial Jason, appreciated.
n_jobs=-1 didn’t work very well on my Windows 10 machine: it took a very long time and never finished. This seems to be (or at least was in 2015) a known problem under Windows, so I changed to n_jobs=1, which also allowed me to see throughput using verbose=10.
Thanks for the tip.
Jason —
Given all the parameters it is possible to adjust, is there any recommendation for which should be fixed first before exploring others, or can the best value for one parameter change when the others are changed?
Great question, see this paper:
Thanks Jason, I’ll check it out.
Hi and thank you for the resource.
Am I right in my understanding that this only works on one machine?
Any hints/pointers on how to run this on a cluster? I have found one potential avenue using Spark (no Keras support though).
Any comment at all? Information on the subject is scarce.
Yes, this example is for a single machine. Sorry, I do not have examples for running on a cluster.
Hi Jason,
I’m a little bit confused about the definition of the “score” or “accuracy”. How are they computed? I believe they are not simply comparing the results with the target, otherwise the most overfit model would come out best (e.g. the more neurons the better).
But on the other hand, those parameter combinations are just used to train the model, so what is the difference between me manually setting the parameters and judging whether my result is good (with the risk of overfitting), and the grid search producing an accuracy score to determine which one is best?
Best regards,
The grid search will provide an estimate of the skill of the model with a set of parameters.
Any one configuration in the grid search can be set and evaluated manually.
Neural networks are stochastic and will give different predictions/skill when trained on the same data.
Ideally, if you have the time/compute the grid search should use repeated k-fold cross validation to provide robust estimates of model skill. More here:
Does that help?
I’m new to NNs, so I'm a little bit puzzled. So say I have too many neurons, which leads to overfitting (good on the train set, bad on the validation or test set) — can grid search detect that via the score?
My guess is yes, because there is a validation set inside GridSearchCV. Is that correct?
A larger network can overfit.
The idea is to find a config that does well on both the train and validation sets. We require a robust test harness. With enough resources, I’d recommend repeated k-fold cross-validation within the grid search.
One more very useful tutorial, thanks Jason.
One question about GridSearch in my case: I have tried to tune the parameters of my neural network for regression (18 inputs, 800 samples), but the time GridSearch takes is extremely long — practically forever — even though I have limited the number of combinations. I saw in your code:
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
Normally n_jobs=1; can I increase that number to improve performance?
We often cannot grid search with neural nets because it takes so long!
Consider running on a large computer in the cloud over the weekend.
Hi Jason,
Any idea how to use GridSearchCV if you don’t want cross validation?
GridSearchCV performs k-fold cross-validation by default. That is what the “CV” in the name stands for.
```
On 02/06/2012 06:50 AM, Peter Krempa wrote:
> This patch changes behavior of virPidFileRead to enable passing NULL as
> path to the binary the pid file should be checked against to skip this
> check. This enables using this function for reading files that have same
> semantics as pid files, but belong to unknown processes.
> ---
>  src/util/virpidfile.c |   21 +++++++++++++--------
>  1 files changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/src/util/virpidfile.c b/src/util/virpidfile.c
> index 1fd6318..f1f721f 100644
> --- a/src/util/virpidfile.c
> +++ b/src/util/virpidfile.c
> @@ -184,6 +184,9 @@ int virPidFileRead(const char *dir,
>   * resolves to @binpath. This adds protection against
>   * recycling of previously reaped pids.
>   *
> + * If @binpath is NULL the check for the executable path
> + * is skipped.
> + *
>   * Returns -errno upon error, or zero on successful
>   * reading of the pidfile. If the PID was not still
>   * alive, zero will be returned, but @pid will be
> @@ -209,16 +212,18 @@ int virPidFileReadPathIfAlive(const char *path,
>      }
>  #endif
>
> -    if (virAsprintf(&procpath, "/proc/%d/exe", *pid) < 0) {
> -        *pid = -1;
> -        return -1;
> -    }
> +    if (binpath) {
> +        if (virAsprintf(&procpath, "/proc/%d/exe", *pid) < 0) {

This will need to be rebased if my pid_t cleanup patches go in first:

My ACK from v3 still stands.

Also, I'm torn on whether this still qualifies for 0.9.10 (it's a useful
feature fix, but due to slow reviews, we've let it slip pretty far past
the freeze date). Thoughts?

--
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library
```
Buttons are the most commonly used and simplest controls in iOS applications, and are often used to respond to user actions. Generally, we use the UIButton class to implement a button. This section focuses on adding buttons, beautifying buttons, and implementing button responses.
1. Use Code To Add Button.
To add a button to the main view in source code, first instantiate a button object using the UIButton class, then set its location and size, and finally add the button object to the main view using the addSubview() method.
The following code adds a button object with an orange background color to the main view.
```swift
import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Create UIButton object.
        let button = UIButton(frame: CGRect(x: 143, y: 241, width: 88, height: 30))
        button.backgroundColor = UIColor.orange
        // Add UIButton object.
        self.view.addSubview(button)
    }
    ......
}
```
When you run the program, you will see the result shown in the picture below.
2. Beautify The Button.
Beautifying a button simply means setting its properties. There are two ways to set button properties: one is to use the property inspector in Interface Builder, the other is to set them in source code. The following sections focus on how to set button properties in code.
2.1 Set Button Appearance.
You can set a button's title, image, etc. to change its appearance. Below are some commonly used methods for setting button appearance.
- setBackgroundImage : Sets the background image of the button.
- setImage : Set button image.
- setTitle : Set button title.
- setTitleColor : Set button title color.
- setTitleShadowColor : Set button title shadow color.
The following code adds a button to the main view. The button's title is 'Click Me' and the title color is black.
```swift
import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Create button.
        let button = UIButton(frame: CGRect(x: 135, y: 241, width: 97, height: 30))
        // Set button title.
        button.setTitle("Click Me", for: UIControlState.normal)
        // Set button title color.
        button.setTitleColor(UIColor.black, for: UIControlState.normal)
        // Add button to the main view.
        self.view.addSubview(button)
    }
    ......
}
```
2.2 Set Button State.
When setting the title and color of a button, the state of the button also needs to be specified. The button state indicates what the title and title color of the button will look like in a certain state. For example, UIControlState.normal represents the normal state of a button. A view like a button that can accept user input is also called a control. These controls all have their own states. Below is the control state list.
- normal : normal state.
- highlighted : highlight state.
- disabled : disable state and do not accept any events.
- selected : selected state.
- application : application flags.
2.3 Set Button Type.
There are various types of buttons. For example, in the address book, the button to add a new contact is a plus sign, while the button to see the details of a call is an exclamation point, etc. All these button types can be created by passing a UIButtonType when instantiating the button object. Below is the UIButtonType list.
- system : System default style button.
- custom : Custom style button.
- detailDisclosure : Blue exclamation point button, mainly used for detail description.
- infoLight : Bright color exclamation point.
- infoDark : Dark exclamation point.
- contactAdd : The plus button, which is usually used to add a contact.
The following code adds two different styles of buttons to the main view.
```swift
import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Add first button with contactAdd type.
        let button1 = UIButton(type: UIButtonType.contactAdd)
        button1.center = CGPoint(x: 190, y: 250)
        self.view.addSubview(button1)
        // Add second button with detailDisclosure type.
        let button2 = UIButton(type: UIButtonType.detailDisclosure)
        button2.center = CGPoint(x: 190, y: 450)
        self.view.addSubview(button2)
    }
    ......
}
```
When you run the program, you will see the effect shown below.
3. Add Button Response.
There are two ways to add button response code: one is through Interface Builder, the other is through source code.
3.1 Use interface builder to add button response code.
When adding a button using Interface Builder, you can use drag and drop to wire up the button response code, which is the simplest way to implement a response.
The following steps implement an effect where tapping the button changes the main view's background color.
- Open the Main.storyboard file, drag the button control from the view library to the main view, and set the title to “Tap me, Change View Color”.
- Click the Show the Assistant Editor button to adjust Xcode to the layout shown in the picture below.
- Hold down the Ctrl key and drag the button object in the interface, and a blue line appears, dragging the blue line to the white space of the ViewController.swift file.
- When you release the mouse, a dialog box for declaring the connection will pop up.
- Set the Connection option to Action to indicate that an action is being associated. Set the Name to tapButton to name the associated action; you can specify any name here.
- Click the Connect button and you’ll see the code in the ViewController.swift file. Now a method called tapButton() will be triggered when the user taps the button.
- Note: the method above binds the action declaration and the UI object association in one step; alternatively, the action can be declared before it is associated. To declare an action method, use the IBAction keyword. This keyword tells the storyboard that the method is an action method that can be triggered by a control.
- After declaring an action, a small hollow circle appears in front of the code, indicating that the action has not been associated.
- After the action is declared, the association can be made. First, use the adjustment tool in the window to bring up the assistant editor. Then hold down the Ctrl key and drag from the button object in the interface; a blue line appears. Connect the blue line to the action in the ViewController.swift file.
- Finally, when the mouse is released, the button object is associated with the action method. At this point, the hollow circle in front of the action becomes a solid circle, which indicates that the action has been associated.
- Open the ViewController.swift file and write the code that will implement the button response.
import UIKit

class ViewController: UIViewController {

    var isYellow: Bool = false

    @IBAction func tapButton(_ sender: AnyObject) {
        if(isYellow) {
            self.view.backgroundColor = UIColor.white
            isYellow = false
        } else {
            self.view.backgroundColor = UIColor.yellow
            isYellow = true
        }
    }
}
3.2 Use code to add button response.
To add a response method to a button that is created in source code, you need to use the addTarget(_:action:for:) method. The syntax is as follows.
func addTarget(_ target: AnyObject?, action: Selector, for controlEvents: UIControlEvents)
- target : Represents the target object. It is the sender of the action message.
- action : Represents a selector used to identify action method. It must not be empty.
- controlEvents : Represents control events that trigger the action method.
There are 19 control events in iOS.
- touchDown : Single touch press event: when a user touches the screen, or when a new finger drops.
- touchDownRepeat : Multi-touch press event, the touch count is greater than 1: when the user presses the second, third, or fourth finger.
- touchDragInside : When a touch is dragged inside the control window.
- touchDragOutside : When a touch is dragged outside the control window.
- touchDragEnter : When a touch is dragged from the control window outside to inside.
- touchDragExit : When a touch is dragged from the inside of the control window to the outside.
- touchUpInside : All touch lift events within the control.
- touchUpOutside : All touch lift events outside the control.
- touchCancel : All touch cancellation events, such as when a touch is cancelled due to placing too many fingers on the screen, or is interrupted by a lock or a phone call.
- valueChanged : Send notifications when the value of the control changes. Used for controls like sliders, segmented controls etc. Developers can configure when the slider control sends notifications.
- editingDidBegin : Send notifications when editing begins in a text control.
- editingChanged : Send notifications when text in a text control is changed.
- editingDidEnd : Send notifications when editing in a text control is finished.
- editingDidEndOnExit : A notification is sent when editing in a text control ends by pressing the enter key (or equivalent action).
- allTouchEvents : Notifies all touch events.
- allEditingEvents : Notifies all events about text editing.
- applicationReserved : Application reserved events.
- systemReserved : All the system reserved events.
- allEvents : Contains all events.
The code below will change the main view background color when the button is tapped.
import UIKit

class ViewController: UIViewController {

    var isCyan: Bool = false

    override func viewDidLoad() {
        super.viewDidLoad()
        let button = UIButton(frame: CGRect(x: 90, y: 545, width: 225, height: 30))
        button.setTitle("Tap me,Change View Color", for: UIControlState())
        button.setTitleColor(UIColor.black, for: UIControlState())
        self.view.addSubview(button)
        button.addTarget(self, action: #selector(ViewController.tapbutton), for: UIControlEvents.touchUpInside)
    }

    @objc func tapbutton() {
        if(isCyan) {
            self.view.backgroundColor = UIColor.white
            isCyan = false
        } else {
            self.view.backgroundColor = UIColor.cyan
            isCyan = true
        }
    }
}
Dear RoR Community
My new RoR experience went really smoothly for some time, until I got stuck
with a seemingly easy problem, and the more I have tried to read about it
(PickAxe, Agile WD, forums, etc.) the more confused I got!
What I have done so far as a newbie is to use the scaffold and then
build my own ideas around it. What I am trying to do now is
allow/disallow an action in the input form (in this case ‘save’)
depending on whether the ‘status’ == WIP or not. However with the code
below I keep getting error messages like ‘wrong number of arguments (0
for 1)’ or ‘Template is missing’.
My 2 questions are:
- Can anyone see what is wrong with my code?
- Can anyone please give me some info or point me into the direction of
where I can find info about instance variables, initialization for
newbies like me?
Thanks a lot,
Alex
class AdminController < ApplicationController
  ...
  def wipinput
    @job = Job.find(params[:id])
  end

  def initialize(status)
    @status = status
  end

  def wipupdate
    if params[:save] and @status == 'WIP'
      @job = Job.find(params[:id])
      if @job.update_attributes(params[:job])
        @job.update_attribute(:status, "WIP")
        flash[:notice] = 'Job was successfully updated.'
        redirect_to :action => 'wip'
      else
        render :action => 'wipinput'
      end
  ...
Symptom
A quality expert or developer wants to analyze development objects within a prefix namespace (e.g. /MY_NAMESPACE/) with the ABAP Test Cockpit (ATC). It is not possible to register a namespace for the ATC check, because the list in the report SATC_AC_INIT_NAMESPACE_REG is empty. When the "Register" (Create and transport registration) button is pressed, the warning message "Select at least one entry without registration" is shown.
Environment
- SAP NetWeaver 7.02 or higher
- ABAP Test Cockpit (ATC)
- Code Inspector (SCI)
Product
Keywords
Namespace, Namensraum, Rolle Produzent, ABAP Test Cockpit, Analyzing Objects Using Arbitrary Prefix Namespaces, KBA, BC-DWB-TOO-ATF, ABAP Test Frameworks, BC-CTS-ORG, Workbench/Customizing Organizer, BC-ABA-LA-EPC, Extended Program Check (SLIN), BC-DWB-AIE-QTT, Quality and Test Tools: ABAP Unit Test, ABAP Test Cockpit, How To
Incomplete data sets, whether they are missing individual values or full rows and columns, are a common problem in data analysis. Luckily for us, Pandas has lots of tools to help us make these data sets easier to handle.
In this article, we will run through 2 of Pandas’ main methods for bundling dataFrames together – concatenating and merging.
Let’s set ourselves up with three one-row dataFrames, with stats from our recent matches.
import numpy as np
import pandas as pd

match1 = pd.DataFrame({'Opponent':['Selche FC'],
                       'GoalsFor':[1],
                       'GoalsAgainst':[1],
                       'Attendance':[53225]})

match2 = pd.DataFrame({'Opponent':['Sudaton FC'],
                       'GoalsFor':[3],
                       'GoalsAgainst':[0],
                       'Attendance':[53256]})

match3 = pd.DataFrame({'Opponent':['Ouestjambon United'],
                       'GoalsFor':[4],
                       'GoalsAgainst':[1],
                       'Attendance':[53225]})

match3
Concatenation
Our simplest method of joining data is to simply stick one on the end of the other – as the concatenate method allows us to do. ‘pd.concat()’ with a list of our data frames will do the trick:
AllMatches = pd.concat([match1, match2, match3])
AllMatches
Merging
‘pd.merge()’ will allow us to stick data together left-to-right. Let’s first create more details for our matches above that we can then merge:
match1scorers = pd.DataFrame({'First':['Sally'],
                              'Last':['Billy'],
                              'Opponent':['Selche FC']})

match2scorers = pd.DataFrame({'First':['Sally'],
                              'Last':['Pip'],
                              'Opponent':['Sudaton FC']})

match3scorers = pd.DataFrame({'First':['Sally'],
                              'Last':['Sally'],
                              'Opponent':['Ouestjambon United']})

AllScorers = pd.concat([match1scorers, match2scorers, match3scorers])
AllScorers
pd.merge(AllMatches, AllScorers, how='inner', on='Opponent')
Let’s break down ‘pd.merge()’. Anyone with any SQL experience will have a head start here, as this essentially mimics merging in SQL.
In this example, ‘pd.merge()’ takes four arguments. The first two are the dataFrames that we want to merge.
Next, we have the ‘how’ argument, which dictates the type of join that we need. ‘Inner’ is the simplest, and we use that in this example but you should read up on the other types.
Finally is ‘on’, which is the column that we build the dataFrame around. Pandas looks for this value in all merged dataFrames, then adds the other values from the matching rows.
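To make the SQL analogy concrete, here is a rough pure-Python sketch of what an inner merge on a key column does (the function and data are illustrative only, and it ignores everything pandas does for speed):

```python
def inner_merge(left, right, on):
    """Keep only row pairs whose key matches, gluing their columns together."""
    merged = []
    for lrow in left:          # each row is a plain dict
        for rrow in right:
            if lrow[on] == rrow[on]:
                combined = dict(lrow)
                combined.update({k: v for k, v in rrow.items() if k != on})
                merged.append(combined)
    return merged

matches = [{"Opponent": "Selche FC", "GoalsFor": 1}]
scorers = [{"Opponent": "Selche FC", "Last": "Billy"}]
print(inner_merge(matches, scorers, on="Opponent"))
# [{'Opponent': 'Selche FC', 'GoalsFor': 1, 'Last': 'Billy'}]
```

Rows with no partner on the other side simply disappear, which is exactly why an inner join can shrink your data.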
Summary
With merge and concatenate, we have two examples of Pandas tools used to join datasets.
Concatenation allows us to stitch together two datasets with the same columns. Useful if we are adding to an already-existing dataset or glueing together two iterations of data collection. It is much quicker than opening Excel and pasting the data manually!
We have also seen an example of merge, which connects two datasets across a column that is common between the two dataFrames.
These examples are relatively simple and only serve as an introduction to joining data. For more complex cases, check out resources on pandas.join() and more in-depth examples of pandas.merge() | http://fcpython.com/data-analysis/joining-data | CC-MAIN-2018-51 | refinedweb | 457 | 58.89 |
help! Excuse my English. Since this morning I have that horrible error on several scripts I did not touch, such as CharacterMotor or FpsInputController:
namespace UnityEngine could not be found may be you forgot to add an assembly reference.
but in the editor I have no errors in the console, just in MonoDevelop...
I know that this question was already asked, but I am a beginner and none of the existing answers was a good fix for me, and I did not understand many of them.
If someone can help!
Why would you compile in MonoDevelop anyway. I don't understand...
Ok, i am super beginer, and i need to understand, you mean that i should not use the two build buttons in monodevelop? is that what you meant?
Yes. Unity compiles all scripts on its own. You just need to save your scripts, switch to Unity and wait a moment.
Rename all public #defines to have a DRIZZLED_ prefix
Registered by Monty Taylor on 2009-12-24
Defines can't be namespaced, but they still get sucked into people's code. We want to play nice, so it would be very bad for any header in drizzled/ to #define something which isn't prefixed by DRIZZLE_. The exception to this rule is defines that exist only because some systems don't provide them. Any of these should be protected by an #ifdef, and should probably have a comment indicating why we can't just rely on the OS to define it for us.
Blueprint information
- Status: Not started
- Approver: None
- Priority: Low
- Drafter: Monty Taylor
- Direction: Needs approval
- Assignee: None
- Definition: New
- Implementation: Unknown
- Started by:
- Completed by:
).
- djhopkins2 last edited by
@ck
Thanks for posting that info, djhopkins2, it's hard to come across!
I have found that this function keeps the board from shutting off after a few seconds, but I only started using an hour ago; I don't know if it has any adverse effects on the hardware yet:
void disable_shutoff() {
    // writeByte's return value is discarded; the function is declared void.
    M5.I2C.writeByte(0x75, 0x02, 0x00);
}
Also, here are the corrected voltage and current functions alluded to above; I don't think the voltage can go negative, but the current certainly does, and displays meaningless values without this correction:
int16_t readBatV(uint8_t Reg) {
    uint8_t dataV[2] = {0, 0};
    M5.I2C.readBytes(0x75, Reg, 2, dataV);
    if (dataV[1] & 0x20)
        dataV[1] |= 0xC0; // 14 bit 2's complement to 16 bit
    return (dataV[1] << 8) | dataV[0];
}

int16_t readBatI(uint8_t Reg) {
    uint8_t dataI[2] = {0, 0};
    M5.I2C.readBytes(0x75, Reg, 2, dataI);
    if (dataI[1] & 0x20)
        dataI[1] |= 0xC0; // 14 bit 2's complement to 16 bit
    return (dataI[1] << 8) | dataI[0];
}
File Lib/unittest/loader.py (right):
Lib/unittest/loader.py:231: raise TypeError('Can not use builtin modules as
dotted module names') from None
Some of these lines are too long.
Lib/unittest/loader.py:312:
There's some trailing space that should be removed, here and elsewhere.
I haven't looked at what this patch actually does, so I don't have a feedback
there. However, there are a couple things that needs to be addressed relative
to using the loader's '_path' attribute (which you shouldn't).
File Lib/unittest/loader.py (right):
Lib/unittest/loader.py:223: if hasattr(the_module.__loader__, '_path'):
Using _path isn't okay. Once PEP 451 lands in the next couple days you should
be using the module's spec:
spec = the_module.__spec__
if spec.loader is None:
    if spec.submodule_search_locations is not None:
        is_namespace = True
        for path in the_module.__path__:
            ...
Lib/unittest/loader.py:225: for path in the_module.__loader__._path:
Regardless of using the spec, you should not be using the loader's _path here.
You should be using the_module.__path__.
Thanks for doing that, especially now that PEP 451 has landed. :)
File Lib/unittest/loader.py (right):
Lib/unittest/loader.py:223: spec = the_module.__spec__
There is no guarantee that the_module has a __spec__ attribute (if the module
replaced itself in sys.modules). So you might want to handle AttributeError
here appropriately.
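A hedged sketch of the defensive lookup being suggested here (the helper name is mine, not the patch's):

```python
import types

def module_spec(the_module):
    # Tolerate modules that replaced themselves in sys.modules and
    # therefore may not carry a __spec__ attribute at all.
    return getattr(the_module, '__spec__', None)

class NotReallyAModule:   # stands in for a module missing __spec__
    pass

print(module_spec(NotReallyAModule()))   # None
print(module_spec(types) is not None)    # True: a normally imported module
```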
Lib/unittest/loader.py:238: elif spec.name in sys.builtin_module_names:
This should probably stay the_module.__name__, since there is no guarantee that
the module's published name will actually match the name in the spec. (In
practice it will match though.) | https://bugs.python.org/review/17457/ | CC-MAIN-2019-47 | refinedweb | 274 | 62.64 |
So now that you have a little idea what Android is all about, we will create our very first Android Project.
Launch Eclipse IDE. Navigate to File->New->Android Application Project. The “New Android Application” dialog comes to the front.
Depending on the version of Eclipse and Android SDK you are using, some options may or may not be present and some might be distributed over a number of dialogs.
- Project Name is the name of the project you want your application to be in.
- Application Name is the name of the application. This is the name by which your application will be installed in users’ devices.
- Package Name is the package in which your application resides. It often looks something like “com.example.yourappname” and automatically appears once you put in the Application Name. You can choose to edit it or leave it be. Usually developers replace “example” with their name.
- Minimum Required SDK or Min SDK Version shows the lowest Android version that your application supports. For instance if you set this to API 9 (Gingerbread), devices with API 1-8 will not be able to install your application. It is generally advisable to use as low an API as possible. But then there arises a problem that you will not be able to include features or elements that have been introduced in the later versions. You need to analyze this trade-off and choose this accordingly.
- Target SDK or Build Target is the version of Android that your applications targets.
- Compile With should generally be the same as Target SDK.
- Theme shows the basic theme that you want your application to have. Choose whichever suits your needs.
- Create Activity is to be checked and the name of an activity to be provided. Usually, this is the activity that appears when your applications is launched and is named Main or MainActivity.
- If you have the latest versions of Android SDK you will be taken to a number of dialogs from here where you can select the icon of your application, navigation type etc. Since it is our first application, we will let them be at their default values.
- When you are done Click Finish.
Now to the left of your coding area, you can see a "Package Explorer" tab that lists all packages your workspace contains. Your package should be visible in this area.
If you can see a small cross in the icon of you project, don’t worry. Android needs a little time to set itself up, after which the cross vanishes. In fact all packages will have this cross every time you launch Eclipse.
UNDERSTANDING THE SKELETON APPLICATION.
You have just created an Android Application Project. In order to assist you, Android creates a "Hello World" skeleton application on its own. Navigate to <yourprojectname> -> src -> <com.yourname.yourappname> -> Main.java. In newer SDKs Main.java is named MainActivity.java. Once this opens up, take a good look at the source code. If you have prior Java experience, you can see some familiar things. You can see the import packages, the class name and the parent class name from which this class is inherited. You can also see some functions/methods in the source code. Below is an explanation of all the elements of the source code.
- import android.app.Activity : This imports the Activity class. The android.app package contains all the classes that one can possibly use in an Activity.
- public class MainActivity extends Activity : Each activity in your application will be in the form of a Class. This class inherits from the Class Activity. This class/activity hence should override the required functions of the original class.
- onCreate() : This is the function that is called when an activity is first created. This takes an argument Bundle savedInstanceState. For the time being, it will be sufficient for you to know that the state of an activity is saved in a Bundle. We will get into further details later.
- super.onCreate() : This calls the onCreate() method of the super class i.e. the Activity class with the same arguments.
- setContentView() : This assigns a layout to the Activity. The argument to this function is R.layout.activity_main. We’ll come to it in a minute.
So, when the activity is first created a couple of things are done. The super class method is called and a layout is assigned to the activity. There are many other functions such as onPause(), onStart(), onResume() etc. in order to monitor the life-cycle of an activity. We’ll discuss this in detail at a later time.
In order to understand what a layout is navigate to yourprojectname -> res -> layout. You can now see the “activity_main.xml” file. In some systems it might be named “main.xml”. This is the layout of your activity. Open it and have a look at the source code.
- You can see that there are two tabs at the bottom – Graphical Layout and activity_main.xml
- The Graphical Layout shows what your layout would look like in the device/emulator.
- activity_main.xml shows the xml code of the layout.
Depending on the version of SDK some of the below components may or may not be present in your xml code.
- LinearLayout : It tells you that all the components in your layout will be laid down in a linear fashion i.e. one below the other.
- RelativeLayout : It tells you that the components in your layout will be laid down in a relative fashion i.e. you can position each component relative to other components or to the parent's edges.
- TextView : This is a rather widely used component in applications. It is used to create static text in an application.
You can also see some properties inside the tags in the xml code. Here’s a brief description of them
- layout_width/height determines the width and height respectively of the particular element. match_parent/fill_parent means that it covers the entire width/height of it’s parent layout i.e. the layout it is inside of. wrap_content means that it only occupies the space that it requires.
- layout_marginTop/Bottom/Left/Right determines the margin between two components or between a component and the borders of the screen
- orientation determines the orientation in which elements will be added within this layout. If the orientation is horizontal elements will be added side-by-side. It is mainly a feature of Linear Layout.
- text is present in the textview. It determines the text that the textview is to show on the screen.
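To see how these attributes fit together, here is a small hypothetical layout (all values are illustrative) that stacks two TextViews vertically inside a LinearLayout:

```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="10dp"
        android:text="First line" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Second line" />

</LinearLayout>
```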
So, now that you are familiar with the skeleton application that Android provided you with, it is time to see how it will look like in a device. For this we will use an AVD that we have previously created. If you haven’t created an AVD create one now.
Click the “Run” button which will pop up a dialog box in which you should select Android Application. This dialog appears only the first time an activity of an application is run. If the emulator was already running when you ran the application it installs on your emulator and if not, the most suitable emulator from your created AVDs is selected and launched. Once the application is installed it runs automatically. You can see the “Hello World, Main!” message on the screen.
Although this is a basic way to get started with an application, I’m gonna go ahead and tell you a couple of things that might interest you. Navigate to <yourprojectname> -> gen -> R.java.
Caution : Take care not to edit the contents of R.java file as it is an Auto-Generated file by the Android system.
Let’s look at how things work
- With every resource that you add to your project (resources reside in <yourprojectname> -> res) Android is going to add an entry to the R.java file.
- drawable/layout/string are all resources that you can see in <yourprojectname> -> res. You can now understand why the argument in the setContentView() is R.layout.activity_main.
- In order to refer to each resource, Android assigns a numerical value that is represented in the hexadecimal form.
Another interesting thing is the res folder. It contains all the resources that your application might need.
- drawable-hdpi/ldpi/mdpi/xhdpi/xxhdpi : These contain the pictures or icons that you have in your application. In order to put an icon in your application, navigate to <eclipseworkspacelocation> -> <yourprojectname> -> res -> drawable-hdpi, paste the icon in it and from your code refer to it as R.drawable.<youriconname>
- You can choose to have the same icon in different resolutions for devices that have hdpi/ldpi/mdpi/xdhpi/xxhdpi screens.
- You can also see values folder in the “res” folder. It contains dimens.xml, strings.xml, styles.xml. Click on each one and have a look at the source code.
Now, that you have an idea how to create an application, I will discuss about the elements of the layout and multiple screen support in my later posts. | http://www.codemarvels.com/2013/07/creating-your-first-android-application/ | CC-MAIN-2020-34 | refinedweb | 1,500 | 67.04 |
Hi,
New to community.
I couldn't run the scripts marked as solutions for TechCorner Challenge #10 (reading messages from a queue).
Could you suggest any prerequisites for running these scripts? (Any JARs, and where to put them?)
Also, what should I give as host and port number?
def host = "tcp://x.x.x.x:61616"
Many Thanks
The scripts of mine that were marked as an answer require the ActiveMQ JAR to be copied and placed into the LIB folder within your ReadyAPI installation.
The line def host = "tcp://x.x.x.x:61616" needs to point at the ActiveMQ server you wish to run the scripts against. These scripts require an actively running ActiveMQ server to communicate with. You can get the ActiveMQ client JAR from the Maven Repository
Thank you for the clarification Matthew, this is much appreciated!
@sprasadboga did this help you run the script?
Many Thanks !
More than welcome, and I am always happy to help! I've just been tied up a lot so I missed the message you sent at first. Just FYI, that's why it's better to make a post instead of private messaging someone. 😁 | https://community.smartbear.com/t5/API-Functional-Security-Testing/Fail-to-Connect-to-JMS-using-Groovy-script/m-p/207242/highlight/true | CC-MAIN-2021-21 | refinedweb | 204 | 77.03 |
0
I'm having difficulties understanding and printing the max value of a size_t type.
Apparantly the size_t is typedefed to a unsigned long int on my system. Is this safe to assume its the same on all c++ compilers?
The 'sizeof' function returns the number of bytes(size char), used to represent the given datatype.
Why does 'printf("size of size_t: %d\n",sizeof(size_t ));' make a warning?
size_t is an unsigned long int, so apparently it uses 8 bytes;
that's 8*8 bits, that's 64 bits.
Why can't I do a 63-bit left shift?
It seems that the max value of a size_t on my 64-bit Ubuntu system is
18446744073709551615
If I had one hell of a machine, would this mean that I should be able to allocate an array with this extremely big dimension?
Thanks in advance
#include <iostream>
#include <limits>

int main(){
    std::cout << (size_t)-1 << std::endl;
    std::cout << std::numeric_limits<std::size_t>::max() << std::endl;
    printf("max using printf:%lu\n", std::numeric_limits<std::size_t>::max());
    printf("size of size_t: %lu\n", sizeof(size_t));
    printf("size of size_t: %d\n", sizeof(size_t));
    printf("size of long int: %d\n", sizeof(long int));
    printf("size of unsingned long int: %d\n", sizeof(unsigned long int));
    unsigned long int tmp = 1<<63;
    std::cout << tmp << std::endl;
    return 0;
}
-*- mode: compilation; default-directory: "~/" -*-
Compilation started at Sun Mar 29 00:24:02

g++ test.cpp -Wall
test.cpp: In function ‘int main()’:
test.cpp:10: warning: format ‘%d’ expects type ‘int’, but argument 2 has type ‘long unsigned int’
test.cpp:11: warning: format ‘%d’ expects type ‘int’, but argument 2 has type ‘long unsigned int’
test.cpp:12: warning: format ‘%d’ expects type ‘int’, but argument 2 has type ‘long unsigned int’
test.cpp:14: warning: left shift count >= width of type

Compilation finished at Sun Mar 29 00:24:02
./a.out
18446744073709551615
18446744073709551615
max using printf:18446744073709551615
size of size_t: 8
size of size_t: 8
size of long int: 8
size of unsingned long int: 8
0
Thread: PHP vs Python and Ruby
PHP vs Python and Ruby
Hello,
Recently I hear more and more that Ruby or Python are better languages than PHP.
I'm interested: is this correct? If yes, please give at least one example,
that is:
"Python (or Ruby) can do "something", but PHP can't, so PHP is worse!"
Please tell me this "something".
It is difficult to define what readability and usability means to programming
language users. PHP follows a very classical approach, is extensively documented
and will probably be the most familiar to former C-programmers. Python with
its strict indentation enforcements and the small set of keywords will probably
be the best choice for programming beginners. Finally Ruby will probably be
attractive for Smalltalk-enthusiasts and experienced programmers, that look for
elegant and powerful programming expressiveness.
While Python seems to have the most readable syntax of the three languages
(because of the enforced program structure), Ruby seems to be the most usable
one (because of its principle of least surprise). Of course PHP is a readable
language too, because most programmers are familiar with C-based syntax.
Ruby is a language that has only one major web framework in the market: Ruby
on Rails. It makes use of CGI as gateway but also provides its own web server,
which is recommended for development and testing only. I will skip a hello world
example here and continue with listing 1.6, the check login function in Ruby.
Listing 1.6. Checking login data in a Ruby
r e q u i r e ' d i g e s t /md5 '
def che c k l o g in ( username , password )
hash = Dige s t : :MD5. hexdi g e s t ( "#fpasswordg" )
username = db . e s c a p e s t r i n g ( "#fusername g" )
r e s = db . query ( "
s e l e c t u s e r i d from us e r s
where username = ' " + username +" '
and password = ' " + password +" ' ; " )
row = r e s . f e t ch r ow
unless row . ni l ?
return row
Setting Up Your PYTHONPATHDescription: What to do to find other python modules
Tutorial Level: BEGINNER
Next Tutorial: Using numpy with rospy
Contents
Commonly in python, when you use another python module, you use:
import foo
in your python code and it is up to the user of your code to make sure module "foo" is in his PYTHONPATH. In ROS, it is important that people can install multiple ROS libraries side-by-side on the same computer. That means there may be two different modules "foo", and the right one needs to be in the PYTHONPATH.
The module "roslib" allows this situation, it will search for a ROS package in your current ROS_PACKAGE_PATH, and make sure that the python modules can be imported.
You have already written a manifest.xml file that declares your dependencies, so ROS will use this same manifest file to help you set your PYTHONPATH (we try to use the DRY -- Don't Repeat Yourself -- principle as much as possible).
This is all you need to do:
Make sure that your dependencies are properly listed in your Manifest
Add the following line to the top of your Python code:
import roslib; roslib.load_manifest('your_package_name')
import foo
So if you declared the package of module foo in your manifest.xml, the line import foo will work after the roslib.load_manifest call.
NOTE: This line doesn't need to go at the top of every Python file; they just need to go at the top of any file that is a 'main' entry point.
roslib will load up your manifest file and set sys.path to point to the appropriate directory in every package you depend on. These two lines work because roslib is required to be on the PYTHONPATH of every ROS user.
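Conceptually, load_manifest just walks your dependencies and puts each package's Python source directory on sys.path before your imports run; a toy sketch (the paths are hypothetical):

```python
import sys

def load_manifest_sketch(dependency_src_dirs):
    """Rough illustration of the effect of roslib.load_manifest after it
    has parsed manifest.xml: expose each dependency's modules for import."""
    for src_dir in dependency_src_dirs:
        if src_dir not in sys.path:
            sys.path.insert(0, src_dir)

load_manifest_sketch(["/opt/ros/beginner_tutorials/src"])
print("/opt/ros/beginner_tutorials/src" in sys.path)   # True
```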
Example
For this example we create a new package and try to reuse the message Num that we defined in our beginner_tutorials package:
For rosbuild we create a new package in our sandbox:
$ roscd
$ cd sandbox
$ roscreate-pkg listener_extend rospy beginner_tutorials
This should already put the dependency to beginner_tutorials in our manifest.xml file:
$ cat listener_extend/manifest.xml
...
<depend package="rospy"/>
<depend package="beginner_tutorials"/>
...
Next we create a node using the message defined in beginner_tutorials. Create a directory nodes first:
$ cd listener_extend
$ mkdir nodes
In the nodes directory, create a file listener_extend.py with these contents:
Make this file executable and run it:
$ chmod u+x nodes/listener_extend.py
$ nodes/listener_extend.py
num: 0
As you can see, the Num module of beginner tutorials was successfully found even though it is not on your PYTHONPATH.
In ROS versions earlier than ROS Groovy, a library called roslib achieved this for us.
With catkin, python imports are done without roslib:
import foo
Catkin sets up PYTHONPATH in your catkin workspace and some relay files so that this works even with two python modules in your src folder. So if you have two modules in your catkin workspace, where one depends on the other, you need to configure and build the modules first, and then you can run them.
Example
For this example we create a new package and try to reuse the message Num that we defined in our beginner_tutorials package:
For catkin we create a new package in our source folder:
$ cd catkin_ws/src $ catkin_create_pkg listener_extend rospy beginner_tutorials
Next we create a node using the message defined in beginner_tutorials. Create a diretory nodes first:
$ cd listener_extend $ mkdir nodes
In the nodes directory, create a file listener_extend.py with these contents:
Note that in comparison to rosbuild, no roslib call is required.
Make this file executable and run it:
$ chmod u+x nodes/listener_extend.py $ python nodes/listener_extend.py num: 0
If this fails,make sure you have run
$ source ~/catkin_ws/devel/setup.bash
before executing nodes/listener_extend.py
As you can see, the Num module of beginner_tutorials was successfully. | https://wiki.ros.org/rospy_tutorials/Tutorials/PythonPath | CC-MAIN-2020-29 | refinedweb | 644 | 51.58 |
I'm currently working on a TypeScript project that is using Socket.io to communicate between a Next.js frontend and a custom Express server backend.
While setting up Socket.io I struggled to find documentation explaining how you could set up Socket.io in a TypeScript project using the ES6
import syntax rather than
require. It was even more difficult to find anything that explained how it should all fit together with Next.js.
And so this post was born...
If you're starting from scratch...
If you want to make a TypeScript/Express custom server Next.js project, mine was created by combining the custom Express Server example and custom TypeScript Server example located in the Next.js repository.
First I created the project using the command
npx create-next-app --example custom-server-typescript to create the custom TypeScript server. Then I retrofitted Express into it by looking at the custom Express server example. The resulting
server.ts is at the bottom of this post.
Why didn't I follow another example?
Most of the examples I saw online want you to do something like the following:
import express from 'express'; const app = express(); const server = require('http').Server(app); const io = require('socket.io')(server);
But I didn't want two or any random
require statements in my TypeScript code if I thought they could be avoided.
My
server.ts with only ES6 import
The dependencies you need (In addition to Next.js/React/TypeScript):
npm install -s express @types/express socket-io
The code you've been waiting for:
import express, { Express, Request, Response } from 'express'; import * as http from 'http'; import next, { NextApiHandler } from 'next'; import * as socketio from 'socket.io'; const port: number = parseInt(process.env.PORT || '3000', 10); const dev: boolean = process.env.NODE_ENV !== 'production'; const nextApp = next({ dev }); const nextHandler: NextApiHandler = nextApp.getRequestHandler(); nextApp.prepare().then(async() => { const app: Express = express(); const server: http.Server = http.createServer(app); const io: socketio.Server = new socketio.Server(); io.attach(server); app.get('/hello', async (_: Request, res: Response) => { res.send('Hello World') }); io.on('connection', (socket: socketio.Socket) => { console.log('connection'); socket.emit('status', 'Hello from Socket.io'); socket.on('disconnect', () => { console.log('client disconnected'); }) }); app.all('*', (req: any, res: any) => nextHandler(req, res)); server.listen(port, () => { console.log(`> Ready on{port}`); }); });
server.ts explanation
The main difference between my
server.ts and the ones produced by the Next.js examples is the use of the
http module to run the server whereas before Express ran it. This is required so that Socket.io can attach to the server once it's setup.
Additional changes:
- Changed
appto be
nextAppso that it is clearer that it was a
nextapp, also changed
handlerto
nextHandlerfor the same reason. In addition, it's the convention to use the
appvariable with Express.
- Used
http.CreateServer()rather than
const server = require("http").Server(app);to create the HTTP server.
- Used
io.attach()to attach to the HTTP server rather than using require e.g.
const io = require("socket.io")(server);.
Summary
This post demonstrates how to use Socket.io with a Next.js custom server using ES6
import rather than
require.
If this post helped you drop me a reaction! Found something I could improve? Let me know in the comments.
Thanks for reading!
Discussion (3)
nice, very helpful!
I was struggling to use socket.io with es6. Thanks for providing solution
Thanks, happy I could help! | https://dev.to/jameswallis/how-to-use-socket-io-with-next-js-express-and-typescript-es6-import-instead-of-require-statements-1n0k | CC-MAIN-2021-17 | refinedweb | 578 | 54.18 |
Figure 1 The Visual Studio 2010 SharePoint Project Templates. If you have new columns specified in a workflow, they will automatically be added to the schema of the list that you associate with the workflow. These are known as association columns and can be used throughout the SharePoint Designer workflow. User profile data can be bound to properties in workflow, making it possible to get information about the SharePoint user profile. For example, you could look up the name of a direct manager for approvals..
item or document. This is such a common scenario that developers will create dummy lists and items in order to just to add workflows. In Visual Studio 2010, you simply pick Site Workflow when creating the workflow project item, as shown in Figure 2.
Figure 2 The Site/List Workflow Dialog Page You can get to the activated site workflows by choosing Site Workflows from the Site Actions menu in SharePoint. It shows you new workflows you can start, any running site workflows and completed workflows. Since site workflows dont have a list item or document to start from, they must be started manually through the SharePoint user interface or via the SharePoint API. dont have access to the list fields to access data since no specific list structure is connected.
Figure 3 The SharePoint Designer Workflow Showing a High Privilege Impersonation Step SPTimer Location SPTimer service. Using a new option in SharePoint Central Administration, you can now set the preferred server where the SPTimer service runs. To do this, click in the Manage Content Databases menu of the Application Management section of SharePoint Central Administration. Then click on your content database and scroll down to the setting for Preferred Server for Timer Jobs (as shown in Figure 4). You can also manually stop the SPTimer service on any servers you dont want it to run on.
Figure 5 The Event Receivers That Can Be Built in Visual Studio 2010 SharePoint 2007 made it easy to involve people in workflows, but it was difficult to send and receive messages with external systems. The recommended approach was to use task items to send a message to the external system using a Web service and have the external system update the task to return the result. SharePoint 2010 adds support for pluggable workflow services. These will be familiar to Windows Workflow Foundation (WF) developers and are defined by creating an interface containing methods and events. This interface is connected to a pair of activities called CallExternalMethod and HandleExternalEvent. A tool comes with WF in the .NET Framework called WCA.exe to generate strongly-typed activities for sending and receiving based on the interface. These generated activities can be used in SharePoint workflow also. They do not require the interface to be set as properties and instead can be dragged directly from the toolbox to the workflow design surface. The service code that integrates with SharePoint workflow is loaded into the GAC for access by the workflow. It needs to inherit from the SPWorkflowService base class and it needs to be referenced in the web.config. Using these new activities, you can send asynchronous messages to an external system right from within the SharePoint workflow. I will investigate creating and using these activities later in this article in the second walkthrough.
Creating a new SharePoint workflow project is easy. Choose sequential workflow on the new project template selector as shown previously in Figure 1. This can also be done with the state machine workflow style. The new workflow project wizard has four pages. The first is a page that all SharePoint tools project templates have in common, where you identify the URL of the local SharePoint site you want to use to deploy and debug your solution, as shown in Figure 6.
Figure 6 First Page of the New Workflow Project Wizard The second is a page where you supply the name of the workflow and choose whether to associate it with a list or as a site. This page is shown earlier in Figure 2. The third page is where you decide if Visual Studio will automatically associate the workflow for you or if you can do it manually after it has been deployed. If you choose to leave the box checked, it has selections for the list to associate with (if you previously chose a list-based workflow), the workflow history list to use and the task list to use (see Figure 7). Typically, the history and task list will not need changing.
Figure 7 Third Page of the New Workflow Project Wizard The fourth and final page is where you select how the workflow can be started (see Figure 8). You should not unselect all three of these or your workflow will be very difficult to start. For a site workflow, you can only choose manually starting the workflow. For a list-based workflow, you can also choose to start the workflow instance when a document is created or to start when a document is changed.
Figure 8 Fourth Page of the New Workflow Project Wizard The new blank workflow model looks like Figure 9 in the design surface in Visual Studio 2010. The workflow activated activity is derived from the HandleExternalEvent activity and provides initialization data from SharePoint to the workflow instance. I will cover more of the HandleExternalEvent activity later.
Figure 9 The Default Blank Workflow Showing the WorkflowActivated Activity Step 2 Add an initiation form to the workflow An initiation form can be shown to the user when they start the workflow. It allows the workflow to gather parameters before it gets started. This can be added in Visual Studio 2010 easily by right clicking on the workflow item in Solution Explorer and choosing Add then New Item, as shown inFigure 10. Select the Workflow Initiation Form template and the new form is automatically associated with the workflow. It is an ASPX form that you edit in HTML (see Figure 11).
Figure 10 Adding a Workflow Initiation Form One method called GetInitiationData needs editing in the code behind the ASPX file. This method only returns a string; if there are multiple values, it is recommended you serialize them into an XML fragment before returning them. Once the workflow instance is running, it is easy to get to this string just by referencing workflowProperties.InitiationData. The multiple values will need to be de-serialized from the XML fragment if they were serialized in GetInitiationData.
For the initiation form, a single text field will be added and then that field will be returned from the GetInitiationData method. The tags in the following code are added inside the first asp:Content tag, like so:<asp:TextBox <br />
The GetInitiationData method already exists and just needs to have code added to return the MyID.Text property. The following code shows the updated method code:private string GetInitiationData() { // TODO: Return a string that contains the initiation data that will be passed to the workflow. Typically , this is in XML format. return MyID.Text; }
Step 3 Add a workflow log activity The LogToHistoryList activity is extremely easy to use. Each workflow instance created has a history list that can display in the SharePoint user interface. The activity takes a single string parameter and adds an item to that list. It can be used for reporting the status of workflow instances to users in production. Simply drag the LogToHistoryList activity from the toolbox to the workflow design and set the description property (see Figure 12).
Figure 12 The Visual Studio Toolbox Showing SharePoint Workflow Activities Properties in workflows are called dependency properties. These are bound at runtime to another activities dependency property, such as a field, a property or a method. This process is often called wiring up and it is what allows activities to work together in a workflow even though they dont have specific type information for each other at compile time. Each property in the property window in Visual Studio is wired to a class field or class property in the workflow class. Fields are the simplest to create and the dialog that comes up when wiring up the workflow property allows for creating new fields, creating new properties or wiring to existing ones.
Figure 13 Dependency Property Binding to a Created Field Figure 13 shows adding a new field with the default name. The following shows the added code to the MethodInvoking event handler for the activity to set the property to the workflow initiation data:
You can actually set the HistoryDescription property directly in code since this property doesnt connect between activities, but it is a good simple activity in order to learn a little about dependency properties. Step 4 Add a CreateTask activity This next step is the main part of human workflow interaction known as the SharePoint task item. Create a task, assign it to a person and then wait for the person to make changes to that task. The CreateTask activity has to be dragged onto the workflow design surface and then configured with all the required properties. Figure 14 shows the CreateTask properties window just after the activity is dragged on to it.
Figure 14 The Properties Pane Showing the CreateTask Activity The first thing needed here is a correlation token for the task. A correlation token is used for message correlation in workflow. It provides a unique identifier that enables a mapping between a task object in a specific workflow instance and the workflow runtime in SharePoint. This is used so that when SharePoint receives a message for the workflow instance it can locate the correct task within the correct workflow instance. The CorrelationToken property must be configured and its not recommended to use the WorkflowToken for tasks, although this isnt prevented in the tool. Enter the new name for the correlation token as TaskToken and press enter. Then expand the (+) symbol which appears and click the drop down to the right of OwnerActivityName and choose the workflow. The TaskId must be configured and a new GUID specified for the task id. This is done by selecting the TaskId property and clicking the [] ellipsis to bring up the property editor. Click the Bind to a new member tab, choose Create Field, and then click OK. The same must be done for the TaskProperties property by again selecting the property, clicking on the ellipsis and adding a new field. Next, double click on the new CreateTask activity on the workflow design surface to bring up the createTask1_MethodInvoking handler code and set the properties in code. The new task must be given a title, and for good measure I will set the task description to the string I got from the initiation form. Once all those properties have been set, this is the added code:public Guid createTask1_TaskId1 = default(System.Guid); public SPWorkflowTaskProperties createTask1_TaskProperties1 = new Microsoft.SharePoint.Workflow.SPWorkflowTaskProperties();
Step 5 Add the OnTaskChanged and CompleteTask activities Use the While activity to wait for multiple changes to the task until you see the changes you want. The While activity must contain another activity, such as the OnTaskChanged activity. The Listen activity can also be used to listen for multiple events at one time by adding a Listen activity with actual event receiver activities inside the Listen branches. Alternatively, you can just wait until the task is deleted with the OnTaskDeleted activity. A common requirement is to escalate or timeout waiting for an event. This is done by using a Listen activity to listen for both the task changed message and a timer message by using a Delay activity. The first message to be received by either contained activity resumes the workflow and the other activity stops waiting. The Delay activity is sent a message when the timeout occurs. There are lots of options described above but the simplest is to use OnTaskChanged which will wait for any change to the task item and then continue on. To configure the OnTaskChanged, you need to wire up the CorrelationToken and the TaskId properties to the same fields as the CreateTask. Last, add a CompleteTask activity and again set the CorrelationToken and TaskId properties. As shown in Figure 15, the activities that wait for a message are green while activities that send a message are blue. This is consistent throughout SharePoint workflow.
Figure 15 The Completed Simple Task Workflow Model Step 6 Deploy and Test the Workflow
Now the workflow is almost complete, so press F5 and wait for the deployment. Pressing F5 will compile the workflow, package the workflow into a WSP, deploy that WSP to SharePoint, activate the SharePoint features, attach the Visual Studio 2010 debugger to SharePoint and start Internet Explorer with the SharePoint site. Once the SharePoint site appears, choose Site Workflows from the Site Actions menu. You will see all of the site workflows and you can click on yours to start it. Once it is selected, you will see your workflow initiation form, as shown in Figure 16.
Figure 16 The Workflow Initiation Form Other improvements can be added to this basic starter workflow model, such as using a content type to define the task you assign to a person, using email to notify the user they have been assigned a task, and even using InfoPath forms to create more complex forms for users to complete in email or online.
received. The correlation token is used to ensure the correct workflow instance is resumed when a response is received. As an example, consider two common examples of long-running work. One example is a synchronous Web service request where the caller is blocked until a response is received on the channel. This is not suited to run within a workflow activity. The workflow service needs to wait for the Web service call to complete and then to send a message back to the workflow instance with the response. Another example is a CPU intensive work item that needs to run but because it may take longer than a second it needs to run outside of the workflow activity. Again, you can use a workflow service and create a thread in the workflow service to execute the CPU intensive work item. The new thread can send a message back once it is finished. Both of these examples are implemented with the same pattern. The following walkthrough shows how to create a new SharePoint Sequential Workflow and call a pluggable workflow service that factors prime numbers to identify how many prime numbers there are under 100,000,000. This work takes too long to execute within the main line of an activity. Step 1 Create a Pluggable Workflow Service To create a pluggable workflow service in Visual Studio 2010, you need to implement the service as an interface and class in your project. Figure 17 shows an example of a pluggable workflow service which can be added to your project in a new class file called MyService.cs. In the code, the interface definition is in IMyService, the interface implementation in class MyService and the MessageOut method runs most of the logic through an anonymous method delegate on a separate thread. In the separate thread, the call to RaiseEvent will send the message back to the waiting HandleExternalMessage activity. 
Figure 17 The Pluggable Workflow Service// Interface declaration [ExternalDataExchange] public interface IMyService { event EventHandler<MyEventArgs> MessageIn; void MessageOut(string msg); }
// Arguments for event handler [Serializable] public class MyEventArgs : ExternalDataEventArgs { public MyEventArgs(Guid id) : base(id) { } public string sAnswer; }
// Class for state class FactoringState { public SPWeb web; public Guid instanceId; public FactoringState(Guid instanceId, SPWeb web) { this.instanceId = instanceId; this.web = web; } }
// Interface implementation class MyService : Microsoft.SharePoint.Workflow.SPWorkflowExternalDataExchangeService, IMyService { public event EventHandler<MyEventArgs> MessageIn; public void MessageOut(string msg) { ThreadPool.QueueUserWorkItem(delegate(object state) { FactoringState factState = state as FactoringState; DateTime start = DateTime.Now; int topNumber = 100000000; BitArray numbers = new System.Collections.BitArray(topNumber, true);
for (int i = 2; i < topNumber; i++) { if (numbers[i]) { for (int j = i * 2; j < topNumber; j += i) numbers[j] = false; } } int primes = 0; for (int i = 2; i < topNumber; i++)
{ if (numbers[i]) primes++; }
string sAnswer = "Found " + primes + " in " + Math.Round(DateTime.Now.Subtract(start).TotalSeconds, 0) + " seconds";
// Send event back through CallEventHandler RaiseEvent(factState.web, factState.instanceId, typeof(IMyService), "MessageIn", new object[] { sAnswer }); }, new FactoringState(WorkflowEnvironment.WorkflowInstanceId, this.CurrentWorkflow.ParentWeb)); }
// Plumbing that routes the event handler public override void CallEventHandler(Type eventType, string eventName, object[] eventData, SPWorkflow workflow, string identity, System.Workflow.Runtime.IPendingWork workHandler, object workItem) { var msg = new MyEventArgs(workflow.InstanceId); msg.sAnswer = eventData[0].ToString(); msg.WorkHandler = workHandler; msg.WorkItem = workItem; msg.Identity = identity; // If more than one event - you'd need to switch based on parameters this.MessageIn(null, msg); } public override void CreateSubscription(MessageEventSubscription subscription) { throw new NotImplementedException(); } public override void DeleteSubscription(Guid subscriptionId) { throw new NotImplementedEx:<WorkflowServices><WorkflowService Assembly="WorkflowProject1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=YOURPUBLICKEY" Class="WorkflowProject1.MyService"> </WorkflowService></WorkflowServices>
The assembly name and public key information can be obtained using GACUTIL.EXE /l. Step 3 Create the workflow model The rest of the work is easy. Add Call External Method and Handle External Event activities to the workflow and configure them to point to this service, as shown in Figure 18. The Call External Method activity will call MessageOut in the service and the Handle External Event activity will wait for the event MessageIn. This involves setting the InterfaceType and EventName properties in each of the two activities.
Figure 19 Completed Prime Calculating Pluggable Workflow Service Step 4 Run the workflow When this workflow is run, it will start a separate thread in the pluggable workflow service to do the prime number calculations. After 10-15 seconds of running, a workflow history item appears as shown in Figure 19. It shows that there are 5,761,455 prime numbers under 100,000,000. As shown, there is some code to write for pluggable workflow services and more code to write for each system you connect to. There are two options for reducing the amount of code required to communicate with external services. One is by interfacing with BizTalk Server and making use of the BizTalk adapter library for connecting other systems. The other is using the business connectivity services in SharePoint 2010 which provide a way to expose data from another system through external lists. | https://de.scribd.com/document/201343907/Collaborative-Workflow | CC-MAIN-2021-04 | refinedweb | 3,063 | 51.89 |
I did some consulting work recently for a company that had a lot of JavaScript embedded in pages that was used to perform advanced client-side functionality and make AJAX calls back to the server. The company needed additional team members to be able to contribute to the application without spending a lot of time learning client-side Web technologies. One solution was to provide good documentation of the JavaScript objects and methods that could be called. This still required some fundamental knowledge of JavaScript, though. The focus, however, seemed to be on getting other team members involved with learning C# and server-side technologies so that they could also build back-end code tiers rather than having everyone spend time learning JavaScript and other related client-side technologies such as CSS and DHTML/DOM.
After seeing the existing application and hearing about the challenges the team was facing, I provided a demo of how custom ASP.NET server controls could have JavaScript embedded in them fairly easily. By going this route, the people that knew JavaScript could still leverage their existing skills, but they could wrap the client-side functionality in a server control that other developers could more easily consume and use without having to be client-side gurus. Going this route also allowed properties that scripts may utilize to be exposed along with documentation of what the properties do. The following steps show how to accomplish this type of JavaScript encapsulation. The steps extend ASP.NET's standard GridView control and customize it by adding additional client-side functionality. I'll admit upfront that this is a fairly simple example designed only as a starting point. However, the concepts can be applied to more advanced cases. I'm using the exact same principles in version 2.5 of my OrgChart.NET server control (due out soon).
Step 1: Choose how to create your custom control. You have two main choices when creating custom server controls. You can write a control from scratch, or you can extend an existing control provided by ASP.NET or a third-party vendor. Let's say that you'd like to create a "fancy" grid control that highlights rows as a user mouses over them and allows users to select one or more rows and highlight them. All of this is done on the client side without any postback operations. While you could write the grid functionality from scratch, why not leverage what Microsoft has already done if it gets you to the desired end result faster and satisfies project requirements? This is easily done by creating a new Class Library project in VS.NET 2005 and deriving the custom control class from GridView, decorating it with an attribute along the lines of [ToolboxData("<{0}:CustomGridView runat=server></{0}:CustomGridView>")] (the full class definition is included in the sample download at the end of this article).
The ToolboxData attribute defines what markup code should be added to an ASP.NET page as a developer drags the control onto a page's design surface.
Step 2. Write the client-side code. This step could certainly be performed later, but I like to write the client-side enhancements upfront. In this case, if we want end users to be able to see highlighted rows as they mouse over them, or select rows by clicking them, we can use code similar to the following.
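A simplified sketch of that client-side code follows. The color values, the attribute name used to track selection, and the function names here are all illustrative; in the real control the colors would be exposed as server-side properties rather than hard-coded:

```javascript
// Colors the grid uses; in the real control these would come from
// server-side properties rather than being hard-coded here.
var highlightColor = 'yellow';
var selectColor = 'lightblue';
var defaultColor = '';

// Wired to a row's onmouseover: highlight the row unless it is selected.
function highlightRow(row) {
    if (!row.getAttribute('selected')) {
        row.style.backgroundColor = highlightColor;
    }
}

// Wired to a row's onmouseout: restore the default color unless selected.
function unhighlightRow(row) {
    if (!row.getAttribute('selected')) {
        row.style.backgroundColor = defaultColor;
    }
}

// Wired to a row's onmousedown: toggle the row in and out of the
// selected state, tracked with a custom attribute on the <tr> element.
function toggleRowSelection(row) {
    if (row.getAttribute('selected')) {
        row.removeAttribute('selected');
        row.style.backgroundColor = defaultColor;
    }
    else {
        row.setAttribute('selected', 'true');
        row.style.backgroundColor = selectColor;
    }
}
```

Each function receives the row's <tr> element (passed as this from the attributes the server control emits), so no element IDs need to be baked into the script.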
Since we want to embed the JavaScript code directly into the control, add a new JavaScript file into the Class Library project and set its Build Action to "Embedded Resource". This can be done by right-clicking on the .js file in the Solution Explorer, selecting Properties, and changing the Build Action property.
Step 3. Define the JavaScript file as a Web Resource. There are two main ways to output JavaScript from a server-side control. First, you can embed JavaScript in the control's code and then output it using the ClientScriptManager class's RegisterClientScriptBlock() method (or another custom technique). This works, but makes the control's code less maintainable since JavaScript is embedded in VB.NET or C# code. Also, when the JavaScript is output it is embedded in the page's HTML which isn't a good option when you want to leverage client-side caching of JavaScript files.
The second option (the better option, in my opinion) is to use ASP.NET 2.0's WebResource.axd HttpHandler in conjunction with the ClientScriptManager class to dynamically output embedded resources such as JavaScript files. By using WebResource.axd, the custom control can output a <script src="..."></script> tag into the page that dynamically references the JavaScript resource on the server.
To allow the custom control to write out <script src="..."></script> tags, the target JavaScript file must first be added as a resource into the project as shown in the previous step. After doing that, a WebResource attribute should be added into the AssemblyInfo.cs file (this file is added automatically when you create a Class Library project). Assuming the project's default namespace is CustomControls and the script file is named CustomGridView.js, the attribute takes the form [assembly: System.Web.UI.WebResource("CustomControls.CustomGridView.js", "text/javascript")], where the resource name is the default namespace plus the script file name.
Step 4: Render the script to the browser. Once the WebResource attribute has been added into AssemblyInfo.cs, a single line of code can be added into the custom control (an override of OnPreRender is a typical spot) to cause a <script src="..."></script> tag referencing the embedded script resource to be output to the page. The call to the ClientScriptManager's RegisterClientScriptResource() method does all the work, for example: this.Page.ClientScript.RegisterClientScriptResource(typeof(CustomGridView), "CustomControls.CustomGridView.js"). It takes the type of control being rendered as well as the resource name defined in AssemblyInfo.cs as parameters.
Step 5: Hook the custom control to JavaScript functions. Now that the JavaScript is ready to be output, the custom control needs to be "hooked" to the client-side functions shown earlier in Step 2 in order to perform the desired behaviors. The sample code handles the GridView's RowCreated event to add client-side mouse over, mouse out and mouse down attributes to rows in the grid. Adding these event handlers could also be done entirely on the client side in cases where the size of the HTML markup sent from the server to the browser needs to be kept to a minimum.
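For reference, the client-side-only wiring mentioned above might look something like this (a hypothetical sketch; the function and parameter names are mine, not from the sample):

```javascript
// Wire hover handlers to every data row of a rendered grid table,
// skipping the header row at index 0. Returns the number of rows wired.
function wireGridRows(grid, highlightColor, defaultColor) {
    var rows = grid.getElementsByTagName('tr');
    for (var i = 1; i < rows.length; i++) {
        // Capture the row in a closure so each handler sees its own <tr>.
        (function (row) {
            row.onmouseover = function () { row.style.backgroundColor = highlightColor; };
            row.onmouseout = function () { row.style.backgroundColor = defaultColor; };
        })(rows[i]);
    }
    return rows.length - 1;
}

// Called once the page has loaded, e.g.:
// wireGridRows(document.getElementById('GridView1'), 'yellow', '');
```

This keeps the rendered HTML small, at the cost of a little extra script that must run after the grid's markup is in the page.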
The end result of following these steps is that JavaScript is dynamically output from the control in an efficient manner. The tag the ClientScriptManager outputs to reference the JavaScript resource looks like the following, with the long d and t query string values trimmed for the sake of brevity: <script src="/WebResource.axd?d=...&t=..." type="text/javascript"></script>
The control is now self-contained and does not require any additional files to be deployed before it can be used. It's also easy to maintain since the JavaScript is stored as an external resource rather than being embedded in VB.NET or C# code. While this is a simple example, more complex scripts and code can be written to handle the needs of any Web application. The complete code for the custom control discussed in this article can be downloaded here.
Very nice article.
Hi Dan,
Thanks for posting this article, I'm trying to figure out better ways of incorporating Javascript into my applications and this really got me thinking.
Two issues I ran into with your sample code: I needed to remove some stray characters to get it to compile and the Javascript effect doesn't work with Firefox.
No problem. I fixed the stray characters (not sure how those got in there when I zipped up the folders) and changed the onmouseenter and onmouseleave calls (which were there intentionally, since the company I was working with only used IE internally) to onmouseover and onmouseout so that Firefox can still play. The new download has the changes.
Great article, this is just what I was looking for, well almost!
Problem is I need to add an external Javascript or CSS file where certain elements are dynamically generated somehow, for instance...
string startUpScript = @"
var linkElement = document.getElementById(""" + this.ClientID + "_link" + @""");
linkElement.style.display = ""inline"";
";
this.Page.ClientScript.RegisterStartupScript(this.GetType(), this.ID, startUpScript, true);
I know I could write a Javascript function where the ID is passes in as an argument, but I have other more complex examples that would take too long to explain here.
Dynamic scripts make it a little trickier. However, you can still add the embedded script through the technique shown here, but then dynamically output another piece of script that calls into the one you're embedding. Similar to what you're doing above, but instead have it call a well-known function in the embedded script and pass in the ClientID. That way you still get the dynamic code you need, but also get the benefits of cached script files that are embedded in the assembly and less messy to maintain.
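As a sketch of that pattern (the function and element names here are hypothetical), the embedded script can expose a well-known entry point that takes the ClientID as an argument, so the dynamically registered startup script shrinks to a one-line call:

```javascript
// Lives in the embedded .js resource. The startup script that the control
// registers dynamically is then just a one-line call such as:
//     showControlLink('ctl00_MyControl');
function showControlLink(clientId, doc) {
    doc = doc || document;  // optional parameter eases testing outside a browser
    var linkElement = doc.getElementById(clientId + '_link');
    if (linkElement) {
        linkElement.style.display = 'inline';
        return true;
    }
    return false;
}
```

The ClientID stays in the tiny dynamic script while the bulk of the logic lives in the cached, embedded resource.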
Cool Article
Hi
This article is really very helpful, but I'm not able to call my JavaScript function; it gives me an "object required" error.
Thanks!
I took your code and rewrote it in VB, hoping to learn how to do this.
After fighting a bit with the RegisterClientScriptResource type and namespaces for the resource, I got it working, and now am extending it my way.
Is there a reason not to use CSS styles instead of the background colors?
No problem, glad that helped.
The JavaScript is actually using CSS style properties although I'm assuming you're referring to CSS classes. You could certainly tweak the code to do that...it would actually simplify things a bit. Having both options (the ability to set the colors directly and the ability to assign a CSS class) would be a good thing to have since some people may not want to go to the trouble to embed a CSS class somewhere in a page or external .css file.
Fantastic! - just what I was looking for. Need one enhancement and not sure the best way...
Let's say the CustomGridView is wrapped in an ASP.NET AJAX UpdatePanel. When I mousedown, I get the client side selection and then on the server side, I set the color of the selected row so that it shows up on the partial postback. No probs.
The issue is that when I mousedown the second time, I am not clearing the previously selected row until I get the partial postback (i.e. it is set from the server side). How do I clear the previously selected row from the client side?
I'm guessing that there's a reason you need the partial-page postback to occur (such as tracking the currently highlighted row on the server-side to perform an action). If that's not actually needed I'd recommend just doing it client-side.
Before the partial-page update occurs you'd need to write some JavaScript to perform the clearing of the previous row which means you'd need to track the currently highlighted row in a client-side variable or on the GridView as an HTML attribute used to track state. You could handle this with some script associated with the GridView, or use the PageRequestManager from ASP.NET AJAX to know when a partial-page update is occurring and then clear the previous row. You can read more about the PageRequestManager here and I have a video on my blog about using it as well.
ajax.asp.net/.../default.aspx
It would probably be easier to handle it within the GridView if possible though.
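If it helps, here's one way the client-side tracking could look. This is a rough sketch; the variable and function names are my own assumptions rather than code from the control in the article, and the row elements are passed in as parameters (in a real page they would come from the GridView's rendered table rows).

```javascript
// Tracks the currently highlighted row so it can be cleared before a
// new selection is made (and before a partial-page update kicks in).
var currentRow = null;

function selectRow(row, highlightColor, normalColor) {
  // Restore the previously selected row's color, if there is one.
  if (currentRow !== null && currentRow !== row) {
    currentRow.style.backgroundColor = normalColor;
  }
  row.style.backgroundColor = highlightColor;
  currentRow = row;
}

// From the PageRequestManager's beginRequest/endRequest events you could
// also reset currentRow to null once the server has re-rendered the grid.
```

The key point is that the "previous row" lives in a client-side variable, so clearing it never has to wait for the partial postback.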
This article has opened up a new market for me: on the basis of it, I have started a business creating and selling custom web controls.
Thanks a lot
You're right, I need to track the highlighted row, as on the server the click event gets data associated with that row to fill another GridView on the same page.
I used the PageRequestManager and a client-side variable to track the row as you suggested. Works perfectly – for one GridView. The problem is that there are 5 GridViews on the page in an UpdatePanel, and the JavaScript mousedown function does not know which GridView the selected row belongs to.
You mentioned about adding HTML attributes to the GridView to track state. Any suggestions on how to do that?
Thanks again for your help!
You'll actually find an example of doing that in the control described in this post. You can call getAttribute() and setAttribute() to add data directly to a GridView on the client side. That way you don't have to worry about having more than one GridView, since each one tracks its own state. It's a nice little trick that avoids having to track state using JavaScript arrays, and I use it wherever possible.
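A quick sketch of that attribute trick follows. The attribute name "selectedRow" is an assumption for illustration; each grid carries its own state, so several GridViews on one page can't interfere with each other. A tiny stand-in for a DOM element is included so the sketch runs outside a browser – in a page, grid would be the GridView's rendered table element, which already provides getAttribute/setAttribute.

```javascript
// Record the newly selected row index on the grid element itself and
// return the previously selected index (-1 if nothing was selected yet).
function selectGridRow(grid, rowIndex) {
  var prev = grid.getAttribute("selectedRow");
  grid.setAttribute("selectedRow", String(rowIndex));
  return prev === null ? -1 : parseInt(prev, 10);
}

// Minimal stand-in for a DOM element, only needed for running this
// sketch outside a browser.
function makeFakeGrid() {
  var attrs = {};
  return {
    getAttribute: function (name) { return name in attrs ? attrs[name] : null; },
    setAttribute: function (name, value) { attrs[name] = value; }
  };
}
```

Because the state rides along on each grid element, the mousedown handler can clear the old row on whichever grid the event came from, with no shared globals.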
Paresh,
I'm glad you've decided to make a business out of controls and honored that my post helped you get there. Best of luck!
Hey, thanks for the walkthrough. This made it a lot easier for me to include the JavaScript that I wanted with my control. I do have one question: is there any way to have the link to the external JavaScript appear within the <head> tags? I read some posts about JavaScript not always working correctly when it is outside the <head> tags. Also, I was wondering if there would be an easy way to add a comment like /* javascript required to support (x) control */. I think that this would be very cool.
Add '-l' support to vnconfig(8) and supporting VNGET ioctl to vn(4). Inspired-by: OpenBSD (with updates to vnconfig -l UI & swap support)
Fix device recognition, /dev/vn0 now uses WHOLE_SLICE_PART, not a partition. Use the generic disk_info structure to hold template information instead of the disklabel structure. This removes all references to the disklabel structure from the MBR code and leaves mostly opaque references in the slice code.
MFC rev 1.30: pbn * vn->sc_secsize may wraparound, because both pbn and vn->sc_secsize are of type int. Cast to off_t before multiplication.
pbn * vn->sc_secsize may wraparound, because both pbn and vn->sc_secsize are of type int. Cast to off_t before multiplication.
Remove unused label.
* Ansify function definitions. * Minor style cleanup. Submitted-by: Alexey Slynko <slynko@tronet.ru>
Fix cdevsw and dev reference counting ops to fix a warning that occurs when a system is halted or rebooted.
Namespace cleanups: add a NAMEI_ prefix to CREATE, LOOKUP, DELETE, and RENAME. Add a CNP_ prefix to all the name lookup flags (nd_flags), e.g. ISDOTDOT->CNP_ISDOTDOT.
Additional commentary from Damon Armstrong, Telligent consultant and author of Pro ASP.NET 2.0 Website Programming.
ASP.NET 2.0 and Visual Studio 2005 have been out for several months now, and many people have had a chance to use the product in a day-to-day environment. Rick Strahl takes stock of the major changes and provides a personal perspective on some of the highs and lows.
Ed: This article originally appeared on Rick’s weblog (and in Code Magazine) where it provoked much debate. With Rick’s permission, I edited the piece, summarized some of the major discussion points and invited contribution from Damon. Enjoy.
I have a bit of a love/hate relationship with ASP.NET 2.0 and Visual Studio 2005 but, nevertheless, I’ve decided to move up most of my internal applications to 2.0 and not look back. Basically, there are just too many new features in ASP.NET 2.0 that make my life easier, and make it hard go back to 1.1 for coding.
So far it has worked out great. I’ve found many improvements that have reduced code complexity and volume, improved performance by 10-20%, and reduced memory footprint (very important to larger applications), with almost no effort at all on my part.
Incorporating 2.0 features in existing applications is not something that will happen overnight – the biggest time sink is removing ‘legacy functionality’: things that the .NET framework deems outdated and wants you to replace with other things. However, I find that I am nevertheless rapidly adopting many 2.0 features into my new applications. In the end, going back to ASP.NET 1.1 and VS 2003 would seem like a huge step backwards.
Highs
So, let’s take a tour through some of the major changes, starting with the good and then on to the “not-so-good”.
File based projects in Visual Studio 2005 for development
In Visual Studio 2005 you can now open a directory as a web project, which is very nice. On my development machine, I probably have 50 different web projects hanging around and, using VS2003, it is a real pain to configure and maintain all of these as virtual directories in IIS, and to keep the project references right. You don’t think so? Have you ever tried to move projects to a new machine? In VS 2005 you simply point at a directory and the project opens. You can use the built-in development Web server, so there is no IIS configuration to maintain. With file-based projects you now have – at least in development scenarios – the true promise of xCopy projects. This is a great feature, but not one that comes without pain (more on this later).
DJA: Everyone I talk to loves the Cassini web server built into Visual Studio, and I’m a big fan too because it simplifies a lot of things. However, there are a few pitfalls to its use. Primarily, you have to watch out for minor variances between Cassini and IIS. For example, Cassini passes all requests, regardless of extension, to ASP.NET. If you have custom handlers that process specific file types (e.g. dynamically built Excel reports, etc) then you have to remember to set up custom mappings in IIS for the extension when you deploy your application. Otherwise, IIS will not pass the request to ASP.NET. I’ve seen people lose a lot of time trying to troubleshoot their application at deployment because they did not have to worry about configuration settings in Cassini during development.
Master Pages
You can now define a Master Page template that can be reused throughout your application. This is a huge timesaver. There were ASP.NET 1.x based implementations of this concept floating around before 2.0 shipped, but to me the key feature that makes this viable is the visual support for it in Visual Studio. This enables you to see the master layout, along with the ContentPlaceholders used in each page to provide the page level content.
In addition to the important visual aspect of the designer, Master Page templates also provide a great means of hooking together related and reusable code. Because Master Pages tend to wrap a fair amount of functionality that previously required several user controls (for example, Header, Footer, Sidebar), they can isolate logic more efficiently than was previously possible.
DJA: Also noteworthy is the fact that you can dynamically change the Master Page at runtime, giving you an even greater level of visual flexibility. This support is great for allowing users to change the look and feel of an application in ways that cannot be achieved by merely switching style sheets.
Visual representation of user controls
Call me vain, but I really prefer to see what the whole page looks like while I’m designing it. As with Master Pages, Visual Studio 2005 can now display a rendered User Control, in place, in the Web Form Editor. Instead of the old, non-descript grey box with a control name, you now get a fully rendered layout right there in the designer. Double-click on it and VS takes you to the User Control Designer. I don’t use a lot of user controls, and foresee many of my existing ones being replaced with Master Pages, but nevertheless I find that this visual representation of the user control makes design mode a lot more useful. This is especially true for my existing 1.1 applications, which generally still use these controls for headers, sidebars and footers.
DJA: I wouldn’t call you vain. Full-rendering of User Controls is a huge time saver. I’ve wasted a lot of time flipping back and forth from the IDE to a browser trying to see how User Controls appear fully rendered. No more.
Generics
Ok, this isn’t an ASP.NET specific feature, but the introduction of Generics in .NET 2.0 has had a profound effect on the way I write code. In the past, I’ve been very wary of creating custom collections because, frankly, it was a pain in the butt to subclass from CollectionBase and then re-implement the same code over and over again. For custom control development, in ASP.NET especially, I find that Generic collections work really well when you need collection properties.
You simply use List, or one of the specialized Generic collections, as a property of the control and you’re done. Visual Studio sees the collection and, in most cases, provides the collection editor for you. Use of Generic Lists also makes it easy to replace many ArrayList based lists with strongly typed lists, which will often make code cleaner.
Finally, use of dynamic type replacement in business objects removes the need for the funky initialization code that had to be specified in each business object to specify which entity type is related to it. Prior to Generics a small stub of code was needed to tie business object and entity together, now with generics that code is gone in favor of a single generic type parameter. Once done, all class-level code can use that generic class description to automatically generate the correct types at runtime. This has removed a large chunk of cut-and-paste code from all of my business objects in favor of one type parameter and a couple of parent class level methods. There are many more places where Generics have had a similar effect on my code and I find it very hard to revert to not using Generic types and, especially, collections.
DJA: Generics is going to be most prevalent with collections and business objects, but you can even get some benefits using it in base classes for Pages and User Controls. I recently saw Kyle Beyer, one of the prodigies at Telligent, build a generic base page to automatically load business object data and setup our Ajax callback mechanisms for updating those objects. What would have been a headache on each new page suddenly became extremely simple.
Support for embedded resources
I tend to build a lot of custom controls for my applications and occasionally for tools and demos that I put out. Often, these controls require dependent resources, such as images, CSS files, XML Resources and so on. In such cases, any consumers of the control had to remember to distribute the appropriate files with their applications. No longer. Using Web Resources in ASP.NET you can easily embed these resources in a project, and then access them via a dynamic URL that ASP.NET generates. You simply add the [WebResource] attribute to your control’s AssemblyInfo file and then use Page.ClientScript.GetWebResourceUrl to retrieve the URL that contains the resource content.
The Visual Studio ASP.NET code editor
The code editor in Visual Studio represents a huge step up from the 2003 version. The most important new “feature” from my perspective is the fact that the new editor doesn’t automatically “wreck” my code formatting unless I reformat the document. I try to keep my content organized the way I like it, and it was a huge issue for me when VS 2003 would gleefully reformat the HTML whenever new controls were added to the page. In VS 2005, the editor preserves your code formatting in most cases and also does a much better job handling the insertion of control markup into the code.
A really big productivity-enhancer is the incorporation of Intellisense in the HTML editor – it’s everywhere! I frequently embed <%= %> expressions into a page and Intellisense means that I avoid typos. ASP.NET 2.0 also compiles the page and checks the embedded script code generated, so that errors in the HTML markup can be caught at design time rather than runtime.
Intellisense works for all controls, including your own, so no longer do you have to provide an undocumented schema file. Visual Studio will simply find your control and internally manage the Intellisense. The Intellisense support is so good that it almost makes sense to skip the visual designer altogether and work in code. You’ll see why this is actually more important then you might think in a minute.
DJA: As a recent convert from Visual Basic to C#, I am extremely pleased with Visual Studio 2005’s C# IntelliSense support. Previous version of Visual Studio seemed to have a fairly sizable gap in IntelliSense support between VB and C#, and I constantly found myself wanting the VB-like IntelliSense help when I ventured into C# code. Now that gap has been closed, making the language switch even easier.
Lows
The truth is that I could go on and on about features in ASP.NET 2.0 and VS2005 that I now use frequently. Suffice it to say there are many useful and productivity enhancing features in VS2005 that make it a very hard call to have to go back to ASP.NET 1.1 and VS2003.
Nevertheless, not everything about the upgrade is rosy.
I’ll note upfront that some of the “lows” that I cover here – namely deficiencies surrounding page compilation, web projects and deployment – are partially or even fully corrected by a new VS 2005 Web Application Project option that is being developed by Scott Guthrie and his team. The RC1 was released on 5th April and it’s definitely worth checking out.
Visual Studio 2005 is slow, slow, slow – especially for ASP.NET applications
I used to think that Visual Studio 2003 wasn’t exactly a speed demon when it came to bringing up projects and switching views between documents, and especially when switching between code, HTML and Design views in Web Forms. Unfortunately, the performance of the editing environment in VS 2005, especially for context switches between code and the designer, is even worse. It must be pretty bad because now when I go back to VS2003 I’m amazed at just how fast it is!
The main issue seems to be the Web Form editor which is dreadfully slow in rendering controls onto the form. I have several large forms with about 50-70 controls on each, and these forms take upwards of 15 seconds to load – for every context switch. I recently bought a new dual core laptop and everything on my machine now runs “blazing fast”, except Visual Studio 2005, which is the one application that is still not even close to being as responsive as I would like. If you are upgrading, make sure you have a fast machine and LOTS of memory. Running Visual Studio commonly results in nearly 500 MB of memory usage on my machine for Devenv.exe alone. Add to that either IIS’s worker process or the VS Web Server, plus browsers, and you can easily chew through 2 GB of memory.
DJA: Performance is definitely an issue with Visual Studio 2005, but performance is the price you pay for all of the new features. It takes time to load up and render out that user control instead of showing a little gray box, and to display a Master Page with editable content regions inside the IDE. One way I’ve found to reduce the pain is to configure the IDE to open web forms in HTML mode instead of in the designer. IntelliSense support even in HTML mode is awesome, and I find it preferable to use HTML mode if I’m only making one or two minor changes. If you need to do some heavier work, then you can always switch into design mode manually.
Overall, I have not found the performance to be much of a deterrent. On the bright side, it gives you a great reason to tell your boss that you really do need that 64-bit dual core system with 8 gigs of ram that you’ve been coveting. The 24 inch LCD may be a harder sell.
Visual Studio 2005 Web Designer bugs
The biggest time sink in VS2005 has been in dealing with a variety of editor bugs that I and many others have hit. Most of these are non-critical but they are highly annoying and waste lots of time.
My list of small but time sapping little bugs in VS2005 Web Designer is quite long; the following section just highlights a few of the more annoying examples.
Visual Designer refuses to switch focus
The most frequent bug I run into is where the visual designer refuses to switch focus to another control. You click on a new control, but the property sheet stays locked on the first control. You have to switch into HTML view, then back, to get it to work. Alternatively, you can right-click on the new control and select Properties; the focus then changes and you can edit the correct control’s properties.
Renaming difficulties
In some instances, VS won’t let you rename a control, saying that there are already references in the markup and asking if you want to continue. Whether you click continue or not, the rename doesn’t take and you have to switch to HTML view to change the name. Even when renames from the Web designer do work, they are very, very slow as VS does a full refactoring on the new name. In a large project, it can take 10-20 seconds plus a dialog click for the rename to complete. Bear in mind that in HTML view there are no checks at all – I would expect the Web designer to behave the same.
Another annoying one occurs when using stock collections and the default Collection Editor. If you try to enter a custom name in place of the default item name, VS2005 simply doesn’t save the changes. In fact, it simply blows away the properties and items you’ve entered. You have to use the default name, or change the name in HTML view; then you can edit the item under its new name with the Collection Editor.
In any event, these days I just drop controls onto the form and move them into place, then use HTML view for most of my property assignments – at this point it’s simply more efficient to work this way. But it shouldn’t have to be this way.
Doubling up of style sheet references
If you have style sheet references in the page, VS2005 doubles up the style references randomly. I recently opened a page that had 10 of the same CSS references in the file, only one of which I had added.
HTML validation “Errors”
Another issue that’s not a bug but an odd implementation of a feature: VS 2005 flags a HTML validation error as an Error. A validation error is grouped in the same error list as a break-the-build error. Not only is the HTML validation overly anal with what it flags (example: it doesn’t understand named colors), but it actually obscures ‘real’ application errors. It’s not uncommon for me to have 20-30 HTML ‘errors’ and 1 code error. The code error will be at the bottom where it’s the least useful and there’s no visual separation. Validation is useful, but it needs to be separated and easily toggled on and off, ideally with a separate error tab. You can turn off HTML validation altogether, but the option is buried in the slow Tools | Options dialog in the Html Editing section.
DJA: As a side note, HTML validation errors only display when the page on which the error occurs is open in the IDE. If you find yourself wading through a swath of HTML errors, you can close down your open windows to weed them out. Annoying? Yes. But it gives you an alternative if you don’t want to turn off validation errors altogether.
ASP.NET page compilation
I mentioned earlier that the new file-based Web Project model is nice for development. In order for that model to work, ASP.NET has introduced some fairly radical changes to how web applications are compiled and deployed. Specifically, by default, each ASP.NET page in a project compiles into a single assembly. On the plus side, this makes it possible to unload each page individually so you can re-run it after making a change. On the down side, it means that each of these assemblies is dynamically generated and cannot be easily referenced from within your applications. In other words, it’s very difficult to get a strongly typed reference to another Page or UserControl in your application, because there’s no known class name to which you can cast.
This proves to be problematic if you have subclassed Page classes whereby one Page inherits from the stock behavior of another, based on an ASPX file. In ASP.NET 1.x, you simply referenced the first class with its namespace and classname. In 2.0 there’s no namespace, since the name of the class is dynamically generated and doesn’t exist in the same assembly.
The biggest problem with this behavior is manifested in applications that rely on dynamically loaded user controls. It’s nearly impossible to get a reference to a user control that is not explicitly referenced using the <%@ Register %> directive. In most scenarios using the <%@ Register %> directive does the trick, as long as you know exactly where you’re loading the control from – you have to specify the path of the user control in the @Register command. The Register tag then forces the Page assembly to reference the related assembly for the control.
However, if you’re dynamically loading a user control at runtime it may be that no Register directive is in place. So while you can load the control with Control.LoadControl(), there’s no way to cast it to the control type. A highly visible example of this is DotNetNuke, which heavily relies on user controls, as well as .Text (which I use for my weblog), both of which don’t run in ASP.NET 2.0 out of the box.
You might expect that if you write some code like this, it should both compile and run as expected:
MyUserControl_ascx x = (MyUserControl_ascx)LoadControl("~/UserControls/MyUserControl.ascx");
x.MyProperty = "hello, world";
Controls.Add(x);
However, it depends on how the app is compiled. If you compile per directory and the control is in the same directory then this works, but if you compile a single page then it won’t work. I think this is what VS 2005 does during development because it works with the control in the same directory. Move it into a different directory and it no longer works. You’ll get an invalid cast on the control.
DJA: We have experienced a number of issues pre-compiling applications that reference user controls by class. Our workaround was to implement interfaces for the controls instead of using the User Control classes directly. Another caveat to pre-compilation is that when you call LoadControl("~/UserControls/ControlName.ascx") the .ascx file you reference may only be a marker file. A marker file contains the text "This is a marker file generated by the pre-compilation tool, and should not be deleted!" Naturally, the LoadControl method can’t do much with that, so it fails if you try to load that user control. You have to tell the pre-compiler (aspnet_compiler.exe) to make the application updateable (via the -u command-line parameter). This tells the aspnet_compiler to actually write out your entire .ascx file, and not just the marker file it normally creates.
This is not a very common scenario, but there are a number of highly visible products that use dynamically loaded user controls in 1.1 to handle theming of applications. In those scenarios controls are loaded out of potentially unknown directories and there’s no way to get any sort of @Register or @Reference directive into the page to get a reference to the control. This particular scenario is more of a problem for migrations from 1.1 as many of these scenarios can be addressed with ASP.NET 2.0 Themes and Master Pages (for example:), but if you have to port a project that uses these techniques there are no easy workarounds.
There are a few workarounds, from creating base classes and interfaces that provide the published interface, but none of them do much good for existing applications. Fortunately, the Web Application Project option mentioned above addresses this concern. With that option everything is compiled into a single assembly – so direct references across pages/user-controls are allowed.
ASP.NET Web Projects aren’t real projects
ASP.NET projects aren’t real VS2005 projects – they’re a custom project type that, by default, is based on the directory structure in the operating system file system. If you open an ASP.NET Web Project it pulls in any and all files below that project folder. This means you really have no control over what gets pulled into your ‘project’.
As a particularly disastrous example of the problem with this, it took about 20 minutes to open the root for my own web site in VS. It sits on top of a large number of virtuals; there are probably 50,000 files below the Web root. When I opened the root it pulled in everything – images, XML files, doc files, utilities, the lot…
File projects have no way of excluding anything. Any file in the tree becomes part of your project, be it an image, support file, configuration utility, or documentation file, and regardless of whether it really has much to do with your project. The reason for this is that there really is no project. A few things like the path mapping etc are stored with the solution, but overall there’s none of the typical project configuration that you see in class projects.
ASP.NET 2.0 compiles projects using a new ASPNET_COMPILER utility which copies everything to a ‘deployment’ directory. So not only is your project huge, but a full compile takes 30-40 seconds, which is just too long during development. The problem is so bad that I rarely compile my web project. Most of the time I work on a page, test run it and move on; rarely do I do a full compile. The compile cycle is simply too slow.
Any code that lives outside of CodeBehind pages but is stored alongside the ASPX files needs to reside in the APP_CODE folder. Everything in APP_CODE is compiled, and it can contain plain classes just as ‘stock’ projects in VS 2005 can. There’s no stock AssemblyInfo file, no support for XML comments and no direct MSBUILD support for a Web project; beyond the file-inclusion behavior described above, the project is still not a real project.
To Microsoft’s credit, the Web Application Projects add-in also addresses most of the concerns I’ve raised here. It provides VS 2003 style Web projects (with some enhancements for the 2.0 functionality including file based project access).
Deployment of ASP.NET applications
Deploying ASP.NET applications got a whole lot more complicated with ASP.NET 2.0. In ASP.NET 1.x deploying applications was pretty simple: you compiled your project and you ended up with ASPX pages and a single assembly of the compiled CodeBehind files that you had to deploy to the Web Server.
In ASP.NET 2.0, you now have many, many options for deploying your applications, many of which overlap, none of which are simple, and none of which produce a repeatable install – except the uncompiled option, which copies all files, source and all, to the server and compiles everything on the fly.
The uncompiled option is the simplest, as it is an in-place mechanism that allows you to copy exactly what you have on your local machine to the server. But this install mechanism requires source code to be installed on the server, which is not the best of ideas for security and code protection.
All other mechanisms use the new ASPNET_COMPILER utility to compile projects into a deployment folder. Options exist to compile every page into a separate assembly or to group all pages into a single assembly per directory, but none of the options produce a single assembly to deploy. In fact, there are lots of files to deploy, including stub files and assemblies for APP_CODE and Master Pages.
Microsoft apparently intended people to completely redeploy their applications every time an update is made.
The bottom line is that, with this new mode, you can’t update a site by simply copying one file. You can deploy compiled assemblies for each page, which creates one assembly per page, stored in the BIN directory. That would be OK, except that every time you run ASPNET_COMPILER, the timestamps and file names change, so the next time you upload you have to clean up the old compiled files. Options allow you to compile both ASPX and CS files and – you guessed it – these change names on every compile, so the number of files is no smaller with this installation. In both cases additional assemblies are created for the APP_CODE directory, Master Pages and a few other separated files – all of which are dynamically named. The filenames generated for all of these installs change, so the builds are not repeatable.
The worst thing about this deployment is that there’s no quick and dirty way to deploy with the stock options – you pretty much have to copy all the files in the BIN directory to server every time and you have to remove existing files on the server before you copy them back up. And while you’re copying files to the server your application is in a very unstable state as there are many files which can become out of sync while uploading. In short, live updates are not really possible on a busy site.
Again, Microsoft has heard the message from developers, although a little late. The Web Deployment Projects tool builds on top of the ASPNET_COMPILER functionality, and not only can it combine all the generated assemblies into a single assembly, but it can also create stub files that have consistent names, so you can create a repeatable deployment image. If you use this tool to build, it is possible to update your application on the server by simply overwriting the one or more assemblies it generates.
The tool also allows custom build actions to modify Web.Config files easily for deployment on the fly. This add-in provides a new VS project type and you can use MSBUILD to customize the process. I’ve been using the command line version of the tool for my deployment needs and it works much better than the stock behaviors. It makes the process much more manageable.
With all these confusing choices available, I created a GUI front end to ASPNET_COMPILER and the Web Deployment Projects called the ASP.NET 2.0 Compiler Utility. The tool lets you experiment with the various compilation modes, save the configuration, and create batch files, and it includes support for the Web Deployment Projects tool.
Will Microsoft do the right thing?
As you can tell, there are a number of things that bug me about ASP.NET 2.0 and Visual Studio 2005. In my estimation, the productivity and feature gains are offset by the slowness and bugs, in terms of net productivity. But with a little effort on Microsoft’s part it could be so much better.
However, it’s not there yet. One look at the faces of developers I stand in front of at presentations when I describe some of these issues says it all – “this stuff looks like Beta software”. It happens all the time. Consider some of these specific comments:
“I hate to bite the hand that feeds, but I have to say that unless we can put the R back in RAD, the future of ASP.NET looks bleak. I have found ASP.NET in its VS 2003 incarnation to be cumbersome enough, but VS 2005 is an even tougher sell if I have to factor in a slower IDE and additional deployment complexity into my time estimates”
“The new ‘nightmare’ deployment, in addition to the sluggish interface of the IDE, inability to exclude files/directories, among other things has been a deal breaker at my company. We just decided to roll back to VS2003.”
I can’t tell you how many people have asked me whether they should upgrade to VS2005 and whether it’s ‘good enough’. I’d say yes it is, but don’t expect a bug free environment in Visual Studio. And it will be a real pity if it fails to fulfill its full potential because of these problems because, overall, there are many, many things that make life a lot easier for developers.
On the whole the process of migration has gone well and I’m reasonably happy working with VS2005 and ASP.NET 2.0. I’d be really hard pressed to want to go back to 1.x. There are problems, but most of them are annoyances rather than deal breakers. And the most significant ones – project management and deployment – are being addressed by the forthcoming Web Deployment Projects and Web Application Projects tools that Microsoft already has out in late Beta now. I’m using both and they are stable enough now to use for production work. And both will be rolled into the next service release of Visual Studio, so there’s nothing non-standard about using them. The key is to make sure that developers are aware of them.
In addition, Microsoft is aware of the issues I’ve highlighted in this article and I’ve seen many Microsoft developers get involved in blogs and message boards to address them. So if and when a Service Release for VS ships, it may finally become the environment Microsoft promised it would be for so many years.
DJA: I definitely agree that things could be better, but what application couldn’t be improved? Minesweeper is close, but I’ve still got a couple of things I’d like to see changed in that too. So I doubt the future of ASP.NET looks bleak. Deployment needs some major work, but throw in better project file support, fine-tune the re-factoring, get rid of some of the buggy Visual Designer functionality, and it’s going to be even more solid for development.
I’m running a 3.2 GHz single-core processor with 2 gigs of RAM, and I’m not seeing a significant enough decrease in performance to offset the gains in productivity.
Just a quick note today that if you want to create a mutable Scala array — particularly an array that can grow in size after you first declare it — you need to use the Scala
ArrayBuffer class instead of the
Array class, which can’t grow.
Here’s a short example that shows how to instantiate an
ArrayBuffer object, then add elements to it:
import scala.collection.mutable.ArrayBuffer

var fruits = ArrayBuffer[String]()
fruits += "Apple"
fruits += "Banana"
fruits += "Orange"
Once you have an
ArrayBuffer object, you can generally use it like an
Array, getting elements like this:
println(fruits(0))
getting the array length like this:
println(fruits.length)
and so on. You can also cast an
ArrayBuffer to an
Array using its
toArray method.
Again, the only trick here is knowing that you can’t add elements to a Scala
Array, and therefore, if you want to create an array that can be resized, you need to use the
ArrayBuffer class instead.
CodePlex: Project Hosting for Open Source Software
Hi guys,
I am using blogEngine.net 1.5 and am having some crazy problem with jQuery that I just cannot figure out.
I've tried all of the tips I could find on the net - $j = jQuery.noConflict();, etc, but no luck, it's stressing me out now because I am in a timebox with a project. Any help would be most appreciated.
Problem:
jQuery code called on specific pages (default.aspx, post.aspx, and other static pages) does not execute - even with $j. This is the kind of snippet used on post.aspx, and some of our statics - we fire off a function in
our common.js file that does operations relevant to that page.
<asp:Content
<script type="text/javascript">
/*<![CDATA[*/
Namespace.Common.setCurrentPage("articles"); // fires, runs JS, but not jQuery code
alert("a javascript alert"); // JS code works
$j("body").addClass("test-default"); // jQuery does not work
/*]]>*/
</script>
</asp:Content>
The main JS file, is as follows. I've changed the namespace name to 'Namespace' for this example.
/*
* Javascript for Namespace.website.com
*
*
* Date: 05-07-2009
* Author: Phil Ricketts
*/
var $j = jQuery.noConflict(); //It was worth a shot
var Namespace= window.Namespace|| {};
// Common functions.
Namespace.Common = function() {
return {
handleError: function(msg) {
alert(msg);
},
hideFocus: function() {
$j("a").click(function(e) {
this.blur();
});
},
currentPage: function() {
var url = document.URL;
var docExt = url.substring(url.length, url.lastIndexOf('/') + 1);
var doc = docExt.substring(0, docExt.lastIndexOf('\.'));
var docPure = doc.replace(/-/g, "").toLowerCase();
//alert("url: " + url + "\r\ndocExt: " + docExt + "\r\ndoc: " + doc + "\r\ndocPure: " + docPure);
$j("ul.niche li." + docExt).addClass("current"); //niche
$j("#nav-top li." + doc.toLowerCase()).addClass("current"); //main navigation highlights
$j("#nestedcategorylist li." + docPure).addClass("current"); //makes main and sub cats current, if current page
$j("#nestedcategorylist li." + docPure + " ul.sub").removeClass("hide"); //shows sub cats
$j("#nestedcategorylist li." + docPure).addClass("current"); //shows sub cats
$j("#nestedcategorylist li ul.sub li." + docPure).parent("ul").removeClass("hide").parent().addClass("current"); //makes sub cat parent current
if ($j.exists("#nestedcategorylist li." + docPure)) { $j("#nav-top li.articles").addClass("current"); }
},
setCurrentPage: function(page) {
page = "#nav-top ." + page;
$j(page).addClass("current"); //why does this not work
},
init: function() {
Namespace.Common.hideFocus();
Namespace.Common.currentPage();
}
}
} ();
// FAQ page functions.
Namespace.Faqs = function() {
return {
setupFaqs: function() {
/*
$j("#page.faqs div.faqitem p").hide();
$j("#page.faqs div.faqitem").each(function(index) {
this.addClass("test" + index);
});
*/
},
init: function() {
Namespace.Faqs.setupFaqs();
},
test: function() {
alert("test is working");
$j("body").addClass("test-using-j"); //no work - why?
$("body").addClass("test-using-dollar"); //no work
jQuery("body").addClass("test-using-jquery"); //no work
}
}
} ();
// DOM ready
$j(function() {
Namespace.Common.init();
});
// Extend jq fn
$j.exists = function(selector) { return ($(selector).length > 0); }
So, on JS load, everything fired on Namespace.Common.init(); works fine as expected.
When I try to call a function like setCurrentPage(); from a static page, it executes the function, but doesn't execute any jQuery code.
Please offer any advice, it's really stumped me - I'm not a JS expert.
Thanks,
Phil
So are you getting an error, or is it just not doing anything? At run time, what is the value of $j? Has it correctly identified the jQuery alias?
Are you using Firefox with Firebug?
morley wrote:
So are you getting an error, or is it just not doing anything? At run time, what is the value of $j, has it correctly identified the jQuery alias.
The jQuery code does nothing; 'natural' javascript works fine. Everything in currentPage(); works using $j, and $j's functions show up correctly under DOM in Firebug.
Are you using Firefox with Firebug?
I swear by it.
OK cool, so from the code you've posted you're trying to find an element like
#nav-top .articles
Put a breakpoint in the setCurrentPage function on line $j(page).addClass("current"). Refresh your page so code execution runs, you should hit the breakpoint. Leave code execution there and switch to the console tab
Run a few commands in the console tab; let's see if jQuery is functioning OK and can find your element. First, let's find the body. Next to the >>> command prompt, run
jQuery('body'), it should evaluate and return the [body] response. Can you confirm you are finding that OK? At the very least, it should return empty square brackets, which indicate jQuery didn't find anything
If that was OK, try looking for a specific named element, e.g. run $j('#nav-top .articles').html()
or whatever, for an element in your DOM you definitely know exists.
Does console window find your element OK?
Just trying to narrow down where it's failing. Once you've finished in console window, press F8 to complete code execution
@morley, thanks so much for your advice - I'll get to this asap.
In the meantime, I've got this error (in IE8) which still leaves me clueless. I tried getting a new copy of jQuery, but the same thing happens:
That's just saying jQuery is throwing an exception, probably because you've given it an element that's invalid for the operation. jQuery will throw an exception if say you pass it null and it's expecting an object to iterate over.
Put a try->catch around the offending code to see what the exception is, e.g
try {
// Your code that is failing
}
catch (e) {
alert(e.message);
}
This might give you a more descriptive error message so you can see what's going wrong. My problems with jQuery are always self inflicted, usually things like not finding the elements I'm expecting on the page.
I find the debugging technique I described in the previous post quite useful, it lets you evaluate jQuery statements and analyse your DOM. The try catch can be helpful, but a few breakpoints in the right locations will always be more useful :)
this might seem dumb, but is your code being run once the DOM is ready? $(document).ready( function () { ... });
hope it helps ...
Some months ago, I was facing a problem of having to deal with large amounts of textual data from an external source. One of the problems was that I wanted only the english elements, but was getting tons of non-english ones. To solve that I needed some quick way of getting rid of non-english texts. A few days later, while in the shower, the idea came to me: using NLTK stopwords!
What I did was, for each language in nltk, count the number of stopwords in the given text. The nice thing about this is that it usually generates a pretty strong read about the language of the text. Originally I used it only for English/non-English detection, but after a little bit of work I made it specify which language it detected. Now, I needed a quick hack for my issue, so this code is not very rigorously tested, but I figure that it would still be interesting. Without further ado, here’s the code:
import nltk

ENGLISH_STOPWORDS = set(nltk.corpus.stopwords.words('english'))
NON_ENGLISH_STOPWORDS = set(nltk.corpus.stopwords.words()) - ENGLISH_STOPWORDS

STOPWORDS_DICT = {lang: set(nltk.corpus.stopwords.words(lang))
                  for lang in nltk.corpus.stopwords.fileids()}

def get_language(text):
    words = set(nltk.wordpunct_tokenize(text.lower()))
    return max(((lang, len(words & stopwords))
                for lang, stopwords in STOPWORDS_DICT.items()),
               key=lambda x: x[1])[0]

def is_english(text):
    text = text.lower()
    words = set(nltk.wordpunct_tokenize(text))
    return len(words & ENGLISH_STOPWORDS) > len(words & NON_ENGLISH_STOPWORDS)
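To see the core idea without downloading the NLTK corpora, here is a dependency-free sketch of the same stopword-overlap trick. The tiny word lists and the `tokenize`/`guess_language` names below are my own illustrative stand-ins, not NLTK's actual stopword corpora, so accuracy is much lower than with the real lists:

```python
import re

# Toy stopword sets for illustration only; NLTK's real lists are far larger.
STOPWORDS = {
    "english": {"the", "and", "is", "in", "to", "of", "that", "it", "with"},
    "spanish": {"el", "la", "y", "es", "en", "de", "que", "los", "con"},
    "german":  {"der", "die", "und", "ist", "in", "zu", "das", "mit", "den"},
}

def tokenize(text):
    """Lowercase the text and keep runs of letters (a crude wordpunct_tokenize)."""
    return set(re.findall(r"[a-zäöüñáéíóú]+", text.lower()))

def guess_language(text):
    """Pick the language whose stopword set overlaps the text's words the most."""
    words = tokenize(text)
    return max(STOPWORDS, key=lambda lang: len(words & STOPWORDS[lang]))

print(guess_language("It is in the house that we met"))  # english
print(guess_language("Es que la casa es de los niños"))  # spanish
```

With the full NLTK stopword lists in place of these toy sets, the overlap counts become much more discriminative, which is why the approach works surprisingly well in practice.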
The question to you: what other quick NLTK, or NLP hacks did you write? | http://www.algorithm.co.il/blogs/programming/python/cheap-language-detection-nltk/?replytocom=64226 | CC-MAIN-2019-30 | refinedweb | 259 | 66.84 |
This article is based on Flex 4 in Action, published on 10 November 2010.
Flex 4 Components Exposed

This article is taken from the book Flex 4 in Action. The authors discuss Spark component architecture and explain how to create custom components. The article also features skinning with the SparkSkin Object.
You’ve used components since the start of this book. They range from controls like the simple Button that accept input from the user, to container components like the VBox. Custom components are “home-brewed” so to speak, and are created by extending the same base classes as the default components that are part of the Flex framework.
Now we’ll tell you about the underlying foundation of the Spark component architecture in Flex 4. Most importantly, you will learn key facets that are specific to the Flex 4 Spark architecture. This will get you on your way toward building your own collection of Spark-based components that take advantage of the Spark architecture.
Spark Component Architecture
If you’ve been a Flex developer for a little while now, or you’ve used a different framework for client side interface creation, you may have heard of the conceptual Model-View-Controller (MVC) architecture. The Flex 4 Spark component architecture puts a little spin on the MVC pattern, and adds a fourth part – the skin. This means that every Spark-based component has the following four pieces to it:
- Model: Properties and business logic are contained here, and are usually written in ActionScript. For the sake of best practice, you really should not be placing visual or behavioral logic along with the model. We will elaborate on that in a moment.
- View: Contains logic used mostly for the layout and positioning of the component on the stage during runtime. This information can be its own ActionScript class, but it is usually more logical to either: A) abstract the logic into a base class so it can be reused by other components, or B) add it to the component’s controller class.
- Controller: This is responsible for defining the behavior of the component, the states that are contained within it, and provides definitions of the sub-parts that are declared in the Skin class using variable names. This typically comes in the form of ActionScript.
- Skin: Declares all of the visual elements that make up the component. This is usually an MXML file (largely thanks to FXG), and is tightly coupled with the Controller.
There is currently a lot of debate on what we will explain next, and you will ultimately have to exercise best judgment using a combination of your logical reasoning skills, along with the knowledge that you are gaining in this article.
As you saw a moment ago, there are four pieces that make up the Spark component architecture. Where this gets confusing is that you will often only see two classes for these four parts, one in ActionScript and the other MXML. The ActionScript class typically wraps the model, view, and the controller pieces into a single class, while the MXML is strictly meant to act as the skin for the respective component. According to the Flex 4 documentation:
“The general rule is any code that is used by multiple skins belongs in the component class, and any code that is specific to a particular skin implementation lives in the skin.”
This makes sense in theory, but does not usually provide enough of what we often refer to as separation of concerns. This is why you might notice that the better Flex developers are using three layers for their components:
- Component: The controller and the view logic, often written in ActionScript
- Presentation Model: Contains the code for the model.
- Skin: The MXML skin file
With this method, it is reasonable to suggest that this takes things back to the core concept of the Model-View- Controller again; whereas the Component class is the controller, the presentation model class is the model, and the Skin becomes the view element of the MVC architecture.
Let’s put all this theory into practice by taking a look at some real-life situations.
The many flavors of custom components
Creating a custom component means extending a pre-existing class that either directly or indirectly extends one of the Flex framework classes. Picture three 2010 Ford Mustangs side-by-side. The first is the base Mustang Coupe with the standard V6 engine. The second is the Mustang GT with a 4.6-liter V8, upgraded wheels, and some other upgrades. The third is a Shelby Cobra, sporting 510-horsepower, and premium sound and navigation system among other upgrades. The second and third mustangs both extend the base mustang, so they inherit all of the properties of the first, and then add their own additional amenities.
Similarly, the Cobra extends the GT, so the Cobra inherits the properties of the GT and the standard Mustang, while adding its additional features. But wait! How could the Cobra and the GT inherit the properties of the standard Mustang if the engines are different? The answer is that the GT overrides the engine property of the standard Mustang, and the Cobra overrides the engine of the GT. The relationship between these three automobiles is illustrated in the simple class diagram shown in Figure 1.
When you extend classes in the Flex framework, you inherit the properties, functions, events, and styles from the class you are extending. This makes code a lot more manageable and is one of the fundamental characteristics of object oriented programming and code reusability. Let’s take a look at two different component types.
Simple vs. Composite
There are two major types of custom components: simple and composite. A simple custom component extends a single, pre-existing component.
The fun doesn’t stop there though. When developing Flex applications, you will find it valuable to group simple components together, keeping your application better organized. You can create components that group disparate components in what’s called a composite component, which is a container that holds any number of additional components inside of it.
MXML vs. ActionScript
When developing custom components, you have the option of creating your component in either MXML or ActionScript. In most cases, either one will do. The main advantage of MXML is that you can usually write the code needed with fewer lines. However, some flexibility is lost in regard to object customization, so advanced custom components are almost always written as ActionScript classes.
Creating your component in MXML is often a popular choice for composite components that are not very advanced in nature. An excellent example of this is a shipping information form. It is useful to have a reusable shipping form component because there are many applications that it can be used for. This means that if the component is created with loose coupling in mind, it can be a highly reusable composite component that can be built using MXML. This approach is the easiest to get started with.
MIGRATION TIP
In Flex 3, MXML composite components were usually based on the Flex Canvas container from the Halo library.
In Flex 4, MXML composite components are created using the MXMLComponent base class or the Group class from the Spark library.
Your second option is to create your component in ActionScript. This technique is more advanced because it requires stronger ActionScript skills. You can override the functions of the component class you’re extending from, and you have fine-grained control.
The advantage of this method is that you have far greater power to turn the coolness factor on your custom component into overdrive. The disadvantage is that if you are building a composite component, you can no longer lay out the controls in Flash Builder’s design view. Advanced developers consider this a small price to pay for the increased flexibility, while others feel that it slows down the development process. Finding the formula that works best for you is just a matter of trial-and-error.
Now that you have learned about the options available, the first custom component you will build is of the simple type.
TIP
When it comes to making custom components, your goal should be to make it simple for other developers to use your component (even if it’s complicated inside). Later, we’ll give you tips on how to achieve this goal.
Creating simple custom components
At the most fundamental level, a viewable component (one which is added to the display list of your application) can be categorized according to the purpose served by the component. This includes: controls, containers, item renderers, effects, and skins.
The first custom component that you will build is an extension of the Flex ComboBox, and is considered a control because it is used to control application behavior by initiating an action or sequence of actions when the value is changed.
Build your own home-grown simple ComboBox
Let’s walk through a simple example. Say your application is geocentric, meaning that pieces of information tend to carry location data. As a result, a few forms require the user to specify an address of some sort. To help your users, you want to provide a drop-down menu of U.S. states, as figure 2 demonstrates.
Wouldn’t it be awesome if any part of your application could display this list by calling a single line of code? OK, that’s a loaded statement, but the example is as simple as this:
Listing 1 CBStates.mxml: ComboBox-based custom component
<?xml version="1.0" encoding="utf-8"?>
<mx:ComboBox xmlns:
    <mx:dataProvider>
        <mx:Object
        <mx:Object
        <!-- the rest of the US states -->
    </mx:dataProvider>
</mx:ComboBox>
In code listing 1, you are presented with an example of simple custom ComboBox control that shows a listing of U.S. states when clicked, and allows selection of a state to be submitted as a data object. This type of component is highly reusable, since it is often needed when creating an address submission form. Listing 2 demonstrates how the component is instantiated with MXML in a Flex application.
Listing 2 Main application file for CBStates
<?xml version="1.0"?>
<s:Application xmlns:
    <local:CBStates/>
</s:Application>
If you save the two files to the same directory and then run the application, you should end up with a drop-down combo box that lists all the U.S. states, as shown earlier in figure 2.
One advantage of extending pre-existing Flex user interface components is that anyone who uses it will know its general properties and behaviors as a result of inheritance. In this case, if they know your simple custom component is based on a ComboBox, then they also know your component supports the same properties, events, styles, and functions as the Flex ComboBox control. Now you will take what you just learned and relate it to the Flex 4 Spark library and architecture.
Simple Spark Components
A visual object on the stage is considered a Spark component when it inherits the Spark component architecture by extending a Spark visual component base class. This includes:
- SkinnableContainer
- SkinnableDataContainer
- Group
- DataGroup
- MXMLComponent
- ItemRenderer
- SkinnableComponent
- Any Spark library component (e.g. a Spark List)
- UIComponent
If you are familiar with Flex 3, you may be wondering about the last item in the list, UIComponent. Technically, all visual components in Flex are subclasses of UIComponent, regardless of the component library that the component comes from.
However, certain facets of a component that extends the base UIComponent class can make it a Spark component. For example, if an ActionScript composite component extends UIComponent as the base class, and then instantiates a Group or SkinnableContainer object to hold a set of display objects, then we would refer to it as a Spark composite component.
Most of the time, custom Spark-based components will extend the SkinnableComponent class, as you will see in the examples that will come later.But first, it’s time for some discussion on skins and how the SparkSkin object fits into the picture.
Migration Tip
Beware! One of the biggest compatibility issues that haunted the Flex 4 beta releases involved placing Spark components inside of a Canvas container. The problem reared its ugly head in some unusual ways, even after it was thought to have been resolved. In one of my own experiences, the problem caused compile errors within Flash Builder beta of type “unknown” and source “unknown”. The source of the problem turned out to be a SkinnableContainer that was instantiated inside of a Canvas. The moral of the story here is: if you find yourself debugging and feeling like you are chasing a ghost, consider checking your application to see if you have any Canvas objects before you start banging your head against the wall. If you do, try switching the Canvas to a Group. If that doesn’t fix the problem, we still suggest leaving the Group in place of the Canvas as we have generally found the Group container to be more reliable and functional than Canvas.
Skinning with the SparkSkin object
The decoupling of display logic is one of the most valuable architectural enhancements in the Spark library.
SparkSkin basics
Imagine you just built an application with a set of five custom components. If your five components each extend one of the classes listed earlier from the Spark library, you can create an unlimited number of skins for each of your five custom Spark-based components and keep them organized in an entirely separate package or library. You can then declare a different default skin for each instance of the same component, or even swap skins on the fly by triggering an event during runtime from a user initiated behavior or sequence. Isn’t that cool? Code listing 3 is an example of a Spark skin component.
Listing 3 Example of a skin for a Spark Button
<?xml version="1.0" encoding="utf-8"?>
<s:SparkSkin xmlns:                                  #A #B #C
    <fx:Metadata>
        [HostComponent("spark.components.Button")]
    </fx:Metadata>
    <s:states>
        <s:State
        <s:State
        <s:State
        <s:State
    </s:states>
    <!-- FXG exported from Adobe Illustrator. -->
    <s:Graphic
        <s:Group
            <s:Group d:                              #D
                <s:Rect
                    <s:fill>
                        <s:LinearGradient
                            <s:GradientEntry
                            <s:GradientEntry
                        </s:LinearGradient>
                    </s:fill>
                    <s:stroke>
                        <s:SolidColorStroke
                    </s:stroke>
                </s:Rect>
            </s:Group>
        </s:Group>
    </s:Graphic>
    <s:Label                                         #E
    </s:Label>
</s:SparkSkin>

#A Skins ALWAYS extend SparkSkin in Flex 4
#B Namespace declaration for Illustrator graphics
#C Namespace declaration for FXG graphics
#D Group object followed by design
#E Label declaration followed by style information
When you create Spark components, you will usually create two classes for every component. The component class holds behavioral logic, such as events that are dispatched by the component, the component’s data model, skin parts that are implemented by the skin class, and view states that the skin class supports.
The skin class on the other hand, is responsible for managing visual appearance of the component and visual sub components, including how everything is laid out and sized. It must also define the supported view states, graphics, and the data representation.
Migration Tip
FXG stands for “Flash XML Graphics”, and in essence, does for MXML what CSS did for HTML. However, FXG has quite a bit more power under the hood, so while its relationship to Flash and Flex components can be loosely compared to the relationship between CSS and HTML, it does not make logical sense to compare FXG directly to CSS. Many of the limitations imposed on CSS do not exist with FXG. Although it may seem strange to compare FXG to CSS, FXG theoretically accomplishes the same thing that CSS does: It separates layout structure and behavioral code from design and graphics code. This makes it easier to create components that can be quickly and easily integrated into any application without even having to look at the component’s code base.
Using Metadata to Bind Component Skins
The component and skin classes must both contain certain metadata in order for them to work properly together.
The component class must:
- Define the skin(s) that correspond to it
- Identify skin parts with the [SkinPart] metadata tag
- Identify view states that are supported by the component using the [SkinState] tag
The skin class must:
- Use the [HostComponent] metadata tag to specify the corresponding component
- Declare view states and define the appearance of each state
- Define the way skin parts should appear on the stage
Let’s take a look at the three basic essentials of a skin component in further detail, starting with the SkinState metadata tag.
Migration Tip:
Note that skin parts must have the same name in both the skin class and the corresponding component class or your application will show compile errors or throw runtime errors.
CUSTOM COMPONENT VIEW STATES
The view states that are supported by a component and its corresponding skin must be defined by placing a [SkinState] tag for every view state. These tags are placed directly above the class declaration, as seen in Listing 4.
Listing 4 View states are defined directly above the class statement.
package
{
    import spark.components.supportClasses.SkinnableComponent;

    [SkinState("default")]                           #A
    [SkinState("hover")]
    [SkinState("selected")]
    [SkinState("disabled")]
    public class SkinStateExample extends SkinnableComponent
    {
        public function SkinStateExample()
        {
            super();
        }
    }
}

#A SkinState tags precede the class declaration
You can now control the state of the component by setting the currentState property within the component declaration in MXML, as seen in Listing 5.
Listing 5 Controlling custom component state
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:
    <s:layout>
        <s:VerticalLayout/>
    </s:layout>
    <s:Label
    <MyComp:SkinStateExample                         #A
</s:Application>

#A Use currentState to set the component's state
While states are defined in a component’s implementation class and controlled in the skin, skin parts are defined in the MXML skin and controlled by the implementation class.
DEFINING SKIN PARTS
The importance of the [SkinPart] metadata tag is another critical facet of understanding the relationship between component classes and skin classes in the Spark architecture. As you will see in a moment, this metadata tag is especially useful for creating custom components.
The implementation of the SkinPart metadata is simple. First, define skin parts in the same way as you would declare objects in a standard MXML file. For example, if one of your skin parts was a Spark Button with an id property of myButton, you would define it with the code:
<s:Button
Next, declare the skin parts in the component using the [SkinPart] metadata tag. Make sure that the variable is typed the same as what you just defined in the MXML skin, and that the id of the object in the MXML skin matches the name of the variable in the component. The component code for the myButton skin part that you just saw is illustrated in Listing 6.
Listing 6 The SkinPart tag binds component variables to MXML skin part definitions
public class CustomComponent extends SkinnableComponent { public function CustomComponent() { super(); } [SkinPart(required="true")] #A public var myButton:Button; } #A SkinPart metadata binding on the myButton variable
Notice the use of the required property after the SkinPart metadata tag is declared. Make sure you declare this property because required is – pardon the pun – required.
You have now learned two of the three essentials to Spark skinning. The third essential element is tying the component implementation to the skin using the HostComponent metadata.
WARNING
In order for your skin part bindings to work, you must be sure to type the respective variable the same as the
MXML object definition and the variable must be named the same as the id attribute that is set on the respective
MXML object definition.
DECLARING THE HOST
The last of the three basic essentials to using skins with custom Spark components is the declaration of the host component in the skin. To accomplish this, you use the [HostComponent] metadata tag. An excellent example of this was provided in the example skin at the beginning of this article, in Listing 3. The code from listing 3 that is used for the declaration of the host component is provided again in Listing 7.
Listing 7 Use HostComponent metadata to bind a skin to a component class
<fx:Metadata> [HostComponent("spark.components.Button")] </fx:Metadata>
Speak Your Mind | http://www.javabeat.net/flex-4-components-exposed/ | CC-MAIN-2014-35 | refinedweb | 3,392 | 50.16 |
#include <Request.h>
#include <Request.h>
Collaboration diagram for CORBA::Request:
Provides a way to create requests and populate it with parameters for use in the Dynamic Invocation Interface.
[private]
[static]
Pseudo object methods.
Set the byte order member.
Get the byte order member.
Set the lazy evaluation flag.
Return the arguments for the request.
Return a list of the request's result's contexts. Since TAO does not implement Contexts, this will always be 0.
Mutator for the Context member.
Accessor for the Context member.
Return the exceptions resulting from this request.
Callback method for deferred synchronous requests.
Perform method resolution and invoke an appropriate method.
If the method returns successfully, its result is placed in the result argument specified on create_request. The behavior is undefined if this Request has already been used with a previous call to invoke>, send>, or .
create_request
Request
invoke>
send>
Return the operation name for the request.
Accessor for the input stream containing the exception.
Proprietary method to check whether a response has been received.
Return the result for the request.
Returns reference to Any for extraction using >>=.
Send a oneway request.
Initialize the return type.
Return the target of this request.
[friend]
Parameter list.
Can be reset by a gateway when passing along a request.
List of the request's result's contexts.
Context associated with this request.
List of exceptions raised by the operation.
Invocation flags.
If not zero then the NVList is not evaluated by default.
Protect the refcount_ and response_receieved_.
Operation name.
Pointer to our ORB.
Stores user exception as a CDR stream when this request is used in a TAO gateway.
Reference counting.
Set to TRUE upon completion of invoke() or handle_response().
Result of the operation.
Target object. | https://www.dre.vanderbilt.edu/Doxygen/5.4.6/html/tao/dynamicinterface/classCORBA_1_1Request.html | CC-MAIN-2022-40 | refinedweb | 289 | 54.79 |
It's not exactly easy getting into Flux. There's a lot of terminology, and a lot of syntax. Not to mention all the different libraries. While it's undoubtedly very useful, for someone new to React it's a lot to take in.
Which is fine. Flux is really meant to solve problems for "big" apps. When your UI is handling multiple events and transforming chunks of data.
In this article we'll explain how we got to using Fynx, and then show some example code similar to how we used in in our Hacker News app.
Prelude
Nowadays, beyond rolling your own, there's a ton of libraries for Flux. From Fluxxor to Reflux to Flummox, each brings something unique.
At the time we were building our app Fluxxor seemed the best documented and supported. But, it was still verbose. After a week, out of a desperate want to reduce all the boilerplate we were writing, we extracted a library on top of it. It was flexible, and it simplified things. But, now we were stuck with another dependency.
See, around the same time as Flux was announced, we'd been reading into Om, Omniscient and the recently released Immutable.js. Immutable structures are awesome, and cursors seemed awesomely simple. But, you still want to coordinate your actions somehow.
With Fynx, we got just that, and nothing more. Despite it's awkward ASCII diagram, it actually reduced Flux conceptually to a single thing: actions. How? Lets take a look.
Stores
Well, really just a store. Here's your entire store.js file, with Fynx:
import { createCursorStore } from 'fynx'; import { fromJS } from 'immutable'; module.exports = createCursorStore(fromJS({ dogIds: [], dogs: {} });
That's all. Use it's keys as you would normal Flux stores. There's no store methods, no waitFor, just a single cursor.
Actions
The power of Fynx is in its actions. Our actions.js:
import { createAsyncActions } from 'fynx'; var Actions = createAsyncActions([ 'loadDogs' ]);
Note: the async actions in Fynx just means it will chain with promises. It's nice in some cases, but not a requirement.
And then, we can make our dogActions.js:
import ../../../../2015/03/11/Simplify-Flux-with-Immutable-js-and-Fynx/Actions from .css'./actions'; import store from './store'; // we fetch the ordered array of dogs // then grab their individual data Actions.loadDogs.listen(opts => getDogsListFromAPI(opts).then(res => { store().set('dogIds', res); getDogsData(res); }) ) function getDogsData(res) { res.map(id => { getDogAPI(id).then(dog => { store().setIn(['dogs', dog.id], dog) }) }) } function getDogsListFromAPI() { return Promise.resolve([1, 2, 3]); } function getDogAPI(id) { var data = { 1: { id: 1, breed: 'Jack Russell' }, 2: { id: 2, breed: 'Shih Tzu' }, 3: { id: 3, breed: 'Pitbull' }, } return Promise.resolve(data[id]); }
Some nice things about this:
- All our dog actions are in one place
- I can chain my actions together as much as needed
- Simple functions everywhere
Linking it to React
In our top level Dogs class, lets grab our store and pass it down. We can also grab the store at any level of our app just by importing it.
import store from './store'; import ../../../../2015/03/11/Simplify-Flux-with-Immutable-js-and-Fynx/Dog from .css'./Dog'; module.exports = React.createClass({ render() { var dogIds = store().get('dogIds'); return dogIds.map(id => var dog = store().getIn(['dogs', id]); return <Dog data={dog} />; }); }); });
And lets say in our Dog component, we can either respond to data simply:
this.props.data.set('name', 'Scruffy');
And our store will update, along with our UI. But, this isn't Flux. Lets say we add an action in our dogActions file that reverses our dog list. After we've added the action name in our actions.js we can do this:
Action.reverseDogs.listen(() => store().update('dogIds', dogs => dogs.reverse()) );
And anywhere in our React tree we could then call:
import ../../../../2015/03/11/Simplify-Flux-with-Immutable-js-and-Fynx/Actions from .css'./actions'; Actions.reverseDogs();
shouldComponentUpdate
The final step is to optimize our components shouldComponentUpdate now that we have immutable data throughout our app. Omniscient gives us a really nice one that works out of the box with Immutable.js.
import ../../../../2015/03/11/Simplify-Flux-with-Immutable-js-and-Fynx/shouldComponentUpdate from .css'omniscient/shouldupdate'; React.createClass({ shouldComponentUpdate, render() { //... } })
In Reapp, I use a decorator so I don't have to manually mix it in on every class.
In action
Update: Thanks to snickell, we have a working demo with this code here. Check it out to see a simple example of this in action!
Want to see an app using this Flux technique in production? Download it in the iOS app store and check out the code on GitHub. | http://www.reapp.io/2015/03/11/Simplify-Flux-with-Immutable-js-and-Fynx/ | CC-MAIN-2018-39 | refinedweb | 780 | 59.8 |
Listening to the Vital Signs of TDD
Jan Van Ryswyck
Originally published at
janvanryswyck.com
on
・6 min read
People who know me personally know that I like to go for a long run on a regular basis. I sit and work at a desk all day. So as part of the work I do, labouring the codes, I go for a one or two hour run every other day. I already wrote about this in the past.
Besides the obvious benefits one gets from physical exercise, I also learned something quite valuable. I learned how to listen to my body. Every runner knows that running long distances involves some kind of pain. Over time, one develops a certain threshold for enduring this suffering. I know this doesn’t sound like much fun, but it really is. You can take my word for it 😀.
But over time I learned how to read the signals that my body sends me. These signals can be very subtle. How easy is it to breathe and get air? How are my legs feeling? Do my muscles feel strained? If so, to what degree? I’m constantly evaluating. For example, after a minute or two I know exactly whether I’m able to go for a fast run or that I should take it rather slowly. Even on a number of occasions, I was able to predict upcoming health issues or injuries. In such case, I have the option to reduce the intensity of a workout or even stop altogether. In any case, when something’s up, I have to act accordingly and prevent worse.
I have a similar experience when I’m writing code using Test-Driven Development. I write a small, failing test. I make the test pass as quickly as possible. And then the most important step: I refactor. Very short, successive cycles where each cycle shouldn’t take up more than just a couple of minutes. But why do I point out the “Refactor phase” as the most important one? Because this is the moment where I listen to what the unit tests are trying to tell me. Just as listening to the signals of one’s body is the most important part of running long distances, the same holds true for the whole Test-Driven Development process. In order to write sustainable software, we as developers have to learn about how to be receptive to these signals. But what exactly should we listen for?
Experienced developers often tell you that unit tests drive the design, and in a sense that’s true. But there’s an important step that comes before that. First and foremost, unit tests provide very valuable feedback about the design of the system. From this feedback, design decisions start to emerge. Test-Driven Development can be very unforgiving when the code quality of the system under test is quite poor. When it takes a long time to write a single, failing unit test, then it’s already telling us that we need to take some things into consideration during the refactor phase. From the “Red, Green, Refactor” cycle, the “Red” and “Green” stages should move as quickly as possible. The “Refactor” stage can take bit longer.
This is usually the point where newcomers to Test-Driven Development are put off by this discipled practice. The “Refactor” stage is oftentimes reduced or skipped altogether. And as soon as it becomes difficult to write and maintain unit tests, they shoot the messenger. They blame the test themselves instead of listening and blaming the design of the system being tested. Just as newcomers to running often blame the excessive pain they endure instead of just acting according to the signals their body is sending them along the way. Bad design of the system leads to brittle unit tests of poor quality.
Here’s a small example to illustrate this.
public abstract class CustomerHandlers { public void Handle(RemoveCustomer command) { // Remove a customer ... } } public class RegularCustomerHandlers : CustomerHandlers { public void Handle(CreateRegularCustomer command) { // Create a new regular customer ... } } public class RegularVipCustomerHandlers : CustomerHandlers { public void Handle(CreateVipCustomer command) { // Create a new VIP customer ... } }
We have a system that models two different types of customers: a regular customer and a VIP customer. Creating one of these involves different kind of business logic, but removing a customer is the same for both types. The developer that implemented this functionality decided to create a different handler class for each type of customer. These specific handler classes in turn derive from an abstract base class that provides the implementation for removing a customer. Let’s have a look at the unit tests.
[TestFixture] public class RegularCustomerHandlersTests { [Test] public void TestScenario01ForRemoveCustomer() { // Test scenario 1 for removing a customer } [Test] public void TestScenario02ForRemoveCustomer() { // Test scenario 2 for removing a customer } [Test] public void TestScenarioForCreateRegularCustomer() { // Test scenario for creating a regular customer } } [TestFixture] public class VipCustomerHandlersTests { [Test] public void TestScenario01ForRemoveCustomer() { // Test scenario 1 (duplicate) for removing a customer } [Test] public void TestScenario02ForRemoveCustomer() { // Test scenario 2 (duplicate) for removing a customer } [Test] public void TestScenarioForCreateVipCustomer() { // Test scenario for creating a VIP customer } }
A test fixture has been used for both concrete handler classes. But notice that both test fixtures contain identical unit tests for removing a customer. This is one example where the design of the production code somewhat looks reasonable for a developer, but where the tests are claiming otherwise.
Suppose that we need to make a change to the functionality of removing a customer. If we want use Test-Driven Development, which one of these unit tests should we change first. Those in the RegularCustomerHandlersTests, or in the VipCustomerHandlersTests or both? This smells rather fishy.
Also the developer must have noticed that it somehow wasn’t that easy to write unit tests for the removal functionality. The abstract base class cannot be instantiated, so a concrete class must be used in order to invoke the Handle method. Which one should be chosen? The RegularCustomerHandlers class or the VipCustomerHandlers, or maybe a creating a third one specifically for testing? In the end, probably one of the two has been chosen. In order to make up for the bad feeling, the unit tests for removing a customer have been copied over to the test fixture of the other handler once its functionality has been finished.
And this is what we end up with when we do not properly pick up the signals that unit tests are broadcasting. A slightly better design could be the following:
public class RemoveCustomerHandler { public void Handle(RemoveCustomer command) { // Remove a customer ... } } public class CreateRegularCustomerHandler { public void Handle(CreateRegularCustomer command) { // Create a new regular customer ... } } public class CreateRegularVipCustomerHandler { public void Handle(CreateVipCustomer command) { // Create a new VIP customer ... } }
Here we have a specific handler class for each command. Likewise the unit tests now look like this:
[TestFixture] public class RemoveCustomerHandlerTests { [Test] public void TestScenario01ForRemoveCustomer() { // Test scenario 1 for removing a customer } [Test] public void TestScenario02ForRemoveCustomer() { // Test scenario 2 for removing a customer } } [TestFixture] public class CreateRegularCustomerHandlerTests { [Test] public void TestScenarioForCreateRegularCustomer() { // Test scenario for creating a regular customer } } [TestFixture] public class VipCustomerHandlerTests { [Test] public void TestScenarioForCreateVipCustomer() { // Test scenario for creating a VIP customer } }
Here we have a dedicated test fixture for each handler class.
When it’s difficult to write a unit test, it hints us that the production code needs to be changed. We need to refactor the code so that it becomes easy to test. Production code that is very easy to test, that allows us to write a unit test in just minutes or even seconds, is code that is responsive to change. This is what we should strive for.
But how can we learn to listen? Unfortunately, we can only learn this by doing. Practice, practice and then practice some more. There are plenty of code katas out there. It can not be overstated how important it is to constantly practice outside of the typical work scenarios. But also rigorously apply Test-Driven Development in your daily work as well. The same goes for running. Going out for a workout on a regular basis is how you can learn about yourself, what you’re physical capabilities are, and most importantly what you’re (currently) not capable of. This is how you can improve. Going for longer distances or running faster. This is how you can move forward.
Just as important as learning about Test-Driven Development, is to learn about software design as well. Learning about both at the same time should go hand in hand. When you learn how to pick up the signals from your unit tests, you’re bound to learn something about the design of the code as well. Try out different design approaches. Keep it going. All the time. | https://dev.to/janvanryswyck/listening-to-the-vital-signs-of-tdd-1o | CC-MAIN-2019-43 | refinedweb | 1,460 | 56.25 |
Michele I tried your way but I dont seem to have a good grasp on the concept yet, will read up more for now I think I will try to make it work same way as colors only with decorator as def inside def instead of @, that doesn't make sense quite yet -Alex Goretoy On Sun, Mar 15, 2009 at 3:12 PM, alex goretoy <aleksandr.goretoy at gmail.com>wrote: > this is what I did to define all my color functions by color name, but I am > still going to need a good solution for args > > #import functions by color name into current namespace >> for color in self.colors.keys(): >> setattr(self, color,lambda x,y=color,z="INFO": >> self._hero(x,y,z) ) >> > > Thanks to all of you for helping me learn python > > -Alex Goretoy > > > > > On Sun, Mar 15, 2009 at 1:05 PM, alex goretoy <aleksandr.goretoy at gmail.com > > wrote: > >> this: <> | https://mail.python.org/pipermail/python-list/2009-March/528975.html | CC-MAIN-2016-30 | refinedweb | 156 | 68.7 |
$ cnpm install rearguard
npm i -g rearguard mkdir my-new-app cd my-new-app rearguard init browser app npm start
index.tsxentry point.
import "react"; import "react-dom"; import "mobx";
For create DLL bundle you should run
npm run build, after that you will have DLL bundle and you can run
npm start.
Rearguard is a set of tools for developing client-server applications in which the code base is developed in a mono repository. This doesn't exclude the possibility of working in a familiar way, using separate repositories for the client, server and other libraries. But the way of code organization in the mono repository is considered to be the recommended one.
Rearguard supports the following types of projects: browser (dll, lib, app), node (lib, app), isomorphic (lib, app).
First of all, rearguard covers basic needs:
Second, the rearguard knows a lot about the project and can automatically manage VSCode configurations since VSCode settings are JSON files.
In the third case, the rearguard contains templates for the main project settings such as (
.eslint.json, .eslintignore, .gitignore, Dockerfile, .dockerignore, nginx.conf, .prettierrc, .prettierignore). The rearguard adds these templates to the project and then uses them as settings for Webpack and other users, thus managing configurations such as
.eslint.json. The rearguard allows you to overwrite the settings. If necessary, you can bring the settings to the current default settings, if the rearguard has been updated
rearguard refresh --force.
The rearguard supports two schemes of code organization known as a mono repository and poly repository.
The rearguard also covers a large number of household moments, which eliminates the need to take care of these moments.
The rearguard as a caring parent :-)
true
{ reset: "inherited" }
true
true
true
true
true
{ flexbox: "no-2009", overrideBrowserslist: browserslist }
3
browserslist
postcss.config.js
package.json
Globally, for use in multiple projects.
npm install -g rearguard
Locally, in the project for saves the exact version.
npm install -D rearguard
You can see an example of how to use it in the following projects:
!!! They're not perfect at the moment, but I'll upgrade them to a canonical look as much as I can. | https://developer.aliyun.com/mirror/npm/package/rearguard | CC-MAIN-2020-34 | refinedweb | 360 | 52.09 |
Adapter or driver problems with 82575EB and 2008R2?fxhnb Feb 26, 2010 9:03 AM
Hi intel users,
we have bought 4 servers (Fujitsu RX200S5) with intel network adapter 82575EB to use it as new domain controllers in our AD domain which is currently windows 2003 updated with all neccessary ADPREP commands. Tried to install Windows Server 2008R2 which should be certified on this server hardware. We used vendor certified intel driver version 11.0.103 (14.07.2009).
After promoting server to domain controller all seams to look fine for the first time. But after rebooting the new server there where a lot of errors and warnings in the event-log from NETLOGON, DNS, Intersite Messaging, DFS namespace and Ntp. The NIC itself starts shortly after reboot and states that it has a connection.
The observerd events where (example):
time source event description
--------------------------------------------
12:56:56 e1qexpress 32 Intel... Network link... established
12:57:14 NETLOGON 3096 The PDC... could not be located
12:57:17 Service... 7023 The DNS... error...network is not present
12:57:18 Service... 7023 The Intersite Messaging...error
12:57:20 DfsSvc 4550 The DFS namespace...could not initialize
12:57:21 Time-Service 129 Ntp...was unable...
12:57:43 NETLOGON 5782 Dynanic...failed...No DNS servers
12:58:30 Iphlpsvc 4200 6TO4 interface...brought up.
The point is: later we found that the NIC is starting first but there is no real network connection when AD services are starting thereafter for a quite long time (app. 1:30min).
Found no suitable solution in.
Called Fujitsu support if there are any issues with network (especially teaming) and 2008R2 on the RX200S5. Answer: no.
Broke the team and used only one port - same result.
Found an interesting issue with similar errors only at Mark Minasi's Reader Forum:
"2008R2 DC doesn't see domain on startup"
The problem there was resolved after "set spantree portfast" at the switch. We checked our switch setting twice - where already "spantree portfast".
Used also a thumb hub to connect - same result.
Called Fujitsu support, send many config and diagnostic files to Fujitsu. They could not realize any hardware or driver problem.
But we were testing whole migration from 2003 to 2008R2 in testlab before with virtual machines under VMware ESX and did not run into any trouble! So we dont think that the software from Microsoft is the reason here.
Finally I also downloaded new drivers directly from intel with version 15.0 but the errors persist. After a week we got heavy trouble log in to our domain and still no solution at all. So we had to demote the new 2008R2 DC and clean up the AD.
Looked also at this forum for similar problems, found one server related:
which was solved breaking the team which did not help in our case,
and one client related:
which was solved by using static IP (we always use static IPs for servers).
Maybe we have a similar problem with the NIC or the driver?
By the way, at last resort we bought a single HP NIC with broadcom chipset from local dealer. With this card installed and onboard NICs disabled the ip connectivity is ok only a few seconds after the NIC driver is starting.
So we conclude that the intel NIC driver works incorrect under Windows 2008R2(?)
What do you think - we are right or wrong?
Any ideas from intel users?
Greetings
Frank
1. Re: Adapter or driver problems with 82575EB and 2008R2?fxhnb Mar 8, 2010 12:30 AM (in response to fxhnb)
Short update:
We contacted Fujitsu management and that way our problem was escalated. Finally we where told that now support was able to duplicate the problem. A new Driver Kit (15.1.1) could possibly be the solution, but this version is not yet "released". We where encouraged to test the Beta version. With this driver version the timing issues between NIC driver start and real IP connectivity decreased dramatically (only 8 to 12s insted of more than 90s!).
Now we are waiting for the released version. Meanwhile we will test DCPROMO with this server in testlab only. If problems are solved there is a good chance that this is also true with released driver in production environmement.
Unclear, why did it took such a long time for Fujitsu to acknowledge the driver problem...
And very hard for us to decide who to blame for the errors: Microsoft (OS), intel (Chipset, driver) or Fujitsu (Mainboard, driver). | https://communities.intel.com/thread/11238 | CC-MAIN-2017-30 | refinedweb | 754 | 65.93 |
permissions method about group
Bug #727884 reported by Raimon Esteve () on 2011-03-02
This bug affects 1 person
Bug Description
If we design our method by permision, this method need login user. But permisson method don't chek group; check method permision if this user is "Superuser status"; not group.
Example of method:
@rpcmethod(
def secret():
return "Successfully called a protected method"
Connection:
proxy = xmlrpclib.
prova user, need Superuser status user; not check by group.
user.has_
davidfischer (djfische) on 2011-03-03
davidfischer (djfische) on 2011-05-21
I am not able to reproduce this issue. I have a theory though.
If this is related to issue #727879 which you filed, then you may see the problem you are getting. The decorator checks permissions before the method is called. However, in the example you gave in ticket #727879, you are logging the user in as part of the method body. In this case, the @rpcmethod decorator will check permissions and fail because the user hasn't been logged in since that happens during method execution.
I think that once you setup your authentication properly as I suggested in #727879 that it will resolve your issues. Let me know. | https://bugs.launchpad.net/rpc4django/+bug/727884 | CC-MAIN-2018-22 | refinedweb | 200 | 63.8 |
Thin template HashSet. More...
#include <MAUtil/HashSet.h>
Searches the HashDict for a specified Key. The returned Iterator points to the element matching the Key if one was found, or to HashDict::end() if not.
Deletes an element, matching the specified Key, from the HashDict. Returns true if an element was erased, or false if there was no element matching the Key.
Deletes an element, pointed to by the specified Iterator. The Iterator is invalidated, so if you want to continue iterating through the HashDict, you must use a different Iterator instance.
Returns the number of elements in the HashDict.
Deletes all elements.
Returns an Iterator pointing to the first element in the HashDict.
Returns an Iterator pointing to a place beyond the last element of the HashDict. This Iterator is often used to determine when another Iterator has reached its end.
Inserts a new element into the HashMap.
Returns a Pair. The Pair's second element is true if the element was inserted, or false if the element already existed in the map. The Pair's first element is an Iterator that points to the element in the HashMap.
An element which has the same key as the new one may already be present in the HashMap; in that case, this operation does nothing, and the Iterator returned will point to the old element.
Referenced by MAUtil::HashSet< Key >::insert(). | http://www.mosync.com/files/imports/doxygen/latest/html/class_m_a_util_1_1_hash_set.html | CC-MAIN-2015-18 | refinedweb | 231 | 58.79 |
NAME
utime.h - access and modification times structure
SYNOPSIS
#include <utime.h>
DESCRIPTION
The <utime.h> header shall declare the structure utimbuf, which shall include the following members: time_t actime Access time. time_t modtime Modification time. The times shall be measured in seconds since the Epoch. The type time_t shall be defined as described in <sys/types.h> . The following shall be declared as a function and may also be defined as a macro. A function prototype shall be provided. int utime(const char *, const struct utimbuf *); The following sections are informative.
APPLICATION USAGE
None.
RATIONALE
None.
FUTURE DIRECTIONS
None.
SEE ALSO
<sys/types.h> , the System Interfaces volume of IEEE Std 1003.1-2001, . | http://manpages.ubuntu.com/manpages/hardy/en/man7/utime.h.7posix.html | CC-MAIN-2013-20 | refinedweb | 115 | 63.25 |
Meanwhile, Flex came along a few years ago, and is now at version 3. Flex is a set of libraries that add important functionality like standardized GUI widgets to the Flash platform to make programming powerful Flash applications easier. Flex also adds an XML-based language called MXML that is pre-processed into ActionScript during compilation. Developers use MXML for the declarative parts of their application, such as what the GUI screens look like, and program in ActionScript directly for their core application logic. I won't cover MXML here because it is not really a part of the ActionScript language itself. I mention it because it comes up in some of the examples and language features that I will discuss later on.
I will focus on the modern features of ActionScript, sticking to what ActionScript 3 provides. Many of these features may date back to earlier versions, but to keep things simple I will just assume that we are talking about the latest version, since that is the version that most Flash and Flex developers have access to today.
I'll also start using the abbreviation "AS3" to mean ActionScript 3 now, because typing out "ActionScript 3" every time is getting tedious. When I invent a language, I'll start with as short a name as possible. Kernighan & Ritchie did it right.
I've always thought that one of the most enjoyable activities in life was comparing syntax rules of different languages. Okay, perhaps not. But syntax is a good place to start when comparing the languages overall, so we'll begin with several simple examples of important syntactic differences between AS3 and Java.
Java declares variables like this:
public int blah;
public Object foo = new Object();
AS3 declares variables like so:
public var blah:int;
public var foo:Object = new Object();
Variable declaration is where I have tended to bruise my fingers in the Java-to-AS3 migration. After declaring thousands of variables in Java over the years, my fingers know just what (not) to do. And after months of typing hundreds or thousands of declarations in AS3, I still find myself having to go back and add the var keyword, or switch the type definition to live after the variable. It takes some getting used to coming from Java type-typing.
Java has no concept of "undefined":
// as fields (locals must be explicitly initialized in Java)
Object foo;
Number num;

System.out.println(foo); // outputs 'null'
System.out.println(num); // outputs 'null'
AS3 has "undefined", "null", and "NaN" concepts:
var foo;
var bar:Object;
var num:Number;

trace(foo); // outputs 'undefined'
trace(bar); // outputs 'null'
trace(num); // outputs 'NaN'
Note the use of trace() in my AS3 snippets. It's a library difference, not a language difference, so I won't discuss it here. Hopefully it is clear that trace() is the equivalent of System.out.println(). But since outputting to the command-line still ranks as one of the developer's main weapons against bug infestation (right above asking the person in the next cube), I thought I'd at least give a shout out to the lowly trace() statement.
Java declares packages in Java source files:
package foo;

class FooThing { }
AS3 places classes inside package blocks:
package foo {
    class FooThing { }
}
These amount to the same thing: they are both ways to organize and control the visibility of global names. One difference is that the Java package declaration is scoped to the whole file so all classes in that file are put into that package, whereas AS3 allows multiple package definitions per file.
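To make the multiple-packages point concrete, here is a sketch of my own (not from the original article) of a single AS3 source file holding two package blocks, something one Java file cannot express. Note that while the language grammar allows this, common compilers such as mxmlc restrict how many externally visible definitions a file may expose:

```
// one .as file, two package blocks
package foo {
    public class FooThing {}
}

package bar {
    public class BarThing {}
}
```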
Scope in Java is clearly defined as being within the current block:
// using i for both is fine – completely separate scopes
public void foo() {
    for (int i = 0; i < 10; ++i) {}
    for (int i = 0; i < 5; ++i) {}
}
AS3 variable scope is at the level of the function itself -- regardless of whether the variable is defined within an inner block:
// causes a compiler warning
public function foo():void {
    for (var i:int = 0; i < 10; ++i) {}
    for (var i:int = 0; i < 5; ++i) {}
}

// better:
public function foo():void {
    var i:int;
    for (i = 0; i < 10; ++i) {}
    for (i = 0; i < 5; ++i) {}
}
Java has annotations:
@Foo
public class Bar {
    @Monkey String banana;
}
AS3 has metadata:
[Foo]
public class Bar {
    [Monkey] var banana:String;
}
Metadata is used for declaring such things as hints to MXML, like the default property to assume when instantiating the class in MXML, and hints for tools, such as default values for properties.
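As a concrete illustration of the "hints to MXML" use case, Flex defines a [DefaultProperty] tag naming the property that receives child tags when the class is used declaratively in MXML. A brief sketch (the Basket class and its items property are hypothetical examples, not from the article):

```
// [DefaultProperty] tells the MXML compiler which property
// child tags are assigned to when no property is named explicitly
[DefaultProperty("items")]
public class Basket {
    public var items:Array = [];
}
```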
Java;
requires;
semicolons;
after;
statements;
AS3
does
not
except; to; separate; multiple; statements; on; a; single; line;
But please, on behalf of the readers of your code: use semicolons anyway. After so many years of coding in other languages with line-ending syntax, it just makes code more readable, don't you think?
Java uses the final keyword for constant values:
public static final int FOO = 5;
AS3 uses the const keyword:
public static const FOO:int = 5;
Java performs casts by putting the type in parentheses before the object in question:
float f = 5;
int g = (int)f;
These parenthetical statements always look to me like the code is speaking to me quietly and discreetly: "(Hey, psst! You should now consider this float to be an int. Pass it on.)"
AS3 casts look more like a function call through the type being cast to:
var f:Number = 5; // AS3 has no float type; Number is its floating-point type
var g:int = int(f);
There is also another way of casting in AS3, using the as operator:
var g:int = f as int;
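The two styles are not interchangeable when the cast fails, a distinction worth noting that the snippets above don't show. This sketch is my own addition, and Fruit is a hypothetical class used only for illustration:

```
var o:Object = "definitely not a Fruit";
var a:Fruit = o as Fruit;  // fails quietly: a is null, no exception
var b:Fruit = Fruit(o);    // fails loudly: throws a TypeError at runtime
```

In short, use the function-call style when a bad value should be an error, and the as operator when you'd rather test the result for null.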
Java typically declares exceptions that are thrown:
public void foo() throws MyException {
    try {
        // really awesome code
    } catch (Exception e) {
        throw new MyException("AAAUUUGGGHHHH!");
    }
}
AS3 throws exceptions without declaration:
public function foo():void {
    try {
        // really awesome code
    } catch (e:Error) { // note: no 'var' keyword in an AS3 catch clause
        throw new Error("AAAUUUGGGHHHH!");
    }
}
Java has generics and typed collections:
List<FooType> list = new ArrayList<FooType>();
list.add(new FooType());
FooType foo = list.get(0);
AS3 ... does not.
But AS3 does have typed arrays through the Vector class:
var vec:Vector.<FooType> = new Vector.<FooType>();
vec[0] = new FooType();
var foo:FooType = vec[0];
Some may wonder at the odd angle-bracket syntax of the Vector declaration. I'm not sure of the history, but I have a feeling that AS3 was just trying to achieve readability and angle-bracket parity with Java's generics.
Speaking of angle brackets in code, Java handles XML processing through various libraries (many of them), such as JAXP, JAXB, SAX, Xerces, JDOM, etc.
AS3 has E4X integrated into the language itself for queries, manipulation, and the like.
var myXML:XML = <stuff><Grob>gable</Grob></stuff>;
trace(myXML.Grob); // outputs 'gable'
As you've seen by now, AS3 classes look pretty much like Java classes. You haven't seen them yet, but AS3 interfaces also look eerily similar to their Java counterparts. But looks aren't everything (except with supermodels and wax fruit) and there are a few important distinctions in behavior that are worth investigating.
Java allows the same access permission specifiers (public, protected, private, and package-private) on constructors that are allowed on classes, fields, and methods:
public class FooObject {
    private FooObject() {}
}

FooObject foo = new FooObject(); // Compiler error
In AS3, constructors are always public:
public class FooObject {
    private function FooObject() {} // Compiler error
}
Making a constructor private (in a language that supports it, like Java) is not a typical pattern, although it is helpful in some situations, like creating singletons. If you really want only one of something, then it's a good idea to prevent anyone but the class itself from creating it. A workaround used in AS3 involves throwing exceptions from the constructor when called from outside of your singleton accessor, but it is not quite the same thing.
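One common shape of that throwing-constructor workaround looks like the sketch below. This is my illustration of the pattern described above, not the article's code; it relies on AS3's ability to declare a file-internal class after the package block, so only code in this file can construct the key:

```
package foo {
    public class Singleton {
        private static var _instance:Singleton;

        // the constructor must be public, but it rejects callers
        // who cannot supply the file-internal key
        public function Singleton(key:SingletonKey) {
            if (key == null) {
                throw new Error("Use Singleton.getInstance() instead of new.");
            }
        }

        public static function getInstance():Singleton {
            if (_instance == null) {
                _instance = new Singleton(new SingletonKey());
            }
            return _instance;
        }
    }
}

// file-internal helper: visible only within this source file
class SingletonKey {}
```

It stops accidental instantiation at runtime, but unlike Java's private constructor it cannot be enforced at compile time.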
Java allows properties to be declared on interfaces:
public interface IFoo {
    public int blah = 5;
    public final int VAL = 7;
}
Both of these properties are implicitly static and final, even though they lack those keywords. Try it, you'll see.
AS3 does not allow properties on interfaces. Only functions can be declared on interfaces. Note, however, that you can declare properties with set/get functions; just not properties as fields. For example, this works:
public interface IProperties {
    function set blah(value:Number):void;
    function get blah():Number;
}
If the get/set example here makes no sense, don't worry. You'll learn more about properties and these functions in the second half of this article.
Java allows abstract classes:
public abstract class FooObject {
    public abstract void foo();
}
AS3 ... does not. There is no concept of "abstract" in AS3.
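The usual stand-in is a runtime check rather than a compile-time one. The sketch below is my workaround, not something from the article: the base method throws unless a subclass overrides it, so forgetting to override fails at the first call instead of at compile time:

```
public class FooObject {
    // "abstract" by convention: subclasses are expected to override this
    public function foo():void {
        throw new Error("foo() is abstract; override it in a subclass.");
    }
}
```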
That's probably enough comparison to get a busy mind spinning, so I'll stop for now. I hope the examples here have whetted your appetite, because there's more to come. In the second half of this article we'll get into the advanced topics of properties, dynamic behavior, and functions, with a similar line-up of code-based comparison and usage commentary from, well, me.
xmlrewrite - cleanup XML based on schemas
Convert an XML message into an XML message that carries the same information. A schema is required to enforce correct handling of the information: for instance, whitespace removal is only allowed when the type definition permits it.
The command has two modes: repair mode and transformation mode.

In transformation mode, the input message is processed as is, and then some transformations are made on that message. All options will be used to change the output.
This is the first release of rewrite. It still lacks most of the more interesting features which I have in mind. There are also a few real limitations in the current version:
You can either specify an XML message filename and one or more schema filenames as arguments, or use the options.
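Pulling that together, an invocation might look like the sketch below. Treat it as illustrative only: the positional-argument form and the "-" stdin convention come from the surrounding sections, and --rm-elements is documented further down, but I have not verified this exact command line against the tool:

```
# Illustrative sketch: message file plus two schema files as arguments;
# --rm-elements (documented below) strips annotation blocks from the output
xmlrewrite --rm-elements xs:annotation message.xml schema1.xsd schema2.xsd
```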
The execution mode. The effect of many options will change according to the mode: be careful.
The file which contains the xml message. A single dash (-) means "stdin".
This option can be repeated, or the filenames separated by comma's, if you have more than one schema file to parse. All imported and included schema components have to be provided explicitly, except schema-2001 which is always loaded.
The type of the root element, required if the XML is not namespace qualified although the schema is. If not specified, the root element is automatically inspected.
The TYPE notation is {namespace}localname. Be sure to use quoting on the UNIX command-line, because curly braces have a special meaning for the shell.
By default (or when the filename is a '-'), the output is printed to stdout.
Put a blank line before (the comments before) each element, only before containers (elements with children), or never.
A PREFIX and NAMESPACE combination, to be used in the output. You may use this option more than once, and separate multiple definitions in one string with commas.
abc=   # prefix abc
=      # default namespace
UTF-8 will be used. It is not possible to fix erroneous encoding information while reading.
1.0 will be used.
-1 means that there should be no compression. By default, the compression level of the input document is used.
--rm-elements xs:annotation --rm-elements '{}mytype'
Controls whether to keep or remove comments. Comments are interpreted as being related to the element which follows them. Comments at the end of blocks will relate to the last element before them.
Behavior is different between repair mode and transformation mode.
The default is ignore, which means that the output message will not add or remove elements and attributes based on their known defaults. With extend, the defaults will be made explicit in the output. With minimal, elements and attributes which have the default value will get removed.
Remove key, keyref, and unique elements from the schema. They are used for optimizing XML database queries.
This module is part of Perl's XML-Compile distribution. Website:
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See | http://search.cpan.org/dist/XML-Rewrite/bin/xmlrewrite | CC-MAIN-2014-52 | refinedweb | 489 | 66.84 |
Anne Thomas Manes on SOA, Governance and REST
Bio: Anne Thomas Manes is a Research Director with Burton Group.
1. We're at QCon and talking to Anne Thomas Manes. So Anne, can you tell us a little bit about yourself?
Thank you for having me. I'm Anne Thomas Manes. I'm a Research Director with Burton Group. My primary area of focus is service oriented architecture, but my group covers all things related to building and managing application systems.
2. You are quite well known in this community and you have been covering it for a long time. What is your current view on the state of SOA?
I've been talking about SOA since 1990 when we first started looking at CORBA and distributed object systems. It's been around for a long time, although I think that the technology has improved and our understanding of how to build service oriented systems has improved a lot since those days. Where are people today? Well, they're either just doing experimentation and they are really focusing mostly on the technology. Or they've gotten beyond that stage and they realize that SOA has nothing to do with technology and has everything to do with how you build systems, and they are at that point trying to figure out how to rein things in and get them under control. Or they have actually got to the point where they realize that governance is the main piece of making SOA work, and there are not very many people at that point yet.
3. What is the difference between the promises that are being made and the actual reality? Are the promises true? Do you have experiences that actually people managed to get it all together correctly?
I don't think anybody has actually reached the point where they are really getting the promise of SOA because I think SOA is a very long term effort and to truly get to the point where you've got complete flexibility and agility that's going to be 15 to 20 years from now. But people are definitely experiencing benefits from creating services and designing systems according to SOA principles. I think that every single application that you implement that is using SOA principles is going to make your systems more manageable. And so you should start to see real benefits for every single project that you do.
4. The topic of your talk here at QCon is how to sell SOA and how to get funding for that. Can you elaborate a little, how do I get funding for something that will take me 15-20 years?
SOA is about reengineering the IT organization and changing the way you design systems and the way you manage your systems. Today about 80% of an organization's IT budget is spent on legacy systems and only 20% is spent on new development and innovation. And that 80% of the IT budget typically accounts for something like 70% of a corporation's capital expenditures per year. And what's really amazing is that most organizations never bother to look at this enormous allocation of funding. But that's actually where you want to apply your SOA principles, to that existing system. And if you think about how much money gets sunk into legacy systems you realize there are a lot of opportunities for us to fix IT and that's what SOA is about, it's about fixing IT. IT is fundamentally broken and you need to fix it and the only way to do that is by going in and reassessing and reengineering the IT process. And SOA is a completely different way to design systems that is no longer looking at it from an application project by project basis but looking at it in terms of very expensive assets that need to be managed more effectively.
5. That all sound reasonable but that requires the organization to actually recognize this, to perceive this as the problem and what if they don't?
That's actually the big issue, that's why it's so hard. There are great methodologies out there for helping you examine your current system application portfolio management and application life cycle management are the core technologies and methodologies that people are using today to assess their current system and to understand how much money they are spending and whether or not it's a wise investment. And it's actually a very academic exercise, you just sit down and do an analysis of each application and you have to come out with a standard set of metrics to do this but you have a standard way to assess the value that the application is providing to the company and how much it costs on an annual basis to manage and maintain that application. And if cost exceeds value you got an academic opportunity here to say "it's time for us to decommission the application". And if you look at most IT organizations they have hundreds and hundreds of applications. The average large enterprise has 4 or 5 hundred applications and many of them have as many as a thousand or more than a thousand applications. And most of those applications are doing exactly the same thing but they are doing it for different groups within the organization. SOA is about reducing redundancy in your system, it's like "if I have a dozen applications that are all doing the same thing, I should decommission those applications and create a common service and have just a single service that implements that capability".
6. You're argueing that starting from application life cycle management to portfolio management that's the right bridge towards a SOA approach?
You need to have an enterprise architecture effort within the organization, the organization has to recognize that there is a problem in IT, and that you do want to do something to fix the issue, and it's really an academic exercise, what's the cost what's the value. When cost exceeds value it's time to get rid of this investment. If you treat these application systems like real assets, then it's very easy. In fact business people understand this idea: cost exceeds value, get rid of it. If this were a financial portfolio you can guarantee these people would be scrutinizing it a lot more. Now the challenge is if you can't convince management to do this high level assessment of your environment, in that case my recommendation is that you adopt a more stealthy approach to SOA – "Stealth SOA". You have a project, it's funded by a business group, the business group recognizes that this is a project that we need to do, and now take a look at this in terms of how can we apply SOA principles to this system. And you simply apply SOA principles to every project that you do. It takes a lot longer to achieve the true benefits of SOA, if you're doing it in a stealth manner but once you've got a couple of examples in place it's much easier to demonstrate the value of SOA then you get more funding for the process.
7. That naturally leads us towards the issue of governance. If you start those stealth projects, and have everybody do a ‘bottom up' approach, how do you actually manage that and how do you ensure that it conforms on standards, can you elaborate a little on your view on governance?
Yes, governance is by far the most important aspect of SOA. You want to make sure that people are actually designing systems in such a way that they can be reused, and you want to make sure that they are supporting true interoperability. The problem is we haven't identified what the best practices are for SOA, so there isn't a single book you can go to and say "Ah, here's how I build a good service". And so it's very much an art form at this point. Governance tools help you because they give you the ability to identify the right way to do it and define some compliance tests: make sure that when people build a WSDL it complies with the WS-I Basic Profile, define some basic requirements for how to build schemas, and things like that. When you put something into production you want to make sure that it's managed and secured, and you want to have standard ways to represent this information; governance tools can help you in that regard. But if you can't get the budget to go get the governance tools, that means you have to do it on the side.
8. What sort of tools would you classify as governance tools? Could you give an example or a product a category?
The primary governance technologies are registries, repositories, and web services management systems. There are also a number of other smaller products that do policy management, and that's one of my favorites in that category. Policy management allows you to define and codify what's the right way to do something, and such tools typically provide compliance tests that help you verify that your artifacts conform to the policy that you identified.
9. You mentioned schemas and WSDLs and other web services artifacts, how much do you equate SOA with web services, is it just one way, is it your preferred way?
I would say at this point it's probably my preferred way, but it's certainly not the only way to do it. In fact SOA is not about technology; it doesn't matter what technology you use to implement services, but the type of technologies that you use is going to impact how useful the service will be. One of the primary goals you have in creating the service is enabling interoperability, at least in most circumstances. There are times when performance far outweighs requirements for interoperability, or scalability far outweighs requirements for interoperability, and in that case you have to use an appropriate technology to support those requirements. The essence of SOA is the fact that you are designing a service in such a way that it's implementing a core capability that other applications can consume, and that you expose its capabilities using some type of protocol and an interface and a description that other systems can figure out how to work with. Web Services is a popular way of doing it. The primary reason why Web Services is as popular as it is, is that almost every platform in the world supports it. But it certainly isn't the best way to do things. My guess is that 10 years from now people are going to look back and say "oh my God I can't believe we were using SOAP, it's like the worst thing in the world", because I'm sure there's going to be something better that comes along. A lot of people are now pushing back at the idea, that WS-* is getting way too hard and too complex.
10. One of the ongoing debates here at InfoQ is about REST vs SOAP. Do you have an opinion on REST?
REST shouldn't be compared with SOAP, because REST is an architecture, in the way that SOA is an architecture. And REST is actually a more constrained architecture: you can build services that are RESTful, you can also build RESTful resources that are not necessarily services, but essentially when you're doing something with REST, you are creating a resource and it has a uniform interface. You're using HTTP as your interface to it, and it supports a very simple set of method invocations: GET, PUT, POST, DELETE, that's it, and none of these special methods that you define on all your different services. The advantage of REST, and that was proven by the Web (REST is the architecture of the Web), is that it's extremely scalable. It supports caching, because you know what a GET will do, and the GET will not make any changes to the backend system; you know what a POST or DELETE will do, and because you know exactly what that thing is going to do it allows you to build extremely highly scalable systems like the Web. For a lot of people it's a simpler way of doing it. But at the same time, because you don't actually define a lot of semantics into the interface, that means that the semantics are actually left to the application and you have to negotiate the semantics of the interaction out of band, and the question is "which is more valuable to you?". There are certain applications where REST is absolutely the only way to go.
11. For example?
Extremely highly scalable systems. It's absolutely the only way to go. If I'm dealing with a much smaller scalable system I might find that it's simpler and easier if I have a more direct type of programming interface available that actually it tells me that I'm submitting an order as opposed to just PUT, or POST, I want to be able to say "submit order" because that gives me a little more semantics in the environment.
12. When you say that REST is the architecture of the Web, is that actually true? Do all systems on the Web follow REST principles?
Well they should, if you want your Web application to support the scalability features that the Web is supposed to support; then you should design your systems such that they are RESTful. And that means that every resource is defined by a URI, that it exposes this common uniform interface, that when you do a GET it will not actually make a change in the backend, and that you have the ability to cache the representation of the resource. As long as you are following those principles, you get that kind of scalability. But there are many systems out there that are not RESTful; they are on the Web, but what they are doing is tunneling RPCs through POST or something like that. That's a really common thing, that's a lot of the dynamic web out there: a place where you post something, and you send in some information, and it's going to go off and call through some web application to create dynamic stuff. A lot of those systems are actually not RESTful. They are in fact just tunneling RPCs through the HTTP protocol. There are also a lot of applications out there which claim to be REST but they are really not; they are POX, they are plain XML over HTTP, but they are not actually RESTful in nature.
13. What do you think about the support for REST that is popping up in different frameworks and different technologies?
That hurts me so much because they are not REST they are POX. For example Axis2, which is one of the more popular SOAP engines out there now has REST support. It will take your Java method and expose it as a RESTful method which is in fact simply just pumping a method call through HTTP so it's not REST it's POX and that's actually what most of these tools are doing.
14. Many people claim that what's missing in REST is a formal way of describing a contract, a formal way to describe an interface. What do you think about that?
I think that if you want to develop a RESTful application then you want people to be able to consume it, you will need to tell them what are the expected inputs and outputs in this RESTful service. And in that case, you need to describe it in some way. The RESTful proponents out there keep talking about the fact that REST is self describing because it returns a MIME type, but I don't think MIME types are really going to tell me what information I should be sending and what information I'm sending out. I think the way most people build applications; they are not really designed to be completely arbitrary in terms of what goes in and what goes out. That's the beauty of REST that I can tell you when I send something in "here's what I'm sending you" and then you can tell me when you are sending it back out "here's what I'm sending you". But applications don't work very well that way, they like to know what data they should be sending in and out. And in that case XML Schema is perfect type of mechanism to represent "here's the message format that's expected and here's the message that I'm going to send back out". I don't see a reason why you can't provide that kind of interface description to go along with the REST environment. You could define a WSDL document that explains a RESTful interface, although the RESTafarians will say WSDL is not something that you want. I personally believe in a more middle of the road approach. If you want your application to be able to dynamically convert XML into your favorite programming language, like Java or .Net then you want to have a standard definition of what the messages look like so that you can have a tool that will automatically create the appropriate bindings for you.
15. If we go back to the governance side of things, you mentioned that much of the potential value of introducing SOA is looking at the existing systems instead of the new ones that are being created. Doesn't that mean that a governance solution has to be able to support the governance of old assets as well as new assets? Can I assume that everything is using Web Services or REST? Obviously I cannot, but how do existing tools address this?
That's actually one of the biggest challenges I think that most people have with the tooling out there, because most of the tooling is really focused on new development, although a lot of the ESBs have tools to take existing applications and expose them as services. My approach to SOA is "don't focus on what the tools can do for you, focus on what you are trying to accomplish". And once you figure out what you are trying to accomplish, look at the available tools that you have at your disposal and figure out which one of those tools is actually going to help you do what you need to do. In an application rationalization process, during which you go through and look at each application and determine its value and its cost, the most popular tool is a spreadsheet. It's not like you've got a whole bunch of automated tools helping you do this; this is human analysis. And when you are starting to examine your existing applications and figuring out where you want to refactor certain functionality that currently exists in a bunch of applications and turn it into a service, at that point you need to do modeling. You need to understand "what is this core capability that I am going to pull out, what's the data that needs to go in and out, how should I actually implement this functionality in such a way that it will support the various application constituencies that need to use it". There's not a lot of tooling out there that necessarily helps you do this; it's very much a roll-your-own kind of thing. But at the same time you have lots of data descriptions that already exist in databases, in applications, in schema documentation, and you suck that information into your repository so now you at least have this information.
One of the things that I find most distressing when I look at people when they are talking about their SOA projects they are always talking about sharing services but they are not really talking about sharing the core artifacts which is a type. The fundamental artifacts that needs to be shared in order to enable true SOA, is a type. It drives me crazy when I see people on the Axis list talking about the fact that they use a code-first approach. I can understand the reasoning behind it, because you can actually automate the development process a lot more if you use a code-first approach, but what you end up doing is creating tons of additional types. So I have the xyz:customer and the abc:customer; are they anything at all the same? Is there a way for me to consolidate this information? No, because you are creating all these new namespaces with all these additional types, you really need to focus on what are the core types, and make sure that your applications are actually sharing the core types.
16. Any parting words for us? What can we expect in the future of SOA within the next few years?
Lots of hard work, we're talking about 15-20 years worth of work to go through and refactor current systems, I suspect that there are lots of organizations which will never refactor all their systems, it doesn't make sense to do that, you need to have a solid business case to do it, focus on sharing of core artifacts not just the services themselves, don't start your SOA project by saying "ok, we're going to adopt a composite application development environment" because that's not going to help you create these core components that you actually want to share, recognize that SOA is about reducing redundancy and keep that in mind as you go through the process.