How to do a batch insert in MySQL
I have anywhere from one to many records that need to be entered into a table.
What is the best way to do this in a query?
Should I just make a loop and insert one record per iteration? Or is there a better way?
You can try out the following query:
INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);
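If the rows are being generated in application code, the same idea applies: build one multi-row statement instead of looping over single-row INSERTs. A rough C++ sketch of the string assembly (table and column names are taken from the example above; in real code you would use your driver's prepared/parameterized statements rather than concatenating values, to avoid SQL injection):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Build one multi-row INSERT statement from a batch of 3-column rows.
std::string buildBatchInsert(const std::vector<std::vector<int>>& rows) {
    std::ostringstream sql;
    sql << "INSERT INTO tbl_name (a,b,c) VALUES ";
    for (std::size_t i = 0; i < rows.size(); ++i) {
        if (i > 0) sql << ",";                      // comma between row tuples
        sql << "(" << rows[i][0] << ","
            << rows[i][1] << "," << rows[i][2] << ")";
    }
    sql << ";";
    return sql.str();
}
```

Sending one such statement is typically far faster than a loop of single inserts, because the server parses one statement and the client pays one network round trip.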
#include <ACEXML/common/FileCharStream.h>
Inheritance diagram for ACEXML_FileCharStream:
Default constructor.
[virtual]
Destructor.
Returns the available ACEXML_Char in the buffer. -1 if the object is not initialized properly.
Implements ACEXML_CharStream.
Close this stream and release all resources used by it.
Determine the encoding of the file.
Read the next ACEXML_Char. Return -1 if we are not able to return an ACEXML_Char, 0 if success.
[private]
Read the next character as a normal character. Return -1 if EOF is reached, else return 0.
Open a file.
Peek the next ACEXML_Char in the CharStream. Return the character if success, -1 if EOF is reached.
Read the next batch of ACEXML_Char strings.
Resets the file pointer to the beginning of the stream.
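The get/peek/available contract above is terse, so here is a self-contained sketch of the same semantics over an in-memory buffer. This is an illustrative analogue only, not the real ACEXML API (in particular, per the comments above, the real get() returns the character through an out-parameter and uses -1/0 as a status code, while this sketch folds both into the return value):

```cpp
#include <string>

// Minimal in-memory analogue of the stream contract documented above:
// get() returns the next character or -1 at EOF; peek() looks ahead
// without consuming; available() reports how many characters remain.
class MemCharStream {
public:
    explicit MemCharStream(std::string data) : data_(std::move(data)) {}

    int get() {                     // consume and return next char, -1 at EOF
        if (pos_ >= data_.size()) return -1;
        return static_cast<unsigned char>(data_[pos_++]);
    }
    int peek() const {              // look at next char without consuming it
        if (pos_ >= data_.size()) return -1;
        return static_cast<unsigned char>(data_[pos_]);
    }
    long available() const {        // characters still unread in the buffer
        return static_cast<long>(data_.size() - pos_);
    }
private:
    std::string data_;
    std::size_t pos_ = 0;
};
```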
c++.stl.port - STLport453 Working
I finally got a good build of my STLport. I deleted the stlport that came with the purchased package and downloaded and installed 453 from the web site. It built correctly first time around. I do have one apparent learning-curve issue with the namespace std -- I think I saw a switch in one of the files to turn off namespaces. Can't remember where I saw it though. Apparently, this is a feature for backwards compatibility with the '98 ANSI standard. It seems like the downloads should have namespaces turned on instead of coming configured for the older version of the standard. So now that I have 453 going OK, I'll focus on getting DMC to work with Boost 135.
Jul 06 2008
:Maybe a doubly linked list of/in the namecache entries. Locking would
:always lock either just the first entry in the list (I think that's
:sufficient) or at least always obey the same order to prevent deadlocks.
:This way renames to self (in another incarnation) can be detected as
:well (by running through the list).
:
:cheers
:  simon

I don't think it's going to be that simple. We are going to need a cache coherency mechanism for namespace operations between machines just like we are going to need it for file read/write ops. And, just as I have described for file I/O, whatever mechanism we come up with will work just as well for competing entities on the same host as it will for competing entities on different hosts.

In any case, as I have said, I do NOT think this is an issue that nullfs itself has to deal with. It is definitely an issue that we want the namecache layer to deal with, or more particularly that we want a generalized cache coherency layer to deal with.

-Matt
Matthew Dillon <dillon@xxxxxxxxxxxxx>
This tutorial covers the switch case statement in C++.
Along with the if-else statement, the switch case statement is a way of making decisions in C++. It is a more concise way of branching than a chain of if statements: if statements are better suited to covering a wide range of conditions, while switch shines when one value is compared against a fixed set of alternatives.
The C++ switch statement consists of multiple code blocks, with one "case" label for each block. An expression is evaluated and the resulting value is matched against each possible case in the switch statement. If a match is found, the code block for that case is executed.
C++ Switch case example
Below is a simple example of the switch case statement in C++. The code is pretty self-explanatory, so not much explanation is required.
Unlike if statements, the condition here is just a single expression, often a plain variable. There is one restriction to be aware of: the controlling expression must be of an integral or enumeration type, so characters and integers work, but strings do not (unlike in some other languages).
#include <iostream>
using namespace std;

int main()
{
    int x, y;
    char op;
    cout << "Enter number x: ";
    cin >> x;
    cout << "Enter number y: ";
    cin >> y;
    cout << "Enter arithmetic operator (symbol): ";
    cin >> op;
    switch (op)
    {
        case '+':
            cout << "output: " << x + y;
            break;
        case '-':
            cout << "output: " << x - y;
            break;
        case '*':
            cout << "output: " << x * y;
            break;
        case '/':
            cout << "output: " << x / y;
            break;
    }
}
Output 1
Enter number x: 6
Enter number y: 8
Enter arithmetic operator (symbol): +
output: 14
Output 2
Enter number x: 8
Enter number y: 2
Enter arithmetic operator (symbol): /
output: 4
You could create an if statement equivalent to this, but it wouldn’t be as clean and simple. The switch statement was designed just for this kind of purpose where as the if statement is more general purpose.
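To see the difference, here is roughly what that if statement equivalent looks like, with the calculator logic pulled into a helper function (the function name is mine, not from the article):

```cpp
// Same logic as the switch example, written as an if / else if chain.
int calc(int x, int y, char op) {
    if (op == '+') return x + y;
    else if (op == '-') return x - y;
    else if (op == '*') return x * y;
    else if (op == '/') return x / y;
    return 0; // unknown operator
}
```

Each branch repeats the op == comparison that switch expresses once; with many cases this gets noisy, which is exactly the clean-and-simple point.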
Default keyword
It’s not possible to account for every scenario that may occur, that’s why we are given the use of the default statement.
The code contained within the code block of this statement will execute if none of the conditions in the switch statement have been met.
switch (op)
{
    case '+':
        cout << "output: " << x + y;
        break;
    case '-':
        cout << "output: " << x - y;
        break;
    case '*':
        cout << "output: " << x * y;
        break;
    case '/':
        cout << "output: " << x / y;
        break;
    default:
        cout << "Please pick from +, -, /, *";
}
Enter number x: 6
Enter number y: 2
Enter arithmetic operator (symbol): =
Please pick from +, -, /, *
The default statement here is playing a very important role. It’s easy for someone to use an incorrect operator here. Without the default statement there would be no output and the user would be confused.
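One related detail worth knowing: the break in each case matters. If you omit it, execution falls through into the next case's code. A small sketch that records which cases ran (the function is illustrative, not from the article):

```cpp
#include <string>

// Returns a trace of which cases executed, to make fallthrough visible.
std::string trace(int n) {
    std::string out;
    switch (n) {
        case 1:
            out += "one ";   // no break here: execution falls through to case 2
        case 2:
            out += "two ";
            break;           // break leaves the switch
        case 3:
            out += "three ";
            break;
    }
    return out;
}
```

trace(1) yields "one two ", not "one " -- forgetting a break is one of the most common switch bugs.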
This marks the end of the C++ switch case statement article. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the article content can be asked in the comments section below.
Item Pickup System in Unreal Engine Using C++
Last time, in history's most impromptu blog post I complained that the resources surrounding Unreal and C++ specifically can be a bit lacking. Never let it be said that I complain about things and don't do anything about it. I've decided to outline how I eventually built my own item pickup system through reading the documentation, translating to C++ from Blueprint, and interpreting some existing tutorials listed below.
Goal
In this tutorial we are going to modify the provided C++ First Person Template. When the player centres their crosshairs on an item that can be used it will become highlighted with a flashing outline. After that, we will bind a key as a "Use" key which, when pressed and centred on a usable object will destroy the static mesh of the object so it is no longer visible. Though the system we are going to develop here is quite simple I hope people following it will see how easy it is to expand upon and fit into existing games. The highlighting will be made using Post-Processing Effects in order to generate the visible outline of any object on the fly.
We'll start by getting some infrastructure in place in the Unreal Editor and then move into Visual Studio to pull all of those pieces together in C++. Just like last time at the end of the tutorial will be a link to gists of the code, so if you'd rather just see the code, scroll down to the bottom.
Background
In order to run this tutorial I assume you have Unreal Engine installed, and fully functional. I am writing this tutorial assuming you have created a fresh "First Person C++" project -- I've called mine 'ItemSample' though I have tried to keep the steps generic enough that they should slot into other types of project, as well as existing ones that you may have without too much difficulty. I didn't include any Starter Content, if you wanted to that would not be a problem.
Though the C++ we use is pretty elementary, some programming know-how would not go amiss, if you are looking to build your project in Blueprint this is not your tutorial.
Unreal Editor
Importing the Highlight
In order to highlight the looked at item we need a Highlighting Material, which works similar to other materials, that will be applied to the Usable Item being looked at when all the post processing effects are applied. You can get some Highlight Materials from the internet, or you can borrow a material from the Content Examples, which is what we will do as developing one of our own is out of the scope of this tutorial.
I'm quickly going to walk you through the process of importing the Material Unreal Engine includes in its Content Examples, you can skip this step if you developed you own Material, or found one online somewhere.
Open an existing, or start a new "Content Examples" Project, find the material in it called "M_Highlight" (when this tutorial was developed it was located at
/ExampleContent/Blueprint_Communication/Materials) and right click on it. In the context menu that pops up click "Asset Actions" to expand the menu and then select 'Migrate' which will show you the path it is exporting the material to then let you select a project to migrate to, select your working project (ItemSample if you are following the tutorial exactly). You can now close (and delete if you like) the Content Example project you just created.
Finally, this step is optional, but when materials are imported from other projects they move to a path based on where they were in the original project, personally, I like to move the material in with all the other materials for my project.
Applying the Post-Processing Volume
In order to apply our highlight we need to use a Post-Processing Volume which allows effects to be applied to the scene after everything else has been built thus calculating the outline of the object we are looking at at the very end.
A Post-Processing Volume can be dragged from the 'Volumes' tab of the 'Modes' panel on the left of the screen into the scene, then look to the "Details" panel to the right to customize the volume.
Within the details panel the first thing we need to do is check the "Unbound" box under the "Post-Processing Volume" heading, this will apply our Highlight material globally within the level there will still be a box in the scene that represents the volume and it can be dragged around but doing so has no real effect, I tend to just move it up out of the way.
After applying the effect globally scroll down to the 'Misc' header and add a 'Blendable' by clicking the '+' beside where it says '0 elements', once added, in the 0th element click on the dropdown find and select the M_Highlight material.
Adding a 'Use' Keybinding
Click on the edit menu and open up the 'Project Settings...' window (located near the bottom of the list). In the window that opens click on the 'Input' tab to the left and select 'Bindings' and add a new 'Action Mapping' by clicking on the '+' sign beside the title -- I set mine to the keyboard letter 'E' but you can use whatever key seems most appropriate. You can also name the key whatever you like, I picked 'Use' as it seemed reasonably descriptive. There is no need to save anything here just close the panel when you are done.
Visual Studio
UsableItem Superclass
In the Unreal Editor we can click the 'File' menu and select the option to "Add Code to Project...", we will add a new 'Actor' type class, you can name this Actor whatever you like I called mine 'UsableActor', click 'Create Class'. Keep clicking next until it prompts you to or opens the file in Visual Studio. The superclass we're building here is really simple and easy to add to. I deleted the references to the Tick and BeginPlay methods in the source at the end to make it easier to pick out the relevant information; if you want or need these methods, feel free to leave them in the source.
First, adjust the header (UsableItem.h) so its parent is a StaticMeshActor, not simply an Actor. Remember to #include "Engine/StaticMeshActor.h" at the top of the header.
In the source file (UsableItem.cpp) add the following line into the constructor:

SetMobility(EComponentMobility::Movable);

which sets the object as 'Movable' to ensure (among other things) that when you pick up the item its shadow also disappears.
In the constructor of UsableItem, in both the header and the source, adjust the arguments the constructor takes so it includes this argument: const class FObjectInitializer& PCIP, which is a built-in Unreal construct used to finalize the object. In the source you will also need to reference the superclass by appending : Super(PCIP) to the end of the method declaration.
Modify the Protagonist
In Visual Studio, open up your character's .cpp file (for those following exactly, this is ItemSampleCharacter.cpp) and include your newly built UsableItem class with the following line: #include "UsableItem.h". First we will build two auxiliary methods, to determine what our character is looking at and whether or not to apply the post-processing effect respectively; then we will add (or modify an existing) Tick method to use these methods.
GetItemFocus
The first thing we need to do with the character is figure out what she is looking at, so we can determine if we need to apply post-processing effects to that item. We use raycasts to trace vision from our protagonist to objects. So, first we declare a new method that takes no arguments but returns an AUsableItem*:

AUsableItem* AItemSampleCharacter::GetItemFocus(){}
Declare variables to hold the location and rotation of the camera, and one to set up how far into the distance we want to be able to use items.
FVector CameraLocation; FRotator CameraRotation; int howFar = 300;
Then fill these values with the ones we can get from the character's viewpoint.
Controller->GetPlayerViewPoint(CameraLocation, CameraRotation);
Now we can build the start, end and direction for the trace line.
const FVector StartTrace = CameraLocation; const FVector Direction = CameraRotation.Vector(); const FVector EndTrace = StartTrace + Direction * howFar;
And set some parameters for the line we're going to draw.
FCollisionQueryParams TraceParams(FName(TEXT("")), true, this); TraceParams.bTraceAsyncScene = true; TraceParams.bReturnPhysicalMaterial = true;
Now we draw the line and get back whatever we were looking at.
FHitResult Hit(ForceInit); GetWorld()->LineTraceSingle(Hit, StartTrace, EndTrace, COLLISION_VIEW, TraceParams);
Where COLLISION_VIEW is the collision channel for the raycast. It must be defined in your project header before you can use it, so open up your project's .h file (for this tutorial, ItemSample.h) and add the following line to the file:
#define COLLISION_VIEW ECC_GameTraceChannel1
Finally, back in ItemSampleCharacter.cpp's GetItemFocus(), we return whatever we observed cast to a UsableItem, so it will return null if there wasn't one in view.
return Cast<AUsableItem>(Hit.GetActor());
Make sure you add this new method to your character's header (ItemSampleCharacter.h) by including the following line:
class AUsableItem* GetItemFocus();
ApplyPostProcessing
As before, create a new method; this one should also return AUsableItem*, but take in two arguments, an AUsableItem* itemSeen and an AUsableItem* oldFocus -- using these we will be able to turn highlighting on and off based on whether it is a new object in our field of view. The method declaration should look like this:
AUsableItem* AItemSampleCharacter::ApplyPostProcessing(AUsableItem* itemSeen, AUsableItem* oldFocus){}
Let's pause here briefly to look at what we want to do at a high level -- we want to see if on this tick our focus is the same as it was on the last tick; if it is, we don't need to change anything, but if it isn't, we need to apply the highlight to the new object, and if we were looking at something on the previous tick we need to turn its highlighting off.
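Stripped of the Unreal types, that per-tick decision is a small piece of state logic. Here is a plain C++ sketch of it, where strings stand in for actors and an empty string plays the role of NULL; the setHighlight recorder is just a stand-in for SetRenderCustomDepth so the transitions are visible (all names here are illustrative):

```cpp
#include <string>

// Stand-in for mesh->SetRenderCustomDepth(...), recording each call.
static std::string highlightLog;
void setHighlight(const std::string& item, bool on) {
    highlightLog += (on ? "+" : "-") + item + " ";
}

// Mirrors the ApplyPostProcessing logic; returns the new oldFocus.
std::string applyPostProcessing(const std::string& itemSeen,
                                const std::string& oldFocus) {
    if (!itemSeen.empty()) {
        if (itemSeen == oldFocus || oldFocus.empty()) {
            setHighlight(itemSeen, true);      // same item, or first sight
        } else {
            setHighlight(itemSeen, true);      // new item: highlight on
            setHighlight(oldFocus, false);     // old item: highlight off
        }
        return itemSeen;                       // becomes oldFocus next tick
    }
    if (!oldFocus.empty()) {
        setHighlight(oldFocus, false);         // looked away: turn it off
    }
    return "";                                 // nothing in view
}
```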
Let's build a conditional to check to see if an item is being looked at on this tick:
if(itemSeen){ } else{ }
We'll first work through the nested conditional for the case where an item is being looked at on this tick, so within the first branch of the previous conditional we will add another if statement. It will check whether the item being looked at is the same as the one on the last tick (or whether we weren't looking at anything before). If so, it sets the highlight on the item in view by applying the CustomRenderDepth (used by the post-processing effect) to the static mesh of the object being looked at:
if (itemSeen == oldFocus || oldFocus == NULL){ UStaticMeshComponent* mesh = itemSeen->GetStaticMeshComponent(); mesh->SetRenderCustomDepth(true); }
If the object being looked at is not the same as the previous object we were looking at we first set the CustomRenderDepth on the new item and then turn the CustomRenderDepth off on the old item that we aren't looking at anymore
else if (oldFocus != NULL){ UStaticMeshComponent* mesh = itemSeen->GetStaticMeshComponent(); mesh->SetRenderCustomDepth(true); UStaticMeshComponent* oldMesh = oldFocus->GetStaticMeshComponent(); oldMesh->SetRenderCustomDepth(false); }
Finally, we return the item seen so that it can be our 'oldFocus' on the next tick.
return oldFocus = itemSeen;
In the else of our first conditional we just need to make sure that we turn off post-processing if we were looking at something on the previous tick. To that end we add a conditional that checks whether we were looking at something last tick; if we were, we turn off the CustomRenderDepth on that item's static mesh component.
if (oldFocus != NULL){ UStaticMeshComponent* mesh = oldFocus->GetStaticMeshComponent(); mesh->SetRenderCustomDepth(false); }
Finally, still inside the else, we return oldFocus = NULL so on the next tick we know we weren't looking at anything.
Make sure you add this new method to your character's header (ItemSampleCharacter.h) by including the following line:
class AUsableItem* ApplyPostProcessing(AUsableItem* itemSeen, AUsableItem* oldFocus);
Tick
Some templates will already have a Tick(float DeltaSeconds) method and some will not. If yours does, you can append the next bits of code; if it doesn't, add a new method with the following signature: void AItemSampleCharacter::Tick( float DeltaTime ){}. If you just created the method, add the line Super::Tick( DeltaTime ); as the first line of your new method -- if there was already a Tick method in your character, that line should already be there.
After the setup, we want to check whether or not there is an item in focus right now and set the current itemSeen to that value, so we call our GetItemFocus() like so:
AUsableItem* itemSeen = GetItemFocus();
Then we declare a static variable so its value will persist across separate calls of this method; this will hold the oldFocus variable, used to track whether or not we were looking at something on the previous tick.
static AUsableItem* oldFocus = NULL;
Finally, we apply the post-processing and reset oldFocus so it is set to whatever we were looking at on this tick.
oldFocus = ApplyPostProcessing(itemSeen, oldFocus);
If you didn't already have a Tick method in your character, make sure you add it to the header:
virtual void Tick( float DeltaTime ) override;
Adding a Key Listener
This step is pretty easy; it just binds the key we set up in the Unreal Editor to the character. In the character class, find the SetupPlayerInputComponent method and at the bottom add the following line:
InputComponent->BindAction("Use", IE_Pressed, this, &AItemSampleCharacter::Use);
If you called the key something other than 'Use' just make sure you change it up there as well.
Applying Action to the 'Use' Key
A more complex use of items is outside the scope of this article, so we are just going to destroy the static mesh of any object that we use. This method can be expanded to fit what you need in your game.
First, make a method for the key binding:
void AItemSampleCharacter::Use(){ }
Then, if the use key is pressed when there is an item in focus, destroy the static mesh for that object.
if (GetItemFocus()){ GetItemFocus()->GetStaticMeshComponent()->DestroyComponent(); }
Be sure to add this method to your header as well
void Use();
Putting it all Together
Save and compile the project in Visual Studio and go back to the Unreal Editor.
Drag an object into the scene; I just grabbed a cube from the Modes bar on the left. Select it, and in its Details panel to the right you can add a blueprint to the object; do so and double-click to open the blueprint. In the File menu of the Blueprint you can re-parent the object: select our UsableItem class as the new parent, then 'Compile' and 'Save' your cube. Save and Play within the scene; when your crosshairs are over the cube it should have a flashing outline (if you used the M_Highlight material), or be outlined if you used a different material. If you press your 'Use' key within the range you set earlier while the item is flashing, the static mesh will be destroyed and the object will disappear. (If the shadow remains, make sure that you set the item as "Movable" in the UsableItem class.)
And there you have it, an item pickup system in C++ for Unreal Engine.
References
- Tom Looman's UsableActor System is what mostly got me on the right track. The tutorial itself can be a bit confusing to a complete Unreal newbie, but his code is exceptionally clear and I highly recommend looking around his GitHub.
- Tesla Dev's tutorial was the first to really make clear exactly what was going on with the object outline. If this tutorial had been in C++ instead of Blueprint it would have been perfect for me! | http://expletive-deleted.com/2015/06/17/item-pickup-system-in-unreal-engine-using-c/ | CC-MAIN-2019-22 | refinedweb | 2,659 | 51.31 |
package addressbook.wsiftypes;

import java.util.*;
import org.w3c.dom.*;
import javax.xml.parsers.*;

import addressbook.wsiftypes.*;

/**
 * Sample service that provides add/get functionality.
 *
 * @author Matthew J. Duftler (duftler@us.ibm.com)
 * @author Aleksander Slominski
 */
public class AddressBook {
  private HashMap name2AddressTable = new HashMap();

  public AddressBook() {
    addEntry("John B. Good",
             new Address(123, "Main Street", "Anytown", "NY", 12345,
                         new Phone(123, "456", "7890")));
    addEntry("Bob Q. Public",
             new Address(456, "North Whatever", "Notown", "ME", 12424,
                         new Phone(987, "444", "5566")));
  }

  public void addEntry(String name, Address address)
  {
    name2AddressTable.put(name, address);
  }

  public void addEntry(String firstName, String lastName, Address address)
  {
    name2AddressTable.put(firstName + " " + lastName, address);
  }

  public Address getAddressFromName(String name)
    throws IllegalArgumentException
  {
    return (Address) name2AddressTable.get(name);
  }
}
On Thursday, July 27, 2017 at 9:01:11 AM UTC-4, Matthew Flatt wrote:

> Declaring (as opposed to instantiating) a compiled module will normally
> not raise an exception. Probably it's possible to construct a set of
> embedded modules where there will be a declare-time error due to
> conflicting or missing declarations, but I don't see how to make that
> happen only sometimes.
Good to know.

> The escape-catching pattern is needed anywhere that you don't want to
> just exit/crash.
>
> You can certainly call multiple `scheme_...` functions within a single
> escape-handling block, including multiple calls to `scheme_eval_string`.

Also good to know, thanks.

On Wednesday, July 26, 2017 at 11:09:48 AM UTC-4, Matthew Flatt wrote:

> At Wed, 26 Jul 2017 07:54:32 -0700 (PDT), Thomas Dickerson wrote:
> > One more thing: in terms of repeatedly executing scripts, does it make
> > sense to set up and tear down the interpreter every time? Or just swap
> > in a fresh namespace?
>
> Between those two options, a fresh namespace is almost certainly
> better.

For posterity, it's worth noting that you can use the Boost Coroutine library to implement this in C++ in a nice object-oriented fashion to make a loop that executes Racket tasks in a way that doesn't require bouncing your `main` through `scheme_main_setup` or equivalent, which is great if you're embedding a Racket interpreter to be reused as an API rather than a one-off program.
Groovy Goodness – ReadWriteLocks
We often use synchronization. Imagine a use case in which there is some resource that is written to far less often than it is read. Multiple threads should be able to read the same resource concurrently without any problems. But if a single thread wants to write to the resource, all new reader threads should be blocked, while readers already in progress are allowed to finish execution.
In other words, while writing, no other thread should be able to read or write; this can be achieved using a read/write lock. In Java one can use an implementation of the ReadWriteLock interface in the java.util.concurrent.locks package, which provides readLock and writeLock objects to achieve this.
Groovy provides two annotations for the above use, which are simply wrappers over ReentrantReadWriteLock:
– groovy.transform.WithReadLock
– groovy.transform.WithWriteLock
import groovy.transform.*;
Let's see an example to understand. Below, multiple threads can call getResource() concurrently; but while any thread is inside updateAndGetResource() or refresh() (which take the write lock), readers are blocked, and vice versa.
public class ResourceProvider {

    private final Map<String, Object> data = new HashMap<String, Object>();

    @WithReadLock
    public String getResource(String key) throws Exception {
        return data.get(key);
    }

    @WithWriteLock
    public void refresh() throws Exception {
        //reload the resources into memory
    }

    @WithWriteLock
    public String updateAndGetResource(String key) {
        refresh()
        //updating the shared resource for some special key
        getResource(key)
    }

    //no blocking required
    public void update(String key) {
        Object object = data.get(key)
        //now update object mutable attributes
    }
}
Now you can use synchronization much more efficiently with read/write locks.
Hope this helps!!!
Parampreet Singh
Please explain why update method doesn’t require locking?
It does updates/writes (as name suggests).
The .NET Framework defines both byte and character stream classes. However, the character stream classes are really just wrappers that convert an underlying byte stream to a character stream, handling any conversion automatically. Thus, the character streams, although logically separate, are built upon byte streams.
The core stream classes are defined within the System.IO namespace. To use these classes, you will usually include the following statement near the top of your program:
using System.IO;
The reason that you don’t have to specify System.IO for console input and output is that the Console class is defined in the System namespace.
The Stream Class
The core stream class is System.IO.Stream. Stream represents a byte stream and is a base class for all other stream classes. It is also abstract, which means that you cannot instantiate a Stream object. Stream defines a set of standard stream operations. The following table shows several commonly used methods defined by Stream.

Several of the methods shown in the following table will throw an IOException if an I/O error occurs. If an invalid operation is attempted, such as attempting to write to a stream that is read-only, a NotSupportedException is thrown. Other exceptions are possible, depending on the specific method.
Notice that Stream defines methods that read and write data. However, not all streams will support both of these operations, because it is possible to open read-only or write-only streams. Also, not all streams will support position requests via Seek( ). To determine the capabilities of a stream, you will use one or more of Stream's properties.
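To make the capability idea concrete, here is a small C++ sketch of the same design: a stream advertises what it supports, and unsupported operations throw, analogous to .NET's NotSupportedException (this is an illustrative analogue, not the .NET API):

```cpp
#include <stdexcept>
#include <vector>

// Sketch of the capability-flag idea: a stream reports what it supports,
// and unsupported operations throw instead of silently failing.
class ByteStream {
public:
    ByteStream(bool readable, bool writable)
        : readable_(readable), writable_(writable) {}

    bool canRead() const { return readable_; }
    bool canWrite() const { return writable_; }

    int readByte() {
        if (!readable_) throw std::logic_error("stream is not readable");
        if (pos_ >= buf_.size()) return -1;       // end of stream
        return buf_[pos_++];
    }
    void writeByte(unsigned char b) {
        if (!writable_) throw std::logic_error("stream is not writable");
        buf_.push_back(b);
    }
private:
    bool readable_, writable_;
    std::vector<unsigned char> buf_;
    std::size_t pos_ = 0;
};
```

Callers check canRead()/canWrite() up front, just as .NET code checks Stream's capability properties before attempting an operation.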
Commonly used escape sequences are \n, \t and \a. Escape sequences are enclosed in single quotes. \n is the newline character and transfers control to the next line. If more than one line is to be skipped, \n is repeated as many times as the number of lines to be skipped. \n can also be combined with any other message string to be displayed. \t inserts a horizontal tab and \a sounds an audible beep.
Example
Illustrates the use of \t and \a escape sequences.
//newtab.cpp
#include <iostream.h>
void main()
{
int i,j,k,m;
cout<<"Enter the value of i, j, k, m \n";
cin>>i>>j>>k>>m;
cout<<endl;
cout<<'\a'<<i<<'\t'<<j<<'\t'<<k<<'\t'<<m<<'\n';
}
Input and Output:
Enter the value of i, j, k, m
1 2 3 4
1 2 3 4
Manipulators
These are used for manipulating the display of output in the desired format. Manipulators are used with insertion operator <<. These are included in the <iomanip.h> header file. Hence this header file should be included in the program.
flush Manipulator
This manipulator is used for flushing the buffer.
endl Manipulator
The endl manipulator is used for flushing the buffer and for inserting a new line simultaneously.
setw Manipulator
This manipulator lets the user specify the field width explicitly, in order to display output more neatly. The syntax is
setw(field width);
The field width must be of int type.
Example
Illustrates the use of the manipulators flush, endl and setw().
//setw.cpp
#include <iostream.h>
#include <iomanip.h>
void main()
{
char entryno[10];
char name[10];
int fee;
cout<<"Enter the entry number "<<endl;
cin>>entryno;
cout<<"Enter the name "<<flush<<'\n';
cin>>name;
cout<<"Enter the fees "<<endl;
cin>>fee;
cout<<endl;
cout<<setw(18)<<"Entry number"<<setw(12)<<entryno<<endl;
cout<<setw(10)<<"Name "<<setw(20)<<name<<endl;
cout<<setw(10)<<"Fees "<<setw(20)<<fee<<endl;
}
Input and Output:
Enter the entry number
MAC0797
Enter the name
Sanjay
Enter the fees
1200
Entry number MAC0797
Name Sanjay
Fees 1200
Following is a program to find the minimum and maximum limits for char, short, int, long int, float, double and long double type of data.
Example
The Program finds the limit for char, short, int, long int, float, double and long double type of data. The keywords for finding the limits for float, double and long double (written in capital letters in the program) are found in float.h header file. The keywords for finding the limits for other types are found in limits.h header file.
#include <iostream.h>
#include <limits.h>
#include <float.h>
void main()
{
cout<<"minimum char value is "<<CHAR_MIN<<"\n" ;
cout<<"maximum char value is "<<CHAR_MAX<<"\n" ;
cout<<"minimum short value is "<<SHRT_MIN<<"\n" ;
cout<<"maximum short value is "<<SHRT_MAX<<"\n" ;
cout<<"minimum int value is "<<INT_MIN<<"\n\n" ;
cout<<"maximum int value is "<<INT _MAX<<"\n'" ,
cout<<"minimum long int value is "<<LONG_MIN<<"\n\n";
cout<<"maximum long int value is "<<LONG_MAX<<"\n";
cout<<"min. no. of significant digits in float is "<<FL T_DIG<<"\n";
cout<<"max. no of bits for mantissa is "<<FLT_MANT_DIG<<"\n";
cout<<" min. exponent value in float is "<<FLT_MIN_10_EXP<<"\n";
cout<<"max. exponent value in float is "<<FLT_MAX_10_EXP<<"\n\n";
cout<<"min. no. of significant \n";
cout<<"digits in double is "<<DBL_DIG<<"\n\n";
cout<<"max. no. of bits for mantissa is "<<DBL_MANT_DIG<<"\n";
cout<<"min. exponent value in double is "<<DBL_MIN_10_EXP<<"\n\n";
cout<<"max. exponent value in double is "<<DBL_MAX_10_EXP<<"\n";
cout<<"min. no. of significant\n";
cout<<"digits in long double "<<LDBL_DIG<<"\n\n";
cout<<"max. no. of bits for mantissa is "<<LDBL_MANT_DIG<<"\n";
cout<<"min. exponent value in long double is n<<LDBL_MIN_10_Exp<<n\n";
cout<<"max. exponent value in long double is n<<LDBL_MAX_10_EXP<<"\n\n";
}
Output:
minimum char value is -128
maximum char value is 127
minimum short value is -32768
maximum short value is 32767
minimum int value is -2147483648
maximum int value is 2147483647
minimum long int value is -2147483648
maximum long int value is . 2147483647
min. no. of significant digits in float is 6
max. no. of bits for mantissa is 24
min. exponent value in float is -37
max. exponent value in float is 38
min. no. of significant
digits in double is 15
max. no. of bits for mantissa is 53
min. exponent value in double is -307
max. exponent value in double is 308
min. no. of significant
digits in long double 15
max. no. of bits for mantissa is 53
min. exponent value in long double is -307
max. exponent value in long double is 308 | http://mail.ecomputernotes.com/cpp/introduction-to-oop/escape-sequences-and-manipulators | CC-MAIN-2019-35 | refinedweb | 781 | 58.58 |
Memory issues when loading videos into frames
I have folder with 160 FLV videos, each having 120 frames of size 152, 360 with RGB colors (3 channels) that I would like to load into the numpy array
frames. I do this with the code:
import numpy as np import cv2 import os directory = "data/" # frames = [] frames = np.empty(shape=(160 * 120, 152, 360, 3), dtype=np.float32) for file in os.listdir(directory): if file.endswith(".flv"): file_path = os.path.join(directory, file) nr_file = nr_file + 1 print('File '+str(nr_file)+' of '+str(nb_files_in_dir)+' files: '+file_path) # Create a VideoCapture object and read from input file # If the input is the camera, pass 0 instead of the video file name cap = cv2.VideoCapture(file_path) # Check if camera opened successfully if (cap.isOpened() == False): print("Error opening video stream or file") # Read until video is completed while (cap.isOpened()): # Capture frame-by-frame ret, frame = cap.read() if ret == True: # frames.append(frame.astype('float32') / 255.) frames[nr_frame, :, :, :] = frame.astype('float32') / 255. nr_frame = nr_frame + 1 nb_frames_in_file = nb_frames_in_file + 1 else: break # When everything done, release the video capture object cap.release() # frames = np.array(frames)
Originally I tried to use a list
frames (see the commented lines), instead of the prerallocated numpy array, but it seemed this took too much memory - no idea why though.
However, it seems this did not help much: Still the code is very memory hungry (many GB), even though my videos are just a few KB large. I think it is because the resources of the
cap-objects (the
cv2.VideoCapture-objects) might not freed despite me using
cap.release() - is that correct? What can I do, to make my code memory-efficient?
no, it's not the videocapture, the decompressed frames just need a huge amount of memory.
just do the maths. it is:
you'll have to restrict it somehow ...
oh, you also convert to float, so the whole thing * 4
(why do you think, that's nessecary ?)
@berak: I corrected my mistake: There are only 160 images and they are smaller afterall. If the size of data itself would be the issue, then the allocation of the
framesnumpy array would already eat up all memory, but it does not. The float is because I am putting the thing into a neural network afterwards. The numpy array is actually not that large, even if it is allocated with float32, so this should not be the issue (I think).
still, you're trying to allocate ~25gb of memory for this.
you'll have to feed it into the nn in batches later, so only load 1 batch at a time.
Thanks, that is what I am doing now. Since I need a DataGenerator I implemented a keras.utils.Sequence class and use this for batch-training of my neural network. | https://answers.opencv.org/question/199301/memory-issues-when-loading-videos-into-frames/ | CC-MAIN-2022-05 | refinedweb | 472 | 74.9 |
This is something I've been thinking about a bit lately. Actually, I guess "thinking about" is the wrong turn of phrase, since I haven't so much been thinking about as building one. I'll be "thinking about" public-key auth and OpenId next, hopefully, but the first thing I want to put together is an old-style password-based authentication system.
Oh, yeah. And do it properly.
Which means no Dev 101-level mistakes like storing plaintext passwords, or being subject to injection attacks, or putting up with login hammering, or leaving off the salt. That's a slight increase in challenge from just "set up a user system".
The trivial
gen_server-based user system looks something like
-module(trivial_user). -behaviour(gen_server). -export([start/0, stop/0]). -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). -record(user,{timestamp, username, password}). register(Username, Password) -> gen_server:call(?MODULE, {register, Username, NewPassword}). auth(Username, Password) -> gen_server:call(?MODULE, {auth, Username, Password}). change_password(Username, NewPassword) -> gen_server:call(?MODULE, {change_pass, Username, NewPassword}). exists_p(Username) -> try find(Username) catch error:_ -> false end. handle_call({register, Username, Password}, _From, State) -> Res = case exists_p(Username) of false -> User = #user{username=Username, password=Password, timestamp=now()}, transaction(fun() -> mnesia:write(User) end); _ -> already_exists end, {reply, Res, State}; handle_call({auth, Username, Password}, _From, State) -> try [User] = do(qlc:q([X || X <- mnesia:table(user), X#user.username =:= Name, X#user.password =:= Password])), {reply, User, State} catch error:_ -> {reply, false, State} end; handle_call({change_pass, Username, NewPassword}, _From, State) -> Rec = find(Username), {reply, transaction(fun() -> mnesia:write(Rec#user{password=NewPassword}) end), State}. %%%%%%%%%%%%%%%%%%%% database utility find(Name) -> [Rec] = db) -> State ! {self(), close}, ok. code_change(_OldVsn, State, _Extra) -> {ok, State}.
1> mnesia:create_schema([node()]). ok 2> mnesia:start(). ok 3> rd(user,{username, password, timestamp}). user 4> mnesia:create_table(user, [{type, ordered_set}, {disc_copies, [node()]}, {attributes, record_info(fields, user)}]). {atomic,ok} 5> trivial_user:start(). {ok,<0.90.0>} 6> trivial_user:register("Inaimathi", "password"). ok 7> trivial_user:auth("Inaimathi", "password"). #user{username = "Inaimathi",password = "password", timestamp = {1339,96410,156774}}
In pseudocode it's
def register(username, password): store(username, password, timestamp()) def auth(username, entered_password): if user = find(username) and user.password == entered_password: user else: false def change_pass(username, new_password): store(find(username).password = new_password)
But that hits most of the rookie mistakes I listed above plus a few more. Incidentally, I will murder you if you use this in production and I find out about it. It doesn't hash or salt passwords, it doesn't rate-limit the
auth message, it does get around injection attacks purely through the virtue of being implemented in a symbolic db system, but that probably shouldn't count since it's a consequence of the tools rather than the system itself.
Lets work backwards through the pattern, and see how to arrive at a proper-ish user and authentication system. Firstly, it's important that a potential attacker can't just try 10000 passwords per second. Because if they can, and any of your users use common passwords, then it really doesn't matter how well you store them. You can do something naive, like introducing a return delay when an incorrect password is tried.
... handle_call({auth, Username, Password}, _From, State) -> try [User] = do(qlc:q([X || X <- mnesia:table(user), X#user.username =:= Username, X#user.password =:= Password])), {reply, User, State} catch error:_ -> timer:sleep(2000), {reply, false, State} end; ...
But that blocks. In other words, whenever anyone enters their password incorrectly, everyone waits for two seconds to interact with the user process. Which, shall we say, doesn't scale. Granted, not doing it this way opens up the possibility that someone could just try 10000 parallel requests for a password, but that seems like a lesser evil than making it ridiculously easy to DOS the system.
There are two essential ways of "solving" this problem
- The stateless way would be to decouple authentication from other user actions. We wouldn't have a single authentication process, rather, when a call to
trivial_user:auth/2happens, it should launch a temporary process that tries to authenticate that user. If the correct answer is given, there should be no delay, but there should be a small, non-global delay on a wrong guess.
- The stateful way would be to track how many wrong guesses have been made for a given user name/IP address. At a certain threshold (or perhaps linearly scaling with the number of wrong guesses), impose some sort of limiting factor. This can be as simple as a delay, or as complex as demanding a recaptcha on the front end.
- The ideal way would be to say fuck passwords, collect your users public keys instead, and authenticate them in an actually secure manner. Good luck brute-forcing a 4096 bit RSA key. Then have fun doing it again for every single user. Sadly, this doesn't count as a "solution" because most users are pretty sure they leave their public keys under their welcome mat each morning.
Given the language I'm working with, that first one looks like it'd fit better. In other words, we remove the
auth handler from
trivial_user:handle_call/3
... handle_call({register, Username, Password}, _From, State) -> User = #user{username=Username, password=Password, timestamp=now()}, {reply, transaction(fun() -> mnesia:write(User) end), State}; handle_call({change_pass, Username, NewPassword}, _From, State) -> Rec = find(Username), {reply, transaction(fun() -> mnesia:write(Rec#user{password=NewPassword}) end), State}. ...
and have
trivial_user:auth/2 handle the password checking itself in a child process
auth(Username, Password) -> Pid = self(), Auth = fun() -> User = find(UserName), true = Password =:= User#user.password, Pid ! User end, AuthProc = spawn(Auth), receive Res -> exit(AuthProc, thank_you), Res after 2000 -> false end.
Do note the use of offensive programming in the
Auth function. We don't do any kind of cleanup if the password is incorrect, just let
AuthProc die a horrible, error-induced death and move on with our lives. We do stop waiting for it after two seconds, which is incidentally the delay we wanted to introduce for a wrong entry. Instead of being able to naively try 10000 passwords per second, our theoretical attackers can now try one every ~2, which should make this auth process a slightly harder target.
Next up, we're still storing user passwords as plaintext, which is less than ideal. That means that anyone who succeeds in getting at our data somehow can suddenly impersonate anyone in the system flawlessly. That's why we have to hash them. Now, there are hashing libraries in Erlang, including the built-in crypto parts of which we'll be using, but.
1. Hash functions are tricky to pick, even before you get into cryptographic hash functions. In fact, there are a couple of widely-used ones[1] that have been subject to successful attacks. Given that, I'm leaning towards the SHA-2 algorithms which, as of this writing, have not been successfully broken. DO NOT read that as "I should use SHA256 from now on". Read it instead as "Before deciding on a hash function, I should check which ones are difficult to break at the time I'm making the decision". That complicates things somewhat, because Erlang's
crypto only supports MD5 and SHA-1[2], installing the Erlang SHA256 library seems to be more than trivially difficult, and even if it wasn't
2. Cryptographic functions are tricky to implement. By all means, try to as a learning experience, but there are non-obvious attacks that you can leave your implementation open to, even if you do put everything together properly. The rule is "do NOT roll your own". By extension, "do NOT use a crypto library written by someone merely as smart as you", and "do NOT use a crypto library that hasn't been extensively battle tested". In fact, this is the one place where I'd say going with the herd[3] is the right thing to do. 68 watchers (myself included) isn't quite enough to make me confident that all the bugs and attacks have been shaken out of the implementation.
So, for those borderline-excuse reasons (and also because I want to show how to do it), we'll be using Python's
hashlib with
erlport. It sounds scary, but it is trivial. Once you install
erlport (which is available through
setuptools), you just kind of...
-module(sha256). -behaviour(gen_server). -export([start/0, stop/0]). -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). -export([encode/1]). encode(String) -> gen_server:call(?MODULE, {encode, String}). handle_call({'EXIT', _Port, Reason}, _From, _State) -> exit({port_terminated, Reason}); handle_call(Message, _From, Port) -> port_command(Port, term_to_binary(Message)), receive {State, {data, Data}} -> {reply, binary_to_term(Data), State} after 6000 -> exit(timeout) end. %%%%%%%%%%%%%%%%%%%% generic actions start() -> gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). stop() -> gen_server:call(?MODULE, stop). %%%%%%%%%%%%%%%%%%%% gen_server handlers init([]) -> {ok, open_port({spawn, "python -u sha256.py"}, [{packet, 4}, binary, use_stdio])}. handle_cast(_Msg, State) -> {noreply, State}. handle_info(_Info, State) -> {noreply, State}. terminate(_Reason, State) -> State ! {self(), close}, ok. code_change(_OldVsn, State, _Extra) -> {ok, State}.
## sha256.py from erlport import Port, Protocol, String import hashlib class Sha256Protocol(Protocol): def handle_encode(self, message): return hashlib.sha256(unicode(message)).hexdigest() if __name__ == "__main__": Sha256Protocol().run(Port(packet=4, use_stdio=True))
Incidentally, you probably see why I decided to write myself a quickie templating library for Erlang modules. Not to bust out the SLW here, but in Lisp, I would handle the same problem with one
defmacro, and thereafter be calling the resulting
(define-gen-server handle-call &key (start (gen-server:start-link `(local ,*module*) *module nil nil)) (stop...) ...). But hey, relying on your editor to do shit that should be handled in the language seems to be serving ~67% of the programming world just fine, so whatever the fuck.
Ahem.
What you see above is the trivial string hashing implementation. Writing it took me somewhat less effort than learning how to use rebar[4], and I now get to call
sha256:encode("Something something"). with reasonable confidence that a very large number of people smarter than me have failed to find errors in the code doing the work for me. Now that we've got that, we need to modify two things in the
trivial_user module. First, we need to store the hashed password, both when registering and changing
... handle_call({register, Username, Password}, _From, State) -> false = exists_p(Username), User = #user{username=Username, password=sha256:encode(Password), timestamp=now()}, {reply, transaction(fun() -> mnesia:write(User) end), State}; handle_call({change_pass, Username, NewPassword}, _From, State) -> User = find(Username), {reply, transaction(fun() -> mnesia:write(User#user{password=sha256:encode(NewPassword)}) end), State}. ...
And second, when authenticating, we need to hash the input before comparing a password with what we've got stored
... auth(Username, Password) -> Pid = self(), Auth = fun() -> User = find(UserName), true = sha256(Password) =:= User#user.password, Pid ! User end, AuthProc = spawn(Auth), receive Res -> exit(AuthProc, thank_you), Res after 2000 -> false end. ...
There. Now, if some ne'er-do-well manages to get a hold of our password database somehow, he won't be looking at
[{"John Douchebag", "P@ssword123"}, {"Jane Douchebag", "P@ssword123"}, {"Dave Foobar", "P@ssword123"}, {"Alex Nutsack", "P@ssword231"}, {"Brian Skidmore", "P@ssword123"}, {"Rose Cox", "P@ssword123"}, {"Barbara Lastname", "P@ssword123"}, {"Dora Smartass", "correcthorsebatterystaple"} ...]
he'll instead be looking at
[{"John Douchebag", "62a39df87b501ad40b6fc145820756ccedcab952c64626968e83ccbae5beae63"}, {"Jane Douchebag", "62a39df87b501ad40b6fc145820756ccedcab952c64626968e83ccbae5beae63"}, {"Dave Foobar", "62a39df87b501ad40b6fc145820756ccedcab952c64626968e83ccbae5beae63"}, {"Alex Nutsack", "a52c4ef2c82e00025191375eadfea1e28b6389ab6091f1ab66e7549d1edef2f3"}, {"Brian Skidmore", "62a39df87b501ad40b6fc145820756ccedcab952c64626968e83ccbae5beae63"}, {"Rose Cox", "62a39df87b501ad40b6fc145820756ccedcab952c64626968e83ccbae5beae63"}, {"Barbara Lastname", "62a39df87b501ad40b6fc145820756ccedcab952c64626968e83ccbae5beae63"}, {"Dora Smartass", "cbe6beb26479b568e5f15b50217c6c83c0ee051dc4e522b9840d8e291d6aaf46"} ...]
And that should illustrate exactly why salt is an important thing to use. You'll notice that the same string always hashes to the same output. That's good, because that means we have a simple way to compare passwords later. But. If a lot of your users use the same password (and while this is an exaggerated example, you would be very surprised at how many people pick worse on a regular basis), then someone who guesses what hash algorithm you're using can easily run a rainbow table against the hashes they found to guess large chunks of the plaintexts.
That is not good. And it's precisely the problem that salt is meant to solve. The important part of a salt is that it's unique. Some people like it to be cryptographically secure, but I don't think it has to be. You're trying to avoid the situation where cracking one password gets your attacker access to more than one account. Do note that "unique" means "really, truly, globally unique". As in, don't just set a padded counter starting from 1, because different instances of your system will have some identical salt values. Also, obviously, don't just use a single salt value per server because that would defeat the purpose entirely. It needs to be different per secret, which means we need to change it out when a user changes their password too[5].
Just to drive the point home, if you use a single salt-value per user, the hashes above will look like
[{"John Douchebag", "a26d44677573d3dfdfe116dc46979ce7ff00d9877a05d59158e74d2cf955400c"}, {"Jane Douchebag", "a26d44677573d3dfdfe116dc46979ce7ff00d9877a05d59158e74d2cf955400c"}, {"Dave Foobar", "a26d44677573d3dfdfe116dc46979ce7ff00d9877a05d59158e74d2cf955400c"}, {"Alex Nutsack", "0f751ddd05eb211a8300254701dce2ea045805e39113a821a10adf747243fc27"}, {"Brian Skidmore", "a26d44677573d3dfdfe116dc46979ce7ff00d9877a05d59158e74d2cf955400c"}, {"Rose Cox", "a26d44677573d3dfdfe116dc46979ce7ff00d9877a05d59158e74d2cf955400c"}, {"Barbara Lastname", "a26d44677573d3dfdfe116dc46979ce7ff00d9877a05d59158e74d2cf955400c"}, {"Dora Smartass", "fc5edff6668c8678f4c242cdea531cfd8883add17072e7ff1db76ea21952504b"} ...]
It means that it's slightly harder to crack one of your passwords[6], but if a password is cracked, your attacker still has the benefit of compromising the complete set of users that have that same password.
The absolute simplest, most brain-dead way to generate salt is to run an operation per password that looks something like
make_salt() -> binary_to_list(crypto:rand_bytes(32)).
And that may actually be going overboard by about 16 bytes. Calling
make_salt/0 will return something like
[239,97,166,69,1,8,19,68,253,82,111,74,152,123,103,164,209,44,92,246,177,60,38,201,107,116,72,219,82,204,49], which we then concatenate with a password in order to make the world a slightly better place for people who use passwords like
P@ssword123.
On reflection, this may not be a good thing, but it does make our user system one increment better. We now need to store salt for each user, and use it in our hashing step when comparing and storing passwords. So.
salt(Salt, String) -> sha256:encode(lists:append(Salt, String)). ... auth(Username, Password) -> Pid = self(), Auth = fun() -> User = find(Username), true = salt(User#user.salt, Password) =:= User#user.password, Pid ! User end, AuthProc = spawn(Auth), receive User -> exit(AuthProc, thank_you), {User#user.username, User#user.timestamp} after 2000 -> false end. ... ... handle_call({register, Username, Password}, _From, State) -> false = exists_p(Username), Salt = make_salt(), User = #user{username=Username, password=salt(Salt, Password), salt=Salt, timestamp=now()}, {reply, transaction(fun() -> mnesia:write(User) end), State}; handle_call({change_pass, Username, NewPassword}, _From, State) -> User = find(Username), Salt = make_salt(), {reply, transaction(fun() -> mnesia:write(User#user{password=salt(Salt, NewPassword), salt=Salt}) end), State}. ...
Now that we have effective, per-password salt going, that potentially leaked table looks a bit different.
[{"John Douchebag", [218,207,128,49,205,116,234,236,67,27,74,144,22,45,219,251, 58,82,240,14,233,252,56,105,28,112|...], <<"a0366db583c76fd81901e57f69b4f2f67b9ab779ae76e5ff3ce8c82fdc21b1ea">>}, {"Jane Douchebag", [141,235,133,13,140,199,19,158,169,8,188,147,25,247,31,62, 112,41,175,243,68,139,130,236,112|...], <<"b6dd5d87a80e166dea1b1959526f544b3d9da3818e178fe82e7571c30ea32077">>}, {"Dave Foobar", [248,80,49,63,241,204,182,120,53,181,84,5,51,142,34,240,187, 76,115,55,29,207,149,93|...], <<"6442397fd432fa1fa05d96e2db08c3ea4b840ecddf9b3bcf1f0904ec95a2e7cf">>}, {"Alex Nutsack", [255,116,72,208,37,69,135,169,131,253,115,135,39,54,14,118, 216,35,92,157,183,96,87|...], <<"6a966e1362d27851fac8e5ed44cff1eb7f3b15035d86e20438e228a2b8441a5e">>}, {"Brian Skidmore", [149,22,172,0,14,45,14,228,19,66,214,170,87,238,39,126,65, 229,118,44,49,18|...], <<"da65f803390a3886915c84adf444324c2d90396f6fcfc9a97900d14ed4ffc264">>}, {"Rose Cox", [67,22,142,129,118,7,112,66,187,180,201,168,244,132,118,170, 56,250,127,132,189|...], <<"67235dfae2f44bf68101b67773e2512193383a6d7e965cc423056ad750ab5806">>}, {"Barbara Lastname", [214,17,61,189,60,148,2,168,65,140,87,224,216,40,14,132,129, 145,238,153|...], <<"669b6876ad2cd40b857bd8b0ff67d49df2133498bb7b6a8fd8bbe764889f9c1b">>}, {"Dora Smartass", [191,211,52,128,89,167,168,177,221,238,21,94,121,15,20,22, 144,11,235|...], <<"bdec4a9e62d5f03651720903d2001d82a3167aefd43bc22741c482b98f83ad43">>} ...]
Even if the attacker gets each user's salt as in the above example, check out the password hashes.
"a0366db583c76fd81901e57f69b4f2f67b9ab779ae76e5ff3ce8c82fdc21b1ea", "b6dd5d87a80e166dea1b1959526f544b3d9da3818e178fe82e7571c30ea32077", "6442397fd432fa1fa05d96e2db08c3ea4b840ecddf9b3bcf1f0904ec95a2e7cf", "6a966e1362d27851fac8e5ed44cff1eb7f3b15035d86e20438e228a2b8441a5e", "da65f803390a3886915c84adf444324c2d90396f6fcfc9a97900d14ed4ffc264", "67235dfae2f44bf68101b67773e2512193383a6d7e965cc423056ad750ab5806", "669b6876ad2cd40b857bd8b0ff67d49df2133498bb7b6a8fd8bbe764889f9c1b", "bdec4a9e62d5f03651720903d2001d82a3167aefd43bc22741c482b98f83ad43"
The important part here is that even though 6 of those 8 users use the same passwords, there's no way to find that out based on just the hashes. Meaning that the theoretical attacker here would actually have to crack the password of every account they want access to. Granted, it's still easier to guess a password like "P@ssword123" than a passphrase generated in the correct horse style, but our system is still more secure for having these small steps.
Just to bring it all together, the final code for a proper, salted, hashing user/password system is
-module(trivial_user). -behaviour(gen_server). -include_lib("stdlib/include/qlc.hrl"). -export([start/0, stop/0]). -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). -record(user,{username, password, salt, timestamp}). -export([register/2, auth/2, change_password/2, list/0]). list() -> gen_server:call(?MODULE, list). register(Username, Password) -> gen_server:call(?MODULE, {register, Username, Password}). auth(Username, Password) -> Pid = self(), Auth = fun() -> User = find(Username), true = salt(User#user.salt, Password) =:= User#user.password, Pid ! User end, AuthProc = spawn(Auth), receive User -> exit(AuthProc, thank_you), {User#user.username, User#user.timestamp} after 2000 -> false end. change_password(Username, NewPassword) -> gen_server:call(?MODULE, {change_pass, Username, NewPassword}). handle_call(list, _From, State) -> {reply, do(qlc:q([{X#user.username, X#user.timestamp} || X <- mnesia:table(user)])), State}; handle_call({register, Username, Password}, _From, State) -> Res = case exists_p(Username) of false -> Salt = make_salt(), User = #user{username=Username, password=salt(Salt, Password), salt=Salt, timestamp=now()}, transaction(fun() -> mnesia:write(User) end); _ -> already_exists end, {reply, Res, State} handle_call({change_pass, Username, NewPassword}, _From, State) -> User = find(Username), Salt = make_salt(), {reply, transaction(fun() -> mnesia:write(User#user{password=salt(Salt, NewPassword), salt=Salt}) end), State}. %%%%%%%%%%%%%%%%%%%% database utility make_salt() -> binary_to_list(crypto:rand_bytes(32)). salt(Salt, String) -> sha256:encode(lists:append(Salt, String)). exists_p(Username) -> try find(Username) catch error:_ -> false end. find(Name) -> [Rec] =}.
The pseudocode differences are minute, to be sure,
def register(username, password): store(username, salt(s, password), timestamp(), s) def auth(username, entered_password): spawn: if user = find(username) and user.password == salt(user.s, entered_password): user.except(password, salt) else: wait(2, :seconds) false def change_pass(username, new_password): store(find(username).password = salt(s, new_password), s) def salt(s, string): secure_hash(s + string)
but they make for a more robust password-based system. Granted, that's still like being really, really good at arguing on the internet, but baby steps.
Github here, if you want to play around with it.EDIT:
The link above no longer exists. All features from this library have been folded into auth (there have been changes since this post was written, so it's not exactly the same codebase, but the principles are the same)Thu, 30 Aug, 2012
For next time, I'll be putting together an extension to this that does public-key-based auth, (as well as passwords for the normies).
Footnotes
1 - [back] - Such as MD5 and SHA1. Note that using these, for example, in the way that git does isn't a huge deal, since that's merely supposed to be a consistency check and not a security feature.
2 - [back] - Surprise, surprise.
3 - [back] - As long as the herd isn't demonstrably wrong, of course.
4 - [back] - Which I should probably do in any case, but still.
5 - [back] - By the way, salt does not have to be secret. You can keep it in the same table as your passwords, and you shouldn't be particularly worried if someone finds out which salt goes with which password. Well, no more worried than if they just got a hold of your hashed passwords.
6 - [back] - How much harder depends on what salt you use.
7 - [back] - If you're particularly obsessive, use
crypto:strong_rand_bytes/1 instead. The only difference is that the
strong_ variant gets some of its randomness from OS provided entropy, but it may also periodically hand you back a
low_entropy error instead of a random byte string.
If you are going to put something like this in production maybe you should consider changing your password hashing algorithm from SHA-256 to bcrypt (see ).
Anyway, nice article! :)
Hmm.
Good point; I'll write an updated version that recommends using `bcrypt` and outlines the reasoning. | http://langnostic.blogspot.com/2012/06/authentication.html | CC-MAIN-2017-39 | refinedweb | 3,448 | 56.35 |
#include <stdio.h> int ferror(FILE *stream);
int feof(FILE *stream);
void clearerr(FILE *stream);
int fileno(FILE *stream);
The ferror() function returns a non-zero value when an error has previously occurred reading from or writing to the named stream (see Intro(3)). It returns 0 otherwise.
The feof() function returns a non-zero value when EOF has previously been detected reading the named input stream. It returns 0 otherwise.
The clearerr() function resets the error indicator and EOF indicator to 0 on the named stream.
The fileno() function returns the integer file descriptor associated with the named stream; see open(2).
See attributes(5) for descriptions of the following attributes:
open(2), Intro(3), fopen(3C), stdio(3C), attributes(5), standards(5) | http://docs.oracle.com/cd/E36784_01/html/E36874/ferror-3c.html | CC-MAIN-2016-26 | refinedweb | 124 | 55.84 |
This three-part article series shows you how to use the Model-View-View-Model (MVVM) design pattern in MVC Core applications. The MVVM approach has long been used in WPF applications but hasn't been as prevalent in MVC or MVC Core applications. This article illustrates how using MVVM in MVC makes your applications even more reusable, testable, and maintainable. You're going to be guided step-by-step through building an MVC Core application that uses the Entity Framework (EF) and a view model class to display and search for product data.
Related Articles
Use the MVVM Design Pattern in MVC Core: Part 2
Use the MVVM Design Pattern in MVC Core: Part 3
The Model-View-View-Model Approach
The reasons programmers are adopting the MVVM design pattern are the same reasons programmers adopted Object Oriented Programming (OOP) over 30 years ago: separation of concerns, reusability, maintainability, and testability. Wrapping the logic of your application into small, stand-alone classes gives you those same benefits: each piece of functionality lives in exactly one place, so when you fix a bug in one of these classes, any other classes using that class automatically get the fix.
The key to MVVM in MVC is moving logic out of the MVC controller and into a view model class that contains properties that you can bind to the user interface. The controller should only call a method in your view model, and periodically set a property or two prior to calling that method. Any properties the controller doesn't need to set are normally bound directly to controls on your user interface. To ensure reusability of your view model classes, don't use any ASP.NET objects such as Session, ViewBag, TempData, etc. in your view model.
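As a minimal sketch of that idea (the class, property, and method names here are hypothetical illustrations, not the article's actual sample code), a view model for a product page might look like this:

```csharp
using System.Collections.Generic;

// Hypothetical entity class standing in for an EF entity.
public class Product
{
    public int ProductID { get; set; }
    public string Name { get; set; }
}

// A view model for a single page: bindable properties plus the
// logic the controller would otherwise contain. Note that no
// ASP.NET objects (Session, ViewBag, TempData) appear here.
public class ProductViewModel
{
    // Bound to an <input> element on the page.
    public string SearchName { get; set; }

    // Bound to a table on the page.
    public List<Product> Products { get; set; }

    public ProductViewModel()
    {
        Products = new List<Product>();
    }

    // The controller calls this method; in a full application the
    // data access would be delegated to a data-layer class.
    public void LoadProducts()
    {
        // Placeholder data in place of an EF query.
        Products = new List<Product>
        {
            new Product { ProductID = 1, Name = "Mountain Bike" },
            new Product { ProductID = 2, Name = "Road Bike" }
        };
    }
}
```

Because the class holds no references to Session, ViewBag, or any other ASP.NET type, it can be compiled into a plain class library and reused from WPF, unit tests, or anywhere else.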
Why Use MVVM in MVC
When using MVVM in WPF, you typically have a view model class for each screen. In MVC, you do the same by having one view model class per page. Of course, there can be exceptions, but this separation keeps your controllers and view model classes small and focused. The controller's GET method creates an instance of the view model (or has one injected) and passes it as the model to the page. The page binds to properties on the view model as HTML input or hidden elements. These bound properties may come from an instance of one of your EF entity classes, or they may be additional properties that are specific to that page. If you have input elements on the page, the POST method in the controller accepts the view model as its parameter and all the bound properties are filled in automatically.
From within the controller, call a method in the view model to get or save data, or any other command you need. Your controller methods are thus kept small because the code to load the page, and to save the data from the page, is all within the view model. Data binding takes care of all the rest.
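As a sketch of what this looks like in code (hedged: this POST action isn't built until later in this series, and the `_repo` field name and the exact attribute usage here are assumptions for illustration only):

```csharp
// Hypothetical POST action sketch; not part of this article's listings.
// Assumes the controller has an injected IProductRepository in a _repo field.
[HttpPost]
public IActionResult Products(ProductViewModel vm)
{
    // MVC model binding has already filled in the bound properties of 'vm',
    // but the repository reference can't round-trip through the browser,
    // so re-attach it before calling into the view model.
    vm.Repository = _repo;

    // Let the view model do all the work
    vm.HandleRequest();

    return View(vm);
}
```

Re-attaching the repository is essentially the only controller-specific work left; everything else is data binding plus one method call.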
Another big advantage of using a view model class is that unit testing is easier. There’s no need to create an instance of, or test, any methods in the controller. If all the controller is doing is calling methods in the view model, then you just need to test the view model.
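For example, here's a minimal sketch of such a test (the FakeProductRepository class and the xUnit attributes are assumptions for illustration; they aren't part of the sample you build in this article):

```csharp
using System.Collections.Generic;
using MVVMDataLayer;
using MVVMEntityLayer;
using MVVMViewModelLayer;
using Xunit;

// A fake repository lets you test the view model
// without a database, the Entity Framework, or a controller.
public class FakeProductRepository : IProductRepository
{
    public List<Product> Get()
    {
        return new List<Product>
        {
            new Product { Name = "Test Product", ProductNumber = "TP-0001" }
        };
    }
}

public class ProductViewModelTests
{
    [Fact]
    public void HandleRequest_LoadsProducts()
    {
        // Inject the fake instead of the EF-based repository
        ProductViewModel vm = new ProductViewModel(new FakeProductRepository());

        vm.HandleRequest();

        Assert.Single(vm.Products);
        Assert.Equal("Test Product", vm.Products[0].Name);
    }
}
```

Because the view model never touches Session, ViewBag, or any other ASP.NET object, the test needs no Web server or mocked HTTP context.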
Many of us have been developing for many years and we’ve seen technologies come and go. However, the basics of how you create classes with methods and properties hasn’t changed. If you keep any technology-specific objects such as Session, ViewData, TempData, etc., out of your view models, then moving from one technology to another becomes easier. For example, I’ve had many clients create an MVC application, then decide that they want to create a part of that application in WPF for their data-entry people. I only had to create the UI in WPF and simply reuse the view model classes that had already been tested.
The Tools You Need
If you follow along with the steps in this article, you’re going to build an MVC Core application. The application is going to show you how to interact with a product table in a SQL Server database. I’m going to use Visual Studio Code, MVC Core, C#, the Entity Framework and the Adventure Works Lite sample database for SQL Server. You can get the Adventure Works Lite database on my GitHub at.
In the GitHub repository there’s a file named
AdventureWorksLT.bak that you can use to restore the SQL Server database. If you’re unable to restore the backup, there’s a
SalesLT-Product.sql file that you can use to create the product table with data in any SQL Server database.
The Overall Project
For this article, you’re going to be creating the four projects shown in Figure 1. This architecture is one that I’ve found to be flexible, reusable, and testable. The four projects are the MVC Core application, and one class library each for your view model classes, data layer classes, and entity classes. For many of you, the only thing that might be a little different is the addition of the view model classes. The focus of this article is to show how to simplify your controllers and move most of your logic into view model classes. A secondary focus is to illustrate how to build a multi-layered project in Visual Studio Code with .NET Core.
In this article, you’re going to be using the SalesLT.Product table in the AdventureWorksLT database. You’ll first learn to build an HTML table to list all products in this table. Next, you add search capabilities so you can search on one or multiple columns. The logic to build the listing and searching is created in the view model class instead of in the controller.
The ProductController class (1a in Figure 1) has both an instance of a ProductViewModel class (2a) and an implementation of an IProductRepository (3b) injected into it by MVC Core. The implementation is a ProductRepository class (3c), which also has an instance of the AdvWorksDbContext class (3a) injected into it by MVC Core.
The ProductRepository is passed into the constructor of the ProductViewModel class by the DI service in MVC Core. A method named HandleRequest() is called on the view model to retrieve the data from the Product table using the Entity Framework, which is implemented in the ProductRepository class. Each row of data in the Product table is represented by the entity class Product (4a). A generic list of Product objects is placed into a
Products property of the ProductViewModel. This view model class is used as the model to the Products page to create the HTML table.
Create the MVC Core Project
Let's create our MVC Core project using Visual Studio Code. Start up Visual Studio Code and open a terminal window by clicking the
Terminal > New Terminal menu. Navigate to the folder where you normally place your development projects. For example, I’m going to create my project under the
D:\Samples folder. Once I open a terminal window, I type in the command:
CD D:\Samples
Once you’re in your development folder, create a new folder named
MVVMSample using the MKDIR (or the MD) command:
MKDIR MVVMSample
Change to the new directory using the CD command in the terminal window, as shown in the command below.
CD MVVMSample
Create a new MVC Core project in the MVVMSample folder by executing the
dotnet new mvc command.
dotnet new mvc
Once the command has run, open the folder in VS Code by clicking the
File > Open Folder... menu. A few seconds after opening the folder, you should be prompted with a dialog that looks like Figure 2. Click on the
Yes button to add the required assets to the new project.
Try It Out
To ensure that everything is created correctly, click the
Run > Start Debugging menu and a Web page should be displayed in your default browser.
Create the Entity Layer Project
Now that you have the MVC project created, start building the rest of your projects, beginning with the Entity project. This project is where you place all the classes that map to each table in your database. This project should be a simple class library with as few references to DLLs as possible. If you keep the references to a minimum, you can reuse your Entity DLL in other projects. Although you're going to map the Product entity in this article to a table in a SQL Server database, by keeping your entity classes in a separate DLL, you can also map Product data from an XML or JSON file to the same Product entity class.
To create the Entity project, open a terminal window by clicking on the
Terminal > New Terminal menu. Go to your development directory (in my case that was D:\Samples). Type in the following commands to create a new folder named
MVVMEntityLayer:
MD MVVMEntityLayer
CD MVVMEntityLayer
dotnet new classlib
The
dotnet new classlib command creates a class library project with the minimum number of references to .NET Core libraries. Now that you’ve created this new project, add it to your Visual Studio Code workspace by clicking the
File > Add Folder to Workspace... menu. Select the MVVMEntityLayer folder and click on the
Add button. You should see the MVVMEntityLayer project added to your VS Code workspace.
Save this workspace by clicking on the
File > Save Workspace As... menu. Set the name to
MVVMSampleWS and place it into the MVVMSample folder. In the future, you can now double-click on this
MVVMSampleWS.code-workspace file to open your solution.
Add Data Annotations
In order to map a class to a table using the Entity Framework, you need some data annotation attribute classes. These data annotation classes are not a part of the default .NET Core library, so you need to add a library. Click on the
Terminal > New Terminal menu and you should be shown options to open the new terminal window in either the MVVMSample or the MVVMEntityLayer folder. Select the MVVMEntityLayer folder and type in the following command to add the appropriate DLL so you can use the data annotation attributes.
dotnet add package Microsoft.AspNetCore.Mvc.DataAnnotations
Create the Product Class
Within the MVVMEntityLayer project, rename the
Class1.cs file to
Product.cs. Open the
Product.cs file, remove the code currently in the file and replace it with the code in Listing 1. Most of the code in this Product class should be familiar to you. The
[Table()] attribute is used to inform the Entity Framework of the name of the table and schema in which to find the table in the database. Add the
[Column()] attribute to any decimal values so the precision and scale can be specified. Add the
[DataType()] attribute to any fields to be displayed as currency or date values. MVC Core reads the
[DataType()] attribute when using the
DisplayFor() method and if it’s currency, displays the value in the correct format on the Web page.
Listing 1: The Product class maps the fields in the Product table to properties in this class
using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

namespace MVVMEntityLayer
{
  [Table("Product", Schema = "SalesLT")]
  public partial class Product
  {
    public int? ProductID { get; set; }
    public string Name { get; set; }
    public string ProductNumber { get; set; }
    public string Color { get; set; }

    [DataType(DataType.Currency)]
    [Column(TypeName = "decimal(18, 2)")]
    public decimal StandardCost { get; set; }

    [DataType(DataType.Currency)]
    [Column(TypeName = "decimal(18, 2)")]
    public decimal ListPrice { get; set; }

    public string Size { get; set; }

    [Column(TypeName = "decimal(8, 2)")]
    public decimal? Weight { get; set; }

    [DataType(DataType.Date)]
    public DateTime SellStartDate { get; set; }

    [DataType(DataType.Date)]
    public DateTime? SellEndDate { get; set; }

    [DataType(DataType.Date)]
    public DateTime? DiscontinuedDate { get; set; }
  }
}
Reference Entity Layer from the MVVM Sample Project
If you’re used to using Visual Studio, you know that in order to use classes from another project, you must include a reference to that DLL. It’s no different in VS Code, but to set the reference, you need to type in a command. Click on the
Terminal > New Terminal menu and select the MVVMSample folder. Set a reference to the MVVMEntityLayer project using the following command.
dotnet add . reference ../MVVMEntityLayer/MVVMEntityLayer.csproj
Try It Out
To ensure that you’ve typed in everything correctly, run a build task to compile the projects. Because you have a reference from the MVVMSample to the MVVMEntityLayer, if you run a build task on the MVVMSample project, it builds the MVVMEntityLayer project as well. Select the
Terminal > Run Build Task... menu and select the MVVMSample project. Watch the output in the terminal window and you should see that it compiles both projects.
Create the Data Layer Project
The next project to create is the Data Layer project in which you place all your repository classes. These repository classes interact with the tables in your database through the Entity Framework. Each repository class should implement an interface. Having both a class and an interface allows you to use DI to inject the concrete implementation of the interface. This is advantageous when it comes to testing time. You can swap out the concrete implementation, but the rest of your code stays the same.
To create this project, open a terminal by clicking on the
Terminal > New Terminal menu. Go to your development directory (in my case, that was D:\Samples). Type in the following commands to create a new folder named
MVVMDataLayer.
MD MVVMDataLayer
CD MVVMDataLayer
dotnet new classlib
Add this new project to your Visual Studio Code workspace, by clicking the
File > Add Folder to Workspace... menu. Select the MVVMDataLayer folder and click on the
Add button. You should now see the MVVMDataLayer project added to your VS Code workspace. Save your workspace by clicking on the
File > Save menu. You can go ahead and delete the
Class1.cs file, as you won't be needing that.
Add Entity Framework
For this article, you’re going to use the Entity Framework for the concrete implementation of your repository classes. In order to use the Entity Framework in the .NET Core application, you need to add a package to the project. Go back to the terminal window that should still be open in the MVVMDataLayer folder. Type in the following command to add the Entity Framework to the data layer project.
dotnet add package Microsoft.EntityFrameworkCore.SqlServer
Add Some Folders
As you’re going to have different types of files in this project, add three folders in which to store these different files named
Models, RepositoryClasses, and
RepositoryInterfaces. In the Models folder, place at least one class that inherits from the Entity Framework's DbContext class. You’re going to have one repository class for each table that you need to interact with in your database. Place these classes in the RepositoryClasses folder. Into the RepositoryInterfaces folder is where you place the corresponding interface for each repository class.
Reference the Entity Layer
Because your data layer will be reading data from the Product table, you’re going to need access to the Product class that you created earlier. Add a reference to the Entity Layer project you created earlier by opening a terminal window in the MVVMDataLayer folder and typing the following command:
dotnet add . reference ../MVVMEntityLayer/MVVMEntityLayer.csproj
Add a DbContext Class
For your simple article, you’re just going to need a single class to inherit from the Entity Framework's DbContext class. Add a new file in the Models folder named
AdvWorksDbContext.cs. Add the code shown in Listing 2 to this new file. The AdvWorksDbContext class is a standard implementation of an EF DbContext object. The constructor is passed an instance of a DbContextOptions object used to pass in any options such as a connection string. A single
DbSet<Product> property is created to hold the collection of products that are read from the
SalesLT.Product table.
Listing 2: Add a class that inherits from DbContext to access your database tables
using Microsoft.EntityFrameworkCore;
using MVVMEntityLayer;

namespace MVVMDataLayer
{
  public partial class AdvWorksDbContext : DbContext
  {
    public AdvWorksDbContext(DbContextOptions<AdvWorksDbContext> options)
      : base(options)
    {
    }

    public virtual DbSet<Product> Products { get; set; }
  }
}
Add Product Interface Class
Our repository class needs a single method named Get() used to retrieve all records from the Product table. Before creating the repository class, add an interface with the contract for this Get() method. Create a new file in the RepositoryInterfaces folder named
IProductRepository.cs. Add the code shown in the code snippet below.
using System.Collections.Generic;
using MVVMEntityLayer;

namespace MVVMDataLayer
{
  public interface IProductRepository
  {
    List<Product> Get();
  }
}
Add Product Repository Class
In the RepositoryClasses folder, add a new file named
ProductRepository.cs. Enter the code shown in Listing 3 into the new file you created. This class needs to implement the Get() method from the IProductRepository interface. In addition, add a private property of type AdvWorksDbContext, which is set from the constructor of this class. When a ProductRepository class is created by the DI service, an instance of the AdvWorksDbContext, also created by the DI service, is injected into this class.
Listing 3: Each repository class implements an interface and receives a DbContext object in its constructor
using System;
using System.Collections.Generic;
using System.Linq;
using MVVMEntityLayer;

namespace MVVMDataLayer
{
  public class ProductRepository : IProductRepository
  {
    public ProductRepository(AdvWorksDbContext context)
    {
      DbContext = context;
    }

    private AdvWorksDbContext DbContext { get; set; }

    public List<Product> Get()
    {
      return DbContext.Products.ToList();
    }
  }
}
Reference Data Layer from MVVM Sample Project
Because the DI system is going to create an instance of our ProductRespository and the AdvWorksDbContext, you’re going to need the data layer project referenced from the MVC Core project. Click on the
Terminal > New Terminal menu and select the MVVMSample folder. Set a reference to the MVVMDataLayer project using the following command.
dotnet add . reference ../MVVMDataLayer/MVVMDataLayer.csproj
Try It Out
To ensure that you’ve typed in everything correctly, run a build task to compile the projects. Because you now have a reference from the MVVMSample to both the MVVMEntityLayer and the MVVMDataLayer projects, all three projects are built. Select the
Terminal > Run Build Task... menu and select the
MVVMSample project. Watch the output in the terminal window and you should see that it compiles all three projects.
Create View Model Project
Now that you’ve created the Product entity class and the data layer to get a collection of Product objects, you can build your view model class. So far, you may be thinking, why do I need a view model class? I have everything I need in my Product and my ProductRepository classes. The answer is because you want to cut down the amount of code you need to write in your controller, and you’re going to need additional properties other than what is in your entity class.
I’m sure you’ve found that many times on your Web pages, you need additional properties to keep track of the page state. This state is keeping track of what page you’re on when paging through a large table. Or what sort column you’re sorted upon. If you’re in an edit page, you might need to keep track of whether the user requested to do an add or an edit of the record. All of this additional data needs to be kept track of, so where do you put it? You don't want to add it to your entity class as you then need to add additional attributes to mark them as not mapped to the table. Plus, you’re going to need these properties on many different pages, and you don't want to copy and paste them from one entity class to another.
This is where a view model class comes in. A view model class holds this additional state data along with an instance of your entity class (Figure 3). Later in this article series, you’re going to add searching, paging, sorting, and CRUD logic to the product Web page. At that time, you’re going to start adding additional properties.
Create your view model project by opening a terminal window. Go to your development directory and type in the following commands to create a new folder named
MVVMViewModelLayer.
MD MVVMViewModelLayer
CD MVVMViewModelLayer
dotnet new classlib
Now that you’ve created this new project, add it to your Visual Studio Code workspace, by clicking on the
File > Add Folder to Workspace... menu. Select the MVVMViewModelLayer folder and click on the
Add button. You should now see the MVVMViewModelLayer project added to your VS Code workspace. Save your workspace by clicking on the
File > Save menu.
Reference the Entity and Data Layers
Your view model class needs access to the Product and ProductRepository classes you created earlier. Add a reference to the entity layer and data layer projects you created earlier by opening a terminal window in the MVVMViewModelLayer folder and typing the following commands:
dotnet add . reference ../MVVMEntityLayer/MVVMEntityLayer.csproj
dotnet add . reference ../MVVMDataLayer/MVVMDataLayer.csproj
Create Product View Model Class
In the view model layer project, rename the
Class1.cs file to
ProductViewModel.cs and delete all the code within this file. Add the code shown in Listing 4 to create your ProductViewModel class. You can see two constructors on this class. You need a parameter-less constructor so MVC can create the class in the POST method of your controller. The other constructor is passed an instance of a ProductRepository class when the ProductViewModel is created by the DI system in MVC Core.
Listing 4: A view model class helps maintain state, exposes model properties to the UI, and interacts with repository classes
using System;
using System.Collections.Generic;
using System.Linq;
using MVVMDataLayer;
using MVVMEntityLayer;

namespace MVVMViewModelLayer
{
  public class ProductViewModel
  {
    /// <summary>
    /// NOTE: You need a parameterless
    /// constructor for post-backs in MVC
    /// </summary>
    public ProductViewModel()
    {
    }

    public ProductViewModel(IProductRepository repository)
    {
      Repository = repository;
    }

    public IProductRepository Repository { get; set; }
    public List<Product> Products { get; set; }

    public void HandleRequest()
    {
      LoadProducts();
    }

    protected virtual void LoadProducts()
    {
      if (Repository == null)
      {
        throw new ApplicationException("Must set the Repository property.");
      }
      else
      {
        Products = Repository.Get().OrderBy(p => p.Name).ToList();
      }
    }
  }
}
Two public properties are needed in the view model. One is for the repository object. This allows you to set the repository object in the POST method of your controller. The other property is a list of Product objects that will be loaded with all the products returned from the table. It’s from this property that the HTML table is built.
Two methods are created in this first iteration of your view model. The protected
LoadProducts() method is responsible for calling the Get() method on the Repository object and retrieving the list of product objects. The public
HandleRequest() method is called from the controller to retrieve the product data. In the future, more logic is added to the HandleRequest() method to perform other functions such as searching, paging, adding, editing, and deleting of product data. The HandleRequest() method should be the only public method exposed from this view model class. By making all other methods not visible outside the class, your controller only needs to ever call a single method.
Reference View Model Layer from MVVM Sample Project
The DI service requires an instance of the ProductViewModel class to inject into your controller, so reference the view model layer project from your MVC Core project. Click on the
Terminal > New Terminal menu and select the MVVMSample folder. Set a reference to the MVVMViewModelLayer project using the following command.
dotnet add . reference ../MVVMViewModelLayer/MVVMViewModelLayer.csproj
Try It Out
To ensure that you’ve typed in everything correctly, run a build task to compile the projects. Because you have a reference from the MVVMSample to the MVVMEntityLayer, the MVVMDataLayer, and the MVVMViewModelLayer projects, all four projects will be built. Select the
Terminal > Run Build Task... menu and select the MVVMSample project. Watch the output in the terminal window and you should see that it compiles all projects.
Set Up the MVC Project
Now that you have the entity, data, and view model projects created to support retrieving and modifying data in the Product table, it’s now time to set up the MVC project for using these classes. The AdvWorksDbContext class needs to be injected using the DI service, so you need to add a reference to the Entity Framework in your MVC project. Select
Terminal > New Terminal and ensure that the terminal prompt is in your MVVMSample project folder. Type in the following command to add the Entity Framework to your MVC project:
dotnet add package Microsoft.EntityFrameworkCore.SqlServer
Modify the Startup Class
In the Startup class is where you inject your classes into the MVC DI service. Open the
Startup.cs file and add three using statements at the top. You need one each for your Data Layer, View Model Layer, and for the Entity Framework.
using MVVMDataLayer; using MVVMViewModelLayer; using Microsoft.EntityFrameworkCore;
Next, create a new method in the Startup class named
InjectAppServices() as shown in Listing 5.
Listing 5: Build a separate method to inject your classes into the MVC DI service
private void InjectAppServices(IServiceCollection services)
{
  // Get connection string from appsettings.json
  string cnn = Configuration["ConnectionStrings:AdvWorksConnectionString"];

  // Add AdventureWorks DbContext object
  services.AddDbContext<AdvWorksDbContext>(options => options.UseSqlServer(cnn));

  // Add Classes for Scoped DI
  services.AddScoped<IProductRepository, ProductRepository>();
  services.AddScoped<ProductViewModel>();
}
This method is where you inject your application-specific classes. I like creating a separate method so I don't have an overly-large ConfigureServices() method. Add a call to the InjectAppServices() method from within the ConfigureServices() method as shown in the code snippet below.
public void ConfigureServices(IServiceCollection services)
{
  services.AddControllersWithViews();

  // Inject your application services
  InjectAppServices(services);
}
Add a Connection String
In the InjectAppServices() method, you retrieve a connection string from the
appsettings.json file. Open the
appsettings.json file and add an appropriate JSON property to store your connection string. For formatting purposes in this magazine, I had to break the connection string onto several lines. Your connection string must be on a single line.
"ConnectionStrings": {
  "AdvWorksConnectionString": "Server=Localhost;Database=AdventureWorksLT;Trusted_Connection=True;MultipleActiveResultSets=true;Application Name=MVVM Sample"
},
Create a Product List Page
I hope you agree that you now have a very elegant, reusable, and testable design with the code you’ve written so far. It’s now time to see the fruits of your labor expressed in a controller of your MVC application. Create a new
ProductController.cs file in the
Controllers folder and add the code shown in Listing 6 to this new file.
Listing 6: The controller logic is very simple because of DI and the MVVM design pattern
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using MVVMDataLayer;
using MVVMViewModelLayer;

namespace MVVMSample.Controllers
{
  public class ProductController : Controller
  {
    private readonly IProductRepository _repo;
    private readonly ProductViewModel _viewModel;

    public ProductController(IProductRepository repo, ProductViewModel vm)
    {
      _repo = repo;
      _viewModel = vm;
    }

    public IActionResult Products()
    {
      // Load products
      _viewModel.HandleRequest();

      return View(_viewModel);
    }
  }
}
The MVC DI service injects an instance of the ProductRepository and the ProductViewModel classes into the ProductController class. Two private variables hold each of these instances passed in. When the user navigates to the Product/Products path in the MVC application, the HandleRequest() method is called on the view model. This method loads the products into the Products property in the view model. The view model is then passed to the
Products.cshtml page, which you’re creating next.
Add a Product Page
Add a
Product folder under the Views folder in which to put the MVC pages for displaying your product data. Create a
Products.cshtml file in this new Product folder and enter the code shown in the snippet below.
@model MVVMViewModelLayer.ProductViewModel
@{
  ViewData["Title"] = "Products";
}

<h1>Products</h1>

<partial name="_ProductList" />
Add a Product List Partial Page
Add a partial page named
_ProductList.cshtml in the Product folder. Add the code shown in Listing 7 to this new partial page file. You’re going to be adding more functionality to the product page as you progress through this series of articles, so it makes sense to break your page into multiple partial pages. This is a good separation of concerns design pattern for Web pages.
Listing 7: Create a partial page for the display of the Product table
@using MVVMEntityLayer

<table class="table table-bordered table-hover">
  <thead>
    <tr>
      <th>Product Name</th>
      <th>Product Number</th>
      <th class="text-right">Cost</th>
      <th class="text-right">Price</th>
    </tr>
  </thead>
  <tbody>
    @foreach (Product item in Model.Products)
    {
      <tr>
        <td>@Html.DisplayFor(m => item.Name)</td>
        <td>@Html.DisplayFor(m => item.ProductNumber)</td>
        <td class="text-right">@Html.DisplayFor(m => item.StandardCost)</td>
        <td class="text-right">@Html.DisplayFor(m => item.ListPrice)</td>
      </tr>
    }
  </tbody>
</table>
Modify Index Page
To call the Product List page, open the
Index.cshtml page in the
Views\Home folder and modify the code to look like the snippet shown below.
@{
  ViewData["Title"] = "Home Page";
}

<div class="list-group">
  <a asp-controller="Product" asp-action="Products" class="list-group-item">Product List</a>
</div>
Try It Out
Run the application and click on the Product List link. If you’ve done everything correctly, you should see the list of products from the Product table in the AdventureWorksLT database, as shown in Figure 4.
Searching for Products
To further illustrate how simple using the MVVM design pattern makes your controller code, in this part of the article you add some HTML and code to search for products, as shown in Figure 5. Add a new entity class named ProductSearch (see Figure 3) to hold the two properties to search for. You’re also adding a view model base class that all your view models are going to inherit from. This base class contains, for now, just a single property named
EventCommand.
The Search button you see in Figure 5 uses a
data- attribute to specify the command to send to the view model to process. Here’s the HTML code you’re going to enter in just a bit.
<button type="button" data-action="search">Search</button>
To get the command and send it to the view model, you’re going to write a little bit of JavaScript to put the value “search” into a hidden field on this product page. That hidden field is bound to the
EventCommand property in the view model base class. In the
HandleRequest() method you’re going to check to see if the
EventCommand is filled in with a value, in this case “search”, and if it is, call a
Search() method instead of the
LoadProducts() method you called before to display all products.
Create a Product Search Class
To hold data for searching, create a new class in the Entity Layer project named
ProductSearch.cs and add the code shown in Listing 8. The reason you're creating a new class instead of just reusing the Product class is that eventually you're going to add data validation annotations to the Product class, and you won't want those validations applied to these search values. Another reason is that if you wanted to search on the SellStartDate field, you might need two properties, BeginDate and EndDate, to specify a date range. You don't want to add these two extra properties to your Product class because they don't map to any field in the Product table.
Listing 8: Add a new search class to hold the values to search for in the Product table
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

namespace MVVMEntityLayer
{
  public partial class ProductSearch
  {
    [Display(Name = "Product Name")]
    public string Name { get; set; }

    [Display(Name = "List Price Greater Than or Equal to")]
    [DataType(DataType.Currency)]
    [Column(TypeName = "decimal(18, 2)")]
    public decimal ListPrice { get; set; }
  }
}
Modify Data Layer to Perform Searching
A
Search() method needs to be added to both the interface and to the repository class. Open the
IProductRespository.cs file and add a new method declaration for the repository contract.
List<Product> Search(ProductSearch entity);
Next, open the
ProductRepository.cs file and add a Search() method to complete the interface contract you just defined. In this
Search() method, you add a
Where() method to see if the Name column in the Product table starts with the value you entered in the ProductSearch
Name property and the ListPrice column is greater than or equal to the value in the
ListPrice property.
public List<Product> Search(ProductSearch entity)
{
  List<Product> ret;

  // Perform Searching
  ret = DbContext.Products.Where(p =>
          (entity.Name == null || p.Name.StartsWith(entity.Name)) &&
          p.ListPrice >= entity.ListPrice).ToList();

  return ret;
}
Add View Model Base Class
Add a new folder in the View Model Layer project named
BaseClasses. Within this new folder, create a new file named
ViewModelBase.cs. Add the code shown below to this new file.
namespace MVVMViewModelLayer
{
  public class ViewModelBase
  {
    public ViewModelBase()
    {
      EventCommand = string.Empty;
    }

    public string EventCommand { get; set; }

    public virtual void HandleRequest()
    {
    }
  }
}
Modify the Product View Model Class
To use the
EventCommand property, ensure that all your view model classes inherit from this base class. Open the
ProductViewModel.cs file and modify the class declaration as shown below.
public class ProductViewModel : ViewModelBase
Add a new property to this class to hold an instance of the ProductSearch class by entering the code shown below.
public ProductSearch SearchEntity { get; set; }
Instantiate a new instance of this ProductSearch class in both constructor methods on the ProductViewModel class. An instance is required if you want the user to enter data into the search values and have that data post back into the properties in the ProductSearch object. Be sure to also call the base class' constructor by invoking the base() call on each constructor.
public ProductViewModel() : base()
{
  SearchEntity = new ProductSearch();
}

public ProductViewModel(IProductRepository repository) : base()
{
  Repository = repository;
  SearchEntity = new ProductSearch();
}
Add a SearchProducts() method to the ProductViewModel class. This method is similar to the LoadProducts() method you wrote earlier in that you first check that the Repository property has been set. You then assign to the Products property the results of calling the Search() method in the ProductRepository class.
public virtual void SearchProducts()
{
  if (Repository == null)
  {
    throw new ApplicationException("Must set the Repository property.");
  }
  else
  {
    Products = Repository.Search(SearchEntity)
                         .OrderBy(p => p.Name).ToList();
  }
}
Because you added a virtual HandleRequest() method in the view model base class, add the override keyword to the HandleRequest() method declaration in the ProductViewModel class.
public override void HandleRequest()
Modify the HandleRequest() method to check whether the EventCommand property has a value in it. If there's no value, call the LoadProducts() method as before. However, if there's a "search" value in the EventCommand property, call the SearchProducts() method you just created. The HandleRequest() method should now look like the code snippet below.
public override void HandleRequest()
{
  switch (EventCommand.ToLower())
  {
    case "search":
      SearchProducts();
      break;
    default:
      LoadProducts();
      break;
  }
}
Add Hidden Field Partial Page
Because you added a view model base class that's going to be used for all view models, you should add a partial page that creates hidden fields for the EventCommand property and the other properties you're going to add later in this article series. Add to the Views\Shared folder a new partial page named _StandardViewModelHidden.cshtml. Add the code shown below to this new file.
@model MVVMViewModelLayer.ViewModelBase

<input type="hidden" asp-for="EventCommand" />
Add Search Partial Page
Just like you created a partial page for the product table, create a partial page for the search area on your product page as well. Add a new file under the Product folder named _ProductSearch.cshtml. Into this file add the code shown in Listing 9.
Listing 9: Add a partial page for the search area on your product Web page
@model MVVMViewModelLayer.ProductViewModel

<div class="card">
  <div class="card-header bg-primary text-light">
    <h5 class="card-title">Search for Products</h5>
  </div>
  <div class="card-body">
    <div class="form-row">
      <div class="form-group col">
        <label asp-for="SearchEntity.Name"></label>
        <input asp-for="SearchEntity.Name" />
      </div>
      <div class="form-group col">
        <label asp-for="SearchEntity.ListPrice"></label>
        <input asp-for="SearchEntity.ListPrice" />
      </div>
    </div>
  </div>
  <div class="card-footer bg-primary text-light">
    <button type="button" data-custom-cmd="search">Search</button>
  </div>
</div>
The code in the _ProductSearch.cshtml file creates a Bootstrap card with the search fields contained within that card. In the footer of this card, add a Search button. It's on this Search button that you add the data-custom-cmd attribute that's going to send the command "search" to the view model.
Modify Products Page
When you created the Product page earlier, you had a single <partial> tag to display the _ProductList.cshtml file. You now need to add two more partial pages to the Product page. However, you need to wrap these partial tags in a <form> tag so the data in the hidden field and the search fields can be posted back to the view model in the POST method of the Product controller. Open the Products.cshtml page and replace the previous <partial> tag with the code shown in the following snippet.
<form method="post">
  <partial name="~/Views/Shared/_StandardViewModelHidden.cshtml" />
  <partial name="_ProductSearch.cshtml" />
  <partial name="_ProductList" />
</form>
Post the Form Using JavaScript
On the Search button, you set the button's type attribute to button, which means that it won't post the form back to the controller. It's now time to see how you're going to accomplish that. At the bottom of the Products page, add a section for some JavaScript and enter the code shown in Listing 10.
Listing 10: JavaScript is used to get the command to send to the view model
@section Scripts {
  <script>
    $(document).ready(function () {
      // Connect to any elements that have 'data-custom-cmd'
      $("[data-custom-cmd]").on("click", function (event) {
        event.preventDefault();

        // Fill in the 'command' to post back to view model
        $("#EventCommand").val($(this).data("custom-cmd"));

        // Submit form
        $("form").submit();
      });
    });
  </script>
}
The JavaScript in Listing 10 uses jQuery's $(document).ready() to look for any HTML element that has the data-custom-cmd attribute on it when the page loads. It then hooks into each such element's "click" event and overrides the default click behavior. When the button is clicked, the value in the data-custom-cmd attribute is retrieved and placed into the hidden field named EventCommand. The form is then submitted, which causes the fields in the search area, plus the EventCommand hidden field, to be posted to the controller.
The Post Method
The POST method is created by adding the [HttpPost] attribute above an MVC action method with the same name as the GET method you used earlier. When this POST method is invoked, an instance of the ProductViewModel is created by MVC and the data contained in the inputs within the <form> tag is filled into the corresponding properties. Open the ProductsController.cs file and add the POST method shown in the following code snippet.
[HttpPost]
public IActionResult Products(ProductViewModel vm)
{
  vm.Repository = _repo;

  vm.HandleRequest();

  return View(vm);
}
Try It Out
Run the application, enter some data, like “HL” for the Product Name, in the Search field and click on the Search button. If you’ve done everything correctly, you should see only products that start with the letters “HL” displayed in the product table.
Summary
In this article, you got a taste of how to architect an MVC Core application with multiple projects. Separating your entity, data, and view model classes into separate projects provides you with the most flexibility and helps you focus on each layer as you develop. You also learned how little code you need in your controller. Using a hidden field makes it easy to communicate commands to your view model. In the next article in this series, you’ll learn how to do sorting and paging using the MVVM design pattern.
Related Articles
Use the MVVM Design Pattern in MVC Core: Part 2
Use the MVVM Design Pattern in MVC Core: Part 3 | https://www.codemag.com/Article/2005031/Use-the-MVVM-Design-Pattern-in-MVC-Core-Part-1?utm_source=2005031&utm_medium=articleviewer&utm_campaign=articlelinks | CC-MAIN-2020-40 | refinedweb | 6,633 | 54.63 |
What are possible reasons for empty service responses?
Hey, I have a problem regarding a service call. My code is very similar to the example given below. This example WORKS WELL, I am aware of that. However, in my very long code I have the problem that, in a similar situation, the response in printer.cpp is empty after the service call. The service call itself is executed and the result is calculated, but the response object after the call is empty. That is very strange, as I pass the request and response as references, so it should work on the given objects.
I would like to provide you with the relevant code, but I have not a single clue where the issue lies and thus would have to paste everything here, which would be way too much.
So, basically, I would like to know what possible reasons there are for a service response to be empty/uninitialized. Any help is appreciated.
calcu.cpp
#include "ros/ros.h"
#include "std_msgs/String.h"
#include "beginner_tutorials/AddTwoInts.h"

#include <sstream>

bool calcuCallback(beginner_tutorials::AddTwoInts::Request &req,
                   beginner_tutorials::AddTwoInts::Response &res)
{
    res.sum = req.a + req.b;
}

int main (int argc, char** argv)
{
    ROS_INFO("calcu running");
    ros::init (argc, argv, "calcu");
    ros::NodeHandle nh;

    // create a ROS service
    ros::ServiceServer service = nh.advertiseService("add_two_ints", &calcuCallback);

    // spin
    ros::spin();

    return (0);
}
printer.cpp
#include <ros/ros.h>
#include <opencv2/opencv.hpp>
#include "beginner_tutorials/AddTwoInts.h"
#include "beginner_tutorials/Num.h"

void numCallback(const beginner_tutorials::NumPtr msg)
{
    static beginner_tutorials::AddTwoInts::Request request;
    static beginner_tutorials::AddTwoInts::Response response;

    request.a = msg->num;
    request.b = 3;

    ros::service::call("add_two_ints", request, response);

    std::cout << "Response sum = " << response.sum << "\n"; // RESPONSE WOULD BE EMPTY/UNINITIALIZED
}

int main (int argc, char** argv)
{
    ROS_INFO("printer running");
    ros::init (argc, argv, "printer");
    ros::NodeHandle nh;

    if(!ros::service::waitForService("add_two_ints", 10000))
        return -1;

    // create ROS subscribers for local landmarks and dGPS pose estimates
    ros::Subscriber sub = nh.subscribe("inputs", 1, &numCallback);

    // spin
    ros::spin();

    return (0);
}
addTwoInts.srv
int64 a
int64 b
---
int64 sum
Num.msg
int64 num
Hard to tell. Sounds like a problem on the service-server side. Does calling the service from the command line also result in an empty response?
Alright, so when calling the service from the command line it prints ERROR: service [/add_two_ints] responded with an error:. Any idea how to proceed? I googled this error and found that maybe a ros::service::waitForService call is the reason, but no idea why. I added the code in my question.
Should add that I called the service by invoking rosservice call /add_two_ints "args" and that the service callback is doing its work until the last line, but leads to the error. Note that in my actual program no error is printed out, but the response is empty.
You can access the quosure components (its expression and its environment) with:

- get_expr() and get_env(). These getters also support other kinds of objects such as formulas.
- quo_get_expr() and quo_get_env(). These getters only work with quosures and throw an error with other types of input.
Test if an object is a quosure with is_quosure(). If you know an object is a quosure, use the quo_ prefixed predicates to check its contents: quo_is_missing(), quo_is_symbol(), etc.
is_quosure(x)
quo_is_missing(quo)
quo_is_symbol(quo, name = NULL)
quo_is_call(quo, name = NULL, n = NULL, ns = NULL)
quo_is_symbolic(quo)
quo_is_null(quo)
quo_get_expr(quo)
quo_get_env(quo)
quo_set_expr(quo, expr)
quo_set_env(quo, env)
is_quosures(x)
x: An object to test.

quo: A quosure to test.

name: The name of the symbol or function call. If NULL the name is not tested.

n: An optional number of arguments that the call should match.

ns: The namespace of the call. If NULL, the namespace doesn't participate in the pattern-matching. If an empty string "" and x is a namespaced call, is_call() returns FALSE. If any other string, is_call() checks that x is namespaced within ns.

expr: A new expression for the quosure.

env: A new environment for the quosure.
When missing arguments are captured as quosures, either through enquo() or quos(), they are returned as an empty quosure. These quosures contain the missing argument and typically have the empty environment as enclosure.
is_quosure() is stable.
quo_get_expr() and quo_get_env() are stable.
is_quosureish() is deprecated as of rlang 0.2.0. This function assumed that quosures are formulas, which is currently true but might not be in the future.
See also: quo() for creating quosures by quotation; as_quosure() and new_quosure() for constructing quosures manually.
# NOT RUN {
quo <- quo(my_quosure)
quo

# Access and set the components of a quosure:
quo_get_expr(quo)
quo_get_env(quo)
quo <- quo_set_expr(quo, quote(baz))
quo <- quo_set_env(quo, empty_env())
quo

# Test whether an object is a quosure:
is_quosure(quo)

# If it is a quosure, you can use the specialised type predicates
# to check what is inside it:
quo_is_symbol(quo)
quo_is_call(quo)
quo_is_null(quo)

# quo_is_missing() checks for a special kind of quosure, the one
# that contains the missing argument:
quo()
quo_is_missing(quo())

fn <- function(arg) enquo(arg)
fn()
quo_is_missing(fn())
# }
Introduction to JLabel in Java
JLabel is one of the many Java classes in the Java Swing package. The JLabel class from the Swing package can display text, a picture, or both. Similar to other classes in the Swing package, the label and the label's contents are positioned using horizontal and vertical alignments. The programmer can specify where the label's contents will be displayed in the label's display area by setting these alignments.
By default, label text is vertically centered in its display area, whereas a displayed image or picture is horizontally centered.

We can also easily specify the position of the text relative to the image. The text is normally displayed at the trailing end of the image, vertically aligned as discussed above.
It is the simplest of Swing's GUI components. The JLabel component from the Swing package is almost the same as a Label from the AWT package; the difference is that JLabel does not contain user-editable text, that is, its text is 'read-only'. JLabel is simply used to display a text message or an icon/image (or both) on the screen, but it cannot react to user events such as mouse focus or keyboard focus.
Example of JLabel in Java
We can simply use JLabel by creating and using an instance for this class. Following is an example screenshot after creating an object for JLabel class and printing our label, ‘A Basic Label’.
Here we created an object of JLabel class called ‘label’ with a label text ‘A Basic Label’ given with it. You can simply write it as:
JLabel label1 = new JLabel("A basic label.");

OR

JLabel label1;
label1 = new JLabel("A basic label.");
It will be displayed as:
Purpose of JLabel in Java
JLabel does not react to input events performed by the user like mouse focus or keyboard focus. It is simply a non-editable text or image or icon or both. JLabel is generally used along with those components that do not have their own ability to explain or demonstrate their purpose. The JLabel object created will provide our user, the text instructions or information on our GUI.
For example, a text area for entering a Name or Password, etc will require a label to tell the user about the text box.
Find this example explained below with screenshots.
Without the usage of JLabel, the text boxes will appear lost to a user, since nothing tells the user what is expected to be entered in the text field. Take the following example: we have added a text field without any label.

Note that you can simply add a text field using the following simple lines of code.
JTextField textEmail = new JTextField(20); //creating object for text field
textEmail.setBounds(50,50, 150,20); //setting the bounds for the text box
But if this text field is used in combination with a JLabel, it will appear as shown below and will make much more sense.
Below is another example where we used our previous text field along with which we have added a simple one-line string ‘Enter Email address’, suggesting our user that he needs to add his/her email address in the given text area.
As shown above, we can simply add a text field. Now we will add a label too as shown below:
textLabel = new JLabel("Enter e-mail address:");
JTextField textEmail = new JTextField(20);
textLabel.setBounds(20,50,150,20);
textEmail.setBounds(180,50, 150,20);
This was a simple example we created: a program displaying a text field and a label with it. We can also add an icon, and there is another commonly used JLabel method known as setIconTextGap. This method lets the programmer specify how many pixels apart the text and the image should be displayed.
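A small, self-contained sketch of this (the class name and values are just illustrative, and the icon is built in memory so the example does not depend on an image file on disk):

```java
import java.awt.image.BufferedImage;
import javax.swing.ImageIcon;
import javax.swing.JLabel;

public class IconGapDemo {
    public static void main(String[] args) {
        // Build a tiny in-memory image instead of loading an icon file
        BufferedImage img = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB);
        ImageIcon icon = new ImageIcon(img);

        // Label displaying both an icon and a text
        JLabel label = new JLabel("Search", icon, JLabel.LEFT);

        // Display the text and the image 12 pixels apart
        label.setIconTextGap(12);

        System.out.println(label.getIconTextGap()); // prints 12
    }
}
```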
Constructors of JLabel
Java JLabel class has several constructors that can be used to create our label with different characteristics.
- JLabel (): This constructor creates an empty label that is without any text. This instance of the class creates the label with no image and an empty string or text for its title. The text can be set at a later time.
- JLabel (Icon Image): This constructor creates a label with only a specified icon or image. The icon or image file can be used from your own file system.
- JLabel (String Text): This instance creates a label with specific text set while declaring the constructor.

Apart from the above-mentioned basic constructors, we also have the following that we can use.
- JLabel (Icon Image, int horizontalAlignment): This constructor instance is used to create a specified image or icon along with horizontal alignment.
- JLabel(String text, int horizontalAlignment): This constructor instance is used to create a specified text along with horizontal alignment.
- JLabel (String text, Icon icon, int horizontalAlignment): This constructor instance is used to create a specified image or icon, text as well as its alignment as ‘horizontal’.
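As a quick illustration of the constructors listed above (the label texts here are arbitrary):

```java
import javax.swing.JLabel;
import javax.swing.SwingConstants;

public class ConstructorDemo {
    public static void main(String[] args) {
        // JLabel(): empty label; the text can be set later
        JLabel empty = new JLabel();
        empty.setText("Set later");

        // JLabel(String text): label with an initial text
        JLabel text = new JLabel("User Name:");

        // JLabel(String text, int horizontalAlignment)
        JLabel centered = new JLabel("Password:", SwingConstants.CENTER);

        System.out.println(empty.getText());  // prints Set later
        System.out.println(text.getText());   // prints User Name:
        System.out.println(centered.getHorizontalAlignment() == SwingConstants.CENTER); // prints true
    }
}
```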
Examples of JLabel
Following is an example for creating a simple program of ‘Sign In Form’ with two labels added for two text fields displaying their nature. We also added a button with its own label displaying text as ’Sign In’.
Code:
import javax.swing.*;
class Java_JLabel_SignIn
{
public static void main(String args[])
{
//Adding our Frame
JFrame f= new JFrame("Label Demo");
//Creating objects for our Labels
JLabel label1,label2;
//Creating object for Sign In button
JButton Button1;
//Creating object for our text boxes
JTextField TextBox1,TextBox2;
//Creating our button
Button1=new JButton("Sign In");
//Creating our first Label
label1=new JLabel("User Name:");
//Creating our second label
label2=new JLabel("Password:");
//Creating our first text field
TextBox1 = new JTextField(20);
//Creating our second text field
TextBox2 = new JTextField(20);
//Setting bound for our Label1
label1.setBounds(50,50, 100,30);
//Setting bound for our Label2
label2.setBounds(50,100, 100,30);
//Setting bound for our TextBox1
TextBox1.setBounds(180,50, 150,20);
//Setting bound for our TextBox2
TextBox2.setBounds(180,100, 150,20);
//Setting bound for our Button1
Button1.setBounds(110,150,95,30);
//Adding our Label1,Label2,TextBox1,TextBox2,Button1 to our frame
f.add(label1);
f.add(label2);
f.add(Button1);
f.add(TextBox1);
f.add(TextBox2);
f.setSize(300,300);
f.setLayout(null);
f.setVisible(true);
}
}
Output:
You can see the code below; I have used Eclipse for writing the code.
When the above lines of code are executed, we get the following window as our output. Check it out:
Common Methods Used in JLabel
We have already discussed JLabel and how to create one as a text or an icon. Following is another list of methods that are generally used along with JLabel in our programs. These are the commonly used methods of JLabel class.
- getIcon (): This method is used to get the image displayed that our label had displayed.
- setIcon(Icon i): This method sets the icon to be displayed to the image i.
- getText(): This method returns our text which is displayed by our label.
- setText(String s): This method simply sets the text that will be displayed by our label to our string, s.
Above are a few methods used generally along with JLabel among others like setDisplayedMnemonic, getDisplayedMnemonic, etc.
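A short sketch exercising these getters and setters (the values are only illustrative):

```java
import java.awt.image.BufferedImage;
import javax.swing.Icon;
import javax.swing.ImageIcon;
import javax.swing.JLabel;

public class LabelMethodsDemo {
    public static void main(String[] args) {
        JLabel label = new JLabel("old text");

        // setText()/getText(): change and read the displayed text
        label.setText("new text");
        System.out.println(label.getText()); // prints new text

        // setIcon()/getIcon(): change and read the displayed image
        Icon icon = new ImageIcon(new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB));
        label.setIcon(icon);
        System.out.println(label.getIcon() == icon); // prints true
    }
}
```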
JLabel is a descendant of JComponent that is used to create simple text or icon labels. Labels are used to provide text instructions and other information, if required, on the graphical interface to make the user's experience easy.

We use the JLabel component from the Swing package whenever we need a graphical interface component that displays a message or an image.
Recommended Articles
This is a guide to JLabel in Java. here we discuss the Purpose, Constructors, Examples and Common Methods Used in JLabel. You may also look at the following article to learn more – | https://www.educba.com/jlabel-in-java/?source=leftnav | CC-MAIN-2021-21 | refinedweb | 1,334 | 61.56 |
LinkedIn connections are a very important thing for an IT professional, so we need to send connection requests to a lot of people who can be useful to us. But sometimes sending connection requests one at a time can be a little annoying and hectic. It would be nice to automate this work but How?
Python to the rescue!
In this article, we will learn how to automate sending LinkedIn connection requests using Python.
Modules required –
- Selenium –
Selenium does not come built-in with Python. To install Selenium, type the below command in the terminal.

pip install selenium

- Pyautogui –
Pyautogui also does not come built-in with Python. To install pyautogui, type the below command in the terminal.

pip install pyautogui
- Chrome web driver – Download the Chrome web driver that matches your installed browser version.
Below is the implementation.
First of all, let’s import all the important stuff.
# connect python with webbrowser-chrome
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

import pyautogui as pag
Now, let’s write the main function-
def main():
    # url of LinkedIn
    url = "https://www.linkedin.com/"

    # url of LinkedIn network page
    network_url = "https://www.linkedin.com/mynetwork/"

    # path to browser web driver
    driver = webdriver.Chrome('C:\\Program Files\\Web Driver\\chromedriver.exe')
    driver.get(url)

# Driver's code
if __name__ == "__main__":
    main()
We need to go to the authentication page and then we need to login. Here is the code-
def login():
    # Getting the login element
    username = driver.find_element_by_id("login-email")

    # Sending the keys for username
    username.send_keys("username")

    # Getting the password element
    password = driver.find_element_by_id("login-password")

    # Sending the keys for password
    password.send_keys("password")

    # Getting the tag for submit button
    driver.find_element_by_id("login-submit").click()
find_element_by_id is used to find the HTML tags 'login-email' and 'login-password'; then we send the keys to those elements. Next, we go to the network section:
def goto_network():
    driver.find_element_by_id("mynetwork-tab-icon").click()
Now, LinkedIn tries to prevent scraping, so finding the connection button can be a little tricky. You need to try hard and find the connection button position somehow (you can use some techniques like XPath).
Code for sending requests-
def send_requests():
    # Number of requests you want to send
    n = int(input("Number of requests: "))

    for i in range(0, n):
        # position (in px) of connection button
        # will be different for different user
        pag.click(880, 770)

    print("Done !")
To click on the required position, we use pyautogui, i.e. pag.click(x, y). So this is how we can automate sending LinkedIn connections.
The full script simply puts together the pieces shown above: main(), login(), goto_network(), and send_requests().
Output screen:
All the connections are sent.
Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.
ccefront [-batch] [-auto] [-roba] [-t ccetempl] [-first F] [-last F]
[-vmode ] [-brate N] [-brate_minmax N M]
[-pass <1-9>] [-bias <0-100>] [-q_factor <1-200>] [-q_char <0-100>]
[-gop M NM] [-progressive <0|1>] [-new_vaf <0|1>]
[-aspect <1:1|4:3|16:9|2.21:1>] [-seq_endcode <0|1>]
[-top_first <0|1>] [-zigzag <0|1>] [-nofilters] [-noaudio]
[-size_mb MB] [-total_mb MB] [-extra_br N] [-t_avs templ.avs] [-t_ecl templ.ecl]
[-out_ecl file.ecl] [-out_avsdir dir] [-qmat matrix.txt] [-chap chaps.txt]
[-@resize W H] [-@crop L T W H] [-@addborders L T R B]
[-roba_reuse_samples] [-roba_szpass_mode N]
[-roba_sample_percent P] [-roba_adjust_percent P]
[-nocancel] [-minimize] [-shutdown|-hibernate] [-priority <0-5>]
[-un-{option}] [[-ecl] file.ecl] [videofile(s)] ...
-auto Global opt. CCE command line argument. Auto start CCE.
-batch Global opt. CCE command line argument. Auto start and close CCE
after encoding.
-roba Global opt. Enable RoBa OPV mode with predictable target size. Implies
-batch mode. An analysis is done to determine the Q factor that will create
a bitrate closest possible to the one specified by vbr_brate_avg in the
.ecl file. Note that if the current video mode in the .ecl file is OPV, then
opv_brate_min and opv_brate_max are used, otherwise vbr_brate_min and
vbr_brate_max are used. You may edit the ccefront.ini file to customize
the default values for roba, and override them with the -roba_... command
line arguments. If the path to rejig_exe is set, ReJig may be used in a
subsequent "sizing" pass (when the mpv file is oversized).
Note that the -roba and the other -roba_... flags are global, and applies
for all the input files, also if they are specified at the end of the
command line. A roba_log.txt file will be created for each processed file
and placed in the tmp subdirectory of the ccefront application data dir.
-roba_szpass_mode N In roba-mode, an optional "sizing" pass can be performed, either to make
sure that the result hits the target size accurately, or to improve the
encoding quality (not all agree that it will). The sizing pass can either
be done by encoding (CCE), or transcoding (oversized only, requires ReJig).
Mode 0-5 can be used:
0 - no sizing pass
1 - always encode
2 - transcode if oversize (oversize slack is default 0%)
3 - encode if oversize
4 - encode if undersize or oversize or Q > limit
(undersize slack is default 2.5%, Q limit is default 40)
5 - encode if undersize or Q > limit, transcode if oversize
-roba_szpass_limits L H Q
Set low, high slack values for under/oversizing, and Q limit.
-roba_... Set various other roba values. Defaults in ccefront.ini.
-t template CCE command line argument. Select a CCE template. This option cancels
all the previously specified ecl-key-modifier options (e.g. -bias).
-first F CCE command line argument. Set first frame to encode.
-last F CCE command line argument. Set last frame to encode.
-vmode mode Unified video mode setting. mpcbr = multipass cbr and vbr1 = 1st pass VBR
only works with newer CCE versions.
-brate N Set both CBR and average VBR bitrate.
-brate_minmax N M Set Min and max bitrates. (both VBR and OPV).
-pass N Set number of VBR (and MP CBR) passes after initial vaf pass.
-bias N Set VBR bias.
-q_factor <1-200> OPV Q factor.
-q_char <0-100> Set quantizer characteristics (quality priority). Normalized to range 1-100
 for all versions of CCE.
-gop M NM Set gop values. gop_hdr is also set as M * NM in the .ecl file.
-progressive B Specify whether source is progressive or interlaced.
-new_vaf B Set the create_new_vaf key. Will work with CCE SP 2.50 as well, although not
originally supported.
-aspect R Set aspect ratio. 2.21:1 is only supported with CCE SP 2.70
-nofilters Disable all built-in filters (horizontal and vertical) and dithering.
-noaudio Disable audio output.
-size_mb N Another way to set bitrate. Specify the target size in MB of each encoded file.
The bitrate is computed and set in the created ECL file.
-total_mb N Global option. Set total target size of all files in MB. The bitrate is
computed and set in the created ECL file. Example:
> ccefront -brate 4000 f1.avi -brate 2000 f2.avi -total_mb 800
This will adjust the bitrates for both f1.avi and f2.avi to output 800 MB in
total, but f1.avi will still retain the double bitrate of f2.avi
-extra_br N Specify extra bitrate (typically audio) that is subtracted from the bitrate
you specify with either -brate, or indirectly by -size_mb or -total_mb.
-t_avs t.avs Create an AVS file for following input video files, from an AVS "template" file.
If the specfied file is not found, the templates subfolder is searched.
Substitution parameters that may be used in the template:
@source_file, @resize_w, @resize_h, @crop_l, @crop_t, @crop_w, @crop_h,
@addborders_l, @addborders_t, @addborders_r, @addborders_b
-@resize W H Set values for the avs template substitution variables. (-t_avs)
-@crop L T W H Set values for the avs template substitution variables. (-t_avs)
-@addborders L T R B Set values for the avs template substitution variables. (-t_avs)
-t_ecl t.ecl Use a "template" ECL file. The template file should be a stripped down .ecl file.
Note that this option cancels all the previously specified ecl-key-modifier
options (e.g. -bias). Only the [item] section is used. Basically it should have
removed all keys that are automatically detected by CCE during loading, such as
frame rate (frame_rate_idx). It should be similar to the -t option, but it uses
an external template file instead of an internal template (stored in the registry).
The following keys are "unified" for all versions:
vmode=N (N: 0=OPV, 1=CBR, 2=VBR, 3=MP CBR, 4=VBR 1st,
uses correct key and values for various CCE's)
quality_prec=N (N: 0-100. Scales down to 0-64 for CCE 2.66/2.67,
uses key "q_char_f" for CCE 2.70)
In addition, the template file may contain a version-specific section, e.g.
[item-2.50], [item-2.66] [item-2.67], or [item-2.70]. Note that no "unification"
of the keys mention above will take place in these sections.
A matrix (qmat) may not be part of a template as of now. Use the -qmat option to
import a matrix instead.
-out_ecl file Global opt. Specify path/name of generated ECL file. Default output is
"created.ecl" in the tool's home folder.
-out_avsdir dir Global opt. Specify folder where the generated .avs files (by using -t_avs)
should be placed. Default is the same as the input folder.
-qmat file.txt Import a quantization matrix from a file. If the specfied file is not found,
the q_matrix subfolder is searched. An example file is located in this folder.
-chap ch.txt Import chapter points from a file. Simply appended to the [file] section in
the ECL file.
-nocancel Global opt. Hide the Cancel button in CCE encode window. Default enabled in
ccefront.ini file. Hitting the Enter key in CCE SP 2.50, when the encode window
is active will *not* Cancel encoding (as it does when using 2.50 alone).
-minimize Global opt. Minmize CCE SP 2.66/2.67 with EclCCE / ReJig
-shutdown Global opt. Power down computer after encoding (-auto/-batch/-roba mode).
-hibernate Global opt. Hibernate computer after encoding (-auto/-batch/-roba mode).
-priority N Global opt. Set CPU process priority. 0=idle, 2=normal, 5=realtime. Default idle,
but can be modified in the ccefront.ini file.
-un-{option} Undo earlier command line settings that modifies .ecl file settings. Example:
> ccefront -qmat mymatrix.txt f1.avi -un-qmat f2.avi ;f2.avi does not use matrix
As a special case, -un-all will unset all ecl-modifier arguments.
[-ecl] file.ecl CCE command line argument. Input ECL file. You may skip the -ecl option if
the file has an .ecl extension
videofile(s) Any video file that is accepted by CCE, including .avs files. Wildcards * and ?
may be used for specifying a group of files. | http://forum.doom9.org/showthread.php?t=92157&highlight=ccefront | CC-MAIN-2019-18 | refinedweb | 1,372 | 70.29 |
Ditch Photoshop: making on-the-fly edits and enhancements to images using Python.
Last week the Refresh DC community had the opportunity to listen to some of the supremely talented folks at nclud discuss their latest redesign, from concept to technical implementation. They pushed a lot of boundaries developing a highly interactive front-end, and in addition to that creative work they needed to develop some specialized functionality in WordPress to support their design without tedious content updates.
Some of these things included making edits to uploaded images on upload so they could be reused in multiple ways and to reduce the need for funky manual preprocessing. Things like resizing, blurring, and even recolorizing.
I was curious what steps would be required to implement similar functionality in Python. So with my REPL at the ready and the estimable Python Imaging Library (PIL) on my path I wrote a few lines to get us started.
A bed of tulips at night. Our base image.
You can open an image from a file via a file path string or a file object. For simplicty, we’ll just a file path.
import Image flowers = Image.open("flowers.jpg")
Resizing images with PIL is straightforward. The resize method takes a size argument (a tuple of width and height for the new image) and an optional filter argument which dictates how PIL will scale the image.
The default filter is to use nearest neighbor scaling. Here’s the code and the result:
nearest = flowers.resize((flowers.size[0] / 2, flowers.size[1] / 2)).save( "flowers-half-nearest.jpg")
We can get an improvement using the
ANTIALIAS scaling filter:
antialiased = flowers.resize((flowers.size[0]/2, flowers.size[1]/2), Image.ANTIALIAS).save("flowers-half-antialias.jpg")
Resizes don’t have to make the image smaller or be done proportionally of course.
squished = flowers.resize((flowers.size[0], flowers.size[1] / 2), Image.ANTIALIAS).save("flowers-stretch-antialias.jpg")
The next problem is recolorizing an image. Let’s say you want to upload some images and get monochromatic copies for some purpose of your choosing.
The first way I thought about doing this was to strip all color from the base image and then use a single color layer with an alpha value. Kind of like holding a colored transparency sheet over a black and white photo.
This 3-line function creates a single color layer and returns a composite image by blending that single color layer with the base layer.
To return an image with a dark blue color scheme, we’ll specify a dark blue using its hex code and set an alpha value:
overlay = image_overlay(im, color="#0000CC", alpha=0.3)
And we’ll get this result. Note that the image quality will partially depend on the file type you choose for export. This file is in PNG format; the result was noticeably superior to the resulting JPEG.
The previous method works, but it feels a bit too hackish. What if we could more direcly change the color balance of the source image, instead of mucking about with layers?
PIL ships with the
ImageOps module, from which the colorize function
will prove helpful. This function replaces black and white pixel values
with the colors we specify. The base image must be a grayscale image, in
“L” mode, so we’ll make it grayscale first.
Picking a couple values of blue with our function like so:
recolorized = image_recolorize(im, black="#000066", white="#9999CC")
Results in this image.
Obviously we need to do some more work to figure out which colors to use in both examples to get a closer match. However it doesn’t appear like there’s much lost in the way of brightness or contrast by using the overlay method, so if you want a monochromatic image, the overlay method is probably the best way to get the desired result without too much color testing.
Now what if we want to blur an image? This might be helpful for
indicating perspective or beer goggles. For this we’ll use PIL’s
ImageFilter module.
import ImageFilter blurred = flowers.filter(ImageFilter.BLUR)
And here’s our basic blurred image:
Now with a Gaussian blur:
blurred = flowers.filter(ImageFilter.GaussianBlur)
In PIL version 1.1.5 the Gaussian blur filter has a hard coded radius value, but there’s an easy work around:
Now we can set an aggressive blur radius:
blurred = flowers.filter(MyGaussianBlur(radius=10))
Now we have the world through beer googles.
There’s a lot more you can do with PIL. For most CMS needs it’s the resizing module that will be most useful, and there are various CMS and framework specific libraries that provide great intermediary interfaces (e.g. easy_thumbnails for Django).
PIL’s
thumbnail function is very similiar to the resize function, but
instead of returning an image based on the specified dimensions, it uses
those dimensions as a limit and returns an image with the same aspect
ratio as the original.
In this code, the resultant image
thumb will have the same aspect
ratio as the source image
flowers and it’s longest dimension will be
no more than 100 pixels long. As with the
resize function we can
accept the default scaling filter or specify one.
thumb = flowers.thumbnail((100, 100), Image.ANTIALIAS)
One quick tip for testing your code or parameters in the interpreter:
you can use the
show method to view the image using your systems
default image viewer (e.g. Preview on Mac OS).
myimage.show()
And there’s no need to stop at blurring images. The
ImageFilter module
has additional filters, and as you can see from the previous example,
they can be extended as you need. If you need to do more advanced image
editing you can always enlist numpy for its efficient matrix handling.
You can find detailed documentation in the Python Imaging Library
Handbook.
Learn from more articles like this how to make the most out of your existing Django site. | https://wellfire.co/learn/python-image-enhancements/ | CC-MAIN-2019-18 | refinedweb | 1,002 | 64.2 |
Mar 12, 2008 03:55 AM|isjf|LINK
hi there
I saw these code from a C# project , just wondering why there using Microsoft.visualBasic.ControlChars.CrLF
is it a VB controls or..something else? ( I'm not sure how to call it)
and what is 【Microsoft.visualBasic.ControlChars.CrLF】function?
thank you
textBox1.Text += strMsg + Microsoft.VisualBasic.ControlChars.CrLf; textBox1.SelectionStart = textBox1.Text.Length - 1;
Mar 12, 2008 04:08 AM|TonyDong|LINK
In C#, we use \r\n for new line return, if you want to use new line in html, you can use server.htmldeode(strMsg+"<br />");
Star
12920 Points
Mar 12, 2008 04:10 AM|Careed|LINK
To accommodate some of the VB functionality prior to .NET, the designers and developers of .NET has presented all users (not just VB users) with the ability to use this functionality. Thus, there exists a namespace called Microsoft.VisualBasic that contains this functionality that includes special variables and methods that are similar to the same in pre-.NET VB.
Specifically, CrLf is "Carriage Return Line Feed". In other words, this starts a new line within a string.
Look at the MSDN documentation and look at the various Microsoft.VisualBasic namespaces that are there. This should give you more information about this field and other items of interest.
All-Star
20773 Points
Mar 12, 2008 04:15 AM|sreejukg|LINK
user \r\n
\r - carreage return
\n new line
Mar 12, 2008 05:51 AM|mn.shelly|LINK
Hi,
Check this one 'System.Environment.NewLine'
Mar 12, 2008 06:08 AM|isjf|LINK
thank you but... why metion 【System.Environment.NewLine】?
is the same function like /r/n?
what's the difference of them?
also ... any limit to use these functions?
can I use System.Environment.NewLine in WinForm?
All-Star
20773 Points
Mar 12, 2008 10:31 AM|sreejukg|LINK
I dont think so there is any difference between these 2. if you are using both methods, the out put will be the same, it will add a carreage return and a line feed to the specified position
Regards
Sreeju
Star
12920 Points
Mar 12, 2008 11:32 AM|Careed|LINK
All of these are just different ways to perform the process to create a new line within a string. They perform the same function.
The purpose of the different methods is how you approach the "problem" of creating a new line in a text string. If I'm a long-time VB developer, then I would be more inclined to use vbCrLf (which is a string field in the Microsoft.VisualBasic.Constants class) ; if I'm a C/C++/C# programmer, then "\r\n" would be more familiar to me; if I'm focusing on using more .NET in my code so I'm not too language-specific, I would more than likely use System.Environment.NewLine. So, it's really all of matter of perspective and perception from the programmer's point of view.
Then again, why would some use Microsoft.VisualBasic.ControlChars.CrLf in a C# program? Maybe they're confused about all that .NET provides....[:D]
Mar 12, 2008 05:10 PM|TonyDong|LINK
They are the same, and used in different language.They are the same, and used in different language.
9 replies
Last post Mar 12, 2008 05:10 PM by TonyDong | http://forums.asp.net/t/1232284.aspx | CC-MAIN-2015-22 | refinedweb | 559 | 67.55 |
Recently I have seen some questions about reading a text file content on OTN forum, though it is very simple task still some find it confusing. That’s why here I am showing how to read text file in java using FileInputStream
This is a sample text file
and here goes the java code to read text file using FileInputStream
package client; import java.io.File; import java.io.FileInputStream; import java.io.IOException; public class ReadFileJava { public ReadFileJava() { super(); } public static void main(String[] args) { // Absolute path of file that you want to read File f = new File("D:/sampleFile.txt"); FileInputStream fis = null; try { fis = new FileInputStream(f); int data; //Iterate over file content while ((data = fis.read()) != -1) { System.out.print((char) data); } } catch (IOException ioe) { ioe.printStackTrace(); } finally { try { if (fis != null) fis.close(); } catch (IOException ioe) { ioe.printStackTrace(); } } } }
Run this code
And output is
Cheers 🙂 Happy Learning
An Oracle ACE, Blogger, Reviewer, Technical Lead working on Oracle ADF
2 thoughts on “Read text file in Java using FileInputStream”
Hello Ashish,
Thanks for this wonderful post. I am using jdev 12.1.3 and have a similar requirement in adf to read file present in unix server path like “u03/prnqoc01/appl/common/attachment”
Will providing this pathname work in reading the file?
Regards,
Abhik Dey
If you are executing java code on same machine then i think it should work. | http://www.awasthiashish.com/2018/11/read-text-file-in-java-using-fileinputstream.html | CC-MAIN-2019-26 | refinedweb | 235 | 63.49 |
JS: JSON
JSON is the value of the property key
"JSON" of the global object.
[JS: the Global Object]
console.log( window.JSON === JSON ); // true
Type
Type of
JSON is object.
[see JS: Value Types]
console.log ( typeof JSON === "object" ); // true console.log ( Object.prototype.toString.call( JSON ) === "[object JSON]" ) // true
Parent
Parent of
JSON is
Object.prototype.
[see JS: Prototype and Inheritance]
console.log ( Reflect.getPrototypeOf ( JSON ) === Object.prototype ); // true
Purpose
Purpose of
JSON is as a namespace to hold functions, for working with the JSON data interchange format.
JSON is a static object.
JSON is not a function. You cannot call it, nor with
new.
What's JSON Data Interchange Format?
JSON is a data interchange format.
JSON is a more strict syntax of nested JavaScript object or array. Used to pass data to the web browser, or exchange data with other language, API.
For example, here's a JSON:
{ }
More about JSON syntax can be found here.
JavaScript Syntax vs JSON Syntax
JSON syntax is more strict than JavaScript syntax. For example:
{'h':2}→ bad. JSON does not allow single quote.
{"h":2,}→ bad. JSON does not allow extra comma at end.
{h:2}→ bad. JSON does not allow unquoted property key.
[undefined]→ bad. JSON value does not allow
undefined. (but
nullis ok)
Properties
Like it? Help me by telling your friends. Or, Put $5 at patreon.
Or, Buy JavaScript in Depth
Or, Buy JavaScript in Depth | http://xahlee.info/js/js_json_methods.html | CC-MAIN-2019-35 | refinedweb | 240 | 71.51 |
I have updated to latest Django version 1.0.2 after uninstalling my old Django version.But now when I run django-admin.py I get the following error. How can I resolve this?
Traceback (most recent call last): File "C:\Python25\Lib\site-packages\django\bin\django-admin.py", line 2, in <module> from django.core import management ImportError: No module named django.core
You must make sure that django is in your PYTHONPATH.
To test, just do a
import django from a python shell. There should be no output:
ActivePython 2.5.1.1 (ActiveState Software Inc.) based on Python 2.5.1 (r251:54863, May 1 2007, 17:47:05) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import django >>>
If you installed django via
setuptools (
easy_install, or with the
setup.py included with django), then check in your
site-packages if the
.pth file (
easy-install.pth,
django.pth, ...) point to the correct folder.
HIH.
I have the same problem on Windows and it seems I've found the problem. I have both 2.7 and 3.x installed. It seems it has something to do with the associate program of .py:
In commandline type:
assoc .py
and the result is:
.py=Python.File
which means .py is associated with Python.File
then I tried this:
ftype Python.File
I got:
Python.File="C:\Python32\python.exe" "%1" %*
which means in commandline .py is associated with my Python 3.2 installation -- and that's why I can't just type "django-admin.py blah blah" to use django.
ALL you need to do is change the association:
ftype Python.File="C:\Python27\python.exe" "%1" %*
then everythong's okay! | https://pythonpedia.com/en/knowledge-base/312549/no-module-named-django-core | CC-MAIN-2020-16 | refinedweb | 292 | 73.03 |
Blogs by Author & Date
A situation arises where you have a system that includes a PLC, an HMI, and some peripheral devices. The HMI displays some information to a user and allows for some basic control and the PLC manages data collection and communication through an RS485 network using a Modbus RTU protocol.
Easy, right? Many PLCs already support RS485 and Modbus communication and only require a few functions blocks.
There is a twist, however. The devices with which the PLC is communicating are Arduinos. Lots of them.
So, how can you program an Arduino to act as a Modbus device?
Turns out the answer is easy.
In this post, I'll talk about adding an Arduino to an RS485 network and programming it to function as a Modbus slave device.
First, let’s talk about Arduino and RS485.
Although Arduino supports serial communication through its built-in UART (Universally Asynchronous Receiver/Transmitter), it uses TTL (Transistor-Transistor Logic), not RS485. Both signaling types use serial communication. Serial communication means data is sent one bit at a time at a specified BAUD rate (bits per second).
However, TTL is single-ended, whereas RS485 relies on a differential signal (what's the difference?). This is far more useful in an industrial setting where signals can be influenced by electrical noise, and devices can be separated dozens or hundreds of meters.
To allow an Arduino to speak over an RS485 network, an additional device must be used to convert TTL to RS485. There are several devices on the market that do this, but I used this RS485 Transceiver Breakout Board.
This picture shows my hardware setup.
Because I’m using an Arduino Mega, 4 serial ports available. If you are unsure how many ports you have, the Arduino website provides a description of serial ports on all board models.
Did you run out of Serial ports? Don’t forget about Software Serial!
Great! Your Arduino is now networked to the PLC (or another master device) and other non-master devices.
Next is software. I’ll be using the Arduino IDE for software development.
Although Modbus was developed in 1979, it has stood the test of time and is proven to still a reliable industrial communication protocol. You may choose to write your own C++ library implementation of Modbus, but I opted to use a prewritten library.
A quick Google search yields several options. I chose this one.
The first step is to include the ModbusRtu library in your sketch.
The Modbus constructor takes three parameters:
0 = Serial
1 = Serial1
2 = Serial2
3 = Serial3
4 = SoftwareSerial
Below is example code that shows how we can use our newly created RS485 network and Modbus library using a very simple (and probably unrealistic) scenario.
In this scenario, our Arduino is hooked up to a toggle switch and an LED. We have an operator who wants to know the state of the switch (on or off) and the ability to remotely toggle the LED. What should we do?
First, look at how the Modbus library function that allows for reading and writing:
Modbus::poll( uint16_t *regs, uint8_t u8size )
*regs is a register table used for communication exchange, and u8size is the size of the register table.
This method checks if there is an incoming query in its serial buffer. If there is, the library will validate the message (check the device address, data length, and CRC) and subsequently perform the correct function.
In other words, to use this function, we must pass it an unsigned 16-bit integer array and its length. The array will contain the data that the master device is reading or writing over.
For this simple example, I created an array called modbus_array that contains two elements.
Now, let’s look at this from the perspective of the master device.
Toggling the LED
// Includes
#include <ModbusRtu.h>
// Defines
#define LED_PIN 5
#define SWITCH_PIN 6
// Variables
Modbus modbus_port;
// modbus_array = { LED control, toggle switch state }
uint16_t modbus_array[] = {0, 0};
void setup()
{
// Initiate a MODBUS object
modbus_port = Modbus(1, 1, 7);
// Start the serial port with a specified baud rate
modbus_port.begin(9600);
pinMode(LED_PIN, OUTPUT);
pinMode(SWITCH_PIN, INPUT);
}
void loop()
{
// Allow the master to access the registry table
modbus_port.poll(modbus_array, sizeof(modbus_array)/sizeof(modbus_array[0]));
// If the master device sets the first element in the array to 0, turn off the LED.
// Any other number will make the LED go high.
if (modbus_array[0] == 0) {
digitalWrite(LED_PIN, LOW);
} else {
digitalWrite(LED_PIN, HIGH);
}
// Set the second element in the array to the state of the switch
modbus_array[1] = digitalRead(SWITCH_PIN);
}
Arduinos are usually confined to hobbyist and educational markets because of their low barrier to entry, but this also makes them ideal candidates for prototyping, even in the industrial automation space.
Do you have a low fidelity project that could use some low-cost microcontrollers? Check out our Embedded Development and Programming service area to find out if DMC could be good fit for you!
Learn more about DMC's embedded solutions.
There are currently no comments, be the first to post one.
Name (required)
Notify me of followup comments via e-mail | https://www.dmcinfo.com/latest-thinking/blog/id/9468/categoryid/10/turning-an-arduino-into-a-modbus-device | CC-MAIN-2018-34 | refinedweb | 859 | 55.34 |
A role account is an e-mail address which serves a particular function, not an individual person, for example "sales@" or "info@". Postmaster@domain (RFC2821) and abuse@domain (RFC2142) are the two role accounts which every ISP, webhost, mail service and DNS host must have in order to promptly identify spam and abuse related problems on their network. Networks which do not tolerate spammers are careful to look at their abuse mail every day and take effective actions to stop any problems which arise.
A Feedback Loop (FBL) is an automated stream of spam reports sent by prior agreement between individual receiving and sending networks, often based on a "This Is Spam" button in the user interface. FBLs are intended to help streamline and automate the spam reporting process with specific machine-readable parts. A standard Abuse Reporting Format (ARF, RFC5965) is specified and implemented for FBLs. ARF follows existing RFC2045 MIME standards for e-mail (and the earlier RFC1341). More about ARF is here and here.
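Receivers usually script this triage. As a rough sketch (the sample report and helper below are invented for illustration), the machine-readable fields of an ARF report can be extracted with Python's standard-library MIME parser:

```python
# Sketch: pull the machine-readable fields out of an ARF (RFC 5965)
# feedback report with only Python's standard library.
# The sample report below is fabricated for illustration.
from email import message_from_string

SAMPLE_ARF = """\
From: fbl@isp.example
To: abuse@hoster.example
Subject: FW: Earn money fast
MIME-Version: 1.0
Content-Type: multipart/report; report-type=feedback-report;
 boundary="report-boundary"

--report-boundary
Content-Type: text/plain; charset=US-ASCII

This is an email abuse report for a message received
from IP 192.0.2.1.
--report-boundary
Content-Type: message/feedback-report

Feedback-Type: abuse
User-Agent: SomeFBL/1.0
Version: 1
Source-IP: 192.0.2.1
--report-boundary
Content-Type: message/rfc822

From: spammer@spam.example
Subject: Earn money fast

(original spam body)
--report-boundary--
"""

def arf_fields(raw):
    """Return (Feedback-Type, Source-IP) from an ARF report, or (None, None)."""
    msg = message_from_string(raw)
    for part in msg.walk():
        if part.get_content_type() == "message/feedback-report":
            payload = part.get_payload()
            # The parser may hand back a nested Message in a list, or a
            # raw string, depending on how the part was framed.
            report = (payload[0] if isinstance(payload, list)
                      else message_from_string(payload))
            return report["Feedback-Type"], report["Source-IP"]
    return None, None

print(arf_fields(SAMPLE_ARF))  # ('abuse', '192.0.2.1')
```

The `message/feedback-report` part carries the Feedback-Type, Source-IP and related fields; the original spam rides along as a `message/rfc822` attachment.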
DMARC records in your domain's DNS can help you detect spoofing or misuse of your domain used in spam or phishing. Some malware uses the domain of its local host when it spreads in spam, and DMARC could help you spot that infection on your server and clean it up.
A single spam report could be a fluke or someone reporting mail they actually signed up for, or it could represent 10,000 or more spam recipients. For wide-scale, pure-spam mailshots, the reporting rate is often even less than one in 10,000 due to filtering and "LART fatigue" of many who used to report spam. In general, hand-crafted reports are more likely to be actual spam than reports from automated "This Is Spam" (TIS) buttons. Even TIS reports average in excess of 80% spam, though. Evaluate reports as you will; ignore them at the risk of your network's reputation and e-mail deliverability. (Ignoring them includes setting barriers to accepting them, such as imposing requirements like ARF, which is not supported by end-user mail clients, or DKIM or SPF, or content filtering your abuse mailbox.)
Here are some tools which can help direct spam reports to your proper role account. WARNING: FBLs can produce very high volumes of e-mail. Use a specifically designated e-mail account and allow plenty of disk space and server cycles to accept the full stream of reports. Some networks use a separate server to accept the flow (for example, fbl@abuse.domain.tld). Do not apply content-based spam filters to FBL or abuse@ accounts or you will discard the very messages you need to keep your network clean.
1. The Network Abuse Clearinghouse
Not a reporting service or FBL per se, but a database of correct addresses for spam reports based on domain. Registered users can send spam reports via its mail server, or anyone can query it to find reporting addresses for a particular domain. Since reports are not automated, volumes tend to be lower and reports hand-crafted. This may identify some of the more stubborn spam issues on a network, for example HTTP redirectors or DNS. See the Abuse.net website for instructions on using and updating its services.
1b. Abusix' Abuse Contact DB provides the Abuse point-of-contact (POC) from network whois records with an easy DNS query in inverse IP format, just like a DNSBL query. Check your network's IP-whois records to be sure you have registered an Abuse POC.
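The lookup is just a TXT query against an inverse-IP name. A sketch of building the query name follows; the zone name shown is Abusix's published lookup zone at the time of writing, so verify it against their documentation before relying on it:

```python
# Sketch: build the DNS query name for the Abusix Abuse Contact DB.
# The abuse contact for an IPv4 address is published as a TXT record at
# <reversed-octets>.abuse-contacts.abusix.zone (zone name per Abusix
# documentation at the time of writing -- check before relying on it).

def abuse_contact_qname(ipv4, zone="abuse-contacts.abusix.zone"):
    """Return the inverse-IP query name for a dotted-quad IPv4 address."""
    octets = ipv4.strip().split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255
                                   for o in octets):
        raise ValueError("not a dotted-quad IPv4 address: %r" % ipv4)
    return ".".join(reversed(octets)) + "." + zone

print(abuse_contact_qname("192.0.2.15"))
# 15.2.0.192.abuse-contacts.abusix.zone
```

Issue the resulting name as a TXT query with your resolver (for example `dig +short TXT <name>`) to retrieve the abuse address.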
2. SpamCop
SpamCop reports the spam source, SMTP relay and spamvertised URLs in the message body based on IP. Registered users can use SpamCop to parse and report spam. A SpamCop feed might be high volume depending on your network size and output. The instructions on this page explain how anyone can receive SpamCop daily or hourly summaries about spam problems in specified IP ranges. For more information see the SpamCop FAQ for abuse-desks and administrators.
3. AOL Feedback Loop:
When AOL users click the "This Is Spam" button in their e-mail client, this system generates a "SCOMP" report to you. While it can be high-volume, it offers excellent feedback on proxies, virus infections and spammers on your network. You need to sign up for this free service with AOL. (And same with the MSN, Yahoo! and Outblaze systems, too.) AOL's whitelist info is here.
4. Outblaze (mail.com): Request a feedback loop by contacting postmaster@outblaze.com. ISPs and COI bulk mailers only!
5. Microsoft (msn.com, live.com, hotmail.com) has Feedback Loops and other information for bulk mailers at. Their Smart Network Data Services includes delivery numbers and any Spamhaus Zen listings within your IP ranges at SNDS, "Junk Mail Reporting Program" FBL at JMRPP and other services. Senders may also be interested in this PDF to help their delivery.
6. United Online Trusted List and Feedback Loop (Netzero and Juno):
7. Road Runner FBL:
Road Runner's postmaster page:
8. Yahoo! FBL:
Deliverability information:
9. "USA.net offers a feedback loop service, operated by Return Path, free of charge, to parties sending large amounts of mail to USA.net members. The feedback loop (FBL) will forward any mail reported as spam originating from the associated IP addresses back to the listed email address. We highly recommend the use of a dedicated e-mail address for this purpose." (Spamhaus: good advice for any FBL!) And, of course,.
10. Comcast Feedback Loop:
Postmaster pages for more info:
11. Earthlink Feedback Loop: Write to fblrequest@abuse.earthlink.net with your IP range, domains, your network's contact information including name, contact e-mail and phone, and the e-mail to which the FBL will be sent. ISPs only. [May 2009: reported to be unresponsive; status unknown.]
12. Excite Feedback Loop:
13. Cox.net Feedback Loop: and Postmaster pages.
14. Rackspace Feedback Loop: (formerly MailTrust)
15. Tucows (OpenSRS) Feedback Loop:
16. Synacor Feedback Loop: Support is at tss@synacor.com.
17. FastMail Feedback Loop:.
18. Mail.Ru Feedback Loop:.
19. Terra Feedback Loop:.
20. Zoho Feedback Loop:.
Spamhaus is happy to update this information at the request of the FBL provider or other authority. Other sites have additional information, and there is a list on Wikipedia. Also, private FBLs can be established between consenting parties to acquire additional evidence of spam, either free or for a price; for example see Abusix.com. Ask your network friends and geek buddies to filter spam from your IP ranges into a forwarding account for you, in exchange for you feeding them data about their ranges.
Tip: Do not put your FBL or abuse@ account behind content-based spam filters. What's the point of filtering away the evidence you need to clean up your network?
Finally, be sure that your network has an enforcible Acceptable Use Policy (AUP) as a part of your contract with each and every customer! More examples are here. Seek legal counsel to ensure that your AUP will cover you in the event of account termination for abuse.
Provide proper role accounts in RIR whois records, including abuse role accounts. Be sure such accounts are read frequently by admins with the authority to fix the problems identified by reports to that address.
Properly identify subnet clients. For example, ARIN requires public identification records for /29 (8 IPs) and larger ranges (section 4.2.3.7.2.).
This FAQ also has information about proper rDNS as well as the role accounts to go with those hostnames.
Procmail can be extremely useful for sorting an inbound mail stream. For example, you could flag and sort any mail which had any of your IPs in it. You could bundle it into /24 chunks (or whatever works for you) and triage based on relative volumes. Spammers can even turn themselves in that way.
Grepcidr is another handy tool for finding IPs or IP ranges from within a file. You could "$ grepcidr IP.Range spam.file" to pull out all the IPs in that range from your file of spam and abuse reports, for example.
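The /24 bundling and triage described above can also be sketched with Python's standard library (the reported IPs below are invented):

```python
# Sketch: tally spam-report source IPs by /24 so the noisiest ranges
# float to the top of the triage queue. The sample IPs are invented;
# in practice you would extract them from your abuse/FBL mail stream.
import ipaddress
from collections import Counter

reported_ips = [
    "198.51.100.7", "198.51.100.7", "198.51.100.23",
    "203.0.113.4", "198.51.100.200",
]

per_slash24 = Counter(
    ipaddress.ip_network(ip + "/24", strict=False) for ip in reported_ips
)

# Busiest ranges first -- these get looked at today.
for net, count in per_slash24.most_common():
    print(net, count)
# 198.51.100.0/24 4
# 203.0.113.0/24 1
```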
Filtering an abuse@ mailbox can be tricky because much of the mail which it should receive looks like spam. Filter out spam reports and you risk not identifying a problem on your network. Spamresource.com offers some thoughts on how to deal with all the spam aimed at the abuse box.
The best way to stop the massive amounts of dynamic IP spam is for your network to filter port 25 on dynamic ranges, only allowing port 25 access to your smarthosts (or not even that, if your smarthosts use AUTH). The Messaging Anti-Abuse Working Group (MAAWG) has its recommendations on port 25 blocking here. If you run a NAT gateway, see
But even if you cannot block port 25, most other networks would rather not accept any mail from dynamic ranges. To assist them with that, it is polite for your network to list your dynamic, end-user, IP ranges in the Spamhaus PBL.
Besides being a good Internet neighbor and helping other networks avoid spam from your users, listing your dynamic addresses in the PBL makes those ranges less attractive to spammers because they know they can't deliver to many networks which use it, and you'll get proportionally fewer spam reports.
Please add your dynamic IP ranges to the PBL!
What is needed: your users should submit outbound mail with SMTP AUTH, on the submission port (587), preferably over an encrypted connection.
In Outlook and Outlook Express, the menus for that are something like this:
- Internet E-mail Settings
- - Outgoing Server
- - - [x] My outgoing server (SMTP) requires authentication
- Advanced
- - Server Port Number
- - - Outgoing server (SMTP): [587]
- - - [x] This server requires an encrypted connection
You should also consider filtering outbound spam on your mail server, possibly with something like.
ISP admins need to be able to identify proxies, secure them and block unauthorized access to them, and remove the software if necessary. Many trojan proxies do not show up with routine port scans, either operating on obscure high-number ports or using evasive mechanisms to avoid detection. Their filenames are not consistent. It may be necessary to do complete forensics, including packet sniffing and system monitoring, to identify the malware.
For many end-user systems, it is simply more effective to wipe the hard disk and reinstall a fresh system, or to pay a professional to clean the system. The "SecCheck" tool at MyNetWatchman.com is an excellent starting point, highly recommended, easy to use and free!
[hijack source/C&C]---(SOCKS)--->[zombie PC]---(SMTP)--->[spam rcpt]
To confirm those detections, as well as to monitor your own network routinely, every ISP should have the tools and skills to do traffic flow analysis. When you look at traffic flowing from a C&C server, you will see many connections on high-number ports to many destination IP address around the 'net - thousands and thousands of connections on unusual high-number ports! The destinations are almost always compromised proxies on end-user broadband networks. Be prepared with a good network monitoring tool to check traffic flows from suspected hijack servers. Some of the tools we've heard of include Tippingpoint, Fortinet, Sandvine, Packeteer, Allot NetEnforcer, Cisco's P-cube, and flow-tools for collecting and processing netflow data from Cisco and Juniper routers. Also see Stager for sorting and presenting data collected by those tools.
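That tell-tale fan-out — one host opening connections to many distinct destinations on unusual high-number ports — is straightforward to flag once flow records are in hand. A toy sketch follows; the records, field layout and threshold are all invented, and real netflow exports need parsing first:

```python
# Sketch: flag hosts with suspicious fan-out from (src, dst, dstport)
# flow tuples. The records and the threshold here are invented; real
# flow exports (e.g. from flow-tools or your router) need parsing first.
from collections import defaultdict

flows = [
    ("10.0.0.5", "198.51.100.1", 14231),
    ("10.0.0.5", "203.0.113.9", 27712),
    ("10.0.0.5", "192.0.2.44", 31337),
    ("10.0.0.8", "192.0.2.25", 25),
]

FANOUT_THRESHOLD = 3   # distinct high-port destinations; tune for your network
HIGH_PORT = 1024

# Distinct (destination, port) pairs per source, high ports only.
dests = defaultdict(set)
for src, dst, dport in flows:
    if dport >= HIGH_PORT:
        dests[src].add((dst, dport))

suspects = [src for src, d in dests.items() if len(d) >= FANOUT_THRESHOLD]
print(suspects)  # ['10.0.0.5']
```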
Network operators can also subscribe to a daily report of reported C&Cs on their network. ISOTF's "Drone Armies" research team posted a summary of their work on the NANOG mailing list and invited interested admins to contact them at c2report@isotf.org. Those reports show IP, time, domain, and port.
Spamhaus hears two common excuses given by proxy spammers to trick abuse admins. One excuse is a cover-up for that unusual traffic flow pattern, "We're doing VOIP." A closer look at the packets will disprove that. The other common exuse is "we removed the virus from the computer." Since it wasn't a virus causing the problem, that obviously doesn't stop the spam. Remember, these are dedicated servers with large lists feeding into them. And also remember, if it was a virus problem, you'd see that IP in spam headers. *wink*.
If you use the same hosts for incoming email and smarthosting or outgoing email, then you should always ensure that you exempt authenticated clients from PBL checks. As your users are often on dynamic IP addresses, a user may be assigned an IP address from his provider that is in the PBL.
Another way of putting this is: "Do not use the PBL to block your own users".
Note: This also applies to using the PBL to deny access to web-forums, journals or blogs.
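In Postfix, for example, this usually comes down to restriction ordering: permit authenticated (and local) clients before any DNSBL lookup rejects. A sketch — adapt it to your own restriction list:

```
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_rbl_client zen.spamhaus.org
```

With this ordering, an authenticated user on a PBL-listed dynamic IP is accepted before the reject_rbl_client test ever runs.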
The backscatter problem does not affect only spam firewalls though. Also regular MTAs can suffer of the same problem due to misconfigurations. The problem occurs every time a message is accepted from the original sender, then it is discovered that it can not be delivered for any reason and a non-delivery notification is generated and returned to the original sender. The problem is that the original sender is forged in spam, so those messages may go to someone who was not involved at all in the spam sending. The problem is solved by having the MTA detect that the message can not be delivered while the original SMTP transaction is still in progress, and reject the transaction without accepting the message. In this way, no mail will go to the forged sender.
On the Barracuda Spam Firewall, the option to turn spam bouncing off can be found in the Basic Tab under Spam Scoring. Near the bottom there is a check box for "Send Bounce." This is checked by default and should be unchecked. Instructions with screenshots are shown at. Barracuda 300 may not have this option, but the 400 and 600 versions do have it. Barracuda Networks themselves have now published a document (pdf) on how to shut off this type of bouncing.
When using the amavisd-new content scanner, the configuration file amavisd.conf should contain:
$final_virus_destiny = D_DISCARD; (or D_REJECT)
$final_banned_destiny = D_DISCARD; (or D_REJECT)
$final_spam_destiny = D_DISCARD; (or D_REJECT)
Note from the above example that the same principle applies to anti-virus notifications and bounces as well. If it's a virus, and it's after the year 2003, 99.9% of the time the return path will be bogus. DO NOT SEND IT THERE!
Other references on spam and virus backscatter:
Currently, the most effective protection is Bounce Address Tag Validation (BATV): "Bounce Address Tag Validation (BATV) provides a mechanism for assessing the validity of an email's envelope return (bounce) address. It permits the original submitter of a message to sign the SMTP MailFrom address. This enables detection of invalid bounce addresses."
Other tools which can help include:
DKIM
SPF
Sender-ID
<!flesh out!>
If you have this problem, you can not solve it by blocking specific IP addresses or email senders, or in general using filtering/blocking rules, antispam appliances, etc. The spammers exploited a security hole: it should be impossible for anyone external to your organization to use your server to send mail to arbitrary destinations. Your server should accept inbound mail or outbound mail, but it should reject at the SMTP level by default all mail attempting to "pass through".
So, to solve the problem you have to identify and fix this security hole. This should not be too difficult, because the spam was sent by your mail server and therefore evidences and traces have been left in the mail logs. You have to analyze the logs to figure out what the spammers did. If your mail server is Microsoft Exchange, see the FAQ entry below.
In most cases no malware or viruses are involved. Just checking your server and clients for malware is generally insufficient, usually not relevant and does not solve the problem. Obviously, blocking the forged sender address or the IP of origin also does not solve the problem, as those parameters are constantly changed.
These are the most common mechanisms exploited, approximately ordered by number of occurrences, and their typical resolution path:
For a general perspective about this problem, you can also read this article.
Investigate the logs to find the name of the abused account. Follow the indications in the Microsoft Support Article 895853 How to troubleshoot mail relay issues in Exchange Server 2003 and in Exchange 2000 Server, in particular in the section If mail relay occurs from an account on an Exchange computer that is not configured as an open mail relay.
Then, first of all, all the PCs and laptops used by the owner of that account should be checked for malware, as it is possible that the access credentials were stolen by a trojan program running on a client. This is likely if the password was not a very simple one (a very simple password was probably guessed).
Then, the password of that account should be changed to a new and secure one. After that, keep the logs monitored for a few days to make sure that the abusers can not access the server any more.
Another useful reference from Microsoft is Article 324958 How to block open SMTP relaying and clean up Exchange Server SMTP queues in Windows Small Business Server.
In 2014 and 2015 we are seeing lots of DNS set up on hacked machines, usually on systems in commercial hosting facilities. Forensics on these intrusions have been minimal; usually the box just gets wiped, or maybe some ports blocked, but the actual intrusion vector may not be determined. We'd like to hear more from anyone knowledgable in these intrusions. Here are some of the ways we have heard of DNS being set up to use IP addresses without the owner's permission:
1. TinyDNS installed illicitly. TinyDNS is excellent and perfectly legitimate DNS software, but it is also widely used by bad guys for these rogue DNS installations. They probably like its small size and fast performance, same as legitimate users. Any hack that allows content to be written to the box could be the vulnerability used to install it.
2. NAT (Natural Address Translation; "natting") like this:
-A PREROUTING -d 0.0.0.0 -p udp -m udp --dport 53 -j DNAT--to-destination 255.0.0.255:53
-A POSTROUTING -d 255.0.0.255 -p udp -m udp --dport 53 -j SNAT--to-source 0.0.0.0
3. Multicast DNS (5353/tcp, 5353/udp)
Our blog post on webserver security has general information which can help deter these attacks:
13 July 2007: The Honeypot Project has a very informative paper about fast flux networks at
The Mannheim Formula paper of 2007 provides fast flux diagnostic formulas:
January 2008: ICANN's Security and Stability Advisory Committee has published an advisory on Fast Flux hosting at sac025.pdf.
$ dig @NS2.WESTNS.COM wildcard.malaga-53.com a
; <<>> DiG 9.2.4 <<>> @NS2.WESTNS.COM wildcard.malaga-53.com a
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13728
;; flags: qr aa rd; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;wildcard.malaga-53.com. IN A
;; ANSWER SECTION:
wildcard.malaga-53.com. 180 IN A 68.126.240.182
wildcard.malaga-53.com. 180 IN A 70.228.170.6
wildcard.malaga-53.com. 180 IN A 68.74.207.77
wildcard.malaga-53.com. 180 IN A 70.253.85.166
wildcard.malaga-53.com. 180 IN A 67.37.184.67
;; Query time: 145 msec
;; SERVER: 67.190.128.40#53(67.190.128.40)
;; WHEN: Sun Sep 3 17:xx:xx 2006
;; MSG SIZE rcvd: 120
[67.37.184.67] adsl-67-37-184-67.dsl.chcgil.ameritech.net.
[70.228.170.6] ppp-70-228-170-6.dsl.emhril.ameritech.net.
[68.74.207.77] adsl-68-74-207-77.dsl.milwwi.ameritech.net.
adsl-68-74-207-77.dsl.milwwi.sbcglobal.net.
[70.253.85.166] ppp-70-253-85-166.dsl.austtx.swbell.net.
[68.126.240.182] ppp-68-126-240-182.dsl.irvnca.pacbell.net.
[67.190.128.40] c-67-190-128-40.hsd1.co.comcast.net.
$ dig @ns2.WESTNS.COM ns2.westns.com a
; <<>> DiG 9.2.4 <<>> @ns2.WESTNS.COM ns2.westns.com a
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53259
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 4
;; QUESTION SECTION:
;ns2.westns.com. IN A
;; ANSWER SECTION:
ns2.westns.com. 180 IN A 70.115.193.8
;; AUTHORITY SECTION:
westns.com. 180 IN NS ns2.westns.com.
westns.com. 180 IN NS ns4.westns.com.
westns.com. 180 IN NS ns3.westns.com.
westns.com. 180 IN NS ns5.westns.com.
westns.com. 180 IN NS ns1.westns.com.
;; ADDITIONAL SECTION:
ns4.westns.com. 180 IN A 24.98.2.58
ns3.westns.com. 180 IN A 67.184.161.182
ns5.westns.com. 180 IN A 24.163.92.147
ns1.westns.com. 180 IN A 70.245.189.179
;; Query time: 2359 msec
;; SERVER: 67.190.128.40#53(67.190.128.40)
;; WHEN: Sun Sep 3 18:xx:xx 2006
;; MSG SIZE rcvd: 198
It's a tricky way for a spammer to make their spam appear to come from a different IP than their dedicated server, but still on the same VLAN subnet.
To avoid the problem, design your VLAN so that:
- customers can't snoop on each other, even at broadcast level such as arp traffic;
- no customer can send out packets with any source IP other than their assigned one;
- no customer can generate ARP traffic for any IPs other than their own.
As for identifying what server it really is, can you put a port monitor on the switch port of your router and sniff its packets? You should be able to see the MAC address of anything returned from those IP addresses, even if they were injecting forged packets from somewhere else on the Internet. And you should be able to see the gratuitous ARPs.
This technique is in use in the wild. If you detect this on your network, Spamhaus would very much appreciate any details you can provide, including connection information for the spammer's "home base."
Note that the webmail module has been removed from the PHP-Nuke distribution due to its poor security since version 7.2 (march 2004). Using a more recent distribution of PHP-Nuke is of no help if the webmail module has been retained from a previous release, or has been downloaded later from some other site as an "add-on".
Also note that, according to user reports, it appears to be insufficient to disable the webmail feature to prevent the scammers from using it. You must delete the webmail module, causing removal of all the related files from the server.
Getting a delegation
You get a rDNS delegation from the same entity that gave you the IP addresses: it may be the ISP you are taking connectivity from, or the regional Internet registry of your area.
Regional Internet Registries (RIR) have formal procedures for assigning rDNS delegations: follow the links pertinent to your registry.
For general information about defining a reverse zone in your nameservers, see for instance
the APNIC guide or the RIPE pages linked above.
Note that you do not need to insert all the individual records by hand. If you are using Bind, once you have defined a naming convention for a portion of your space you can use the powerful $GENERATE directive (described in the Bind 9 manual) to define several records with a single line.
For instance, you can write in the 2.1.in-addr.arpa zone file:
$ORIGIN 2.1.in-addr.arpa.
$GENERATE 1-254 $.3 PTR adsl-1-2-3-$.dynamic.yourdomain.net.
1.3.2.1.in-addr.arpa. PTR adsl-1-2-3-1.dynamic.yourdomain.net.
...
254.3.2.1.in-addr.arpa. PTR adsl-1-2-3-254.dynamic.yourdomain.net.
Do not forget to check that matching forward A records are defined in the zone of yourdomain.net. This is very important because it "validates" the rDNS information. You can define them with another $GENERATE directive, like
$ORIGIN dynamic.yourdomain.net.
$GENERATE 1-254 adsl-1-2-3-$ A 1.2.3.$
adsl-1-2-3-1.dynamic.yourdomain.net. A 1.2.3.1
...
adsl-1-2-3-254.dynamic.yourdomain.net. A 1.2.3.254
123.adsl7.timbuktu.dynamic.example.com
Clearly the same information could be made available with another scheme like
dadsl7-123-tktu.example.com, but other people would have to define complex regular expressions to parse that, wasting resources and increasing the likelihood of making mistakes.
We also recommend to use similar conventions to identify static IPs and business connections as clearly as possible. For instance, you can use something like:
245.sdsl.timbuktu.business.static.example.com
mail.example.com
SEC. 4 (c)(1) ISP HELD HARMLESS FOR GOOD FAITH PRIVATE ENFORCEMENT- An ISP is not liable, under any Federal or State civil or criminal law, for any action it takes in good faith to block the transmission or receipt of unsolicited commercial e-mail. | http://www.spamhaus.org/faq/section/ISP%20Spam%20Issues | CC-MAIN-2015-40 | refinedweb | 4,234 | 63.19 |
Expressional Inference Rule Engine Release Notes
Bloomreach offers Enterprise support for this feature to Bloomreach Experience customers. The release cycle of this feature may differ from our core product release cycle.
Releases
3.1.0 (unreleased)
Released: TBD
- SERVPLUG-103 - Correct the demo project, moving all plugin dependencies into the CMS/platform.
3.0.3
Released: 6 September 2019
- SERVPLUG-90 - Access goal values from the rules expression builtin model
3.0.2
Released: 20 August 2019
- SERVPLUG-85 - open multiple Inference Rule documents
- SERVPLUG-87 - Autoexport breaks after installing the addon
- SERVPLUG-88 - Upgrade demo based on v13.3 while the module itself kept with v13.0
3.0.1
Released: 6 February 2019
- SERVPLUG-77 - Hide inferenceengine namespace in folder views pane (right pane, not tree)
3.0.0
Released: 15 january 2019
- SERVPLUG-55 - Upgrade for v13 compliancy
The dependency model in projects has changed! Please revisit the installation page.
2.x and 1.x
See here for the release notes for versions 2.x on Bloomreach Experience 12 and 11. | https://documentation.bloomreach.com/13/library/enterprise/services-features/inference-engine/release-notes.html | CC-MAIN-2021-04 | refinedweb | 174 | 50.23 |
Nodes unable to join TTN via LoPy nano gateway
Hi all,
I am trying to set up the LoPy as a nano gateway. I followed the tutorial, but still run into some connectivity issues.
The nano gateway is registered, I ticked the box 'I'm using a packet forwarder' and I can see in the console that it is connected to TTN.
In order to test the gateway, I am using an MTDOT-BOX-G-915 that I used previously to test another gateway (but not based on a LoPy).
The nano gateway can forward the Join Requests of the node to TTN (I can see those messages in the console, see the screenshot below). But then it seems no packet goes back to the node as it is not able to join the network.
The interesting thing bit is that all the Join Requests are made by the same node, but the app eui and dev eui change all the time.
Here are a the detail couple of requests made by the same node:
{ "gw_id": "eui-XXXX", "payload": "ACYhbpBc59DgWb8gIzfm0DQIcXrQTus=", "dev_eui": "34D0E6372320BF59", "lora": { "spreading_factor": 10, "bandwidth": 125, "air_time": 370688000 }, "coding_rate": "4/5", "timestamp": "2017-05-22T14:50:20.747Z", "rssi": -54, "snr": -9, "app_eui": "E0D0E75C906E2126", "frequency": 917999900 } { "gw_id": "eui-XXXX", "payload": "AEVFAPB+1bNwy76QCQoZEQAX/JeBwlI=", "dev_eui": "0011190A0990BECB", "lora": { "spreading_factor": 10, "bandwidth": 125, "air_time": 370688000 }, "coding_rate": "4/5", "timestamp": "2017-05-22T14:52:18.184Z", "rssi": -97, "snr": -65, "app_eui": "70B3D57EF0004545", "frequency": 917999900 }
Here is the
configuration.py(I am in Australia, so I had to change the frequency and some other details) :
GATEWAY_ID = 'XXXX' SERVER = 'router.eu.thethings.network' PORT = 1700 NTP = "pool.ntp.org" NTP_PERIOD_S = 3600 WIFI_SSID = 'XXXX' WIFI_PASS = 'XXXX' LORA_FREQUENCY = 91800000 LORA_DR = "SF10BW125"
In the file
nanogatewy.piI changed the following:
RX_PK = {"rxpk": [{"time": "", "tmst": 0, "chan": 6, "rfch": 0, "freq": 918.0, "stat": 1, "modu": "LORA", "datr": "SF10BW125", "codr": "4/5", "rssi": 0, "lsnr": 0, "size": 0, "data": ""}]}
I also made sure that the port 1700 is open on my router and that UDP packets coming through that port are routed to the LoPy.
I did try other TTN routers (asia-se, au,...) as well, but with always the same results.
Also, I tried to connect the LoPy as a node to TTN via an other gateway, and it did work properly. So the Lora radio module works.
Do you have any idea why I cannot get that node to join TTN via the LoPy nano gateway? Is there something I am missing somewhere?
Thanks,
Johan
Finally I could get the node to join TTN! :D
I needed to add a delay of 5 seconds before the node starts listening for the response of the Join Request with the command
AT+JD:
> AT+JD=5 OK > AT+JOIN Successfully joined network OK
Also the fix for
_send_down_linkin my last post is no more needed as the frequency in the Event Data is now ok.
I have made some progress thanks to your suggestion.
To set the node to use only one channel (here 918Mhz) I can use the commands
AT+CHM:
>AT+CHM=0,4000 OK
and
AT+TXCHto
Then I could see in the Console that the Join Requests are accepted, but the nano gateway still did not transfert those messages to the node.
Here is the Event Data of the Join Accept:
{ "gw_id": "eui-XXX", "payload": "IHFSiaovCJacFacBPZaei8Q=", "lora": { "spreading_factor": 10, "bandwidth": 500, "air_time": 82432000 }, "coding_rate": "4/5", "timestamp": "2017-05-24T10:52:35.449Z", "frequency": 925700000 }
So I figured that the
frequencyin the Event Data (925.7 Mhz) does not match the frequency the node is using.
To fix that, I modified the function
_send_down_linkso that it uses the good frequency to forward the message:
def _send_down_link(self, data, tmst, datarate, frequency): self.lora.init(mode=LoRa.LORA, frequency=self.frequency, bandwidth=LoRa.BW_125KHZ, sf=self._dr_to_sf(datarate), preamble=8, coding_rate=LoRa.CODING_4_5, tx_iq=True) while time.ticks_us() < tmst: pass self.lora_sock.send(data)
(it simply ignores the wrong frequency mentionned in the Event Data).
Now I get the following error:
> Downlink timestamp error!, t_us: 4295066743
Do you have any idea how I can fix that one?
Thanks! I'll have a look and let you know how it goes.
@jojo you need to set your MTDOT to only use a single channel - I'm not exactly sure how to do this but try their documentation! I'll have a look into it for you if you can't find anything.
As I use an MTDOT, I am using the following at commands:
# waking up the moduel > at OK # enable public network mode > at+pn=1 OK # selecting the sub-band for Australia > at+fsb=2 OK # selecting OTA mode > at+njm=1 OK # app eui > at+ni=0,XXXXXX Set Network ID: XX-XX-XX OK # app key > at+nk=0,XXXXXX Set Network Key: XXXXXX OK # bumping the tx power > at+txp=20 OK # joining the network > at+join Failed to join network
(more info on the AT commands can be found here here).
Could you please share the code that you are using for your nodes? The Nano-Gateway is only a single channel gateway (Not a formal LoRaWAN Gateway) so you need to do some specific configuration to allow for this.
Thanks! | https://forum.pycom.io/topic/1262/nodes-unable-to-join-ttn-via-lopy-nano-gateway/3 | CC-MAIN-2019-30 | refinedweb | 880 | 68.91 |
Neural Network Lab
Learn how to create a perceptron that can categorize inputs consisting of two numeric values.
A perceptron is computer code that models the behavior of a single biological neuron. Perceptrons were one of the very earliest types of machine-learning techniques and are the predecessors to neural networks. Although perceptrons are quite limited, learning about perceptrons might interest you for several reasons. Understanding perceptrons gives you a foundation for understanding neural networks, knowledge of perceptrons is almost universal in the machine-learning community and, in my opinion, perceptrons are conceptually interesting in their own right.
Take a look at a demo C# console application in Figure 1 to get an idea of where this article is headed. The goal of the demo is to create a perceptron that can categorize input that consists of two numeric values (such as 1.0, 4.5) into one of two classes (-1 or +1). The first part of the screenshot shows that there are 16 training data items with known classifications. For example, the first training item input is (-5.0, -5.5) and its corresponding class is -1. You can imagine that the two inputs might be the results of two medical tests and that the classification represents the presence (+1) or absence (-1) of some disease.
The next part of the screenshot indicates that the perceptron uses the training data to compute two weights with values (-0.0010, 0.0180) and a bias term with value 0.0060. The training process uses a learning rate that has been set to 0.001 and a maxEpochs value of 500. The two weights and the bias term essentially define the behavior of the perceptron. After training, the perceptron correctly classifies 88 percent of the training data (14 out of 16 items).
After the perceptron has been created, it's presented with a new data item, (1.0, 4.5), that belongs to an unknown class. The perceptron predicts that the new data item belongs to class +1.
The Excel graph in Figure 2 illustrates the perceptron demo. Training data items that belong to class -1 are colored blue and are mostly below the x-axis. Training items that belong to class +1 are colored red. The new data item with unknown class is labeled with a question mark. You can intuit that the new item is most likely red, or belongs to class +1, as predicted by the perceptron.
In the sections that follow, I'll walk you through the C# code for the demo program. This article assumes you have basic familiarity with perceptrons, and at least intermediate-level programming skills. You may want to read my previous column, "Modeling Neuron Behavior in C#," for an introduction to perceptrons.
Overall Program Structure
To create the perceptron demo program, I launched Visual Studio 2012 and created a new C# console application named PerceptronClassification. The demo program has no significant Microsoft .NET Framework dependencies, so any version of Visual Studio should work. After the template code loaded, I renamed file Program.cs in the Solution Explorer window to the more descriptive PerceptronClassificationProgram.cs, and Visual Studio automatically renamed the class containing the Main method for me. At the top of the code I deleted all using statements except for the one that references the System namespace. The overall program structure and the Main method, with a few minor edits, are presented in Listing 1.
Listing 1. Perceptron classification program structure.
using System;
namespace PerceptronClassification
{
class PerceptronClassificationProgram
{
static void Main(string[] args)
{
try
{
Console.WriteLine("\nBegin perceptron classification demo");
double[][] trainData = new double[16][];
trainData[0] = new double[] { -5.0, -5.5 };
trainData[1] = new double[] { -3.5, -6.0 };
trainData[2] = new double[] { -2.0, -4.5 };
trainData[3] = new double[] { -2.0, -1.5 };
trainData[4] = new double[] { -3.5, 2.0 };
trainData[5] = new double[] { -2.5, 4.0 };
trainData[6] = new double[] { -1.5, 3.0 };
trainData[7] = new double[] { 2.0, 3.5 };
trainData[8] = new double[] { 4.5, 5.0 };
trainData[9] = new double[] { 6.0, 2.5 };
trainData[10] = new double[] { 3.0, 1.5 };
trainData[11] = new double[] { 1.5, -5.0 };
trainData[12] = new double[] { 2.0, 2.0 };
trainData[13] = new double[] { 3.5, -4.0 };
trainData[14] = new double[] { 4.0, -5.5 };
trainData[15] = new double[] { 4.5, -2.0 };
int[] Y = new int[16] { -1, -1, -1, 1, 1, 1, 1, 1,
1, 1, 1, -1, -1, -1, -1, -1 };
Console.WriteLine("\nTraining data: \n");
ShowTrainData(trainData, Y);
double[] weights = null;
double bias = 0.0;
double alpha = 0.001;
int maxEpochs = 500;
Console.Write("\nSetting learning rate to " + alpha.ToString("F3"));
Console.WriteLine(" and maxEpochs to " + maxEpochs);
Console.WriteLine("\nBeginning training the perceptron");
Train(trainData, alpha, maxEpochs, Y, out weights, out bias);
Console.WriteLine("Training complete");
Console.WriteLine("\nBest percetron weights found: ");
ShowVector(weights, 4);
Console.Write("\nBest perceptron bias found = ");
Console.WriteLine(bias.ToString("F4"));
double acc = Accuracy(trainData, weights, bias, Y);
Console.Write("\nAccuracy of perceptron on training data = ");
Console.WriteLine(acc.ToString("F2"));
double[] unknown = new double[] { 1.0, 4.5 };
Console.WriteLine("\nNew data with unknown class = ");
ShowVector(unknown, 1);
Console.WriteLine("\nUsing best weights and bias to classify data");
int c = ComputeOutput(unknown, weights, bias);
Console.Write("\nPredicted class of new data = ");
Console.WriteLine(c.ToString("+0;-0"));
Console.WriteLine("\nEnd perceptron demo\n");
Console.ReadLine();
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
Console.ReadLine();
}
} // Main
static int ComputeOutput(double[] data, double[] weights,
double bias) { . . }
static int Activation(double x) { . . }
static double Accuracy(double[][] trainData, double[] weights,
double bias, int[] Y) { . . }
static double TotalError(double[][] trainData, double[] weights,
double bias, int[] Y) { . . }
static double Error(double[] data, double[] weights,
double bias, int Y) { . . }
static void Train(double[][] trainData, double alpha, int maxEpochs,
int[] Y, out double[] weights, out double bias) { . . }
static void ShowVector(double[] vector, int decimals) { . . }
static void ShowTrainData(double[][] trainData, int[] Y) { . . }
} // class
} // ns
The demo program consists of a Main method and eight helper methods. The program begins by setting up the 16 training data items in a matrix named trainData. For simplicity, and so that I could visualize the demo as a graph, each item has two type double values. Perceptrons can handle data with any number of dimensions. The classes (-1 or +1) for each training item are stored in a separate array named Y. The training data has been designed so that no perceptron can classify the data with 100 percent accuracy.
Helper method ShowTrainData is used to display the training data as shown in Figure 1. Next, the program declares a weights array and a bias variable. The values for weights and bias will be computed and returned as out parameters by method Train. Variable alpha, usually called the learning rate, is set to 0.001 and controls how quickly the Train method converges to a solution. Variable maxEpochs is set to 500 and limits how many times the weights and bias are updated in method Train.
The Train method uses the perceptron learning algorithm to search for and return the weights and bias values that create a perceptron that best fits the training data (in a sense I'll explain shortly). Method Accuracy uses the best weight and bias values to predict the class of each training item, compares each predicted class with the known class stored in the Y array, and calculates the percentage of correct classifications.
The demo program concludes by creating a new data item (1.0, 4.5) and using the perceptron to predict the item's class. Notice that there isn't a perceptron object of some sort. The perceptron consists of the weights and bias values plus method ComputeOutput. You may want to encapsulate my code into an explicit Perceptron class.
Computing Perceptron Output
Method ComputeOutput accepts three input arguments: a data item, the weights array and the bias value. The method returns either -1 or +1. It's possible to create multi-class (more than two class values) perceptrons rather than binary classification perceptrons, but in my opinion perceptrons are best suited for two-class problems.
Listing 2. Helper method Activation.
static int ComputeOutput(double[] data, double[] weights,
double bias)
{
double result = 0.0;
for (int j = 0; j < data.Length; ++j)
result += data[j] * weights[j];
result += bias;
return Activation(result);
}
static int Activation(double x)
{
if (x >= 0.0) return +1;
else return -1;
}
In Listing 2 I've defined a helper method, Activation, which is nothing more than a simple threshold function. In neural networks, activation functions can be much more complex. When using perceptrons for classification with real values, which can be positive or negative, it's usually best to code the two possible classes as -1 and +1 rather than 0 and 1.
The perceptron training method computes the error associated with the current weights and bias values. An obvious measure of error is the percentage of incorrectly classified training data items. However, error computation is surprisingly subtle and a different approach is used to compute a total error value.
Helper method Error computes how far away a perceptron's pre-activation output for a single training data item is from the item's actual class value; the Error method returns a value that's one-half of a sum of squared deviations:
static double Error(double[] data, double[] weights,
double bias, int Y)
{
double sum = 0.0;
for (int j = 0; j < data.Length; ++j)
sum += data[j] * weights[j];
sum += bias;
return 0.5 * (sum - Y) * (sum - Y);
}
Method TotalError is the sum of errors for all training data items:
static double TotalError(double[][] trainData, double[] weights,
double bias, int[] Y)
{
double totErr = 0.0;
for (int i = 0; i < trainData.Length; ++i)
totErr += Error(trainData[i], weights, bias, Y[i]);
return totErr;
}
Training the Perceptron
With methods ComputeOutput and TotalError defined, it's possible to define a training method that computes the best (lowest total error) weights and bias values for a given set of training data. Method Train is presented in Listing 3.
Listing 3. Training the perceptron.
static void Train(double[][] trainData, double alpha, int maxEpochs,
int[] Y, out double[] weights, out double bias)
{
int numWeights = trainData[0].Length;
double[] bestWeights = new double[numWeights]; // Best weights found, return value
weights = new double[numWeights]; // Working values (initially 0.0)
double bestBias = 0.0;
bias = 0.01; // Working value (initial small arbitrary value)
double bestError = double.MaxValue;
int epoch = 0;
while (epoch < maxEpochs)
{
for (int i = 0; i < trainData.Length; ++i) // Each input
{
int output = ComputeOutput(trainData[i], weights, bias);
int desired = Y[i]; // -1 or +1
if (output != desired) // Misclassification, so adjust weights and bias
{
double delta = desired - output; // How far off are you?
for (int j = 0; j < numWeights; ++j)
weights[j] = weights[j] + (alpha * delta * trainData[i][j]);
bias = bias + (alpha * delta);
// New best?
double totalError = TotalError(trainData, weights, bias, Y);
if (totalError < bestError)
{
bestError = totalError;
Array.Copy(weights, bestWeights, weights.Length);
bestBias = bias;
}
}
}
++epoch;
} // while
Array.Copy(bestWeights, weights, bestWeights.Length);
bias = bestBias;
return;
}
The training method is presented in high-level pseudo-code in Listing 4.
Listing 4. The perceptron training method in pseudo-code.
loop until done
foreach training data item
compute output using weights and bias
if the output is incorrect then
adjust weights and bias
compute error
if error < smallest error so far
smallest error so far = error
save new weights and bias
end if
end if
increment loop counter
end foreach
end loop
return best weights and bias values found
Although the training method doesn't have very many lines of code, it's quite clever. The heart of the method is the line that adjusts the weights when a training item is incorrectly classified:
weights[j] = weights[j] + (alpha * delta * trainData[i][j]);
The adjustment value has three terms: alpha, delta and the input value associated with the weight being adjusted. Delta is computed as desired - output, that is, the desired value stored in the Y array minus the output produced by the current weights and bias. Suppose that the desired value is +1 but the output value is -1. This means the output value is too small, and so weights and bias values must be adjusted to increase the output value. If the input associated with some weight is positive, then increasing the weight will increase the output. If the associated input is negative, then decreasing the weight will increase the output.
The learning rate, alpha, is typically some small value such as 0.001 or 0.00001, and throttles the magnitude of change when weights and bias values are adjusted. An advanced alternative is to allow alpha to change, starting out with a relatively large value and gradually decreasing. The idea is to have large adjustment jumps early on but then make the jumps finer-grained later. Another advanced alternative is to set alpha to a small random value each time it's used.
The version of the perceptron-training algorithm presented here iterates a fixed maxEpochs times (500 in the demo). Notice that no change to weights or bias will occur after all training data items are correctly classified. You might want to check for this condition and exit the main learning loop when it occurs.
The training algorithm computes the total error every time there's a change to weights and bias values. Because the total error function iterates through all training data items, this is an expensive process and may not be feasible for very large data sets. An alternative is to check total error only occasionally -- for instance, after every 10th change to the weights and bias.
Wrapping Up
Even though the training method uses a not-so-obvious measure of error to compute the weights and bias values that define a perceptron, ultimately what matters most is how well a perceptron classifies data. Method Accuracy, shown in Listing 5, returns the percentage of correctly classified data items.
Listing 5. Method Accuracy.
static double Accuracy(double[][] trainData, double[] weights,
double bias, int[] Y)
{
int numCorrect = 0;
int numWrong = 0;
for (int i = 0; i < trainData.Length; ++i)
{
int output = ComputeOutput(trainData[i], weights, bias);
if (output == Y[i]) ++numCorrect;
else ++numWrong;
}
return (numCorrect * 1.0) / (numCorrect + numWrong);
}
Finally, for the sake of completeness, here are the two display routines used in the demo program. Method ShowVector is:
static void ShowVector(double[] vector, int decimals)
{
for (int i = 0; i < vector.Length; ++i)
Console.Write(vector[i].ToString("F" + decimals) + " ");
Console.WriteLine("");
}
Method ShowTrainData is:
static void ShowTrainData(double[][] trainData, int[] Y)
{
for (int i = 0; i < trainData.Length; ++i)
{
Console.Write("[" + i.ToString().PadLeft(2, ' ') + "] ");
for (int j = 0; j < trainData[i].Length; ++j)
{
Console.Write(trainData[i][j].ToString("F1").PadLeft(6, ' '));
}
Console.WriteLine(" -> " + Y[i].ToString("+0;-0"));
}
}
The code and explanation presented in this article should give you a solid foundation for understanding classification with perceptrons. I recommend you investigate a bit with some of the alternatives I've suggested. Also, it's very informative to experiment by changing the value of the learning rate alpha, the value of the loop limit maxEpochs, the values of the training data and the values of an unknown data item to classify.
Perceptrons are just one of many machine-learning techniques that can be used to perform binary classification on real-valued input. The key weakness of perceptrons is that they only work well on data that can be linearly separated. In Figure 2, notice that it's possible to separate the two classes of data items using a straight line. In other words, there are some types of binary classification problems where perceptrons just don't work. This weakness led to the creation of neural networks, which are collections of interconnected perceptrons.
On Sun, Oct 28, 2007 at 10:39:00AM -0400, Ronald S. Bultje wrote:
> Hi,
>
> On 10/28/07, Ronald S. Bultje <rsbultje at gmail.com> wrote:
> > ...?
>
> [...]
> Index: ffmpeg/libavformat/rtsp.c

iam not rtsp maintainer :)

[...]

> @@ -303,15 +314,6 @@
>      /* fill the dest addr */
>      url_split(NULL, 0, NULL, 0, hostname, sizeof(hostname), &port, NULL, 0, uri);
>
> -    /* XXX: fix url_split */
> -    if (hostname[0] == '\0' || hostname[0] == '?') {
> -        /* only accepts null hostname if input */
> -        if (s->is_multicast || (flags & URL_WRONLY))
> -            goto fail;
> -    } else {
> -        udp_set_remote_url(h, uri);
> -    }
> -
>      if(!ff_network_init())
>          return AVERROR(EIO);
>
> @@ -380,6 +382,16 @@
>      }
> #endif /* CONFIG_IPV6 */
>
> +    /* XXX: fix url_split */
> +    s->udp_fd = udp_fd;
> +    if (hostname[0] == '\0' || hostname[0] == '?') {
> +        /* only accepts null hostname if input */
> +        if (s->is_multicast || (flags & URL_WRONLY))
> +            goto fail;
> +    } else {
> +        udp_set_remote_url(h, uri);
> +    }
> +
>      if (is_output) {
>          /* limit the tx buf size to limit latency */
>          tmp = UDP_TX_BUF_SIZE;
> @@ -394,7 +406,6 @@
>          setsockopt(udp_fd, SOL_SOCKET, SO_RCVBUF, &tmp, sizeof(tmp));
>      }
>
> -    s->udp_fd = udp_fd;
>      return 0;
> fail:
>      if (udp_fd >= 0)

this looks independant of the rest of the patch, can it be split?

note, i dont know the code at all, iam just reviewing it as it doesnt seem
anyone else is volunteering ...

also the more verbose the descriptions of the patches are the more likely
they will be accepted, things like X doesnt work Y does work are not good
explanations
something, like: 'see page 15 of "portable networking for idiots" this
explains why what udp.c does is wrong and why my patch is correct' or
similar would be much: <>
I have written a C++ template library for KD-Trees, trying to stay as close as
possible to STL containers. The library is not (yet) complete and it's not
thoroughly tested. However, given the effort and grief I went through in
writing it, I would like to make it available to folks, get people to test it,
and hopefully have some peeps submit improvements.

It
- sports an unlimited number of dimensions (in theory)
- can store any data structure, provided the data structure provides
  operator[] for the dimensional components 0..k-1 (arrays and std::vector
  already do) and a std::less implementation for the type of the dimensional
  components
- has support for custom allocators
- implements iterators
- provides standard find as well as range queries
- has amortised O(lg n) time (O(n lg n) worst case) on most operations
  (insert/erase/find optimised) and worst-case O(n) space
- provides a means to rebalance and thus optimise the tree
- exists in its own namespace
- uses STL coding style, basing a lot of the code on stl_tree.h

So the question is: should I/can I package it for Debian?

It's not yet documented, although the usage should be fairly straightforward.
I am hoping to find someone else to document it as I suck at documentation
and as the author, it's exceptionally difficult to stay didactically correct.

It's just 6 .hpp files, so it would be a -dev package without a .so or .a
file. Other than installing the 6 .hpp files into /usr/include/kdtree++,
providing pkg-config files and a -config binary, do I need to do anything?
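To make the stated requirement concrete, here is a minimal sketch of a point type such a tree could store (the names are illustrative, not part of the library): it exposes operator[] for the dimensional components 0..k-1, and its component type (double) already has a std::less.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative only: a 3-dimensional point satisfying the two stated
// requirements -- operator[] gives access to components 0..k-1, and the
// component type (double) is ordered by the default std::less<double>.
struct Point3 {
    double coord[3];
    double operator[](std::size_t i) const { return coord[i]; }
};
```

As the post notes, plain arrays and std::vector already meet the same requirement out of the box.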
import java.util.Iterator;

/**
 * Defines an iterator that operates over a <code>Map</code>.
 * <p>
 * This iterator is a special version designed for maps. It can be more
 * efficient to use this rather than an entry set iterator where the option
 * is available, and it is certainly more convenient.
 * <p>
 * A map that provides this interface may not hold the data internally using
 * Map Entry objects, thus this interface can avoid lots of object creation.
 * <p>
 * In use, this iterator iterates through the keys in the map. After each call
 * to <code>next()</code>, the <code>getValue()</code> method provides direct
 * access to the value. The value can also be set using <code>setValue()</code>.
 * <pre>
 * MapIterator<String,Integer> it = map.mapIterator();
 * while (it.hasNext()) {
 *   String key = it.next();
 *   Integer value = it.getValue();
 *   it.setValue(value + 1);
 * }
 * </pre>
 *
 * @param <K> the type of the keys in the map
 * @param <V> the type of the values in the map
 * @since 3.0
 * @version $Id: MapIterator.java 1361710 2012-07-15 15:00:21Z tn $
 */
public interface MapIterator<K, V> extends Iterator<K> {

    /**
     * Checks to see if there are more entries still to be iterated.
     *
     * @return <code>true</code> if the iterator has more elements
     */
    boolean hasNext();

    /**
     * Gets the next <em>key</em> from the <code>Map</code>.
     *
     * @return the next key in the iteration
     * @throws java.util.NoSuchElementException if the iteration is finished
     */
    K next();

    //-----------------------------------------------------------------------
    /**
     * Gets the current key, which is the key returned by the last call
     * to <code>next()</code>.
     *
     * @return the current key
     * @throws IllegalStateException if <code>next()</code> has not yet been called
     */
    K getKey();

    /**
     * Gets the current value, which is the value associated with the last key
     * returned by <code>next()</code>.
     *
     * @return the current value
     * @throws IllegalStateException if <code>next()</code> has not yet been called
     */
    V getValue();

    //-----------------------------------------------------------------------
    /**
     * Removes the last returned key from the underlying <code>Map</code>
     * (optional operation).
     * <p>
     * This method can be called once per call to <code>next()</code>.
     *
     * @throws UnsupportedOperationException if remove is not supported by the map
     * @throws IllegalStateException if <code>next()</code> has not yet been called
     * @throws IllegalStateException if <code>remove()</code> has already been called
     *  since the last call to <code>next()</code>
     */
    void remove();

    /**
     * Sets the value associated with the current key (optional operation).
     *
     * @param value the new value
     * @return the previous value
     * @throws UnsupportedOperationException if setValue is not supported by the map
     * @throws IllegalStateException if <code>next()</code> has not yet been called
     * @throws IllegalStateException if <code>remove()</code> has been called since the
     *  last call to <code>next()</code>
     */
    V setValue(V value);

}
Introduction

More about this project can be found on my website (link on my profile). I do not want to dwell on it too much. Let's just say it's a project I have been working on for a long time and it has undergone many transformations. I don't think I will ever finish it, but working on it I discovered and learned many interesting things, some of which will be posted (or are already posted) here on Instructables.

The idea that excited me (so much) was to use this TFT touchscreen display to make an XY controller (pad) with visual feedback.
Step 1: Overview of the Shield
Resolution: 240x320.
Size: 2.8 Inch.
Colors: 262K
TFT driver: ILI9325DS (supported by UTFT library)
Touch driver: XPT2046 (supported by the UTouch library; this particular model requires some changes)
Interface:
- TFT: 8bit data and 4bit control.
- Touch Screen: 5 bit.
- SD : 4bit.
Price: $19 at... (this is where I bought it).
Step 2: TFT Hardware Setup
Installing the shield is straightforward. You still need to select the correct voltage before use: there is a switch at the top right, next to the SD socket. For Arduino Mega and Arduino Uno you must select 5V. Also, do not push the shield all the way down.
Step 3: Libraries Setup
1. First you need to install the UTFT library. The latest version is here; I'm going to put the library here too (file UTFT.zip). You never know what might happen in the future.
2. The same goes for the UTouch library (file UTouch.zip).
3. Now we need to replace UTouch.cpp and UTouch.h with the files of the same name from UTouchWorking.zip. You can read more about this here.
4. If you use an Arduino MEGA, you need to edit the file
...arduino-1.5.8\libraries\UTFT\hardware\avr\HW_AVR_defines.h
and uncomment the line (if you use an Arduino Uno, this modification is not needed):
#define USE_UNO_SHIELD_ON_MEGA 1
5. To optimize memory usage we need to edit the file
...\arduino-1.5.8\libraries\UTFT\memorysaver.h
and uncomment the lines for the display controllers that you don't use. For this display, uncomment all lines except this one:
//#define DISABLE_ILI9325C 1 // ITDB24
Step 4: Touch Screen Calibration
To work properly, the touchscreen needs calibration.
To calibrate the modified UTouch library, we need to run the sketch SimplerCalibration.ino (SimplerCalibration.zip):
We need to match the orientation used by the UTFT library with the one used by the UTouch library:
myGLCD.InitLCD(LANDSCAPE); myTouch.InitTouch(LANDSCAPE);
There are 4 steps. We need to edit the #define selector line for each step, then upload and run the sketch, step by step:
#define selector 1
In this step we verify that we put the correct resolution in the SimplerCalibration.ino file. This is an optional step; I put it here because that is how the author of this solution designed it.
#define selector 2
This is the most important of the four. This is where the actual calibration happens. After uploading the sketch you must obtain the left-top point and the right-bottom point, as in the photo above, and then make the modification in the file:
...\arduino-1.5.8\libraries\UTouch\UTouch.cpp
void UTouch::InitTouch(byte orientation)
{
  orient               = orientation;
  _default_orientation = 0;
  touch_x_left         = 306;   // enter number for left most touch
  touch_x_right        = 3966;  // enter number for right most touch
  touch_y_bottom       = 3906;  // enter number for bottom most touch
  touch_y_top          = 174;   // enter number for top most touch
  disp_x_size          = 320;   // do not forget them if different
  disp_y_size          = 240;   // do not forget them if different
  prec                 = 10;
  // ..................................................
We see that the values for touch_y_bottom and touch_y_top are swapped with respect to the values obtained from the screen (because the origin of the TFT axes is different from the origin of the touch screen). You will have to figure this out for your particular model of TFT: you may or may not need to swap the y-axis or x-axis values, depending on the model. For this particular model, it works as shown above.
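In essence, these four numbers tell the library how to map raw touch readings linearly onto screen pixels. A rough sketch of that mapping (illustrative only; the real code and the function name here are not from UTouch.cpp):

```cpp
#include <cassert>

// Linearly map a raw touch reading onto a pixel coordinate, given the two
// calibration extremes. For the x axis of this display that would be
// raw 306..3966 -> pixel 0..319.
int rawToPixel(long raw, long rawLo, long rawHi, int pixels)
{
    return (int)((raw - rawLo) * (long)(pixels - 1) / (rawHi - rawLo));
}
```

This is why a wrong pair of calibration extremes shows up as touches landing offset or mirrored on one axis.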
#define selector 3
A test program that displays the x and y coordinates of the touch point. Optional.
#define selector 4
A test program that puts a white pixel at the touch point. Optional, but still very intuitive: if you see that the pixels are mirrored on the x or y axis, you need to swap the values for that axis.
Step 5: Examples
If everything is OK with the calibration, we can move forward and run the examples from the UTFT and UTouch libraries.
Let's not forget to edit the lines that refer to the type of display and touch screen:
UTFT myGLCD(ITDB24, A5,A4,A3,A2);
UTouch myTouch(A1,10,A0,8,9);
I have attached photos taken from two of the examples: UTouch_ButtonTest and UTouch_QuickPaint.
Please note that it was quite difficult (for me) to take usable photos of the TFT, because if I shoot it directly (vertically), the camera's reflection appears. It is as if I were trying to photograph the surface of a mirror (with details).
Step 6: XY MIDI Pad
If you ran the last examples, you noticed that they run quite slowly. There is nothing wrong with the TFT display, and there is also nothing wrong with the code or the libraries. It is because we are trying to drive it with an 8-bit microcontroller at 16 MHz (or 20 MHz). In fact, this display can run much faster than we can send data to it (with our processor).
Indeed, we could make some improvements to the code and the libraries, but the changes would not be dramatic. Ideally we need a more powerful processor: 32-bit (or even 16-bit), a DMA controller, >150 MHz, more RAM (for a video buffer), and so on.
Instead we can design our programs to update only a small area of the screen when we need speed.
I put the whole code for the Arduino project XY Pad MIDI here (attached to this step, MIDIPad.zip). It can be studied in detail to see how I applied what I said above. However, I will comment on some sections.
In the function draw_Pad(long x, long y), before drawing the new lines, the old lines are cleared by redrawing them with the background color.
void draw_Pad(long x, long y)
{
  // we draw three lines for x and three lines for y, for better visibility
  myGLCD.setColor(pad_bk);
  myGLCD.drawLine(old_x-1, pad_topY, old_x-1, pad_bottomY);   // clear old line x-1
  myGLCD.drawLine(old_x+1, pad_topY, old_x+1, pad_bottomY);   // clear old line x+1
  myGLCD.drawLine(old_x,   pad_topY, old_x,   pad_bottomY);   // clear old line x
  myGLCD.drawLine(pad_topX, old_y-1, pad_bottomX, old_y-1);   // clear old line y-1
  myGLCD.drawLine(pad_topX, old_y+1, pad_bottomX, old_y+1);   // clear old line y+1
  myGLCD.drawLine(pad_topX, old_y,   pad_bottomX, old_y);     // clear old line y
  myGLCD.setColor(reticle_color);
  myGLCD.drawLine(x-1, pad_topY, x-1, pad_bottomY);           // draw new line x-1
  myGLCD.drawLine(x+1, pad_topY, x+1, pad_bottomY);           // draw new line x+1
  myGLCD.drawLine(x,   pad_topY, x,   pad_bottomY);           // draw new line x
  myGLCD.drawLine(pad_topX, y-1, pad_bottomX, y-1);           // draw new line y-1
  myGLCD.drawLine(pad_topX, y+1, pad_bottomX, y+1);           // draw new line y+1
  myGLCD.drawLine(pad_topX, y,   pad_bottomX, y);             // draw new line y
}
I have not used the well-known Arduino MIDI library (as in my previous project). Instead, I use a simple function to send MIDI CC commands:
void SendMIDIControl(byte channel, byte controller, byte value)
{
  byte tmpChannel = (channel & 0b00001111) - 1;  // 0 = channel 1, 1 = channel 2, etc.
  tmpChannel = 0b10110000 + tmpChannel;          // MIDI data: first bit always 1,
                                                 // + 011 = control change command,
                                                 // + MIDI channel
  byte tmpController = controller & 0b01111111;  // MIDI data: first bit always 0
  byte tmpValue = value & 0b01111111;            // MIDI data: first bit always 0
  Serial1.write(tmpChannel);
  Serial1.write(tmpController);
  Serial1.write(tmpValue);
}
For sending MIDI commands to the PC via USB I used a module that I made previously. For details, see my project here.
Important:
We cannot use the pins of the first serial port because they are already used by the TFT shield.
- For Arduino UNO we must use SoftwareSerial.
- For Arduino MEGA we can use SoftwareSerial or Serial1/Serial2 (I tested with SoftwareSerial and Serial1).
My Arduino USB MIDI Interface module can (in theory) be replaced with a combination of a MIDI Shield and a USB-to-MIDI converter. I have not tested it this way (I have neither).
Step 7: Final
After I played with this project for a while, I saw that there is room for improvement (as always).
We could give up the on-screen buttons on the right and manage the settings with some physical push buttons instead. This would increase the usability of the pad. This project was designed in this form to be a starting point (and a proof of concept) for your MIDI projects.
In this case we need to map the coordinates separately for X and Y.
byte CoordToMIDI(unsigned int coord)
{
  float temp;
  temp = coord;
  temp = temp / 1.72;
  return (byte)temp;
}
will change into:
byte CoordXToMIDI(unsigned int coord)
{
  float temp;
  temp = coord;
  temp = temp / another_value1;  // depends on your virtual pad x size
  return (byte)temp;
}
byte CoordYToMIDI(unsigned int coord)
{
  float temp;
  temp = coord;
  temp = temp / another_value2;  // depends on your virtual pad y size
  return (byte)temp;
}
We can also try using an Arduino Due. Because this board runs at 3.3 V, my interface would need a level converter, and the TFT voltage switch would have to be moved to the 3V position.
Thanks for your attention!
i want to use it as a MIDI XY pad to controll functions on my DAW like Ableton Live 9. I will try this ?
It should work smoothly. At the time I did the project, I tried several DAW programs (demo versions) ... Meanwhile I gave up windows forever... so I tried it with LMMS with Linux and worked too.
thank u so much. i will get to work soon. what a brilliant instructable. excellent work!
Thank you for your words! :) I like to see some of my projects are useful. Good luck with your projects. See you around!
Smart idea! Thanks for shearing :)
Thank you! :) | http://www.instructables.com/id/XY-MIDI-Pad-with-Arduino-and-TFT/ | CC-MAIN-2018-13 | refinedweb | 1,597 | 66.64 |
(For more resources on Yui, see here.)
Q: What is the YUI?
A: The Yahoo! User Interface (YUI) Library.
Q: Who is it for and who will it benefit the most?
A:.
Q: How do I install it?
A: The simple answer is that you don't. Both you, while developing, and your users can load the components from the Yahoo! CDN, or even from the Google CDN, anywhere in the world. The CDN (Content Delivery Network) is what the press nowadays calls 'the cloud'; thus, your users are likely to get better performance loading the library from the CDN than from your own servers. However, if you wish, you can download the whole package, either to take a deep look into it or to serve it to your users from within your own network. You have to serve the library files yourself if you use SSL.
Q: From where can one download the YUI Library?
A: The YUI Library can be downloaded from the YUI homepage.
Q: Are there any licensing restrictions for YUI?
A: All of the utilities, controls, and CSS resources that make up the YUI have been publicly released, completely for free, under the open source BSD (Berkeley Software Distribution) license. This is a very unrestrictive license in general and is popular amongst the open source community.
Q: Which version should I use?
A: The YUI Library is currently provided in two versions, YUI2 and YUI3. YUI2 is the most stable version and there are no plans to discontinue it. In fact, the YUI Team is working on the 2.9 version, which will be the last major revision of the YUI2 code line and is to be released in the second half of 2011. The rest of this article will mostly refer to the YUI 2.8 release, which is the current one. The YUI3 code line is the newest and is much faster and flexible. It has been redesigned from the ground up with all the experience accumulated over 5 years of development of the YUI2 code line. At this point, it does not have such a complete set of components as the YUI2 version and many of those that do exist are in ‘beta’ status.
If your target release date is towards the end of 2011, YUI3 is a better choice since by then, more components should be out of ‘beta’.
The YUI team has also opened the YUI Gallery to allow for external contributions. YUI3, being more flexible, allows for better integration of third-party components; thus, what you might not yet find in the main distribution might already be available from the YUI Gallery.
Q: Does Yahoo! use the YUI Library? Do I get the same one?
A: They certainly do! The YUI Library you get is the very same one that Yahoo! uses to power its own web applications, and it is all released at the same time. Moreover, if you are in a rush, you can also stay ahead of the releases (at your own risk) by looking at GitHub, which is the main live repository for both YUI versions. You can follow YUI's development day by day.
Q: How do I get support?
A: The YUI Library has always been one of the best-documented libraries available, with good user guides and plenty of well-explained examples besides the automated API docs. If that is not enough, you can reach the forums, which currently have over 7000 members, including many very knowledgeable people, both from the YUI team and among the power users.
Q: What does the core of the YUI library do?
A: The 'browser wars', with several companies releasing their own sets of features in their browsers, left the programming community with a set of incompatible features which made front-end programming a nightmare. The core utilities fix these incompatibilities by providing a single, standard and predictable API, dealing with each browser as needed. The core of the library consists of the following three files:
- YAHOO Global Object: The Global Object sets up the Global YUI namespace and provides other core services to the rest of the utilities and controls. It's the foundational base of the library and is a dependency for all other library components (except for the CSS tools).
- Dom utilities: The Dom utilities provide convenient cross-browser methods for locating and manipulating elements in the page: getting and setting styles, positioning elements, and managing CSS classes.
- Event Utility: The Event Utility provides a unified event model that co-exists peacefully with all of the A-grade browsers in use today and offers a consistent method of accessing the event object. Most of the other utilities and controls also rely heavily upon the Event Utility to function correctly.
Q: What are A-grade browsers?
A: For each release, the YUI Library is thoroughly tested on a variety of browsers. This list of browsers is taken from Yahoo!’s own statistics of visitors to their sites. The YUI Library must work on all browsers with a significant number of users. The A-grade browsers are those that make up the largest share of users.
Fortunately, browsers come in 'families' (for example, Google's Chrome and Apple's Safari both use the WebKit rendering engine); thus, a positive result in one of them is likely to apply to all of them. Testing in Safari for Mac provides valid results for the rarely seen Safari on Windows version. Those browsers are considered X-Grade, meaning they haven't been tested but are likely to work fine.
Finally, we have the C-grade browsers, which are known to be obsolete; neither YUI nor any other library can really be expected to work on them. This policy is called Graded Browser Support and it is updated quarterly. It does not depend on the age of the browser but on its popularity; for example, IE6 is still in the A-grade list because it still has a significant share.
Q: What are the CSS tools for?
A: Just as there were plenty of incompatibilities in the programming interface of the browser, there are also many in the way information is presented and laid out on the screen. Moreover, while standards for the DOM API exist, there is no standard on how, say, an H1 should look. It is just expected to be bolder or larger or look somehow more important than an H2, but there is no actual definition of exactly how much.
This might seem irrelevant until you try to pack more information into a web page. Then, small differences in font type or size amongst the browsers can completely break a carefully designed layout. The CSS tools try to standardize the presentation and layout of data on the screen. These tools do not require any of the other YUI components and can be used alone or in combination with other libraries.
- Reset CSS Tool: provides normalization services to the most common HTML elements so that borders, margins, and padding, among other things, are set to zero across all of the most common browsers. It ensures that everything starts from the ground floor, Reset turns everything off.
- Base CSS Tool: It builds upon the level foundation provided by Reset to give certain elements back some of the styling that marks them out from other elements. It provides an ideal browser formatting. It is not expected to be used as is but to be the base of your own ideal environment.
- Fonts CSS Tool: It standardizes all of your page text to the Arial font and gives a fixed line height. It also fixes sizes to a consistent size across the document.
- Grids CSS Tool: The precise layout of your page can be declared through the use of a range of CSS classes, IDs, and templates defined for you in the Grids CSS Tool. The resulting layout is also compatible with the recommendations of the advertising industry for banners.
Q: How can I communicate with the server?
A: The YUI Library provides several ways. The most basic is the Connection Manager, a wrapper around the XmlHttpRequest (XHR) object, providing a very simple and robust interface. It also has the ability to upload files and process the data from forms.
Since the XHR object is limited to load data from the same domain as the current page, YUI also provides the Get utility which lets you cross domain borders and reach servers that use JSONP.
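As a rough illustration of what "servers that use JSONP" means, here is the technique in plain JavaScript, with no YUI and with invented names: the client registers a global callback, requests a script whose URL names that callback, and the server replies with code that calls it.

```javascript
// 1. The client builds a script URL that names a callback function:
function jsonpUrl(base, callbackName) {
  var sep = base.indexOf('?') === -1 ? '?' : '&';
  return base + sep + 'callback=' + encodeURIComponent(callbackName);
}

// 2. The client defines that callback before inserting the <script> tag:
var received = null;
function handleData(data) { received = data; }

// 3. The server replies not with raw JSON but with code invoking the
//    callback. The browser runs it when the script loads; we simulate
//    that with eval() here:
var serverReply = 'handleData({"answer": 42})';
eval(serverReply);
```

Because the payload arrives as an executed script rather than through XHR, the same-domain restriction does not apply, which is what the Get utility exploits.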
If you need to fetch tabular data, such as the results of an SQL query, the DataSource makes it easier to fetch and parse it. It can read XML, JSON or CSV data, both from local and remote sources, and it can parse each column into numbers or dates, or use any custom parser you might provide. It returns a native JavaScript array of objects. It can use either the Connection Manager or the Get utility for remote sources.
Q: What does the The YUI Connection Manager utility do?
A: The Connection Manager handles in-page, asynchronous requests through the XMLHttpRequest object. Since XHR is restricted to the page's own domain, the Connection Manager can be combined with a PHP (or other forms of) proxy for negotiating cross-domain requests.
Q: The ‘Go Back’ button on the browser toolbar takes the user out of my page. Can YUI help me?
A: The Browser History Manager (BHM) allows you to manipulate the browser’s own history to insert your own entries into it. When the user then goes back using the browser button, it will go back to a previous state within your own application.
To help you do that, the BHM allows you to store state information at each stage you want to preserve and allows you to recover it when the user navigates back and forth through the browser history.
Q: Can I manage cookies with the YUI Library?
A: The Cookie utility lets you manage cookies very easily., for example, to create/set a cookie you can do:
YAHOO.util.Cookie.set("user",userName);
Then you can easily read it by doing:
var user = YAHOO.util.Cookie.get("user");
You can set expiration dates for each cookie, restrict it to a particular domain or path and handle subcookies.
Q: Can I define my own menu when the user right-clicks?
A: The Menu component has three types of menus: the regular left-hand-side menu, as often seen on most web pages; the application-type menu, a horizontal bar at the top of the screen where the options drop down; and the ContextMenu, which pops up by the cursor when the user right-clicks anywhere on the page. This one will show up instead of the regular menu provided by the browser.
All menus allow you to have an unlimited number of submenus, to show/hide or enable/disable options, to show and respond to keyboard shortcuts, and to add icons to any of the items. These options can be handled dynamically so you can tailor your menu according to the element under the cursor when right-clicked.
The menu items can either navigate to other URLs or call functions within your code to respond to them.
Q: Can I offer suggestions to my users when they type on a text box?
A: That is called auto-complete and YUI’s AutoComplete component handles that. You can configure it to establish a connection to your server when the user types a minimum number of characters to fetch suggestions. AutoComplete will continue updating that list of suggestions as more characters are typed. The control will handle the dropdown list and the user selection from that list.
Q: Does YUI have a SpreadSheet?
A: Not quite; however, the DataTable provides some of the same functionality. A SpreadSheet is cell-based: each cell can contain anything any other cell could, unrelated to other neighboring cells.
DataTable behaves more like a database table does. Each column contains the same type of data, be it number, date or string, with each row containing homologous information about an item. DataTable uses DataSource to load tabular data from any local or remote sources. Columns can be grouped, resized, hidden and shown. Cells can be edited. Rows can be added or deleted.
Q: Can I get more dynamic layouts than Grid.css provides?
A: Yes, YUI provides lots of ways to allow the user to dynamically change the layout of the page. The Layout component provides you with a layout based on five panels: a central one surrounded by top, bottom, left and right panels. Except for the central one, the others are optional so you can use them in any combination. You can set the initial size of the surrounding panels and allow the user to resize them at will or completely collapse them.
The Layout component uses the Resize utility, which you can also use to resize any element on the screen. It will draw handles around any suitable element for the user to grab and resize it. You can put constraints on that resizing, and you can also limit the direction of that resizing.
To move elements around, you have the Drag and Drop utility. In the most simple case, you can simply allow a container element, with all its content, to be dragged around and dropped anywhere in the page.
Q: Can I build an interface like the File Explorer?
A: Yes, you can combine several YUI components to do that. You would use the Layout component to provide you with the basic layout. You would use the TreeView control on the left panel to show the tree structure of the file system. You can use the central panel to show information about each item in the tree. If the information is tabular, like a detailed view of the files in a folder, the DataTable component is the best option. Since DataTable allows for cell editing, it is easy to set the Name column to enable file renaming. We wouldn’t use the right panel that Layout provides.
The top and bottom panels can be used to show toolbars and status respectively. The top panel could also hold an application-like Menu from the Menu component. Additional functionality could be added by the ContextMenu component in response to the user right-clicking on the files. The Button family of components makes it easy to build any toolbar you might want.
Q: Can formatted documents be edited in YUI?
A: YUI includes a Rich Text Editor (RTE) which helps in editing formatted documents. It is of two types—the basic YAHOO.widget.SimpleEditor, which provides the basic editing functionality, and the full YAHOO.widget.Editor, which is a subclass of SimpleEditor. Editor adds several features at a cost of close to 40% more size, plus several more dependencies, which we might have already loaded and might not add to the total. A look at their toolbars can help us to see the differences:
The preceding screenshot shows the standard toolbar of SimpleEditor. The toolbar allows selection of fonts, sizes, and styles. Also, it lets you select the color both for the text and the background, create lists, and insert links and pictures.
The full editor adds subscript and superscript, remove formatting, show source, undo, and redo to the top toolbar, and text alignment, <Hn> paragraph styles, and indenting commands to the bottom toolbar. The full editor requires, beyond the common dependencies for both, Button and Menu so that the regular HTML <select> boxes can be replaced by fancier ones:
Summary
This article provided answers to some of the most frequently asked questions about YUI.
Further resources on this subject:
- YUI 2.8: Menus [article]
- YUI 2.8: Rich Text Editor [article]
- Making Ajax Requests with YUI [article]
- YUI 2.X: Using Event Component [article]
- Implementing a Calendar Control in the Yahoo User Interface (YUI) [article] | https://www.packtpub.com/books/content/faqs-yui | CC-MAIN-2016-36 | refinedweb | 2,634 | 71.85 |
A simple linear solver for use by NOX::LAPACK::Group.
#include <NOX_LAPACK_LinearSolver.H>
This class provides a simple linear solver class that stores a NOX::LAPACK::Matrix and provides routines to apply the matrix and solve it using BLAS and LAPACK routines. It is templated so that it can be used to solve both real and complex matrices. It also stores an LU factorization of the matrix so repeated solves are more efficient. The group should signal that the matrix has changed by calling reset().
Apply the matrix.

Set trans to true to apply the transpose. ncols is the number of columns in input and output, which should be stored column-wise.

References Teuchos::CONJ_TRANS, Teuchos::NO_TRANS, and Teuchos::TRANS.
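The apply operation described above is easy to sketch outside C++. Here is a toy Python version using plain nested lists; it is purely illustrative (the real class wraps BLAS/LAPACK calls), and since the values here are real, the transpose and conjugate-transpose modes coincide:

```python
def apply(A, trans, x):
    # A is a matrix as a list of rows; x is one column of "input".
    # trans=True applies the transpose, mirroring the NO_TRANS / TRANS modes.
    if trans:
        A = [list(col) for col in zip(*A)]  # transpose
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1.0, 2.0],
     [3.0, 4.0]]
print(apply(A, False, [1.0, 1.0]))  # [3.0, 7.0]
print(apply(A, True,  [1.0, 1.0]))  # [4.0, 6.0]
```

A multi-column input (the ncols case) would simply apply this to each column in turn.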
Backport module for sys.audit and sys.addaudithook from Python 3.8
Project description
Backport module of sys.audit and sys.addaudithook from Python 3.8.
Note: This module does not backport any of the built-in audit events.
Installation
pip install sysaudit
Quick Usage
sysaudit can be used as a drop-in replacement for sys.audit and sys.addaudithook.
import sysaudit

def hook(event, args):
    print("Event:", event, args)

sysaudit.addaudithook(hook)

sysaudit.audit("event_name", 1, 2, dict(key="value"))
# Event: event_name (1, 2, {'key': 'value'})
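On Python 3.8+ the same two calls exist directly on sys, and sysaudit mirrors them on older versions. A sketch of a hook that records only events with a chosen prefix (the event names are invented for illustration):

```python
import sys  # with sysaudit, the same calls are sysaudit.addaudithook / sysaudit.audit

seen = []

def hook(event, args):
    # Hooks receive every audit event (including built-in ones on 3.8+),
    # so filter by prefix to watch only your own events.
    if event.startswith("myapp."):
        seen.append((event, args))

sys.addaudithook(hook)   # note: audit hooks cannot be removed once added
sys.audit("myapp.login", "alice")
sys.audit("unrelated.event", 123)
print(seen)  # [('myapp.login', ('alice',))]
```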
Download files
Download the file for your platform.
Source Distribution
sysaudit-0.3.0.tar.gz (3.4 kB)
quick overview here.
We can use Lightning components in Visualforce using $A.createComponent and $Lightning.use. The concept is the same in Lightning Web Components as well. You can find how to use Lightning components in Visualforce here.
First we will create a simple Lightning Web Component which displays records in the UI. It is a simple component where we will pass the object name dynamically, fetch the records, and view them. If you want to learn more about dynamic binding in LWC, you can read about that here.
lwcVFDemo.html
<template>
    <lightning-card>
        <h3 slot="title">
            <lightning-icon></lightning-icon>
            Display Lightning Web Components in Visual Force Page
        </h3>
        <table>
            <thead>
                <tr>
                    <th scope="col">
                        <div title="Value">Record Id</div>
                    </th>
                    <th scope="col">
                        <div title="Value">Record Name</div>
                    </th>
                </tr>
            </thead>
            <tbody>
                <template for:each={sObjData} for:item="sobKey">
                    <tr key={sobKey.Id}>
                        <th scope="col">
                            {sobKey.Id}
                        </th>
                        <th scope="col">
                            {sobKey.Name}
                        </th>
                    </tr>
                </template>
            </tbody>
        </table>
    </lightning-card>
</template>
lwcVFDemo.js
import { LightningElement, api, track, wire } from 'lwc';
import fetchsObjectData from '@salesforce/apex/lwcVFDemoController.fetchsObjectData';

export default class LwcVFDemo extends LightningElement {
    @api objectName = 'Account';
    @track sObjData = [];

    @wire(fetchsObjectData, { obName: '$objectName' })
    wiredResult(result) {
        if (result.data) {
            this.sObjData = result.data;
        }
    }
}
lwcVFDemoController.apxc
public with sharing class lwcVFDemoController {
    @AuraEnabled(cacheable = true)
    public static List<SObject> fetchsObjectData(String obName){
        return database.query('SELECT ID, Name FROM ' + obName + ' LIMIT 5');
    }
}
After that we will create a Lightning app and add a dependency on this component. This is similar to what we do for Lightning components.
lwcVFDemoApp.app
<aura:application access="GLOBAL" extends="ltng:outApp">
    <aura:dependency resource="c:lwcVFDemo"/>
</aura:application>
Now the main part, where we will reference this app in the Visualforce page. We will also pass one parameter from the VF page to the Lightning Web Component.
<apex:page>
    <apex:includeLightning />
    <div id="lightning" />
    <script>
        $Lightning.use("c:lwcVfDemoApp", function() {
            $Lightning.createComponent("c:lwcVFDemo",
                { objectName: "Contact" },
                "lightning",
                function(cmp) {
                    console.log("LWC component was created");
                    // do some stuff
                });
        });
    </script>
</apex:page>
The code is very simple, and those who have used Lightning components in Visualforce will be familiar with it. Here we first call $Lightning.use to load the Lightning app, and then create the component using $Lightning.createComponent, passing the object name as a parameter. We can call $Lightning.use() multiple times on a page, but all calls must reference the same Lightning dependency app. This is how our final output will look:
So now we can easily use Lightning Web Components in Visualforce. We can also use them in Lightning Out, which is very similar to how it works with Lightning components.
Did you like the post or want to add anything, let me know in comments. Happy Programming 🙂
5 thoughts on “Use Lightning Web Components in Visualforce”
can we do the same using Imperative
Yes we can, check my other post where I have used both methods.
Hey.. Great article.. Suppose, I need to download it as a pdf document. I tried with renderAs=”pdf” but it didn’t seem to work. it showed up as a blank page.
I don’t think it will work. You can try ctrl + P or window.print() . But not sure about UI in this approach.
Thank you for this example, it helped me a lot.
Sadly, I have a problem: I'm using the NavigationMixin in my LWC to redirect and generate URLs.
Do you have an idea how to make this possible inside the Visualforce page?
Currently it just returns "javascript:void(0);" instead of a URL.
the code inside the lwc.js is:
// Generate a URL to a User record page
this[NavigationMixin.GenerateUrl]({
    type: 'standard__recordPage',
    attributes: {
        recordId: id,
        objectApiName: objectName,
        actionName: 'view',
    },
}).then(url => {
    this.detailViewUrl = url;
});
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of C++11 status.
Section: 24.3.7 [array] Status: C++11 Submitter: Nicolai Josuttis Opened: 2010-01-24 Last modified: 2016-02-10
Priority: Not Prioritized
View all other issues in [array].
View all issues with C++11 status.
Discussion:
Class <array> is the only sequence container class that has no types pointer and const_pointer defined. You might argue that this makes no sense because there is no allocator support, but on the other hand, types reference and const_reference are defined for array.
[ 2010-02-11 Moved to Tentatively Ready after 6 positive votes on c++std-lib. ]
Proposed resolution:
Add to Class template array 24.3.7 [array]:
namespace std {
  template <class T, size_t N >
  struct array {
    ...
    typedef T value_type;
    typedef T* pointer;
    typedef const T* const_pointer;
    ...
  };
}
Testing JavaFX UIs - Part 3 of ?: Node-based Robot and Fixtures (with Demo)
After I learned how to launch a compiled JavaFX UI and a little bit about JavaFX's node hierarchy, I thought I had enough information to start working a JavaFX-specific node-based
Robot and fixtures (similar to the ones in FEST-Swing.)
I did little bit of progress this past weekend. Please note that I'm working with JavaFX's "desktop profile" only (at least for now.)
The following are some of the classes I added to the project:
JavaFXRobot: simulates a user clicking on a JavaFX node.
NodeFinder: knows how to find nodes in a hierarchy given some search criteria.
NodeMatcher: provides the search criteria to NodeFinder.
SwingButtonNodeMatcher: matches nodes that have a JButton attached to them. Matching can be done by type, node id, or button text.
TextBoxMatcher: matches nodes that have a JavaFX text box attached to them. Matching can be done by type or node id.
FrameFixture: knows how to look up nodes in a JavaFX UI.
SwingButtonFixture: simulates user input on and verifies state of a Swing button.
TextBoxFixture: verifies state of a JavaFX text box.
In a previous post, I wrote a pretty long (and hacky) functional test for Jim Weaver's calculator demo (you can find the test here.) It took more than 90 lines just to verify that the calculator's text box is updated correctly when the user clicks on a button (!)
With the new JavaFxRobot and node fixtures, the test is reduced to the following:
@Test public class CalculatorTest {

  private JavaFxRobot robot;
  private FrameFixture calculator;

  @BeforeMethod public void setUp() {
    robot = BasicJavaFxRobot.robotWithNewAwtHierarchy();
    JFrame calculatorFrame = launch(Calculator.class);
    calculator = new FrameFixture(robot, calculatorFrame);
  }

  @AfterMethod public void tearDown() {
    robot.cleanUp();
  }

  public void shouldUpdateTextBoxWithPressedNumber() {
    calculator.swingButton(withText("8")).click();
    calculator.textBox().requireText("8");
    calculator.swingButton(withText("6")).click();
    calculator.textBox().requireText("86");
  }
}
Please click the image to see the test running: (QuickTime format)
The node-based fixtures are very similar to the ones in FEST-Swing. The following code listing, using the calculator example, shows some of the ways to look up a node that contains an attached JButton:
calculator.swingButton(); // any node that has a JButton
calculator.swingButton("button7"); // by node id
calculator.swingButton(withText("7")); // by button text
calculator.swingButton(withId("button7").andText("7")); // by node id *and* button text
The FEST-JavaFX project is in its very early stages. There is a lot to learn and a lot of work to do. A good starting point would be to finish support for Swing components in JavaFX UIs, since we can apply the lessons we learned while working on FEST-Swing.
I checked-in the code into the project FEST-JavaFX, which is licensed under GPL v2 (with classpath exception,) until we figure out the best license to use (without violating any of the dependencies' licenses, etc.)
Future posts will include how to set up Maven and Eclipse projects that mix Java and JavaFX code (hopefully this week.)
Feedback is always appreciated
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.) | http://java.dzone.com/articles/testing-javafx-uis-part-3-node | CC-MAIN-2014-10 | refinedweb | 527 | 55.74 |
Web Services using Play!
Open your command prompt at the directory where you have unzipped and type.
play new currency
Follow the on-screen prompts to set up your new application.
Next start the server by typing
play run currency
Now, open the file currency/app/views/Application/index.html.
In this page we want to have two drop down lists with common currencies to select, and a value field. Change the code so that it looks like the following.
#{extends 'main.html' /}
#{set title:'Home' /}

<h1>Play! Currency Converter</h1>
<form action="@{Application.convert()}" method="POST">
    Currency From:
    <select name="from">
        <option value="USD">USD - US Dollar</option>
        <option value="GBP">GBP - UK Pound Sterling</option>
        <option value="EUR">EUR - Euro</option>
    </select><br />
    Currency To:
    <select name="to">
        <option value="USD">USD - US Dollar</option>
        <option value="GBP">GBP - UK Pound Sterling</option>
        <option value="EUR">EUR - Euro</option>
    </select><br />
    Amount: <input type="text" name="amount" /><br />
    <input type="submit" name="conv" value="Convert" />
</form>
This code is fairly straightforward HTML. The only Play feature in the code is the form action which points to a Play action (which we will create next), using the @{Application.convert()} code.
Next, we need to open the app/controllers/Application.java.
We need to add the convert action to send our form data to. The file should look like this.
package controllers;

import play.mvc.*;
import play.libs.*;

import org.w3c.dom.Document;

public class Application extends Controller {

    public static void convert(String from, String to, Float amount) {
        String wsReq =
            "<soap12:Envelope xmlns:soap12=\"\">" +
            "<soap12:Body><ConversionRate xmlns=\"\">" +
            "<FromCurrency>" + from + "</FromCurrency>" +
            "<ToCurrency>" + to + "</ToCurrency>" +
            "</ConversionRate></soap12:Body></soap12:Envelope>";

        Document doc = WS.url("")
                .setHeader("content-type", "application/soap+xml")
                .body(wsReq).post().getXml();

        String rate = doc.getElementsByTagName("ConversionRateResult")
                .item(0).getTextContent();
        Float total = amount * Float.parseFloat(rate);
        render(from, to, amount, rate, total);
    }

    public static void index() {
        render();
    }
}
The main piece of code we are concerned about is the convert method.
This method is called when the form is submitted (courtesy of the @{Application.convert()} code in the index.html page).
The method takes 3 parameters, which Play automatically maps from the HTTP parameters sent by the form, so we have the values present immediately when the method is created.
The first thing we do in the method (Play calls these methods actions, so we will do the same from now on), is to create the XML for the soap request. The XML is a reasonably simple SOAP request containing the envelope and body, and the currency symbols we want to convert from and to.
Next, is the WebService part of Play. To use web services in Play, we need to use the play.libs.WS class. Let’s take a closer look at the Web Service call.
Document doc = WS.url("").setHeader("content-type", "application/soap+xml").body(wsReq).post().getXml();
The first part of the request specifies the URL that we want to
connect to. Here I am using a free sample webservice for live
currencies. The second part adds a header to the request. For the
request to work, the request needs to specify that the content is
soap-xml, which is why the header needs to be added. The third part sets
the body of the request to the SOAP xml we created at the start of the
action, and the final part sends the SOAP request using the post method.
The final part (getXml()) returns the response as a Document object, ready for parsing.
The rest of the convert action simply gets the result from the returned XML, and calculates the total amount converted from the amount to convert multiplied by the exchange rate returned from the web service. All of the values (including the ones submitted by the form) are then passed through to the HTML page, so that they can be rendered, by calling the render method.
Finally, we need to output the results of the conversion. So let’s create a new file called app/views/Application/convert.html, and add the following code.
#{extends 'main.html' /}
#{set title:'Converted' /}

<h1>Total ${to} ${total}</h1>
Converted ${amount} ${from} to ${to} at an exchange rate of ${rate}
We can now try out our application. Go to and you will see a page like this.
If we choose USD and GBP and set an amount, then click convert, we should see the results similar to the following.
To achieve this result, our code called an external web service to look up the Live exchange rate between these two currencies, and then used the results in our controller to perform the necessary calculations to display the results on the screen.
How cool is that! A currency conversion application, using real LIVE currency rates, written in less than 50 lines of code.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Sylvère RICHARD replied on Sat, 2011/04/30 - 4:03am
Use "play run currency" instead of "play new currency" to start the server (small typo in the second snippet). Nice article, thanks for sharing!
[solved] MQTT and tls
Hello there,
I have to use MQTT over a TLS connection.
Right now I am using the simple MQTT lib. Can you recommend me a library, or some code snippet? So far googling hasn't brought me any luck.
Just want to share an actual working sample.
NOTE: Checking server certificate is DISABLED in this case.
You need
CERT_REQUIREDor
CERT_REQUIREDjudging from documentation.
But, by some reason in my build
ussl.CERT_REQUIREDgave "...object has no attribute...".
# ----------------------------------------------
# Tested setup:
#  - Traefik Proxy 2.5 (Let's Encrypt Cert)
#  - Mosquitto 2.0.16
#  - MicroPython 1.17
# ----------------------------------------------
from umqtt.simple import MQTTClient

HOST = "<HOST>"

# Without server_hostname it won't connect (for some reason)
ssl_params = {"server_hostname": HOST}

c = MQTTClient("<client_id>",
               server=HOST,
               port=8883,
               user="<username>",
               password="<password>",
               # Need keepalive > 0 or got MqttException(2)
               keepalive=10,
               ssl=True,
               ssl_params=ssl_params)
- Gijs Global Moderator last edited by
You can upload it either through FTP, or put it in the project folder and add the extension to 'upload file types' in the Pymakr global settings (I think it's already in there).
Gijs
- JOSE RODRIGUEZ last edited by
@andrethemac Hi, how do I add the certificate to the device flash?
I found the issue.
I got wrong login name and pass from the client....
Thanks for the library, and sample. That was a great help.
Let me share my result so far.
So my first error was "cannot convert str to int" for this line:
self.sock = ussl.wrap_socket(self.sock, **self.ssl_params)
in simple.py. This is my call, by the way:
c = MQTTClient(client_id="GH001a", server="######",user=b"######", password=b"#####", ssl=True, ssl_params={"cert_reqs":"ssl.CERT_REQUIRED", "ca_certs":"/flash/cert/fullchain1.pem"})
What I came up with (I know it's not nice) was to hard-code it into the simple.py file like this:
self.sock = ssl.wrap_socket(self.sock, cert_reqs=ssl.CERT_REQUIRED, ca_certs='/flash/cert/fullchain1.pem')
The second error was that the CA file was not found. That was not hard to crack: just edit the sync_file_types attribute in pymakr.json.
The third and current error is MQTTException: 5 for line 102 in simple.py
raise MQTTException(resp[3])
On the server side it looks like this: Socket error on client <unknown>, disconnecting.
This is where I'm stuck right now.
- andrethemac last edited by
@tttadam
Using the default mqtt library and the ussl library:
put the root CA certificate in the cert directory (you had to rename it ca.pem in earlier versions, but that may have changed).
The communication now goes over SSL; use the mqttc client as before.
from mqtt import MQTTClient
import ussl

# mqtt definitions
ssl_params = {'cert_reqs': ussl.CERT_REQUIRED, 'ca_certs': '/flash/cert/ca.pem'}
mqttc = MQTTClient(<yourmachinename>,
                   <yourmqttserver>,
                   keepalive=60,
                   ssl=True,
                   ssl_params=ssl_params)
best regards
André
Hmmm, can you show me an example of how the TLS part works, and how I should use it?
Thanks.
The builtin AWS IoT library uses MQTT via SSL. | https://forum.pycom.io/topic/4775/solved-mqtt-and-tls | CC-MAIN-2022-05 | refinedweb | 479 | 67.96 |
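For anyone mapping this thread onto desktop CPython: the certificate-checking knobs being discussed correspond to settings on the stdlib ssl module's context objects. A no-network sketch (the CA path is the one from the thread and is only a placeholder here):

```python
import ssl

# A TLS-client context verifies the peer certificate by default,
# i.e. the CERT_REQUIRED behaviour andrethemac's snippet asks for.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True (this is why server_hostname matters)

# ctx.load_verify_locations("/flash/cert/ca.pem")  # placeholder path from the thread

# The "verification disabled" variant from the working sample above:
ctx.check_hostname = False   # must be cleared before dropping verify_mode
ctx.verify_mode = ssl.CERT_NONE
print(ctx.verify_mode == ssl.CERT_NONE)      # True
```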
The Samba-Bugzilla – Bug 10976
rpcsvc/yp_prot.h: No such file or directory
Last modified: 2014-11-29 18:06:54 UTC
In file included from default/source3/librpc/gen_ndr/ndr_wbint.c:3:0:
../source3/include/includes.h:113:28: fatal error: rpcsvc/yp_prot.h: No such file or directory
#include <rpcsvc/yp_prot.h>
^
This happens if rpcsvc/yp_prot.h is not provided by the toolchain/libs. It seems the check "conf.CHECK_CODE('', headers='rpc/rpc.h rpcsvc/yp_prot.h', define='HAVE_RPCSVC_YP_PROT_H')" is bogus: the missing header (rpcsvc/yp_prot.h) is always skipped, so it is never included in the test code; in that case the empty code compiles anyway and HAVE_RPCSVC_YP_PROT_H gets defined.
This is a tracking bug for the various about:memory reporters that need to be added.

The testcase simply attaches a huge nested array to the DOM tree (yeah, I know, bad practice) and gobbles up about 1GB of memory which doesn't show up in about:memory. If you don't get the "done" alert, the script possibly failed due to an OOM abort; you'll have to close the tab and try again with lower numbers for the loop counts.
JSScripts should be tracked. With 10 gmail tabs open I saw them account for 17% of the live heap. Here's one call stack from Massif:

o1o> 17.03% (70,660,318B) 0x67648A4: JSScript::NewScript(JSContext*, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned short, unsigned short, JSVersion) (jsutil.h:239)
| o2o> 16.64% (69,033,931B) 0x67675CE: JSScript::NewScriptFromCG(JSContext*, JSCodeGenerator*) (jsscript.cpp:1260)
| | o3o> 15.73% (65,262,175B) 0x66DBFCE: js_EmitFunctionScript (jsemit.cpp:3678)
| | | o4o> 15.67% (65,014,983B) 0x66D9BE7: js_EmitTree (jsemit.cpp:4640)
| | | | o5o> 08.47% (35,117,714B) 0x66D7723: js_EmitTree (jsemit.cpp:6176)
| | | | | o6o> 08.46% (35,109,464B) 0x66D585A: js_EmitTree (jsemit.cpp:5984)
| | | | | | o7o> 08.13% (33,744,883B) 0x66D8622: js_EmitTree (jsemit.cpp:5892)
| | | | | | | o8o> 08.07% (33,496,774B) 0x66D636D: js_EmitTree (jsemit.cpp:5451)
JSObject slots should be tracked, ie. in allocSlots(), growSlots() and shrinkSlots(). With 10 gmail tabs open I saw them account for 7% of the live heap.
Also various strings are important, eg:

o1o> 11.67% (48,410,287B): nsStringBuffer::Alloc() (nsSubstring.cpp:209)
| o2o> 10.55% (43,754,212B): nsXMLHttpRequest::ConvertBodyToText() (nsXMLHttpRequest.cpp:762)
| | o3o> 10.55% (43,754,212B): nsIXMLHttpRequest_GetResponseText() (dom_quickstubs.cpp:26975)
| | o4o> 10.06% (41,739,008B): js_NativeGet() (jscntxtinlines.h:326)
| | | o5o> 10.06% (41,739,008B): InlineGetProp() (StubCalls.cpp:1868)
| | | | o6o> 10.06% (41,739,008B): js::mjit::stubs::GetProp() (StubCalls.cpp:1894)

Not sure how best to record them -- lump all strings together, or try to be smarter than that.
Created attachment 529642 [details] snapshot of peak memory usage, 10 gmail tabs Here's the raw data I'm getting these ideas from, BTW.
The Hunspell spell-check accounts for 2.7MB on my machine, it looks like it's a constant amount. See bug 653817 comment 7.
js::PropertyTable::init() and js::PropertyTable::change() together account for 4.3% in the gmail example.
My testcase from comment #2 still shows up under heap-unclassified. Considering that it is trivial to use up memory like that, and that many poorly written JavaScript libraries attach all their state to DOM objects, I would say it would be useful to add instrumentation for that case.
(In reply to comment #2)
Oh, bug 571249 will fix that. (Well, the follow-up bug 664647 will do so.) I just tried your JS snippet in a build with the js/object-slots reporter enabled (plus per-compartment reporters from bug 661474):

1,624,547,222 B (100.0%) -- explicit
├──1,494,174,525 B (91.97%) -- js
│  ├──1,314,183,902 B (80.90%) -- compartment()
│  │  ├──1,310,986,032 B (80.70%) -- object-slots
│  │  ├──────1,436,610 B (00.09%) -- scripts
│  │  ├────────983,040 B (00.06%) -- mjit-code
│  │  ├────────572,112 B (00.04%) -- tjit-data
│  │  │  ├──444,000 B (00.03%) -- allocators-reserve
│  │  │  └──128,112 B (00.01%) -- allocators-main
│  │  ├────────131,072 B (00.01%) -- tjit-code
│  │  ├─────────69,676 B (00.00%) -- mjit-data
│  │  └──────────5,360 B (00.00%) -- string-chars

So that's good! :)
I've changed this back to Core/General; although the numbers show up in about:memory, the reporters themselves are mostly in Core, and about:memory doesn't need changing for them to show up once they are added.
Oh, sorry for changing the component without reading this.
I'm changing this bug's title and repurposing it slightly. Instead of being a simple tracking bug for new reporters, it's a bug for getting the "heap-unclassified" number (which is currently typically around 35--50%) down as low as feasible. Obviously, bugs for new reporters will still block this bug. All of the things identified above have been spun off into separate bugs (some of which have been fixed) except for the nsXMLHttpRequest strings one in comment 5. A couple more stack traces of interest from the Massif data:

o1o> 05.00% (20,748,856B) nsCSSExpandedDataBlock::Compress() (mozalloc.h:229)
| o2o> 04.99% (20,688,600B) (anonymous namespace)::CSSParserImpl::ParseDeclarationBlock() (Declaration.h:124)
| | o3o> 04.91% (20,371,668B) (anonymous namespace)::CSSParserImpl::ParseRuleSet() (nsCSSParser.cpp:2612)
| | | o4o> 04.91% (20,349,128B) nsCSSParser::Parse() (nsCSSParser.cpp:963)
| | | | o5o> 04.91% (20,349,128B) mozilla::css::Loader::ParseSheet() (Loader.cpp:1585)
| | | | o6o> 04.86% (20,156,400B) mozilla::css::Loader::LoadInlineStyle() (Loader.cpp:1807)
| | | | | o7o> 04.86% (20,156,400B) nsStyleLinkElement::DoUpdateStyleSheet() (nsStyleLinkElement.cpp:298)
| | | | | o8o> 04.76% (19,743,840B) nsStyleLinkElement::UpdateStyleSheetInternal() (nsStyleLinkElement.cpp:209)

o1o> 03.81% (15,797,388B) nsTextFragment::SetTo() (nsMemory.h:68)
| o2o> 03.81% (15,797,388B) nsGenericDOMDataNode::SetTextInternal() (nsGenericDOMDataNode.cpp:350)
| o3o> 02.50% (10,366,215B) nsHtml5TreeOperation::AppendText() (nsHtml5TreeOperation.cpp:197)
| | o4o> 02.50% (10,366,215B) nsHtml5TreeOperation::Perform() (nsHtml5TreeOperation.cpp:531)
| | o5o> 02.48% (10,300,277B) nsHtml5TreeOpExecutor::RunFlushLoop() (nsHtml5TreeOpExecutor.cpp:489)
| | | o6o> 02.45% (10,160,236B) nsHtml5ExecutorFlusher::Run() (nsHtml5StreamParser.cpp:153)

o1o> 02.92% (12,120,960B) ChangeTable (pldhash.c:563)
| o2o> 02.91% (12,068,864B) PL_DHashTableOperate (pldhash.c:649)
| | o3o> 01.02% (4,233,984B) AddSelector() (nsCSSRuleProcessor.cpp:2608)
| | | o4o> 01.02% (4,233,984B) nsCSSRuleProcessor::RefreshRuleCascade() (nsCSSRuleProcessor.cpp:2715)
| | |
| | o3o> 00.68% (2,839,040B) RuleHash::AppendRuleToTable() (nsCSSRuleProcessor.cpp:528)
| | | o4o> 00.68% (2,822,656B) RuleHash::AppendRule() (nsCSSRuleProcessor.cpp:560)
| | |
| | o3o> 00.33% (1,367,040B) mozilla::FramePropertyTable::Set() (nsTHashtable.h:188)
| | | o4o> 00.32% (1,329,152B) nsIFrame::GetOverflowAreasProperty() (FramePropertyTable.h:237)
| | |
| | | o4o> 00.01% (37,888B) in 6 places, all below massif's threshold (00.10%)
| | |
| | o3o> 00.24% (1,009,152B) AppendRuleToTagTable() (nsCSSRuleProcessor.cpp:540)
| | | o4o> 00.22% (897,024B) RuleHash::AppendRule() (nsCSSRuleProcessor.cpp:565)
| | |
| | o3o> 00.21% (888,832B) nsDocument::AddToIdTable(mozilla::dom::Element*, nsIAtom*) (nsTHashtable.h:188)

o1o> 02.87% (11,917,512B) nsCSSSelectorList::AddSelector() (mozalloc.h:229)
| o2o> 02.87% (11,907,936B) (anonymous namespace)::CSSParserImpl::ParseSelectorGroup() (nsCSSParser.cpp:3644)

o1o> 01.63% (6,742,008B) (anonymous namespace)::CSSParserImpl::ParseRuleSet() (mozalloc.h:229)
| o2o> 01.62% (6,713,136B) nsCSSParser::Parse() (nsCSSParser.cpp:963)
| | o3o> 01.62% (6,713,136B) mozilla::css::Loader::ParseSheet() (Loader.cpp:1585)
| | o4o> 01.59% (6,613,920B) mozilla::css::Loader::LoadInlineStyle() (Loader.cpp:1807)
| | | o5o> 01.59% (6,613,920B) nsStyleLinkElement::DoUpdateStyleSheet() (nsStyleLinkElement.cpp:298)
| | | o6o> 01.56% (6,488,640B) nsStyleLinkElement::UpdateStyleSheetInternal() (nsStyleLinkElement.cpp:209)

See the attachment for full details. Looks like we really need some CSS memory reporters! There were a couple of >1% stack traces for CSS things that I haven't copied in here either.
It was suggested at today's MemShrink meeting that it might be worth someone digging into memory dumps or something else to push this along. But the relevant information is actually all there in Massif's output. The hard part of interpreting that output is knowing how the allocation traces Massif reports match up with existing memory reporters. But a similar problem would occur for anyone who dug through memory dumps. I think a better way forward would be to identify sites for which our heap-unclassified number is unusually high. Comment 2 and bug 669005 are both good examples (and both are now fixed). I know Massif is a pain to run on Firefox, so I'm happy to run it on any such sites that people can find. Or if you just know of any remaining decent-sized chunks of memory that aren't covered, please speak up. (Bug 669117 was identified this way by khuey, IIRC.)
If we want a snappy name for this bug, I propose "Dark Matter".
Created attachment 547315 [details] about:memory from technet.com blog post On Linux64, if I load just about:memory?verbose and, I get a "heap-unclassified" of ~50%. The next highest bucket is "js" with 32%. Full output is attached. That page includes lots of tables.
That actually suggests an interesting approach to tracking down additional darkmatter: automatically check the heap-unclassified numbers for a bunch of webpages, and look at sites that have unusually large numbers to see what is weird about them. Relatedly, in bz says: "And by the way, I'm guessing that in this case the remaining unclassified stuff is also somewhere in the js engine. Might be worth it to look up where."
(In reply to comment #17) > That actually suggests an interesting approach to tracking down additional > darkmatter: automatically check the heap-unclassified numbers for a bunch of > webpages, and look at sites that have unusually large numbers to see what is > weird about them. Yes. See comment 14 :)
Nice catch with Bug 675132. Is it possible to use Massif (which I assume is recording the amount requesting of every malloc?) to see how much space is being wasted by allocations-slightly-more-than-powers-of-two? Or that something you already set up to ferret this out? The cycle collector allocates memory in chunks, and I could imagine it accidentally wasting a bit of space in the same way, so it could probably be happening all over the place.
(In reply to comment #19) > Is it possible to use Massif (which I assume is > recording the amount requesting of every malloc?) to see how much space is > being wasted by allocations-slightly-more-than-powers-of-two? Nope. Massif replaces the heap allocator with its own version, and the amount of rounding-up is a characteristic of the heap allocator. Well, I guess I could try to get Massif to round up the same as jemalloc, but there's an easier path forward -- see bug 675136 where I've found that jemalloc rounding could be accounting for something like 17% of "heap-allocated", i.e. roughly half of "heap-unclassified".
Just to clarify about Massif: it makes a half-hearted attempt to account for extra space needed by the allocator -- it adds 8 bytes to the amount recorded for each heap block, on the assumption that the average overhead is that size. You can change it with the --heap-admin option, but it's always a constant. So it's pretty hopeless. Also, that overhead-per-block is recorded separately from the requested-amount-per-block and it's not even shown in those allocation trees that I cut and paste into bugs all the time. So I haven't even considered jemalloc's rounding until just today. tl;dr: Massif is hopeless for measuring extra space caused by jemalloc's rounding and book-keeping.
Yeah, I meant doing some kind of post-processing step in Massif to round up the same as jemalloc, but looks like you cut out the middle man.. There's a fairly obvious candidate for who might implement this :P
I spun the Valgrind tool idea off as bug 676724.
Bug 291643 should be blocking darkmatter, as XML produces lots of heap-unclassified.
I've set a specific goal of 10% dark matter for "typical" cases, so that this bug has an end goal. We can argue about the meaning of "typical" when we're getting close. (With Gmail and this bug open, I currently see 22.93% dark matter.)
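For reference, the number being targeted here is simple arithmetic over the reporter data: heap-unclassified is whatever the heap allocator handed out that no reporter claims. Schematically (all numbers invented):

```python
def heap_unclassified(heap_allocated, reported):
    # Dark matter: heap bytes with no covering memory reporter.
    return heap_allocated - sum(reported)

heap_allocated = 1_000_000_000
reported = [600_000_000, 150_000_000, 50_000_000]
dark = heap_unclassified(heap_allocated, reported)
print(dark, "bytes,", 100.0 * dark / heap_allocated, "% of heap")  # 200000000 bytes, 20.0 % of heap
```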
A lot of bugs are of the form "this totally non-typical page generates lots of heap-unclassified." It seems to me that bugs of that form are still worth tracking (perhaps in a separate bug), since it's hard to understand what we're doing wrong on those pages if about:memory fails us.
Something that bug 699951 made me think of is that there's one situation that's hard to measure with DMD, because it's so slow -- long sessions with lots of browsing. (I've left DMD running overnight before, but that doesn't involve opening/closing lots of sites.)
Nicholas, what about Bug 646575?
(In reply to Phoenix from comment #29) > Nicholas, what about Bug 646575? What about it?
Shouldn't it be added to this list too?
It's unclear to me how bug 646575 relates to this bug. Can you explain?
Nicholas, do you know if there are some memory reporters for the memory use of plugin-container.exe (I mean especially about Flash/Java/Silverlight)? Does it count explicitly in 'Other Measurements' in about:memory?
(In reply to Loic from comment #33) > Nicholas, do you know if there are some memory reporters for the memory use > of plugin-container.exe (I mean especially about Flash/Java/Silverlight)? > Does it count explicitly in 'Other Measurements' in about:memory? We currently do not measure plugin-container.exe at all.
Bug 648415 is open for plugin-container measurement, but no progress has been made on it.
FWIW, my "heap-unclassified" is regularly under 10% on my Linux64 box, running a trunk build. It was 6.4% at the end of the day a couple of days ago. And it's currently 10.9% on my Mac, which is on FF19 beta.
Install Stylish and this "issue" would be easily resolved :D
> Install Stylish and this "issue" would be easily resolved :D I don't understand what you mean by 'this "issue"'. Is heap-unclassified high with Stylish installed?
(In reply to Nicholas Nethercote [:njn] from comment #38) > Is heap-unclassified high with Stylish installed? Right, you'll get something like 16% from beginning and growing to ~30% on usage
I'm just curious -- I recognize that this meta-bug is mostly tracking heap-unclassified data in Firefox, and that there's a separate meta-bug for B2G. Is there a comparable bug for Thunderbird? (On my machine at the moment, heap-unclassified is at 33%, which makes it tricky to help diagnose the memory leak I seem to be seeing...)
> Is there a comparable bug for Thunderbird? Nope. You might want to talk to jcranmer, who was working on some Thunderbird memory reporters just the other day. Alternatively, DMD might help you:
(In reply to Nicholas Nethercote [:njn] from comment #41) > > Is there a comparable bug for Thunderbird? > > Nope. You might want to talk to jcranmer, who was working on some > Thunderbird memory reporters just the other day. Alternatively, DMD might > help you: I tried using DMD on Thunderbird to diagnose bug 844937. All it shows is that Thunderbird desperately needs memory reporters for mailnews/ code, which is the work jcranmer has started in bug 480843. | https://bugzilla.mozilla.org/show_bug.cgi?id=563700 | CC-MAIN-2017-26 | refinedweb | 2,450 | 62.04 |
Hi All.
I spent about a day seeking what was wrong and found out there is a problem with the equals/hashCode generator in IDEA (I am using the 12.1.3 Community Edition). What is wrong, from my point of view, is the use of super.hashCode()/equals(). It seems to me that if I decided to override equals and hashCode for some classes, it means the inherited ones are not suitable for me. I looked at my old project where I used a previous version of IDEA; there is no super.hashCode()/equals() there.
Probably I do not understand something, but I think super should not be used. What do you think? I found some similar discussion dated 2005.
Here is a small example where this is wrong. In my project I extended javax.management.MBeanAttributeInfo and used the extended object as a key in a HashMap. Because of the strange implementation of MBeanAttributeInfo.equals and MBeanAttributeInfo.hashCode, I had duplicated values of my extended object in the Map even though I override equals and hashCode, e.g.
public class StatAttributeInfo extends MBeanAttributeInfo {
....
public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof StatAttributeInfo)) return false;
if (!super.equals(o)) return false;
StatAttributeInfo info = (StatAttributeInfo) o;
return getName().equals(info.getName());
}
public int hashCode() {
int result = super.hashCode();
result = 31 * result + getName().hashCode();
return result;
}
I had to remove the lines with super.XXX to make it work.
Hi Victor,
Those calls to the super class equals()/hashCode() are necessary for processing superclass-scoped state.
Consider a situation like below:
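The code snippet referenced here appears to have been lost from this post. A minimal reconstruction of the situation described (class and field names assumed from the surrounding text) might look like:

```java
// Hypothetical reconstruction: Parent owns private state 'i', Child adds 'j'.
class Parent {
    private final int i;

    Parent(int i) { this.i = i; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Parent)) return false;
        return i == ((Parent) o).i;
    }

    @Override
    public int hashCode() { return i; }
}

class Child extends Parent {
    private final int j;

    Child(int i, int j) {
        super(i);
        this.j = j;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Child)) return false;
        // Without this super call, two Child objects with the same 'j'
        // but different 'i' would compare equal -- and 'i' is private in
        // Parent, so delegating is the only way to account for it.
        if (!super.equals(o)) return false;
        return j == ((Child) o).j;
    }

    @Override
    public int hashCode() { return 31 * super.hashCode() + j; }
}
```

Here `new Child(1, 2)` and `new Child(3, 2)` share `j` but differ in `i`; only the super delegation makes them unequal.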
When you override equals()/hashCode() at Child class you need to explicitly process super class state as well (otherwise two Child objects with the same 'j' value but different 'i' value will be considered equal).
Moreover, there is a possible case where the super class's state is not exposed, i.e. the fields are private and no getters are provided.
Denis
Denis, I understand what they would like to achieve, but I do not understand why it should be done. I mean:
"if I decided to override equals and hasCode for some classes, it means they are not suitable for me"
That is why I think it seems strange. Look at javax.management.MBeanAttributeInfo.equals and hashCode, do you think provided implementation is suitable for using in HasMap ? I do not hink so, That is why I decided to override it. Actually I do no think that in general using super.XXX in equals and hasCode is a good idea.
I think the problem is here - "if I decided to override equals and hashCode for some classes, it means they are not suitable for me" - you generalize a particular use-case (the JMX classes) to all use-cases. A much more common scenario is that equals()/hashCode() are overridden because the sub-class introduces new state.
Denis
>>>> Much more common scenario is that equals()/hashCode() are overridden because sub-class introduces new state.
Right, and it should be my decision whether to use state from the parent or not. In general, I do not have access to the source code and do not know how they implemented the corresponding methods. I should not rely on implementation details; instead I should rely on the contract.
How would you consider closed super class state in your sub-class without delegating to its super-methods?
Denis
Hi all,
I recently came across a problem pretty much similar to Victor's.
I thought it would be really cool if IntelliJ could somehow figure out whether the particular class for which the user intends to automatically generate equals/hashCode is a direct descendant of the Object class and, if so, generate equals/hashCode in a more appropriate manner (not using super.hashCode/super.equals).
Is it technically possible?
I don't have a really deep understanding on how IDEs work, so I hope I am not asking something that is too obvious.
Thanks in advance!
please raise a feature request at | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206873545-Strange-generated-hashCode-?page=1 | CC-MAIN-2020-40 | refinedweb | 647 | 56.96 |
Acme::Your - not our variables, your variables
```perl
use Data::Dumper;
use Acme::Your "Data::Dumper";

your $Varname;    # This is really $Data::Dumper::Varname
print "The default variable name for DD is $Varname";
```
Acme::Your gives you a language construct "your" that behaves similarly to Perl's own "our" construct. Rather than defining lexically unqualified variables to be in our own package, however, you can define a lexically unqualified variable to be from another package namespace entirely.
It all starts with the use statement.
use Acme::Your "Some::Package";
This both 'imports' the your construct and states the package that any variables defined with a your statement will be created in.
Then you can do 'your' statements. Note that these are lexical, and fall out of scope much the same way that our variables would. For example
use Acme::Your "Fred" my $foo = "bar"; { your $foo = "wilma"; print $foo; # prints "wilma" } print $foo; # prints "foo" print $Fred::foo # prints "wilma"
Your allows you to import symbols from other packages into your own lexical scope and have access to them.
Acme::Your functions by parsing your source code and filtering it with a source filter. It is possible to fool the parser with some pathological cases, and you should be aware that this module faces all the standard problems that Perl faces when parsing Perl code.
Acme::Your 0.01 was released on 14th January 2002.
Richard Clamp <richardc@unixbeard.net>
Original idea, documentation, and tests which kill, Mark Fowler <mark@twoshortplanks.com>
Copyright (C) 2002 Richard Clamp and Mark Fowler. All Rights Reserved. This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Filter::Simple, Parse::RecDescent. | http://search.cpan.org/~rclamp/Acme-Your-0.01/lib/Acme/Your.pm | CC-MAIN-2014-15 | refinedweb | 285 | 62.17 |
Hi, I am very new to python and am coming across an issue that I haven't been able to work out on my own.
I have 9 rasters that I need to sum. Nothing fancy, just a simple sum. I have this code that I used in another script (found on some blog somewhere) and it worked without any issue whatsoever:
```python
rasters = arcpy.ListRasters('cf_*')
i = 0
for ras in rasters:
    if i == 0:
        sumras = arcpy.Raster(ras)
        i += 1
    else:
        sumras = sumras + ras
        i += 1
sumras.save(outRasterCFS)
```
However, in my current script it fails with:
```
Traceback (most recent call last):
  File "<module1>", line 32, in <module>
  File "<module1>", line 25, in Cut
  File "C:\Program Files (x86)\ArcGIS\Desktop10.5\ArcPy\arcpy\sa\Functions.py", line 4329, in Plus
    in_raster_or_constant2)
  File "C:\Program Files (x86)\ArcGIS\Desktop10.5\ArcPy\arcpy\sa\Functions.py", line 4326, in Wrapper
    ["Plus", in_raster_or_constant1, in_raster_or_constant2])
RuntimeError: ERROR 999998: Unexpected Error.
```
When researching how to resolve this, I found another post somewhere that suggested cellstatistics might be better, so:
```python
sumras = CellStatistics(rasters, "SUM", "DATA")
sumras.save(outRasterCFS)
```
But it fails with:
```
Traceback (most recent call last):
  File "<module1>", line 32, in <module>
  File "<module1>", line 16, in Cut
  File "C:\Program Files (x86)\ArcGIS\Desktop10.5\ArcPy\arcpy\sa\Functions.py", line 3149, in CellStatistics
    ignore_nodata)
  File "C:\Program Files (x86)\ArcGIS\Desktop10.5\ArcPy\arcpy\sa\Functions.py", line 3145, in Wrapper
    [function] + Utils.flattenLists(in_rasters_or_constants))
RuntimeError: ERROR 999998: Unexpected Error.
```
Usually, when I run into problems that aren't very clear, I run it manually from ArcMap and can usually figure out what is going wrong in my python, but when I run CellStatistics from ArcMap, it fails as well:
ERROR 010092: Invalid output extent.
I really have no idea where to go from here. I can't find anything online that points me in a helpful direction...
Any thoughts or suggestions? I would greatly appreciate it!
Here is the entire script:
```python
#Set environment
import arcpy
from arcpy import env
from arcpy.sa import *

wrkpath = r'E:\GIS_Data\GEO5419\MyWork'
arcpy.env.workspace = wrkpath
arcpy.env.overwriteOutput = True

def Cut():
    arcpy.CheckOutExtension("Spatial")
    topRaster = 'Site_DEM'
    outRasterCFS = wrkpath+'\CFSum'
    rasters = arcpy.ListRasters('cf_*')
    sumras = CellStatistics(rasters,"SUM","DATA")
    sumras.save(outRasterCFS)
##    i = 0
##    for ras in rasters:
##        if i == 0:
##            sumras = arcpy.Raster(ras)
##            i += 1
##        else:
##            sumras = sumras + ras
##            i += 1
##    sumras.save(outRasterCFS)
    arcpy.CheckInExtension("Spatial")

if __name__ == '__main__':
    Cut()
```
I figured it out not long after posting this. I wanted to delete this entire thread, but I cannot find an option to do so.
So, until I do, for anyone else having the same issue, the resolution was to add this to my code:
arcpy.env.extent = "MAXOF"
Now it is working fine. | https://community.esri.com/thread/211907-sum-rasters-failing-with-unexpected-error | CC-MAIN-2018-47 | refinedweb | 436 | 58.38 |
Hey, I’m a little new to pytorch and have been playing around. I noticed that after computing the loss, I am unable to affect the value of loss whatsoever. For eg, if say the loss value is 10, if I do (loss * 1.5).backward() or any number for that matter, it doesn’t seem to affect the weights any differently than when the loss is just 10. I did this with a very small network and dummy data and printed out the outputs from copies of network after backpropping and stepping different losses ( loss.backward() for the original and (loss*1.5).backward() for the copy ) and both of the outputs came out the same. While I did come across a post that mentioned something similar to this where it says the gradient along with the sign is what matters, I don’t understand how I would handle cases where I have a coefficient for loss as in many reinforcement learning algorithms. How can I successfully use a coefficient for the loss when I call backward? Or does (loss * 1.5).backward() actually work and is my understanding is wrong?
Hi Sainath!
It works fine.
Here is a quick example that shows both the gradient and the
optimizer step:
```
>>> import torch
>>> torch.__version__
'1.7.1'
>>> ta = torch.tensor ([1.0], requires_grad = True)
>>> tb = torch.tensor ([1.0], requires_grad = True)
>>> opta = torch.optim.SGD ([ta], lr = 0.1)
>>> optb = torch.optim.SGD ([tb], lr = 0.1)
>>> ta.backward()
>>> ta.grad
tensor([1.])
>>> opta.step()
>>> ta
tensor([0.9000], requires_grad=True)
>>> (1.5 * tb).backward()
>>> tb.grad
tensor([1.5000])
>>> optb.step()
>>> tb
tensor([0.8500], requires_grad=True)
```
Best.
K. Frank
Hi! Thanks for the response!
It does work here. Correct me if I’m wrong but what I tried to attempt was:
- Define a simple neural network, instantiate it and make a copy of it
- have dummy data, which in this case was just a list [1,2,3,4,5] with the targets also being [1,2,3,4,5]
- define separate optimizers for the neural nets and use mse loss
With this what I attempted to find was the outputs of both networks after a single pass of the list and backprops where original was backpropped with loss.backward() and the copy was backpropped with (loss*100).backward().
My understanding is that these two networks should now produce different outputs due to the different backprop values. But what I found was that both networks produce the same output the second time.
I guess I must be going wrong somewhere
Hi Sainath!
Please post a small, complete, runnable script that shows this issue,
together with its output.
Best.
K. Frank
Hi! Here is my code:
```python
import torch.nn as nn
import torch
import torch.optim as optim

class model(nn.Module):
    def __init__(self):
        super().__init__()
        self.seq = nn.Sequential(nn.Linear(5,5), nn.Linear(5,5), nn.Linear(5,5))

    def forward(self, x):
        return self.seq(x)

x_train = torch.tensor([1,2,3,4,5], dtype = torch.float)
y_train = torch.tensor([1,2,3,4,5], dtype = torch.float)

m1 = model()
m2 = model()
m2.load_state_dict(m1.state_dict())

criterion = nn.MSELoss()
o1 = optim.Adam(m1.parameters(), lr = 1e-3)
o2 = optim.Adam(m2.parameters(), lr = 1e-3)

for i in range(2):
    o1.zero_grad()
    o2.zero_grad()
    out1 = m1(x_train)
    out2 = m2(x_train)
    print('out1:', out1)
    print('out2:', out2)
    loss1 = criterion(out1, y_train)
    loss1.backward()
    loss2 = criterion(out2, y_train)
    (loss2 * 100).backward()
    o1.step()
    o2.step()
```
And here is the output:
```
out1: tensor([ 0.1006, -1.0441,  1.1687, -0.7200, -0.5491], grad_fn=<AddBackward0>)
out2: tensor([ 0.1006, -1.0441,  1.1687, -0.7200, -0.5491], grad_fn=<AddBackward0>)
out1: tensor([ 0.1065, -1.0204,  1.1568, -0.7082, -0.5328], grad_fn=<AddBackward0>)
out2: tensor([ 0.1065, -1.0204,  1.1568, -0.7082, -0.5328], grad_fn=<AddBackward0>)
```
I noticed that you used the SGD optimizer and I used Adam. But on switching to the SGD optimizer it works fine and the outputs are different.
Output with SGD optimizer:
```
out1: tensor([ 0.6368, -0.8041, -0.0553,  0.4925, -0.5171], grad_fn=<AddBackward0>)
out2: tensor([ 0.6368, -0.8041, -0.0553,  0.4925, -0.5171], grad_fn=<AddBackward0>)
out1: tensor([ 0.6088, -0.7789, -0.0402,  0.4936, -0.4706], grad_fn=<AddBackward0>)
out2: tensor([-2.0355, -0.3274, -0.3055, -1.2581,  1.1941], grad_fn=<AddBackward0>)
```
Switching from Adam to SGD seems to bring about the necessary change but I do not know why. Is this supposed to happen with adam?
Oh okay I think I got it. It was my fault indeed. 100 in (loss * 100).backward() did not work and changing it to a much smaller number such as 0.0001 does work. Sorry! Though I’m not really sure why, guess I should do a thorough read on Adam. | https://discuss.pytorch.org/t/using-a-coefficient-for-loss/120443 | CC-MAIN-2022-21 | refinedweb | 814 | 69.79 |
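The behaviour with Adam is (mostly) expected. Adam's step is lr * m_hat / (sqrt(v_hat) + eps): scaling the loss by a constant c scales every gradient by c, which multiplies both m_hat and sqrt(v_hat) by c and therefore cancels, up to the tiny eps term. SGD's step is lr * grad, so the constant passes straight through. A dependency-free sketch of the very first update step (standard Adam defaults assumed):

```python
import math

def adam_first_step(grad, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Size of the very first Adam update (moment estimates start at zero)."""
    m = (1 - beta1) * grad           # biased first moment
    v = (1 - beta2) * grad ** 2      # biased second moment
    m_hat = m / (1 - beta1)          # bias corrections for t = 1
    v_hat = v / (1 - beta2)
    return lr * m_hat / (math.sqrt(v_hat) + eps)

def sgd_step(grad, lr=0.1):
    return lr * grad

g = 0.5
print(adam_first_step(g), adam_first_step(100 * g))  # both ~0.001: the factor cancels
print(sgd_step(g), sgd_step(100 * g))                # 100x apart: SGD scales linearly
```

The cancellation only starts to break down when the scaled gradients shrink toward eps in magnitude. Note also that with a sum of differently-weighted losses the coefficients still matter under Adam, because they change the gradient's direction, not just its overall scale.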
Slow select operation after many times of insert different Table
(1) By anonymous on 2021-02-11 09:07:03 [link] [source]
Hi all,
I am using sqlite3 with python3. When I insert row by row into different tables and then select all columns, it is really slow. If I insert all rows at once, then everything goes fine. One strange thing is that if I add rows to one table after finishing adding rows to the other table, everything goes fine. So the slowdown only occurs when I add rows to different tables within one transaction. One solution is executing 'vacuum', but it takes too much time for my original dataset. Here is the sample code below:
```python
import sqlite3
import os
import time

os.system('rm -f *.db')

conn1 = sqlite3.connect('db1.db', timeout=1000.0)
conn2 = sqlite3.connect('db2.db', timeout=1000.0)
conn3 = sqlite3.connect('db3.db', timeout=1000.0)
cursor1 = conn1.cursor()
cursor2 = conn2.cursor()
cursor3 = conn3.cursor()

v = ''.join(['a' for _ in range(10000)])
rows = [(str(i), v) for i in range(1000)]

cursor1.execute('CREATE TABLE if not exists Test1(ID text, value text)')
cursor1.execute('CREATE TABLE if not exists Test2(ID text, value text)')
cursor1.execute('CREATE TABLE if not exists Test3(ID text, value text)')
cursor2.execute('CREATE TABLE if not exists Test1(ID text, value text)')
cursor2.execute('CREATE TABLE if not exists Test2(ID text, value text)')
cursor2.execute('CREATE TABLE if not exists Test3(ID text, value text)')
cursor3.execute('CREATE TABLE if not exists Test1(ID text, value text)')
cursor3.execute('CREATE TABLE if not exists Test2(ID text, value text)')
cursor3.execute('CREATE TABLE if not exists Test3(ID text, value text)')

#################
n = 1
for i in range(len(rows)//n+1):
    cursor1.executemany('INSERT INTO Test1 VALUES(?,?);', rows[i*n:(i+1)*n])
    cursor1.executemany('INSERT INTO Test2 VALUES(?,?);', rows[i*n:(i+1)*n])
    cursor1.executemany('INSERT INTO Test3 VALUES(?,?);', rows[i*n:(i+1)*n])
conn1.commit()

#################
for i in range(len(rows)//n+1):
    cursor2.executemany('INSERT INTO Test1 VALUES(?,?);', rows[i*n:(i+1)*n])
for i in range(len(rows)//n+1):
    cursor2.executemany('INSERT INTO Test2 VALUES(?,?);', rows[i*n:(i+1)*n])
for i in range(len(rows)//n+1):
    cursor2.executemany('INSERT INTO Test3 VALUES(?,?);', rows[i*n:(i+1)*n])
conn2.commit()

#################
cursor3.executemany('INSERT INTO Test1 VALUES(?,?);', rows)
cursor3.executemany('INSERT INTO Test2 VALUES(?,?);', rows)
cursor3.executemany('INSERT INTO Test3 VALUES(?,?);', rows)
conn3.commit()

#cursor1.execute('VACUUM;')
#conn1.commit()
#cursor2.execute('VACUUM;')
#conn2.commit()
#cursor3.execute('VACUUM;')
#conn3.commit()

t1 = time.time()
cursor1.execute('SELECT * FROM Test1')
rows = cursor1.fetchall()
t2 = time.time()
print (t2-t1, len(rows))

t1 = time.time()
cursor2.execute('SELECT * FROM Test1')
rows = cursor2.fetchall()
t2 = time.time()
print (t2-t1, len(rows))

t1 = time.time()
cursor3.execute('SELECT * FROM Test1')
rows = cursor3.fetchall()
t2 = time.time()
print (t2-t1, len(rows))
```
output
```
5.287451267242432 1000
0.15462756156921387 1000
0.1925675868988037 1000
```
(2) By David Raymond (dvdraymond) on 2021-02-11 13:18:20 in reply to 1 [link] [source]
My guess as to what's going on:
For db1 you're shuffling all the pages of the three tables together as you build the file. Since you evenly deal out the inserts between them, as the db file grows it's basically going to be laid out as page for Test1, page for Test2, page for Test3, page for Test1, page for Test2, page for Test3, etc... So when you read everything from Test1, you need to read every third page over the entire length of the file.
When you do the inserts into db2 and db3, you're keeping each table together. So as the db file grows you have a big swath of pages all for Test1, then a big swath of pages all for Test2, then a big swath of pages all for Test3. So when you read all of Test1, you're basically reading all continuous pages in the first third of the file.
That's going to make a difference, especially if you have a spinning hard disk for the file. Whether that's the difference you're seeing I'll let the more knowledgeable folks reply.
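The page-interleaving theory above can be pictured with a toy model of how pages get appended to the file (a deliberate simplification: real SQLite also reuses free-list pages):

```python
# Toy model: each time a table needs a new page, it is appended at the
# current end of the file, so insertion order determines the layout.
def pages_of(table, file_layout):
    """Positions in the file occupied by the given table's pages."""
    return [i for i, t in enumerate(file_layout) if t == table]

# db1: inserts dealt out one row at a time across three tables
interleaved = ["Test1", "Test2", "Test3"] * 4
# db2/db3: each table loaded completely before the next
contiguous = ["Test1"] * 4 + ["Test2"] * 4 + ["Test3"] * 4

print(pages_of("Test1", interleaved))  # [0, 3, 6, 9]: every third page
print(pages_of("Test1", contiguous))   # [0, 1, 2, 3]: one sequential run
```

A full-table scan over the interleaved layout touches pages scattered across the whole file, while the contiguous layout reads one sequential run, which is much kinder to spinning disks and read-ahead caches.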
(4) By anonymous on 2021-02-11 14:58:25 in reply to 2 [link] [source]
Thank you for the explanation. I agree with your theory. In my application, it is inevitable to insert rows to different tables and to shuffle the order of insertion. Is there any smart solution?
(8) By David Raymond (dvdraymond) on 2021-02-11 16:24:38 in reply to 4 [link] [source]
Not sure. I did a couple test runs on my own system and didn't get anywhere near as big of a difference. So I definitely might be wrong on what's causing it.
(11) By Keith Medcalf (kmedcalf) on 2021-02-11 21:29:08 in reply to 2 [link] [source]
I get the following output:
```
0.009074687957763672 1000
0.00870060920715332 1000
0.009019851684570312 1000
```
which are all less than half-a-tick, and all about the same as each other.
(12) By anonymous on 2021-02-12 08:10:37 in reply to 11 [link] [source]
Did you run exactly the same code I attached?
(13) By Keith Medcalf (kmedcalf) on 2021-02-12 08:30:14 in reply to 12 [link] [source]
Other than changing your
os.system('rm -f *.db') to
```python
try: os.unlink('db1.db')
except: pass
try: os.unlink('db2.db')
except: pass
try: os.unlink('db3.db')
except: pass
```
(14) By anonymous on 2021-02-12 13:25:19 in reply to 13 [link] [source]
Do you have any guess why there is a huge gap between your and my results?
(3) By Larry Brasfield (larrybr) on 2021-02-11 14:45:30 in reply to 1 [link] [source]
The main reason your scenario is slow is that you are forcing disk I/O at every insert. If you were to wrap your inserts into a transaction, they would go much faster. If that's too many at once, group them into multiple transactions. But keep the group size much greater than 1.
(5) By anonymous on 2021-02-11 15:01:03 in reply to 3 [source]
Thank you for the proposal. I am very unfamiliar with sqlite3, so it would be really beneficial if you could provide a snippet of the code or relevant references. Thank you in advance.
(6) By Larry Brasfield (larrybr) on 2021-02-11 15:40:30 in reply to 5 [link] [source]
See Transaction, and links therein. Example:
```sql
BEGIN TRANSACTION
INSERT INTO MyTable ...
-- many more inserts
COMMIT TRANSACTION
```
Try it -- you'll like it!
(7) By David Raymond (dvdraymond) on 2021-02-11 15:45:57 in reply to 3 [link] [source]
No, with the way he opened the connection it's doing implicit transactions. So once you start doing selects, inserts, etc you have an open transaction until you explicitly commit or rollback. So his inserts are fine. Also, the timing was only done on the select part of it, and the insert time wasn't included at all.
(9) By Larry Brasfield (larrybr) on 2021-02-11 16:35:21 in reply to 7 [link] [source]
I have not done the research into Python to gainsay your claim if that is something done by its .executemany() method. But if you refer to what SQLite does, you're mistaken.
It is true that every individual statement is wrapped into a transaction, absent an already-open transaction by the SQLite library. But the automatic transactions are over once the statement executes. Otherwise, COMMITs unpaired with BEGINs would have to appear in code. They do not, at least not without error.
(10) By David Raymond (dvdraymond) on 2021-02-11 16:42:31 in reply to 9 [link] [source]
Python's default sqlite3 module is the part that does the implicit transaction stuff I'm talking about, yes.
I'm not referring to the SQLite C library/API/etc itself.
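The Python wrapper behaviour described here can be checked directly with the standard-library module. Under its default (legacy) transaction control on Python 3.6+, an implicit BEGIN is issued before data-modifying statements but not before DDL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # default: implicit-transaction mode
cur = conn.cursor()

cur.execute("CREATE TABLE t (x INTEGER)")
print(conn.in_transaction)   # False: DDL runs in autocommit

cur.execute("INSERT INTO t VALUES (1)")
print(conn.in_transaction)   # True: an implicit BEGIN preceded the INSERT

cur.execute("INSERT INTO t VALUES (2)")   # joins the same open transaction
conn.commit()
print(conn.in_transaction)   # False: commit() closed it
```

So the original poster's inserts were indeed batched into open transactions by the wrapper, which is consistent with only the SELECT timings differing between the three databases.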
(15.1) By Simon Slavin (slavin) on 2021-02-12 15:29:25 edited from 15.0 in reply to 1 [link] [source]
Various posters to this thread have excellently covered the amount of time taken by software. You asked about a difference between the amount of time your computer took, and the amount of time one of the responder's computers took.
SQLite is fast and efficient and most of the time it takes is taken in storage access – reading and writing files. If two computers execute SQLite APIs at different speeds the cause is usually hardware operating as designed. This is taken up by, among other things,
- speed of storage (hard disk rotation speed, spinning rust or SSD)
- number and sizes of caches (application cache, OS cache, storage subsystem cache)
- whether other apps are accessing storage, thus busting the caches
- how much CPU the database app is able to use (what other processes are doing)
- how efficient the OS is at handling the database app's requirements
As an example, a recent bug in Lenovo's System Update application for Windows, which runs in the background, caused it to use 6% of available CPU in a typical windows installation all the time the computer is working. You wouldn't find this problem by profiling your database program.
So don't worry if someone else's computer is faster than yours. Unless your computer isn't working fast enough to make your program useful, in which case use a dedicated computer with a simple OS and storage setup. | https://sqlite.org/forum/info/ab401e72904d5342 | CC-MAIN-2022-33 | refinedweb | 1,624 | 66.23 |
Control to hold an image and getting X,Y in that control
- EverydayDiesel last edited by EverydayDiesel
Hello, is there a control designated to hold an image in Qt?

I started using a QLabel but I am not sure this is the right control.
This is basically how I draw the image
```cpp
QPixmap pm1(sFilePath);
ui->label->setPixmap(pm1);
ui->label->adjustSize();
```
I need the ability to draw horizontal/vertical lines based on where the user clicks. Is QLabel the best control for this, and how do I capture X, Y within that image? Thanks in advance.
Hi,
It depends on what you want to do. With QLabel you already have the pixmap handling part, and you just have to manage the painting of your lines.
Please give more details about what that widget shall be used for.
- EverydayDiesel last edited by
Hello and thank you for the reply.
It will basically be chopping a larger image into multiple smaller images.
I will need to evaluate the color of the current pixel to determine if it is white(ish); it will be one of these values:
rgb(254, 254, 254)
rgb(255, 255, 254)
rgb(254, 255, 255)
rgb(255, 254, 255)
rgb(255, 254, 254)
So you want to do something like edge detection ?
- EverydayDiesel last edited by
The user will click on a general area and it will scan the horizontal pixels until it finds a dark pixel.
Is there a control that I can draw an image on, get the X,Y positions, and get the pixel RGB value based on an X,Y coordinate?
You can do that with every widget. Subclass the mouse related methods.
Depending on what you want to do, you might want to consider using a library like OpenCV for your image processing.
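Whichever widget ends up holding the image, the scanning step itself is straightforward. A rough sketch of the "walk right until a non-white pixel" logic, shown in plain Python for brevity (the threshold comes from the RGB values listed above; in Qt you would read pixels via QImage instead):

```python
WHITEISH_MIN = 254  # per the listed values: every channel is 254 or 255

def is_whiteish(pixel):
    r, g, b = pixel
    return r >= WHITEISH_MIN and g >= WHITEISH_MIN and b >= WHITEISH_MIN

def find_dark_pixel(row, start_x=0):
    """Walk a row of (r, g, b) tuples rightward from start_x; return the
    index of the first non-white pixel, or -1 if none is found."""
    for x in range(start_x, len(row)):
        if not is_whiteish(row[x]):
            return x
    return -1

row = [(255, 255, 255), (254, 255, 254), (10, 10, 10), (255, 255, 255)]
print(find_dark_pixel(row))  # 2
```

The same loop translates directly to C++ once you have a `QImage` and call its per-pixel color accessor inside the loop.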
- EverydayDiesel last edited by EverydayDiesel
Please excuse my ignorance, but I am new to Qt (coming from wxWidgets).
// MainWindow.h
```cpp
#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>
#include <string>

using namespace std;

QT_BEGIN_NAMESPACE
namespace Ui { class MainWindow; }
QT_END_NAMESPACE

class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    MainWindow(QWidget *parent = nullptr);
    ~MainWindow();

private slots:

private:
    Ui::MainWindow *ui;
};
#endif // MAINWINDOW_H
```
#include "mainwindow.h" #include "ui_mainwindow.h" class MyExtendedQLabel : public QLabel, private Ui::MainWindow::MyExtendedQLabel { protected: void mousePressEvent(QMouseEvent *event); }; MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent) , ui(new Ui::MainWindow) { ui->setupUi(this); } MainWindow::~MainWindow() { delete ui; } void MainWindow::on_cmdLoadImage_clicked() { QString sFilePath = "001.png"; QPixmap pm1(sFilePath); ui->label->setPixmap(pm1); ui->label->adjustSize(); }
I am a bit confused on how to implement this. How do I take the class MyExtendedQLabel and extract the X, Y coordinates to the main window? What connects MyExtendedQLabel to the QLabel (label) on the MainWindow?
- jsulm Lifetime Qt Champion last edited by jsulm
@EverydayDiesel said in Control to hold an image and getting X,Y in that control:
What connects MyExtendedQLabel to the QLabel (label) on the MainWindow?
Well you create an instance of MyExtendedQLabel and put it on your main window like any other widget. If you're using Qt Designer then first put a normal QLabel and then right-click on it and select "Promote to..." to promote QLabel to MyExtendedQLabel.
To get x/y coordinates use mouse events like @SGaist suggested: override mouseMoveEvent() to get x/y while the mouse is moving, and mousePressEvent() to get x/y when the user presses a mouse button.
Blazor is the latest in a series of what I'd call "magnificent" developer-friendly web frameworks that .NET has built. In this tutorial, we'll be reviewing how to send an SMS using Blazor and the Vonage SMS API.
Jump Right to the Code
All of the code from this tutorial is located in GitHub.
Prerequisites
Navigate to wherever you want to build the app and run the following command in your terminal:
```shell
dotnet new blazorserver -o SendSmsBlazor
```
This will create a Blazor server app for you called `SendSmsBlazor`. `cd` into this directory and enter the command `code .` to launch VS Code. For Visual Studio users, you can just open the sln file.
Add The Vonage Nuget Package
Fire up a terminal in VSCode and run:
```shell
dotnet add package Vonage
```
This will install the Vonage package to the project.
Create Your SmsService
We will have to inject an SMS service into our razor page, so let's create a SmsService.
Add a new file under the `Data` folder called `SmsService.cs`. If you're using VS Code, this is just going to create a blank file, so add the following to it:
```csharp
using Microsoft.Extensions.Configuration;
using Vonage.Messaging;
using Vonage.Request;

namespace SendSmsBlazor.Data
{
    public class SmsService
    {
    }
}
```
Add a Constructor
Inside the `SmsService` class, we must inject a configuration object. The config will contain our API key and API secret, which we'll configure a bit later. For the moment, all you need to do is add a new property inside the `SmsService` class called `Configuration` of type `IConfiguration`, and add a constructor taking an `IConfiguration` object, which will simply assign our `Configuration` property to that object.
```csharp
public IConfiguration Configuration { get; set; }

public SmsService(IConfiguration config)
{
    Configuration = config;
}
```
Write Your SendSms Method
Inside the `SmsService`, we're going to add a `SendSms` method. That method will take three strings: `to`, `from`, and `text`, which will contain the number the message is going to, the Vonage API number the message is coming from, and the text of the message.
All that's left to do from this service's perspective is:
- Pull the API key and secret out of the configuration
- Create a SmsClient
- Send the SMS
All of this can be accomplished with the following:
```csharp
public SendSmsResponse SendSms(string to, string from, string text)
{
    var apiKey = Configuration["VONAGE_API_KEY"];
    var apiSecret = Configuration["VONAGE_API_SECRET"];
    var creds = Credentials.FromApiKeyAndSecret(apiKey, apiSecret);
    var client = new SmsClient(creds);
    var request = new SendSmsRequest
    {
        To = to,
        From = from,
        Text = text
    };
    return client.SendAnSms(request);
}
```
Configure SmsService as Injectable
Now that we have the service built, we need to make sure that we can inject it into our razor page. To do this, we need to go into `Startup.cs` and find the `ConfigureServices` function. Add the following line to the end of this function, and the service will be injectable:
services.AddSingleton<SmsService>();
Add Frontend
We're going to use the
Pages/Index.razor for our frontend, so just open it up and delete everything below line 2.
Dependency Inject SmsService
The first thing we need to do here is pull in our
SmsService. To that end, add a
using and an
inject statement, like so:
@using SendSmsBlazor.Data @inject SmsService SmsService
Add C# Code to Send the Message
One of the really neat things about Blazor is that it allows you to run C# code in the browser—all we need is an
@code{} block, and we're good to go. By doing this we are making an anonymous class in-line, so we will add a
To,
From,
Text, and
MessageId to this anonymous class and add a method called
SendSms which will actually call our SmsService. This is going to look like this:
@code{ private string To; private string From; private string Text; private string MessageId; private void SendSms() { var response = SmsService.SendSms(To, From, Text); MessageId = response.Messages[0].MessageId; } }
Add Input Fields and Send Button
Now that we have all this out of the way, we're going to add a few input fields. The
To,
From, and
Text fields will be populated by binding them to these input fields with the
@bind attribute. At the bottom, just above the button, we will display the sent
MessageId by referencing it in a paragraph tag. Finally, we'll add a button to the bottom that will call the
SendSms button in our anonymous class when clicked. Add the following between the
@code block and the
@inject block:
<h1>Send an SMS!</h1> Welcome to your new app Fill out the below form to send an SMS. <br /> to: <input id="to" @ from: <input id="from" @ text: <input id="text" @ <p>@MessageId</p> <button class="btn btn-primary" @Send SMS</button>
Configure the App
The last thing we must do before running our server is to configure it. If you'll recall, you set an
IConfiguration object in the
SmsService. All you need to do is open
appsettings.json and add two properties to the configuration
VONAGE_API_KEY and
VONAGE_API_SECRET. Set those to the API key and API secret values in the Dashboard.
Running our app
With all this done, just return to your terminal and run the following command.
dotnet run
Your application will tell you what port it's listening on—for me it's port 5001, so I'd navigate to
localhost:5001, fill out the form, and hit SendSms. You'll see the SMS on the number you sent to, with the Message-ID from the SMS appearing just below the text field.
Resources
The code for this demo can be found in GitHub. | https://developer.vonage.com/blog/2020/07/08/how-to-send-an-sms-with-blazor | CC-MAIN-2022-27 | refinedweb | 933 | 61.87 |
On Fri, 27 Apr 2001, Mike Galbraith wrote:>> ran itself into the ground. Methinks I was sent the wrooong patch :)Mike,Please apply this patch on top of Rik's v2 patch otherwise you'll get thelivelock easily:--- linux.orig/mm/vmscan.c Fri Apr 27 04:32:52 2001+++ linux/mm/vmscan.c Fri Apr 27 04:32:34 2001@@ -644,6 +644,7 @@ struct page * page; int maxscan = nr_active_pages >> priority; int page_active = 0;+ int start_count = count; /* * If no count was specified, we do background page aging.@@ -725,7 +726,7 @@ } spin_unlock(&pagemap_lru_lock);- return count;+ return (start_count - count); } /*-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2001/4/27/38 | CC-MAIN-2022-21 | refinedweb | 127 | 62.38 |
Feature #12205
update missing/strl{cat,cpy}.c
Description
The attached git diff updates
missing/strlcat.c from 1.8 to 1.15,
missing/strlcpy.c from 1.5 to 1.12 and also the
LEGAL file.
There is no important reason. But there was a license change:
new style-BSD to a less restrictive ISC-style license.
Other changes include improving code readability and
modernizing (function prototypes, no
Upstream URLs (if you're looking for more details):
Files
Updated by shyouhei (Shyouhei Urabe) over 4 years ago
The code is much cleaner so I would +1, but it seems the upstream has more recent revisions (strlcat.c 1.16 and strlcpy.c 1.13). Why to avoid them?
Updated by cremno (cremno phobia) over 4 years ago
Shyouhei Urabe wrote:
The code is much cleaner so I would +1, but it seems the upstream has more recent revisions (strlcat.c 1.16 and strlcpy.c 1.13). Why to avoid them?
The current revisions would require defining a function-like macro called
DEF_WEAK (which originally defines a weak alias):
But CRuby isn't a libc implementation. Possible namespace violations can be solved by e.g. renaming
strl* to
ruby_strl*.
Updated by hsbt (Hiroshi SHIBATA) almost 4 years ago
- Assignee set to hsbt (Hiroshi SHIBATA)
- Status changed from Open to Closed
- Tracker changed from Misc to Feature
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/12205 | CC-MAIN-2020-29 | refinedweb | 232 | 68.97 |
Can anybody tell me how can i use the keyboard keys while automating the mobile apps? I want to press the go button in mobile keyboard while making the script.
Usage of keyboard in mobile testing
Which device OS (Android/ iOS) you are using? Different OS will have different ways to handle keyboard
Currently i am doing testing on android device.
Can anybody tell me how can i use the keyboard keys while automating the mobile apps?
Need help on urgent basis
Can anybody tell me how can i use the keyboard keys while automating the mobile apps?
Need help on urgent basis
On Android, you can interact with the keyboard through:
import the following at the top of your test:
import com.kms.katalon.core.mobile.keyword.internal.MobileDriverFactory import io.appium.java_client.android.AndroidDriver import io.appium.java_client.android.AndroidKeyCode
And then use this code to access the keys:
AndroidDriver<?> driver = MobileDriverFactory.getDriver() driver.pressKeyCode(AndroidKeyCode.ENTER)
Where AndroidKeyCode is an enum with all of the possible key entries, e.g.
KEYCODE_0,
KEYCODE_A, etc.
hi,
use Mobile.tapAtPosition(1075, 1900, FailureHandling.OPTIONAL)
try it.
Hi, I am unable to enter anything from soft keyboard in device through automation. I need to enter 4 digit pin by selecting each digit from the device soft keyboard. Can you please let me know how to automate it?
Also, the above code throws error when test case is run.
You can try using ‘adb’ command. Refer here:. Basically this adb command can be called directly from either cmd (Windows) or terminal(macOS) and so in your script just create a util served as wrapper to call the command, e.g:
I updated the response above to include the
import statements you’ll need in order to use the keyboard. The
AndroidKeyCode class should have all the keys you need to interact with the pin screen:
AndroidKeyCode.KEYCODE_0 through
AndroidKeyCode.KEYCODE_9.
If you’re still receiving errors, could you please provide a log and screenshot of the error.
Thanks,
Chris
Hi Chris,
Am not getting any errors. The TC is passed but nothing happens. It is not clicking on the numbers. Actually this keyboard is present on the screen by default once we navigate to this screen. Once we tap on numbers, the circle gets highlighted in black. Not sure if this is behaving as a soft keyboard.
You might have better luck by disabling the soft keyboard in Android. You’d do this by setting the
unicodeKeyboard and
resetKeyboard capabilities to
true. You can do this right before starting the app in your test:
import com.kms.katalon.core.configuration.RunConfiguration RunConfiguration.setMobileDriverPreferencesProperty("unicodeKeyboard", true) RunConfiguration.setMobileDriverPreferencesProperty("resetKeyboard", true)
Hope this helps,
Chris
Hi Chris,
Its me again, so I’m trying to enter numeric digit using the android keypad as well and I able to enter the digits just fine but I need to press the “DONE” button within the android keyboard which will lead to me the next screen. so far I have included these codes and here are my errors and the actual screen shot of the application, please help me resolve this issue.
BTW I have already imported- import io.appium.java_client.android.AndroidKeyCode
The error you’re seeing is similar to the error you saw before with
startX and
startY: you are repeating the:
AndroidDriver<?> driver = MobileDriverFactory.getDriver()
line, which is unnecessary. That line sets up the
driver variable, and once it’s done one time in your test, you don’t need to do it again, you can just use:
driver.pressKeyCode(AndroidKeyCode.ENTER).
– Chris
Hi Chris,
I tried commenting out that line and running the “driver.pressKeyCode(AndroidKeyCode.ENTER)” code and it doesn’t press the enter button within the numeric android keyboard, Actually there isn’t any enter button within the keyboard its called DONE button, can you please help?
I wonder if
AndroidKeyCode.KEYCODE_NUMPAD_ENTER would work in this case since it’s more specifically for a number pad.
– Chris
Hi @Chris_Trevarthen,
Unfortunately AndroidKeyCode.KEYCODE_NUMPAD_ENTER code didn’t work, it executes successfully but doesn’t tap on the Done button within the keyboard at all.
That’s unfortunate that the keys aren’t working for you. Have you been able to use the Spy Mobile tool and try to capture the object for that key? That might give some hint as to what type of element it is.
The alternative, if we can’t get the key to work, is to use
Mobile.tapAtPosition to just tap the screen at a certain coordinate that you know will be within the Done button:
Hope this helps,
Chris
Hi Chris,
If the the “Done” button of the keyboard is at the very bottom right corner of the screen how can I predict the x and y coordination for this button. I’m asking you this is because my native keyboard can not be recognized by the katalon studio spy, that’s why I’m trying to TapAtPosition method. | https://forum.katalon.com/t/usage-of-keyboard-in-mobile-testing/8701 | CC-MAIN-2021-43 | refinedweb | 832 | 56.86 |
i need a quick response about array and string...
i need a quick response about array and string... how can i make a dictionary type using array code in java where i will type the word then the meaning will appear..please help me..urgent
about JVM - Java Beginners
about JVM Hello Rose india.net team
I want to ask that why JVM is platform dependent
is it automatically installs whenever we install jdk....
expecting a quick reply from your team........
regards: Sachin Sharma - basics - Java Beginners
About basics Why we are typing java program in text editor?
Why we are running the java program in command prompt
about jboss - Java Beginners
about jboss can you please explain about jboss...how to deploy,where the temp and lock is there...total information about jboss to use in java message services(JMS
about J2EE. - Java Beginners
about J2EE. I know only core Java ... what chapter I will be learn to know about J2EE. Hi Friend,
Please visit the following link: enum - Java Beginners
about enum hi all,
please tell me about "enum" and explain with example. And its use in OOP.
Thanks
about interface - Java Beginners
about interface can anyone explain to me the implementation of the given line
Set si=new HashSet();
Problem is that Set is an interface...://
about coding - Java Beginners
about coding hello sir,
I want to create session expair, when user is not enter anything in login page session wil expair after 10 min.if user enter its go to success page.plz send me full code about this.
thanks
Tomcat Quick Start Guide
Tomcat Quick Start Guide
This tutorial is a quick reference of starting development application using JSP, Servlets and JDBC technologies. In this quick and very
about java swing - Java Beginners
about java swing How to upload the pictures and photo on the panel in java swing ,plz help
thank a lot. Hi Friend,
Try the following code:
import java.awt.*;
import java.io.*;
import javax.swing.*;
import
about swing - Java Beginners
about swing how implement a program of adding two numbers by entering two numbers separately by other user on the input dialog box and after that also show the result of the addition in other dialog box...
your regardly
Quick Sort In Java
Quick Sort in Java
...;java QuickSort
RoseIndia
Quick Sort... to sort integer values of an array using quick
sort.
Quick sort algorithm
about project code - Java Beginners
about project code Respected Sir/Mam,
I need to develop an SMS... in commercial areas to send alerts to their customers about the events....
This can be developed using any kind of components using JAVA.
The following implements and extends - Java Beginners
about implements and extends hello,
class A extends B implements c // this is valid statement
class A implements c extends B // this is invalid statement
can u plz explain me why the 2nd statement is invalid even though
About Java Hi,
Can anyone tell me the About Java programming language? How a c programmer can learn Java development techniques?
Thanks
Hi,
Read about java at.
Thanks
java
community about these two IDE are fairly similar. For basic Java (Java SE...Java Programming tools what is the differance between eclipes.... Because you can get GlassFish with Java EE package for NetBeans, it is easier
java
about these two IDE are fairly similar. For basic Java (Java SE) development...java netbeans what is the differance between eclipes and netbeans... you can get GlassFish with Java EE package for NetBeans, it is easier to use than
java
community about these two IDE are fairly similar. For basic Java (Java SE...What is Java eclipes what is the differance between eclipes.... Because you can get GlassFish with Java EE package for NetBeans, it is easier
java
within the Java community about these two IDE are fairly similar. For basic Java...java eclipes and netbeans tutorial what is the differance between... in NetBeans. Because you can get GlassFish with Java EE package for NetBeans
java
the Java community about these two IDE are fairly similar. For basic Java (Java SE...java eclipes and net beans what is the differance between eclipes... in NetBeans. Because you can get GlassFish with Java EE package for NetBeans, it is easier
java
opinions within the Java community about these two IDE are fairly similar. For basic...differance between java eclipes and netbeans what is the differance... is better in NetBeans. Because you can get GlassFish with Java EE package
java
the Java community about these two IDE are fairly similar. For basic Java (Java SE... in NetBeans. Because you can get GlassFish with Java EE package for NetBeans... MVC based application in Java. Servlet/JSP development is fairly very simple
java
community about these two IDE are fairly similar. For basic Java (Java SE.... Because you can get GlassFish with Java EE package for NetBeans, it is easier... application in Java. Servlet/JSP development is fairly very simple compared
java
community about these two IDE are fairly similar. For basic Java (Java SE.... Because you can get GlassFish with Java EE package for NetBeans, it is easier to use... application in Java. Servlet/JSP development is fairly very simple compared
java
within the Java community about these two IDE are fairly similar. For basic Java... is better in NetBeans. Because you can get GlassFish with Java EE package for NetBeans... are developing MVC based application in Java. Servlet/JSP development is fairly very
About java
About java how we insert our database data into the jTable in java
or
how we show database content into the JTable in java
Hi Friend,
Try the following code:
import java.io.*;
import
Books Of Java - Java Beginners
Books Of Java Which Books Are Available for Short / Quick Learning
about java
about java how to get the value from the user like c, c++ program pls explain with example for me
java - Java Beginners
java hi......
how can i use the visible property in java using swing concept....
pls reply as quick as possible becoz its urgent
java - Java Beginners
:
http...java Java always provides default constructor to ac lass is it true... constructor.If we don't create any constructor for a class java itself creates
java - Java Beginners
:
Thanks...java Write a pseudo code algorithm to pick the 1st element as a pivot and a partitioning array Hi Friend,
In quick sort algorithm, we
java - Java Beginners
link:
1) pl. explain about the insertion sort with example and how... in JAVA explain all with example and how does that example work.
thanks
java
Techniques:
Insertion Sort
Bubble Sort
Quick Sort
For more information, visit the following link:
Core Java - Java Beginners
talk about core java, we means very basic or from the scratch where you learn about...Core Java What is Java? I am looking for Core Java Training Hi friendThe Core Java Technologies and application programming interface
Java - Java Beginners
are also great for setting up quick tests to see how Java works. The applications...Java Console application What is Java Console application? Hi friend,A Java Console application can only display textual data. Console
Java Syntax - Java Beginners
://
Thanks...Java Syntax Hi!
I need a bit of help on this...
Can anyone tell... have read about arrayList but i am trying to see if i could implement something
java - Java Beginners
java hi i am a beginner in java.
I have to make a mini project in java.
I think about to develop a web page using java
so any one please guide me how to develop web page using java
java beginners - Java Beginners
the following links: beginners what is StringTokenizer?
what is the funciton
java - Java Beginners
java why we use classpath.? Hi Friend,
We used classpath to tell the Java Virtual Machine about the location of user-defined classes and packages in Java programs.
Thanks
core java - Java Beginners
core java hallo sir,
in java ,int range is -128 to 127. what about '0' indicate what Hi,
In java byte range is -128 to 127, not of int
java - Java Beginners
java All the data types uses in java and write a program to add 2 numbers. Hi Friend,
Please visit the following links to know about all the data type used in Java :
java - Java Beginners
://
Here you...java I want to about array of objects with some examples.
How it works.
Thanks.......
shambhu....... Hi Friend,
The object
java - Java Beginners
java Hello,
i am not sure about the evaluation order of arguments in java.
can any one tell me that arguments in java is evaluated left...; Hi Friend,
In Java, arguments is evaluated from left to right.
Thanks
Java - Java Beginners
Java WHAT CATEGORY SHOULD I CHOOSE IF I WANT TO ASK ABOUT OOP(OBJECT ORIENTED PROGRAMMING) USING JAVA...JAVA CODES.WHAT SHOULD I CHOOSE BECAUSE I WANT TO ASK ABOUT OOP...THANK YOU Hi friend,
Object Oriented
New To JAVA - Java Beginners
will get more information about java.
Read more detail.
http...New To JAVA hi iam new to java..,can you please guide me how to learn the java and also tell me how many days it takes to learn java Hi
a java program - Java Beginners
for more information.... a java program well sir, i just wanna ask you something regarding... about the second line...
i have made my program but not able to click
java program - Java Beginners
java program Pl. let me know about the keyword 'this' with at least 2 or 3 example programs.
pl. let me know the program to multiply 2 matrix...://
Thanks
java - Java Beginners
java hi,
i'm chandrakanth.k. i dont know about java. but i'm...://
core java - Java Beginners
-in-java/
it's about calculating two numbers
Java Program Code for calculating two numbers
java - Java Beginners
java write a programme to implement linked list using list interface Hi Friend,
Please visit the following link to learn about the implementation of LinkedList .
JAVA - Java Beginners
the infomation about the system property (ii) java -fullversion (iii...java 1.4 vs java 1.5 What is the difference between java 1.4 and java 1.5? Difference between java 1.4 and java 1.5Java programming
Java Syntax - Java Beginners
/java/beginners/array_list_demo.shtml
Thanks...Java Syntax Hi!
I need a bit of help on this...
Can anyone tell... have read about arrayList but i am trying to see if i could implement something
Core Java - Java Beginners
://
visit ,, and u will find everything about java here for sure..., Encapsulation,Polymarphism....? I guess you are new to java and new
core java - Java Beginners
.
Please visit the following links to know more about Overloading and Overriding:
About Java - Java Interview Questions
About Java what is edition of java?
versions of weblogic?
what r the webserver applications used in your application
java programming - Java Beginners
java programming heloo expert,
thanks for your coding
i recently asked about the question on abt how to program a atm system
may i noe in which platform can i run those codes ??
netbeans ?? any suggestions
Java Thread - Java Beginners
Java Thread hii i feel confusion in tread. i want to know about
1... and simple examples of "Multithreading".
1.
2.
JAVA PROJECT - Java Beginners
JAVA PROJECT Dear Sir,
I going to do project in java " IT in HR, recruitment, Leave management, Appriasal, Payroll"
Kindly guide me about programming, Database selection (sql or mysql) which is going to work on networking
java method - Java Beginners
java method i wanna explation about a method
for example... Mail[] getAllMails(String userName)
I WANNA EXPLATION ABOUT ABOVE METHOD CAN U... and Tutorials on Java Mail visit to :
Thanks
java - Java Beginners
for more information about java applet at:
Thanks...;Hi friend,
i am sending simple program of java applet
import java.io.
java - Java Beginners
java tell me more details about abstraction with examples Hi friend,
A class that is missing definitions for one or more methods. You.../java/master-java/java-object-oriented-language.shtml
Thanks Beginners
Java Hi All,
The question below was asked during my interview
Interviewer:I have a team of about 10 members and I need the information about...://
JAVA - Java Beginners
JAVA I want to know about JASON with basics,syntax,uses
Java errors - Java Beginners
Java errors I need help sorting parallel arrays. I've done about 75% of the code myself, but it's ugly and I can't get it to run. One array contains... it's all about):
Ask about java
Ask about java Create a java program for add and remove the details of a person, which is not using by database, a simply java program.
If possible, please wil it in switch case
Java - Java Beginners
Java Hey guys
I'm fairly new to Java programming.
How would I go about converting an ArrayList to an array of object?
Gina
mononelasg@gmail.com Hi Gina,
Use the following code:
import java.util.
java - Java Beginners
java i am a beginer one in java. how should i start
Hi,
If you have knowledge about basic concepts in C Lanauage.
You easily learn java.just refer the BalaGurusamy book or Complete reference.It will help
program - Java Beginners
Java vector program Please give me an example of Java vector program.Thanks!! hi friendNow, read about vector program. Here, is the used without taking user input.
about c and java
about c and java i need java and c language interview and objective questions with answers for the fresher.please provide to me
Please visit the following links:
http
java problem - Java Beginners
java problem Suppose list is an array of five components of the type int.What is stored in list after the following Java code executes?
for (i...] - 3;
}
THANK YOU GUYS FOR SHARING YOUR IDEA ABOUT THIS.THANK YOU SO MUCH!ILL
java array - Java Beginners
java array 1.) Consider the method headings:
void funcOne(int...];
int num;
Write Java statements that do the following:
a. Call... and Alist, respectively.
THANK YOU GUYS FOR SHARING YOUR IDEA ABOUT THIS.THANK
Java for beginners - Java Beginners
://
Thanks...Java for beginners Hi!
I would like to ask you the easiest way to understand java as a beginner?
Do i need to read books in advance
Core Java - Java Beginners
Core Java How can we explain about an object to an interviewer Hi friend,
Object :
Object is the basic entity of object oriented... to :
Java - Jfreecharts - Java Beginners
Java - Jfreecharts Hi,
Can U pls tell how to use jfreecharts?
Pls tell me how to create dynamic charts wrt dbase using jfreecharts.
Regards,
Prashant. Hi Friend,
Visit the following page to know about
Java - Java Beginners
Java How to make a multiple choice quiz in java,
1- Quiz program that answers questions about Math, Science and Arts.
2- Student selects the topic, the program presents a series of questions.
3- Student should answer
Java for beginners
Java for beginners Java for beginners
Which is the best resource... Java video tutorial for beginners.
Thanks
Hi,
Here are the best resources for Learning Java for beginners:
Java Video tutorial
Java tutorials
explanation - Java Beginners
to know about the Garbage collection. I have create small java appication. I don't know about garbage collection, memory leak. I want reference about these and how to use my
Java Code - Java Beginners
Java Code Create a class Computer that stores information about... about a single computer is,
- Company Name
- RAM size.
- Hard Disk... of program. - Method that displays all details about a single instance
core java - Java Beginners
core java pl. tell me about call by value and call by reference... change the fields in the caller?s objects they point to. In Java, you cannot...
System.out.println("Massage 2: i= " + i + ", d= " + d);
Double(i, i); //Java
java - Java Beginners
java hi,
I need RSS Feed consume functionality and java code for this things............
c. Should be able to consume feeds from different..., then that user gets a notification about it.
g. Should be able to tag a feed.
java programming - Java Beginners
java programming
this is my preoccupation:
i design an 2-tier application, the program has the *login window* and *main windows*.
I want to get... has logged on successfully.but i don't know how to go about it.
i also have | http://roseindia.net/tutorialhelp/comment/15483 | CC-MAIN-2014-10 | refinedweb | 2,739 | 57.37 |
This tip is extracted from the book, jQuery, jQuery UI, and jQuery Mobile: Recipes and Examples by Phillip Dutson and Adriaan de Jonge, published by Pearson/Addison-Wesley Professional, Nov 2012, ISBN 9780321822086. For more info, please visit the publisher site. Related video training includes: "jQuery Fundamentals LiveLessons (Video Training)"
[ Enter ITworld's drawing to win a copy of jQuery, jQuery UI, and jQuery Mobile: Recipes and Examples ]
You can add your own selectors similar to the form pseudo selectors and contains(). Writing your own selector helps you to understand their performance issues working on large data sets. And if you know how to work around these performance issues by preselecting elements, you can still take advantage of their elegant notation. Listing 2.10 shows how to create your own selector.
Listing 2.10 Creating a Custom Selector for Every Third Element
00 <!DOCTYPE html>
01
02 <html lang="en">
03 <head>
04 <title>Custom selector</title>
05 </head>
06 <body>
07
08 <h2>Every third line gets a blue background color</h2>
09
10 <ul>
11 <li>One</li>
12 <li>Two</li>
13 <li>Three</li>
14 <li>Four</li>
15 <li>Five</li>
16 <li>Six</li>
17 <li>Seven</li>
18 <li>Eight</li>
19 <li>Nine</li>
20 <li>Ten</li>
21 </ul>
22
23
24 <script src=""></script>
25
26 <script>
27 // please externalize this code to an external .js file
28 $(document).ready(function() {
29
30 $.expr[':'].third = function(obj, index, meta, stack) {
31
32 // obj contains the current element
33
34 // index contains the 0 - n, depending on the current
35 // element and the number of elements
36
37 // meta contains an array with the following values:
38 // [":third('bla')", "third", "'", "bla"]
39
40 // stack contains a NodeList, which can be transformed into
41 // an array using $.makeArray(stack)
42
43 return (index + 1) % 3 == 0;
44 };
45
46 $('li:third(\'bla\')').css('background-color', 'blue');
47
48 // li:nth-child(3n) would do the same of course
49 // this example is to show how to implement a custom
50 // selector
51
52 });
53
54 </script>
55 </body>
56 </html>
At the core of jQuery is the sizzle selector engine. This is what gives jQuery the ability to transverse the DOM and select what you want. You can use the sizzle selector in jQuery to create your own selectors. Line 30 shows how this is done. By using $.expr[':'], you are informing jQuery that you are going to build a selector. Adding .third instructs jQuery how the selector will be accessed. The four variables in the anonymous function are standard in creating a custom selector. Each variable has been explained in the comments to help you understand what each one holds.
This example does not use all possibilities that are available for a custom selector. The bla argument passed to it does not have a function. Using the pointers inside the code comments, you can elaborate on the possibilities. | http://www.itworld.com/article/2832934/enterprise-software/jquery-tip--creating-custom-selectors.html | CC-MAIN-2014-52 | refinedweb | 497 | 59.94 |
The 450 Movement
I do peer review and I want you to pay me four hundred and fifty dollars. I’ll even say please.
Introduction
It’s amazing how quickly a perspective can change.
I thought I’d be an academic forever, maybe longer.
That was Plan A.
For all its ridiculous foibles, and the resulting incipient hair loss, and for my many, many attempts to kick its shins, it was still Plan A. I liked it well enough.
But I never wanted to be ‘an academic’.
I wanted to be ‘a scientist’.
And there are flavours of that.
Plan B was always working in wearable tech, wearable physiological data, wearable device design. I’ve been around things which go beep a lot, more than a decade. I’ve used everything, measured everything, broken everything, and generally gone from muddling my way through to being the full-stack equivalent of a wearables weirdo. Every conceivable way you can get data out of a person without puncturing them, I’ve used it.
The Plague didn’t help. Higher education institutions, research institutions, etc. — they’re in A Right State. They never saw this coming, and it blindsided them, utterly. Many of them will hit the rocks, and soon.
So, I got a job at a company building medical wearables for physical therapists.
(Side note: I work here, and my job is pretty great. I’ve always derived a tremendous amount of pleasure from building a physical object, and being able to build a wearable device is… well, it makes me wonder why it wasn’t Plan A. We’re hiring engineers right now and I have to resist the temptation to give people advice for life which consists of “Hey, stop vacillating and start work here.” If you’re an engineer in Denver, call us.)
Having a ‘corporate job’ hasn’t made me dress any better, but it has certainly has changed my perspective on things.
And yesterday, I had a very odd experience. I was clearing out my personal inbox, which has hundreds of unanswered emails, and I found a review request from a journal.
My first thought was: oh, I should hurry up and send them a contract.
That’s my world now.
We need advice? We find a consultant.
We like the consultant? We sign an NDA, so we can talk freely.
We have a productive conversation? We draw up a contract.
Or, maybe we do spec work for someone else? The other way around. But, again, it ends with a contract.
We want something done, we pay for it. The rules apply to us. A little company in a big world. Maybe not a little company for very long, if I have anything to say about it, but we’ll see about that.
This is how commercial relationships are conducted. It is straightforward and ubiquitous. The result is often no more complicated or mysterious than a regular bank or wire transfer. You buy goods and services.
And then, contemplating drawing up a contract to perform a peer review, I realised simultaneously both the utter normality, and the astonishing weirdness of this thought.
And then I laughed myself sick, and had another cup of coffee.
… then I sat down,
wrote up a contract,
shined it up a bit to reflect the fact that it was for peer reviewing,
stuck my tongue firmly in my cheek,
and sent it to the accounting department of the journal group.
Money
A certain irony is: I don’t really need the money now.
But forget me. I don’t matter. This is about you.
This is about your rent, and your tenuous academic job, and your time taken out of your own workflow with increasingly rapid review requests, accommodated in an environment where you have to produce research even faster, and to teach more classes of increasingly anxious students.
Now, you can call this ‘exploitation’ or ‘efficiency’, that depends on your perspective. But the inarguable fact is that academia is a increasingly casualised, difficult, moribund place to work. It squeezes people into dust, and usually people far less able to defend themselves and headbutt circumstances than me.
This is, and I cannot overstress this, awful.
The only people who don’t understand this, and you can check their public resume because they’re usually very proud of it, are people who got tenure during the Taft administration.
These are the only people yammering on about ‘service’ and ‘values’. Everyone else needs the money, because they don’t have a stable full-time job to leverage against.
Ever met a homeless graduate student? I have.
What about someone who couldn’t ‘afford’ to be an academic any more? Yep.
An adjunct who got their classes cut then didn’t have an income? Again.
A pre-tenure professor worried about the debts that got them there? Course.
And now, The Plague. This will get a lot worse.
I’m going to repeat that. This will get a lot worse.
Now, there’s no call to arms here. I’m not going to make the claim that ‘people don’t deserve this’. They don’t, but that’s not the point. The universe is a smelly old thing which has plans for trampling ‘deserve’. We would wish it otherwise.
I am going to make the claim, however, that in our bold new astonishingly tenuous academic hellscape, this is a straightforward matter of commerce. Fiscal reality.
So let’s talk about that, and why I want four hundred and fifty dollars, and why I think you should have four hundred and fifty dollars too.
Cost
My corporate consulting rate is $250 USD per hour. In some circles, that’s unthinkably large. In others it’s embarrassingly small. Pick your poison.
My academic consulting rate is a lot lower, and if I’m being honest with you, I hardly ever charge it. Desperate people have turned up trying to get out of analytical holes, and I simply cannot bring myself to take their money. I just help them. Which is what I would have done if they had just asked, anyway.
So let’s bear in mind as well that while this is academic consulting, journal groups have more money than God. You can read the hundreds of articles about that have been published since… forever. It’s boilerplate. Everyone knows that.
So throw all that together and we can start from some estimate that’s… let’s say $50 to $150 ph. Ballpark.
(Bear in mind, you have to pay legit tax out of this — 15.3% self-employment, plus whatever it adds to your income, which means it’s adding to your highest bracket. For me, this means I’ll keep a hair over half of it. That’s one of the reasons contracting fees are often higher than you’d think.)
Now, I estimate I will spend three to nine hours reviewing a paper. I am quite annoyingly meticulous, actually. This includes responses and editions and wrangling with journal management systems, and everything.
So if we put the low with the high, and the high with the low we get… $450, actually, both ways.
Give me four hundred and fifty dollars.
Business
Universities, throughout every decision they’ve ever made in the last 30 years, have made it very clear that they are businesses. You work for a business. A university may claim to have higher values and they may blather on about them more often, but it is an institution run by professional business administrators. Often they used to work at large banks, hedge funds, higher levels of government, other financial institutions, et al.
Students are customers. Good staff who bring in grant money are assets. On and on it goes. They aren’t joking when they use this language.
Journals, as privately owned or publicly traded companies, have made it even clearer. They publish P&L statements. They have investors, and boards of directors, and they calculate revenue growth, and operating income. Part of their revenue growth and operating income is made out of your research.
So, you work for a business, and a journal is a business, and — under what is at the very least a quasi-commercial agreement — that business is asking for your time to ensure the quality of their core product?
Cool. We are all jolly and mercantile together then, aren’t we?
They can pay you four hundred and fifty dollars.
Conclusion
I have no call to arms, no banners to shake at the sky.
I am not ‘a radical’, I oscillate between poles of the maintenance of basic human dignity, and free speech (I need that one, I can be… Robustly Australian at times).
I don’t want to ‘change the world’.
I’m not the slightest bit upset writing this. I am not angry.
I’m not sticking it to the man.
I am a person with a set of skills, in a commercial market, skills that other people offer to pay for all the time.
I want to be compensated for my labor in the same way any other grinning fool in LITERALLY ANY OTHER COMMERCIAL CONTEXT ON THE ENTIRE PLANET WOULD BE.
I want four hundred and fifty dollars.
Give me four hundred and fifty dollars.
Q&A
Wait, aren’t you that open science guy?
Yep.
But many, many journal groups obviously don’t want what I want.
They want to publish research within a closed environment, not building community resources, not making processes transparent, not sufficiently weeding out bad research, etc.
This is because they don’t care. Some combination of (a) they have no commercial pressure to do so (b) they don’t care (c) it’s too hard (d) they don’t know it’s a good idea.
They are companies. Their fiduciary responsibility is to their shareholders. As far as they are concerned, they owe you nothing as a community resource.
So why would you demand anything from them except money? Clearly it doesn’t work very well, or something might have changed in the shinty-six years we’ve been talking about it.
Help people. Help the community. Bill companies.
What about community journals, not-for-profit journals, and society journals struggling to get by?
I will review their papers. Even now, in my often-dilapidated corporate state. Quickly, and well, and for nothing. It will not even occur to me to charge them money. That is unthinkable and unfair.
Aren’t you holding up the work of people who need the publications?
No. I am offering the journal group the chance to employ me. Wherever possible, I will tell the editor that this is happening. It should take no longer than a regular review acceptance. Either party is entirely entitled at any point to tell me to go fuck myself, and this is FINE. Contracts are often not established for a variety of reasons. This is just A-OK.
If *I* did this, wouldn’t I get in trouble?
From who? Who are they going to rat you out to, your mum? They just asked for a commercial arrangement, you provided one. And they’re going to complain if they don’t like it?
If we put on our Business Hats, let’s think of:
- investment (reviewing papers takes ages and prevents you advancing your career, making money elsewhere, or sitting on the couch and staring happily at the ceiling) and
- return (thankless task, where how many papers you reviewed and how well you did that job counts for nothing). And yes, I know about Publons, where you can see all the reviews I did for free.
Peer review has almost no determination over jobs, tenure, or promotion. Journal editors, especially fancy ones, will lie right to your face and tell you that it’ll ‘get your name in front of the right people’.
It won’t. Or, at least, if it does, it’s vanishingly uncommon.
Peer review has almost no immediate career benefit at all unless you literally steal the manuscript. (Don’t do that.)
The only reason you should do it is to support the community, right?
GOOD. DO THAT. Here are five ways you can do “service” right now:
- Write to an editor at a small community journal and offer to review manuscripts within an area of mutual interest.
- Come up with a sci comm seminar and teach it at a local school.
- Put a statement on your faculty website that says ‘I will help any graduate student, within reason, for free, with no questions asked’.
- Offer to read people’s resumes or conduct mock job interviews.
- Set up email alerts so you can capture and review preprints in your area.
But people don’t do this shit, because it’s work.
Or it isn’t fancy enough for them.
Perhaps they simply like the idea of their opinion being important in the right context. Their idea of service is somehow confined to reviewing for prestige journals only. And never replying to the emails I sent as a graduate student.
I’m an editor and you’re really pissing me off.
So write to someone within your journal’s organisational structure, someone that works for the publishing company that owns your journal and gets paid to do that work, and tell them to pay me.
If you don’t know who that is, I guess that’s a lesson in how utterly divorced they are from you — you’re working for a company for free, and you don’t know a paid representative of your employer?
You’re a stooge.
(Also, I can guarantee that reviews you pay me for will be superb. You’ll be so happy. Just, you know, when your overlords pay me four hundred and fifty dollars.)
Do I have to charge four hundred and fifty dollars?
No. It’s a contract under negotiation. That’s where it starts. You determine where it ends.
What will you do when you get four hundred and fifty dollars?
Probably buy a bottle of rye (I’m out) and give the rest to Rosie’s Place.
What happens when they don’t pay you?
… then I don’t get paid. You must be new at this.
They WON’T pay you, you know that, right?
OK, Mystic Meg. Glad you can read minds.
Seriously: business environments change. Why do you think they change? Magic? The waning of the season? No, you patronising div. Market pressure. I’m just some gink…
… but I wonder what would happen if I brought along several thousand of my best friends?
Anything else?
WHERE’S THE CONTRACT?? <- there.
Now, bear several things in mind.
(1) I’m so incredibly not a lawyer. This is not legal advice. It’s a sample or a template.
(2) This is very US specific. Sorry, I don’t work elsewhere.
(3) Let’s find someone with contract law experience to make sure it’s 100% kosher before handing it out willy-nilly. I can’t even remember where I got the stimulus material. | https://jamesheathers.medium.com/the-450-movement-1f86132a29bd | CC-MAIN-2022-27 | refinedweb | 2,522 | 76.22 |
Java language provides three styles (Single Line, Multi-line, and Javadoc) of comments. Before talking of a particular Java comment type we should know that a Java comment can be inserted anywhere in a program code where a white space can be. Java compiler does not include them in final executable. So, you can insert as many Java comments as you want until they are proven to be useful. Following three styles of comments Java supports. and provide additional information that is not readily available in the code itself.
On the other hand Javadoc comments too inserted into the source code but they are read by a special javadoc tool in order to generate source code documentation.
Java's single line comment starts with two forward slashes with no white spaces (//) and lasts till the end of line. If the comment exceeds one line then put two more consecutive slashes on next line and continue the comment.
Java's single line comments are proved useful for supplying short explanations for variables, function declarations, and expressions. And sometimes to comment out one or more lines of code. Java single line comments are usually not used on consecutive multiple lines for text comments. See the following piece of code demonstrating single line comment
if(x < y) { // begin if block x = y; y = 0; } // end if block
Java's multi-line or slash-star or traditional comment is a piece of text enclosed in slash-star (
/*) and star-slash (
*/). Again there should be no white space between slash and star. Java's multi-line comments are useful when the comment text does not fit into one line; therefore needs to span across lines. Slash-star comments are the only choice when a comment needs to be inserted in middle of the code line. For example:
// CommentDemo.java // Demonstrating multi-line comments public class CommentDemo { public static void main(String[] args) { for (int i = 0; i < 5; /* exits when i reaches to 5 */ i++) { System.out.print(i + " "); } } } OUTPUT ====== 0 1 2 3 4
In
CommentDemo.java a comment is inserted in
for loop header. If we need to supply a comment somewhere in middle of the code line, then we are left with the only choice of slash-star comment.
It is very important to keep following points in mind, while inserting comments in a Java programs.
/*and
//) or single line comments.
Java's slash-star comments end with closing star-slash (
*/). So as soon as a closing
*/ is encountered the comment should be considered ended. If you try to nest a slash-star comment then wherever the
*/ of inner comment is encountered the whole comment will be treated finished, and rest of the text will be assumed usual Java code that will result into compile time error. As an example, below piece of comment code is not correct:
/* This comment contains /* nested comment. */ * The comment ends where the first star-slash (*/) * occurs. Rest of the text after the first * star-slash is treated as usual Java code. */
Nesting of single line or slash-slash (
//) comments would not result into compile time errors, because a single line comment automatically gets ended with the line.
If a Java comment is surrounded by double quotes, it is processed as a string literal instead of a comment by the Java compiler. For example, the following declarations of
String commS, and
String commS1 contain comment like strings but these are not treated as valid Java comments and get printed as it is if printed by
System.out.println().
/* CommentDemo.java * Java comments surrounded by double quotes are processed * as string literals instead of a comment by the Java compiler. */ public class CommentDemo { public static void main(String[] args) { String commS = "/* It looks like a comment but It is treated as string */"; String commS1 = "// It too look like a comment but It is treated as string"; System.out.println(commS); System.out.println(commS1); } } OUTPUT ====== /* It looks like a comment but It is treated as string */ // It too look like a comment but It is treated as string
Java's slash-star (
/* and
//) have no special meaning and get ended with the current line. In real life programming you should not do that nesting because it is confusing and serve no purpose.
Vice versa Java's double slash (
/**) comments have no special meaning but create confusion. You must avoid doing that.
Java programs interpret Unicode characters as usual tokens even in comments because Unicode sequence is processed early in the Java compiler's lexical scanning of the source file, even before comments are processed. So at compile time during code scanning, wherever a Unicode presenting a character appears it is recognized as usual Java token. For example, Unicode for character
* is
\u002a and for
/ is
\u002f; therefore you can write a comment as
/* comment is ended by using Unicode characters \u002a\u002f. Here, Java compiler will recognize
\u002a as
*, and
\u002f as
/ and the compiler will treat it a right comment. Here goes an example.
/* UnicodeCommentDemo.java * Java programs are written using Unicode characters * Unicode presenting a character recognized as usual Java token */ public class UnicodeCommentDemo { public static void main(String[] args) { double pi = Math.PI; /* multiplies pi by 4 \u002a\u002f // above comment is lexically equivalent to the following /* multiplies pi by 4 */ System.out.println(pi \u002A 4); // \u002A is Unicode of * } } OUTPUT ====== 12.566370614359172
The newline character, if supplied by its Unicode then you can write more than one statement on a single line as follows:
The above piece of code will be compiled successfully and print 10 on the screen. The Unicode
\u000a supplies a new line to end the first comment.
Java documentation (Javadoc) comments are written to document the APIs a class reveals to its users. Users of a class, of course, are programmers. API documentation is developed as part of the source code and kept in source files. By using Java documentation comments classes, fields, constructors and methods are documented.
Javadoc is a tool which comes as part of JDK. This tool generates html documentation from the source files. It parses the comments enclosed in
/** and
*/ from
.java source files and outputs html documentation comments in form of html pages. That's the reason comments surrounded by
/** and
*/ are called Javadoc comments. Under this topic we will discuss how Javadoc comments are written, what guidelines should be followed while writing Javadoc comments, how to use
javadoc tool to generate Java documentation comments, and how to see generated Javadoc comments.
Java documentation comments are written in html surrounded by
/** and
*/ and like traditional comments can span multiple lines of code.
Java documentation (Javadoc) comment is placed just before the entity to be documented. As said before, it can be a class, field, constructor, or method. A documentation comment is formed of two parts -- a description followed by one or more Javadoc tags. Predefined Javadoc tags (whose names start with
@) control the look of resultant HTML page. Let's take a small example to show how documentation comments are written. In mentioned example, Javadoc comments are in italic green.
// JavadocCommentDemo.java /** * First sentence of the comment should be a summary sentence. * Documentation comment is written in HTML, so it can * contain HTML tags as well. * For example, below is a paragraph mark to separate * description text from Javadoc tags. * <p/> * @author Krishan Kumar */ public class JavadocCommentDemo { /** main method * @param args String[] */ public static void main(String[] args) { System.out.println(sqrt(16)); } /** * computes sqrt of passed number of type double. * @param x double * @return sqrt of x */ public static double sqrt (double x) { return Math.sqrt(x); } }
From the comments inserted in
.java file HTML documentation is generated with help of
javadoc tool. The
javadoc is a tool that comes with JDK and used to generate HTML documentation pages. But, before trying
javadoc tool to generate documentation pages we will see Javadoc tags that starts with
@ symbol (for an instance,
@author tag used in
JavadocCommentDemo.java)
Javadoc tags follow the syntax
@tag [tag description]. The
<tag description> is an optional part, but the tag would have no worth if description is omitted. Table 1 explains most used tags concisely.
In addition to above Javadoc tags listed in Table 1, there are a few more. Aforementioned are the most used ones.
While documenting an interface, class, field, method or constructor Javadoc tags are included in the given order depending upon their applicability to the item.
@author(classes and interfaces only, required)
@version(classes and interfaces only, required.)
@param(methods and constructors only)
@return(methods only)
@exception(@throws is a synonym added in Javadoc 1.2)
@seeadditional information (mostly another class)
@sinceversion number since the class exists
@serial(or @serialField or @serialData)
@deprecatedshould not be used, it's for backward compatibility
To generate Javadoc Comments in form of HTML documentation pages, the
javadoc tool is run from the command line much like the Java compiler. This tool is invoked by supplying
javadoc command along with required number of command line parameters. Following is the command line syntax of
javadoc tool.
command-line# javadoc [options] [file list separated by spaces]
Here are some Javadoc options:
-author- Include
@authorparagraphs
-classpath <pathlist>- Specify where to find user
.classfiles.
-d <directory>- Destination directory for output files.
-private- generated documentation will include private fields and methods (only public and protected ones are included by default).
-sourcepath <pathlist>- Specify where to find source files to generate documentation pages.
-version- generated documentation will include a version section
Suppose our example file
JavadocCommentDemo.java is located on C:\ drive and to generate Javadoc documentation for that, we have to execute the following command.
C:\> javadoc -author JavadocCommentDemo.java
On successful execution of the above command you will get a JavadocCommentDemo.html page along with a few more html pages generated on C:\ drive.
In this tutorial we talked of Java's single line, multi-line, and Javadoc comments. Single line, and multi-line comments are also called implementation comments, while Javadoc comments are called documentation comments. Java's single line comments are used for one lines explanation they are also called trailing or end-of-line comments. Multi-line comments or slash-star comments are called block comments. Java's block comments are used when more than one line of comments are written. Javadoc comments or documentation comments are inserted to describe classes, interfaces, fields, methods, or constructors. | http://cs-fundamentals.com/java-programming/java-comments-javadoc-single-multi-line.php | CC-MAIN-2017-17 | refinedweb | 1,735 | 54.83 |
Apache OpenOffice (AOO) Bugzilla – Issue 22516
bullets turn into > when file imported from Word v.X is reexported to v.X
Last modified: 2013-08-07 14:38:26 UTC
Create a file in the OS X version of Word. Simple file:
* 1
* 2
import into OO 1.1 (version in contrib directory). Export to .doc format again. The resulting file is
OK on Windows version of Word, but if you open in OS X Word, the bullets have turned into }.
Note that this has nothing to do with starsymb or opensymbol. Word uses Symbol. I have a
converted version of Symbol in OO. It looks fine in OO. However if I look at the definition of the
bullet in OO, the Symbol font looks different from the contents of the actual font: in addition to the
glyphs that are actually there, OO claims that there's a duplicate set of glyphs at F000 + the actual
value. The bullet is shown as F0B7. Note that OS X Word doesn't understand Unicode. I wonder
whether the bullet is being written out as F0B7. If so, Windows Word would understand, but Mac
Word might not. However I have no way to look at the binary Word file, so I can't tell what is going
on.
If I export to RTF and bring it back into OS X Word, the bullets are right. However the spacing is
now wrong: the first line text starts right after the bullet, rather than being tabbed properly.
I can give you example files, but this is easy enough to reproduce.
When I author within OO, this is easy to fix. Rather than fighting with the problems of the various
symbol fonts, I set the bullets to be the bullet character from my text font. I think that's the right
default anyway: the designed of the font presumably chose a bullet of the size that he thought
would look right with his font. Why go to some other font that may or may not work? I think that
would be an easier solution than embedding opensym or some of the other things I've seen
discusssed in the reports here. But that doesn't solve the problem of files imported from Word.
You don't want to change those to a different font that may not look the same.
Incidentally, if you need any magic done in the fonts to help with this, I'd be happy to help. I have a
copy of Fontlab, and I'm fairly familiar with fonts. I have access to Windows, Mac and Linux. But I
doubt that this is an issue with the font. I suspect the OO code itself is doing something
inappropriate with the font encoding.
If you want to contact me, my email address is hedrick@rutgers.edu.
oooqa keyword added.
I will try to reproduce when I get a chance. Thanks for the helpful
info.
I have reproduced the problem using MS Word for Mac OS X v.X, and OOo
1.1(rc5) pre-release for Mac OS X. I will attach the sample documents. And
just to be clear when the reporter says import/export to/from OOo he means
open/save.
Created attachment 11469 [details]
Original MS Word for Mac OS X v.X sample document
Created attachment 11470 [details]
Same document imported into OOo 1.1 and saved back as MS Word
HI->MRU: See Platform.
SBA: Target set to 2.0.
MRU->CP: I checked this with Win/Linux. It seems to be MAC only problem, please
reassign to correct developer. Thanks.
cp->Dan: I have no idea about this one (we don't have MS Office for Mac here)
Greetigs Dan,
are you going to work on this issue in the 2.0 timeframe Martin described on
releases?
If not I would like to change the target milestone.
Thanks, Stefan
Retarget to OOoLater.
Reset assignee on issues not touched by assignee in more than 2000 days. | https://bz.apache.org/ooo/show_bug.cgi?id=22516 | CC-MAIN-2020-45 | refinedweb | 668 | 76.01 |
wcsncmp − compare two fixed-size wide-character strings
#include <wchar.h>
int wcsncmp(const wchar_t *s1, const wchar_t *s2, size_t n);
The wcsncmp() function is the wide-character equivalent of the strncmp(3) function. It compares the wide-character string pointed to by s1 and the wide-character string pointed to by s2, but at most n wide characters from each string. In each string, the comparison extends only up to the first occurrence of a null wide character (L'\0'), if any.
The wcsncmp() function returns zero if the wide-character strings at s1].
C99.
strncmp(3), wcsncasecmp(3)
This page is part of release 3.51 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | https://manpag.es/f19/3+wcsncmp | CC-MAIN-2022-40 | refinedweb | 128 | 65.42 |
What is the best way to get the first item from an iterable matching a condition?
In Python, I would like to get the first item from a list matching a condition. For example, the following function is adequate:
def first(the_iterable, condition=lambda x: True):
    for i in the_iterable:
        if condition(i):
            return i
This function could be used something like this:
>>> first(range(10))
0
>>> first(range(10), lambda i: i > 3)
4
However, I can't think of a good built-in / one-liner to let me do this (and I don't particularly want to copy this function around if I don't have to). Any ideas?
(It's important that the resulting method not process the entire list, which could be quite large.)
Answers
In Python 2.6 or better:
next(x for x in the_iterable if x > 3)
if you want StopIteration to be raised if no matching element is found,
next( (x for x in the_iterable if x>3), default_value)
if you want default_value (e.g. None) to be returned instead. Note that you need an extra pair of parentheses around the generator expression in this case - they are needed always when the generator expression isn't the only argument.
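For instance, with a concrete iterable both forms behave as described:

```python
the_iterable = range(10)

# raises StopIteration if nothing matches
print(next(x for x in the_iterable if x > 3))           # -> 4

# falls back to the default instead of raising
print(next((x for x in the_iterable if x > 99), None))  # -> None
```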
I see most answers resolutely ignore the next built-in and so I assume that for some mysterious reason they're 100% focused on versions 2.5 and older -- without mentioning the Python-version issue (but then I don't see that mention in the answers that do mention the next built-in, which is why I thought it necessary to provide an answer myself -- at least the "correct version" issue gets on record this way;-).
In 2.5, the .next() method of iterators immediately raises StopIteration if the iterator immediately finishes -- i.e., for your use case, if no item in the iterable satisfies the condition. If you don't care (i.e., you know there must be at least one satisfactory item) then just use .next() (best on a genexp, just as with the next built-in in Python 2.6 and better).
If you do care, wrapping things in a function as you had first indicated in your Q seems best, and while the function implementation you proposed is just fine, you could alternatively use itertools, a for...: break loop, or a genexp, or a try/except StopIteration as the function's body, as various answers suggested. There's not much added value in any of these alternatives so I'd go for the starkly-simple version you first proposed.
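One possible function body along the lines suggested, using try/except StopIteration with an explicit fallback (the default parameter here is our addition, not part of the original question):

```python
def first(iterable, condition=lambda x: True, default=None):
    """Return the first item satisfying `condition`, or `default` if none does."""
    try:
        return next(x for x in iterable if condition(x))
    except StopIteration:
        return default
```

first(range(10), lambda i: i > 3) still returns 4, while first([], default='none') returns 'none' instead of raising.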
Similar to using ifilter, you could use a generator expression:
>>> (x for x in xrange(10) if x > 5).next()
6
In either case, you probably want to catch StopIteration though, in case no elements satisfy your condition.
Technically speaking, I suppose you could do something like this:
>>> foo = None
>>> for foo in (x for x in xrange(10) if x > 5): break
...
>>> foo
6
It would avoid having to make a try/except block. But that seems kind of obscure and abusive to the syntax.
The itertools module contains a filter function for iterators. The first element of the filtered iterator can be obtained by calling next() on it:
from itertools import ifilter

print ifilter((lambda i: i > 3), range(10)).next()
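In Python 3, ifilter is gone but the built-in filter is itself lazy, so the same idea becomes:

```python
# filter returns a lazy iterator in Python 3, so next() consumes only
# as many elements as needed to find the first match
print(next(filter(lambda i: i > 3, range(10))))  # -> 4
```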
For older versions of Python where the next built-in doesn't exist:
(x for x in range(10) if x > 3).next()
I would write this
next(x for x in xrange(10) if x > 3)
Since you've requested a built-in one-liner, this will avoid the issue of a StopIteration exception, though it requires that your iterable is small so you can cast it to a list, since that is the only construct I know of which will swallow a StopIteration and let you peek at the values:
(lambda x:x[0] if x else None)(list(y for y in ITERABLE if CONDITION))
(If no element matches, you will get None rather than a StopIteration exception.)
Damn Exceptions!
I love this answer. However, since next() raises a StopIteration exception when there are no items, I would use the following snippet to avoid an exception:
a = []
item = next(x for x in a) if any(a) else None
For example,
a = []
item = next(x for x in a)
will raise a StopIteration exception:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
As a reusable, documented and tested function
def first(iterable, condition=lambda x: True):
    """
    Returns the first item in the `iterable` that
    satisfies the `condition`.

    If the condition is not given, returns the first item of
    the iterable.

    Raises `StopIteration` if no item satisfying the condition is found.

    >>> first((1, 2, 3), condition=lambda x: x % 2 == 0)
    2
    >>> first(range(3, 100))
    3
    >>> first(())
    Traceback (most recent call last):
    ...
    StopIteration
    """
    return next(x for x in iterable if condition(x))
Oneliner:
thefirst = [i for i in range(10) if i > 3][0]
If you're not sure that any element will be valid according to the criteria, you should enclose this in a try/except, since that [0] can raise an IndexError.
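Guarding the subscript as suggested might look like this:

```python
try:
    thefirst = [i for i in range(10) if i > 30][0]
except IndexError:
    thefirst = None  # no element matched the condition

print(thefirst)  # -> None
```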
Django, Databases, and Decorators: Routing Requests to Different Databases
Connecting and using multiple databases in Django
As a growing company with a growing customer base, we’re always thinking about ways to boost the performance of our app (but who isn’t?). We’ve got some long-running queries that occasionally result in page timeouts, and if not, some general slowness for the user. These are things to be avoided. This week, we connected our follower database to the app to pull some of these read-only queries off of the main database to decrease the load.
Initial setup of an additional (already existing) database is fairly basic, and can be done by following the docs. We have something like this in our settings:
DATABASES = {
'default': {
...
},
'read-only': {
...
}
}
All database queries, both reads and writes, will run off of ‘default’ unless you specify otherwise. You can make a query off of the ‘read-only’ database like so:
Object.objects.using('read-only').get(id=1)
Seems great. And is great, if your scope is limited. But I don’t want
.using('read-only’) calls sprinkled all throughout my app, and I don’t want to remember to write them. We wanted to be able to specify a different database than the default for an entire view method (controller action, for the Rails folks out there), that would apply to every call within it. Decorators to the rescue!
What are decorators?
Coming from a Rails background, the closest equivalent I can come up with is a
before_action, though Django's decorators are actually a little more versatile than that. Basically, they are functions that take another function as an argument and execute their own code around it. Whereas
before_actions in Rails happen, definitionally, before the action (function) is called, decorators wrap around a function and can execute code both before and after. Here’s a common use case:
from functools import wraps

from django.http import HttpResponse

def login_required_or_401(view_func):
    def _decorator(request, *args, **kwargs):
        if not request.user.is_authenticated():
            return HttpResponse(status=401)
        return view_func(request, *args, **kwargs)
    return wraps(view_func)(_decorator)
You’d call this like so:
@login_required_or_401
def some_view_function(self):
...
Writing a custom decorator
A lot of what comes next was inspired by this blog, with some tweaks to fit our use case. You can read more about threading here.
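The property of threading.local being relied on here is that each thread sees its own copy of any attribute set on it, so one request's database choice can't leak into another's. A quick self-contained demonstration (the names are illustrative):

```python
import threading

threadlocal = threading.local()
results = {}

def worker(db_name):
    # Each thread sets and then reads its own copy of the attribute.
    threadlocal.DB_FOR_READ_ONLY = db_name
    results[db_name] = getattr(threadlocal, 'DB_FOR_READ_ONLY', None)

threads = [threading.Thread(target=worker, args=(name,))
           for name in ('read-only', 'default')]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.items()))
# [('default', 'default'), ('read-only', 'read-only')] -- no cross-thread leakage

# The main thread never set the attribute, so it sees nothing:
print(getattr(threadlocal, 'DB_FOR_READ_ONLY', 'unset'))  # unset
```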
Where you put your decorator depends on where you think you’ll use it — you know best! We may use ours project-wide, so they’re in an app called ‘common’ we have for just that purpose, in a file called decorators.py. Class based decorators aren’t particularly common, but it proves useful for reasons we’ll come back to.
import threading
from functools import wraps

threadlocal = threading.local()

class use_db_for_reads(object):
    def __init__(self, database_name):
        self.database_name = database_name

    def __enter__(self):
        setattr(threadlocal, 'DB_FOR_READ_ONLY', self.database_name)

    def __exit__(self, exc_type, exc_value, traceback):
        setattr(threadlocal, 'DB_FOR_READ_ONLY', None)

    def __call__(self, test_func):
        @wraps(test_func)
        def inner(*args, **kwargs):
            with self:
                return test_func(*args, **kwargs)
        return inner

def get_thread_local(attr, default=None):
    return getattr(threadlocal, attr, default)
This is what I mean by being able to wrap your code, not just execute something before it, using the
__enter__ and
__exit__ functions.
You then need a router. More info on routers can be found back in the docs, but basically, they tell the code which database to execute a request on. Our writes will always go to default, while our reads will come from the ‘read-only’ database, if we use the decorator from above:
from common.decorators import get_thread_local
class AnalyticsRouter(object):
def db_for_read(self, model, **hints):
return get_thread_local('DB_FOR_READ_ONLY', 'default')
def db_for_write(self, model, **hints):
return 'default'
def allow_relation(self, obj1, obj2, **hints):
return True
Back in
settings.py, don’t forget to add your new router, keeping in mind that they’re run through sequentially, and stop once a match has been found — order matters.
DATABASE_ROUTERS = ['common.routers.AnalyticsRouter']
Using the decorator
The benefit of the class based decorator is that it can be used as a decorator:
@use_db_for_reads('read-only')
def view_function(request, *args):
...
But it can also be used for a block, if you only want queries in part of a view to be read off the read-only database:
def view_function(request, *args):
with use_db_for_reads('read-only'):
...
#Below will use the default db
... | https://adriennedomingus.medium.com/django-databases-and-decorators-14fa1f9a5c97 | CC-MAIN-2021-21 | refinedweb | 711 | 53.92 |
>>> +/**
>>> + * rol16 - rotate a 16-bit value left
>>> + * @word: value to rotate
>>> + * @shift: bits to roll
>>> + */
>>> +static inline __u16 rol16(__u16 word, unsigned int shift)
>>> +{
>>> +	return (word << shift) | (word >> (16 - shift));
>>> +}
>>
>> This doesn't work for shift values of 0: you get word >> 16, and
>> shifts greater than or equal to the word size aren't valid C. GCC
>> will warn about this, too.
>
> On the other hand, a value narrower than int will always be promoted
> first,

Erm, yes of course. It is promoted to _signed_ int though, but
that works okay here.

> so this is not a problem in this case.

It still needs documentation for the valid values of "shift".


Segher
How to Deploy a Resilient Go Application to DigitalOcean Kubernetes
The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.
Introduction
Docker is a containerization tool used to provide applications with a filesystem holding everything they need to run, ensuring that the software will have a consistent run-time environment and will behave the same way regardless of where it is deployed. Kubernetes is a cloud platform for automating the deployment, scaling, and management of containerized applications.
By leveraging Docker, you can deploy an application on any system that supports Docker with the confidence that it will always work as intended. Kubernetes, meanwhile, allows you to deploy your application across multiple nodes in a cluster. Additionally, it handles key tasks such as bringing up new containers should any of your containers crash. Together, these tools streamline the process of deploying an application, allowing you to focus on development.
In this tutorial, you will build an example application written in Go and get it up and running locally on your development machine. Then you’ll containerize the application with Docker, deploy it to a Kubernetes cluster, and create a load balancer that will serve as the public-facing entry point to your application.
Prerequisites
Before you begin this tutorial, you will need the following:
- A development server or local machine from which you will deploy the application. Although the instructions in this guide will largely work for most operating systems, this tutorial assumes that you have access to an Ubuntu 18.04 system configured with a non-root user with sudo privileges, as described in our Initial Server Setup for Ubuntu 18.04 tutorial.
- The docker command-line tool installed on your development machine. To install this, follow Steps 1 and 2 of our tutorial on How to Install and Use Docker on Ubuntu 18.04.
- The kubectl command-line tool installed on your development machine. To install this, follow this guide from the official Kubernetes documentation.
- A free account on Docker Hub to which you will push your Docker image. To set this up, visit the Docker Hub website, click the Get Started button at the top-right of the page, and follow the registration instructions.
- A Kubernetes cluster. You can provision a DigitalOcean Kubernetes cluster by following our Kubernetes Quickstart guide. You can still complete this tutorial if you provision your cluster from another cloud provider. Wherever you procure your cluster, be sure to set up a configuration file and ensure that you can connect to the cluster from your development server.
Step 1 — Building a Sample Web Application in Go
In this step, you will build a sample application written in Go. Once you containerize this app with Docker, it will serve
My Awesome Go App in response to requests to your server’s IP address at port
3000.
Get started by updating your server’s package lists if you haven’t done so recently:
- sudo apt update
Then install Go by running:
- sudo apt install golang
Next, make sure you’re in your home directory and create a new directory which will contain all of your project files:
- cd && mkdir go-app
Then navigate to this new directory:
- cd go-app/
Use
nano or your preferred text editor to create a file named
main.go which will contain the code for your Go application:
- nano main.go
The first line in any Go source file is always a
package statement that defines which code bundle the file belongs to. For executable files like this one, the
package statement must point to the
main package:
package main
Following that, add an
import statement where you can list all the libraries the application will need. Here, include
fmt, which handles formatted text input and output, and
net/http, which provides HTTP client and server implementations:
package main

import (
    "fmt"
    "net/http"
)
Next, define a
homePage function which will take in two arguments:
http.ResponseWriter and a pointer to
http.Request. In Go, a
ResponseWriter interface is used to construct an HTTP response, while
http.Request is an object representing an incoming request. Thus, this block reads incoming HTTP requests and then constructs a response:
. . .

import (
    "fmt"
    "net/http"
)

func homePage(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "My Awesome Go App")
}
After this, add a
setupRoutes function which will map incoming requests to their intended HTTP handler functions. In the body of this
setupRoutes function, add a mapping of the
/ route to your newly defined
homePage function. This tells the application to print the
My Awesome Go App message even for requests made to unknown endpoints:
. . .

func homePage(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "My Awesome Go App")
}

func setupRoutes() {
    http.HandleFunc("/", homePage)
}
And finally, add the following
main function. This will print out a string indicating that your application has started. It will then call the
setupRoutes function before listening and serving your Go application on port
3000.
. . .

func setupRoutes() {
    http.HandleFunc("/", homePage)
}

func main() {
    fmt.Println("Go Web App Started on Port 3000")
    setupRoutes()
    http.ListenAndServe(":3000", nil)
}
After adding these lines, this is how the final file will look:
package main

import (
    "fmt"
    "net/http"
)

func homePage(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "My Awesome Go App")
}

func setupRoutes() {
    http.HandleFunc("/", homePage)
}

func main() {
    fmt.Println("Go Web App Started on Port 3000")
    setupRoutes()
    http.ListenAndServe(":3000", nil)
}
Save and close this file. If you created this file using
nano, do so by pressing
CTRL + X,
Y, then
ENTER.
Next, run the application using the following
go run command. This will compile the code in your
main.go file and run it locally on your development machine:
- go run main.go
Output
Go Web App Started on Port 3000
This output confirms that the application is working as expected. It will run indefinitely, however, so close it by pressing
CTRL + C.
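If you would rather check the handler without a browser, Go's net/http/httptest package can call it directly. Here is a standalone sketch; it duplicates the homePage handler so the file runs on its own:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
)

// Copy of the handler from main.go so this file runs on its own.
func homePage(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "My Awesome Go App")
}

func main() {
	// Build a fake request and a recorder that captures the response.
	req := httptest.NewRequest(http.MethodGet, "/", nil)
	rec := httptest.NewRecorder()

	homePage(rec, req)

	res := rec.Result()
	defer res.Body.Close()
	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		panic(err)
	}
	if string(body) != "My Awesome Go App" {
		panic("unexpected response: " + string(body))
	}
	fmt.Println("handler ok")
}
```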
Throughout this guide, you will use this sample application to experiment with Docker and Kubernetes. To that end, continue reading to learn how to containerize your application with Docker.
Step 2 — Dockerizing Your Go Application
In its current state, the Go application you just created is only running on your development server. In this step, you’ll make this new application portable by containerizing it with Docker. This will allow it to run on any machine that supports Docker containers. You will build a Docker image and push it to a central public repository on Docker Hub. This way, your Kubernetes cluster can pull the image back down and deploy it as a container within the cluster.
The first step towards containerizing your application is to create a special script called a Dockerfile. A Dockerfile typically contains a list of instructions and arguments that run in sequential order so as to automatically perform certain actions on a base image or create a new one.
Note: In this step, you will configure a simple Docker container that will build and run your Go application in a single stage. If, in the future, you want to reduce the size of the container where your Go applications will run in production, you may want to look into multi-stage builds.
Create a new file named
Dockerfile:
- nano Dockerfile
At the top of the file, specify the base image needed for the Go app:
FROM golang:1.12.0-alpine3.9
Then create an
app directory within the container that will hold the application’s source files:
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
Below that, add the following line which copies everything in the
root directory into the
app directory:
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app
Next, add the following line which changes the working directory to
app, meaning that all the following commands in this Dockerfile will be run from that location:
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app
WORKDIR /app
Add a line instructing Docker to run the
go build -o main command, which compiles the binary executable of the Go app:
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .
Then add the final line, which will run the binary executable:
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]
Save and close the file after adding these lines.
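As an aside, the multi-stage approach mentioned in the note earlier splits compilation from the final image, so production containers ship only the compiled binary. A sketch for illustration, separate from the single-stage Dockerfile this tutorial uses (the base image tags are one reasonable choice):

```dockerfile
# Stage 1: build the binary using the full Go toolchain
FROM golang:1.12.0-alpine3.9 AS builder
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .

# Stage 2: copy only the compiled binary into a minimal base image
FROM alpine:3.9
COPY --from=builder /app/main /app/main
CMD ["/app/main"]
```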
Now that you have this
Dockerfile in the root of your project, you can create a Docker image based off of it using the following
docker build command. This command includes the
-t flag which, when passed the value
go-web-app, will name the Docker image
go-web-app and tag it.
Note: In Docker, tags allow you to convey information specific to a given image, such as its version number. The following command doesn’t provide a specific tag, so Docker will tag the image with its default tag:
latest. If you want to give an image a custom tag, you would append the image name with a colon and the tag of your choice, like so:
- docker build -t sammy/image_name:tag_name .
Tagging an image like this can give you greater control over your images. For example, you could deploy an image tagged
v1.1 to production, but deploy another tagged
v1.2 to your pre-production or testing environment.
The final argument you’ll pass is the path:
.. This specifies that you wish to build the Docker image from the contents of the current working directory. Also, be sure to update
sammy to your Docker Hub username:
- docker build -t sammy/go-web-app .
This build command will read all of the lines in your
Dockerfile, execute them in order, and then cache them, allowing future builds to run much faster:
Output
. . .
Successfully built 521679ff78e5
Successfully tagged sammy/go-web-app:latest
Once this command finishes building it, you will be able to see your image when you run the
docker images command like so:
- docker images
Output
REPOSITORY          TAG       IMAGE ID       CREATED         SIZE
sammy/go-web-app    latest    4ee6cf7a8ab4   3 seconds ago   355MB
Next, use the following command create and start a container based on the image you just built. This command includes the
-it flag, which specifies that the container will run in interactive mode. It also has the
-p flag which maps the port on which the Go application is running on your development machine — port
3000 — to port
3000 in your Docker container:
- docker run -it -p 3000:3000 sammy/go-web-app
Output
Go Web App Started on Port 3000
If there is nothing else running on that port, you’ll be able to see the application in action by opening up a browser and navigating to the following URL:
Note: If you’re following this tutorial from your local machine instead of a server, visit the application by instead going to the following URL:
After checking that the application works as expected in your browser, stop it by pressing
CTRL + C in your terminal.
When you deploy your containerized application to your Kubernetes cluster, you’ll need to be able to pull the image from a centralized location. To that end, you can push your newly created image to your Docker Hub image repository.
Run the following command to log in to Docker Hub from your terminal:
- docker login
This will prompt you for your Docker Hub username and password. After entering them correctly, you will see
Login Succeeded in the command’s output.
After logging in, push your new image up to Docker Hub using the
docker push command, like so:
- docker push sammy/go-web-app
Once this command has successfully completed, you will be able to open up your Docker Hub account and see your Docker image there.
Now that you’ve pushed your image to a central location, you’re ready to deploy it to your Kubernetes cluster. First, though, we will walk through a brief process that will make it much less tedious to run
kubectl commands.
Step 3 — Improving Usability for kubectl
By this point, you’ve created a functioning Go application and containerized it with Docker. However, the application still isn’t publicly accessible. To resolve this, you will deploy your new Docker image to your Kubernetes cluster using the
kubectl command line tool. Before doing this, though, let’s make a small change to the Kubernetes configuration file that will help to make running
kubectl commands less laborious.
By default, when you run commands with the
kubectl command-line tool, you have to specify the path of the cluster configuration file using the
--kubeconfig flag. However, if your configuration file is named
config and is stored in a directory named
~/.kube,
kubectl will know where to look for the configuration file and will be able pick it up without the
--kubeconfig flag pointing to it.
To that end, if you haven’t already done so, create a new directory called
~/.kube:
- mkdir ~/.kube
Then move your cluster configuration file to this directory, and rename it
config in the process:
- mv clusterconfig.yaml ~/.kube/config
Moving forward, you won’t need to specify the location of your cluster’s configuration file when you run
kubectl, as the command will be able to find it now that it’s in the default location. Test out this behavior by running the following
get nodes command:
- kubectl get nodes
This will display all of the nodes that reside within your Kubernetes cluster. In the context of Kubernetes, a node is a server or a worker machine on which one or more pods can be deployed:
Output
NAME                                        STATUS   ROLES    AGE   VERSION
k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfd   Ready    <none>   1m    v1.13.5
k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfi   Ready    <none>   1m    v1.13.5
k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfv   Ready    <none>   1m    v1.13.5
With that, you’re ready to move on and deploy your application to your Kubernetes cluster. You will do this by creating two Kubernetes objects: one that will deploy the application to some pods in your cluster and another that will create a load balancer, providing an access point to your application.
Step 4 — Creating a Deployment
RESTful resources make up all the persistent entities within a Kubernetes system, and in this context they're commonly referred to as Kubernetes objects. It's helpful to think of Kubernetes objects as the work orders you submit to Kubernetes: you list what resources you need and how they should work, and then Kubernetes will constantly work to ensure that they exist in your cluster.
One kind of Kubernetes object, known as a deployment, is a set of identical, indistinguishable pods. In Kubernetes, a pod is a grouping of one or more containers which are able to communicate over the same shared network and interact with the same shared storage. A deployment runs more than one replica of the parent application at a time and automatically replaces any instances that fail, ensuring that your application is always available to serve user requests.
In this step, you’ll create a Kubernetes object description file, also known as a manifest, for a deployment. store your manifests in a separate subdirectory so as to keep everything organized.
Create a new file called
deployment.yml:
- nano deployment.yml
Different versions of the Kubernetes API contain different object definitions, so at the top of this file you must define the
apiVersion you’re using to create this object. For the purpose of this tutorial, you will be using the
apps/v1 grouping as it contains many of the core Kubernetes object definitions that you’ll need in order to create a deployment. Add a field below
apiVersion describing the
kind of Kubernetes object you’re creating. In this case, you’re creating a
Deployment:
---
apiVersion: apps/v1
kind: Deployment
Then define the
metadata for your deployment. A
metadata field is required for every Kubernetes object as it contains information such as the unique
name of the object. This
name is useful as it allows you to distinguish different deployments from one another and identify them using names that are human-readable:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app
Next, you’ll build out the
spec block of your
deployment.yml. A
spec field is a requirement for every Kubernetes object, but its precise format differs for each type of object. In the case of a deployment, it can contain information such as the number of replicas you want to run. In Kubernetes, a replica is the number of pods you want to run in your cluster. Here, set the number of
replicas to
5:
. . .
metadata:
  name: go-web-app
spec:
  replicas: 5
Next, create a
selector block nested under the
spec block. This will serve as a label selector for your pods. Kubernetes uses label selectors to define how the deployment finds the pods which it must manage.
Within this
selector block, define
matchLabels and add the
name label. Essentially, the
matchLabels field tells Kubernetes what pods the deployment applies to. In this example, the deployment will apply to any pods with the name
go-web-app:
. . .
spec:
  replicas: 5
  selector:
    matchLabels:
      name: go-web-app
After this, add a
template block. Every deployment creates a set of pods using the labels specified in a
template block. The first subfield in this block is
metadata which contains the
labels that will be applied to all of the pods in this deployment. These labels are key/value pairs that are used as identifying attributes of Kubernetes objects. When you define your service later on, you can specify that you want all the pods with this
name label to be grouped under that service. Set this
name label to
go-web-app:
. . .
spec:
  replicas: 5
  selector:
    matchLabels:
      name: go-web-app
  template:
    metadata:
      labels:
        name: go-web-app
The second part of this
template block is the
spec block. This is different from the
spec block you added previously, as this one applies only to the pods created by the
template block, rather than the whole deployment.
Within this
spec block, add a
containers field and once again define a
name attribute. This
name field defines the name of any containers created by this particular deployment. Below that, define the
image you want to pull down and deploy. Be sure to change
sammy to your own Docker Hub username:
. . .
  template:
    metadata:
      labels:
        name: go-web-app
    spec:
      containers:
        - name: application
          image: sammy/go-web-app
Following that, add an
imagePullPolicy field set to
IfNotPresent which will direct the deployment to only pull an image if it has not already done so before. Then, lastly, add a
ports block. There, define the
containerPort which should match the port number that your Go application listens on. In this case, the port number is
3000:
. . .
    spec:
      containers:
        - name: application
          image: sammy/go-web-app
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
The full version of your
deployment.yml will look like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app
spec:
  replicas: 5
  selector:
    matchLabels:
      name: go-web-app
  template:
    metadata:
      labels:
        name: go-web-app
    spec:
      containers:
        - name: application
          image: sammy/go-web-app
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
Save and close the file.
Next, apply your new deployment with the following command:
- kubectl apply -f deployment.yml
Note: For more information on all of the configuration available to you for deployments, please check out the official Kubernetes documentation here: Kubernetes Deployments
In the next step, you’ll create another kind of Kubernetes object which will manage how you access the pods that exist in your new deployment. This service will create a load balancer which will then expose a single IP address, and requests to this IP address will be distributed to the replicas in your deployment. This service will also handle port forwarding rules so that you can access your application over HTTP.
Step 5 — Creating a Service
Now that you have a successful Kubernetes deployment, you’re ready to expose your application to the outside world. In order to do this, you’ll need to define another kind of Kubernetes object: a service. This service will expose the same port on all of your cluster’s nodes. Your nodes will then forward any incoming traffic on that port to the pods running your application.
Note: For clarity, we will define this service object in a separate file. However, it is possible to group multiple resource manifests in the same YAML file, as long as they’re separated by
---. See this page from the Kubernetes documentation for more details.
Create a new file called
service.yml:
- nano service.yml
Start this file off by again defining the
apiVersion and the
kind fields in a similar fashion to your
deployment.yml file. This time, point the
apiVersion field to
v1, the Kubernetes API commonly used for services:
---
apiVersion: v1
kind: Service
Next, add the name of your service in a
metadata block as you did in
deployment.yml. This could be anything you like, but for clarity we will call it
go-web-service:
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
Next, create a
spec block. This
spec block will be different than the one included in your deployment, and it will contain the
type of this service, as well as the port forwarding configuration and the
selector.
Add a field defining this service’s
type and set it to
LoadBalancer. This will automatically provision a load balancer that will act as the main entry point to your application.
Warning: The method for creating a load balancer outlined in this step will only work for Kubernetes clusters provisioned from cloud providers that also support external load balancers. Additionally, be advised that provisioning a load balancer from a cloud provider will incur additional costs. If this is a concern for you, you may want to look into exposing an external IP address using an Ingress.
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: LoadBalancer
Then add a
ports block where you’ll define how you want your apps to be accessed. Nested within this block, add the following fields:
- name, pointing to http
- port, pointing to port 80
- targetPort, pointing to port 3000
This will take incoming HTTP requests on port
80 and forward them to the
targetPort of
3000. This
targetPort is the same port on which your Go application is running:
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 3000
Lastly, add a
selector block as you did in the
deployments.yml file. This
selector block is important, as it maps any deployed pods named
go-web-app to this service:
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 3000
  selector:
    name: go-web-app
After adding these lines, save and close the file. Following that, apply this service to your Kubernetes cluster by once again using the
kubectl apply command like so:
- kubectl apply -f service.yml
This command will apply the new Kubernetes service as well as create a load balancer. This load balancer will serve as the public-facing entry point to your application running within the cluster.
To view the application, you will need the new load balancer’s IP address. Find it by running the following command:
- kubectl get services
Output
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
go-web-service   LoadBalancer   10.245.107.189   203.0.113.20   80:30533/TCP   10m
kubernetes       ClusterIP      10.245.0.1       <none>         443/TCP        3h4m
You may have more than one service running, but find the one labeled
go-web-service. Find the
EXTERNAL-IP column and copy the IP address associated with the
go-web-service. In this example output, this IP address is
203.0.113.20. Then, paste the IP address into the URL bar of your browser to view the application running on your Kubernetes cluster.
Note: When Kubernetes creates a load balancer in this manner, it does so asynchronously. Consequently, the
kubectl get services command’s output may show the
EXTERNAL-IP address of the
LoadBalancer remaining in a
<pending> state for some time after running the
kubectl apply command. If this is the case, wait a few minutes and try re-running the command to ensure that the load balancer was created and is functioning as expected.
The load balancer will take in the request on port
80 and forward it to one of the pods running within your cluster.
With that, you’ve created a Kubernetes service coupled with a load balancer, giving you a single, stable entry point to application.
Conclusion
In this tutorial, you’ve built Go application, containerized it with Docker, and then deployed it to a Kubernetes cluster. You then created a load balancer that provides a resilient entry point to this application, ensuring that it will remain highly available even if one of the nodes in your cluster fails. You can use this tutorial to deploy your own Go application to a Kubernetes cluster, or continue learning other Kubernetes and Docker concepts with the sample application you created in Step 1.
Moving forward, you could map your load balancer’s IP address to a domain name that you control so that you can access the application through a human-readable web address rather than the load balancer IP. Additionally, the following Kubernetes tutorials may be of interest to you:
- How to Automate Deployments to DigitalOcean Kubernetes with CircleCI
- White Paper: Running Cloud Native Applications on DigitalOcean Kubernetes
Finally, if you’d like to learn more about Go, we encourage you to check out our series on How To Code in Go. | https://www.digitalocean.com/community/tutorials/how-to-deploy-resilient-go-app-digitalocean-kubernetes | CC-MAIN-2020-40 | refinedweb | 4,341 | 60.35 |
Practical Uses for Closures
The closure is a powerful tool in JavaScript. It is commonly used in functional programming languages, but often misunderstood. Like other JavaScript fundamentals, understanding closures is necessary to write expressive, concise and maintainable scripts.
What is a closure?
At first glance, a closure is simply a function defined within another function. However, the power of closures is derived from the fact that the inner function remembers the environment in which it was created. In other words, the inner function has access to the outer function’s variables and parameters.
What’s it look like?
Below is an example of a closure (courtesy of Mozilla):
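The embedded example did not survive extraction here; based on the walkthrough that follows, it was presumably along these lines. The value of name is a guess, and console.log stands in for the original alert so the snippet runs outside a browser:

```javascript
function pam() {
  var name = 'Pam Beesly';     // local variable defined by pam

  function displayName() {     // inner function: a closure
    console.log(name);         // reads `name` from pam's scope
  }

  displayName();               // called by pam
}

pam(); // logs "Pam Beesly"
```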
Our outer function —
pam — does three things:
- Define a local variable, name
- Define a function, displayName
- Call displayName
displayName doesn’t define any local variables, yet it is able to alert
name because
name has been defined in the scope in which the closure was created — that of its outer function.
Closures can do more than just read their outer functions’ local variables — they can overwrite them, too. Observe below:
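The original snippet is also missing here; a reconstruction of the idea, with illustrative names:

```javascript
function updateMood() {
  var mood = 'bored';

  function cheerUp() {
    mood = 'excited';     // the closure overwrites the outer variable
  }

  cheerUp();
  return mood;
}

console.log(updateMood());  // "excited", not "bored"
```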
As we can see, closures are capable of not only reading, but also manipulating the variables of their outer functions.
Function factories
One powerful use of closures is to use the outer function as a factory for creating functions that are somehow related.
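The gist embedded at this point is gone; a factory of the classic shape (an adder, similar to the MDN example, with illustrative names) presumably stood in for it:

```javascript
function makeAdder(x) {
  return function(y) {      // the returned function closes over x
    return x + y;
  };
}

var add5 = makeAdder(5);    // each call produces a related but unique function
var add10 = makeAdder(10);

console.log(add5(2));       // 7
console.log(add10(2));      // 12
```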
Using closures as function factories is a great way to keep your JavaScript DRY. Five powerful lines of code allow us to create any number of functions with similar, yet unique purposes.
Namespacing private functions
Many object-oriented languages provide the ability to declare methods as either public or private. JavaScript doesn't have this functionality built in, but it does allow you to emulate it through the use of closures, in what is known as the module pattern.
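A sketch of such a module, built around the names discussed next (salary, changeBy, raise, lower, currentAmount); the dollar amounts are invented:

```javascript
var dwightSalary = (function () {
  var salary = 60000;             // private: invisible outside the closure

  function changeBy(amount) {     // private helper
    salary += amount;
  }

  return {
    raise: function () { changeBy(1000); },
    lower: function () { changeBy(-1000); },
    currentAmount: function () { return salary; }
  };
})();

dwightSalary.raise();
dwightSalary.currentAmount(); // 61000
// salary and changeBy are not reachable from here
```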
Using closures to namespace private functions keeps more general namespaces clean, preventing naming collisions. Neither the salary variable nor the changeBy function are available outside of dwightSalary. However, raise, lower and currentAmount all have access to them and can be called on dwightSalary.
These are a few popular uses for closures. You’ll surely encounter closures used for other purposes, but these are a couple simple ways to incorporate closures into your code in an immediately useful way. | http://pat-whitrock.github.io/blog/2014/04/29/practical-uses-for-closures/ | CC-MAIN-2022-05 | refinedweb | 377 | 53.51 |
XLink and XML Base are implemented or partially implemented by Mozilla. This hack explores these technologies, using Mozilla as a platform.
The XML Linking Language or XLink () defines a vocabulary for creating hyperlinks to resources using XML syntax. XLink goes beyond the simple linking in HTML and XHTML by adding concrete semantics and extended links that can link to more than one resource. XLink hasn't really taken off yet, but Mozilla supports simple links in XLink, though not extended links (). Use of XLink is growing, if slowly?see, for example, the use of XLink in the OpenOffice specification ().
XML Base () consists of a single XML attribute, xml:base, that acts like the base element from HTML and XHTML; i.e., it explicitly sets the base URI for a document. A base URI is often understood implicitly by a program such as a Web browser by the location of a resource, such as a location on the web or the location of a file in a directory or file structure. In HTML or XHTML, this base URI could be set directly with the base element, as shown in this fragment of XHTML markup (note bold):
<html xmlns="" xml: <head> <title>Links</title> <base href=""/> </head> <body> ...
You set a base URI explicitly using xml:base, as shown in the example document base.xml (Example 2-21). It is also displayed in Mozilla Firefox in Figure 2-27.
<?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet <heading>Resources on XML.com.</heading> <block>Start here: <link xlink:Home</link></block> <block>Topics: <link xlink:Programming articles</link> : <link xlink:Schema articles </link> : <link xlink:Style articles</link> </block> <block xml:Logo for XML.com: <link xlink:logo</link> </block> </links>
The base URI is set to http://www.xml.com/ with xml:base on line 4. This setting is inherited by the children of links. The XLink to index.csp on line 8, therefore, is able to resolve to http://www.xml.com/index.csp because index.csp is relative to http://www.xml.com/.

The xml:base attribute on line 14 adds the images directory to the base URI so that it becomes http://www.xml.com/images/, changing the earlier setting on line 4. This is in effect only for the children of the block where it is set. Hence, the XLink on line 15 can find the JPEG image of the XML.com logo with only the filename logo_tagline.jpg.
The namespace for XLink is declared on line 5 and is associated with the xlink prefix (xmlns:xlink="http://www.w3.org/1999/xlink"). The xlink prefix is customary, and Mozilla won't work without it.
The namespace declaration allows the link elements on lines 8, 10, 11, 12, and 15 to use the XLink attributes xlink:type and xlink:href. The value of xlink:type states the type of XLink (simple in Example 2-21). Other possible values are extended, locator, arc, resource, title, or none, but because Mozilla supports only simple links, I only used simple. The xlink:href attribute contains a URI that identifies the resource that the link can traverse, similar to the href attribute on the a element in HTML or XHTML.
When the mouse pointer hovers over the XLinks in base.xml in the browser, the background changes according to the CSS styling defined in base.css, which is referenced by the XML stylesheet PI on line 2.
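A minimal base.css producing that effect might look like the following (hypothetical; the book's actual stylesheet isn't shown here). Note that custom XML elements need explicit display properties in CSS:

```css
heading, block { display: block; margin: 0.5em; }
link {
  color: blue;
  text-decoration: underline;
  cursor: pointer;
}
link:hover { background-color: #ffcc00; } /* background change on hover */
```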
Three other XLink attributes merit some discussion: xlink:actuate, xlink:show, and xlink:label. Unlike xlink:type and xlink:href, these attributes have not been implemented in Mozilla. In fact, they apparently have not been implemented in software that can be easily demonstrated on the Web, so unfortunately I can't show you any practical examples of them. Nevertheless, it would be good for you to at least get familiar with the intended functionality of these XLink attributes, should they ever reach the masses.
The xlink:actuate attribute indicates when or how a link is to be traversed. The value of xlink:actuate can be onRequest, onLoad, none, or other. onRequest means the link is actuated when it is clicked or triggered in some way. onLoad means that the link is actuated when the page or resource is loaded by an application such as a browser. A value of none essentially turns off the behavior, and a value of other allows for application-specific behavior.
The xlink:show attribute can have these values and behaviors:
Load the ending resource in a new window.
Load the resource in the same window, frame, pane, or what have you.
Load the resource at the place where the link is actuated.
Application-specific behavior.
Essentially "don't do anything," though behavior is not specified by the spec.
The xlink:label attribute contains a label that identifies an element holding a link to a resource. Then the xlink:from and xlink:to attributes can contain values that match values in xlink:label attributes (see Example 2-23).
Extended links () are hard to explain. There is no simple way to demonstrate them because browsers don't support them. The fact that they are complicated is perhaps one reason why they are not widely implemented. But they are worthy of a few passing remarks, as they hold potential.
We are used to simple links that go in one direction from one resource to another. Extended links can point to more than one resource at a time. This is done by using the xlink:type="locator" attribute/value pair on an element, in combination with xlink:href. You can provide some advisory text with xlink:title. xlink:role can contain a URI value that annotates the link, but doesn't actually link to a resource. An extended link using all these attributes would look something like Example 2-22.
<link xml:base="" xlink:type="extended">
 <country>United States</country>
 <state>Rhode Island</state>
 <city xlink:type="locator" xlink:href="">Bristol</city>
 <city xlink:type="locator" xlink:href="">Newport</city>
</link>
An XLink link database or linkbase () is an XML document that contains one or more inbound and third-party links. Linkbases are a little hard to grasp, especially because, once again, you can't really demonstrate them.
An arc provides information about how to traverse a pair of resources, including direction and perhaps some information about how an application may act. An arc is said to be outbound if it starts at a local resource and ends at a remote resource. An arc is inbound if it starts at a remote resource and lands in a local one. An arc is a third-party arc if it neither starts nor ends at a local resource. In other words, a third-party arc is one for which you do not or cannot create the link at either the starting or ending resource.
The following XML fragment (Example 2-23) indicates that when exterior.xml is loaded, the linkbase.xml document should be loaded. On the load element, the xlink:actuate attribute specifies load behavior, and using the labels in the link and linkbase elements (defined by xlink:label), load also establishes a traversal from ext to lb using the xlink:to and xlink:from attributes.
<block>
 <link xlink:type="locator" xlink:href="exterior.xml" xlink:label="ext"/>
 <linkbase xlink:type="locator" xlink:href="linkbase.xml" xlink:label="lb"/>
 <load xlink:type="arc" xlink:actuate="onLoad" xlink:from="ext" xlink:to="lb"/>
</block>
For more insight into XLink: | http://etutorials.org/XML/xml+hacks/Chapter+2.+Creating+XML+Documents/Hack+28+Explore+XLink+and+XML/ | CC-MAIN-2017-22 | refinedweb | 1,189 | 64.2 |
Following a keynote speech which laid out the new features of Facebook (see earlier news story), the rest of yesterday’s F8 conference in London saw speakers dig into the Open Graph API and explain how app developers could take advantage of it.
Second on stage was platform engineer Simon Cross, who did a live demo centred around creating a social recipe app, which was able to post information-rich updates to Facebook.
"The Facebook Platform is a world away from what it was six months ago,” he stressed. “A lot of improvements have been made under the hood.”
Cross announced that Facebook has dumped Bugzilla and is working on a new debugging tool. There’s also a new policy of giving devs 90 days' notice of breaking changes. More effort is also being put into the developer blog and, if devs need to contact engineers with questions, they can now find them at facebook.stackoverflow.com, he added.
Fundamental questions
Taking advantage of social integration involves asking two fundamental questions, Cross explained. The first was: do users take actions in your product (such as listening to a song)?
The second was: do they have an ongoing relationship with you? (He gave the example of how Spotify lets you represent your identity through your musical tastes, tying you into the app emotionally.)
"Building on Facebook is just as easy as building for the web,” he said, and urged app developers to "build them and ship them today".
Gaming and mobile
Following lunch, Gareth Morris addressed gaming. "Over 200 million+ users play Facebook games every month," he enthused. But to take advantage of this huge audience, you need your game to spread virally. Key to this is considering what achievements players will want to boast about, and what their friends will be interested in.
"The Open Graph is an essential tool for games," he concluded. "The distribution possibilities are huge." You just need to make an investment up front, and make sure you always put your users first, he said.
Next up, Matt Kelly focused on mobile. Mobile users of Facebook are twice as active as desktop users, he pointed out. This offers app developers a huge opportunity: when EA/Playfish launched Sims Social, for instance, they gained 40million users in just a month. Facebook is working hard to make it easier for users to share things across different devices, he added.
A showcase of mobile apps already using the Open Graph API can be found at, with documentation at docs on developers.facebook.com/mobile and developers.facebook.com/html5.
Marketing API & case study
Tom Elliot followed with a talk on how you can use the Marketing API to promote your app beyond the usual social channels. The main benefits are scale ("you can build hundreds or thousands of adverts in seconds") and automation (for example, you can link up with your real-time stock information and automatically alter your ad spend accordingly).
Elliot demonstrated how you can target an ad based upon users who have performed an action. For example, “because I listened to Lady Gaga on Spotify, her management can target me to sell me tickets when she's on tour.”
The final third of the afternoon kicked off with a talk from Mat Clayton from music service Mixcloud about their Open Graph launch. Integrating social features into the app resulted in a 55 per cent drop in bounce rate and a big increase in dwell time, he explained, while using the Facepile social plug-in increased signup conversion by 200-300 per cent.
Q&A
Lastly, a Q&A session with Christian Hern, Simon Cross and Ethan Beard answered a series of technical questions from the audience. We learned, among other things, that:
- Facebook will be announcing a new class of their Preferred Developer programme "sometime soon".
- There are no plans to introduce Timeline for brand pages.
- It is possible to populate back-history on a Timeline, but you should always prompt the user first.
- If you want to bulk upload historical data, you can add no_feed_story=1 to prevent a feed story coming up.
- FQL is "not going away".
- The ability to stream music is currently available only to approved partners due to legal issues; however, Facebook is open to new partners as long as they have legal access to music (or other audio).
- You should view the Open Graph as a store for your app's data: store it in your own database.
- You should publish any action you think is a relevant action - don’t hold back ("We can do smart aggregation to stop users being overwhelmed by updates").
- It's not possible to share namespaces across apps. | https://www.creativebloq.com/netmag/report-f8-london-part-2-10116692 | CC-MAIN-2018-13 | refinedweb | 784 | 61.56 |
Code snippets are a great productivity feature in Visual Studio 2005 (or a mind rot – depends on your perspective). Michael Palermo even has a site dedicated to code snippets: GotCodeSnippets.com.
Code snippets are easy to author. I became tired of typing in the same keystrokes to start a unit test, and wrote a snippet myself. Now I type tm+TAB+TAB, and voilà, the following code appears. A savings of 20 keystrokes:
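The expanded code is not shown in this copy; reconstructed, it would be an NUnit test method skeleton along these lines:

```csharp
[Test]
public void Test()
{

}
```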
The word Test is highlighted in green as a replacement – I can type over the word with a new method name. All of this is set up by dropping a .snippet file into My Documents\Visual Studio 2005\Code Snippets\Visual C#\My Code Snippets. Here are the contents:
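The original listing is not shown here, but a .snippet file for this shortcut would look roughly like the following in the Visual Studio 2005 snippet schema: the tm shortcut expands to an NUnit [Test] method whose name, Test, is the replaceable literal (the Title, Description, and exact code body are guesses):

```xml
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Test Method</Title>
      <Shortcut>tm</Shortcut>
      <Description>Inserts an NUnit test method</Description>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>name</ID>
          <Default>Test</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[[Test]
public void $name$()
{
    $end$
}]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```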
It looks like a lot of work, but if you copy an existing snippet it’s almost too easy.
Snippets are better in VB.NET. A code snippet in VB.NET can add Imports for required namespaces to the source file, and reference any required assemblies when it expands. C# cannot. Perhaps this explains why there are over 350 VB.NET code snippets installed by Visual Studio, and only 50 for C#. It would be great to write a TestFixture snippet for C# that automatically added a using NUnit.Framework, and added a project reference to the NUnit assembly. Perhaps in the next version…
I want to write some Snippets for my own use but can't find free time.
Fortunately Microsoft has given Snippet editor for VB so I would write a few lines of code ;) | https://odetocode.com/blogs/scott/archive/2005/12/06/snippets.aspx | CC-MAIN-2019-18 | refinedweb | 259 | 74.49 |
Using wrapped Qt segfaults on Linux with only a basic example
I posted this exact thing on Stack Overflow, but then I thought I would have better luck here.
I've been experimenting with wrapping some Qt classes with a C interface to use with my latest D project. For what ever reason, this code works fine on Windows but segfaults on Linux and I haven't been able to track down the reason behind it. I haven't tried building on OSX yet.
I'm using Qt 5.3, and running Linux Mint.
The code is kind of spread out over a few different files so I thought it might be easier if I put all the related code into some pastebins.
"QApplication wrapper stuff":
"QMainQWindow wrapper stuff":
These are very thin wrappers though, so even if you don't look at them it should be easy enough to understand my test program.
@
#include <Application.h>
#include <MainWindow.h>

int main(int argc, char* argv[])
{
    Qt_Application* app = Qt_Application_create(argc, argv);
    Qt_MainWindow* window = Qt_MainWindow_create();
    Qt_MainWindow_show(window); // <- Segfault happens here
    Qt_Application_exec(app);
    Qt_Application_destroy(app);
    Qt_MainWindow_destroy(window);
    return 0;
}
@
Due to some printf tests, I know the segfault happens when I try to call Qt_MainWindow_show, and likewise I know the window object exists when I pass it, so that isn't the cause. Also, if I comment Qt_MainWindow_show out, Qt_Application_exec will get called no problem, so as far as I know the wrapped objects are being created correctly.
When I run gdb, it says:
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff789a92a in strlen () from /lib/x86_64-linux-gnu/libc.so.6
getting the backtrace at the point of the segfault shows this:
#0 0x00007ffff789a92a in strlen () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff6b6c6bc in QCoreApplication::arguments() ()
from /home/jebbs/Qt/5.3/gcc_64/lib/libQt5Core.so.5
#2 0x00007ffff1470213 in ?? ()
from /home/jebbs/Qt/5.3/gcc_64/plugins/platforms/libqxcb.so
#3 0x00007ffff14705f9 in ?? ()
from /home/jebbs/Qt/5.3/gcc_64/plugins/platforms/libqxcb.so
#4 0x00007ffff147d127 in ?? ()
from /home/jebbs/Qt/5.3/gcc_64/plugins/platforms/libqxcb.so
#5 0x00007ffff1470009 in ?? ()
from /home/jebbs/Qt/5.3/gcc_64/plugins/platforms/libqxcb.so
#6 0x00007ffff5d47e03 in QWindow::create() ()
from /home/jebbs/Qt/5.3/gcc_64/lib/libQt5Gui.so.5
#7 0x00007ffff716b97a in QWidgetPrivate::create_sys(unsigned long long, bool, bool) () from /home/jebbs/Qt/5.3/gcc_64/lib/libQt5Widgets.so.5
#8 0x00007ffff714e6f5 in QWidget::create(unsigned long long, bool, bool) ()
from /home/jebbs/Qt/5.3/gcc_64/lib/libQt5Widgets.so.5
#9 0x00007ffff71512ea in QWidget::setVisible(bool) ()
from /home/jebbs/Qt/5.3/gcc_64/lib/libQt5Widgets.so.5
#10 0x00007ffff7bd8205 in Qt_MainWindow_show ()
from /home/jebbs/Documents/projects/HeliosCTest/libqtcl.so.1
#11 0x0000000000400922 in main (argc=1, argv=0x7fffffffe158) at main.cpp:22 <-actually points to Qt_MainWindow_show(window), but this is from a test with printf's in it
So it looks like some string somewhere is NULL and strlen cries? I couldn't find any reason that QMainWindow.show() might segfault. Any pointers into where I should look or what I should do next would be an excellent help. | https://forum.qt.io/topic/44136/using-wrapped-qt-segfaults-on-linux-with-only-a-basic-example | CC-MAIN-2018-26 | refinedweb | 524 | 50.12 |
Since one of my clients had a high interest in “embedded analytics” they decided to start a (in their eyes) “simple” project. The goal is to show a couple of KPIs within the standard transactions for shipments (VT01N, VT02N & VT03N) to help the transport planners in optimizing the outgoing trucks. Ideally before going out a truck should contain a certain “sales value”, be not “too empty”, etc… Ideally this should be represented in a dashboard.
So, we started “investigating” the embedded analytics. I’ll try to share our/my experiences during this journey. Instead of changing standard SAP transaction screens, you can use the so-called “side panel” in NetWeaver Business Client (NWBC). If you’re not yet working with NWBC (as we were), then you’ll need to investigate this as well (as it changes the way end users work). I won’t go into detail, but what you need to remember is that within NWBC it is possible to add a “side panel” to a standard SAP screen in which you can display reports, dashboards, web pages, pictures, … It’s actually pretty cool once you get it to work. Obviously you need to somehow “connect” the side panel to the main window (of your transaction). In our case we need to pass the shipment number to the side panel (shipment number is a variable in our reports). This is done through “tagging”. I’ll probably write more about that in another blog of this series.
First things first, we need to make sure our ERP system is “ready” for embedded analytics. For this the system needs to meet certain requirements which can be found here:. If your system meets these requirements, you can start “activating” embedded analytics on it. The basic customizing can be found here:
The first action is to assign the administrator role to your userid (and that of your team members if you work in a team). The role is called SAP_ESH_LOCAL_ADMIN. Easy peasy!
Next we need to activate web dynpro services. These are necessary to be able to use NWBC. Launch transaction SICF:
hit Execute & navigate to: default_host > sap > bc > nwbc
right-mouse click on “nwbc” and select Activate Service
select the 2nd Yes button (in order to activate the underlying services as well).
The following step is “Set TREX/BWA Destination or SAP HANA DB Connection”. Obviously this depends on whether you have TREX, BWA or HANA. In our case we have none of those, so we set the “destination” to NO TREX (check the documentation next to the customizing action for more details).
The next two steps are only relevant if you have different servers which we don’t (in our DEV environment), so we skipped them.
Now we’re getting to the “real stuff”. In order to avoid errors in the next step, you have to make sure your system is “somewhat” open. Actually it should be “open” for Repository objects only and more specifically for namespaces /BI0/ & /BIC/. This is well explained in the documentation next to the customizing activity. Start the customizing activity and enter your “BW Client”. We chose to use our (ECC) “customizing” client. Just a clarification as this seems to confuse people. The “BW” here refers to the “embedded” BW system of your ECC box. As of ECC 5.0 (I think) all ECC systems come with a (mini) BW system inside. It’s that system that we’re actually configuring for “embedded analytics”.
The documentation states that you can only set this client “once”. Well, that’s not entirely true… we found out upon going through these settings in our QAS system (most of these cannot be transported and have to be repeated throughout different systems of the landscape) that somehow a (default) client was set (back in 2006) to 100 (a client which actually doesn’t exist at all on our QAS box). For the technies, this information is stored in tables RSADMINA, RSADMINC & RSADMINS. A bit of debugging (and searching the web) led us to function module RS_MANDT_UNIQUE_SET. Now, SAP probably has a very good reason for not (directly) allowing you to change this client, but in this case it simply had to be done.
If all goes well you should not have to use this function module and you’ll get a popup in which you select your “source system”. The only option should be a Self -def’d system (basically an RFC pointing to your “MySelf” BW system – again, the embedded one). If you do not get this, the transaction will still continue, but result in errors (it cannot create the personalization DSO objects). What you do then is create the “MySelf” system via transaction RSA1. If you’re not familiar with this, ask your system admin(s) to do this for you.
Now it’s time to define the modeling client. Again, read the documentation next to the customizing activity for more information. In our case, we entered our customizing client as well (i.e. the same client as in the previous step). In our QAS environment we skipped this step.
The final step has to do with “extending” Datasources and we’re not quite ready for that (just yet).
Ok, your system should now be ready for embedded analytics.
In my next blog I’ll show how you can now replace ALV lists with Crystal Reports
Raf, thanks for sharing, I gonna take a fine mojito or two 😉 now and will continue reading the next parts !!!
there’s a (slight) delay on the next parts… due to a little “overload” right now 🙁
Raf, going through these blogs and following customizing guide for some of the SD analytics. Simple question for you … In the BWA/TREX destination step, did you have to create a dummy RFC Destination for NO TREX, or did you hard code NO TREX.
hard code NO TREX (with a space in between and no “underscore” as some documents seem to show)
Hi Raf,
I have now the same situation of implementing SAP BO Dahsboards and Crystal Reports directly on the ERP but without BW, so there´s an option with SAP Best Practices PAckages, have you used them? I would like to ask your recomendation my question was posted here:
Thanks in advance for your support!
Kind regards.
Araceli G. | https://blogs.sap.com/2014/05/16/building-your-own-embedded-analytics-application-part-1/ | CC-MAIN-2017-30 | refinedweb | 1,054 | 70.13 |
It might be fun doing small tests to test how the tests work, but let's remember our goal - to test our Game class!
So, we’ve now got 2 targets in the project - the main app and tests. They are somewhat independent. Let’s connect the 2 together and perform the actual testing!
Importing an application
The code in both targets of our project coexists in the same project, but technically the targets act like complete strangers to each other. 😒🧐 In technical terms, they form different modules.
If you attempt to use the Game class or any other class in the tests code, Swift will get puzzled and assume it's an error. 😱
No need to panic! To resolve this, all we need to do is import the main module to the tests. 😎
And what would be the name of the main module?
The module name matches the name of the corresponding target, so for our main application, it will be
TicTacToe !
Let's now import it to the tests by including the following line:
import TicTacToe
Now the application module is imported to your test file.
This is good, but not enough. To understand why, let's do a little refresher of access control.
There are 5 access levels - let's illustrate them in relation to a module:
Internal is the default level (it's assigned automatically if we don't specify a level explicitly). At this level, the scope of classes and their members is limited to the module that contains the class. Considering this, our Game class and all its methods are not available outside of the main application. So when we import the TicTacToe module to the test code, we sill don't have access to Game and the other classes of the main application - because they are part of a different module.
We've got two options to solve this:
Modify the classes and methods of the main application so that they are all public. This would take quite some time - and imagine working with a much larger project! On top of that, it's not secure. Those classes and class members are probably not public for a reason, so we can't change all that just for the sake of making it available for testing!
Use the @testable decorator. It's placed before importing a module we intend to test. With it, the imported module behaves as if it were part of the module it's imported into. Looks like this is a more suitable option to solve our problem! 😎
So, let's make this adjustment:
@testable import TicTacToe
We now have access to the Game class and we'll be able to test it!
Implementing the tests
We can delete the testExample, and replace it with a real deal!
To name the tests, I suggest using a very practical technique called Behavior Driven Development - development motivated by behavior (BDD for short). BDD suggests creating test names as a composition of 3 parts, Given_When_Then:
Given - The given part describes the state of the world before you begin the behavior you're specifying in this scenario. You can think of it as the pre-conditions to the test.
When - The when section is that behavior that you're specifying.
Then - The then section describes the changes you expect due to the specified behavior.
For example, if we had to name a test that validates the functionality of a "Like" activity, it could be:
GivenPostHasZeroLikes_WhenPostIsLiked_ThenPostHasOneLike

// GIVEN - post has ZERO likes
// WHEN - post is LIKED
// THEN - post has ONE like
We figured we're going to test the Game class, so we'll need an instance of that class. For that we could declare a constant in our test file:
let game = Game()
However, this may not be the best idea as we'll likely need more than one test and each of them will use the same stance of the Game object. It might be unnecessarily challenging to always reset it to a needed state.
Likely, there's a method in the test class that is called every time before any test is executed. All we need to do is override it:
var game: Game!

override func setUp() {
    super.setUp()
    game = Game()
}
It's good now. We are ready for the first test.
Test 1. Validate the current card and next player on advance
Let's validate our code for the situation when we have a brand new game and, while making the first advance, we expect the title of the advanced card to be 'X' and the next player - Player2:
func testGivenGameIsAtStart_WhenAdvanced_ThenCardShouldBeMarkedWithXAndNextPlayerTwo() {
    let title = game.advance(atIndex: 0)
    XCTAssertEqual(title, "X")
    XCTAssertEqual(game.nextPlayer, .two)
}
Let's analyze:
Assuming the game is at its very beginning (that's guaranteed by the setUp method), we've advanced the game for the first time and captured the card title - the title constant.
We've evaluated the title constant value against the expected value, "X."
We've evaluated the nextPlayer property value against the expected value, .two.
Done! Let's test it - run the test and observe that it turns GREEN! 💚
Prepare for the future
We can anticipate that while testing this game we'll have to advance through the game a lot. Instead of repeating the same lines a specific number of times, let's implement a few helper elements:
enum WinningVariant {
    case row, column, diagonalOne, diagonalTwo
}

func makeWinningIndices(player: Player, variant: WinningVariant) -> [Int] {
    switch variant {
    case .row: return player == .one ? [0, 7, 1, 8, 2] : [6, 0, 7, 1, 8, 2]
    case .column: return player == .one ? [0, 1, 3, 5, 6] : [7, 0, 1, 3, 5, 6]
    case .diagonalOne: return player == .one ? [0, 1, 4, 5, 8] : [7, 0, 1, 4, 5, 8]
    case .diagonalTwo: return player == .one ? [2, 1, 4, 5, 6] : [7, 2, 1, 4, 5, 6]
    }
}

func makeDrawIndices() -> [Int] {
    return [0, 3, 6, 2, 5, 8, 1, 4, 7]
}

func advance(indices: [Int]) {
    for i in 0 ..< indices.count {
        _ = game.advance(atIndex: indices[i])
    }
}
Let's review the above:
In this game it's essential to test the winning situations, so here we are creating an assisting enumeration specifying those variants: WinningVariant.
We've implemented a function that will generate sample sequences of indices for each of the winning variants.
We've got a similar function that will generate advancing indices for the draw situation.
And finally, we've created a function that will advance the game filling up the cards according to the indices passed in a parameter. Notice here we don't need to specify a player as the playing turns are alternating within the advance method of the game object.
Good! We are now all set and ready to continue! 🙌
Test 2. Validate "row game over" for player one
In this test, we'll advance our game to the winning position in a row for player one and validate 3 elements:
The game.isOver property is expected to be true.

The game.winner property is expected to be .one.

The game.winningIndices property is expected to be not nil.
func testGivenGameIsAtStart_WhenAdvancedWithCrossesInRowOne_ThenGameShouldBeOverWithWinnerAsPlayerOneAndWinningIndicesNotNil() {
    advance(indices: makeWinningIndices(player: .one, variant: .row))
    XCTAssertTrue(game.isOver)
    XCTAssertEqual(game.winner, .one)
    XCTAssertNotNil(game.winningIndices)
}
Execute the test... It works again!
Test 3. Validate "column game over" for player two
In this test, we'll advance our game to the winning position in a row for player two and validate 3 of the same elements as in our previous test. Except this time, we expect the value of the game.winner property to be .two:
func testGivenGameIsAtStart_WhenAdvancedWithZerosInRowOne_ThenGameShouldBeOverWithWinnerAsPlayerTwoAndWinningIndicesNotNil() {
    advance(indices: makeWinningIndices(player: .two, variant: .row))
    XCTAssertTrue(game.isOver)
    XCTAssertEqual(game.winner, .two)
    XCTAssertNotNil(game.winningIndices)
}
Run the test... It works!
Test 4-6. Validate remaining winning situations
In the next 3 tests, we'll validate the remaining winning situations: column and diagonals 1 and 2 for player two. For this, it's sufficient to validate only the game.winner property to be not nil:
func testGivenGameIsAtStart_WhenAdvancedWithCrossesInColumnOne_ThenWinnerShouldBeNotNil() {
    advance(indices: makeWinningIndices(player: .two, variant: .column))
    XCTAssertNotNil(game.winner)
}

func testGivenGameIsAtStart_WhenAdvancedWithCrossesInDiagonalOne_ThenWinnerShouldBeNotNil() {
    advance(indices: makeWinningIndices(player: .two, variant: .diagonalOne))
    XCTAssertNotNil(game.winner)
}

func testGivenGameIsAtStart_WhenAdvancedWithCrossesInDiagonalTwo_ThenWinnerShouldBeNotNil() {
    advance(indices: makeWinningIndices(player: .two, variant: .diagonalTwo))
    XCTAssertNotNil(game.winner)
}
Run the tests... All 3 are in tact!
Test 7. Validate the draw situation
The last test for the game class we'll need to do is to validate the draw situation. In this function, we'll also validate 3 components. Just expect different values:
The
game.isOverproperty is expected to be true.
The
game.winnerproperty is expected to be nil.
The
game.winningIndicesproperties are expected to be nil.
func testGivenGameIsAtStart_WhenAdvancedToFill_ThenGameShouldBeOverWithWinnerAsNilAndWinningIndicesAsNil() {advance(indices: makeDrawIndices())XCTAssertTrue(game.isOver)XCTAssertNil(game.winner)XCTAssertNil(game.winningIndices)}
Run the test... Observe that it works! You are awesome! 😊
Let's Recap!
The names of the tests are written following Behavior Design Development as a composition of three parts: Given, When, Then.
The
setupmethod is recalled before each test. It ensures initialization.
Test code must be treated the same way as the main code - nice and clean. Refactoring is encouraged!
There are several variants of
XCTAssert. It's best to use the most suitable version for each test to improve the readability and clarity of the intentions of the tests. | https://openclassrooms.com/en/courses/4554386-enhance-an-existing-app-using-test-driven-development/5095701-perform-tests-on-a-class | CC-MAIN-2022-21 | refinedweb | 1,530 | 57.37 |
Multithreading and Memory Contention in Excel
Last modified: May 06, 2010
Applies to: Excel 2010 | Office 2010 | VBA | Visual Studio
In this article
Thread-Safe Functions
Memory Accessible by Only One Thread: Thread-Local Memory
Memory Accessible Only by More than One Thread: Critical Sections
Versions of Microsoft Excel earlier than Excel 2007 use a single thread for all worksheet calculations. However, starting in Excel 2007, Excel can be configured to use from 1 to 1024 concurrent threads for worksheet calculation. On a multi-processor or multi-core computer, the default number of threads is equal to the number of processors or cores. Therefore, thread-safe cells, or cells that only contain functions that are thread safe, can be allotted to concurrent threads, subject to the usual recalculation logic of needing to be calculated after their precedents.
Most of the built-in worksheet functions starting in Excel 2007 are thread safe. You can also write and register XLL functions as being thread safe. Excel uses one thread (its main thread) to call all commands, thread-unsafe functions, xlAuto functions (except xlAutoFree and xlAutoFree12), and COM and Visual Basic for Applications (VBA) functions.
Where an XLL function returns an XLOPER or XLOPER12 with xlbitDLLFree set, Excel uses the same thread on which the function call was made to call xlAutoFree or xlAutoFree12. The call to xlAutoFree or xlAutoFree12 is made before the next function call on that thread.
For XLL developers, there are benefits for creating thread-safe functions:
They allow Excel to make the most of a multi-processor or multi-core computer.
They open up the possibility of using remote servers much more efficiently than can be done using a single thread.
Suppose that you have a single-processor computer that has been configured to use, say, N threads. Suppose that a spreadsheet is running that makes a large number of calls to an XLL function that in turn sends a request for data or for a calculation to be performed to a remote server or cluster of servers. Subject to the topology of the dependency tree, Excel could call the function N times almost simultaneously. Provided that the server or servers are sufficiently fast or parallel, the recalculation time of the spreadsheet could be reduced by as much as a factor of 1/N.
The key issue in writing thread-safe functions is handling contention for resources correctly. This usually means memory contention, and it can be broken down into two issues:
How to create memory that you know will only be used by this thread.
How to ensure that shared memory is accessed by multiple threads safely.
The first thing to be aware of is what memory in an XLL is accessible by all threads, and what is only accessible by the currently executing thread.
Accessible by all threads
Variables, structures, and class instances declared outside the body of a function.
Static variables declared within the body of a function.
In these two cases, memory is set aside in the DLL’s memory block created for this instance of the DLL. If another application instance loads the DLL, it gets its own copy of that memory so that there is no contention for these resources from outside this instance of the DLL.
Accessible only by the current thread
Automatic variables within function code (including function arguments).
In this case, memory is set aside on the stack for each instance of the function call.
Given that static variables within the body of a function are accessible by all threads, functions that use them are clearly not thread safe. One instance of the function on one thread could be changing the value while another instance on another thread is assuming it is something completely different.
There are two reasons for declaring static variables within a function:
Static data persist from one call to the next.
A pointer to static data can safely be returned by the function.
In the case of first reason, you might want to have data that persists and has meaning for all calls to the function: perhaps a simple counter that is incremented every time the function is called on any thread, or a structure that collects usage and performance data on every call. The question is how to protect the shared data or data structure. This is best done by using critical section as the next section explains.
If the data is intended only for use by this thread, which could be the case for reason 1 and is always the case for reason 2, the question is how to create memory that persists but is only accessible from this thread. One solution is to use the thread-local storage (TLS) API.
For example, consider a function that returns a pointer to an XLOPER.
This function is not thread safe because one thread can return the static XLOPER12 while another is overwriting it. The likelihood of this happening is greater still if the XLOPER12 needs to be passed to xlAutoFree12. One solution is to allocate an XLOPER12, return a pointer to it, and implement xlAutoFree12 so that the XLOPER12 memory itself is freed. This approach is used in many of the example functions shown in Memory Management in Excel.
LPXLOPER12 WINAPI mtr_safe_example_1(LPXLOPER12 pxArg) { // pxRetVal must be freed later by xlAutoFree12 LPXLOPER12 pxRetVal = new XLOPER12; // code sets pxRetVal to a function of pxArg ... pxRetVal->xltype |= xlbitDLLFree; // Needed for all types return pxRetVal; // xlAutoFree12 must free this }
This approach is simpler to implement than the approach outlined in the next section, which relies on the TLS API, but it has some disadvantages. First, Excel must call xlAutoFree/xlAutoFree12 whatever the type of the returned XLOPER/XLOPER12. Second, there is a problem when returning XLOPER/XLOPER12s that are the return value of a call to a C API callback function. The XLOPER/XLOPER12 may point to memory that needs to be freed by Excel, but the XLOPER/XLOPER12 itself must be freed in the same way it was allocated. If such an XLOPER/XLOPER12 is to be used as the return value of an XLL worksheet function, there is no easy way to inform xlAutoFree/xlAutoFree12 of the need to free both pointers in the appropriate way. (Setting both the xlbitXLFree and xlbitDLLFree does not solve the problem, as the treatment of XLOPER/XLOPER12s in Excel with both set is undefined and might change from version to version.) To work around this problem, the XLL can make deep copies of all Excel-allocated XLOPER/XLOPER12s that it returns to the worksheet.
A solution that avoids these limitations is to populate and return a thread-local XLOPER/XLOPER12, an approach that necessitates that xlAutoFree/xlAutoFree12 does not free the XLOPER/XLOPER12 pointer itself.
LPXLOPER12 get_thread_local_xloper12(void); LPXLOPER12 WINAPI mtr_safe_example_2(LPXLOPER12 pxArg) { LPXLOPER12 pxRetVal = get_thread_local_xloper12(); // Code sets pxRetVal to a function of pxArg setting xlbitDLLFree or // xlbitXLFree as required. return pxRetVal; // xlAutoFree12 must not free this pointer! }
The next question is how to set up and retrieve the thread-local memory, in other words, how to implement the function get_thread_local_xloper12 in the previous example. This is done using the Thread Local Storage (TLS) API. The first step is to obtain a TLS index by using TlsAlloc, which must ultimately be released using TlsFree. Both are best accomplished from DllMain.
// This implementation just calls a function to set up // thread-local storage. BOOL TLS_Action(DWORD Reason); // Could be in another module BOOL WINAPI DllMain(HINSTANCE hDll, DWORD Reason, void *Reserved) { return TLS_Action(Reason); } DWORD TlsIndex; // Module scope only. The Windows Development Documentation recommends doing this every time the DllMain callback function is called with a DLL_THREAD_ATTACH event, and freeing the memory on every DLL_THREAD_DETACH. However, following this advice would cause your DLL to do unnecessary work for threads not used for recalculation.
Instead, it is better to use an allocate-on-first-use strategy. First, you need to define a structure that you want to allocate for each thread. For the previous examples that return XLOPERs or XLOPER12s, the following suffices, but you can create any structure that satisfies your needs. with this thread } return (TLS_data *)pTLS; }
Now you can see how the thread-local XLOPER/XLOPER12 memory is obtained: first, you get a pointer to the thread’s instance of TLS_data, and then you return a pointer to the XLOPER/XLOPER12 contained within it, as follows.
The mtr_safe_example_1 and mtr_safe_example_2 functions can be registered as thread-safe worksheet functions when you are running Excel. However, you cannot mix the two approaches in one XLL. Your XLL can only export one implementation of xlAutoFree and xlAutoFree12, and each memory strategy requires a different approach. With mtr_safe_example_1, the pointer passed to xlAutoFree/xlAutoFree12 must be freed along with any data it points to. With mtr_safe_example_2, only the pointed-to data should be freed.
Windows also provides a function GetCurrentThreadId, which returns the current thread’s unique system-wide ID. This provides the developer with another way to make code thread safe, or to make its behavior thread specific.
You should protect read/write memory that can be accessed by more than one thread using critical sections. You need a named critical section for each block of memory you want to protect. You can initialize these during the call to the xlAutoOpen function, and release them and set them to null during the call to the xlAutoClose function. You then need to contain each access to the protected block within a pair of calls to EnterCriticalSection and LeaveCriticalSection. Only one thread is permitted into the critical section at any time. Here is an example of the initialization, uninitialization, and use of a section called g_csSharedTable.
CRITICAL_SECTION g_csSharedTable; // global scope (if required) bool xll_initialised = false; // Only module scope needed int WINAPI xlAutoOpen(void) { if(xll_initialised) return 1; // Other initialisation omitted InitializeCriticalSection(&g_csSharedTable); xll_initialised = true; return 1; } int WINAPI xlAutoClose(void) { if(!xll_initialised) return 1; // Other cleaning up omitted. DeleteCriticalSection(&g_csSharedTable); xll_initialised = false; return 1; } #define SHARED_TABLE_SIZE 1000 /* Some value consistent with the table */ the operating system overhead that this would create.
When you have code that needs access to more than one block of protected memory at the same time, you need to consider very carefully the order in which the critical sections are entered and exited. For example, the following two functions could create a deadlock.
// WARNING: Do not copy this code. These two functions // can produce a deadlock and are provided for // example and illustration only. bool copy_shared_table_element_A_to_B(unsigned int index) { if(index >= SHARED_TABLE_SIZE) return false; EnterCriticalSection(&g_csSharedTableA); EnterCriticalSection(&g_csSharedTableB); shared_table_B[index] = shared_table_A[index]; // Critical sections should be exited in the order // they were entered, NOT as shown here in this // deliberately wrong illustration..
Where possible, it is better from a thread co-operation InitializeCriticalSectionAndSpinCount when initializing the section or SetCriticalSectionSpinCount once, see the Windows SDK documentation. | https://msdn.microsoft.com/en-us/library/bb687868(v=office.14).aspx | CC-MAIN-2015-14 | refinedweb | 1,807 | 50.36 |
FFI Introduction
From HaskellWiki
Revision as of 11:38, 1 August 2013
Haskell's FFI is used to call functions from other languages (basically C at this point), and for C to call haskell functions.
1 Links
- Definition#Addenda to the report has the official description of the FFI.
- FFI cook book has useful examples.
- FFI
2 Short version
There are many more useful examples in the FFICookBook, but here's a few basic ones:
{-# INCLUDE <math.h> #-} {-# LANGUAGE ForeignFunctionInterface #-} module FfiExample where import Foreign.C -- get the C types -- pure function foreign import ccall "sin" c_sin :: CDouble -> CDouble sin :: Double -> Double sin d = realToFrac (c_sin (realToFrac d))
Note that the FFI document recommends putting the header in the double quotes, like
foreign import ccall "math.h sin" c_sin :: CDouble -> CDouble
GHC since 6.10.x ignores both the INCLUDE pragma (equivalently command line -#include) and the header in the double quotes. GHC 6.8.x and before prefers the INCLUDE pragma (equivalently command line -#include) and in Cabal package descriptions. Other compilers probably prefer the header in the double quotes (if they compile via C) or ignore (if they do not compile via C)—check their documentations.
Notice that C types are not the same as haskell types, and you have to import them from Foreign.C. Notice also that, as usual in haskell, you have to explicitly convert to and from haskell types. Using c_<name_of_c_function> for the raw C function is just my convention..
For details on impure functions, pointers to objects, etc., see the cookbook.
3 Marshalling and unmarshalling arguments
See the cookbook. It's nicer to do the marshalling and unmarshalling in haskell, but it's still low-level repetetive stuff. The functions are all available below Foreign, which supports memory allocation and pointers (and hence C arrays and "out" parameters). One thing it doesn't support is structs.
Tools like GreenCard were created to help with this (as well as the low-level boilerplate thing).
[ TODO: more detail here? examples in greencard? ]
4 Compiling FFI-using modules
4.1 GHC
Here's a makefile fragment to compile an FfiExample module that uses C functions from c_functions.c, which uses library functions from libcfuncs:
HFLAGS=-I/path/to/lib/include -L/path/to/lib _dummy_target: c_functions.o c_functions.h ghc $(HFLAGS) -main-is FfiExample --make -o ffi_example c_functions.o -lcfuncs
Notice the use of _dummy_target and --make. The idea is that you get make to compile what is necessary for C, and then always run ghc with --make, at which point it will figure out what is necessary to compile for haskell.
Actually, this is broken, because ghc --make will not notice if a .o file has changed!
[ this is just my hack, anyone have a better way to do this? ]
4.2 Other compilers
Fill me in!
5 Complete example with GHC
GHC's libs don't (apparently?) support generic termios stuff. I could implement the whole tcgetattr / tcsetattr thing, but let's just turn ICANON on and off, so IO.getChar doesn't wait for a newline:
termops.c:
#include <termios.h> #include "termops.h" void set_icanon(int fd) { struct termios term; tcgetattr(0, &term); term.c_lflag |= ICANON; tcsetattr(fd, TCSAFLUSH, &term); } void unset_icanon(int fd) { struct termios term; tcgetattr(0, &term); term.c_lflag &= ~ICANON; tcsetattr(fd, TCSAFLUSH, &term); }
termops.h:
void set_icanon(int fd); void unset_icanon(int fd);
Termios.hs:
{-# INCLUDE <termios.h> #-} {-# INCLUDE "termops.h" #-} {-# LANGUAGE ForeignFunctionInterface #-} module Termios where import Foreign.C foreign import ccall "set_icanon" set_icanon :: CInt -> IO () foreign import ccall "unset_icanon" unset_icanon :: CInt -> IO ()
FfiEx.hs:
module FfiEx where import Control.Exception import System.IO import qualified Termios import Control.Monad (when) main = bracket_ (Termios.unset_icanon 0) (Termios.set_icanon 0) (while_true prompt) while_true op = do continue <- op when continue (while_true op) prompt = do putStr "? " hFlush stdout c <- getChar putStrLn $ "you typed " ++ [c] return (c /= 'q')
makefile:
_ffi_ex: termops.o ghc --make -main-is FfiEx -o ffi_ex FfiEx.hs termops.o
[this only worked for me when I omitted termops.o at the end of the `ghc --make` command. Seems like it searches for and finds the .o automatically? --lodi ]
And now:
% make gcc -c -o termops.o termops.c ghc --make -main-is FfiEx -o ffi_ex FfiEx.hs termops.o [1 of 2] Compiling Termios ( Termios.hs, Termios.o ) [2 of 2] Compiling FfiEx ( FfiEx.hs, FfiEx.o ) Linking ffi_ex ... % ./ffi_ex ? you typed a ? you typed b ? you typed q % | https://wiki.haskell.org/index.php?title=FFI_Introduction&oldid=56485&diff=prev | CC-MAIN-2017-30 | refinedweb | 742 | 69.58 |
21 S
safe load: A process of loading a file in which additional error checking is performed and various corruption patterns in the file are detected and repaired.
salt: An additional random quantity, specified as input to an encryption function that is used to increase the strength of the encryption.
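As an illustration only (not drawn from any specification cited in this glossary; the function name and parameters are invented), the following Python sketch shows how a random salt is combined with a secret before hashing, so that identical inputs produce distinct digests:

```python
import hashlib
import os

def hash_with_salt(secret, salt=None):
    """Derive a digest from a secret plus a random salt.

    A fresh salt per secret means identical secrets hash to different
    values, defeating precomputed lookup tables.
    """
    if salt is None:
        salt = os.urandom(16)  # fresh random salt
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return salt, digest

s1, d1 = hash_with_salt(b"hunter2")
s2, d2 = hash_with_salt(b"hunter2")
assert d1 != d2                                 # different salts, different digests
assert hash_with_salt(b"hunter2", s1)[1] == d1  # stored salt reproduces the digest
```

To verify a secret later, the salt must be stored alongside the digest and reused at verification time.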
sample: (1) A unit of media data sent from the server to the client.
(2) The smallest fundamental unit (such as a frame) in which media is stored and processed.
sandboxed solution: A custom solution that can be deployed to a site by a site collection administrator, without approval from the server farm administrator.
SASL: The Simple Authentication and Security Layer, as described in [RFC2222]. This is an authentication (2)mechanism used by the Lightweight Directory Access Protocol (LDAP).
Scalar: A type of MethodInstance that can be called to return a scalar value.
Scale Secure Real-Time Transport Protocol (SSRTP): A Microsoft proprietary extension to the Secure Real-Time Transport Protocol (SRTP), as described in [RFC3711].
scan device: A scanner, copier, or multifunction peripheral that supports the Devices Profile for Web Services [DPWS].
scan document: A single image file created by a scan device and transferred to the scan repository server during the processing of a PostScan job.
scan repository: A service that supports processing PostScan jobs based on data and instructions in a PostScan process.
scan ticket: An element that communicates the appropriate settings that should be used by a scan device when creating a scan document.
scatter chart: A chart that displays values on both the x and y axes to represent two variables as a single data point.
scenario: A named set of input values (changing cells) that can be substituted in a worksheet model.
Scenario Manager: A process for creating and managing different sets of input values for calculation models in a worksheet.
scene: An independent part of a tour that has a beginning and end, and a specific time duration in which a particular data visualization on the map occurs.
Schedule: The frequency at which FRS replicates data under a replica tree root.
schema: (1) The set of attributes and object classes that govern the creation and update of objects.
(2) A container that defines a namespace that describes the scope of EDM types. All EDM types are contained within some namespace.

schema object: An attribute (1) or an object class. Schema objects are contained in the schema naming context (schema NC).
schema version: An integer value that represents the version number of the schema for a deployment package.
scheme: The name of the specification that governs the assignment of identifiers within a particular URI scheme, as defined in [RFC3986] section 3.1.
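For example, Python's standard `urllib.parse` module can extract the scheme component of a URI:

```python
from urllib.parse import urlparse

# The scheme names the specification that governs the rest of the
# identifier, per [RFC3986] section 3.1.
print(urlparse("https://example.com/path").scheme)  # -> https
print(urlparse("mailto:[email protected]").scheme)   # -> mailto
```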
scope: (1) A range of IP addresses and associated configuration options that are allocated to DHCP clients in a specific subnet.
(2) The term "Scope" that is defined in [WS-Discovery1.1].
(3) An item that represents a hierarchy in a report. There are explicit scopes (such as data region, dataset, group) and implicit scopes (such as report scope). At any level in the hierarchy, there can be only one ancestor scope (except for the top-level report scope and the page scope) but an unlimited number of descendants as well as peer scopes.
scope identifier: A GUID that uniquely identifies a scope within a site collection.
scope index key: A basic scope index key or a compound scope index key that references a scope index record.
scorecard: A report that depicts organizational and business performance by displaying a collection of key performance indicators (KPIs) with performance targets for those KPIs. Each KPI compares actual performance to goals for an area. A scorecard can be organized hierarchically and typically contains visualization tools such as trend charts and conditional formatting.
SDP answer: A Session Description Protocol (SDP) message that is sent by an answerer in response to an offer that is received from an offerer.
SDP offer: A Session Description Protocol (SDP) message that is sent by an offerer.
sealed content type: A named and uniquely identifiable collection of settings and fields that cannot be changed. A seal can be removed only by a site collection administrator. See also content type.
search alert: An Internet message that is sent to subscribers automatically for a specific query. It notifies subscribers when one or more new results exist, or an existing result was modified.
search application: A unique group of search settings that is associated, one-to-one, with a shared service provider.
search catalog: All of the crawl data that is associated with a specific search application. A search catalog provides information that is used to generate query results.
search down: A process of searching for information by ascending row and column numbers.
search folder: (1) A collection of related items to be crawled by a search service.
(2) A Folder object that provides a means of querying for Message objects that match certain criteria.
search index: A set of data structures that facilitates query evaluation by a search service application. The primary part of a search index is an inverted index of terms.
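A minimal sketch of such an inverted index in Python (the corpus and function name are invented for illustration; real search indexes add term positions, ranking data, and compression):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {1: "crawl the site", 2: "index the crawl data"}
index = build_inverted_index(docs)
print(sorted(index["crawl"]))                  # -> [1, 2]
# Query evaluation intersects the posting sets of the query terms.
print(sorted(index["crawl"] & index["data"]))  # -> [2]
```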
search provider: A component or application that provides data in response to a query. See also result provider.

search scope consumer: A site collection that uses a specific search scope display group.
search scope rule: An attribute that specifies which items are included in a search scope.
search security descriptor: (1) A Windows security descriptor.
(2) A custom security descriptor that is in an arbitrary format and is handled by alternate authentication providers in pluggable security authentication (2).
search service account: A user account under which a search service runs.
search service application: A shared service application that provides indexing and querying capabilities.
search setting context: An administrative setting that is used to specify when a search setting for a keyword is applied to a search query, based on the query context.
search up: A process of searching for information by descending row and column numbers.
secondary bar/pie: A secondary chart in a bar of pie or pie of pie chart that displays the detailed data of the grouped data point in the primary pie chart. The secondary bar/pie chart takes the form of a stacked bar chart or a pie chart that is connected to the primary pie chart with series lines.
secondary shortcut key: A user-defined combination of keys that are pressed simultaneously to execute a command. See also primary shortcut key.
section: (1) A collection of user profile properties that appear together on a profile site.
(2) A portion of a document that is terminated by a section break or the end of the document. A section can store unique, page-level formatting, such as page size and orientation, and other formatting features such as headers and footers.
(3) A part of a form or report, such as a header or footer, that appears at each instance of a specific level in that form or report. It can be shown or hidden independently of other sections.
(4) Specifies the layout and structure information of a report. A report section is comprised of a body, a header, and a footer. A section is specified by the Section element.
securable object: An object that can have unique security permissions associated with it.
secure audio video profile (SAVP): A protocol that extends the audio-video profile specification to include the Secure Real-Time Transport Protocol, as described in [RFC3711].
Secure Real-Time Transport Protocol (SRTP): A profile of Real-Time Transport Protocol (RTP) that provides encryption, message authentication (2), and replay protection to the RTP data, as described in [RFC3711].

Secure Sockets Layer (SSL): A security protocol that supports confidentiality and integrity of messages in client and server applications that communicate over open networks. SSL supports server and, optionally, client authentication (2) using X.509 certificates (2). For more information, see [X509]. The SSL protocol is a precursor to Transport Layer Security (TLS). The TLS version 1.0 specification is based on SSL version 3.0.
Secure Store Service (SSS): A service that is used to store credentials for a user or a group of users. It enables applications, typically on behalf of a user, to authenticate and gain access to resources. Users can retrieve only their own credentials from the secure store.
Secure Store Service (SSS) store: A persistent store that provides storage for target application definitions and credentials.
Secure Store Service (SSS) ticket: A token that contains the encrypted identity of a Secure Store Service (SSS) user in the form of a claim (2) and a nonce.
Secure Store Service (SSS) user: A security principal (2) that interacts with a Secure Store Service (SSS) implementation.
Security Account Manager (SAM): A centrally managed service, such as AD DS, that enables a server to establish a trust relationship with other authorized servers. The SAM also maintains information about domains and security principals (2), and provides client-to-server information by using several available standards for access control lists (ACLs).
Security Assertion Markup Language (SAML): The set of specifications that describe security assertions encoded in XML, profiles for attaching assertions to protocols and frameworks, request/response protocols used to obtain assertions, and the protocol bindings to transfer protocols, such as SOAP and HTTP.
security association (SA): A simplex "connection" that provides security services to the traffic carried by it. See [RFC4301] for more information.
security context: (1).
(2) The result of a TSIG [RFC2845] security negotiation between the server and a client machine.
(3) A data structure containing authorization information for a particular security principal in the form of a collection of security identifiers (SIDs). One SID identifies the principal specifically, whereas others may represent groups to which the principal belongs.

security group: A named group of principals on a SharePoint site.
security group identifier: An integer that is used to uniquely identify a security group, distinguishing it from all other security principals (2) and site groups within the same site collection.

security policy: An expression of administrative intent regarding how computers and resources on a network should be secured.
security principal: (3) A unique entity identifiable through cryptographic means by at least one key. A security principal often corresponds to a human user but can also be a service offering a resource to other security principals. Sometimes referred to simply as a "principal".
(4) An identity that can be used to regulate access to resources, as specified in [MS-AUTHSOD] section 1.1.1.1. A security principal can be a user, a computer, or a group that represents a set of users.
security principal identifier: A value that is used to uniquely identify a security principal (2). In Windows-based systems, it is a security identifier (SID). In other types of systems, it can be a user identifier or other type of information that is associated with a security principal (2).

security principal object: An object that corresponds to a security principal. In AD LDS, any object containing the msDS-BindableObject auxiliary class is a security principal. See also computer object, group object, and user object.
security protocol: A protocol that performs authentication and possibly additional security services on a network.
security provider: (1) A Component Object Model (COM) object that provides methods that return custom information about the security of a site.
security realm or security domain: Represents a single unit of security administration or trust (for example, a Kerberos realm, for more information, see [RFC4120]; or a Windows Domain, for more information, see [MSFT-ADC]).

security token: A collection of one or more claims. Specifically in the case of mobile devices, a security token represents a previously authenticated user as defined in the Mobile Device Enrollment Protocol [MS-MDE].
security token service (STS): (1) A web service that issues claims (2) and packages them in encrypted security tokens.
(2) A web service that issues security tokens. That is, it makes assertions based on evidence that it trusts; these assertions are for consumption by whoever trusts it. For more information, see [WSFedPRP] sections 1.4 and 2 and [WSTrust] section 2.4. For [MS-ADFSPP], [MS-ADFSWAP], and [MS-MWBF], STS refers to services that support (either directly or via a front end) the protocol defined in each of those specifications.
(3) To communicate trust, a service requires proof, such as a signature to prove knowledge of a security token or set of security tokens. A service itself can generate tokens or it can rely on a separate STS to issue a security token with its own trust statement. (Note that for some security token formats, this can be just a re-issuance or co-signature.) This forms the basis of trust brokering.
(4) A special type of server defined in WS-Trust [WSTrust1.3].
security trimmer: A filter that is used to limit search results to only those resources that a user can view, based on the user's permission level and the access control list (ACL) for a resource. A security trimmer helps to ensure that search results display only those resources that a user has permission to view.
security zone: A setting that determines whether a resource, such as a website, can access data on other domains, or access files and settings on a user's computer. There are four security zones: Internet, Local intranet, Trusted sites, and Restricted sites. The zone to which a resource is assigned specifies the security settings that are used for that resource. See also form security level.
security-enabled group: A group object with GROUP_TYPE_SECURITY_ENABLED present in its groupType attribute. Only security-enabled groups are added to a security context. See also group object.
segment: (1) A subdivision of content. In version 1.0 Content Information, each segment has a size of 32 megabytes, except the last segment which can be smaller if the content size is not a multiple of the standard segment sizes. In version 2.0 Content Information, segments can vary in size.
(2) A set of stations that see each other’s link-layer frames without being changed by any device in the middle, such as a switch.
(3) A unit of content for discovery purposes. A segment is identified on the network by its public identifier, also known as segment ID or HoHoDk. A segment does not belong to any particular content; it can be shared by many content items if all those content items have an identical segment-sized portion at some offset.
selected: The condition of a set of items that has focus in a workbook.
selection: An item or set of items, such as cells, shapes, objects, and chart elements, that has focus in a document.
self SUBSCRIBE: A SUBSCRIBE request that is used by a publisher to be notified of changes to its own data. It is possible to subscribe to three different sets of data: categories (4), containers, and subscribers.
self subscriber: A SIP protocol client that is making a subscribe request for self-published category (4) information.
self-signed certificate: A certificate (1) that is signed by its creator and verified using the public key contained in it. Such certificates are also termed root certificates.
sequence: (1) A unique identifier for a delta that includes the user identifier for the endpoint (3) that created the delta.
(2) The set of message packets sent over a session that represent a message sequence. A message is associated with a sequence number that corresponds to its position within the sequence. Sequence numbers begin with 1 and increment by 1 with each subsequent message.
(3) A one-way, uniquely identifiable batch of messages between an RMS and an RMD.
sequence header: A set of encoding and display parameters that are placed before a group of pictures, as described in [SMPTE-VC-1]. See also entry point header.
Serialization Format: The structure of the serialized message content, which can be either binary or SOAP. Binary serialization format is specified in [MS-NRBF]. SOAP serialization format is specified in [MS-NRTP].
series line: A supplemental line on a stacked column, stacked bar, pie of pie, or bar of pie chart that connects each data point in a series with the next data point to increase legibility.

server: (4) For the Peer Content Caching and Retrieval Framework, a server is a server-role peer; that is, a peer that listens for incoming block-range requests from client-role peers and responds to the requests.
(5) Used as a synonym for domain controller. See [MS-DISO].
(6) Refers to the Group Policy server that is involved in a policy application sequence. See [MS-GPOL].
(7) The entity that responds to the HTTP connection. See [MS-TSWP].
(8) A server capable of issuing OMA-DM commands to a client and responding to OMA-DM commands issued by a client. See [MS-MDM]
(9) Used to identify the system that implements WMI services, provides management services, and accepts DCOM ([MS-DCOM]) calls from WMI clients.
(10) A domain controller. Used as a synonym for domain controller. See [MS-ADOD]
(11) An entity that transfers content to a client through streaming. A server might be able to do streaming on behalf of another server; thus, a server can also be a proxy. See [MS-WMLOG]
(12) Used as described in [RFC2616] section 1.3. See [MS-NTHT]
(13) For the purposes of [MS-RDC], the server is the source location.
(14) Any process that accepts commands for execution from a client by using the PowerShell Remoting Protocol.
server name: The name of a server, as specified in the operating system settings for that server.
server object: (1) A class of object in the configuration naming context (config NC). A server object can have an nTDSDSA object as a child.
(2) Part of the Remoting Data Model. A server object is an instance of a Server Type. A server object is either an SAO or an MSO.
(3) The database object in the account domain with an object class of samServer.
Server Reflexive Candidate: A candidate whose transport address is a network address translation (NAT) binding that is allocated on a NAT when an endpoint (5) sends a packet through the NAT to the server. A Server Reflexive Candidate can be discovered by sending an allocate request to the TURN server or by sending a binding request to a Simple Traversal of UDP through NAT (STUN) server.
Server Scale Secure Real-Time Transport Protocol (Server SSRTP): A derivative of the Scale Secure Real-Time Transport Protocol (SSRTP) that is used by applications to receive media from multiple senders and fan-out media to multiple receivers. Typically, applications such as Multipoint Control Units (MCUs) use this mode of encryption.
Server Type: Part of the Remoting Data Model. A Server Type contains Remote Methods.
server-activated object (SAO): A server object that is created on demand in response to a client request. See also marshaled server object.
server-relative URL: A relative URL that does not specify a scheme or host, and assumes a base URI of the root of the host, as described in [RFC3986].
service: (1) A process or agent available on the network, offering resources or services for clients. Examples of services include file servers, web servers, and so on.
(2) A process or agent that is available on the network, offering resources or services for clients. Examples of services include file servers, web servers, and so on.
(3) A program that is managed by the Service Control Manager (SCM). The execution of this program is governed by the rules defined by the SCM.
(4) The receiving endpoint of a web services request message, and sender of any resulting web services response message.
(5) A logical functional unit that represents the smallest units of control and that exposes actions and models the state of a physical device with state variables. For more information, see [UPNPARCH1.1] section 3.
(6) An application that provides management services to clients through the WS-Management Protocol and other web services.
(7) A SIP method defined by Session Initiation Protocol Extensions used by the client to request a service from the server.
SERVICE: A method that is defined by Session Initiation Protocol (SIP) extensions and is used by an SIP client to request a service from a server.
service application: A middle-tier application that runs without any user interface components and supports other applications by performing tasks such as retrieving or modifying data in a database.
Service Control Manager (SCM): An RPC server that enables configuration and control of service programs.

session: (2) A representation of application data in system memory. It is used to maintain state for application data that is being manipulated or monitored on a protocol server by a user.
(3) A collection of multimedia senders and receivers and the data streams that flow between them. A multimedia conference is an example of a multimedia session.
(4) In Kerberos, an active communication channel established through Kerberos that also has an associated cryptographic key, message counters, and other state.
(5) In Server Message Block (SMB), a persistent-state association between an SMB client and SMB server. A session is tied to the lifetime of the underlying NetBIOS or TCP connection.
(6) In the Challenge-Handshake Authentication Protocol (CHAP), a session is a lasting connection between a peer and an authenticator.
(7) In the Workstation service, an authenticated connection between two computers.
(8) An active communication channel established through NTLM, that also has an associated cryptographic key, message counters, and other state.
(9) In OleTx, a transport-level connection between a Transaction Manager and another Distributed Transaction participant over which multiplexed logical connections and messages flow. A session remains active so long as there are logical connections using it.
(10) The state maintained by the server when it is streaming content to a client. If a server-side playlist is used, the same session is used for all content in the playlist.
(11).
(12) An authenticated communication channel between the client and server correlating a group of messages into a conversation.
(13) A collection of state information on a directory server. An implementation of the SOAP session extensions (SSE) is free to choose the state information to store in a session.
(14) In LU 6.2, a session is a connection between LUs that can be used by a succession of conversations. A given pair of LU 6.2s may be connected by multiple sessions. For a more complete definition, see [LU62Peer].
(15) A context for managing communication over LLTD among stations.
(16) The operational environment in which an application and its commands execute.
(17) A context for managing communication over qWave-WD among devices. This is equivalent to a TCP connection.
(18) A multimedia session is a set of multimedia senders and receivers and the data streams flowing from senders to receivers. A multimedia conference is an example of a multimedia session.
(19) A set of multimedia senders and receivers and the data streams flowing from senders to receivers. A multimedia conference is an example of a multimedia session.
Session Description Protocol (SDP): (1) A protocol that is used to announce sessions, manage session invitations, and perform other types of initiation tasks for multimedia sessions, as described in [RFC3264].
(2) A protocol that is used for session announcement, session invitation, and other forms of multimedia session initiation. For more information see [MS-SDP] and [RFC3264].
session identifier: (1) A unique string that is used to identify a specific instance of session data and is used by stored procedures as an opaque primary key.
(2) A key that enables an application to make reference to a session.
(3) Unique identifier that an operating system generates when a session is created. A session spans the period of time from logon until logoff from a specific system.
Session Initiation Protocol (SIP): An application-layer control (signaling) protocol for creating, modifying, and terminating sessions with one or more participants. SIP is defined in [RFC3261].
Session Initiation Protocol (SIP) address: A URI that does not include a "sip:" prefix and is used to establish multimedia communications sessions between two or more users over an IP network, as described in [RFC3261].
session key: (1) A symmetric key that is derived from a master key and is used to encrypt or authenticate a specific media stream by using the Secure Real-Time Transport Protocol (SRTP) and Scale Secure Real-Time Transport Protocol (SSRTP).
(2).

session recycling: A process in which active sessions (2) are closed to start new sessions and to limit the total number of active sessions.
Setting: A partition of a metadata store. It is used to store Properties, localized names, and access control entries (ACEs) for MetadataObjects.
setup path: The location where supporting files for a product or technology are installed.
SHA: See system health agent (SHA).
SHA-1: An algorithm that generates a 160-bit hash value from an arbitrary amount of input data, as described in [RFC3174]. SHA-1 is used with the Digital Signature Algorithm (DSA) in the Digital Signature Standard (DSS), in addition to other algorithms and standards.
SHA-1 hash: A hashing algorithm as specified in [FIPS180-2] that was developed by the National Institute of Standards and Technology (NIST) and the National Security Agency (NSA).
SHA-256: An algorithm that generates a 256-bit hash value from an arbitrary amount of input data, as described in [FIPS180-2].
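The digest sizes named in the SHA-1 and SHA-256 entries above can be verified directly with Python's standard `hashlib` module (a sketch for illustration, not part of the specifications themselves):

```python
import hashlib

data = b"abc"
# SHA-1 produces a 160-bit (20-byte) digest; SHA-256 a 256-bit (32-byte) one.
sha1 = hashlib.sha1(data).hexdigest()
sha256 = hashlib.sha256(data).hexdigest()
print(len(sha1) * 4, len(sha256) * 4)  # 160 256
```

Each hex character encodes 4 bits, so the 40- and 64-character hex digests correspond to the 160- and 256-bit output sizes stated in [RFC3174] and [FIPS180-2].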
shade: A color that is mixed with black. A 10-percent shade is one part of the original color and nine parts black.
shading pattern: A background color pattern against which characters and graphics are displayed, typically in tables. The color can be no color or it can be a specific color with a transparency or pattern value.
shadow copy: A duplicate of data held on a volume at a well-defined instant in time.
shadow effect: A formatting effect that makes a font or object appear to be elevated from the page or screen surface, and therefore casts a shadow.
shallow refinement: A type of query refinement that is based on the aggregation of managed property statistics for only some results of a search query. The number of refined results varies according to implementation. See also deep refinement.
shape: A collection of qualifiers, such as names, and quantifiers, such as coordinates, that is used to represent a geometric object. A shape can be contained in a document, file structure, run-time structure, or other medium.
shape identifier: An integer that corresponds to a shape object or an instantiation of a shape object.
share: (1).
(2) To make content on a host desktop available to participants. Participants with a sufficient control level may interact remotely with the host desktop by sending input commands.
(3).
shared lock: A condition in which multiple protocol clients or protocol servers can read or write data concurrently, but no transaction can acquire an exclusive lock on the data until all of the shared locks have been released.
shared search scope: An administrator-defined restriction (1) that can be added to a query to limit query results to a collection of content. This restriction is available to multiple site collections.
Shared Services Provider (SSP): A logical grouping of shared service applications, and their supporting resources, that can be configured and managed from a single server and can be used by multiple server farms.
shared space: A set of tools that is synchronized between different endpoints (3), as described in [MS-GRVDYNM].
shared view: A view of a list or Web Parts Page that every user who has the appropriate permissions can see.
shared workbook: A workbook that is configured to enable multiple users on a network to view and make changes to it at the same time. Each user who saves the workbook sees the changes that are made by other users.

sheet stream: See stream (1) and document stream.
sheet tab: A control that is used to select a sheet.
sheet view: A collection of display settings, such as which cells are shown, and the zoom level for a sheet window.
short-term lock: A type of check-out process in Windows SharePoint Services. Short-term checkouts are implicit and are done when a file is opened for editing. A lock is applied to the file while it is being edited in the client application so that other users cannot modify it. After the client application is closed, the lock is released.
shrink to fit: The process of adjusting the font size of text in a cell to fit the current height and width of the cell.
Side: An area on a physical medium that can store data. Although most physical media have only a single side, some may have two sides. For instance, a magneto-optic (MO) disk has two sides: an "A" side and a "B" side. When an MO disk is placed in a drive with the "A" side up, the "A" side is accessible and the "B" side is not. To access the "B" side, the disk must be inserted with the "B" side up. The data stored on different sides of the same physical medium are independent of one another.
signature: (1) A synonym for hash.
(2) A value computed with a cryptographic algorithm and bound to data in such a way that intended recipients of the data can use the signature to verify that the data has not been altered and/or has originated from the signer of the message, providing message integrity and authentication. The signature can be computed and verified either with symmetric key algorithms, where the same key is used for signing and verifying, or with asymmetric key algorithms, where different keys are used for signing and verifying (a private and public key pair are used). For more information, see [WSFedPRP].
(3) The lowest node ID in the graph.
(4) A structure containing a hash and block chunk size. The hash field is 16 bytes, and the chunk size field is a 2-byte unsigned integer.
silence suppression: A mechanism for conserving bandwidth by detecting silence in the audio input and not sending packets that contain only silence.
Simple Mail Transfer Protocol (SMTP): A member of the TCP/IP suite of protocols that is used to transport Internet messages, as described in [RFC5321].
Simple Symmetric Transport Protocol (SSTP): A protocol that enables two applications to engage in bi-directional, asynchronous communication. SSTP supports multiple application endpoints (5) over a single network connection between client nodes.
Simple Symmetric Transport Protocol Security Protocol (SSTP) security: An independent sub-protocol that is exchanged within defined Simple Symmetric Transport Protocol (SSTP) messages, and is used for mutual authentication (2) between a relay server and a client device or an account.
Simple Traversal of UDP through NAT (STUN): A protocol that enables applications to discover the presence of and types of network address translations (NATs) and firewalls that exist between those applications and the Internet.
simple type: An element that can contain only text and appears as <simpleType> in an XML document or any attribute (1) of an element. Attributes are considered simple types because they contain only text. See also complex type.
single accounting: An underline style that places one line beneath the text. Single accounting can be used to indicate subtotals.
single sign-on (SSO): A process that enables users who have a domain user account to log on to a network and gain access to any computer or resource in the domain without entering their credentials multiple times.
single sign-on (SSO) administrator: A security principal (2) who is authorized to change a single sign-on (SSO) configuration and to obtain master secrets from a master secret server.
single sign-on (SSO) identifier: A string that represents the definition of user credentials that permit a user to access a network. See also single sign-on (SSO).
single-valued claim: See claim.
SIP element: An entity that understands the Session Initiation Protocol (SIP).
SIP message: The data that is exchanged between Session Initiation Protocol (SIP) elements as part of the protocol. An SIP message is either a request or a response.
SIP method: The primary function that an SIP request is meant to call on a server. This method is carried in the request message itself. Example methods are INVITE and BYE.
SIP protocol client: A network client that sends Session Initiation Protocol (SIP) requests and receives SIP responses. An SIP client does not necessarily interact directly with a human user. User agent clients (UACs) and proxies are SIP clients.
SIP registrar: A Session Initiation Protocol (SIP) server that accepts REGISTER requests and places the information that it receives from those requests into the location service for the domain that it handles.
SIP request: A Session Initiation Protocol (SIP) message that is sent from a user agent client (UAC) to a user agent server (UAS) to call a specific operation.
SIP response: A Session Initiation Protocol (SIP) message that is sent from a user agent server (UAS) to a user agent client (UAC) to indicate the status of a request from the UAC to the UAS.
SIP response code: A three-digit code in a Session Initiation Protocol (SIP) message, as described in [RFC3261].
SIP transaction: A SIP transaction occurs between a UAC and a UAS. The SIP transaction comprises all messages from the first request sent from the UAC to the UAS up to a final response (non-1xx) sent from the UAS to the UAC. If the request is INVITE, and the final response is a non-2xx, the SIP transaction also includes an ACK to the response. The ACK for a 2xx response to an INVITE request is a separate SIP transaction.
site: (3) A collection of one or more well-connected (reliable and fast) TCP/IP subnets. By defining sites (represented by site objects) an administrator can optimize both Active Directory access and Active Directory replication.

site collection: A set of websites (1) that are in the same content database, share the same owner, and share administration settings. Each site collection contains a top-level site, can contain one or more subsites, and can have a shared navigational structure.

site collection identifier: A GUID that identifies a site collection. In stored procedures, the identifier is typically "@SiteId" or "@WebSiteId". In databases, the identifier is typically "SiteId/tp_SiteId".

site hop: The process of traversing from one website to another during a crawl. See also page hop.
site identifier: A GUID that is used to identify a site in a site collection.
site map provider: An object that provides a hierarchy of nodes that represent navigation for a site (2).
site membership: The status of being a member of a site and having a defined set of user rights for accessing or managing content on that site.
site solution: A deployable, reusable package that contains a set of features, site definitions, and assemblies that apply to sites, and can be enabled or disabled individually.
site subscription: A logical grouping of site collections that share a common set of features and service data.
site subscription identifier: A GUID that is used to identify a site subscription.
site template: An XML-based definition of site settings, including formatting, lists, views, and elements such as text, graphics, page layout, and styles. Site templates are stored in .stp files in the content database.
site-collection relative URL: A URL that is relative to the site collection that contains a resource, and does not begin with a leading slash (/).
site-relative URL: A URL that is relative to the site that contains a resource and does not begin with a leading slash (/).
slide: A frame that contains text, shapes, pictures, or other content. A slide is a digital equivalent to a traditional film slide.
slide layout: An organizational scheme, such as Title Only or Comparison, for content on a presentation slide.
Slide Library: A type of a document library that is optimized for storing and reusing presentation slides that conform to the format described in [ISO/IEC-29500:2008].
slide show: A delivery of a sequence of presentation slides, typically to an audience.
slide show broadcast: A delivery of a sequence of presentation slides, typically to an audience, as a single session between a protocol server and one or more protocol clients.
Slot: A storage location within a library. For example, a tape library has one slot for each tape that the library can hold. A stand-alone drive library has no slots. Most libraries have at least four slots. Sometimes slots are organized into collections of slots called magazines. Magazines are usually removable.
smart document: A file that is programmed to assist the user as the user creates or updates the document. Several types of files, such as forms and templates, can also function as smart documents.
smart tag: A feature that adds the ability to recognize and label specific data types, such as people's names, within a document and displays an action button that enables users to perform common tasks for that data type.
smart tag actions button: A user interface control that displays a menu of actions that are associated with a specific smart tag.
smart tag indicator: A triangular symbol that appears in the bottom right corner of a cell and indicates that the cell contains a smart tag.
smart tag recognizer: An add-in that can interpret a specific type of smart tag, such as an address or a financial symbol, in a document and display an action button that enables users to perform common tasks for that data type.
snapshot: (1) A copy of a workbook that contains only values and formatting. It does not contain any formulas or data connections.
(2) 1.1: (1) Version 1.1 of the SOAP (Simple Object Access Protocol) standard. For the complete definition of SOAP 1.1, see [SOAP1.1].
(2) See Simple Object Access Protocol (SOAP) 1.1.

SOAP message: An XML document consisting of a mandatory SOAP envelope, an optional SOAP header, and a mandatory SOAP body. See [SOAP1.2-1/2007] section 5 for more information.
SOAP Message: The data encapsulated in a SOAP envelope that flows back and forth between a protocol client and a web service, as described in [SOAP1.1].
SOAP Message Transmission Optimization Mechanism (MTOM): A method that is used to optimize the transmission and format of SOAP messages by encoding parts of the message, as described in [SOAP1.2-MTOM].
SOAP node: An element in a SOAP message that identifies the node on a SOAP message path that causes a fault to occur, as described in [SOAP1.1].
SOAP operation: An action that can be performed by a Simple Object Access Protocol (SOAP) service, as described in [SOAP1.1].
SOAP session extensions (SSE): Extensions to DSML that make it possible to maintain state information across multiple request/response operations.
social data: A collection of ratings, tags, and comments about webpages and items on a SharePoint site or the Internet. Individual users create this data and, by default, share it with other users.
social networking: The use of websites and services that provide enhanced information and interaction capabilities with regard to people and resources.
social rating: A user-defined value that indicates the perceived quality of a webpage or item on a SharePoint site or the Internet. Individual users create these ratings and, by default, share them with other users.
social tag: A user-defined keyword and hyperlink to a webpage or item on a SharePoint site or the Internet. Individual users create these tags and, by default, share them with other users.
social tag user: The user who created a social tag.
SOCKS proxy: A network device that routes network packets between protocol clients and protocol servers by using the SOCKS protocol and the proxy server features that are described in [RFC1928].
solution gallery: A gallery (1) that is used to store solution packages.
solution package: A compressed file that can be deployed to a server farm or a site. It can contain assemblies, resource files, site and feature definitions, templates, code access security policies, and Web Parts. Solution packages have a .wsp file name extension.

sort order: (3) The order in which the rows in a Table object are requested to appear. This can involve sorting on multiple properties and sorting of categories (5).
(4) The set of rules in a search query that define the ordering of rows in the search result. Each rule consists of a property (for example, name or size) and a direction for the ordering (ascending or descending). Multiple rules are applied sequentially.
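The sequential multi-rule ordering described in definition (4) can be sketched with a stable sort: because Python's sort is stable, sorting by the last rule first and the first rule last produces the same result as applying the rules in sequence (the file-list data is illustrative only):

```python
items = [
    {"name": "b.txt", "size": 10},
    {"name": "a.txt", "size": 10},
    {"name": "c.txt", "size": 5},
]

# Rules: primary = size descending, secondary = name ascending.
items.sort(key=lambda i: i["name"])                 # secondary rule first
items.sort(key=lambda i: i["size"], reverse=True)   # primary rule last
print([i["name"] for i in items])  # ['a.txt', 'b.txt', 'c.txt']
```

Ties on the primary property (the two size-10 files) are broken by the secondary rule, exactly as "applied sequentially" implies.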
source data: (1) The data that is used as the basis for charts, PivotTable reports, and other data visualization features.
(2) See source file.
source file: A file on a source location that is to be copied by RDC. Sometimes referred to as source.
source location: (1) A server, disk, file, document, or other collection of information from which a file or data is copied.
(2) The source location is the location from which a file is being transferred after it has been compressed with RDC.
source term: A specific instance of a term, in a specific term set, that is used to define permissions for the term.
source variation site: A website (2) that contains a collection of publishing pages to be copied to other sites, which are referred to as target variation sites. After the publishing pages are copied to a target variation site, they can be translated into another language. See also target variation site.
spam: An unsolicited email message.
sparkline: A miniature chart that can be inserted into text or embedded in a cell on a worksheet to illustrate highs, lows, and trends in data.
special folder: One of a default set of Folder objects that can be used by an implementation to store and retrieve user data objects.
split pane: A pane that consists of two or more discrete areas of a window. Each area displays content and scrolls independently from other areas of the window. See also frozen panes.
SplitButtonMRUPopup control: A type of SplitButtonPopup control whose icon changes to reflect the command that the user most recently selected from the menu that is displayed by that button.
SplitButtonPopup control: A type of Button control that performs an action when clicked, and can also display a menu of related commands when the user clicks a drop-down arrow that appears on the button.
SplitDropDown control: A type of Button control that performs a default action when clicked, and can also expand to display a list of other possible actions when the user clicks a drop-down arrow that appears on the button.
spool file: A representation of application content data than can be processed by a print driver. Common examples are enhanced metafile format and XML paper specification. For more information, see [MSDN-META] and [MSDN-XMLP].
spreadsheet data model: A local Online Analytical Processing (OLAP) storage of data used by a spreadsheet application.
SQL authentication: One of two mechanisms for validating attempts to connect to instances of SQL Server. In SQL authentication, users specify a SQL Server login name and password when they connect. The SQL Server instance ensures that the login name and password combination are valid before permitting the connection to succeed.
SQL statement: (1) A complete phrase in SQL that begins with a keyword and completely describes an action to be taken on data.
(2) A character string expression in a language that the server understands.
sRGB: (1) A standard color space that enables various devices, including cameras, scanners, displays, and printers, to produce colors that are reasonably identical, as described in [IEC-RGB].
(2) A standard, predefined color space that is portable across all devices and allows accurate color matching with little overhead. sRGB was developed by Hewlett-Packard and Microsoft and is specified in [IEC-RGB]. It is available to users of Windows. Windows NT 3.1, Windows NT 3.5, Windows NT 3.51, Windows 95, and Windows NT 4.0: sRGB color management technology is not available.
SsoTicketFilter: A FilterDescriptor type that is used in conjunction with a single sign-on (SSO) system to transmit an SSO ticket to a line-of-business (LOB) system.
SSRTP stream: A sequence of Scale Secure Real-Time Transport Protocol (SSRTP) packets from a sender to a receiver who are identified by the same Synchronization Source (SSRC).

staging object: A block of data that represents an instance of an object type as defined in the connected data source.
start address: A URL that identifies a point at which to start a crawl. Administrators specify start addresses when they create or edit a content source.
startup directory: The directory from which an application opens data files when the application starts.
state changing: A type of operation that changes the state of a session.
statement of health ReportEntry (SoH ReportEntry): A collection of data that represents a specific aspect of the health state of a client.
static rank: The component of a rank that does not depend on a search query. It represents the perceived importance of an item and can be related to the origin of the item, and relationships between the item and other items or business rules that are defined in the search application. See also dynamic rank.
station: Any device that implements LLTD.
Status-Code: A 3-digit integer result code in an HTTP response message, as described in [RFC2616].
Status-Line: The first line of an HTTP response message, as described in [RFC2616].
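Per [RFC2616], the Status-Line has the shape `HTTP-Version SP Status-Code SP Reason-Phrase`, which a minimal parser can split on (a sketch, not a full HTTP parser; it assumes a well-formed line):

```python
def parse_status_line(line):
    # Status-Line = HTTP-Version SP Status-Code SP Reason-Phrase
    # maxsplit=2 keeps any spaces inside the Reason-Phrase intact.
    version, code, reason = line.split(" ", 2)
    return version, int(code), reason


print(parse_status_line("HTTP/1.1 404 Not Found"))
# -> ('HTTP/1.1', 404, 'Not Found')
```

The 3-digit Status-Code described above comes back as an integer; a robust implementation would also validate that it is exactly three digits.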
stemming: A type of query expansion that factors relationships between words by reducing inflected words to their stem form or expanding stems to their inflected forms. For example, the words "swimming" and "swam" can be associated with the stem "swim."
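The stem-reduction side of the stemming entry above can be illustrated with a deliberately naive suffix stripper; real engines use algorithms such as Porter's plus dictionaries of irregular forms (e.g. "swam" -> "swim"), which simple suffix rules cannot handle:

```python
def naive_stem(word):
    """Toy stemmer for illustration only: strip the first matching
    suffix, keeping a stem of at least three characters."""
    for suffix in ("ming", "ing", "ed", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word


print(naive_stem("swimming"), naive_stem("jumped"))  # swim jump
```

Query expansion then works in both directions: reducing query terms to stems at query time, or expanding indexed stems to their inflected forms.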
stock chart: A custom chart type that is designed to display stock market data on multiple series; for example, high, low, close, and volume.
stop word: A language-specific token that is not indexed and is ignored in a query. It typically has low semantic content and is used only for grammatical purposes, for example “a” and “and” in the English language.
storage: (1) An element of a compound file that is a unit of containment for one or more storages and streams, analogous to directories in a file system, as described in [MS-CFB].
(2) A set of elements with an associated CLSID used to identify the application or component that created the storage.
(3) A storage object, as defined in [MS-CFB].
store-relative form: See store-relative URL.
store-relative URL: A URL that consists only of a path segment and does not include the leading and trailing slash.

stream: (3).
(4) A sequence of bytes that typically encodes application data.
(5) A sequence of ASF media objects ([ASF] section 5.2) that can be selected individually. For example, if a movie has an English and a Spanish soundtrack, each may be encoded in the ASF file as a separate stream. The video data would also be a separate stream.
(6) A sequence of messages whose delivery is guaranteed exactly once and in order.
(7) A set of tracks interchangeable at the client when playing media.
(8) An individual audio or video data-flow in a presentation. The media data in an individual stream always uses the same media data format.
(9) A flow of data from one host to another host. May also be used to reference the flowing data.
(10) A stream object, as defined in [MS-CFB].
stream cipher: A cryptographic algorithm that transforms plaintext bits into cipher text one bit or byte at a time. When the process is reversed, cipher text is transformed into plaintext one bit or byte at a time. See also block cipher.
StreamAccessor: A type of MethodInstance that can be called to retrieve a Field (4) of an EntityInstance in the form of a data stream of bytes.
streaming: (1) The act of transferring content from a sender to a receiver.
(2) The act of processing a part of an XML Infoset without requiring that the entire XML Infoset be available.
strikethrough formatting: A formatting option in which characters are crossed out by horizontal line.
stripe band: One or more adjacent columns (2) or rows (2) that are in a table and have the same stripe formatting.
stripe formatting: A table formatting option that applies background colors to alternating rows (2) or columns (2) to increase legibility.
stroke order: A sort order that arranges items in a sort range according to the number of strokes that is used to write each glyph. Stroke order is used when sorting text that is written in some East Asian languages.
strong name: A name that consists of the simple text name, version number, and culture information of an assembly, strengthened by a public key and a digital signature that is generated over the assembly.
structural object class: An object class that is not an 88 object class and can be instantiated to create a new object.
structured document tag: An entity in a document that is used to denote content that is stored as XML data.
structured document tag bookmark: An entity in a document that is used to denote the location and presence of a structured document tag.
Structured Query Language (SQL): A database query and programming language that is widely used for accessing, querying, updating, and managing data in relational database systems.
structured XML query: An XML document that specifies a query that may contain multiple subqueries. For more information, see section 2.2.16.
STUN candidate: A candidate whose transport addresses are STUN-derived transport addresses. See also Simple Traversal of UDP through NAT (STUN).
STUN-derived transport address: A derived transport address that is obtained by an endpoint (5) from a configured STUN server. See also Simple Traversal of UDP through NAT (STUN).
style: A set of formatting options that is applied to text, tables, charts, and other objects in a document.
submit: The process of sending data to an external data source such as a web service, database, Internet message, or SharePoint site.
subquery: A component of a structured XML query. For more information, see section 2.2.16.
Subrequest: A request within a SYNC_VOLUMES request. For details on requests, see section 3.1.4.
SUBSCRIBE: A Session Initiation Protocol (SIP) method that is used to request asynchronous notification of an event or a set of events at a later time.
subscriber: (1) A Session Initiation Protocol (SIP) client that is making a SUBSCRIBE request.
(2) An application that needs to receive events that are published by another application.
(3) An application that needs to receive historical data published by another application.
subscription: (1) The result of a SUBSCRIBE request from a Session Initiation Protocol (SIP) element.
(2) The end result of an act of a SIP element sending a SUBSCRIBE request.
(3) A registration performed by a subscriber to specify a requirement to receive events, future messages, or historical data.
(4) A request for a copy of a publication to be delivered to a subscriber. For more information, see [MSDN-RepPub].
subsite: A complete website that is stored in a named subdirectory of another website. The parent website can be the top-level site of a site collection or another subsite. Also referred to as subweb.
suffix length: An integer that represents the number of bytes of the current index key string minus the number of identical bytes at the beginning of the current and previous index key strings. See also prefix length.
summary: The orientation of outline expand and outline collapse symbols in relation to the data that is outlined.
Super P-frame (SP-frame): A special P-frame that uses the previous cached frame instead of the previous P-frame or I-frame as a reference frame.
surface chart: A chart that shows a three-dimensional surface that connects a set of data points. It can be used to determine the optimum combination between two sets of data.
surrogate pair: A pair of 16-bit Unicode encoding values that, together, represent a single 32-bit character, as described in [ISO-10646]. For more information about surrogate pairs and combining character sequences, see the Unicode Standard in [UNICODE].
survey list: A list that is preconfigured and optimized for conducting surveys and compiling survey results into graphical views.
survivable mode: A mode that enables a protocol client to access basic voice services if some server or network resources are unavailable.
switch: (1) A data link-layer device that propagates frames between segments and allows communication among stations on different segments. Stations that are connected through a switch see only those frames destined for their segments. Compare this term with hub and router.
(2) A logical device type that provides options to run a terminal window or a custom script for a dial-up connection. This device type is not used for dialing a connection.
switchable site map provider: A site map provider that uses other site map providers as its source data when constructing a site map.
symbol file: A file that contains information about an executable image, including the names and addresses of functions and variables.
symmetric key: A secret key used with a cryptographic symmetric algorithm. The key needs to be known to all communicating parties. For an introduction to this concept, see [CRYPTO] section 1.5.
synchronization engine: A code module that creates an integrated view of objects that are stored in multiple, connected data sources, and manages information in those data sources.
synchronization source (SSRC): The source of a stream (6). A synchronization source may change its data format (for example, audio encoding) over time. The SSRC identifier is a randomly chosen value meant to be globally unique within a particular RTP session. A participant need not use the same SSRC identifier for all the RTP sessions in a multimedia session; the binding of the SSRC identifiers is provided through RTCP. If a participant generates multiple streams in one RTP session, for example from separate video cameras, each MUST be identified as a different SSRC. See [RFC3550] section 3.
Synchronization Source (SSRC): A 32-bit identifier that uniquely identifies a media stream (2) in a Real-Time Transport Protocol (RTP) session. An SSRC value is part of an RTP packet header, as described in [RFC3550].
Synchronized Multimedia Integration Language (SMIL): An XML-based language that enables a data stream to be divided, transmitted as separate streams, and then recombined as a single stream, as described in [W3C-SMIL3.0].
syntax: See attribute syntax.
system health agent (SHA): The client components that make declarations on a specific aspect of the client health state and generate a statement of health ReportEntry (SoH ReportEntry).
system palette: (1) An itemization of all of the colors that can be displayed by the operating system for a device.
(2) The palette that is actually in use to reproduce colors on a device such as a computer screen. A system palette has predefined, device-specific colors that are used by default, so that individual applications do not have to set them up.
system partition: A partition that contains the boot loader needed to invoke the operating system on the boot partition. A system partition must also be an active partition. It can be, but is not required to be, the same partition as the boot partition.
system resources: The physical resources of a server computer, such as memory, disk space, CPU, and network bandwidth.
system volume (SYSVOL): A shared directory that stores the server copy of the domain's public files that must be shared for common access and replication throughout a domain.
SystemID: A binary identifier that is used to uniquely identify a security principal (2). For Windows integrated authentication, it is a security identifier (SID). For an ASP.NET Forms Authentication provider, it is the binary representation that is derived from a combination of the provider name and the user login name. | https://msdn.microsoft.com/en-us/library/dd950864 | CC-MAIN-2016-40 | refinedweb | 9,202 | 55.03 |
Having your computer know how you feel? Madness!
Or actually not madness, but OpenCV and Python. In this tutorial we’ll write a little program to see if we can recognise emotions from images.
How cool would it be to have your computer recognize the emotion on your face? You could make all sorts of things with this, from a dynamic music player that plays music fitting with what you feel, to an emotion-recognizing robot.
For this tutorial I assume that you have:
- Intermediate knowledge of Python;
- OpenCV installed (Apparently I’m old-fashioned and still use 2.4.9), installation instructions here;
- A Face Database containing emotions (I use the Cohn-Kanade database, get it here). these tutorials. If you don’t have this, please try a few more basic tutorials first or follow an entry-level course on coursera or something similar. This also means you know how to interpret errors. Don’t panic but first read the thing, google if you don’t know the solution, only then ask for help. I’m getting too many emails and requests over very simple errors. Part of learning to program is learning to debug on your own as well. If you really can’t figure it out, let me know.
Unix users: The current tutorial is written for use on windows systems. It will be updated in the near future to be cross-platform.
Citation format
van Gent, P. (2016). Emotion Recognition With Python, OpenCV and a Face Dataset. A tech blog about fun things with Python and embedded electronics. Retrieved from:
Getting started
To be able to recognize emotions on images we will use OpenCV. OpenCV has a few ‘facerecognizer’ classes that we can also use for emotion recognition. They use different techniques, of which we’ll mostly use the Fisher Face one. For those interested in more background; this page has a clear explanation of what a fisher face is.
Request and download the dataset, here (get the CK+). I cannot distribute it so you will have to request it yourself, or of course create and use your own dataset. It seems the dataset has been taken offline. The other option is to make one of your own or find another one. When making a set: be sure to insert diverse examples and make it BIG. The more data, the more variance there is for the models to extract information from. Please do not request others to share the dataset in the comments, as this is prohibited in the terms they accepted before downloading the set.
Once you have your own dataset, extract it and look at the readme. It is organised into two folders, one containing images, the other txt files with emotions encoded that correspond to the kind of emotion shown. From the readme of the dataset, the encoding is: {0=neutral, 1=anger, 2=contempt, 3=disgust, 4=fear, 5=happy, 6=sadness, 7=surprise}.
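The encoded label can be turned back into a readable emotion name with a simple list lookup. Note that the label files store the code as a float (the exact string below is illustrative):

```python
emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"]

# The label files store the emotion code as a float (e.g. "3.0000000e+00" -- illustrative),
# so parse to float first, then truncate to an integer list index
raw_label = "3.0000000e+00"
emotion = emotions[int(float(raw_label))]
print(emotion)  # disgust
```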
Let’s go!
Organising the dataset
First we need to organise the dataset. In the directory you’re working, make two folders called “source_emotion” and “source_images”. Extract the dataset and put all folders containing the txt files (S005, S010, etc.) in a folder called “source_emotion”. Put the folders containing the images in a folder called “source_images”. Also create a folder named “sorted_set”, to house our sorted emotion images. Within this folder, create folders for the emotion labels (“neutral”, “anger”, etc.).
In the readme file, the authors mention that only a subset (327 of the 593) of the emotion sequences actually contain archetypical emotions. Each image sequence consists of the forming of an emotional expression, starting with a neutral face and ending with the emotion. So, from each image sequence we want to extract two images; one neutral (the first image) and one with an emotional expression (the last image). To help, let’s write a small python snippet to do this for us:
import glob
from shutil import copyfile

emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] #Define emotion order
participants = glob.glob("source_emotion\\*") #Returns a list of all folders with participant numbers

for x in participants:
    part = "%s" %x[-4:] #store current participant number
    for sessions in glob.glob("%s\\*" %x): #Store list of sessions for current participant
        for files in glob.glob("%s\\*" %sessions):
            current_session = files[20:-30]
            file = open(files, 'r')
            emotion = int(float(file.readline())) #emotions are encoded as a float, read as float, then convert to integer
            file.close() #close the label file again
            sourcefile_emotion = glob.glob("source_images\\%s\\%s\\*" %(part, current_session))[-1] #get path for last image in sequence, which contains the emotion
            sourcefile_neutral = glob.glob("source_images\\%s\\%s\\*" %(part, current_session))[0] #do same for neutral image
            dest_neut = "sorted_set\\neutral\\%s" %sourcefile_neutral[25:] #Generate path to put neutral image
            dest_emot = "sorted_set\\%s\\%s" %(emotions[emotion], sourcefile_emotion[25:]) #Do same for emotion containing image
            copyfile(sourcefile_neutral, dest_neut) #Copy file
            copyfile(sourcefile_emotion, dest_emot) #Copy file
Extracting faces
The classifier will work best if the training and classification images are all of the same size and contain (almost) only a face, with no clutter. We need to find the face in each image, convert it to grayscale, crop it, and save the image to the dataset. We can use a HAAR filter from OpenCV to automate face finding. OpenCV actually provides 4 pre-trained classifiers, so to be sure we detect as many faces as possible let’s use all of them in sequence, and abort the face search once we have found one. Get them from the OpenCV directory or from here and extract them to the same folder as your python files.
Create another folder called “dataset”, and in it create subfolders for each emotion (“neutral”, “anger”, etc.). The dataset we can use will live in these folders. Then, detect, crop and save faces as such;
import cv2
import glob

faceDet = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
faceDet_two = cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
faceDet_three = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
faceDet_four = cv2.CascadeClassifier("haarcascade_frontalface_alt_tree.xml")

emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] #Define emotions

def detect_faces(emotion):
    files = glob.glob("sorted_set\\%s\\*" %emotion) #Get list of all images with emotion
    filenumber = 0
    for f in files:
        frame = cv2.imread(f) #Open image
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) #Convert image to grayscale

        #Detect face using 4 different classifiers
        face = faceDet.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        face_two = faceDet_two.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        face_three = faceDet_three.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        face_four = faceDet_four.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)

        #Go over detected faces, stop at first detected face, return empty if no face
        if len(face) == 1:
            facefeatures = face
        elif len(face_two) == 1:
            facefeatures = face_two
        elif len(face_three) == 1:
            facefeatures = face_three
        elif len(face_four) == 1:
            facefeatures = face_four
        else:
            facefeatures = ""

        #Cut and save face
        for (x, y, w, h) in facefeatures: #get coordinates and size of rectangle containing face
            print "face found in file: %s" %f
            gray = gray[y:y+h, x:x+w] #Cut the frame to size
            try:
                out = cv2.resize(gray, (350, 350)) #Resize face so all images have same size
                cv2.imwrite("dataset\\%s\\%s.jpg" %(emotion, filenumber), out) #Write image
            except:
                pass #If error, pass file
        filenumber += 1 #Increment image number

for emotion in emotions:
    detect_faces(emotion) #Call function
The last step is to clean up the “neutral” folder. Because most participants have expressed more than one emotion, we have more than one neutral image of the same person. This could (not sure if it will, but let’s be conservative) bias the classifier accuracy unfairly: it may recognize the same person in another picture, or be triggered by characteristics other than the emotion displayed. Do this by hand: go into the folder and delete all multiples of the same face you see, so that only one image of each person remains.
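A first pass of this cleanup can be automated by keeping only the first neutral image per participant. The sketch below assumes CK+-style filenames where the first four characters encode the participant ID (e.g. "S005_001_00000001.png"); verify the result by eye afterwards:

```python
import glob
import os

def remove_duplicate_neutrals(folder):
    # Keep the first image per participant (first 4 filename characters,
    # e.g. "S005"); delete any later images of the same participant
    seen = set()
    for path in sorted(glob.glob(os.path.join(folder, "*"))):
        participant = os.path.basename(path)[:4]
        if participant in seen:
            os.remove(path)  # duplicate neutral image of this person
        else:
            seen.add(participant)

remove_duplicate_neutrals(os.path.join("sorted_set", "neutral"))
```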
Creating the training and classification set
Now we get to the fun part! The dataset has been organised and is ready to use, but first we need to actually teach the classifier what certain emotions look like. The usual approach is to split the complete dataset into a training set and a classification set. We use the training set to teach the classifier to recognize the to-be-predicted labels, and use the classification set to estimate the classifier performance.
Note the reason for splitting the dataset: estimating the classifier performance on the same set as it has been trained is unfair, because we are not interested in how well the classifier memorizes the training set. Rather, we are interested in how well the classifier generalizes its recognition capability to never-seen-before data.
In any classification problem, the sizes of both sets depend on what you’re trying to classify: the size of the total dataset, the number of features, and the number of classification targets (categories). It’s a good idea to plot a learning curve; we’ll get into this in another tutorial.
For now, let’s create the training and classification sets: we randomly sample and train on 80% of the data, classify the remaining 20%, and repeat the process 10 times. Afterwards we’ll play around with several settings a bit and see what useful results we can get.
import cv2
import glob
import random
import numpy as np

emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] #Emotion list
fishface = cv2.createFisherFaceRecognizer() #Initialize fisher face classifier

data = {}

def get_files(emotion): #Define function to get file list, randomly shuffle it and split 80/20
    files = glob.glob("dataset\\%s\\*" %emotion)
    random.shuffle(files)
    training = files[:int(len(files)*0.8)] #get first 80% of file list
    prediction = files[-int(len(files)*0.2):] #get last 20% of file list
    return training, prediction

def make_sets():
    training_data = []
    training_labels = []
    prediction_data = []
    prediction_labels = []
    for emotion in emotions:
        training, prediction = get_files(emotion)
        #Append data to training and prediction list, and generate labels 0-7
        for item in training:
            image = cv2.imread(item) #open image
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) #convert to grayscale
            training_data.append(gray) #append image array to training data list
            training_labels.append(emotions.index(emotion))
        for item in prediction: #repeat above process for prediction set
            image = cv2.imread(item)
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            prediction_data.append(gray)
            prediction_labels.append(emotions.index(emotion))
    return training_data, training_labels, prediction_data, prediction_labels

def run_recognizer():
    training_data, training_labels, prediction_data, prediction_labels = make_sets()

    print "training fisher face classifier"
    print "size of training set is:", len(training_labels), "images"
    fishface.train(training_data, np.asarray(training_labels))

    print "predicting classification set"
    cnt = 0
    correct = 0
    incorrect = 0
    for image in prediction_data:
        pred, conf = fishface.predict(image)
        if pred == prediction_labels[cnt]:
            correct += 1
            cnt += 1
        else:
            incorrect += 1
            cnt += 1
    return ((100*correct)/(correct + incorrect))

#Now run it
metascore = []
for i in range(0,10):
    correct = run_recognizer()
    print "got", correct, "percent correct!"
    metascore.append(correct)

print "\n\nend score:", np.mean(metascore), "percent correct!"
Let it run for a while. In the end on my machine this returned 69.3% correct. This may not seem like a lot at first, but remember we have 8 categories. If the classifier learned absolutely nothing and just assigned class labels randomly we would expect on average (1/8)*100 = 12.5% correct. So actually it is already performing really well. Now let’s see if we can optimize it.
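That chance-level baseline is worth keeping in mind whenever we change the number of categories: assuming roughly balanced classes, a random guesser scores 100 divided by the number of categories.

```python
def chance_level(n_categories):
    # Expected accuracy (%) of uniform random guessing over balanced classes
    return 100.0 / n_categories

print(chance_level(8))  # 12.5
print(chance_level(5))  # 20.0
```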
Optimizing Dataset
Let’s look critically at the dataset. The first thing to notice is that we have very few examples for “contempt” (18), “fear” (25) and “sadness” (28). I mentioned it’s not fair to predict the same dataset as the classifier has been trained on, and similarly it’s also not fair to give the classifier only a handful of examples and expect it to generalize well.
Change the emotion list so that “contempt”, “fear” and “sadness” are no longer in it, because we really don’t have enough examples for them:
#Change from:
emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"]
#To:
emotions = ["neutral", "anger", "disgust", "happy", "surprise"]
Let it run for a while again. On my computer this results in 82.5% correct. Purely by chance we would expect on average (1/5)*100 = 20%, so the performance is not bad at all. However, something can still be improved.
Providing a more realistic estimate
Performance so far is pretty neat! However, the numbers might not be very reflective of a real-world application. The data set we use is very standardized. All faces are exactly pointed at the camera and the emotional expressions are actually pretty exaggerated and even comical in some situations. Let’s see if we can append the dataset with some more natural images. For this I used google image search and the chrome plugin ZIG lite to batch-download the images from the results.
If you want, do this yourself, clean up the images. Make sure for each image that there is no text overlayed on the face, the emotion is recognizable, and the face is pointed mostly at the camera. Then adapt the facecropper script a bit and generate standardized face images.
Alternatively, save yourself an hour of work and download the set I generated and cleaned.
Merge both datasets and run again on all emotion categories except “contempt” (so re-include “fear” and “sadness”); I could not find any convincing source images for contempt.
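Merging can be done with a small copy script. The folder name "google_set" below is hypothetical — point it at wherever you stored the cropped web images; the "merge_" filename prefix just avoids collisions with the existing set:

```python
import glob
import os
from shutil import copyfile

def merge_sets(source_root, target_root, emotions):
    # Copy every image from source_root/<emotion> into target_root/<emotion>,
    # prefixing filenames so they don't overwrite the existing dataset images
    for emotion in emotions:
        for path in glob.glob(os.path.join(source_root, emotion, "*")):
            target = os.path.join(target_root, emotion, "merge_" + os.path.basename(path))
            copyfile(path, target)

emotions = ["neutral", "anger", "disgust", "fear", "happy", "sadness", "surprise"]
merge_sets("google_set", "dataset", emotions)  # "google_set" is a hypothetical folder name
```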
This gave 61.6% correct. Not bad, but not great either. Despite what we would expect at chance level (14.3%), this still means the classifier will be wrong 38.4% of the time. I think the performance is actually really impressive, considering that emotion recognition is quite a complex task. However impressive, I admit an algorithm that is wrong almost half the time is not very practical.
Speaking from a practical perspective: depending on the goal, an emotion classifier might not actually need so many categories. For example, a dynamic music player that plays songs fitting to your mood would already work well if it recognized anger, happiness and sadness. Using only these categories I get 77.2% accuracy. That is a more useful number! It means that almost 4 out of 5 times it will play a song fitting to your emotional state. In a later tutorial we will build such a player.
The spread of accuracies between different runs is still quite large, however. This either indicates the dataset is too small to accurately learn to predict emotions, or the problem is simply too complex. My money is mostly on the former. Using a larger dataset will probably enhance the detection quite a bit.
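To quantify that spread, look at the standard deviation of the per-run scores alongside the mean. The numbers below are made up for illustration — substitute the metascore list collected by the loop above:

```python
import math

# Hypothetical per-run accuracies, in place of the metascore list from the loop above
metascore = [62.1, 55.4, 69.8, 64.2, 51.9, 60.3, 58.7, 66.5, 57.2, 63.9]

mean = sum(metascore) / len(metascore)
std = math.sqrt(sum((s - mean) ** 2 for s in metascore) / len(metascore))
print("mean: %.1f%%, std: %.1f" % (mean, std))  # mean: 61.0%, std: 5.1
```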
Looking at mistakes
The last thing that might be nice to look at is what mistakes the algorithm makes. Maybe the mistakes are understandable, maybe not. Add an extra line to the last part of the function run_recognizer() to copy images that are wrongly classified; also create a folder “difficult” in your root working directory to house the images:

    for image in prediction_data:
        pred, conf = fishface.predict(image)
        if pred == prediction_labels[cnt]:
            correct += 1
            cnt += 1
        else:
            cv2.imwrite("difficult\\%s_%s_%s.jpg" %(emotions[prediction_labels[cnt]], emotions[pred], cnt), image) #<-- this one is new
            incorrect += 1
            cnt += 1
    return ((100*correct)/(correct + incorrect))
I ran it on all emotions except “contempt”, and ran it only once (for i in range(0,1)).
Some mistakes are understandable, for instance:
“Surprise”, classified as “Happy”
, honestly it’s a bit of both
“Disgust”, classified as “Sadness”
, he could also be starting to cry.
“Sadness”, classified as “Disgust”
But most are less understandable, for example:
“Anger”, classified as “Happy”
“Happy”, classified as “Neutral”
It’s clear that emotion recognition is a complex task, more so when only using images. Even for us humans this is difficult because the correct recognition of a facial emotion often depends on the context within which the emotion originates and is expressed.
I hope this tutorial gave you some insight into emotion recognition and hopefully some ideas to do something with it. Did anything cool with it or want to try something cool? Let me know below in the comments!
The dataset used in this article is the CK+ dataset, based on the work of:
–.
427 Comments
Ajesh K RMay 23, 2016
Hi sir,
Its really a brilliant work you have done. I am really interested in it, and wants to know more about how the classification of emotions are done. Can you provide some more stuff that help me understand the details??
Paul van GentMay 28, 2016
Hi Ajesh,
Thanks for your comment. I’m not sure what exactly you’re interested in. Do you want more background information or are you having problems with something in the code or explanations?
If you want more background info please see here or here.
Em AasimDecember 11, 2016
hi sir!
i am having trouble downloading data set. is there any other way to download it?
Paul van GentDecember 15, 2016
Hi Em Aasim. I cannot distribute it, so if the authors choose not to share it with you, then that’s it I’m afraid. You can always make your own set, although this is not a trivial task.
SalmaAugust 15, 2017
Here you can agree to the conditions of use and submit a request in order to get the dataset.
Ho Xuan VuongJune 25, 2018
How can I download it or make it myself
Den003April 20, 2019
Hi Sir!
I’m having trouble with the first code in which you are sorting images.
It gives me an error.Can you help me!
PermissionError Traceback (most recent call last)
in <module>
8 for files in glob.glob(“%s\\*” %sessions):
9 current_session = files[20:-30]
—> 10 file =open(files, ‘r’)
11 emotion = int(float(file.readline())) #emotions are encoded as a float, readline as float, then convert to integer.
12 sourcefile_emotion = glob.glob(“source_images\\%s\\%s\\*” %(part, current_session))[-1] #get path for last image in sequence, which contains the emotion
PermissionError: [Errno 13] Permission denied: ‘source_emotion\\Emotion_labels\\Emotion\\S005’
Thanks!
palkabApril 30, 2019
Likely the path is incorrect and no files are there. If you’re on a unix system, use forward slashes for paths.
ildar lomovFebruary 11, 2019
Code is shit, i’m sorry but it is true. pls attend some coding courses
palkabFebruary 13, 2019
You are 100% correct. This post was written years ago when I was starting out. I’ve kept it up because from looking at the traffic stats it seems to benefit many people..
I’ve had it on the list to update the code blobs for a while now, so thanks for the reminder at any rate!
Michał ŻołnierukMay 31, 2016
Hi from Poland!
Really cool stuff! It helped me and my friend a lot with our project to write an app for swapping face to adequate emoji. We did use your code to harvest faces from CK dataset and mentioned it in the repo we’ve just created. Take a look and tell us if you are fine with this.
Thanks a lot again, your blog is really interesting and I can’t wait for new posts.
Paul van GentMay 31, 2016
Hi Michał!
I like your project, really cool stuff! I’ll be sure to give it a try tonight.
Also thanks for mentioning me, it’s perfect this way.
Keep up the good work!
Sunny BhadaniNovember 22, 2017
hey bro,i found your GitHub project on emoji really cool….i myself is making a project on facial expression recognition.It will be great if we can collaborate.
Looking forward to hear from you.
HKHKSeptember 21, 2018
That is a nice project. But when I use Python 2.7.15 and opencv 3.2.0.7 to run your project, the accuracy is very bad. Could you specify all library versions? Could you help? My email is cognsoft(at)gmail(dot)com
palkabSeptember 28, 2018
If you’re on linux or osx, make sure the images from the dataset are ordered properly in glob(). Wrap it in a sorted() function to be sure.
– Paul
YashMarch 20, 2019
hey, can you help me out on this?I tried to learn from your code but execution gives me an error :
$python3 webcam.py
Traceback (most recent call last):
File “webcam.py”, line 66, in <module>
fisher_face = cv2.face.FisherFaceRecognizer_create()
AttributeError: module ‘cv2.cv2’ has no attribute ‘face’
palkabApril 4, 2019
Hi Yash, You’re using a newer version of OpenCV. They’ve changed their module structure. You need to install an extra module with pip:
python -m pip install opencv-contrib-python
Should work after that!
– Paul
SamJune 16, 2016
It was so well explained and so helpful !!! Thank you so much !! We quoted your work in our synposis
Paul van GentJune 16, 2016
Thank you Sam! What did you make with it? I’m curious 🙂
JasperJuly 17, 2016
Hi Sir!
I’m trying to do your Emotion-Aware Music Player but I’m having a problem. Whenever I run the code that crops the image and save it to the “dataset” folder, I get the error “UnboundLocalError: local variable ‘x’ referenced before assignment”. Any help with that? I’m using Spyder with python2.7 bindings and OpenCV 2.4.13.
Paul van GentJuly 17, 2016
Hi Jasper,
Can you send me an e-mail with your code attached, and the full error message you’re getting? You can send it to “palkab29@gmail.com”. I’ll have a look in the afternoon :).
JasperJuly 17, 2016
I’ve just sent it to you! Thank you!
Paul van GentJuly 17, 2016
Turns out I missed a line when updating the code. Thanks for pointing it out Jasper! It’s updated now.
AlexAugust 4, 2016
How would you edit the code that sorts the images in the files to sort the landmarks into different files?
Paul van GentAugust 4, 2016
Hi Alex,
Most of the parts are there, if you look under “Organising the dataset”, for each iteration the code temporarily stores the participant ID in the variable part, the sessions in sessions and the emotion in the variable emotion. You can use these labels to also access the data from the landmarks database, since these are organized with the same directory structure as the images and emotion labels are.
On a side-note, I’ll be posting a tutorial on landmark-based emotion detection somewhere this week. Keep an eye on the site if you get stuck. Good luck 🙂
MrinaliniJuly 18, 2018
Hi Paul,
I am getting an error “list index out of range” while organizing the dataset. could you please help me where I am doing wrong.
Thanks in advance
palkabAugust 8, 2018
Hi Mrinalini,
That error usually means the list is empty, since the code tries to retrieve the last item in the list (the last item of an empty list does not exist). You need to check that the paths generated are correct, and that all the files are there.
– Paul
Sandhya MalegereApril 20, 2019
check whether your folder name and path is correct
AliAugust 5, 2016
Great Article.
Thanks!
Pingback: An interesting point that you most likely didn’t know | Blog
AlexanderAugust 14, 2016
The python script that sorts the images into emotion types slices ‘Sx’ (S0, S1, S2… S9 etc) from the subject participant part at the beginning of the filename of each image. I used your algorithm to sort the landmarks into facial expression files the same way and it retained the whole filename. Would you know why this happens? Basically the first two characters of the filename of each image are snipped off.
Paul van GentAugust 14, 2016
Hi Alex,
This is because of lines 19 and 20, where I slice the filenames from the path using sourcefile_neutral[25:] (and the same for sourcefile_emotion). If you want a clean way of dealing with filenames of different lengths, first split the filename on the backslash using split(), for example sourcefile_neutral.split(“\\”). This returns a list of elements. Take the last element of the list with [-1] to get the complete filename.
Good luck!
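A minimal sketch of that split-based approach (the path below is a hypothetical example):

```python
# A Windows-style path as produced by glob; "\\" in source is one backslash
sourcefile_neutral = "source_images\\S005\\001\\S005_001_00000001.png"

# Split on the backslash and keep the last element: the bare filename.
# Unlike a fixed slice such as [25:], this works for any path depth.
filename = sourcefile_neutral.split("\\")[-1]
print(filename)  # S005_001_00000001.png
```

On Windows, os.path.basename() gives the same result without the manual split.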
AlexAugust 15, 2016
Thanks, I dissected your code and figured that if you change “sourcefile_neutral[25:]” to “sourcefile_neutral[23:]” that I keep the whole .png filename. Oddly… I then had to change it to “sourcefile_neutral[26:]” for .txt files even though [25:] worked fine for them previously.
I have one more issue… the code that detects the faces in each image and normalizes the dimensions of each image doesn’t appear to do anything. I’ve placed it in a directory above all the folders such as sorted_set, source_emotion etc. Is that the correct location for the script? Thanks again!
AlexAugust 15, 2016
Turns out the issue is that either the classifiers are failing to detect the face (unlikely) or the script isn’t actually accessing the classifiers stored in opencv/sources/data/haarcascades
AlexAugust 15, 2016
Added the full path to the xml files “C:\opencv\sources\data\haarcascades\…xml” and it worked!
Paul van GentAugust 15, 2016
Hi Alex, that would also work. In the script I assume you put the trained cascade classifier files in the same directory as the python files. I have updated the tutorial to make this more clear. Thanks!
CarlosAugust 17, 2016
Hello Paul! thanks a lot for this wonderful code, it has helped me a lot. Just one question, I am always getting the first 4 saved images wrong of each set when using the save_face(emotions) but not in the first set. For example I start recording happy face, then angry so in the data set all pictures of happy are fine but the first four pictures in the angry data set are actually happy faces. What can my problem be? It is weird because the name of the picture is angry even if the face is happy. This happens to all subsequent emotions, just the first emotion data set is all good.
AliSeptember 15, 2016
Hi Paul
I have an OutOfMemoryError. I use 4 gigabytes of RAM.
Is there a solution for this problem? or just I should upgrade the ram to 8?
thanks paul
Paul van GentSeptember 18, 2016
Hi Ali,
I’m not sure, the program shouldn’t be that memory intensive. Are you sure you’re not storing all images somewhere and leaving them in memory? Feel free to mail me your code if you want me to have a look.
Cheers
SridhharSeptember 18, 2016
hi!! i am getting an error while running the 1st code (organising the dataset)
” file = open(files, ‘r’)
IOError: [Errno 13] Permission denied: ‘source_emotion\\Emotion\\S005\\001′”
can you help me out with this??
if possible can u mail me the entire code?
thank you
Paul van GentSeptember 18, 2016
Hi Sridhar,
Read the error message; “Permission denied”. It seems you don’t have permission to read from these folders. What system are you using?
SridharSeptember 19, 2016
i’m using windows 10.
I tried changing the directories . Moved the entire folder to C and D drive. Didn’t work !!!
SridharSeptember 19, 2016
I’m new to python. Could you please explain it
Piyush SaraswatMay 28, 2017
Hi paul,
i’m getting same error, Permission denied: ‘source_emotion\\Emotion_labels\\Emotion\\S005’
what to do?
Paul van GentJune 12, 2017
– Check the folders exist and have the exact names as in the code.
– Does your user account have the correct permissions to these folders?
– You might need to run the code with elevated privileges.
AkshayFebruary 13, 2019
Hi Sridhhar I also got the same error.
Have you resolved It?
May I know how you resolved it?
can you send me corrected code?
my mail-ID – adhomse99@gmail.com
Thank you….!
sridharSeptember 19, 2016
hi Paul!!!
The code is working now.
There was a mistake in the directory path.
Thanks for the support
Paul van GentSeptember 19, 2016
I suspected something like this. Good to hear you found the issue. Good luck!
Prem SekarMarch 6, 2017
hi sridhar,
may i know how u corrected ur error…
hirenNovember 1, 2017
hey…bro…how do you solve it?
deepakNovember 16, 2018
how did u solved it brother .
AkshayFebruary 13, 2019
can you share what changes you made?
Justin CruzSeptember 30, 2016
Good Day Mr. Paul! Can you mail me your entire code? If it’s okay with you? Thank you in advance!
Paul van GentOctober 1, 2016
Hi Justin,
All the information you need is in the article, I’m sure you can figure it out :)!
Cheers
Paul
DONGNovember 17, 2016
u r amazing
Virat TrivediNovember 18, 2016
Thank you so much Sir.
Your guide has been of IMMENSE help in my work, can’t thank you enough.
I just had one doubt which is that you said that “In a next tutorial we will build such a player.” which is a hyperlink.
But that hyperlink gives a 404 error. Can you please provide us an updated link to the same?
Paul van GentNovember 24, 2016
Hi Virat,
I’m glad to hear it helped. I will update the link, but you can also find it through the home page :-). Please don’t forget to cite me! Cheers,
Paul
JonathanNovember 24, 2016
Hello,
This is amazing. Can I get this accessed from iPhone project? I want to detect emotion from iOS device camera when user look at it. How to achieve this?
Paul van GentNovember 24, 2016
Hi Jonathan,
I think you could, please see this link for some tutorials on how to get started with OpenCV and iOS:
I wouldn’t know the specifics because I don’t develop for iOS, but translating the code probably won’t be too difficult. You can also probably re-use trained models as the core is still OpenCV.
Good luck! Let me know if you manage to get it working.
ramyavrNovember 30, 2017
Hi Paul
I am getting
Traceback (most recent call last):
File “extract.py”, line 49, in
filenumber += 1 #Increment image number
NameError: name ‘filenumber’ is not defined
vikrantDecember 16, 2016
After running traing code i am getting this massge— AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
I am using window 10. I have installed opencv.
Plz help me.
vikrantDecember 16, 2016
thank you very much Paul… i installed latest version of opencv. now its running.
AshwinJanuary 29, 2017
I have the latest version of opencv 3.2 but I’m still getting the error-AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
I am using Windows 10 64 bit
python version 2.7.13
Please help…
Paul van GentFebruary 8, 2017
Hi Ashwin. Either check the docs to see what changed from 2.4 to 3.2, or use the OpenCV version from the tutorial.
leon trimbleFebruary 13, 2017
hey! you’re the python expert i was hoping you’d tell us! i got facial recognition working from this tutorial
it took an age to work out how to swap out the webcam for the raspberry pi cam, please help! i need a more fundamental understanding of the codebase to work across versions!!!
Paul van GentFebruary 14, 2017
Hi Leon. I´m not sure what you mean. Do you want more information about using webcams in conjunction with Python? Do you want more information on how to use images from different sources with the visual recognition code on this site? Let me know.
Cheers
Paul
leon trimbleFebruary 16, 2017
…getting it working on opencv 3.
Paul van GentFebruary 16, 2017
The docs provide all the answers.. It seems a new namespace ‘face’ is added in the new opencv versions.
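A version-tolerant sketch of creating the recognizer could look like the following. The three factory names are the ones documented for OpenCV 2.4.x, 3.0 to 3.2, and 3.3+ respectively; treat this as an assumption to verify against your installed version:

```python
def create_fisherface(cv2_module):
    """Return a FisherFace recognizer regardless of OpenCV version.

    OpenCV 2.4.x exposes cv2.createFisherFaceRecognizer(); 3.x moved it
    into the 'face' namespace as cv2.face.createFisherFaceRecognizer(),
    and 3.3+ renamed it to cv2.face.FisherFaceRecognizer_create().
    """
    face = getattr(cv2_module, "face", None)
    if face is not None:
        factory = getattr(face, "FisherFaceRecognizer_create", None) \
                  or getattr(face, "createFisherFaceRecognizer")
        return factory()
    return cv2_module.createFisherFaceRecognizer()

# usage:  import cv2; fishface = create_fisherface(cv2)
```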
Aniket MoreJune 30, 2017
use cv2.face.createFisherFaceRecognizer
yasinMarch 6, 2019
Please say how you solved this problem. Please give me a solution, because I am also getting this type of error.
I am also using OpenCV 4.0.0.21, the latest version, and it is not working.
Aniket MoreJune 30, 2017
I am getting only 21.3% accuracy; what could be the reason?
Paul van GentJuly 1, 2017
Hi Aniket. The reasons could be numerous. To find out, I would check:
– Your dataset could be too small for the task you are trying to accomplish. It could be the images you’re trying to recognize the emotions on are diverse, difficult, or too few. Remember that the algorithm needs to have a large range of examples in order to quantify the underlying variance. The more subtle the emotions, or the more variation within each emotion, the more data is required. It is also possible that this algorithm simply isn’t up for the task given your dataset. Also look at my other tutorial using a support vector machine classifier in conjunction with facial landmarks.
– Where are most mistakes made? Maybe one or two categories have little data in them and are throwing the rest off.
– Are there no labeling errors or file retrieval errors? If emotion images receive an incorrect label this will obviously wreck performance.
Good luck!
-Paul
Aniket MoreJuly 5, 2017
Thanks for the reply Paul, actually I am using the same data set you suggested (CK+). And I am training on 80% of data and classifying 20% as you said, still I am getting 21-23 % accuracy with all the categories and 36% using the reduced set of emotions. I am not getting why the same code with same data set is giving me different results.
I am using Ubuntu 14.04, OpenCV 3.0.0. Also as it’s mentioned in some of the comments above glob does not work well in Ubuntu, I verified the data after sorting it is same as you mentioned “contempt” (18), “fear” (25) and “sadness” (28).
Paul van GentJuly 5, 2017
I’ve been getting more reports of 3.0 behaving differently. In essence the tutorial ‘abuses’ a facial recognition algorithm to in stead detect variations within the face. It’s likely in 3.0 the approach has been tweaked and doesn’t work so well for this application anymore. Be sure to check software versions specified in the beginning of each tutorial, sometimes higher versions are not better..
You can also take a look at the other tutorial on emotion recognition, it’s a bit more advanced but also a more ‘proper way’ of approaching this problem.
-Paul
senoritaJanuary 2, 2017
hi,
when i’m trying to run the first snippet of code for organizing data into neutral and expression, i’m getting this error :
Traceback (most recent call last):
File “C:\Users\310256803\workspace\FirstProject\pythonProgram\tryingcv.py”, line 12, in
file = open(files, ‘r’)
IOError: [Errno 13] Permission denied: ‘c:\\source_emotion\\Emotion\\S005\\001’
can anyone please help me
Paul van GentJanuary 12, 2017
It seems your script doesn’t have permission to access these files. Is the folder correct? Also be sure to run the cmd prompt in administrator mode. If that doesn’t work, try moving the folder to your documents or desktop folder, that is often a quick fix for permission errors.
AndrejJanuary 16, 2017
Hello, thank you very much for your great tutorial. I was wondering if there is any way I can save this trained model for later use.
Paul van GentJanuary 17, 2017
Yes definitely, please view the specific CV2 documentation here:
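As a rough sketch: in the OpenCV 2.4 FaceRecognizer API this comes down to the recognizer's save() and load() methods. The file name below is a placeholder:

```python
def save_recognizer(recognizer, path):
    """Persist a trained FaceRecognizer to an XML file via its save() method."""
    recognizer.save(path)

def load_recognizer(recognizer, path):
    """Restore a previously trained model into a fresh recognizer instance."""
    recognizer.load(path)
    return recognizer

# usage (assuming OpenCV 2.4.x):
#   save_recognizer(fishface, "trained_emotion_model.xml")
#   fishface = load_recognizer(cv2.createFisherFaceRecognizer(),
#                              "trained_emotion_model.xml")
```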
NkululekoJanuary 17, 2017
Hi Paul, thank you for this tutorial. It really helped me with my honours project. I would also like to learn how a Neural Network would do in classifying the emotions, maybe a SVM as well. Thanks.
Paul van GentJanuary 18, 2017
Hi Nkululeko,
Glad to hear it was of some help! If you want to learn about how other classifiers work with emotion recognition, you have to make a few intermediary steps of extracting features from images. Take a look at this tutorial. It also discusses the performance of an SVM and Random Forest Classifiers, and some pointers.
In the near future I plan on writing a similar one for convolutional neural nets (deep learning networks)
baharAugust 29, 2017
Hi, did you write this program with deep learning networks? If yes, please give us the link 🙂
Paul van GentSeptember 11, 2017
Hi Bahar. This is planned, but not there yet.
VishFebruary 8, 2017
Hi Paul, Thank you for such a detailed guide!
I needed your assistance for my project, which would scan faces and detect emotions (predicting mental disorders is an enhancement I intend to incorporate). I'm completely new to this technique and am not sure where to begin 🙁
Could you please guide me on the choice of softwares to be used, whether I should opt for MATLAB or OpenCV, or something else? This first step needs to be completed for me to proceed with the development of the application. I would really appreciate your assistance on this.
Paul van GentFebruary 8, 2017
Hi Vish. For the software I would say whichever you feel most comfortable with. You are undertaking a complex project so the most important thing is that you are very familiar with your tools, otherwise you might end up demotivated quickly.
Regarding classifying mental disorders; I don’t think that is possible from just images. Think about how you could automatically extract features to use in classification from other sources than pictures. However, don’t let me discourage you. If you want, keep me updated on your progress (info@paulvangent.com), I’d like that.
VishFebruary 9, 2017
Thank you Paul for your quick response 🙂 Could you tell me whether the selection of software varies with the scope of the application?
For instance, my requirement is to scan a photo clicked from the front camera of an android device. This photo is then processed at a remote server which returns the mood of the person.
Is this scenario limited to a certain software package or do I have choices? I’m sorry if my questions sound silly, just confused from where to begin. Your guidance will really prove beneficial for me to begin.
Paul van GentFebruary 9, 2017
No problem. If you have a server-side application running you need to think about two main things:
– How much traffic are you expecting and how does your solution scale?
– What is available on the server OS?
I’m expecting that sending images to the server for analysis and receiving the results back quickly gets impractical as the number of users grows (you don’t want to wait more than a few seconds for the result..), and puts a lot of strain on server resources.
However, if you’re developing an Android app, note that OpenCV is available on the platform as well. You can also train several classifiers from the SKLearn framework and use the trained models in an Android app. See the following link for pointers:
Only simple math is required.
VishApril 25, 2017
Hello Paul,
I have reduced the scope of my application to detect only sad and happy emotions since I have struggled with using MATLAB as I have no prior knowledge about it. Could you please let me know how do I implement your tutorial on Mac?
I have created the necessary folder structure but need to know how do I execute the files.
Paul van GentApril 25, 2017
Hi Vish. It should be similar to Windows, except you use Terminal instead of Command Prompt. To install the necessary packages, see the repo manuals for each package.
Prashanth P PrabhuFebruary 22, 2017
Try as I might, I am not able to go beyond 36% accuracy for the combined data set. Any idea why you may be getting better accuracy than me? Is this dependent on the system that I am using? (I doubt it.)
Paul van GentFebruary 22, 2017
I doubt the system has much to do with it either. What OpenCV version are you using? An earlier report of low accuracy used OpenCV3.x I believe (I mention I used 2.4.9).
Remember I’m “hijacking” a face recognition algorithm for emotion recognition here. It is very possible that optimizations done on OpenCV’s end in newer versions impair this type of detection in favour of more robust face recognition.
Take a look at the next tutorial using facial landmarks, that is more robust.
Prashanth P PrabhuFebruary 22, 2017
Paul, thanks for your reply. However, I found the root cause, which was the different glob.glob behaviour between Python 2 and 3. In Python 3 you need to explicitly sort the lists returned. I was not doing that initially, which resulted in the training data set getting the wrong images… for example, sometimes anger would slip into neutral. Fixing this takes the accuracy to about 83% out of the box, which is pretty cool 🙂 Awesome work!
Will definitely try out your landmark based tutorial to compare the approaches. Is it out yet ?
Paul van GentFebruary 22, 2017
Great you found the issue! Thanks for replying so that others may also benefit :).
The other is out, see the home page, or use this link.
Good luck!
GBooFebruary 22, 2017
Please… help me.
I don't understand.
I downloaded the CK+ files (4 zips) and made two new folders (“source_emotion” and “source_images”),
but I don't understand the next step.
How do I extract the files? The images, the txt files? I don't know what is meant.
I hope for a tutorial video on this… T T
Paul van GentFebruary 22, 2017
Hi GBoo,
Just follow the tutorial. It’s all there. Looking at the code may also help. If you can’t figure it out I suggest you try a few simpler Python tutorials first, this one assumes at least intermediate Python skills.
– Paul
KingKongFebruary 23, 2017
I have some questions…
Please answer me; I have only a little English…
1. Extract the 3 zip files (emotion_labels, FACS_labels, Landmarks) and put them together in the source_emotion folder?
source_emotion
└S005
└001
└S005_001_00000001_landmarks.txt ( 11files 1~11)
└S005_001_00000011_emotion.txt
└S005_001_00000011_facs.txt
└S010
└001
2. Extract the extended-cohn-kanade-images.zip file and move it to the source_images folder, right?
source_images
└S005
└.DS_Store
└001
└S005_001_00000001.png ( 11files 1~11)
└S010
└001
└S010_001_00000001.png ( 14files 1~14)
└002
3.
emotion = int(file.readline())
ValueError: invalid literal for int() with base 10: ‘2.1779878e+02 2.1708728e+02\n’
I want to try this tutorial but have some problem…
Please help me…
KingKongFebruary 23, 2017
3.
emotion = int(float(file.readline()))
ValueError: invalid literal for float(): 2.1779878e+02 2.1708728e+02
Paul van GentFebruary 23, 2017
It seems like you’re opening the landmarks file, not the emotion text file. The emotion text files contain single floats like 2.000000
AppyFebruary 24, 2017
I am getting the same error. Did you figure out the problem?
Paul van GentFebruary 24, 2017
The mentioned floats are not present in the text files containing the emotion, in these files you should only find integers disguised as floats (e.g. “7.0000000e+00”), not actual floats (e.g. 2.1779878e+02). Please verify which files the code is trying to access when it gives an error.
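For reference, a correct emotion label file such as S005_001_00000011_emotion.txt holds a single value in scientific notation, which parses to an integer like this:

```python
# The sole line of an emotion label file, written in scientific notation
raw = "7.0000000e+00\n"

# float() handles the notation (and the trailing newline), int() casts it
emotion = int(float(raw))
print(emotion)  # 7
```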
AppyFebruary 24, 2017
It was accessing the landmark file. I made the following change to the code and it worked.
It was:
for files in glob.glob(“%s\\*” %sessions):
I changed it to:
for files in glob.glob(“%s\\*emotion.txt” %sessions):
Paul van GentFebruary 24, 2017
I thought something like that was happening. Good you found it. Happy coding!
-Paul
AppyFebruary 24, 2017
Thank you Paul for guiding in the right direction
KeshavFebruary 25, 2017
Hey Paul, When I try to execute the first python file, instead of taking only the neutral image, it is taking emotional images as well. Any idea why that is happening?
KeshavFebruary 25, 2017
I meant the first code where you split the different emotions. The other emotions are split correctly, but the neutral set has a mixture of both neutral and emotional images.
sourcefile_neutral = glob.glob(“source_images//%s//%s//*” %(part, current_session))[0]
should return only the first image right?
KeshavFebruary 25, 2017
Okay I found the error, we should sort the directory using
sorted(glob.glob(“source_images//%s//%s//*” %(part, current_session)))[0]
It works fine then..
Paul van GentFebruary 25, 2017
Strange, I didn’t need to sort it, as it was sorted by glob. What Python version and OS are you using?
HjorturMarch 10, 2017
I am using Ubuntu 14 and was working out a few of your posts with much lower accuracy. When I looked at the images I found them generously classified. The problem was what Keshav found that it should be sorted.
Thanks for these great, great articles!
KaranFebruary 27, 2017
Hey Paul,
I am getting this error when i am trying to run your script on ubuntu OS.
fish_face.train(training_data, np.asarray(training_labels))
cv2.error: /build/opencv-vU8_lj/opencv-2.4.9.1+dfsg/modules/contrib/src/facerec.cpp:455: error: (-210) In the Fisherfaces method all input samples (training images) must be of equal size! Expected 313600 pixels, but was 307200 pixels. in function train
Thanks for your help.
Paul van GentMarch 13, 2017
Hi Karan. The error means the images you supply are not similarly sized. All training images and all prediction images need to be the exact same dimensions for the classifier to work properly. Resize your images with either numpy or opencv.
Cheers
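The classifier needs every image at identical dimensions; a guard like the following catches stragglers before training. This is a sketch: the resize function is injected (in practice you would pass cv2.resize), and 350×350 is assumed to match the tutorial's image size:

```python
def ensure_size(image, resize_fn, target=(350, 350)):
    """Resize an image to the target (width, height) if it differs.

    In practice pass resize_fn=cv2.resize; images already at the
    target size are returned untouched.
    """
    h, w = image.shape[:2]
    if (w, h) != target:
        return resize_fn(image, target)
    return image
```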
NafisMarch 1, 2017
Hi Paul,
I faced this error:
training fisher face classifier
size of training set is: 506 images
OpenCV Error: Insufficient memory (Failed to allocate 495880004 bytes) in cv::OutOfMemoryError, file ..\..\..\..\opencv\modules\core\src\alloc.cpp, line 52
I used your code without any change. Any idea why this might happen? I am using 4Gb of RAM.
Paul van GentMarch 13, 2017
Hi Nafis. All images are stored in the training_data and prediction_data lists. Are you using 32-bit Python? I believe Windows should allocate virtual memory if OpenCV needs more. In this case I recommend 64-bit Python.
If you can’t, don’t want to or are already using 64-bit python and still get the error, you could try several things:
– Reduce the number of images in the dataset
– Reduce the resolution of the images
– Change the code so that only the training set is loaded when training, then delete this set and load the prediction set once you’re ready to evaluate the trained model.
Hope this guides you in a usable direction.
KarthikeyanSeptember 9, 2017
In the emotion_dtect.py file I'm getting a “TypeError: 'int' object is not iterable” on the line: pred, conf = fishface.predict(image)
How to resolve this sir?
simuxxMarch 3, 2017
Hi Paul,
Thank you for your job. your tutorials will help me a lot as I’m working on emotion recognition.
I’m trying to run the code but i’m having this error
sourcefile_emotion = glob.glob(“C:/…/source_images/%s/%s/*” %(part, current_session))[-1]
IndexError: list index out of range
Can you help me please
Paul van GentMarch 13, 2017
Hi Simuxx. The error is explicit: it cannot find the index of the list you specify, so that likely means the list returned by glob.glob is empty.
AniketJune 30, 2017
Did you resolve this issue @simuxx?
Paul van GentJuly 1, 2017
Take a look at the list “sourcefile_emotion”, likely it is empty. Are the folders that you feed to glob.glob() correct? Is there something in the folders?
Prem SekarMarch 6, 2017
hi paul,
I executed your code to clean the dataset but it shows an error… could you help me with it?
karthikMarch 6, 2017
Sir, the dataset link you provided contains many folders and images. Can you please explain how to create my own dataset, with a small example?
Suppose I have a list of images stored in the source_images folder. What do I need to store in the source_emotion folder? Can I just save happy=1, sad=2, and so on, in the form of txt files?
Paul van GentMarch 13, 2017
Hi Karthik!
Cheers
AshMarch 13, 2017
Hi Paul!!
Excellent tutorial. Really easy to understand the flow. I just have this doubt, I saved the trained model and when I opened it, it displayed something like this –
2
1
122500
d
1.0575786924939467e+002 1.0452300242130751e+002
1.0227360774818402e+002 1.0003389830508475e+002
9.7685230024213084e+001 9.5399515738498792e+001
………….
So do you have any idea what these values are?
Paul van GentMarch 13, 2017
Hi Ash,
Thanks! I’m not sure, these could be either decision boundaries or hyperplane coefficients (see how SVM’s work for more info), depending on the approach the face recognizer class in OpenCV takes. I’m not sure anymore what approach it takes though, been a while since I read up on it.
Cheers
KeshavMarch 13, 2017
Hey Paul, I successfully did everything as per the tutorial and got 95% accuracy . I tried to make a device with intel EDISON board. When I train the system , it says OutofMemory Exception because of Fisherface.train, Any idea how to overcome the memory leak?
Paul van GentMarch 14, 2017
Hi Keshav. It’s not a memory leak, there just isn’t sufficient memory on the system for this type of task. You might try training a model on a computer and transferring the trained model to the Edison to use for just classification. Be sure to also test how the model performs on data from webcams and other sources, as it’s unlikely you retain the 95% when generalising to other sets (this is where the real challenge still lies!).
Good luck!
JackMarch 14, 2017
Hey Paul, amazing tutorial! I must be doing something wrong but in the run_recognizer function I am returned the following error and am not very sure what is going on.. printing the image variable clearly shows that it is storing a full image..
—> 58 pred, conf = fishface.predict(image)
59
60 if pred == prediction_labels[cnt]:
TypeError: ‘int’ object is not iterable
JackMarch 14, 2017
fixed ! just gotta remove the confidence interval returned.. I guess it’s all about those python incompatabilities
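A small normalizing helper is one way to keep the same calling code working on both behaviours (a sketch; some OpenCV builds return only the label from predict(), others a (label, confidence) tuple):

```python
def unpack_prediction(result):
    """Normalize fishface.predict() output across OpenCV builds.

    Returns (label, confidence); confidence is None when the build
    only returns the bare label.
    """
    if isinstance(result, tuple):
        return result[0], result[1]
    return result, None

# usage:  pred, conf = unpack_prediction(fishface.predict(image))
```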
OussamaSeptember 28, 2017
Hello Jack,
I am facing the same error I removed the conf variable but I still get the same error.
can you please help?
thank you.
Paul van GentSeptember 28, 2017
Hi Oussama. Can you share the exact error message and/or the code with me? info@paulvangent.com
RajatMarch 25, 2017
Followed all the steps. Even tried with the dataset you provided as “googleset”. But I am not getting an accuracy more than 55% even with 5 expressions. Please help!!!!
Paul van GentApril 8, 2017
Please check whether the generated image list correctly matches the label list. However, 55% with 5 expressions is way above chance level, you would expect 20% (1/5).
The method will never reach 100% accuracy, and depending on what sets you use, 55% may be the maximum obtainable.
KeshavMarch 27, 2017
Oh I managed to solve all the problems and I have made a device for the blind people to detect the intruder with wrong intentions using emotion recognition. Thank you so much for the tutorial Paul. You have been a great inspiration. I have done it using Intel Edison board.
Paul van GentMarch 28, 2017
That sounds like a fun project! Can you share more information on it? You’ve made me curious :).
KeshavMarch 30, 2017
Basically, there’s a button which acts as a trigger. Once if you press it, the Video camera starts recording and constantly monitor the emotions. Sometimes the emotions might be incorrect, So I have set up a count value for emotions. So if any different emotions like anger , for example, is detected, the blind person is alerted via a beep sound or some vibration. Your project acts as a base for mine. In case , such emotions are detected, the blind person will be aware of the situation. Moreover once the emotion is detected to be anger, the snapshot of the person standing right in front of him will be stored inside the board And also If you hold the button for a long time, Your location will sent to the already chosen emergency contacts. 😀
Any idea on improving the accuracy of the detection?
JohnApril 2, 2017
I saved my xml model and it seems that it not detect very good emotions. The precision is poor.
I try to figure out what is wrong (with model or with classification).
Can you save and provide me your model to see if the problemes comes from my training?
Thanks in advance (my email : ioan_s2000@yahoo.com)
Paul van GentApril 8, 2017
Hi John. Check whether the labels correspond with the images when training and classifying the model. Also try to expand the training set with more images if performance remains poor.
Remember that high accuracy might not be possible for a given dataset. Fine-tuning accuracy itself with images beyond excluding outliers is irrelevant, as real-world performance will not increase anyway.
bilal rafiqueApril 4, 2017
Hi Paul,
I am getting this error, when i run the code of training fisher face classier
Traceback (most recent call last):
File “F:/Emotion Recognition/Ex3.py”, line 64, in
correct = run_recognizer()
File “F:/Emotion Recognition/Ex3.py”, line 45, in run_recognizer
fishface.train(training_data, np.asarray(training_labels))
error: ..\..\..\..\opencv\modules\core\src\alloc.cpp:52: error: (-4) Failed to allocate 495880004 bytes in function cv::OutOfMemoryError
Please I want to use your tutorial in my final year project 🙁 Please help me. i have just 10 days 🙁
regards,
bilal rafiqueApril 4, 2017
This error is resolved 🙂 by installing x64 Python and training one folder at a time. But when I did that and trained again, this error came up:
Traceback (most recent call last):
File “F:\Emotion Recognition\Ex3.py”, line 64, in
correct = run_recognizer()
File “F:\Emotion Recognition\Ex3.py”, line 41, in run_recognizer
training_data, training_labels, prediction_data, prediction_labels = make_sets()
File “F:\Emotion Recognition\Ex3.py”, line 28, in make_sets
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) #convert to grayscale
error: ..\..\..\..\opencv\modules\imgproc\src\color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cv::cvtColor
Paul van GentApril 8, 2017
It seems OpenCV doesn’t find colour channels. Are you loading greyscale images?
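One way to guard against this is to only convert when the image really has colour channels. A sketch, with the conversion function injected; in practice you would pass cv2.cvtColor and cv2.COLOR_BGR2GRAY:

```python
def to_gray(image, cvt_fn, gray_code):
    """Convert to grayscale only when the image has colour channels.

    A greyscale image has 2 dimensions (rows, cols); a colour image
    has 3 (rows, cols, channels), which is what cvtColor expects.
    """
    if getattr(image, "ndim", 3) == 2:
        return image  # already single-channel, nothing to convert
    return cvt_fn(image, gray_code)

# usage:  gray = to_gray(image, cv2.cvtColor, cv2.COLOR_BGR2GRAY)
```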
Paul van GentApril 8, 2017
It seems you either need more RAM, or need to install 64-bit python and openCV (likely the latter). The error explicitly states it runs out of memory
karthikApril 7, 2017
Sir, you explained that we can create our own dataset. I made a small change: I replaced all the images in S005 with my new set of Roger Federer images, but it shows the wrong emotion. Is this the correct way to create our own dataset, or is it something different, as you mentioned earlier?
Also, how am I going to give emotion values in source_emotion? For example, you specify the S005 emotion as 3.0000000e+00. How can I give that to my own set?
Please tell me the procedure to create my own dataset.
JohnApril 9, 2017
I have modified your algorithm a little bit.
I crop the face from the image according to this tutorial.
The crop depends on the eye positions and a desired offset (how much to cut from the face).
This way I eliminate some parts of the image that aren't useful.
I also want to capture some frames and do recognition in videos:
It seems that the capture must first be cleaned of noise and then rotated for the maximum recognition rate.
MunApril 16, 2017
Hey I m trying to run the first python snippet but got following error. Can u help me out!
Traceback (most recent call last):
File “C:\Users\mrunali\Desktop\Master\img_seq.py”, line 22, in
copyfile(sourcefile_neutral, dest_neut) #Copy file
File “C:\Python27\Lib\shutil.py”, line 83, in copyfile
with open(dst, ‘wb’) as fdst:
IOError: [Errno 2] No such file or directory: ‘sorted_set\\neutral\\5_001_00000001.png’
Paul van GentApril 21, 2017
Hi Mun. The error tells you what is wrong. Check where you store your files and what you link to.
Alexander B VanCampenApril 23, 2019
Care to explain further? I got the same thing, but I’ve pretty much followed the instructions accurately. Here is my directory
04/23/2019 02:23 PM .
04/23/2019 02:23 PM ..
04/22/2019 09:25 PM 1,024 .Organizaition.py.swp
04/22/2019 11:45 PM dataset
04/22/2019 10:13 PM 2,071 Extracting.py
01/25/2015 03:31 PM 676,709 haarcascade_frontalface_alt.xml
01/25/2015 03:31 PM 540,616 haarcascade_frontalface_alt2.xml
01/25/2015 03:31 PM 2,689,040 haarcascade_frontalface_alt_tree.xml
09/27/2017 12:28 AM 930,127 haarcascade_frontalface_default.xml
04/23/2019 02:19 PM 1,246 Organization.py
04/22/2019 05:17 PM sorted_set
04/22/2019 05:08 PM source_emotion
04/22/2019 05:17 PM source_images
palkabApril 30, 2019
The error states that no such file or directory exists. Double-check that the path is generated correctly. On Unix, use forward slashes in the path.
Let me know if there are further errors.
– Paul
chanduApril 18, 2017
Hi Paul, how did you download the CK dataset? I entered my information on that site, and it showed “Please wait for the mail to be delivered”. I waited but didn’t get it. Can you tell me exactly how you got the dataset?
karthikAugust 14, 2017
If you have downloaded it, could you please tell me how you did it?
johansonikApril 21, 2017
Hello!
My question is: do I have to train this model with my own dataset, or can I use your dataset and then mine just to recognize emotions?
Thanks in advance for reply 🙂
TRomeshApril 26, 2017
Hi Paul
I would like to know which algorithm you have used in this project. Does it involve machine learning techniques, neural networks, or just OpenCV’s built-in classification methods?
Paul van GentMay 4, 2017
Hi TRomesh. Sorry for the late reply, haven’t had much time for the site. This particular tutorial uses the FisherFace classifier from OpenCV, which can be considered a form of machine learning.
JoeMay 2, 2017
Hi Paul.
I am Joe.
I have some question.
This one is Machine Learning?
As far as I know, FisherFace uses the LDA (Linear Discriminant Analysis) algorithm, right?
Paul van GentMay 4, 2017
Hi Joe. I believe you are correct about the LDA.
However, Machine Learning is a broad field, among which algorithms with methods such as LDA or even simple linear regression may fall.
Dana MooreMay 3, 2017
Dear Paul,
Very nice piece of work. Excellent
Your script for sorting and organising the image dataset does not work on Ubuntu (and other *nix systems such as OSX) as listed.
* For one thing, the file path separators are correct for Windows systems only. One might consider using the Python “os.sep” for greater flexibility; alternatively, one might use os.path.join to combine the separate parts.
* For another, glob.glob() does not guarantee an in-order/sorted array; one might consider using os.listdir() and then sorting the result to get a correct ordering, where the array would start with *00000001.png and end with *00000nn.png.
* For another, the array indexing used to retrieve current_session is off (at least on *nix systems).
That said, it’s a terrific piece of work
I will attempt to paste in some code that worked for me just below, but no guarantees it formats correctly in a text box. Alternatively, I will be happy to email you a copy
========================= BEGIN =========================================
======================== END =========================================
Thank you again for your excellent tutorial
Paul van GentMay 8, 2017
Hi Dana,
Thanks for the comments! The code unfortunately doesn’t format well in the text box, but it did in the backend. I planned on updating everything to be unix compatible but kept putting it off due to most of my hours going into work, this is a great reminder to do it. Thanks again.
Nesthy VicenteJanuary 30, 2018
Good day,
can you send a copy of your code to me, too. I am using ubuntu and the images were not sorted properly when I used the code in this tutorial.
AnniMay 7, 2017
Good Day sir!
I followed the steps in this tutorial but got this error message in the first code snippet:
Traceback (most recent call last):
File “F:\Future Projects\files\emotion1.py”, line 12, in <module>
file = open(files, ‘r’)
IOError: [Errno 13] Permission denied: ‘source_emotion\\Emotion\\S005\\001’
How do I solve this, sir?
I’m still new to Python and I’m using Windows 10.
I hope you can help me with this 🙂
I badly need it. Thank you.
HetanshMay 8, 2017
After running the initial dataset-organising Python script I get no files in the sorted_set folder. Can anyone help me with this?
Paul van GentMay 8, 2017
Troubleshoot what’s going wrong.
– Are the files properly placed in correctly named folders?
– Are the variables “participants”, “sessions” and “files” populated?
– Are the source and destination paths generated correctly?
Jacob JelenMay 16, 2017
Hi Paul, Thanks for this tutorial! Super helpful, but I’m having some issues…
When I run the training code I get the following error:
training fisher face classifier
size of training set is: 494 images
predicting classification set
Traceback (most recent call last):
File “trainModel.py”, line 64, in <module>
correct = run_recognizer()
File “trainModel.py”, line 52, in run_recognizer
pred, conf = fishface.predict(image)
TypeError: ‘int’ object is not iterable
I have tried changing the line 51 from
for image in prediction_data:
to
for image in range(len(prediction_data)-1):
That might have solved one of the issues; however, I’m still getting errors. It is complaining about the image size not being 350×350 = 122500, although all the images in my dataset folder are the correct size. And my user name is not ‘jenkins’ as it says in /Users/jenkins/miniconda… not sure where that comes from or how to replace it with the correct path to fisher_faces.cpp.
size of training set is: 494 images
predicting classification set
OpenCV Error: Bad argument (Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 122500 elements, but got 4.) in predict, file /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486587097465/work/opencv-3.1.0/build/opencv_contrib/modules/face/src/fisher_faces.cpp, line 132
Traceback (most recent call last):
File “trainModel.py”, line 64, in <module>
correct = run_recognizer()
File “trainModel.py”, line 52, in run_recognizer
pred, conf = fishface.predict(image)
cv2.error: /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486587097465/work/opencv-3.1.0/build/opencv_contrib/modules/face/src/fisher_faces.cpp:132: error: (-5) Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 122500 elements, but got 4. in function predict
Thanks for your help
Paul van GentJune 12, 2017
Hi Jacob. It seems the images are not loaded and/or stored correctly in prediction_data, and therefore it cannot iterate over it. You can step over this with your proposed change, but then it fails later because there was no image in the first place. Verify that image data is stored there, and if not, where it goes wrong.
-Paul
Dixit ThakurJune 15, 2017
Hi Paul,
I am facing the same issue.
Traceback (most recent call last):
File “trainData.py”, line 67, in <module>
correct = run_recognizer()
File “trainData.py”, line 54, in run_recognizer
pred, conf = fishface.predict(image)
TypeError: ‘int’ object is not iterable
fishface.predict(image); the following is the image data:
image data : [[ 80 82 83 …, 108 108 109]
[ 82 83 83 …, 110 111 111]
[ 84 84 83 …, 111 112 112]
…,
[ 24 25 24 …, 14 17 19]
[ 25 26 25 …, 14 17 19]
[ 24 25 25 …, 13 16 17]]
What could be the possible reason for the failure?
Paul van GentJune 19, 2017
Hi Dixit. I cannot reproduce the error, that makes it a bit difficult to debug. Could you send me your code at info@paulvangent.com? I’ll see if that works over here, and then we know whether it’s a problem with the code or yours setup.
-Paul
Taghuo FongueMay 21, 2017
Extract the dataset and put all folders containing the txt files (S005, S010, etc.) in a folder called “source_emotion” //
Hi Paul,
Should I extract the Emotions, FACS and Landmarks folders into the same “source_emotions” folder, or does only the Emotions folder have to be extracted into “source_emotions”?
Which dataset should I extract exactly? Please let me know.
Taghuo FongueMay 21, 2017
Could you please send me a screenshot of how you have arranged your folders?
Paul van GentJune 12, 2017
Hi Taghuo. You extract the emotion textfiles into this folder. So you get:
source_emotion\\S005\\001\\S005_001_00000011_emotion.txt
source_emotion\\S010\\…
etc
hanaaJune 13, 2017
Please, can you help me? I would like to implement emotion recognition using the Raspberry Pi’s camera module, specifically recognizing anger only. I have some simple face detection going on using OpenCV and Python 2.7, but am having a hard time making the jump to emotion recognition. Initial searches yield results involving topics such as optical flow, affective computing, etc., which has so far been intimidating and hard to understand. Can you show me code with the FisherFace classifier?
Paul van GentJune 13, 2017
For the simplest approach I would recommend looking at the section “Creating the training and classification set”, all the code you need is there. You can also take a look at the Emotion-Aware music player tutorial here, that might clarify some things.
Vadim PeretokinJune 25, 2017
It doesn’t seem that the link to download CK+ works anymore?
Paul van GentJune 30, 2017
It seems it has been taken offline, yes. I’ll update the text.
BharathJuly 3, 2017
Could you please provide the sorted_set folder as well?
I’m not able to prepare that set from the code you provided.
BharathJuly 3, 2017
Anyone who has prepared the dataset with a separate folder for each emotion, please reply.
Paul van GentJuly 4, 2017
Hi Bharath,
The sorted set folder simply contains folders which contain images of faces with emotions, like this:
sorted set
|
——-anger
|
——-contempt
|
——-disgust
|
——-etc
Where exactly are you getting stuck? Maybe I can help.
-Paul
Abhi khandelwalJuly 3, 2017
Hi Paul
Could you please tell me which file I should run first? And in your code there is no command for opening the webcam [like #VideoCapture(0)], so how will it detect my emotion?
Paul van GentJuly 4, 2017
Hi Abhi,
If you follow the tutorial, everything should go in the right order. This tutorial is about building a model using OpenCV tools. There’s another tutorial using a more advanced method here. You can take a peek there to see how to access your webcam, or find one of the million pieces of boilerplate code for this online!
Good luck.
-Paul
srikanthJuly 8, 2017
Hii Mr Paul,
What is your advice for a beginner who wants to learn all this face recognition stuff using OpenCV?
I want to learn it completely.
Paul van GentJuly 8, 2017
Hi Srikanth. I would recommend doing a few Python courses on Coursera before delving into OpenCV. After this, the OpenCV docs should provide you with a good basis. Do a few projects without tutorials. You’ll learn it quickly.
ValensJuly 8, 2017
Hi Paul
We are looking into running an emotion study on our media. However, can we run this algorithm without having to snap or store pictures, on the fly, to understand and predict viewers’ experience, especially while watching a certain movie or program?
Paul van GentJuly 10, 2017
This would be possible, but with Python real-time implementations might be too slow. One option is to snap a picture every second and classify that. Even on a slow computer the algorithm will be more than fast enough for this.
However, be aware that results are not likely to be accurate unless the archetypical emotions the classifier is trained on are displayed. Also be aware that you cannot use the CK+ dataset for any purpose other than academic, so if you want to do this commercially you need explicit permission from the original author of the dataset.
ValensAugust 13, 2017
Hi Paul, thank you for your message. Yes, I think snapping a picture and then doing classification would be ideal and practical. As for using the CK+ dataset commercially, we will surely get permission first. By the way, are you the author yourself?
Paul van GentAugust 13, 2017
I’m not the author of the CK+ set, only of the stuff on this website.
I would recommend you get permission to use it commercially first. It would be a shame if you put a lot of work into it and then didn’t get permission.
Michael rusevJuly 9, 2017
Hi Paul, nice tutorial. I was told to do something like this as a school project: after detecting an emotion, the system should be able to play an audio file for the user. If it detects a happy face it should play a certain audio file, and for a sad face another. Sorry, my English isn’t that good. I want to ask if this can be implemented, and how I can do something like that.
Paul van GentJuly 9, 2017
Hi Michael. This shouldn’t be hard at all. Look at the module “PyAudio“, or the VLC wrappers if you’d rather use that framework.
You could even use os.startfile(), however this will open the file in the default media player (which causes it to pop up), so this is not a very nice solution.
Vincent van HeesJuly 10, 2017
Many thanks for this blog post. It seems the data is back online again, so you can change the text back to how it was 🙂
Paul van GentJuly 10, 2017
Alright, thanks for notifying me!
Vincent van HeesJuly 10, 2017
In the section “Extracting faces”, the sentence “The last step is to clean up the “neutral” folder. ”
Could you please make this sentence more explicit:
– Is this a description of what the Python code did and is no further action required from the reader?
– Is this an instruction to the reader to delete that folder manually?
– Is this an instruction to the reader to delete the files in that folder manually?
thanks
Paul van GentJuly 10, 2017
Thanks. I have updated the text. You need to do this manually.
JITESHJuly 12, 2017
Hi Paul,
I would like to integrate the same system in C#. Can you please tell me how I can integrate the CK+ model in C#? If you have any C# sample for this, kindly update me at jitesh.facebook@gmail.com please.
Thanks,
Jitesh
Paul van GentJuly 14, 2017
Emgu CV is a .NET wrapper for OpenCV. I would look into that. Porting the code should be easy after that :).
-Paul
Zlatan RobbinsonJuly 24, 2017
Hello Paul, brilliant tutorial I must say. Just one question: is it possible to design a system that can speak to the individual after detecting an emotion? When it detects a happy face it should say something like “You are happy, keep it up”, and when it detects a sad face it should say something like “You are sad, cheer up”.
Just something of that nature; it would be fun to see the system speak to the individual after it detects an emotion. Please, can this be implemented? I am planning on doing this as my final year project.
Paul van GentJuly 24, 2017
Hi Zlatan. This should be quite easy to implement. To make your life easy you need to look at a package that does TTS (text to speech), for example this one.
Then it is just a matter of:
– Detecting emotion
– Determining label of emotion
– Have the TTS engine say something.
Good luck 🙂
– Paul
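As an illustration of the three steps above, here is a hypothetical sketch. The phrase table and function names are my own, and pyttsx3 is just one of several offline TTS packages:

```python
# Map a detected emotion label to a spoken phrase, then hand it to a TTS engine.
PHRASES = {
    "happy": "You are happy, keep it up!",
    "sadness": "You are sad, cheer up!",
}

def phrase_for(emotion):
    """Look up what to say for a detected emotion label ('' if nothing fits)."""
    return PHRASES.get(emotion, "")

def speak(emotion):
    import pyttsx3  # imported lazily, so the lookup works even without TTS installed
    engine = pyttsx3.init()
    engine.say(phrase_for(emotion))
    engine.runAndWait()
```

Call `speak(label)` with the label returned by the classifier; extend `PHRASES` with one entry per emotion you care about.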
Zlatan RobbinsonAugust 6, 2017
Please, Paul, how can I contact you personally, just in case I have
something else to discuss?
SalmaAugust 3, 2017
Hello, the link to the CK database is broken and I cannot find it on the internet. Is there a working link or another alternative for the database?
Paul van GentAugust 7, 2017
As far as I’m aware this is the only link. Sharing the database without the author’s consent is prohibited, so I’m afraid you need to look for another dataset.
Zlatan RobbinsonAugust 6, 2017
Thanks Paul you saved my day. Keep up the good work
PKeenanAugust 13, 2017
Mistake in line 29 of the second code snippet: facefeatures == face2.
Paul van GentAugust 13, 2017
Thanks for catching that. I’ve updated it.
Rodrigo MoraesAugust 21, 2017
I can’t access the dataset used in this article(). Do you know why?
Paul van GentAugust 21, 2017
Its availability is intermittent, and access is not always granted. You can look at other available datasets or create your own :).
ThariAugust 22, 2017
I’m getting this error when I run the “Creating the training and classification set” code.
My system is Windows 10, Visual Studio 2017, Python 2.7 (32-bit), 8 GB RAM (there is more than 3 GB of free memory when running the code).
training fisher face classifier
size of training set is: 1612 images
OpenCV Error: Insufficient memory (Failed to allocate 1579760004 bytes) in cv::OutOfMemoryError, file ..\..\..\..\opencv\modules\core\src\alloc.cpp, line 52
Please help me resolve this.
Paul van GentAugust 22, 2017
Hi Thari. You have the 32-bit Python version, which means each process can address at most about 4 GB of memory (and on Windows often only 2 GB), no matter how much RAM is installed.
Consider installing the 64-bit python
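A quick way to check which interpreter you are actually running, on any platform:

```python
# A pointer ("P") is 4 bytes on a 32-bit Python and 8 bytes on a 64-bit one.
import struct

bits = struct.calcsize("P") * 8
print("Running a %d-bit Python" % bits)
```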
ThariAugust 26, 2017
Thank you for your reply.
When I run this with Python 2.7 (64-bit) I get this error:
Traceback (most recent call last):
File “C:\Users\Thari\documents\visual studio 2017\Projects\PythonApplication1\PythonApplication1\PythonApplication1.py”, line 7, in <module>
fishface = cv2.createFisherFaceRecognizer() #Initialize fisher face classifier
AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
Press any key to continue . . .
Please help me resolve this.
karthikeyanAugust 24, 2017
Hello sir, I am doing a final year project on this module, but I am using a webcam to detect emotions in real time. Could you provide me with the source code for the complete module for emotion detection using a webcam? Please, my mail id is: learnerkarthik@gmail.com
Paul van GentAugust 25, 2017
Hi Karthikeyan. All you need is in the texts and the docs of the opencv module (). Good luck!
Adarsh S NMarch 3, 2018
Sir, I am a newbie in Python. Can you please elaborate on the procedure?
Paul van GentMarch 5, 2018
A great place to start is the general Python tutorial.
– Paul
ThamAugust 25, 2017
Hi Paul,
createFisherFaceRecognizer(num_components, threshold);
Which num_components and threshold did you use for your project?
Paul van GentAugust 25, 2017
Hi Tham. In the code they are not specified, so they will revert to defaults. Also see
sadafAugust 25, 2017
How can I learn Python OpenCV?
Paul van GentAugust 25, 2017
I would start with the docs. It also helps to think of a simple project you want to program, and build it from the ground up with the help of the docs. This will help you get familiar with the structure of the module.
If you’re not very comfortable in Python I would suggest you do a few courses on this first. This will really help speed the rest up.
SadafAugust 25, 2017
Ok thanks:)
karthikeyanAugust 25, 2017
I am getting an error like “AttributeError: no module named createFisherFaceRecognizer”; I just copy-pasted your code!
Paul van GentAugust 25, 2017
Hi Karthikeyan. I’m sure your final year project is not about copy pasting code. Please read the tutorial as well, that’s what it’s for.
KarthikeyanSeptember 9, 2017
In the emotion_dtect.py file I’m getting a “TypeError: ‘int’ object is not iterable” on the line: pred, conf = fishface.predict(image).
How do I resolve this, sir?
Nesthy VicenteFebruary 3, 2018
Hello Karthikeyan,
Have you resolved this one? I got the same error.
Nesthy VicenteFebruary 3, 2018
nvm. It’s ok now.
salmaSeptember 7, 2017
The CK+ set has more than 3 folders full of .txt files; which ones should I use in the “source_emotion” folder?
I’ve been trying for 10 days and have no result for the emotion recognition. I would appreciate a little help, thank you.
ReimaOctober 10, 2017
Hi Paul,
I just wanted to let you know that I found somebody presenting your tutorial codes as his own handiwork. No citation or links to this page. So that is a clear license violation on his part. I’d say, it is a sign that you’ve made a great tutorial since the copycat pretty much copy-pasted your code snippets and only made minor value adjustments and changed some comment texts:
Anyway, thanks for a great tutorial,
-Reima
Paul van GentOctober 10, 2017
Thanks so much Reima, also for notifying me. Making original content is hard and takes time. Unfortunately, due to the nature of the internet, there will always be freeloaders who benefit from others’ work. I’ve contacted the author; let’s see what happens.
JamilOctober 14, 2017
Hello Sir I’m beginner programmer and learner, but have spirit to do anything if someone give me proper guide will you accept as your student ?
Paul van GentOctober 14, 2017
Hi Jamil. If you have questions you can always send them to info@paulvangent.com. I cannot guarantee I will always respond quickly, though.
There are a lot of great Python tutorials and classes online. I can surely recommend “Python for Everybody” on
AdityaOctober 29, 2017
Hi, I’m just getting started with this and I have a question: when you say “Extract the dataset and put all folders containing the txt files (S005, S010, etc.) in a folder called “source_emotion””, which folders containing the txt files do you mean?
I’m confused whether it’s all the contents inside “Emotion_labels/Emotion/” or “FACS_Labels/FACS”.
Please help me out. Thanks!
AdityaOctober 29, 2017
I have currently saved it as “source_emotion/S005/001/S005_001_00000011_emotion.txt”,
“source_emotion/S010/001/” and “source_emotion/S010/001/S010_002_00000014_emotion.txt”, and so on. Is that right?
Paul van GentOctober 30, 2017
Hi Aditya, that indeed looks right. If the code fails you can take a look at either the code or the traceback of the error to see where the mismatch happens.
-Paul
AdityaOctober 30, 2017
Thanks! And I really respect your quick reply, Paul! 🙂
HuzailNovember 1, 2017
Hello sir, I’m having a memory problem while training.
Here is the screenshot of the error; how do I deal with it?
Paul van GentNovember 1, 2017
Hi Huzail. You’re running out of memory. Likely you are using 32-bit Python, a 32-bit IDE running the Python environment, or a 32-bit system. Make sure it’s all 64-bit, try to free up memory, or use fewer images to train the model.
HirenNovember 1, 2017
Hey Paul, I did not find any .txt files (S005–S010) when extracting the database; I only found the images in folders (S010 to S130). So what should I store in source_emotion?
Paul van GentNovember 1, 2017
Hi Hiren. In the server where you downloaded the data from, there is a separate zip file containing the emotion labels. You can download this one and extract into source_emotion
-Paul
HirenNovember 13, 2017
Thanks for the reply, Paul. I have completed the system and it runs successfully, but sometimes the system only stores one image instead of 15 (as per the code). So what should I do?
KarthikNovember 6, 2017
Hey Paul, Thanks for this easy to understand tutorials.
And for anyone on OpenCV 3.3 who gets “createFisherFaceRecognizer not found”: install the OpenCV contrib package and then use cv2.face.FisherFaceRecognizer_create() instead of cv2.createFisherFaceRecognizer().
RoshanJanuary 24, 2018
Thanks a lot, Karthik. It worked perfectly.
DominiqueNovember 12, 2017
Hi Paul, first of all thank you for the tutorial, but where am I supposed to get the txt files for the source_emotion folder if the CK+ dataset link is broken?
Paul van GentNovember 13, 2017
Hi Dominique. The link functions intermittently. Either try again later, or try using another dataset.
-Paul
ramyavrNovember 30, 2017
Hi Paul
I am getting this error
Traceback (most recent call last):
File “extract.py”, line 49, in <module>
filenumber += 1 #Increment image number
NameError: name ‘filenumber’ is not defined
Could you please help me solve this?
Paul van GentDecember 4, 2017
Hi Ramyavr. The variable “filenumber” is not defined prior to you using it, as the error states. Check that you initialize the variable correctly, and that the name is spelled correctly (also see the code section in “Extracting Faces”, line 14).
SumDecember 4, 2017
Hi, I have been trying to run the code but am stuck at the very first step, where I am unable to get the right path for the txt files.
** for sessions in glob.glob(“%s//*” % participant): # store list of sessions for current participant
for files in glob.glob(“%s//*” % sessions): ***
gives me a permission-denied error even after I have given all the permissions.
Please help
Paul van GentDecember 4, 2017
Hi Sum. What OS are you using? Try running the application with elevated privileges (“sudo python your_script.py” on Linux/MacOS, or run the command prompt as an administrator on Windows).
Please check that the paths you reference exist and are spelled correctly in the code, sometimes this can give strange errors.
– Paul
SumDecember 4, 2017
I am working on Windows and have already tried running as administrator.
I’m still stuck with this error:
file = open(files, ‘r’)
IOError: [Errno 13] Permission denied: ‘E:data/source_emotions\\Emotion\\S005’
this is my code section throwing the error:
****
for sessions in glob.glob(“%s/*” % participant): # store list of sessions for current participant
for files in glob.glob(“%s/*” % sessions):
current_session = files[20:-30]
file = open(files, ‘r’)
****
MADecember 5, 2017
Hi, we are using Python 3. We ran your code but didn’t reach more than 40% accuracy. The classifier seems to work well. How do you get 80 percent?
MA
Paul van GentDecember 5, 2017
Hi MA. There are two likely things that I can think of. The first is glob.glob might sort detected files differently. Please verify that all images in a given emotion folder are actually of that emotion. The second possibility is you’re also using a different OpenCV version. We’re basically abusing a face recognition algorithm to detect emotions, which has been changed in later versions. Please take a look at the other emotion tutorial on here. It’s a bit more technical but also the more ‘proper’ way of going about this task.
– Paul
NinjaTunaDecember 8, 2017
Hello sir, may I ask what the algorithm used in this tutorial is called?
Paul van GentDecember 8, 2017
Hi NinjaTuna. Here I (ab)use a facerecognition algorithm called a FisherFace algorithm (see this for more info on FisherFaces). You can find more info in the OpenCV documentation.
– Paul
NinjaTunaDecember 11, 2017
Thank you very much, sir. We have a project at our university that must be able to detect emotions on side-view faces, and we had no idea where to start, so we would like to cite your work. Thank you very much 😀
Paul van GentDecember 11, 2017
Thanks NinjaTuna. You can cite my work as:
van Gent, P. (2016). Emotion Recognition With Python, OpenCV and a Face Dataset. A tech blog about fun things with Python and embedded electronics. Retrieved from:
If you need any further help with the project, let me know.
– Paul
HDDecember 9, 2017
Hey Paul, the first time I ran the program I was able to store more than one image for each emotion and got accurate results, but now it only stores one image for each emotion and I’m not getting accurate results. What should I do? Please reply as soon as possible.
Paul van GentDecember 11, 2017
Hi HD. Could you elaborate a bit further? I’m not sure what the issue is.
– Paul
sarraDecember 10, 2017
Hi sir, I had this error and I could not solve it:
Traceback (most recent call last):
File “D:\facemoji-master\prepare_model.py”, line 72, in <module>
correct = run_recognizer()
File “D:\facemoji-master\prepare_model.py”, line 60, in run_recognizer
fishface.train(training_data, np.asarray(training_labels))
error: C:\projects\opencv-python\opencv_contrib\modules\face\src\fisher_faces.cpp:67: error: (-5) Empty training data was given. You’ll need more than one sample to learn a model. in function cv::face::Fisherfaces::train
Can you help me?
Paul van GentDecember 11, 2017
Hi Sarra. The error says it all: “error: (-5) Empty training data was given. You’ll need more than one sample to learn a model.“. It seems the data is not loading correctly. Check whether you are referencing the correct paths, whether you have permission to read from the folders, and whether you store the data correctly in the array variable in python.
– Paul
AngiDecember 14, 2017
Hi paul
I am getting this error
sourcefile_emotion = glob.glob(“source_images\\%s\\%s\\*” %(part, current_session))[-1] # get path for last image in sequence, which contain the emotion.
the image is in source_images\S010\001 and my python file is in the same folder as source_images.
Can you help me?
Paul van GentDecember 15, 2017
Hi Angi. Could you post your error message? You seem to have accidentally pasted a line of code rather than the error message.
– Paul
Dhruvi PatelDecember 20, 2017
I have a problem while copying files…
Permission denied: ‘images\\S005\\001’
How can I solve this?
Paul van GentDecember 24, 2017
The error is explicit: make sure you have permission to write to the target folder.
– Paul
DPDecember 20, 2017
Traceback (most recent call last):
File “C:/Users/Dhruvi/Desktop/Projects/master/img_seq.py”, line 34, in <module>
imageWithEmotionEtraction()
File “C:/Users/Dhruvi/Desktop/Projects/master/img_seq.py”, line 29, in imageWithEmotionEtraction
shutil.copy(sourcefile_neutral, dest_neut) # Copy file
File “C:\Python27\lib\shutil.py”, line 119, in copy
copyfile(src, dst)
File “C:\Python27\lib\shutil.py”, line 82, in copyfile
with open(src, ‘rb’) as fsrc:
IOError: [Errno 13] Permission denied: ‘images\\S005\\001’
Please help me solve this issue!
Paul van GentDecember 24, 2017
The error is explicit: make sure you have permission to read from or write to the target folder.
– Paul
KARTHIKEYANDecember 24, 2017
Do you have the same source code for Windows? As I am a beginner in Python, I find it a bit difficult to port the code.
Paul van GentDecember 27, 2017
Python is cross-platform, you should be able to follow the tutorial on windows (it was written on windows as well).
parth thakarDecember 31, 2017
Hey Paul, your tutorial is very useful to me, but I’m stuck on several issues:
1) What should I put in “dataset” in classifier.py?
2) If I put “sorted_set” in place of “dataset”, it gives me this error:
fishface.train(training_data, np.asarray(training_labels))
cv2.error: ..\..\..\..\opencv\modules\contrib\src\facerec.cpp:455: error: (-210) In the Fisherfaces method all input samples (training images) must be of equal size! Expected 313600 pixel
Please help me with what to do. I will be thankful to you.
Paul van GentJanuary 4, 2018
Hi Parth,
1. I’m not sure what you mean, in the code ‘dataset’ refers to the location of all image files.
2. The error is explicit: make sure you resize all images when loading. In the tutorial I slice the faces from the image and save them at a standardised size.
Good luck.
– Paul
parth thakarJanuary 7, 2018
Yes, Paul.
Glad you answered.
Would you please give me some advice on how to resize my images to get rid of that error?
I have tried several resizing methods but am still getting the same error.
Luigi BerducciJanuary 10, 2018
Very interesting; I’m currently working on emotion detection and I’m testing different classifiers.
Using your code with the same dataset (all emotions) on my laptop gives me a correctness of 25%. How are such different results possible? Is the reason that the dataset is too small for a complete classification?
Thanks for your time and your work!
Paul van GentJanuary 25, 2018
Hi Luigi. In the past others have reported similar problems. In most cases the issue was because “glob.glob” sorts the detected files differently on Linux than on Windows. On *nix systems you need to make sure to first sort the returned list before you take the last element (the final image containing the emotion).
Another issue some have had is that newer versions of OpenCV use a different algorithm that works less well in this context. Please take a look at the facial landmarks tutorial. Not only is this a more proper way to detect emotions (albeit a bit more difficult), it will bypass OpenCV entirely.
– Paul
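A small sketch of that *nix ordering fix (the helper name is my own): sort the glob results explicitly before taking the last, peak-emotion frame of a session.

```python
# glob.glob makes no ordering promise on *nix, so sort before indexing [-1].
import glob
import os

def last_frame(session_dir):
    """Return the path of the last image in a CK+ session folder, or None."""
    files = sorted(glob.glob(os.path.join(session_dir, "*")))
    return files[-1] if files else None
```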
AanyaJanuary 14, 2018
Hi, please help me. I am confused at the first step.
There are three main folders, named Emotion_Labels, FACS_Labels and Landmarks, in the downloaded dataset. Each of those folders consists of sub-folders, and in the sub-folders there are text files.
Am I supposed to place all three main folders in the source_emotions folder (which you mentioned)?
I am really confused by the first code snippet; please explain it in detail. What is the purpose of that step? Please help me with a detailed answer.
Paul van GentJanuary 25, 2018
Hi Aanya. You need to put the contents of the Emotion_Labels folder into “source_emotions”. This folder contains all labels corresponding to the image set. Without it, the classifier has no idea which image represents which emotion.
– Paul
poonehJanuary 15, 2018
Hi dear Paul 🙂 I’m a student and I want to know more about the FisherFace, EigenFace and LBPH algorithms, so I’ll be thankful if you can offer me a good and simple reference 🙂
Thanks a lot 🙂
Paul van GentJanuary 25, 2018
Hi Pooneh! You can find a good overview paper here:
Good luck!
– Paul
poonehFebruary 4, 2018
thanks alot:)
hazem ben ammarJanuary 15, 2018
Hi Paul, thank you for this amazing tutorial, but when I run the first code under Raspbian with OpenCV 3.3.0 and Python 3 I get this error:
Traceback (most recent call last):
File “test.py”, line 15, in
sourcefile_emotion = glob.glob(“/home/pi/Desktop/code/image_source/%s/%s/*.*” %(part, current_session))[-1] #get path for last image in sequence, which contains the emotion
IndexError: list index out of range
i will be so glad if you answer me
Paul van GentJanuary 25, 2018
Hi Hazem. The error happens when trying to get the last element from the list generated by glob.glob. This implies the list is empty. Please make sure the path is correct, including the ‘part’ and ‘current_session’ items.
– Paul
AliJanuary 21, 2018
Your logical “important” part was nice :))
SardorJanuary 23, 2018
Where is the Dataset.zip file? How can I make it, like an extra dataset file?
Paul van GentJanuary 25, 2018
Hi Sardor,
There’s a link in the text to the small google images dataset. It’s this one
– Paul
Nesthy VicenteJanuary 25, 2018
Good day Paul,
I tried the python snippet for sorting the dataset but my sorted_set folder still contains nothing after running the code. What could be the problem?
I am using opencv3 and python 2.7.12 in ubuntu 16.04.
Paul van GentJanuary 25, 2018
Hi Nesthy. The code is written on windows, where the path is separated using “\\”, on Ubuntu you use “/”. Change this in the code and I think this will solve your problem!
– Paul
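[Editor's note: one way to sidestep the separator issue entirely is os.path.join, which picks the right separator for whatever OS the script runs on. The folder names below are just examples from this tutorial's layout.]

```python
import os

def emotion_dir(base, emotion):
    """Build a platform-correct path to one emotion folder,
    e.g. sorted_set/happy on Linux, sorted_set\\happy on Windows."""
    return os.path.join(base, "sorted_set", emotion)
```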
Nesthy VicenteJanuary 26, 2018
Didn’t see that. It worked now. Thanks! There’s still a problem though. There are misplaced pictures (e.g. sad picture in happy folder). And the neutral folder contains pictures with different emotions and almost no neutral emotion. Is there something I can do to make the sorting more accurate?
Parthesh SoniAugust 4, 2018
I had the same problem. Thanks a lot for helping!!
RoshanJanuary 25, 2018
Hey Paul,
Thanks for providing these two great tutorials for us. They really help me. I am a student and I need this for my project. Can you please tell me how to implement this code to get a percentage for each emotion, like happy = 0.02145362, sad = 0.001523652, neutral = 0.9652321, etc.? I need the output from analyzing real webcam frames. Please help.
Paul van GentJanuary 25, 2018
Hi Roshan. Take a look at the tutorial that uses Facial Landmarks, the answer is in there. When creating the classification object you need to pass it a "probability=True" flag. After this, if you use its "predict()" function, you’ll get back an array of shape (m,c), m being the number of passed images to classify, c being the total number of classes.
– Paul
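[Editor's note: a hedged sketch of what Paul describes, using scikit-learn's SVC (the classifier in the facial-landmarks tutorial). The feature data here is random placeholder input, not real landmarks, and `predict_proba` is sklearn's name for the probability output.]

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.rand(21, 10)              # 21 samples, 10 placeholder features
y = np.repeat([0, 1, 2], 7)       # 3 emotion classes, 7 samples each

# probability=True enables per-class probability estimates
clf = SVC(kernel="linear", probability=True)
clf.fit(X, y)

probs = clf.predict_proba(X[:2])  # shape (m, c): m images, c classes
```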
RoshanJanuary 26, 2018
Thank You Sir, I will try it.
KarthiFebruary 12, 2018
Has anyone here done real-time emotion detection using a webcam? If so, please share how.
Paul van GentFebruary 14, 2018
Hi Karthi. You only need to adapt the code a little bit so that it grabs a frame from the webcam, classifies it and stores the result, and repeats the process.
For stability I recommend pooling results over a few seconds and taking the average prediction. Otherwise you will get a lot of (incorrect) result switching through prediction noise.
– Paul
KarthiFebruary 23, 2018
When I tried to grab the emotions from the webcam it showed me an error like “no module named fisherface”. Does OpenCV on Windows support the Fisherface classifier now? It was supported earlier!
Kowsi1997February 13, 2018
I’m getting the following error. Please help me fix it.
File “data_org.py”, line 16, in
sourcefile_emotion = glob.glob(“F:\proj\emotion\Emotion-Recognition-master\source_images\\%s\\%s\\” %(part, current_session))[-1] #get path for last image in sequence, which contains the emotion
IndexError: list index out of range
Paul van GentFebruary 14, 2018
Hi Kowsi. This error means glob returns empty lists (there is no last element, which only happens if the list is empty). Make sure the paths are correct, the string substitutions (%s) create correct paths, etc.
– Paul
Kowsi1997February 15, 2018
Thank you Paul, I fixed that error. But now I’m getting the following error:
In File “data_org.py”, line 14, in
emotion = int(float(file.readline())) #emotions are encoded as a float, readline as float, then convert to integer.
ValueError: invalid literal for float(): 1.0000000e+00 0.0000000e+00
getsurrealFebruary 14, 2018
So if you didn’t try to label an exact emotion and went with more of a positive, negative, or neutral reading, the accuracy could be much higher.
Paul van GentFebruary 19, 2018
This is definitely a possibility, especially since a mix of emotions is often present in in-situ data.
However, then we also need to annotate the dataset differently from what it is now, to allow for the model to fit multi-label outputs. That labeling is a lot of work that needs to be done first..
– Paul
SardorFebruary 17, 2018
Hello, my dear Paul. I am working on a facial expression project for my master’s degree. Can you explain to me what kind of method I can use if I make the project using your dataset and your code? Please help me with this.
SardorFebruary 17, 2018
And I have some question for you, Please Can you give me your email or contact me .my email is mamarasulovsardor@gmail.com
VedantFebruary 19, 2018
Hi Paul ,
Thanks for your help
But I am getting an error
C:\bld\opencv_1506447021968\work\opencv-3.3.0\opencv_contrib-3.3.0\modules\face\src\fisher_faces.cpp:67: error: (-5) Empty training data was given. You’ll need more than one sample to learn a model. in function cv::face::Fisherfaces::train
What should I do to clear that error
Paul van GentFebruary 19, 2018
It seems something is going wrong when generating the dataset. The error mentions it is empty! Make sure you check for all steps whether they work as intended.
In a situation like this I find it helpful to just print (parts of) the output for every step, just to verify where the information flow “stops”.
– Paul
Nirajan ThapaFebruary 20, 2018
Hi Paul,
Thanks a lot for putting together such a beautiful work. Moving on, I had a problem with the very first code snippet. I am using Ubuntu 16.04.3 (64-bit), Anaconda3, PyCharm, Python 2.7.12 and OpenCV 2.4.9. I have already extracted the zip folder of the extended Cohn-Kanade dataset and arranged the 3 folders in the conda environment as you said. I also changed all the separators “\\” into “/” for Linux compatibility. But as I run it, the following error occurs:
file = open(files, ‘r’)
IOError: [Errno 21] Is a directory: ‘source_emotion/Emotion/S005/001’
So, I changed the code:
for files in glob.glob(“%s/*” %sessions):
into
for files in glob.glob(“%s/* .txt” %sessions):
Now, the error doesn’t occur. But, there are no files in the “sorted_set” folder, it just contains only the folders like happy, anger,etc that I created earlier manually even though there are images in the “source_images” folder and txt files in the “source_emotion” folder. I had already tried with the latest versions of opencv and python, but no luck there too. Please help, I couldn’t figure out what is wrong.
-Nirajan
Paul van GentFebruary 21, 2018
Hi Nirajan. I suggest you print out the values of the variables included. For example, what’s the content of ‘files’ after your change? If you read the opened file object, what content does the Python interpreter find (just print() file.read() )? These kinds of steps will help you trace the problem.
Let me know what your findings are.
– Paul
SardorFebruary 25, 2018
In this project, have you also used TensorFlow?
SardorFebruary 25, 2018
In this project there is no deep learning part? I mean TensorFlow or similar? How does it work, and what kind of method is used?
Adarsh S NMarch 2, 2018
Really great code.
Is there a way to alter the code so that it can be used in real-time through webcam.
Paul van GentMarch 5, 2018
Adapting the code to do real-time detection on a webcam isn’t too difficult. You need to grab a frame from the webcam, then you can run it through the classifier like just like a regular image. With Python there may be performance limits for this, so that you can only classify a few times a second. I would aggregate prediction results over several seconds at least to get rid of some classification noise.
– Paul
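[Editor's note: the pooling idea can be sketched without any camera code — collect the per-frame labels over a window and keep the majority vote. Grabbing frames themselves is just cv2.VideoCapture(0).read() in a loop; the function name below is a hypothetical helper.]

```python
from collections import Counter

def pooled_prediction(frame_labels):
    """Majority vote over a window of per-frame predictions,
    smoothing out momentary misclassifications."""
    return Counter(frame_labels).most_common(1)[0][0]

# One noisy frame out of five does not flip the result:
# pooled_prediction(["happy", "happy", "sad", "happy", "happy"])
```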
Pyae Phyo PaingMarch 6, 2018
Hello Sir, my name is Pyae Phyo Paing. I am from Myanmar. I am working on an emotion detection project. Your code and tutorial are very helpful to me. But I am now facing a problem: I can’t download the CK and CK+ dataset. What can I do? Please help me if you can. Thank you.
Paul van GentMarch 7, 2018
I recommend you take a look at another publicly available dataset. You can find other data sets for example here:
DikshantMarch 12, 2018
Hey Paul, I’m working on emotion classification using videos (from the CK+ dataset, considering all frames). I’m having trouble loading it; can you please help me with this?
For example, how do I calculate an adjacency matrix for this data with frames?
Paul van GentMarch 21, 2018
Hi Dikshant. This type of classification is beyond my expertise as I’ve not done this before. I’m sure you can find info online on how to compute adjacency matrices in Python. There might even be a package available for this.
If you have sufficient data you might also want to look at LSTM or GRU deep networks for this type of classification. If you utilise a model with pre-trained weights for for example facial recognition, you might be able to get some results with limited data through what is called ‘transfer learning’.
– Paul
halaMarch 15, 2018
Hi Paul
Can I use these codes on Linux ?
Paul van GentMarch 21, 2018
Hi Hala. Yes, it’s all written in Python, which is multi-platform. Assuming you install the dependencies mentioned in the tutorial it should run with little-to-no changes.
– Paul
halaMarch 30, 2018
hi Paul
thank you for your reply, when i run the first code i get this error
Traceback (most recent call last):
File “organising_dataset.py”, line 16, in
sourcefile_emotion = glob.glob(“source_images\\%s\\%s\\*” %(part, current_se
ssion))[-1] #get path for last image in sequence, which contains the emotion
IndexError: list index out of range
ChandApril 9, 2018
Hi Paul, I am also getting same error . Please help me .
Paul van GentApril 11, 2018
This indicates that glob() cannot find any images int he folder path generated. Check that the generated path is correct, the target folder contains the images, and that glob returns something, before moving on in the code.
– Paul
karthikMarch 22, 2018
cv2.error: ..\..\..\..\opencv\modules\core\src\alloc.cpp:52: error: (-4) Failed to allocate 413560004 bytes in function cv::OutOfMemoryError
could you help out with this?
Paul van GentMarch 27, 2018
Hi Karthik. You’re out of memory. Likely you’re using 32-bit python. Consider switching to 64-bit so you can address more ram.
If you’re already on 64-bit but have limited ram in your machine, consider a larger swap partition.
– Paul
Omkar JoshiMarch 27, 2018
Hi Paul. I am using CK+ dataset for implementing this tutorial for my college project.
However, I am not able to understand the integration of labels.
files = glob.glob(“/home/pradeep/paul/source_images/*/*/*.png”). This runs perfectly and uses the images for training and prediction. But I am not able to figure out the labels, training data, prediction data.
But when i try the above command by using %emotion at the end, I get errors. Please assist.
Thanks !
Paul van GentMarch 27, 2018
Hi Omkar. Be sure to follow the whole tutorial, especially the part under “organising the dataset“. It explains how to segment and structure the dataset before training the model. I embed the labels in the folder structure for clarity’s sake and ease of adding data.
– Paul
JamesApril 17, 2018
Hi, I’m confused as to what to do after I have trained the classifier. If I want to get a prediction for a specific photo, how would I go about doing that after training the classifier?
Thanks
-James
Paul van GentApril 23, 2018
Hi James. You can use the classifier’s .predict() function and feed it an image matrix. It will return a numerical label based on the training order of categories, which you need to translate to the correct emotion.
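[Editor's note: a minimal sketch of that translation step. It assumes `fishface` is the trained model from this tutorial and that the photo has been cropped and resized like the training images; the call itself is shown as a comment.]

```python
# label, confidence = fishface.predict(gray_image)  # gray, same size as training set

# The numeric label maps back to an emotion by training order:
emotions = ["neutral", "anger", "contempt", "disgust",
            "fear", "happy", "sadness", "surprise"]

def label_to_emotion(label):
    """Translate the numeric label returned by predict() into a name."""
    return emotions[label]
```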
Asif Ayub MayoApril 18, 2018
Assalam u Alaikum Dear Paul, I have successfully followed your instructions and got an accuracy of 72.4%. First of all thank you for sharing this wonderful work 🙂 I want to know if you can tell me more about exporting the trained model for later use. I have used
fisherface = cv2.face.FisherFaceRecognizer_create()
fisherface.train(images,labels)
fisherface.write(filename)
for exporting purposes. I have exported it in XML format, then I read it in another program with:
model = cv2.face.FisherFaceRecognizer_create()
model=fisherface.read(filepath)
but i am unable to read it if i use
print(type(model))
it returns None type
I hope you can understand what I am missing there
kindly reply as soon as possible…!!!
Just to let you know I am putting together a system that will monitor facial expression in real-time using local or remote camera as well as in any type of video stream for example skype,messenger or whatsapp video calls as well as I intend to create it on a dedicated hardware device and create API for emotion recognition services I am almost or nearly done with my first iteration of prototyping the product I would love to share my work with you.
currently I am following this tutorial I intend to work on another algorithm of CNN that is Meta-Cognition Fuzzy Inference System (McFIS) for facial expression recognition that has higher accuracy, I would love if you can read the paper and share your views on it.
But most importantly please reply ASAP! about the issue. I have to present my progress with in a week. Thanks Again!
Paul van GentApril 23, 2018
Hi Asif. Sure I’d like to read the paper and share some views on it, you can email me at info@paulvangent.com.
Regarding the model type I’m unable to reproduce the error. On my system saving the model weights using either .write() or .save(), and then loading them back up, results in class 'cv2.face_FisherFaceRecognizer'. Have you tried .save() as well, just to exclude that something is going wrong with your distribution there? Using .write() results in a shorter model file than using .save(), although I’m unsure this is at the root of your issue.
p.s. I hope my reply was on time, I’ve been on holidays.
– Paul
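[Editor's note: the NoneType Asif saw has a likely cause — OpenCV's read() loads weights in place and returns None, so assigning its return value throws the model away. A tiny stand-in class (not OpenCV itself) demonstrates the pattern.]

```python
class FakeRecognizer:
    """Stand-in for cv2.face.FisherFaceRecognizer: read() mutates the
    object in place and, like most in-place loaders, returns None."""
    def __init__(self):
        self.weights = None

    def read(self, path):
        self.weights = "weights from %s" % path  # loaded in place
        # no return statement -> implicitly returns None

model = FakeRecognizer()
result = model.read("fishface_model.xml")
# result is None; keep using `model`, not `result`
```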
Asif Ayub MayoApril 18, 2018
Python version – 3.6.4
OpenCV CV2 3.4.0
stemcinApril 21, 2018
Hi Paul,
I’m getting the following error message when trying to extract the faces. I was wondering if you could assist?
(base) C:\Desktop\python1>python extractfaces.py
OpenCV(3.4.1) Error: Assertion failed (!empty()) in cv::CascadeClassifier::detectMultiScale, file C:\bld\opencv_1520732670222\work\opencv-3.4.1\modules\objdetect\src\cascadedetect.cpp, line 1698
Traceback (most recent call last):
File “extractfaces.py”, line 50, in
detect_faces(emotion) #Call functiona
File “extractfaces.py”, line 20, in detect_faces
face = faceDet.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
cv2.error: OpenCV(3.4.1) C:\bld\opencv_1520732670222\work\opencv-3.4.1\modules\objdetect\src\cascadedetect.cpp:1698: error: (-215) !empty() in function cv::CascadeClassifier::detectMultiScale
I look forward to hearing from you
stemcinApril 21, 2018
Managed to work out the issue myself – Didn’t have the “haarcascade” files in the folder 🙁 – Sorted now.
stemcinApril 21, 2018
I am getting the following error message though with the train script? Any assistance would be much appreciated.
(base) C:\Desktop\python1>python train.py
File “train.py”, line 43
print “training fisher face classifier”
^
SyntaxError: Missing parentheses in call to ‘print’. Did you mean print(“training fisher face classifier”)?
Paul van GentApril 23, 2018
You’re using Python 3+. You need to add parentheses to print statements in any Python 3 and above environment, as the error states.
print("training fisher face classifier")
Euan RobertsonApril 21, 2018
Hi Paul,
I’m following this tutorial in visual studio. No matter where i place the opencv files the “import cv2” line always throws an error.
Any ideas as to why this might be happening?
Paul van GentApril 23, 2018
Hi Euan. What version are you using? Are you using Anaconda? Try pip install opencv-python or conda install opencv. Let me know if that fixes it for you.
Luis RuanoApril 25, 2018
Hi Paul, nice work with the tutorials. I am working in an Animatronix, so I am very interested to recognize emotions to perform a better interaction between human and robot. I got two questions.
1) When you call the function fishface.predict(), what does conf mean? Is it a measure of how accurate the prediction was, or something else? What does it store?
2) I am following your next tutorial to run the algorithm in real time, but I have had problems with the section on detecting the emotion in a face, in the part about updating the model. I have already downloaded the Update_Model script. How do I know if it is updating, or how well it is updating?
Luis RuanoApril 26, 2018
Hi Paul, nice work with the tutorials. I am working in an Animatronix Project, so I am very interested to recognize emotions to generate a better interaction between robot and human.
I got two questions.
1)In the part of fishface.predict(), the variable “conf”, what does this represents. I have printed it and it contains a float number. Is like the weight of how precise was the prediction ?
2) For the tutorial of the recognize emotions in real time to play music. I ´ve already download the Update_model py script. But I want know if it really is updating ?, how do I know?. And how precise was the preddiction to added to the database ?
Thanks for you answer !
Paul van GentApril 26, 2018
Hi Luis.
1. conf stores the confidence (‘distance’ from the stored representation of the class)
2. The code there grabs several images for each emotion you display, adds them to the folder structure, and re-trains the model. This is done to update the performance for the person whose pictures were added during the update routine. As the music player only functions for the user of the computer this is sufficient. However, in a project such as yours the problem is very much more difficult because you need to generalise to unknown people with different faces and different styles of facial expressions.
I recommend you take a look at the tutorial that uses facial landmarks and use that as a basis. As with any of these kinds of projects: you will need to get creative with how you collect your training dataset to achieve good real-world performance. Building the code and the model is the easy part.
Let me know if you need further help.
– Paul
Luis RuanoMay 5, 2018
OK, I will follow that tutorial, and you are right about how to collect training data. I will reduce the variables to a minimum and maintain a constant illumination and background for the dataset.
1. About your answer on conf:
If I understood correctly, I could use that distance to generate graphics about the algorithm, right? Is it like a correlation?
It would be nice if I could communicate with you on another platform. I am working on my graduation project, so it would be nice if you could advise me, or I could help you investigate something. Please let me know.
Paul van GentMay 10, 2018
Hi Luis. Yes it is a performance metric so you can generate graphics.
You can contact me on info@paulvangent.com.
– Paul
VarunApril 30, 2018
Hello,
I am having a problem training the dataset for Kinect. It has two sets, one RGB and the other depth, and I am confused about how to train both together. I have trained both separately but have no idea how to train them together. Kindly help me, or if anyone has worked on the Kinect (Xbox), please help me. I am stuck.
Paul van GentApril 30, 2018
Hi Varun. I think the easiest method is to fuse them together into a ‘4 channel’ image. I assume the depth information is single-channel and the same resolution as the RGB image. You can append the depth information as a fourth channel to the image array.
If you need some help, can you send me:
– 1 example of the RGB image
– 1 example of the Depth image
to info@paulvangent.com? I’ll have a look and help you fuse them.
– Paul
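[Editor's note: a sketch of the channel fusion Paul suggests, done with NumPy; the array sizes are placeholders for Kinect frames, not real sensor data.]

```python
import numpy as np

rgb = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder RGB frame
depth = np.zeros((480, 640), dtype=np.uint8)    # placeholder depth map, same resolution

# Append depth as a fourth channel alongside R, G and B
fused = np.dstack((rgb, depth))                 # shape: (480, 640, 4)
```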
Shashank RaoMay 8, 2018
Hey paul,
I made my own dataset but I couldn’t really understand how to organize it. Can you provide more info on that? How did you encode the txt files? All I did was make one text file with the names of the images followed by the emotions. Here’s the link to the file :
Shashank RaoMay 9, 2018
Figured it out!
Paul van GentMay 10, 2018
Good to hear!
– Paul
Paul van GentMay 10, 2018
Hi Shashank,
It depends a bit on what classifier and framework you use. The best organisation is to generate two arrays: X[] and Y[], with image data in every X[] index, and the corresponding label in the same index in Y[]. Note that the label needs to be numeric. You can translate back to a human-readable label after classification.
– Paul
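[Editor's note: a sketch of the X[]/Y[] organisation Paul describes. The input here is a hypothetical dict mapping emotion names to lists of image arrays; the helper name is illustrative.]

```python
import numpy as np

emotions = ["anger", "happy", "neutral"]  # example label names

def build_sets(images_by_emotion):
    """Build parallel arrays: X holds the image data, y holds the
    numeric label (the emotion's index) at the same position."""
    X, y = [], []
    for emotion, images in images_by_emotion.items():
        for img in images:
            X.append(img)
            y.append(emotions.index(emotion))  # numeric label
    return np.asarray(X), np.asarray(y)
```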
nehaSeptember 9, 2018
Hi Shashank,
I also want to make my own dataset, but I am unable to understand the encoded values of the txt files. Can you help me with this?
salih karanfilMay 9, 2018
Hello Paul;
First of all thank you for your efforts.
I have a problem.
training fisher face classifier
size of training set is: 0 images
Traceback (most recent call last):
File “C:\Users\Salih\Desktop\project\step3.py”, line 71, in
correct = run_recognizer()
File “C:\Users\Salih\Desktop\project\step3.py”, line 51, in run_recognizer
fishface.train(training_data, np.asarray(training_labels))
error: C:\projects\opencv-python\opencv_contrib\modules\face\src\fisher_faces.cpp:71: error: (-5) Empty training data was given. You’ll need more than one sample to learn a model. in function cv::face::Fisherfaces::train
Could you give me a solution?
Thank you 🙂
Paul van GentMay 10, 2018
Hi Salih,
There are no images loaded (‘size of training set: 0 images’). Verify the paths generated and the file detection of glob.
– Paul
Mutayyba WaheedMay 19, 2018
Hi Sir!
Can you please send me the source code of this project to this email address: mutayybawaeed@gmail.com
Paul van GentMay 22, 2018
All you need is in the tutorial!
– Paul
RanjanMay 24, 2018
hi sir,
what classification algorithm is used for classifying the emotions?
Paul van GentMay 30, 2018
Hi Ranjan. In this one we use Fisherfaces.
Mutayyba WaheedMay 25, 2018
Traceback (most recent call last):
File “E:\facemoji-master\facemoji-master\prepare_model.py”, line 13, in
fishface = cv2.createFisherFaceRecognizer()
AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
Can you please tell me how I can fix this error?
mauricioNovember 16, 2018
did you fix it? I have the same problem xd
RanjanMay 25, 2018
Hey Paul! We organized the folders (dataset) in the same order you mentioned in the article. But when we tried to execute the first code we didn’t get any output, i.e. the images aren’t extracted to the respective folders of sorted_set. Then we tried to execute the second code and the same thing happened: no error occurred, and yet we get no output. Can you please tell us why this happens?
Paul van GentMay 30, 2018
It is possible the folders are not correct. I would start with checking that the generated file lists (from glob) are populated with the expected files.
– Paul
RobinJuly 5, 2018
Hi Paul,
In my train.py, in def run_recognizer(), I have a problem with: fishface.train(training_data, np.asarray(training_labels))
The error is : fishface.train(training_data,np.asarray(training_labels))
TypeError: src is not a numpy array, neither a scalar
Can you help me ?
Best regards
palkabJuly 6, 2018
Hi Robin. Likely there’s an issue with loading (some of) the data. Take a look at what’s in the arrays training_data and training_labels. My bet is that one or both of them are empty.
– Paul
RobinJuly 9, 2018
Thanks for your answer Paul, I will investigate.
For your information, a French magazine used your code. They put your website in the sources. That’s why I asked you for some help. You can find your code on page 40 of this French magazine:
I will come back to tell you if I find the solution.
palkabJuly 11, 2018
Thanks Robin! Let me know of you need more help. Did you find if any images were loaded or not? You can then check what paths ‘glob’ uses to search, and check if those are ok.
Thanks for the magazine, I like to hear about those things 🙂
RobinJuly 11, 2018
I don’t understand, because my glob path is okay, and training_data and training_labels are OK. I will paste my code if you have some time to help me :3
import cv2
import glob
import random
import numpy as numpy
from matplotlib import pyplot as plt

# List of emotions
emotions = ["neutre", "colere", "mepris", "degout", "peur", "joie", "tristesse", "surprise"]
rep = glob.glob("C:/Users/Robin/Desktop/Divers Projets/Reconnaissance_Faciale/labels/*")
fishface = cv2.face.FisherFaceRecognizer_create()
data = {}

# Split the image base in two: 80% for training and 20% to evaluate the training performance!
def get_fich(emotion):
    fich = glob.glob("C:/Users/Robin/Desktop/Divers Projets/Reconnaissance_Faciale/datasheet/%s/*" % emotion)
    random.shuffle(fich)
    training = fich[:int(len(fich)*0.8)]  # use the first 80% of the files
    evalperf = fich[-int(len(fich)*0.2):]  # use the last 20% of the files
    return training, evalperf

# Organise the files for training
def make_sets():
    training_data = []
    training_labels = []
    evalperf_data = []
    evalperf_labels = []
    for emotion in emotions:
        training, evalperf = get_fich(emotion)
        for i in training:
            image = cv2.imread(i)
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            training_data.append(gray)
            training_labels.append(emotions.index(emotion))
        print("1")
        for i in evalperf:
            image = cv2.imread(i)
            gray2 = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            evalperf_data.append(gray2)
            evalperf_labels.append(emotions.index(emotion))
        print("2")
    return training_data, training_labels, evalperf_data, evalperf_labels

# Training and evaluation function
def run_recognizer():
    # Create groups of images and labels for training and for performance evaluation
    training_data = make_sets()
    training_labels = make_sets()
    evalperf_data = make_sets()
    evalperf_labels = make_sets()
    print("1 : " + str(training_data))
    print(type(training_data))
    print("------- 2 : " + str(training_labels))
    print(type(training_data))
    print(type(training_labels))
    print("Apprentissage de nos visages")
    fishface.train(numpy.asarray(training_data), numpy.asarray(training_labels))
    print("Evaluation des performances")
    cpt = 0
    correct = 0
    incorrect = 0
    for im in evalperf_data:
        evalperf, conf = fishface.predict(im)
        if evalperf == evalperf_labels[cpt]:
            correct = correct + 1
            cpt = cpt + 1
        else:
            incorrect = incorrect + 1
            cpt = cpt + 1
    return ((100*correct)/(correct+incorrect))

resultat = []
for i in range(0, 5):
    correct = run_recognizer()
    print("Résultat du cycle", i, " : ", correct, "%")
    resultat.append(correct)

print("resultat toal : ", np.mean(resultat), "%")
subhiranJuly 8, 2018
Very useful! But sir, I actually wanted to know how I can test my own input image.
Your help will be appreciated.
palkabJuly 11, 2018
The fisherface classifier object has a .predict() function. Just call that and give it an image, it will predict the corresponding label!
Cheere
Clément AtlanJuly 25, 2018
Hi Paul,
Firstly, a big thanks for the work you did! Thank you for sharing it with us, it’s so much appreciable 🙂
I have no specific problem, I was just wondering a few things…
I have a dataset which is very extensive; in other words it contains many emotions (and most of them are really close to each other), something like 16 emotions. Obviously the final accuracy I got was not very high, about 20%, which still proves the algorithm works. So I modified my extraction process a little to merge some emotions, so that I had fewer different ones. Anyway, the very reason to use such an algorithm is to reach a high rate of efficiency (or maybe I am wrong?). What we want is to be more accurate than a human judgment could be, and the thing is that this is not the case with this algorithm, or maybe with the whole OpenCV library. So here is my question: do you have any idea how the OpenCV functions work (train(), detectMultiScale()), or whether such a library allows a very accurate detection process?
Once again, thank you !
palkabAugust 8, 2018
Hi Clément,
The problem here is two-fold:
– The fisherfaces are not the most optimal way of detecting emotion. However, they are an accessible one. I might do a more elaborate tutorial in the near future involving deep learning approaches I’m working on now.
– Getting better results than human observers is not always possible. Read ‘the imperfect finish’ on this link for example.
The bottom line is: is there enough variance in the data to separate all emotion categories into classes? Is there enough data to learn to generalise to unknown data as well? In the end this is what it will come down to I’m afraid.
– Paul
John PeterJuly 30, 2018
Hello from Canada!
The work you have put together is impressive. From 2016 to now, what have you learned, and have you improved the precision/accuracy? I’m looking forward to your next post! This subject is great!
Thanks!
palkabAugust 8, 2018
Hi John,
I’ve been working a lot on deep learning approaches in similar fields, as well as on open sourcing my heart rate analysis toolkit (and porting it to embedded C for Arduino and such). There will be another post soon-ish regarding emotion recognition building blocks and deep learning.
– Paul
Parthesh SoniAugust 4, 2018
I have this error but I don’t know exactly why. I am new to Python and I have followed all of the instructions mentioned in the article except the one about deleting the duplicate neutral images manually. Thanks a lot for such a wonderful tutorial on this. The error is the following:
OpenCV Error: Bad argument (At least two classes are needed to perform a LDA. Reason: Only one class was given!) in lda, file /build/opencv-zcaJjh/opencv-3.2.0+dfsg/modules/core/src/lda.cpp, line 1018
Traceback (most recent call last):
File “trainPredict.py”, line 65, in
correct=run_recognizer()
File “trainPredict.py”, line 46, in run_recognizer
fishface.train(training_data, np.asarray(training_labels))
cv2.error: /build/opencv-zcaJjh/opencv-3.2.0+dfsg/modules/core/src/lda.cpp:1018: error: (-5) At least two classes are needed to perform a LDA. Reason: Only one class was given! in function lda
palkabAugust 8, 2018
Hi Partesh,
It seems that when you’re passing your training data and training labels, there is only one category! Make sure all the folders are there, including files, and that the labels (0, 1, 2, 3, etc.) are generated properly (just print() them at several steps and see where it goes wrong).
– Paul
GaurabAugust 11, 2018
Can you please explain how fishface.predict works and how it achieves an accuracy of about 69%?
palkabAugust 18, 2018
Predict runs the image through the model and generates a prediction. By comparing predictions to the expected values you can calculate accuracy.
– Paul
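[Editor's note: the accuracy figure is just the fraction of matching predictions, expressed as a percentage — a minimal sketch with a hypothetical helper name:]

```python
def accuracy_percent(predictions, expected):
    """Percentage of predicted labels that match the expected labels."""
    hits = sum(p == e for p, e in zip(predictions, expected))
    return 100.0 * hits / len(expected)

# e.g. 3 correct out of 4 gives 75.0
```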
jocelineAugust 19, 2018
Hi Paul, I tried to run the code in this tutorial, but it gives an error:
Traceback (most recent call last):
File “C:/Users/USER/PycharmProjects/coba1/master/classifier.py”, line 70, in
correct = run_recognizer()
File “C:/Users/USER/PycharmProjects/coba1/master/classifier.py”, line 50, in run_recognizer
print(“size of training set is:”, len(training_labels), ‘images’, fishface.train(training_data, np.asarray(training_labels)))
AttributeError: ‘builtin_function_or_method’ object has no attribute ‘train’
can you tell me how to fix it?
palkabAugust 19, 2018
You need to make sure you’re using the right OpenCV version as specified in the tutorial. From 3.0 onwards they changed the API interface. I’m not sure what it has become exactly, but check the docs of whatever version you’re using I’d say.
– Paul
Clément AtlanAugust 23, 2018
Hi Paul,
Thanks for your reply.
I understand that the fishefaces method is not the best way for what I want to do. The main goal I have is to detect several scales of pain the most accurately. That algorithm is intended for people with mental disabilities who cannot express their feelings.
I’ve read “the imperfect finish” as you suggested, quite interesting especially for this kind of issue. I suppose that I should search for a completely different approach to get more training information about emotion features.
Moreover, you ‘re probably right, the data set I have is not big enough, and maybe the way I sort data is not consistent with my purpose.
Anyway, thanks for what you’ve done. I read that you were working on a heart rate analysis toolkit. Pretty nice ! The startup I am working for is currently developing its own sensor device (based on arduino as well) which aims to detect emotions via heart rate / skin conductance / temperature analysis.
I am looking forward your more elaborate tutorial !
Cheers.
Clément
palkabAugust 25, 2018
Hi Clément, sounds interesting! Send me a mail at P.vangent@tudelft.nl please. I think I can help you with the pain analysis. I recently developed something in another collaboration that is similar.
– Paul
SukhadaAugust 30, 2018
Nice read and helped me to get the insight into the subject ….Excited to make my own version!
nehaSeptember 10, 2018
hi paul
can u please help how to make our own dataset. i tried making.,it’s working but with errors.
I am not able to understand what are the text files and what is to written in these text files.
palkabSeptember 12, 2018
Hi Neha. In the text files are the labels (1, 2, 3, etc). They are encoded as floats. If you read them in Python, pass them through the function int() to convert them.
There should be a file in the dataset about which label corresponds to which emotion.
SherrySeptember 17, 2018
Do you have the complete downloads for ck database? That’s why I keep on getting errors since I used the landmarks text file because I wasn’t able to download the emotions text file.
And yeah, I can’t download the emotions text file because ck database website is down :/
palkabSeptember 18, 2018
They usually come back up within a few days. Keep trying!
Paul
SherrySeptember 22, 2018
Thanks Paul! Everything worked perfectly!
palkabSeptember 28, 2018
Glad to hear! Happy coding
–?
palkabSeptember 28, 2018
I’m sorry Alvaro, I recently migrated servers and apparently not everything came back in the right folder from the backup. I’ve re-uploaded the file, links should work now.
–?
CarmenSeptember 25, 2018
Hi Paul!
I’ve run all the code and it works, I’m still cleaning and uploading new images to get better predictions.
My question is: how do you predict an emotion by a given image? Lets say I have image.jpg and I want to predict its emotion. How would that be?
Many thanks,
Carmen
palkabSeptember 28, 2018
Hi Carmen,
After loading (or training) the model, you can call its predict() function. In the tutorial I called the object ‘fishface’. Assuming you’ve kept the name, just load the image into an array and pass it:
fishface.predict(image_array)
Carmen Gonzalez-condeOctober 1, 2018
I figured it out too! Thank you :))))
philipOctober 18, 2018
PermissionError: [Errno 13] Permission denied: ‘source_emotion\\Emotion\\S005\\001’
palkabOctober 20, 2018
Your path is likely wrong. If you’re on linux, use different path format (/).
EmanuelOctober 19, 2018
hello paul very good what you do. My idea is to improve every day in this world of data science just like you. I just think that the order of the folders should be clearer.
data/
*sorted_set
**dataset
***anger
***contempt
***disgust
***etc
**difficult
**harcascade_frontalface_deafault
*source_emotion
**s005
**s010
**etc
*source_images
**s005
**s010
**etc
I would like to know how to apply the training to the real camera and tell me what mood I am in
Glenn Thomas AlexOctober 23, 2018
Hey Paul,
I am not able to download the dataset that you generated and cleaned.
Please do share. Thank you
palkabOctober 25, 2018
Hi Glenn. Sorry I migrated a while ago, seems I did not caught everything that went wrong yet. I’ve re-upped it.
Cheers
– Paul
philipNovember 4, 2018
Hi paul,
how can i now pass any random image for the model to predict the emotion in it
palkabNovember 6, 2018
Use the model’s .predict() function.
So if you initialised the model ze ’emotion_model’, you need to do ’emotion_model.predict(img)’, where ‘img’ is an image array.
Cheers
PradeepNovember 11, 2018
training fisher face classifier
size of training set is: 0 images
Traceback (most recent call last):
File “emotiontraining.py”, line 54, in
correct = run_recognizer()
File “emotiontraining.py”, line 37, in run_recognizer
fishface.train(training_data, np.asarray(training_labels))
cv2.error: OpenCV(3.4.3) C:\projects\opencv-python\opencv_contrib\modules\face\src\fisher_faces.cpp:71: error: (-5:Bad argument) Empty training data was given. You’ll need more than one sample to learn a model. in function ‘cv::face::Fisherfaces::train’
palkabNovember 11, 2018
Your paths are likely incorrect. It says in the top and bottom line: 0 images are loaded.
Sanghamitra MohantyNovember 12, 2018
Getting th eerror: ImportError: No module named ‘cv2’
palkabNovember 12, 2018
You need to install opencv. Be sure to read the tutorial and not just paste the code buddy :).
TALIB MOHAMMEDNovember 24, 2018
Traceback (most recent call last):
File “C:\Users\Talib\Desktop\face.py”, line 64, in
fishface = cv2.createFisherFaceRecognizer() #Initialize fisher face classifier
AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
>>>
help me for this error
palkabNovember 26, 2018
You need to use the OpenCV version specified in the tutorial. If you use a newer version: they changed the API a bit. In that case I’d say check the docs since I’m unsure what they changed it to and am on mobile now.
Cheers
Paul
AbdurahmanNovember 29, 2018
Respected sir, could you help me on the following error
sourcefile_neutral = glob.glob(“C:/Users/Moha/Desktop/Practical work/Emotion/source_images//%s//%s* ” %(part, current_session))[0] #do same for neutral image
IndexError: list index out of range
palkabDecember 3, 2018
Hi Abdurahman. Usually this means that the files are not found and/or not loaded properly. Please double check all the paths in the code are correct and that Python can see the files. The error states that the generated list is empty.
– Paul
Aurin BoseDecember 10, 2018
Hi Paul,
Thanks for the great tutorial. Just one question. What editor do you recommend using to run the code? I am currently using command prompt to run the python files and on running the Extracting faces script, no errors show up but my “dataset” and “sorted_set” folders remain empty.
Thanks,
Aurin Bose
palkabDecember 12, 2018
Hi Aurin,
I use mostly Visual Studio for coding and run it in cmd prompt / powershell. If your folders are staying empty, make sure that the source files are actually found. The for loop will also work with empty lists (silently doing nothing).
Cheers
Aurin BoseJanuary 21, 2019
Thanks a lot for the help!
I got the code to work properly now but I am experiencing a very high error rate. Its classifying the “Happy” faces as “Neutral”. Any idea why that could happen?
Thanks Again,
Aurin Bose
AraviJanuary 9, 2019
Hi,
Great work. I have created my own dataset and run the algorithm it is working fine.
But I want to pass an image to this and retrieve the emotion depicted in that image.
Anyway to do it.
palkabJanuary 16, 2019
Hi Aravi,
You need to read the image into memory and then simply call the
is the image array you loaded into memory. Keep in mind you’d need to resize the image to the standard dimensions. I’d recommend extracting the face, resizing it, then passing it.
.predict(function on it. In the blog we call the model ‘fishface’ so it should be
)
fishface.predict(where
)
Cheers
DudaJanuary 13, 2019
I am having issues because the glob format you show in your code does not work on my ubuntu 16.04 using python 2 or 3. I had to change from \\* to /* with each path. However, I am still having issues because some of the folders I downloaded are empty and then the first program does not work. What can I do?
palkabJanuary 16, 2019
Hi Duda,
It should not be an issue if the folders are empty. What error are you seeing?
Also if you’re on Ubuntu keep in mind that the OS doesn’t sort files the way Windows does. After calling “glob” you need to sort the resulting list as well.
Cheers
Paul
AbhiJanuary 18, 2019
Hi Paul, thanks for the amazing blog. I wanted to know how would we implement .predict function in the program (where will be the predict() be written)?
abhishek sethJanuary 19, 2019
the first code shows this error… please tell me what to do.
File “cone.py”, line 11, in
emotion = int(float(file.readline())) #emotions are encoded as a float, readline as float, then convert to integer.
ValueError: could not convert string to float: ‘ 2.1779878e+02 2.1708728e+02\n’
JasperJanuary 19, 2019
Hey Paul. Your blog is amazing. I wanted to know, suppose i have an “image.jpg” on desktop, is it possible to feed it’s path (for e.g, C:\desktop:\image.jpg) for prediction of emotion ?
palkabFebruary 6, 2019
Hi Jasper,
Sorry for the late replay I’ve been on holidays. Yes this is possible! Just load the image with any Python module (for example scipy.misc.imread) and pass it to the module. In the case of the tutorial you’d do
fishface.predict(im)with
imbeing the loaded image array.
– Paul
saradaFebruary 7, 2019
hi paul. how to find confusion matrix.
palkabFebruary 8, 2019
Hi Sarada,
I always find the module from `sklearn` to be super useful:
Just feed it the list of predictions and the list of true labels.
Code example here
Cheers,
Paul
MokayaFebruary 8, 2019
Hi paul,
Am trying to run the script for extracting the faces but am facing this error. How should i go along with it.kindly assist me.
Traceback (most recent call last):Color’
vicFebruary 8, 2019
Traceback (most recent call last):
Hi Paul;
Kindly assist me overcome this glitchCo
lor’
palkabFebruary 8, 2019
The error states the source is empty. In other words: your image is failing to load. Make sure to check why this is happening, if your paths are correct, verify an there’s an array of image data in memory. Try going step by step in the image load section to see where it goes wrong
– Paul
PrasadFebruary 23, 2019
Hi ….i am getting permission Denied error in organizing dataset code ..tried changing permission n all but didn’t work please help
vicFebruary 25, 2019
i have set the paths correctly and followed all the steps but still coming back to the same problem. how else can i solve it.
KeerthiFebruary 28, 2019
Hi paul,
I don’t get any output while i’m run the 1st code (Organising the dataset). I do everything as you told in the paragraph above the code. What to do?
TasmimMarch 2, 2019
Hi, how can i show result as a image? Like, happy expression written in happy images? ( in testing images)
Like this in only showing result in written line. It will be very helpful if it shows in images.
Btw, thats a really nice project. It helpse a lot. Thank you for your effort
palkabMarch 5, 2019
Hi Tasmim. Take a look at the opencv drawing functions, specifically the ‘puttext’ command in the docs:
This can be used to draw a specified string on the image.
Cheers,
Paul
SaifulMarch 17, 2019
anyone please tell me which script will put in which file what should be the file name for each script…and which file should is run first
help me please!!!!!!!!
Yeasir ArafatMarch 23, 2019
Hi
i want to know more about fisherface algorithm and lda method. How this method works in your code? i search in net but didn’t satisfied. There written style is very hard & didn’t make sense to me.
can you please give some document link /video or anything that i can understand fisherface & lda method?
thanks for your effort.
RoyMarch 24, 2019
Hi,
In this code, in which part you used feature extraction & classification?
Can you explain a little?
AminaApril 1, 2019
Hi,
i want to save the model by using
def save_model() :
fishface.save(“model_emotion.xml”)
and load it with
def load_model() :
dishface.read(“model_emotion.xml”)
but it dosen’t work how can i save it ?
palkabApril 4, 2019
Try using Pickle tp save and load any Python object. See more info here:
– Paul
sidApril 7, 2019
sir how can i detect emotions through webcam using this training code please help as i am doing a project for my academics and finding it very hard….!!!!
sidApril 8, 2019
please help me as I want to detect emotions from live streaming from webcam trained the model using this code what to do further please help….
Sandhya MalegereApril 20, 2019
I am getting error in organising the data set while executing “IndexError: list index out of range”. Please help me in resolving this, because I am new to python.
Alexander B VanCampenApril 22, 2019
Trying to run this through an RPI 3 B+. Wrote up the code you’ve given us, but I’m not getting a response. Admittedly you said that the tutorial is written for use on windows, but figured it should work since python is a multi-platform language. That said you’ve said intermittently in the comments that the it should work as is, and that it should need some tweeks. Please help.
palkabApril 30, 2019
Make sure you use forward slashes for the paths on linux systems. I’ve been planning to update the tutorial but haven’t found the time yet
Hoai DucApril 25, 2019
Hello sir
I have this error :’ascii’ codec can’t decode byte 0x81 in position 356: ordinal not in range(128). How can i solve it ?
palkabApril 30, 2019
You’re likely opening a pickle file that was made on Pyton2.x. Pickle on Python 3.x is not necessariy backwards compatible.
See:
Good luck!
Paul | http://www.paulvangent.com/2016/04/01/emotion-recognition-with-python-opencv-and-a-face-dataset/ | CC-MAIN-2019-30 | refinedweb | 23,047 | 66.03 |
I'm having some trouble implementing the Karatsuba algorithm. My project limits me to the following libraries: iostream, iomanip, cctype, cstring. Also, I'm limited to only using the integer built-in type and arrays/dynamic arrays to handle numbers (only unsigned integers will be input). I've built a class to handle integers of arbitrary size using dynamic arrays. I need to implement a function that multiplies big integers, and I'd like to use Karatsuba if possible. The trouble I'm running into is how to break apart large integers and do the multiplications called for in the algorithm. I assume this should be done recursively. I was hoping someone could give me an example of how to do this.
For example:
I have two numbers stored in dynamic arrays. Let's say they are:
X = 123456789123456789123456789
Y = 987654321987654321987654321987654321
How would Karatsuba need to handle this, given the storage limitations on the unsigned int type? Any help would be much appreciated!
If you look at the Pseudo-code here, you can modify it a little to use with an array like so:
procedure karatsuba(num1, num2) if (num1.Length < 2) or (num2.Length < 2) //Length < 2 means number < 10 return num1 * num2 //Might require another mult routine that multiplies the arrays by single digits /* calculates the size of the numbers */ m = max(ceiling(num1.Length / 2), ceiling(num2.Length / 2)) low1, low2 = lower half of num1, num2 high1, high2 = higher half of num1, num2 /* 3 calls made to numbers approximately half the size */ z0 = karatsuba(low1,low2) z1 = karatsuba((low1+high1),(low2+high2)) z2 = karatsuba(high1,high2) //Note: In general x * 10 ^ y in this case is simply a left-shift // of the digits in the 'x' array y-places. i.e. 4 * 10 ^ 3 // results in the array x[4] = { 4, 0, 0, 0 } return (z2.shiftLeft(m)) + ((z1-z2-z0).shiftLeft(m/2)) + (z0)
Provided you have an addition, subtraction and extra single-digit multiplication routine defined for your number arrays this algorithm should be implemented pretty easily (of course along with the other required routines such as digit shifting and array splitting).
So, there is other preliminary work for those other routines, but that is how the Karatsuba routine would be implemented. | https://codedump.io/share/vZEAb7UZJzoJ/1/karatsuba-c-implementation | CC-MAIN-2017-13 | refinedweb | 377 | 53.31 |
public class jq extends java.lang.Object
The object returned by the jq function. For example,
jq('#id');
ZK 5 Client Engine is based on jQuery. It inherits all functionality provided by jQuery. Refer to the jQuery documentation for a complete reference. However, we use the global function called jq to represent jQuery. Furthermore, for documentation purposes, we use {@link jq} to represent the object returned by the jq function.
Notice that there is no package called _. Rather, it represents the global namespace of JavaScript. In other words, it is the namespace of the window object in a browser.
jq jq(Object selector, Object context);
Refer to jQuery as jq
First of all, the jQuery class is referenced as jq(), and it is suggested to use jq instead of $ or jQuery when developing a widget, since it might be renamed later by an application (say, overridden by another client framework). Here is an example that uses jq:
jq(document.body).append("");
Dual Objects
To extend jQuery's functionality, each time jq(...) or zk(...) is called, an instance of jq and an instance of jqzk are created. The former provides the standard jQuery API plus some minimal enhancements as described below. The latter provides ZK's additional APIs.
You can retrieve one from the other with zk and jqzk.jq.
jq('#abc').zk; //the same as zk('#abc')
zk('#abc').jq; //the same as jq('#abc');
Extra Selectors
@tagName
jq is extended to support selection by a ZK widget's tagName. For example,
jq('@window');
Notice that it looks through the ZK widget tree to find any widget whose className ends with window.
If you want to search for a widget in a nested tag, you can specify the selector after @. For example, the following searches the space owner named x, then y, and finally z:
jq('@x @y @z');
Or, to search for an element by a given attribute of the widget, you can specify the selector as follows:
jq('@window[border="normal"]')
$id
jq is extended to support selection by a widget's ID (Widget.id), and then by the DOM element's ID. For example,
jq('$xx');
Notice that it looks for any bound widget whose ID is xx, and selects the associated DOM element if found.
If you want to search for a widget in an inner ID space, you can specify the selector after $. For example, the following searches the space owner named x, then y, and finally z:
jq('$x $y $z');
Or, for an advanced search combining CSS3 and @, you can specify it like this:
jq('@window[border="normal"] > $x + div$y > @button:first');
A widget
jq is extended to support Widget. If the selector is a widget, jq will select the associated DOM element of the widget.
jq(widget).after(''); //assume widget is an instance of Widget
In other words, it is the same as
jq(widget.$n()).after('');
Extra Contexts
The zk context
jq('foo', zk);
With the zk context, a selector without any prefix is assumed to be an ID. In other words, you don't need to prefix it with '#'. Thus, the above example is the same as
jq('#foo')
Of course, if the selector is not a string, or is prefixed with a non-alphanumeric letter, the zk context is ignored.
Extra Global Functions
The zk function
jqzk zk(Object selector);
It is the same as jq(selector, zk).zk. In other words, it assumes the zk context and returns an instance of jqzk rather than an instance of jq.
Other Extension
- jq - DOM utilities (such as innerX())
- jqzk - additional utilities to jq.
- Event - the event object passed to the event listener
Does not override a previous copy, if any
Unlike the original jQuery behavior, ZK's jQuery doesn't override the previous copy, if any, so ZK can be more compatible with other frameworks that might use jQuery. For example, if you manually include a copy of jQuery before loading ZK Client Engine, jQuery will refer to the copy you included explicitly. To refer to ZK's copy, always use jq.
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public jqzk zk
public static boolean nodeName(DOMElement el, String tag1)
el - the element to test
tag1 - the name to test. You can have any number of names to test, such as
jq.nodeName(el, "tr", "td", "span")
public static String nodeName(DOMElement el)
el - the element to test. If el is null, an empty string is returned.
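jq.nodeName itself needs a real DOM node, but its documented contract — a case-insensitive match of the node name against any number of candidate names, with the one-argument form returning the lower-cased name (or an empty string for null) — can be sketched standalone. The stub object below merely stands in for a DOM element; this is an illustrative re-implementation, not ZK's source:

```javascript
// Illustrative re-implementation of the documented jq.nodeName contract.
function nodeName(el, ...tags) {
  if (!el || !el.nodeName) return tags.length ? false : '';
  var name = el.nodeName.toLowerCase();
  if (!tags.length) return name; // one-argument form: the lower-cased name
  return tags.some(function (t) { return t.toLowerCase() === name; });
}

var el = {nodeName: 'TR'}; // stub standing in for a DOM node
nodeName(el, 'tr', 'td', 'span'); // true
nodeName(el, 'div');              // false
nodeName(el);                     // 'tr'
```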
public static String px(java.lang.Integer v)
It is usually used for generating left or top.
v - the number of pixels
px0(java.lang.Integer)
public static String px0(java.lang.Integer v)
Unlike px(java.lang.Integer), this method assumes 0 if v is negative.
It is usually used for generating width or height.
v - the number of pixels. 0 is assumed if negative.
px(java.lang.Integer)
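The documented difference between the two helpers is only the clamping of negative values, which can be sketched directly (an illustration of the contract, not ZK's source; the handling of a missing argument shown here is an assumption):

```javascript
function px(v)  { return (v || 0) + 'px'; }            // for left/top: negatives allowed
function px0(v) { return Math.max(v || 0, 0) + 'px'; } // for width/height: 0 if negative

px(-5);  // '-5px' -- acceptable for a coordinate such as left
px0(-5); // '0px'  -- a width or height must not be negative
```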
public static Array $$(String id, String subId)
Returns the array of DOMElement that matches. It invokes document.getElementsByName to retrieve the DOM elements.
id - the identifier
subId - [Optional] the identifier of the sub-element. Example: jq.$$('_u_12', 'cave');
the array of DOMElement that matches the specified condition
public static boolean isAncestor(DOMElement p, DOMElement c)
Notice that, if you want to test widgets, please use zUtl.isAncestor(java.lang.Object, java.lang.Object) instead.
p - the parent element to test
c - the child element to test
zUtl.isAncestor(java.lang.Object, java.lang.Object)
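The ancestor test is a walk up the parentNode chain. The sketch below runs on plain objects that carry the same parentNode link real DOM nodes have; treating a node as its own ancestor is an assumption of this illustration rather than a statement about ZK's exact behavior:

```javascript
// Minimal sketch of an ancestor test; not ZK's actual implementation.
function isAncestor(p, c) {
  for (; c; c = c.parentNode)
    if (p === c) return true; // in this sketch a node counts as its own ancestor
  return false;
}

var root = {}, mid = {parentNode: root}, leaf = {parentNode: mid};
isAncestor(root, leaf); // true
isAncestor(leaf, root); // false
```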
public static int innerX()
public static int innerY()
public static int innerWidth()
public static int innerHeight()
public static int scrollbarWidth()
public static boolean isOverlapped(Offset ofs1, Offset dim1, Offset ofs2, Offset dim2, int the)
ofs1 - the offset of the first rectangle
dim1 - the dimension (size) of the first rectangle
ofs2 - the offset of the second rectangle
dim2 - the dimension (size) of the second rectangle
the - tolerant value for the calculation
public static boolean isOverlapped(Offset ofs1, Offset dim1, Offset ofs2, Offset dim2)
ofs1 - the offset of the first rectangle
dim1 - the dimension (size) of the first rectangle
ofs2 - the offset of the second rectangle
dim2 - the dimension (size) of the second rectangle
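With ZK's Offset values being [left, top] pairs and the dimensions [width, height] pairs, the two-rectangle test is a standard axis-aligned intersection check. This is a sketch of the documented contract (without the tolerance parameter), not ZK's exact source:

```javascript
// Axis-aligned rectangle overlap: offsets are [left, top], dimensions [width, height].
function isOverlapped(ofs1, dim1, ofs2, dim2) {
  var x11 = ofs1[0], x12 = x11 + dim1[0], y11 = ofs1[1], y12 = y11 + dim1[1],
      x21 = ofs2[0], x22 = x21 + dim2[0], y21 = ofs2[1], y22 = y21 + dim2[1];
  return x21 <= x12 && x22 >= x11 && y21 <= y12 && y22 >= y11;
}

isOverlapped([0, 0], [10, 10], [5, 5], [10, 10]); // true  -- the rectangles intersect
isOverlapped([0, 0], [10, 10], [20, 20], [5, 5]); // false -- disjoint
```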
public static boolean clearSelection()
Notice: jqzk.setSelectionRange(int, int) is used for input-type elements, while this method is applied to the whole browser window.
jqzk.setSelectionRange(int, int),
jqzk.enableSelection(),
jqzk.disableSelection()
public static Map filterTextStyle(Map styles, Array plus)
jq.filterTextStyle({width:"100px", fontSize: "10pt"}); //return {font-size: "10pt"}
styles - the styles to filter
plus - an array of the names of the additional styles to include, such as ['width', 'height']. Ignored if not specified or null.
public static String filterTextStyle(String style, Array plus)
jq.filterTextStyle('width:100px;font-size:10pt;font-weight:bold'); //return 'font-size:10pt;font-weight:bold'
style - the style to filter
plus - an array of the names of the additional styles to include, such as ['width', 'height']. Ignored if not specified or null.
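The idea behind both overloads is a whitelist filter: keep only the text-related styles plus any names listed in plus. The whitelist below is an illustrative subset (ZK's actual list, and its camelCase-to-dashed normalization, are not reproduced here):

```javascript
// Illustrative whitelist; ZK's real list of text styles is longer.
var TEXT_STYLES = ['font-family', 'font-size', 'font-style', 'font-weight', 'color'];

function filterTextStyle(styles, plus) {
  var keep = TEXT_STYLES.concat(plus || []), out = {};
  for (var nm in styles)
    if (keep.indexOf(nm) >= 0) out[nm] = styles[nm];
  return out;
}

filterTextStyle({width: '100px', 'font-size': '10pt'});
// -> {'font-size': '10pt'}
filterTextStyle({width: '100px', 'font-size': '10pt'}, ['width']);
// -> {width: '100px', 'font-size': '10pt'}
```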
public static Map parseStyle(String style)
style - the style to parse
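Parsing a style string into a name/value map amounts to splitting on ';' and then on the first ':' of each pair. A minimal sketch (illustrative, not ZK's source):

```javascript
function parseStyle(style) {
  var map = {};
  (style || '').split(';').forEach(function (pair) {
    var j = pair.indexOf(':'); // split on the first colon only
    if (j > 0) map[pair.substring(0, j).trim()] = pair.substring(j + 1).trim();
  });
  return map;
}

parseStyle('width:100px; font-size:10pt');
// -> {width: '100px', 'font-size': '10pt'}
```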
public static DOMElement newFrame(String id, String src, String style)
id - ID (required)
src - the source URL. If omitted, a one-pixel GIF is assumed.
style - the CSS style. Ignored if omitted.
public static DOMElement newStackup(DOMElement el, String id, DOMElement anchor)
Notice that you usually have to call jqzk.makeVParent() before calling this, since a DIV with relative or absolute position will crop the child element. In other words, you have to make the element a top-level element before creating a stackup for it.
To remove the stackup, call remove().
If you want to create a shadow, you don't need to access this method since Shadow has an option to create and maintain the stackup automatically.
el - the element from which to retrieve the dimensions. If omitted, the stackup is not appended to the DOM tree.
id - ID of the stackup (iframe). If omitted and el is specified, it is el.id + '$ifrstk'. If both el and id are omitted, 'z_ifrstk' is assumed.
anchor - where to insert the DOM element before (i.e., anchor will become the next sibling of the stackup, so anchor will be on top of the stackup if the z-index is the same). If omitted, el is assumed.
public static DOMElement newHidden(String name, String value, DOMElement parent)
name - the name of the HIDDEN tag.
value - the value of the HIDDEN tag.
parent - the parent node. Ignored if not specified.
public static DOMElement head()
public static boolean confirm()
Similar to window.confirm, except it will set zk.alerting so widgets know to ignore onblur (since the focus will be back later).
It is strongly suggested to use this method instead of window.confirm.
public static void alert(String msg, Map opts)
It will set zk.alerting, so widgets (particularly input widgets) know to ignore onblur (since the focus will be back later).
It is strongly suggested to use this method instead of window.alert.
If opts is omitted or opts.mode is not os, it is similar to org.zkoss.zul.Messagebox.show() at the server.
jq.alert('Hi');
jq.alert('This is a popup message box', {mode:"popup", icon: "ERROR"});
jq.alert('With listener', {
    button : {
        YES: function () {jq.alert('Yes clicked')},
        NO: function () {jq.alert('No clicked')}
    }
});
msg - the message to show
opts - the options.
public static void onzsync(java.lang.Object obj)
zsync invocation. For example,
jq.onzsync(obj1);
obj - the object to register
zsync(),
unzsync(java.lang.Object)
public static void unzsync(java.lang.Object obj)
zsync invocation. For example,
jq.unzsync(obj1);
obj - the object to register
zsync(),
onzsync(java.lang.Object)
public static void zsync()
Invokes the zsync method of the registered objects.
zsync is called automatically when zWatch fires onSize, onShow or onHide.
It is useful if you have a DOM element whose position is absolute. Then, if you register the widget, the widget's zsync method will be called when some widget becomes visible, is added, and so on.
For example, Window uses a DIV to simulate the shadow in IE, so it can register itself in Widget.bind_(zk.Desktop, zk.Skipper, _global_.Array) and then synchronize the position and size of the shadow (DIV) in zsync as follows.
bind_: function () {
    if (zk.ie) jq.onzsync(this); //register
    ...
},
unbind_: function () {
    if (zk.ie) jq.unzsync(this); //unregister
    ...
},
zsync: function () {
    this._syncShadow(); //synchronize shadow
    ...
}
Notice that it is better not to use the absolute position for any child element, so the browser will maintain the position for you. After all, it runs faster and zsync won't be called if some 3rd-party library is used to create DOM element directly (without ZK).
public static void onSyncScroll(java.lang.Object wgt)
doSyncScroll invocation. For example,
onSyncScroll();
wgt - the object to register
doSyncScroll(),
unSyncScroll(java.lang.Object)
public static void doSyncScroll()
Invokes the doSyncScroll method of the registered objects.
doSyncScroll is called automatically when zWatch fires onResponse, onShow or onHide.
It is useful if you have a Widget that uses zul.Scrollbar. Then, if you register the widget, the widget's doSyncScroll method will be called when the widget adds/removes/hides/shows its child widget.
onSyncScroll(java.lang.Object),
unSyncScroll(java.lang.Object)
public static void unSyncScroll(java.lang.Object wgt)
doSyncScroll invocation. For example,
unSyncScroll(wgt);
wgt - the object to register
doSyncScroll(),
onSyncScroll(java.lang.Object)
public static Map margins()
jqzk.sumStyles(_global_.String, _global_.Array) to calculate the numbers specified in these styles.
margins(),
paddings()
public static Map borders()
jqzk.sumStyles(_global_.String, _global_.Array) to calculate the numbers specified in these styles.
margins(),
paddings()
public static Map paddings()
jqzk.sumStyles(_global_.String, _global_.Array) to calculate the numbers specified in these styles.
margins(),
borders()
public static void focusOut()
Notice that you cannot simply use jq(window).focus() or zk(window).focus(), because it has no effect for browsers other than IE.
public static String css(DOMElement elem, String name, String extra)
Note that the function is only applied to the width or height property, and the third argument must be 'styleonly'.
For example,
jq.css(elem, 'height', 'styleonly'); // or
jq.css(elem, 'width', 'styleonly');
elem - a DOM element
name - the style name
extra - an option; in this case, it must be 'styleonly'
public static java.lang.Object evalJSON(String s)
It is similar to jq.parseJSON (jQuery's default function), except 1) it doesn't check whether the string is valid JSON, and 2) it uses eval to evaluate it.
Thus, it might not be safe to invoke this if the string's source is not trustworthy (in that case, it is better to use jq.parseJSON).
s - the JSON string
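The eval-based approach the description refers to can be shown in two lines: wrapping the string in parentheses so that a leading '{' parses as an object literal rather than a block. As the doc warns, this must not be fed untrusted input; jq.parseJSON (JSON.parse) is the safe choice there:

```javascript
// Sketch of eval-based JSON evaluation; unsafe for untrusted input.
function evalJSON(s) {
  return eval('(' + s + ')'); // parentheses force object-literal parsing
}

evalJSON('{"a": 1, "b": [2, 3]}'); // -> {a: 1, b: [2, 3]}
```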
public static void toJSON(java.lang.Object obj, java.lang.Object replace)
You can provide an optional replacer method. It will be passed the key and value of each member, with this bound to the containing object. The value that is returned from your method will be serialized. If your method returns undefined, then the member will be excluded from the serialization. Values that do not have JSON representations, such as undefined or functions, will not be serialized. Such values in objects will be dropped; in arrays they will be replaced with null. You can use a replacer function to replace those with JSON values. JSON.stringify(undefined) returns undefined.
The optional space parameter produces a stringification of the value that is filled with line breaks and indentation to make it easier to read.
If the space parameter is a non-empty string, then that string will be used for indentation. If the space parameter is a number, then the indentation will be that many spaces.
Example:
text = jq.toJSON(['e', {pluribus: 'unum'}]);
// text is '["e",{"pluribus":"unum"}]'
text = jq.toJSON([new Date()], function (key, value) {
    return this[key] instanceof Date ? 'Date(' + this[key] + ')' : value;
});
// text is '["Date(---current time---)"]'
obj - any JavaScript object
replace - an optional parameter that determines how object values are stringified. It can be a function.
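jq.toJSON follows the JSON.stringify contract described above, so plain JSON.stringify (runnable anywhere) exhibits the same behavior. Note the replacer reads this[key] because, by the time the replacer sees a Date, value has already been converted to a string by Date.prototype.toJSON:

```javascript
var text = JSON.stringify(['e', {pluribus: 'unum'}]);
// text is '["e",{"pluribus":"unum"}]'

var dated = JSON.stringify({when: new Date(0)}, function (key, value) {
  // this[key] is the raw Date; value is already the toJSON() string
  return this[key] instanceof Date ? 'Date(' + this[key].getTime() + ')' : value;
});
// dated is '{"when":"Date(0)"}'
```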
public static String d2j(Date d)
It works with org.zkoss.json.JSONs.d2j() to transfer data from client to server.
d - the date object to marshal. If null, null is returned.
public static Date j2d(String s)
It works with org.zkoss.json.JSONs.j2d() to transfer data from server to client.
s - the string that was marshalled at the server
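The exact wire format these two methods share with org.zkoss.json.JSONs is internal to ZK, but the idea is a lossless Date-to-string round trip. The sketch below uses ISO-8601 purely as an illustration; it is not necessarily the encoding ZK uses:

```javascript
// Illustrative date marshalling round trip (ISO-8601 stands in for ZK's format).
function marshalDate(d)   { return d == null ? null : d.toISOString(); }
function unmarshalDate(s) { return s == null ? null : new Date(s); }

var d = new Date(1234567890123);
unmarshalDate(marshalDate(d)).getTime() === d.getTime(); // true
marshalDate(null); // null, mirroring the documented null handling
```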
public jq replaceWith(Widget widget, Desktop desktop, Skipper skipper)
Replaces the matched elements with the specified Widget. We extend jQuery's replaceWith to allow replacing with an instance of Widget.
widget - a widget
desktop - the desktop. It is optional.
skipper - the skipper. It is optional.
public jq remove()
public jq empty()
Unlike jQuery, it does nothing if nothing is matched.
public jq show()
public jq hide()
public void before(java.lang.Object content, Desktop desktop)
Notice that this method is extended to handle Widget. Refer to jQuery's documentation for details.
content - If it is a string, it is assumed to be an HTML fragment. If it is a widget, the widget will be inserted before.
desktop - [optional] the desktop. It is used only if content is a widget.
public void after(java.lang.Object content, Desktop desktop)
Notice that this method is extended to handle
Widget.
Refer to Parameters:
content - If it is a string, it is assumed to be
a HTML fragment. If it is a widget, the widget will be insert after
desktop - [optional] the desktop. It is used only
if content is a widget.
public void append(java.lang.Object content, Desktop desktop)
Notice that this method is extended to handle
Widget.
Refer to Parameters:
content - If it is a string, it is assumed to be
a HTML fragment. If it is a widget, the widget will be appended
desktop - [optional] the desktop. It is used only
if content is a widget.
public void prepend(java.lang.Object content, Desktop desktop)
Notice that this method is extended to handle
Widget.
Refer to Parameters:
content - If it is a string, it is assumed to be
a HTML fragment. If it is a widget, the widget will be prepended
desktop - [optional] the desktop. It is used only
if content is a widget. | https://www.zkoss.org/javadoc/latest/jsdoc/_global_/jq.html | CC-MAIN-2021-43 | refinedweb | 2,661 | 59.3 |
LONDON (ICIS)--The majority of participants in the northwest ?xml:namespace>
However, etac price hikes have been accepted more easily in some parts of
“It’s a mixed picture [in terms of customers accepting higher prices],” one producer said. "
Indian sellers reported healthy demand from European customers, shortages of etac supply and European etac prices going up.
However, other market participants have not yet seen price hikes this week, but believe it is just a matter of time.
“I definitely think prices will move up,” a distributor said. “For some distributors and some producers it already did.”
Just one distributor disagreed with the idea of prices facing upward pressure.
“I don't see any signal that prices will go up further,” this source said.
Last week, the European July contract price for ethylene settled at an increase of €50/tonne from June.As a result, both local and importing producers sought July etac price hikes of €30-60/tonne from June levels. | http://www.icis.com/resources/news/2014/07/04/9798492/higher-feedstock-costs-see-europe-etac-prices-start-to-rise/ | CC-MAIN-2016-26 | refinedweb | 162 | 55.84 |
This notebook can be used to subset the 2012 medicare provider utilization and payment data by state.
The raw data are available here:
The files are downloaded in a zip archive. After extracting the files, compress the main data file. We used gzip here, if you compress it in a different way you will need to edit some of the code below.
We will use these modules from the standard library.
import gzip import os import csv
Choose a state to subset.
state = "FL"
This should be the name of the data file downloaded from the CMS web site, edit if needed.
fname = "Medicare-Physician-and-Other-Supplier-PUF-CY2012.txt.gz"
Set up a reader for a tab-delimited file. If you compressed the file using something other than gzip you will need to edit this cell to use the corresponding compressed file reader.
fid = gzip.open(fname, 'rt') inp = csv.reader(fid, delimiter="\t")
Set up a writer for the state subset file to be created.
oname = state + "-subset.csv.gz" oid = gzip.open(oname, "wt") out = csv.writer(oid)
Always include the header.
head = next(inp) out.writerow(head)
Read the rest of the file and write the selected records.
for line in inp: if line[11] == state: out.writerow(line)
Clean up.
oid.close() fid.close() | http://nbviewer.jupyter.org/urls/umich.box.com/shared/static/0wfjux0rktzd5n1zbc0ymlpas693fv75.ipynb | CC-MAIN-2018-05 | refinedweb | 220 | 78.14 |
?
Let me know if you figure out how to do this—I tried briefly a while ago but I don't think I found anything.
Normally it's simply part of the args variable:
import sublime, sublime_plugin
class TestPluginCommand(sublime_plugin.TextCommand):
def run_(self, args):
print args'event']
I tried to play with this but i had issues converting reliably (x,y) to test position (it was some times ago but maybe things are better now ...). So in the end what i did was pass the arg to the drag_select command, and then use view.sel()[0].This was for a plugin to emulate a kind of drag&drop (more a sel&click) before it was implemented: you can find the very simple source here: bitbucket.org/Clams/pasteselonclick/src
Didn't work for me. The second argument is just a plain Edit object, which cannot be indexed. Could you, please, post your mousemap?
{
"button": "button2",
"modifiers": "alt"],
"press_command": "paste_sel_on_click",
"press_args" : {"del" : true}
},
{
"button": "button2",
"modifiers": "alt","ctrl"],
"press_command": "paste_sel_on_click",
"press_args" : {"del" : false}
}
]
Ah I see! run_ (with an underscore) strips the event from the arguments list before calling run (no underscore). So you have to override run_ instead of overriding run.Awesome.
I don't think wbond's package manager does anything with dependencies, but I just wrote a cool mouse-callback thing:gist.github.com/2979613It lets you inherit from MouseEventListener and then you get an on_pre_click with the args and an on_post_click with the position in the file where the user clicked.I think I might upload it to github and make it a real repo, and then I'll make my ScrollOffset plugin ignore mouse input. Now that would rock!
Woot! What's the difference between run and run_?
/quick thread hijack:
@adzenith I was checking out your ScrollOffset plugin earlier today. I use a similar plugin that always keeps the cursor centered, so I thought your "mouse-callback thing" may be solving similar problems as the ones I am encountering. Honestly, I can't figure out what it's supposed to do. What I was looking for is disabling the click+hold for selection so that I could only use mouse button 1 to change the cursor's location. (I would then use shift+click for selection). With the aforementioned plugin, my thumb has a bad habit of tapping the touchpad, selecting chunks of text which I then gleefully proceed to overwrite without noticing... Hope this makes, sense
The mouse-callback thing is not yet integrated into my ScrollOff plugin, which may have explained what you were seeing.My plan was to capture on_pre_click, disable ScrollOff for the next on_selection_modified, then capture on_post_click and re-enable. This would make it so that clicking wouldn't scroll the display, allowing normal selections with the mouse. Does that make sense?
That's the entire difference right there. run doesn't get the event.
I wasn't clear, sorry. I tried the Scroll Off plugin and the mouse stuff separately (one after the other). If the latter only works with the former, then that's why I didn't see any effect. Or maybe I still don't understand what it's supposed to do.
Sort of. (I'm not a programmer and your explanation is a little technical.) I'll keep an eye out for what you're building and see if it might work for me also, although probably not out of the box.
In the meantime I would be quite happy with just disabling the mouse (or, at least, button 1) altogether when my plugin is enabled, but "context" seems to be ignored in mousemaps. Darn.
I just pushed my changes. Try installing both ScrollOffset and MouseEventListener and tell me if you like what you see.
You should put something in the README about the fact that ScrollOffset only works when word wrap is off
ScrollOffset looks like a plugin I might well use (for coding, which is a different situation than the one I've been talking about).
As for using MouseEventListener with my typewriter plugin, I didn't manage to get anything particularly useful done.
I've thrown up the plugin below, if you don't mind having a look. (But I don't mean to impose.)
import sublime, sublime_plugin
class AlwaysCenterCommand(sublime_plugin.EventListener):
def on_selection_modified(self, view):
if view.settings().get('typewriter_mode') == 1:
sel = view.sel()
region = sel[0] if len(sel) == 1 else None
if region != None:
view.show_at_center(region)
The plugin keeps the cursor always centered. What I've been hoping to do is to prevent is extending the selection with mouse clicking. Is this possible?
Again, feel free to ignore this stuff.
I guess I never use word wrap... Maybe I should look into why it doesn't work.
I'd install MouseEventListener, then in your AlwaysCenterCommand add this:
ignore_count = 0
def on_pre_mouse_down(self, args):
self.ignore_count = 3
def on_post_mouse_down(self, click_point):
self.ignore_count = 1
Then inside of on_selection_modified, throw this at the top:
if self.ignore_count:
self.ignore_count -= 1
return
Lemme know if that works.
typewriter_mode was added to bufferscroll, did you have some problem with this plugin? I want to fix it, if that is the case.
REgars,
Hi @tito,
I am not using your BufferScroll plugin, so there isn't anything related to that for you to fix. If you're offering your excellent services though.......
I am using the plugin I posted above; it's slightly modified from something @facelessuser has written1. This plugin activates on "on_selection_modified" rather than "on_modified" (as is the case with BufferScroll). This means that the screen is scrolled every time the cursor changes position,which works great when you are:
a) writing prose rather than code,b) using the keyboard rather than the mouse
Unfortunately, the functionality of the mouse is quite broken: it's too easy to select text when trying to position the cursor. I haven't actually used iA Writer2, but it disables the mouse completely, I suspect for this reason.
I actually rather like how it works, if only I could disable extending the selection on mouse click.
I understand, I'll take a look after lunch, I'm interested into providing better behaviour for these features. | https://forum.sublimetext.com/t/customizing-ctrl-click/6016/5 | CC-MAIN-2016-07 | refinedweb | 1,041 | 65.01 |
Lots of code below, since it's usually the details that matter. Bit of a long one, but I think it's generic enough to be a good one for the archives. Experimenting with how to extend ActiveRecord. So I picked a little example project of an input scrubber. The purpose for the code is to automatically sanitize attribute data prior to it being saved. Don't focus on the scrubber itself, it's just a arbitrary task for something to play with (obviously it's not a full scale sanitizer). I have the Peepcode plugins book, and I have checked several blogs. I have tinkered for hours now, trying simple Ruby-only steps to understand the various ruby dynamics with extend, include, included, etc. So I'm generally getting the concepts (and working simple examples), but getting lost in the details of how to get this particular idea to work. So let's start with an implementation hard coded inside one model which works just fine. class Example < ActiveRecord::Base @@scrub_attributes = [:author_name, :title, :content] def before_save self.attributes.each do |key,value| if @@scrub_attributes.include?(key.to_sym) scrub_value(value) if !value.nil? end end end def scrub_value(input_value) #================================================================ But, of course, what I really want for the model code API is this: class Example < ActiveRecord::Base scrub_attributes :author_name, :title, :content end So, I started work on a plugin. Below is where I am at so far. I have boiled (or rather logged) the problem down to two details. I have marked a few points of reference to talk about. Have a gander, and I'll meet you at the end of the code... 
#================================================================ Plugin structure: /plugins init.rb /ar_extensions /lib /gw attribute_scrubber.rb init.rb contains ActiveRecord::Base.send(:include, GW::AttributeScrubber) module GW module AttributeScrubber def self.included(base_module) class << base_module @@attributes_to_scrub = [] end base_module.extend ClassMethods base_module.send(:include, InstanceMethods) # <---- (1) end module ClassMethods def scrub_attributes(*attr_names) @@attributes_to_scrub = attr_names end def attributes_to_scrub return @@attributes_to_scrub end end module InstanceMethods def scrub_value(input_value) # <---- (2) def self.append_features(base_module) base_module.before_save do |model| model.attributes.each do |key,value| if @@attributes_to_scrub.include?(key.to_sym) #<--- (3) scrub_value(value) if !value.nil? # <---- (4) end end end end end end #================================================================ So, the problem I am having starts with (1). The PeepCode book says this code should be def self.included(base_module) base_module.extend ClassMethods base_module.include InstanceMethods end But that generates an error `included': private method `include' called Thus, I'm trying base_module.send(:include, InstanceMethods), but that doesn't seem to be working either because down at (4) I get errors that method scrub_value doesn't exist. Before we get to (4) though, it turns out that at (3) @@attributes_to_scrub, which I am expecting to be a class var for ActiveRecord doesn't exist for the before_save method. I discovered that even though the @@attributes_to_scrub contained in scrub_attributes does havethe values I expect, at (3) there are no values. Namespace problem? I hard coded an array in place of the class var at (3), and then I get the error that the method scrub_value doesn't exist, so I know the InstanceMethods are not being included, or at least not in the namespace I'm expecting. 
But I can get some simple Ruby-only code to work this way just fine. So, I'm confused about that. Other than running into the snags at (3) and (4), it all seems to run. Clues appreciated. Thanks. -- gw
on 2009-04-11 10:08
on 2009-04-11 10:57
Greg W. wrote: > Lots of code below, since it's usually the details that matter. Bit of a > long one, but I think it's generic enough to be a good one for the > archives. OK, nevermind -- got it sorted out. required this change: def self.append_features(base_module) base_module.before_create do |model| model.attributes.each do |key,value| if model.class.attributes_to_scrub.include?(key.to_sym) model.scrub_value(value) if !value.nil? end end end end -- gw
on 2009-04-11 12:25
On Apr 11, 7:57 am, Greg W. <removed_email_address@domain.invalid> wrote: > base_module.before_create do |model| > model.attributes.each do |key,value| > if model.class.attributes_to_scrub.include?(key.to_sym) > model.scrub_value(value) if !value.nil? > end > end > end > end > In case you're curious why, that;s because blocks are closures - in particular they remember self, so you were trying to call scrub_value on the module, read that variable form the module etc. Fred
on 2009-04-11 13:24
Frederick C. wrote: > On Apr 11, 7:57�am, Greg W. <removed_email_address@domain.invalid> > wrote: >> � base_module.before_create do |model| >> � � model.attributes.each do |key,value| >> � � � if model.class.attributes_to_scrub.include?(key.to_sym) >> � � � � model.scrub_value(value) if !value.nil? >> � � � end >> � � end >> � end >> end >> > In case you're curious why, that;s because blocks are closures - in > particular they remember self, so you were trying to call scrub_value > on the module, read that variable form the module etc. Thx. Always curious as to why! :-) What was most curious is that the @@var created by the class << block is different than the @@var in the scrub_attributes method. By using model.class.attributes_to_scrub I don't even need the class << statements anymore. So, yeah, I need to do more atomic experiments to understand the scoping of modules & classes. I thought modules didn't hold vars, but I get they hold at least class vars (or something else is going on I didn't pick up on). -- gw | http://www.ruby-forum.com/topic/184014 | CC-MAIN-2018-34 | refinedweb | 942 | 58.79 |
User talk:Dkf
Welcome to Rosetta Code! My name is Mike Mol, and I write C++, Perl and PHP for a living in a small shop catering to specialized industries. I'm the guy on top around here, but, honestly, I do what I can to avoid interfering with the activity of the site. I watch the Recent Changes feed and step in if I feel my input is needed, or if I simply have questions of the folks involved in an existing discussion. Otherwise, I just keep the server running and try to help the site grow. If you have any questions, drop a line in my talk page or the Village Pump. Chances are, Mwned, Paddy, Kevin Reid, Shin, Neville, PauliKL, MBishop, Ce, IanOsgood or any of the other regulars around here will have input for you, and I don't often disagree. If you need something from me specifically, well, I'm on break for the next twenty or so seconds. :) --Short Circuit 00:17, 13 May 2009 (UTC)
[edit] F# RCSNUSP page
I implemented the bloated version and put it in, but the original comments had been placed under a different header and the little box that says "RCSNUSP/F Sharp is an implementation of SNUSP" is covering the edit button for the original comments so I can't change them. I took out the "program" header thinking that maybe that was the problem, but now both edit buttons are covered by their own little box which keeps me from doing any further editing. I don't really have anything more to edit here, but can you maybe see if you can do something about it for the future? Thanks and sorry to bother you. [Preceding comment was made on 14:20, 29 December 2009 by Darrellp]
- All fixed up. Just had to move the <br clear=all> line up a little. I took the opportunity to add another heading for the Bloated impl too. –Donal Fellows 18:04, 29 December 2009 (UTC)
[edit] Ok but
Ok, but this page exists for a reason, and now Object Serialization is there since I've added needs-review (and now I see someone marked it incorrect with an explanation); both enough to point people at the page where things should be fixed. If you want a special "note" for you, you can watch the page... I've a lot of watched pages, there just to remind me that I've something to do on them, even though still I've not done that something. I believe the categories should be left alone (unless you want to add specific info about the Tcl language, not about a specific example). Anyway this is just my opinion. --ShinTakezou 11:14, 14 May 2009 (UTC)
- There's a pool of people doing things for Tcl; the reminder is not just for me, but for others too. (Don't worry so much about transient things, OK?) —Dkf 12:07, 14 May 2009 (UTC)
- Isn't this enough? (It already automatically contained the task, even though the page wasn't edited; now it is.) I am not worried, tried just to keep things clean and coherent (even though it is not my business, maybe). --ShinTakezou 13:13, 14 May 2009 (UTC)
- It's all gone now anyway. —Dkf 12:37, 15 May 2009 (UTC)
- That's a good point, though; that should probably get linked to in the language template. Take a gander at the new beta language template staging ground, and see what you can do. Also see if you can add it to the header template for the unimplemented in X pages. --Short Circuit 22:24, 15 May 2009 (UTC)
[edit] math
Fixed. --Michael Mol 05:49, 13 March 2010 (UTC)
[edit] Language Comparison Table
Hi Donal. Could you have a look at the Language Comparison Table? I had a crack at it a while ago, but I'm sure it would benefit from your perusal. Cheers. --glennj 18:14, 12 June 2009 (UTC)
If you do make changes, could you make corresponding changes to the language pages using the parameters in the language template? Thanks. --Mwn3d 18:31, 12 June 2009 (UTC)
- I already was maintaining the parameters in the language template, and the LCT only needed a minor correction (nothing which I thought needed a real change elsewhere). There is a discussion/justification of the categorization parameters on Category talk:Tcl...
[edit] Overlapping divs
If you change to the rosetta theme (which I thought was supposed to be default by now) a lot (if not all) of the overlapping div box problems are resolved. --Mwn3d 21:46, 17 June 2009 (UTC)
- I merely use the default theme, like most visitors to the site will. —Donal Fellows 21:48, 17 June 2009 (UTC)
- I've stopped seriously thinking about the Rosetta Theme myself. I like the colors, layout and text styles, but it has compatibility issues with IE6 that I couldn't get around last time I had time to spend on it. (IE6 is my current oldest IE browser target. Sadly, I expect it to continue to be fairly common for several more years. It currently accounts for 34% of IE visits, and IE visits account for 22% of overall visits. Oddly enough, IE6 is the version of IE where the user is most likely to go on and use the site, and the most likely version of IE where the user will go on and edit a page.) Whatever the bit is that fixes the overlapping divs, that should go in MediaWiki:Common.css. --Short Circuit 03:56, 18 June 2009 (UTC)
[edit] RCHQ9+/Tcl garbled?
It appears your cut and paste didn't translate well -- the switch statement has gotten garbled. --glennj 20:34, 22 July 2009 (UTC)
- nevermind, I see the talk page there.
[edit] Ethiopian Multiplication reformat
Liked it. Thanks! --Paddy3118 11:26, 23 July 2009 (UTC)
[edit] Your discussion about J
"it's clear that excessive terseness is an issue with J code"
This can be an issue with any kind of code but I'd like to give you an e-high-five for recognizing it with J and pointing it out. --Mwn3d 12:42, 31 August 2009 (UTC)
- I believe people are mis-identifying a vocabulary problem here. People have problems reading J, people see J is terse, people think their problem is J's terseness. But they would have the same problems, or worse problems, if J were not terse. (Given any J expression, expanding it into a more verbose expression is a trivial exercise. But you still need to know what the words mean, no matter how they are spelled.) Meanwhile, teaching J to non-programmers is easy, but teaching J to programmers can be difficult because programmers "know" things they learned from other languages -- things which usually turn out to be mistakes in J (for example: how to implement algorithms efficiently using loops or recursion -- that said, programmers with some background in SQL might be more comfortable with this issue). Rdm 21:10, 31 August 2009 (UTC)
- My aversion to terseness comes from working on supporting codes for multiple years. It's correct that the natural level of terseness varies with language though. —Donal Fellows 10:56, 1 September 2009 (UTC)
- (Need input from all you folks watching the recent changes feed here) I have noticed that vocabulary makes J difficult for me to understand. Is J's vocabulary such that it could be compared or related to other languages on a word by word basis, such as what one might find in a dictionary equating words in one language with words in another? Even if there aren't direct word-word equations, can a word be described as a phrase in another language? I could see such a thing as being equivalent to a study guide, key or legend. --Short Circuit 14:47, 1 September 2009 (UTC)
- There is the Vocabulary page of the J Dictionary, however the J Dictionary is not very enlightening for beginners - understandable given its primary purpose is to serve as a specification for the language implementation, rather than a study guide. Instead I'd recommend the book J for C programmers (available free online or as hard copy) as a well written intro to J and array languages, especially for those coming from scalar-oriented languages, not just C. Don't skip the Foreword!!. If you're wanting a Quick Reference, then I'd suggest the J Reference Card. --Tikkanz 21:53, 1 September 2009 (UTC)
- The more I write code and work with other people's, the more I find that terseness isn't a virtue. Clarity of expression is much better, especially since it's usually the case that clear code is easier for compilers/optimizers to do something good with. The other thing I've learned (and keep learning) is that getting over-clever is not a good idea if it just ends up defeating the hardware's cache management; the core of the Tcl string map command is very stupidly implemented, but switching to a fancy Rabin-Karp implementation was actually a slowdown in practice because of the overhead of building the matchers and the difference in memory access patterns. Doing it the stupid way worked best because it allowed the hardware L1 cache predictor in modern CPUs to function most effectively; a real eye-opener that indicates why it's always worth measuring performance rather than guessing!
- I suppose it all comes down to the principle of always trying the simplest, clearest thing that could possibly work, as that's the best option in such a huge number of cases. And there's a real tendency of many programmers (yes, myself included) to lose sight of these things. Which brings us back to J. They've clearly chosen to go down the short/very expressive path, and the language will attract a number of very smart people who will do amazing things with it in very little code. But they'll only ever attract the very smart, and the very smart are known for being fickle. The language is doomed to remain esoteric, rather like APL. —Donal Fellows 14:27, 31 August 2009 (UTC)
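To make the "simplest thing" point above concrete, here is a minimal sketch of that kind of single left-to-right scan in Python. This is an illustration only: the function name and the pair-list representation are mine, and this is not Tcl's actual C implementation, just the same shape of algorithm.

```python
def string_map(mapping, s):
    """Replace substrings in s using ordered (old, new) pairs.

    At each position the first matching 'old' wins, mimicking the
    semantics of Tcl's [string map]: one left-to-right pass, no
    preprocessing, no rescanning of already-produced output.
    """
    out = []
    i = 0
    while i < len(s):
        for old, new in mapping:
            if old and s.startswith(old, i):
                out.append(new)
                i += len(old)
                break
        else:
            # No pattern matched here: copy one character and move on.
            out.append(s[i])
            i += 1
    return "".join(out)

print(string_map([("ab", "X"), ("a", "Y")], "abca"))  # XcY
```

The interesting part is what the sketch does not do: there is no matcher to build and no rolling hash to maintain, just a tight sequential scan, which is exactly the memory-access pattern that hardware prefetchers and L1 caches reward.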
- I think terseness is a virtue unless it obscures clarity. As you point out, it is usually best to make things simple. If you can express a concept in a couple of words rather than a couple of sentences, it is easier to digest and quickly get a feel for what is important. Then you can start investigating the "but what if" questions. It is much quicker and easier to keep focused on answering those questions if getting the answer doesn't require you to write half a page of code first or think too much about the details of how the computer might implement it. One of the comments frequently made by people who have learned J and APL is how it helps them think differently about problems even when coding in scalar languages. You may find the following papers interesting: Notation as a Tool of Thought (Ken Iverson's Turing Award Lecture), Language as an Intellectual Tool: From Hieroglyphics to APL (4MB).
- Of course as has been pointed out, you first have to learn the language. In the case of J (and APL), "learning the language" means just that (vocab, grammar, etc.), not just syntax. That doesn't mean it takes forever to gain any proficiency - just like any human language, you can usually get a lot of things done even with a limited vocabulary. --Tikkanz 23:13, 31 August 2009 (UTC)
- Terseness has nothing to do with readability or understandability. Chinese ideograms provide one symbol for each complete word in the language, much like J or APL. Chinese text is extremely "terse" when compared to English, but I'm sure if you told a native Chinese speaker that their language is harder to understand than English because it is too terse, they would disagree.
- Readability/understandability of any text is simply a function of familiarity, not terseness. The reason that many common programming languages are "readable" to many programmers is that one language will often use constructs similar to those of other languages with which the reader is already familiar. The reason that J seems hard to read is because the reader doesn't understand the language, not because the language uses fewer symbols. J sacrificed similarity with scalar languages for the higher goal of a simple, consistent, precise, executable notation.
- A similar argument can be made for comments. If the reader is very familiar with a specific programming language, well-written code in that language will self-describe its processes to that reader. Of course, code can be written to disguise its function, and poorly-written code can still be difficult to read. Comments are still useful for readers who are not proficient with a specific language, but who must maintain that unfamiliar code. -- Teledon 1:46 1 September 2009
- One of the key purposes of this site is to show to non-experts in a particular language how to use it to do tasks that they may understand from their knowledge of other languages. This suggests that using longer names and more comments than you otherwise would is likely to be a good plan... —Donal Fellows 10:56, 1 September 2009 (UTC)
- I think Teledon has a point. J may well be great, but because it is so unlike the C-, Lisp-, and Forth-based languages that I know more about, it remains impenetrable to me, and probably to most other RC readers.
- Even so, I think the J guys should write good, maintainable, idiomatic J and maybe help out us poor non-J readers by answering questions on the talk pages?
- Like one of these? Category:Maintenance ... There are templates that put pages in there, but I don't recall offhand which put pages where. --Short Circuit 21:34, 2 September 2009 (UTC)
- At one time I was in the Texas Instruments camp against the HP Reverse-Polish Notation calculators. Then I learnt all about Reverse Polish when writing an interpreter, revisited my earlier conclusions on RPN, and knew that they were rubbish. Later, before I learnt a Lisp-like language, I was careful not to reject their claims, and I did learn what made it so good to program in at the time. But now I prefer Python for most things, and am being 'tickled' by Haskell/OCaml/D/Oz. --Paddy3118 11:07, 1 September 2009 (UTC)
- Writing code that is both idiomatic and maintainable is a good target. (Indeed, that's the case for any language but some have more of a culture of it than others.) Which leaves a suggested thing for the J community to do: go through the existing solutions in J and evaluate whether they are good style (from both technical and pedagogic perspectives, of course); I've already been doing this with the Tcl solutions, but I'm not in a position to be able to do it for every language. There's only a fixed number of hours in the day... ☺ —Donal Fellows 12:28, 1 September 2009 (UTC)
- J symbols can easily be assigned English words, just as mathematical symbols can be assigned words.
- For example, take the mathematical equation x = y ^ 2 + 3 * z
- One can write this in pure English as: "x equals y squared plus three times z"
- However, this is not typically done, because the result is more verbose than necessary.
- This is even more true with J. Take the J expression to calculate the average of a group of numbers:
- avg =. +/ % #
- avg 45 66 35 86 24
- 51.2
- avg i. 100 NB. Find the average of the number sequence from 0 to 99
- 49.5
-
- For "clarity", one could assign each J symbol or expression an English name:
- sum_up_all_numbers =. +/
- divide_by =. %
- count_the_number_of_numbers =. #
-
- Now we can write our J function in English words:
- avg1 =. sum_up_all_numbers divide_by count_the_number_of_numbers
- avg1 45 66 35 86 24
- 51.2
- avg1 i. 100 NB. Find the average of the number sequence from 0 to 99
- 49.5
- For someone not familiar with J, the English word approach may be easier to read. However, a typical J programmer would never do this, because it would require too much typing. Anyone marginally familiar with J symbols would immediately grasp the original code much quicker than the English translation. The correct form for J programs is to use the primitive J symbols, which will make the resulting code terse. For J programmers the terse form is much easier to read than an English-substitute version.
- The issue here is: Just how far should the J programmer go towards expanding an elegant J expression into a verbose English translation, in order to help newbies understand what is going on? In my opinion, not very far. The whole purpose of an efficient notation is to allow a complex algorithm to be defined in a simple, concise way that can be scanned and understood in a single glance. It is true that this kind of proficiency in reading J code is not obtained overnight. However when one becomes proficient with J, you discover that you can deal with complex processes as a whole, that were not possible before you learned J. The notation you use to describe a problem shapes the way you think about the solution. J will change the way you think, and change how you approach the solution to problems. Whether this is good or bad depends on your viewpoint. Teledon 1:46 10 September 2009 (Skip Cave) Ref. Notation as a Tool of Thought (Iverson) - [1]
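For anyone who reads neither form of the J above, here is a rough Python rendering of the same average computation. This is a sketch for side-by-side comparison only; the name avg is the only thing carried over from the J version.

```python
def avg(numbers):
    # Sum up all numbers and divide by the count of numbers:
    # the same shape as the J fork  +/ % #
    numbers = list(numbers)  # allow iterables such as range()
    return sum(numbers) / len(numbers)

print(avg([45, 66, 35, 86, 24]))  # 51.2
print(avg(range(100)))            # 49.5
```

The conceptual load is similar in both languages (sum, divide, count); the visible difference between the versions is mostly spelling rather than structure.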
- It's all your (collective) call. I'm just of the opinion that for the purposes of this site it is worth being more verbose than you otherwise would. Increased comment length is one of the best ways of doing this IMO (not that this is a specific comment). The other thing to watch for is where code is making use of assumptions that are obvious to experts in the implementation language, but which are mysterious to everyone else. Again, it's a universal problem but the more a language has implicit things, the harder it is to grasp a particular solution at a glance. (Of course, you and I probably differ over the definition of the correct level of implicit-ness, but that's just both of our opinions differing. I expect neither of us to change.) —Donal Fellows 08:20, 10 September 2009 (UTC)
[edit] GeSHi and Tcl
FYI, our Tcl langfile for GeSHi has issues with langcheck. Would you be interested in putting together a better language file? (This comment was made on 06:29, 13 December 2009 by User:Short Circuit)
- I don't grasp what the nature of the failure is, but there are a number of things that ought to be done anyway; the set of useful things to highlight has evolved since the Tcl langfile was written and there's at least one significant issue that I know of (namespaced names highlight wrongly). Alas, I don't know how long it will take me though. –Donal Fellows 19:21, 13 December 2009 (UTC)
- Just whip it together using AutoGeSHi and send the files my way. I'll drop them in for live testing, and we can iterate through it. :) --Michael Mol 21:36, 13 December 2009 (UTC)
- The format of the file isn't the problem. Working out what exactly to put in there is. There are a number of syntactic tricky bits; cases where the language parser works a bit differently to most languages. The most noticeable ones are that the language has no keywords, and comments are handled during parsing, not lexing. Moreover, I've a lot of other things on right now at both work and home; no blame, but that's how it is. –Donal Fellows 23:11, 13 December 2009 (UTC)
- You can always drop me a note at [email protected] for some more information on what GeSHi expects in the langfile or how to do particular features. Depending on the issue I even might help a bit with putting together the RegExps and some other complex issues of the language file. As I don't know Tcl I won't be able to fix issues in the langfile though - I just can handle updates and testing things. In addition it's always good to get some example sources and a brief description of how it should look. --BenBE 03:12, 3 February 2010 (UTC)
[edit] User pages
Please don't edit user pages directly, even if they haven't created it yet. For something like that, I'd leave a friendly greeting and some advice in their user talk page. --Michael Mol 16:30, 23 December 2009 (UTC)
[edit] Question on Nan
One of the wonderful (or horrible - your call) properties of Tcl is that it is type-free. Or at least that's the way I understand it: everything is just strings of bytes; code, data, numbers, processes, text - at the bottom it's all just strings of bytes. Whether something is a "number" is an interpretation; "set a 5" creates a length-1 string containing the byte that can be interpreted as the ASCII symbol "5" or the number 5 (or any other way). So it would seem utterly natural to me to
(Tcl) 1 % package require Tcl 8.5 8.5.8 (Tcl) 2 % set nan NaN NaN (Tcl) 3 % expr {$nan+0} can't use non-numeric floating-point value as operand of "+"
as opposed to, say
(Tcl) 4 % set nan Booger Booger (Tcl) 5 % expr {$nan+0} can't use non-numeric string as operand of "+"
Because "NaN" can be interpreted as a number, while "Booger" has no such interpretation. (My 8-year old disagrees with this, by the way). I would maintain that "set nan NaN" does not only have the exact same outcome (generates the same result) as "binary scan [binary format q nan] q nan" but that it is actually more idiomatic, more Tcl'ish, more the way of "Tcl does what you expect".[Edit: sorry about that, wasn't logged in. That was me. :-) Sgeier 01:34, 20 July 2010 (UTC)]
- Now that I'm not exhausted, I agree. :-) –Donal Fellows 06:03, 20 July 2010 (UTC)
[edit] Chime in on Hough Transform
Could you wander over to Talk:Hough_transform and chime in? I notice you're the originator of the task, and there's been a lot of interest in it lately. --Michael Mol 21:07, 9 August 2010 (UTC)
[edit] Filling out Rosetta Code:Add a Task
Could I get you, Rdm and Paddy3118 to give Rosetta Code:Add a Task a thorough treatment of examination, debate and filling? Of the cross section of current users, I think you three are probably the most likely to be familiar with the general pattern and concerns of creating tasks. I added a bunch of my own thoughts in HTML comments in-line, and left a note in the talk page. --Michael Mol 17:15, 21 September 2010 (UTC)
[edit] Inspirations from {{tcllib}}
Some pages for your perusal: {{uses from}}, {{Library/Body}}, {{component}}, {{component/Body}},Library/Qt, Library/Qt/QApplication. Does the arrangement handle your thirst for semantic browsing? (I still hope to get versioning info in there eventually, but it's not a particularly high priority.) (Also, I've noticed some weirdness with stale query results, but recommitting and/or refreshing relevant templates seems to fix it.) --Michael Mol 04:02, 20 November 2010 (UTC)
[edit] Icon and Unicon coalesce
Please don't remove the Unicon header and replace it with Works with. While well intentioned it causes problems. I'd already started to address this cleanup and documented it today with Category_talk:Unicon#Works_With and Category_talk:Unicon#Template.2F_How_to_write_up_Icon_and_Unicon_markup.
It may only have been the one, but if you know of others please let me know. They'll get cleaned up when I retrofit all of this. --Dgamey 16:50, 31 December 2010 (UTC)
- Alas, there's quite a few places where I've done it, far more than I stand any chance of remembering. (You'll have to go through and check all the tasks implemented by Icon. Sorry. Can't be more than a few hundred… :-)) However, I'd encourage working on example showing the difference between Icon and Unicon; my impression has been of a lot of noise (by comparison with neat use of the {{works with}} template) and no distinctiveness. Is Unicon an implementation or dialect of Icon? (If it's the latter, we might need to reconsider how RC handles dialects; up to now, it's been usual to treat them as implementations, but that's not exactly correct when there's only a partial distinction between the language and the implementation of that language, which is all too common. That'd be something for the Village Pump…) –Donal Fellows 18:03, 31 December 2010 (UTC)
- A dialect is probably closest. Unicon is a super-set of most of Icon. There are a frew Iconisms that won't work the same way in Unicon. I recently revised the way I want to present the two (described in the Unicon talk pages as Category_talk:Unicon#How_to_reasonably_handle_Icon_.v._Union_similarities_and_differences. It may work for other close dialects as well. --Dgamey 17:21, 2 January 2011 (UTC)
[edit] Blocking
Please avoid blocking by IP for the time being; right now, all inbound requests should be coming from Cloudflare IPs, so blocking one spammer's IP address risks blocking an entire subcontinent. I'm slowly working on resolving the issue. --Michael Mol 13:12, 13 March 2013 (UTC)
- OK, thanks for the heads-up. –Donal Fellows 09:02, 14 March 2013 (UTC)
- Sorry, didn't notice you'd turned off autoblocking, and didn't check here first... --TimToady 23:17, 14 March 2013 (UTC)
- Don't know about autoblocking; I just take time to uncheck the box when blocking an undesirable. Adds a second or so to the time to block. (Just wish I could block the air supply to the human spammers doing this sort of thing. They spew their rubbish on many different websites, and in some cases even complain to admins after being blocked, assuming that admins will never take the time to see whether spammers are being scum or not. It's just time consuming and fills the logs with crap.) –Donal Fellows 11:45, 15 March 2013 (UTC) | http://rosettacode.org/wiki/User_talk:Dkf | CC-MAIN-2013-20 | refinedweb | 4,446 | 67.59 |
Kevin diagnosed and found this issue. I'm just filing for him. In bug 579488 (typing in twitter) nsWindow::DispatchStarvedPaints forces us to paint after every character typed. The paints take a long time. This forces us to paint more often, instead of coalescing paints, making things worse then they could be. We should be smarter about dispatching starved paints. There are some thoughts about this (for Linux) in bug 602303.
Bugs that we don't want to regress: bug 601547, bug 592093, bug 592954.
Created attachment 505712 [details] [diff] [review] Throttle dispatching of starved paints so that time is allowed for processing input events I tried throttling the starved paint logic so that it will only dispatch a starved paint synchronously if it's been at least 50 milliseconds since the last paint completed. This drastically improves the responsiveness of the twitter input field for me without compromising the responsiveness of the sinatra network graph on GitHub (from one of the previously mentioned bug). This seems like a decent way to solve the problem.
I like this! + // The point in time at which the last paint completed. We use this to avoid + // painting too rapidly in response to frequent input events. + mozilla::TimeStamp mLastPaintEndTime; add "typedef mozilla::TimeStamp TimeStamp;" to the top of nsWindow to avoid mozilla:: prefixes.
I'm not sure if this is too risky or not for FF4, but if we want it for FF4 we should get it landed soon so it can get testing.
I'd approve a final patch.
Created attachment 506605 [details] [diff] [review] Throttle dispatching of starved paints so that time is allowed for processing input events (2) Patch revised based on feedback and tested locally. Spent a bit trying to come up with a good automated test but could not come up with one that would perform well without a lot of effort. Andreas suggested that a manual test might be sufficient here. My manual test case for this is, roughly: 1. Load up new twitter and expand the tweet input field so it is around 5 lines tall and horizontally covers the right pane of their UI. This makes paints expensive enough to reproduce the issue. 2. Rapidly spam keystrokes into the input field (holding down a key is not sufficient) for around 15 seconds, then stop. Without the patch, you should observe Firefox slowly processing each keystroke for multiple seconds after you stop typing, until it eventually becomes responsive again. With the patch, it should become responsive within a second of your last keystroke.
--- a/widget/src/windows/nsWindow.h Fri Jan 21 04:25:18 2011 +0100 +++ b/widget/src/windows/nsWindow.h Mon Jan 24 17:54:29 2011 -0800 @@ -60,7 +60,12 @@ #include "gfxWindowsSurface.h" #include "nsWindowDbg.h" #include "cairo.h" -#include "nsITimer.h" +#include "nsITimer.h" +#include "mozilla/TimeStamp.h" + +typedef mozilla::TimeStamp TimeStamp; +typedef mozilla::TimeDuration TimeDuration; Sorry, we shouldn't pollute the global namespace like this. Put these typedefs inside the nsWindow class.
Created attachment 506626 [details] [diff] [review] Throttle dispatching of starved paints so that time is allowed for processing input events (3) Ah, my mistake. Moved the typedefs into the correct place.
I had to make one small change because we can call DispatchPendingEvents before we've had a paint, but otherwise green on try server. I'll push this later today.
Backed this out to fix bug 635465
So in order to fix this again we need to fix bug 635465 in a better way, perhaps bug 635465, comment 12.
Comment on attachment 506626 [details] [diff] [review] Throttle dispatching of starved paints so that time is allowed for processing input events (3) (removing the patch approval for bookeeping)
I'm not comfortable landing this on cedar...
This shouldn't land. It was backed out for causing a regression and we haven't figured out how to fix the regression yet. | https://bugzilla.mozilla.org/show_bug.cgi?id=627628 | CC-MAIN-2017-34 | refinedweb | 652 | 65.42 |
Re: Re: chr() .Net equivalent
- From: Just_a_fan@xxxxxxxx
- Date: Thu, 10 Apr 2008 11:58:44 -0700
This is WELL SAID! You show great depth of experience. I hope folks
will take this as gospel!
Also, modifying make the new code unlike the old and a direct
comparison, later, to see if there was a simple conversion "typo" is out
the window, too. Make it in two (and a half) passes, as you mention.
1: Convert,
1.5 Make notes on anything to look at later
2: Fix on the new platform
Then as a new project, look at the notes from the conversion and the
output from the newly functioning program and rationalize any
differences.
A new platform may introduce differences of its own simply due to
changes in the underlying routines, even those of the same name which
have been modified by M$ or some other vendor. Then you are shooting at
a VERY moving target. You have the changes made during conversion and
changes on the new platform and/or called (purchased) routines and one
is not sure which change caused the different output. Then one might
have to back out the change, shoot the difference then put the change
back in causing triple work.
This should be in stone!
Mike
On Sun, 30 Mar 2008 23:07:13 +0100, in
microsoft.public.dotnet.languages.vb "\(O\)enone" <oenone@xxxxxxxxxxx>
wrote:
David Griffiths wrote:
I read many newsgroup articles that say it's better to (or more
correct) to write code without the VB namespace using just the
framework (if that's the correct analogy). Is there any advantage of
writing the code without the VB namespace....?
No.
Particularly if you're upgrading VB6 projects.
One of the things I've become very aware of over the last few years of
upgrading VB6 code to .NET is that it's very easy to "fix" things along the
way, tidying things up and generally improving things. While this seems like
a good idea, it's actually a major opportunity to add subtle bugs to your
code. As a result I now strongly advise my development team to simply
upgrade the code "as is", and worry about fixing things and tidying up
later.
If you start dropping Visual Basic functionality in order to simply avoid
referencing Microsoft.VisualBasic.dll, you'll break your code somewhere
along the line, it's almost inevitable. And for what? There's no benefit to
removing the reference to the DLL, it's a part of the framework so go ahead
and use it. The whole point of the .NET Framework is to provide rich
functionality to applications, and if you decide arbitrarily to ignore a
part of it then it'll just make your coding more difficult.
.
- Prev by Date: RE: One project running second project
- Next by Date: Vb.Net 2005 with ClickOnce and Crystal Reports
- Previous by thread: Re: chr() .Net equivalent
- Next by thread: Office 2007 typelib
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vb/2008-04/msg00596.html | crawl-002 | refinedweb | 499 | 72.46 |
- Name
- Synopsis
- Description
- Methods
- Function
- Example
- Variables
- Data Types
- Limitations
- Reference
- Author
Name
Teradata::SQL - Perl interface to Teradata SQL
Synopsis
use Teradata::SQL; use Teradata::SQL qw(:all); # Exports variables $dbh = Teradata::SQL::connect(logonstring [,tranmode]); $dbh->execute($request); $rh = $dbh->open($request); $rh->fetchrow_list(); $rh->close(); $dbh->disconnect; # And others. See below.
Description
Teradata::SQL is a Perl interface to Teradata SQL. It does not attempt to be a complete interface to Teradata -- for instance, it does not allow asynchronous requests or PM/API connections -- but it should be sufficient for many applications.
Methods
This is an object-oriented module; no methods are exported by default. The connect method must be called with its full name; other methods are called with object handles.
Most methods return a true value when they succeed and FALSE upon failure. The fetch methods, however, return the data to be fetched. If there is no row to be fetched, they return an empty list.
- Teradata::SQL::connect LOGONSTRING [CHARSET] [TRANMODE]
Connects to Teradata. The first argument is a standard Teradata logon string in the form "[server/]user,password[,'account']". The second argument (optional) is the client character set for the session, 'ASCII' by default. The most common character sets besides ASCII are 'UTF8' and 'UTF16'. The third argument (optional) is the session transaction mode, either 'BTET' (the default) or 'ANSI'.
This method returns a connection handle that must be used for future requests. If the connection fails, undef will be returned. Many connections (sessions) can be active at a time.
- disconnect
Connection method. Disconnects from Teradata. This method must be applied to an active connection handle.
- execute REQUEST
Connection method. Executes a single request without input variables. The argument is the SQL request to be run. It can be a multi-statement request, i.e. contain multiple statements separated by semicolons.
This method should be used only when the request does not return data. If data is to be returned, use open instead.
- open REQUEST
Connection method. Opens a request for execution. The argument is the SQL request to be prepared. It can be a multi-statement request, i.e. contain multiple statements separated by semicolons. The WITH clause (to add subtotals and totals) is not supported.
You can have as many requests open at a time as you wish, but be aware that each one allocates additional memory.
The request cannot include parameter markers ('?' in the place of variables or literals). If you need parameter markers, use prepare instead.
open returns a request handle or, if the open fails, undef.
After fetching all the rows, be sure to close() the cursor.
- prepare REQUEST
Connection method. Opens a request for execution. The arguments are the same as for open, and prepare also returns a request handle or, if the prepare fails, undef. The difference is that a prepared request can include parameter markers ('?' in the place of variables or literals).
- executep [ARGS]
Request method. Executes the prepared request. If the request includes parameter markers, arguments can be supplied to take the place of the markers. For more information, see "Data Types".
This method should be used only when the request does not return data. If data is to be returned, use openp instead.
- openp [ARGS]
Request method. Executes the prepared request and opens a cursor to contain the results. If the request includes parameter markers, arguments can be supplied to take the place of the markers.
After fetching all the rows, be sure to close() the cursor.
- fetchrow_list
Request method. Returns the next row from the open cursor in list form, or an empty list if no more rows are available; e.g.:
@row = $rh->fetchrow_list();
This works with cursors opened by open() or by openp().
- fetchrow_hash
Request method. Returns the next row from the open cursor in hash form, or an empty hash if no more rows are available; e.g.:
%row = $rh->fetchrow_hash();
This works with cursors opened by open() or by openp(). The hash entries are those specified by ColumnName, not ColumnTitle. See the CLIv2 Reference, s.v. "PrepInfo Parcel".
Request method. Closes the cursor. This should always be called after opening and fetching the results.
- dec_digits N
Connection method. Sets the maximum number of decimal digits to be returned to N. This is similar to the DECIMALDIGITS command in BTEQ. See, however, the section "Data Types" for notes about large decimal values.
- abort
Connection method. Aborts the currently active request for the session. Note that this is an asynchronous ABORT (like the .ABORT command in BTEQ), not a ROLLBACK. Ordinarily it would have to be called from a signal handler; for example:
sub abort_req { $dbh->abort; print "Request has been aborted.\n"; $dbh->disconnect; exit; } $SIG{'INT'} = \&abort_req;
Function
- server_info TDP ITEM
This is an ordinary function, not a method. It is independent of connections and requests. It can be exported into your namespace.
This is a partial implementation of the DBCHQE call, which queries various attributes of the server. You need not be connected to the server to run this function. The first argument is the server name or "TDP ID"; for instance, for "mrbigcop1", it would be "mrbig". The second argument is the number of the item to be queried; these numbers are given in the CLIv2 manual. The one of most interest is probably 34 (QEPIDBR), which returns the DBS release and version information as a 2-element list.
Items that require a connection (session) ID are not implemented, and not all session-independent items are implemented. If an item is not implemented, it will return undef.
Example
# Connect and get a database handle. $dbh = Teradata::SQL::connect("mrbig/user,password") or die "Could not connect"; # Prepare a request; read the results. $rh = $dbh->open("sel * from edw.employees"); while (@emp_row = $rh->fetchrow_list) { print "employee data: @emp_row\n"; } $rh->close; # # Prepare, then insert some rows. $rh = $dbh->prepare("insert into edw.departments (?,?,?,?)"); while (<DATA>) { chomp; @incoming = split; $rh->executep(@incoming); } # All finished. $dbh->disconnect; # Note: $dbh, not $rh.
For more examples, see test.pl.
Variables
- $Teradata::SQL::activcount
Activity count, i.e. the number of rows affected by the last SQL operation. This variable can be exported to your namespace.
- $Teradata::SQL::errorcode
The Teradata error code from the last SQL operation. This variable can be exported.
- $Teradata::SQL::errormsg
The Teradata error message from the last SQL operation. This variable can be exported.
These three variables can be exported to your namespace all at once by this means:
use Teradata::SQL qw(:all);
- $Teradata::SQL::msglevel
By default, Teradata::SQL will display error codes and messages from Teradata on stderr. Setting this variable to 0 will suppress these messages. The default value is 1. The module will honor changes to the value of this variable at any point during your program.
Data Types
Perl uses only three data types: integers, double-precision floating point, and byte strings. The data returned from Teradata will be converted to one of these types and will look like ordinary Perl values.
Dates are returned in either integer form (e.g., 1020815 for 15 August 2002) or ANSI character form (e.g., '2002-08-15'), depending on the default for your system, the session characteristics, and whether you have issued a SET SESSION DATEFORM request. If you want dates returned in some other form, you must explicitly cast them, e.g. like this:
cast(cast(sale_dt as format 'MM/DD/YYYY') as char(10))
By default, times and timestamps are returned as character strings in their default formats. Again, you can cast them as you wish in your select request.
A word of caution is in order about decimal fields and bigints. Decimal fields with a precision of 9 or lower will be converted to doubles (numeric) and will behave more or less as expected, with the usual caveats about floating-point arithmetic. Decimal fields with a higher precision (10-18 digits), as well as bigints, will be converted to character strings. This has the advantage of preserving their full precision, but it means that Perl will not treat them as numeric. To convert them to numeric fields, you can add 0 to them, but values with 16 or more significant digits will lose precision. You have been warned!
Decimal fields of more than 18 digits are not supported. If they are returned from the database, the module will issue a warning and substitute a 0 in their place. (This warning will not appear if msglevel is 0.) You should instead ask the database to convert them to strings, e.g. like this:
cast(cast(large_dec as format '-(30)9.99') as varchar(40))
Arguments passed to Teradata via openp and executep will be passed in Perl internal form (integer, double, or byte string). You can pass undefs to become nulls in the database, but there are limitations. Since all undefs look the same to the module, it coerces them all to integers. This works for most data types, but Teradata will not allow integer nulls to be placed in BYTE, TIME, or TIMESTAMP fields. At present, the only workaround for this situation would be to eschew parameter markers for the nulls and hard-code the nulls to be of the type you want. In other words, instead of this:
$rh = $dbh->prepare("insert into funkytown values (?,?,?)"); $rh->executep(1, "James Brown", undef);
you could code this:
$rh = $dbh->prepare("insert into funkytown values (?,?, cast(null as timestamp(0)))"); $rh->executep(1, "James Brown");
Limitations
The maximum length of a request to be prepared is 64 Kbytes. The maximum length of data to be returned is 65400 bytes. These limits cannot be relaxed without rewriting the module.
The maximum number of fields selected or returned by any request is 520. Likewise, you can pass no more than 520 arguments to openp or executep. If these limitations are too strict, you can ask your Perl administrator to change the value of MAX_FIELDS in the module's header file and recompile the module.
Multiple sessions are supported. This feature would be most useful when connecting to multiple servers; multiple sessions on a single server are of little use without support for asynchronous requests.
CLI applications can use a different client character set for each request, but this module sets it only at the session level.
The following Teradata features are not supported:
Partitions other than DBC/SQL (e.g. MONITOR or MLOAD) Asynchronous requests WITH clause LOB data types CHECKPOINT DESCRIBE ECHO POSITION REWIND
If you would like some features added, write to the author at the address shown below. No guarantees!
Reference
Teradata Call-Level Interface Version 2 Reference for Network-Attached Systems, B035-2418-096A (Sep. 2006).
Author
Geoffrey Rommel, GROMMEL [at] cpan [dot] org. | https://metacpan.org/pod/release/GROMMEL/Teradata-SQL-0.05/SQL.pm | CC-MAIN-2020-50 | refinedweb | 1,795 | 59.19 |
I would like to make a alphabetical list for an application similar to an excel worksheet.
A user would input number of cells and I would like to generate list.
For example a user needs 54 cells. Then I would generate
'a','b','c',...,'z','aa','ab','ac',...,'az', 'ba','bb'
I can generate the list from [ref]
from string import ascii_lowercase
L = list(ascii_lowercase)
Use
itertools.product.
from string import ascii_lowercase import itertools def iter_all_strings(): size = 1 while True: for s in itertools.product(ascii_lowercase, repeat=size): yield "".join(s) size +=1 for s in iter_all_strings(): print s if s == 'bb': break
Result:
a b c d e ... y z aa ab ac ... ay az ba bb
This has the added benefit of going well beyond two-letter combinations. If you need a million strings, it will happily give you three and four and five letter strings.
Bonus style tip: if you don't like having an explicit
break inside the bottom loop, you can use
islice to make the loop terminate on its own:
for s in itertools.islice(iter_all_strings(), 54): print s | https://codedump.io/share/I5KWT7wMwEww/1/how-to-make-a-continuous-alphabetic-list-python-from-a-z-then-from-aa-ab-ac-etc | CC-MAIN-2017-09 | refinedweb | 185 | 66.94 |
Responsive Applications
This article covers multi-threaded GTK# programming as well as how to keep your GTK# application responsive.
Background
The Gtk# toolkit is an event-based system. At the core of the Gtk.Application.Run method there is a loop like this:
while (Gtk.Application.EventsPending ())
    Gtk.Application.RunIteration ();
The above loop basically consumes events until the method Gtk.Application.Quit() is invoked.
Events can come from a number of sources:
- Keyboard (as the user types information).
- Mouse (as the user moves the mouse or clicks on information).
- Synthesized events (pressing the `enter’ key and mapping that to an “activated” event on the widget).
- X events: redrawing, resizing, respoding to cut-and-paste.
- Timers going off.
- Notification of idle time.
The loop that processes and dispatches events is called the “event loop”.
The thread that happens to run the event loop is said to "own" Gtk. This means that all Gtk operations should be performed from this thread and no other thread. Failure to restrict the use of Gtk to this thread will result in unpredictable behavior, and most often will lead to a crash.
Threads are useful if you need to perform long running computations or your application complexity would increase too much if it were event driven.
The most important thing to keep in mind is that the thread running the Gtk main loop (the one that calls Gtk.Application.Run()) is the only thread that should make calls into Gtk. This includes method calls and property access.
A typical scenario is that your thread will want to update the user interface with some status progress or notify the user when a task is completed. The solution is to turn the notification into an event that is then dispatched from the main thread.
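As a minimal sketch of that pattern, a worker thread can package each update as a delegate and hand it to Gtk.Application.Invoke (described in more detail below), which queues the delegate to run as an event on the main-loop thread. The names StartWorker and DoWorkChunk here are assumptions for illustration, not part of Gtk#:

using System.Threading;
using Gtk;

class Worker {
    // Hypothetical example: reports progress from a background
    // thread back to the GUI via Application.Invoke.
    static void StartWorker (ProgressBar progress_bar)
    {
        Thread thr = new Thread (delegate () {
            for (int i = 1; i <= 10; i++) {
                DoWorkChunk (i);  // hypothetical unit of work

                // Capture a stable copy for the delegate below.
                double fraction = i / 10.0;

                // Never update progress_bar from this thread directly;
                // queue the update for the Gtk main-loop thread instead.
                Application.Invoke (delegate {
                    progress_bar.Fraction = fraction;
                });
            }
        });
        thr.Start ();
    }

    static void DoWorkChunk (int i)
    {
        // Stands in for real processing.
        Thread.Sleep (300);
    }
}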
A common problem faced by GUI application developers is keeping an application responsive while a long-running operation is running. There are a number of approaches that can be used depending on the situation and the complexity of the computation.
There are a number of reasons why an application might become unresponsive to a user: the application might be performing a long-running computation or it might be blocking on data to become available from the network, a file system or the operating system.
Threads are often used to overcome this problem, but as explained above you have to be careful when using them. In this document we explore the solutions available to Gtk# developers to keep your GUI responsive by using multiple threads and other approaches.
Approaches
There are a number of approaches that can be used to make your application responsive in the presence of blocking operations:
- Event-based programming.
- Explicitly threaded applications.
- Asynchronous Mono Programming.
Event-based programming is the best option as it avoids the complexity that comes from writing multi-threaded applications. Event-based programming requires that you architect your software in a way in which it responds to events, but this is already the case for most GUI applications: callbacks are invoked in response to user actions or signals, so this is a natural model to use.
You should think twice before you start using threads. Not only because you have to be careful about the way you use Gtk# but also because it will make your code hard to debug, the bugs might be extremely hard to reproduce and you will need to become intimately familiar with a whole family of thread synchronization primitives and you must design your software in a way in which you avoid deadlocks.
Idle Handlers
The Gtk main loop has support for invoking routines when the application is idle. When no events are left to process the main loop will invoke all of the idle handlers. This technique is useful if you can break up the work that your application does into small units that take a small amount of time to process and can resume execution at any point.
Another use of the Idle handler is to queue work to be done when the machine is idling and not doing anything interesting. You can use Idle handlers to perform background operations without the added complexity of having to write a multi-threaded application.
For example, simple animations or status updates could be hooked up into an Idle handler.
void Start ()
{
    GLib.Idle.Add (new IdleHandler (OnIdleCreateThumbnail));
}

bool OnIdleCreateThumbnail ()
{
    Image img = GetNextImage ();

    // If no more images remain, stop the idle handler.
    if (img == null)
        return false;

    CreateThumbnail (img, img.ToString () + ".thumbnail");

    // There are more images; invoke this routine again on the next idle moment.
    return true;
}
For more details about using Idle handlers, see the GLib.Idle class library documentation.
Timeouts
You can use timeouts to invoke routines at specified intervals of time. This can be used to update status at specified times for example while a background thread runs or to notify the user of an action at a given interval.
Those of you who have done winforms development before will notice that GLib.Timeout is similar to System.Windows.Forms.Timer. The difference is that GLib.Timeouts are always invoked on the thread that owns the Gtk mainloop.
void StartClock () { // Every second call `update_status' (1000 milliseconds) GLib.Timeout.Add (1000, new GLib.TimeoutHandler (update_status)); } bool update_status () { time_label.Text = DateTime.Now.ToString (); // returning true means that the timeout routine should be invoked // again after the timeout period expires. Returning false would // terminate the timeout. return true; }
As described on the example, the timeout routine has to return a true or false value. Returning true will reset the timeout and invoke the method again, and returning false will prevent further invocations of the timeout routine to be called.
Look at the documentation for GLib.Timeout for more examples and to learn more about timeouts.
Gtk.Application.Invoke
With Gtk# 2 and C# it is possible to use Gtk.Application.Invoke and anonymous methods to request from a thread that the GUI thread should wake up and execute some code in the context of the main loop.
You must use this if you have a thread that needs to perform or access some GUI components during its execution:
using Gtk; class Demo { static Label label; static void Main () { Application.Init (); Window w = new Window ("Cray on a Window"); label = new Label ("Busy computing"); Thread thr = new Thread (new ThreadStart (ThreadRoutine)); thr.Start (); Application.Run (); } static void ThreadRoutine () { LargeComputation (); Gtk.Application.Invoke (delegate { label.Text = "Done"; }); } static void LargeComputation () { // lots of processing here } }
Gtk.ThreadNotify
using Gtk; class Demo { static ThreadNotify notify; static Label label; static void Main () { Application.Init (); Window w = new Window ("Cray on a Window"); label = new Label ("Busy computing"); Thread thr = new Thread (new ThreadStart (ThreadRoutine)); thr.Start (); notify = new ThreadNotify (new ReadyEvent (ready)); Application.Run (); } static void ready () { label.Text = "Done"; } static void ThreadRoutine () { LargeComputation (); notify.WakeupMain (); } static void LargeComputation () { // lots of processing here } }
Programming with Threads
For certain kind of problems that are not easy to handle with event dispatching, timers or other simpler approaches, you can use threads.
Threads have a few downsides, for example, you need to worry about race conditions that happens when two threads need to share information.
Read our Thread Beginners Guide for an introduction to using threads with Mono.
Asynchronous Mono Programmning
Programmers that use threads have to create their own communications protocols between the thread and the main application. Sometimes the features offered by the threads are enough, but some other times it might be useful to take advantage of a built-in protocol in the ECMA CLI for asynchronous programming.
In the ECMA CLI every delegate declaration creates three methods in the delegate that you can use, for example consider the following C# declaration for a BinaryOperator:
delegate int BinaryOperator (int op1, int op2);
The compiler generates something along these lines:
class BinaryOperator : MulticastDelegate { public virtual int Invoke (int op1, int op2) {..} public virtual IAsyncResult BeginInvoke (int op1, int op2, AsyncCallback cback, object data) {..} public virtual int EndInvoke (IAsyncResult r) {..} }
There is actually no code generated for those methods, the virtual machine handles these specially. Notice that incoming parameters are added to BeginInvoke, and the return type for EndInvoke is the return type for the Invoke method. BeginInvoke/EndInvoke have split the operation in two: the startup, and the completion.
You would use the above code typically like this:
// C# 2.0 style of coding, using an anonymous method: BinaryOperator adder = delegate(int op1, int op2) { return op1 + op2; }; // C# way of invoking it: int sum = adder (4, 5); // You can also call it like this: int sum2 = adder.Invoke (4, 5);
But in addition to this invocation model, the BeginInvoke/EndInvoke pair allow you to run the method on a separate thread, notify an optional callback method when the code has executed and fetch the results when you are done.
For example, this is how you could invoke the previous sum in a separate thread:
Startup () { string ABC = "hello"; adder.BeginInvoke (4, 5, new AysnCallback (finished_callback), ABC); ... } void finished_callback (IAsyncResult ar) { Console.WriteLine ("Addition has completed"); // You can fetch the variable that was passed as the last argument // to BeginInvoke by accessing ar.AsyncState, in this case "ABC" object state = ar.AsyncState; }
If you want to fetch the actual result value, you need to call the “EndInvoke” method, and pass the IAsyncResult parameter:
int result = adder.EndInvoke (ar);
Here is a full sample:
using System; delegate int BinaryOperator (int a, int b); class x { static void Main () { BinaryOperator adder = Adder; adder.BeginInvoke (10, 20, Callback, adder); Console.ReadLine (); } static int Adder (int a, int b) { return a + b; } static void Callback (IAsyncResult r) { // // We pass the "adder" object as the "data" argument to // BeginInvoke, here we retrieve it: // BinaryOperator adder = (BinaryOperator) r.AsyncState; Console.WriteLine (" Addition completed"); Console.WriteLine (" Result was: {0}", adder.EndInvoke (r)); } }
Here is the same sample, but this time rewritten using anonymous methods from C# 2.0 which make the code more compact, and does not require us to pass the adder as a parameter (we could pass other data if we wanted to):
using System; delegate int BinaryOperator (int a, int b); class x { static void Main () { BinaryOperator adder = delegate(int a, int b) { return a + b; }; adder.BeginInvoke (10, 20, delegate (IAsyncResult r) { Console.WriteLine ("Addition completed"); Console.WriteLine ("Result was: {0}", adder.EndInvoke (r)); }, null); Console.ReadLine (); } }
The above example uses the anonymous method syntax to define both the adder method and used to define the callback method that is invoked when the process is complete.
As you can see, the operation result is fetched by invoking the delegate’s EndInvoke method. In the above example we call EndInvoke in our callback, so EndInvoke will return immediately with the result because the method has completed its execution. But if you were to call EndInvoke in another place of your program and your async method had not finished, EndInvoke would block until the method completes (see below for more information on how to deal with this).
Both samples currently “wait” by calling ReadLine, a more advanced application would take advantage of the IAsyncResult value returned. This is the definition of IAsyncResult:
public interface IAsyncResult { // The "data" value passed at the end of BeginInvoke object AsyncState { get; } // A waithandle that can be used to monitor the async process System.Threading.WaitHandle AsyncWaitHandle { get; } // Determines whether the method completed synchronously. bool CompletedSynchronously { get; } // Whether the method has completed. bool IsCompleted { get; } }
You can use the “IsCompleted” method if you are polling for example, you could check if the method has finished, and if so, you could invoke EndInvoke.
You can also use the WaitHandle that is returned by the AsyncWaitHandle to monitor async method. You can use the WaitOne method, WaitAll or WaitAny to get finer control, they are all essentially the same operation, but it applies to the three potential conditions: single wait, waiting for all the handles to complete, or wait for any handles to complete (WaitHandles are not limited to be used with async methods, you can obtain those from other parts of the framework, like network communications, or from your own thread synchronization operations).
In our sample, we can replace Console.ReadLine with:
IAsyncResult r = adder.BeginInvoke (10, 20, delegate(int a, int b){ return a+b; }, null); r.AsyncWaitHandle.WaitOne ();
In the above invocation of WaitOne() you would wait until the async thread completes. But you can use other method overloads for more control, like only waiting for some amount of time before returning.
Manual Event Processing
You can also take control of the event loop, and instead of partitioning the problem in callback routines, put the code inline and put your computation code in the middle.
This can be used for example for a file-copying operation, where the status is updated as the copy proceeds and the unit of work is discrete and easy to resume:
void LongComputation () { while (!done){ ComputationChunk (); // Flush pending events to keep the GUI reponsive while (Gtk.Application.EventsPending ()) Gtk.Application.RunIteration (); } }
Alternatives
Application.Invoke
Gtk# 2.0 includes a new mechanism to invoke code on the main thread, this is part of the Gtk.Application class, to do this, just call the Invoke method with a delegate or anonymous method:
void UpdatingThread () { DoSomething (); Gtk.Application.Invoke (delegate { label.Text = "Thread update"; }); }
Other options
Other options of doing the same are available for Gtk# but are now outdated:
GuiDispatch
Monodevelop has a class called GuiDispatch that automatically wraps delegates so they will be invoked on the main thread. This provides an extremely easy way to safely use threads in your application.
Lots more information as well as many examples at.
Runtime.DispatchService.GuiDispatch (new StatefulMessageHandler (UpdateGui), n);
counter.TimeChanged += (EventHandler) Runtime.DispatchService.GuiDispatch (new EventHandler (UpdateTime));
RunOnMainThread
A simple wrapper around GLib.Idle.Add that lets you easily run any method with arguments on the main thread.
public class Example { public void Go() { /* Let's assume this method was called by another thread */ RunOnMainThread.Run(this, "DoSomethingToGUI", new object[] { "some happy text!" }); } private void DoSomethingToGUI(string someText) { /* Do something to the GUI here */ } } | http://www.mono-project.com/docs/gui/gtksharp/responsive-applications/ | CC-MAIN-2018-26 | refinedweb | 2,349 | 54.83 |
The previous labs in this chapter dealt with client connections, where an application initiated a connection to a remote server. However, Twisted can also be used for writing network servers, where the application waits for connections from clients. This lab will show you how to write a Twisted server that accepts connections from clients and interacts with them.
2.4.1. How Do I Do That?
Create a Protocol object defining your server's behavior. Create a ServerFactory object using the Protocol, and pass it to reactor.listenTCP. Example 2-7 shows a simple echo server that accepts a client connection and then repeats back all client messages.
Example 2-7. echoserver.py
from twisted.internet import reactor, protocol from twisted.protocols import basic class EchoProtocol(basic.LineReceiver): def lineReceived(self, line): if line == 'quit': self.sendLine("Goodbye.") self.transport.loseConnection( ) else: self.sendLine("You said: " + line) class EchoServerFactory(protocol.ServerFactory): protocol = EchoProtocol if __name__ == "_ _main_ _": port = 5001 reactor.listenTCP(port, EchoServerFactory( )) reactor.run( )
When you run this example, it will listen on port 5001, and report client connections as they are made:
$ python echoserver.py Server running, press ctrl-C to stop. Connection from 127.0.0.1 Connection from 127.0.0.1
In another terminal, use netcat, telnet, or the dataforward.py application from Example 2-6 to connect to the server. It will echo anything you type back to you. Type quit to close your connection:
$ python dataforward.py localhost 5001 Connected to server. Press ctrl-C to close connection. hello You said: hello twisted is fun You said: twisted is fun quit Goodbye. $ How does that work?
Twisted servers use the same Protocol classes as clients. To save some work, the EchoProtocol in Example 2-6 inherits from twisted.protocols.basic.LineReciever, which is a slightly higher-level implementation of Protocol. LineReceiver is a Protocol that automatically breaks its input into separate lines, making it easier to process a single line at a time. When EchoProtocol receives a line, it will echo it back to the clientunless the line is "quit", in which case it sends a goodbye message and closes the connection.
Next, a class called EchoServerFactory is defined. EchoServerFactory inherits from ServerFactory, the server-side sibling of ClientFactory, and sets EchoProtocol as its protocol. An instance of EchoServerFactory is then passed as the second argument to reactor.listenTCP, with the first argument being the port to listen on.
Getting Started
Building Simple Clients and Servers
Web Clients
Web Servers
Web Services and RPC
Authentication
Mail Clients
Mail Servers
NNTP Clients and Servers
SSH
Services, Processes, and Logging | https://flylib.com/books/en/2.407.1/accepting_connections_from_clients.html | CC-MAIN-2020-10 | refinedweb | 436 | 52.26 |
Well I guess, I will eventually whip up such a plugin myself, cause I really need to cleanup
quite a code-generation mess.
Eventually based upon the GraniteDS libs, as does the Flexmojos plugin.
Chris
-----Ursprüngliche Nachricht-----
Von: bmathus@gmail.com [mailto:bmathus@gmail.com] Im Auftrag von Baptiste MATHUS
Gesendet: Freitag, 1. März 2013 08:02
An: Maven Users List
Betreff: Re: Is there any generic Maven code generator?
The following is not going to really help you, but I just wanted to point out that a plugin
was recently initiated at mojo (still in the sandbox) dedicated to templating (called templating-maven-plugin).
It's currently only supporting Maven sources filtering, but the plan is to support different
templating engines like velocity or mustache for example.
Cheers
Le 28 févr. 2013 12:17, "Anders Hammar" <anders@hammar.net> a écrit :
> I haven't seen any generator plugin that does what you're looking for.
>
> Wearing a Maven hat, I don't think that having these stub classes
> generated to src/main/java belongs to the build lifecycle. It's a
> separate process that should be executed outside of a Maven build. SO
> you would then have a separate goal for that is not bound to a phase by default.
>
> /Anders
>
>
> On Thu, Feb 28, 2013 at 10:57 AM, christofer.dutz@c-ware.de <
> christofer.dutz@c-ware.de> wrote:
>
> > Ahem ... this wasn't quite the path I was indenting to go with my
> question.
> >
> > What I am looking for is a generator-plugin that uses a template
> > engine (such as Velocity) to generate code, and uses as input an
> > pojo-object
> model
> > representing the details of a Java class that it should generate
> > code
> for.
> >
> > Now I would like to have (or create, if I have to) a plugin, that
> > allows me to specify which classes I am addressing using some
> > include/exclude config.
> >
> > I could configure different include/exclude groups, each one in one
> > execution and for each I could provide a set of:
> > - resourceTemplates
> > - testResourceTemplates
> > - sourceStubTemplates
> > - sourceTemplates
> > - testSourceStubTemplates
> > - testSourceTemplates
> > - ...
> > So for each matched class, I could generage:
> > - resources into the target/generated-resources directory using the
> > resourceTemplates (could be more than one)
> > - testResources into the target/generates-test-resources directory
> > using the testResourceTemplates, ...
> > - source code into target/generated-sources ...
> > - test source code into target/generated-test-sources ...
> >
> > One speciality I liked with GraniteDS was the ability to have
> > auto-generated stub-classes generated in src/main/java ... if a
> > class allready existed, no code would be generated, but if a class
> > didn't
> exist,
> > it would be generated. This was great for creating classes that
> > extend other generated classes and to allow customization. Something
> > simple as
> > this:
> >
> > public class MyClass extends MyClassBase { }
> >
> > And MyClassBase is generated to target/generated-sources.
> >
> > With such a plugin I could be able to generate my JPA MetaModel
> > classes, my DTOs, SQL scripts, ... and wouldn't have to configure
> > thousands of different generators. People could develop
> > best-practice templates for general purpose tasks.
> >
> > Hope this clarified, what I was looking for.
> >
> > If nothing similar exists, I would like to start developing
> > something
> like
> > this, because I think it would make my life a lot easier.
> >
> > Chris
> >
> > ________________________________________
> > Von: Stadelmann Josef [josef.stadelmann@axa-winterthur.ch]
> > Gesendet: Donnerstag, 28. Februar 2013 09:52
> > An: Maven Users List
> > Betreff: AW: Is there any generic Maven code generator?
> >
> > A code generator needs input. Hence some "generic formal language"
> > is
> used
> > to specify the input to the genric code generator? UML?
> > Visualization of
> a
> > generic language could mean - speak UML, UML is a reality today. UML
> > modells can says much more then 1000 words? Hence I opt for better
> > integration of modelling tools with maven. Also roundtrip
> > engineering is
> a
> > must. Enterprise Architect (EA) is able to generate code from UML
> > and
> other
> > diagrams for many target compiler and it helps you to have code,
> > modell
> and
> > documentation in sync. And EA can be feed with code and generates
> > your diagramms. A task which only ends when the modells become to complex.
> Maven
> > is much about geting and nailing dependent components together. If
> > maven finds the dependecies, a tool like EA could use input to draw
> > i.e. the component diagram, maybe a maven plugin can pave the way
> > for EA. Other whise, in my mind, there are far too many tools around
> > and I get to often the feeling that each time a problem developes,
> > folk starts to seek a
> tool
> > to avoid some brain work. So if one knows in which aspects maven
> > poms
> could
> > be used toward EA to get ceratin artiacs visualized for better
> > understanding, that would be almost a very good step into proper
> direction.
> > Josef
> >
> >
> > -----Ursprüngliche Nachricht-----
> > Von: christofer.dutz@c-ware.de [mailto:christofer.dutz@c-ware.de]
> > Gesendet: Mittwoch, 27. Februar 2013 12:31
> > An: users@maven.apache.org
> > Betreff: Is there any generic Maven code generator?
> >
> > I just posted this to Stackoverflow, but after posting I realized,
> > that I should have come here first :-(
> >
> >
> > I am currently working on a project, that makes intense usage of
> > code generation for various purposes. One generator generates SQL
> > scripts from jpa entities. Another generates DTOs from pojos,
> > another generates the
> > JPA2.0 meta model, jet another generates some xml and schema files
> > based
> on
> > Java classes ... each generator works completely different needs to
> > be configured differently.
> >
> > My question now is ... is there any generic maven code generator
> > plugin out there with the following attributes:
> >
> > *
> > Creates a pojo model of a Java class (Names, Properties, Annotation,
> > Methods ...)
> > *
> > Uses templates for defining the output that uses the pojo model to
> > generate any output.
> > *
> > Allows me to specify multiple templates for one class
> > *
> > Allows me to generate code and resources
> > *
> > Allows me to generate a base class to target/generated-sources and a
> dummy
> > implementation to src/main/java which simply extends the base class
> > (If
> the
> > dummy class in src/main/java exists, nothing happens, if it doesn't
> > it generates such a dummy class. This code is checked in to the SCM
> > and
> allows
> > extending the generated classes manually)
> >
> > I am using the Flexmojos GraniteDS plugin for generating my
> > ActionScript model code, but it's pretty specialized for that particular purpose.
> >
> > I think such a generic generator would make things a lot easier ...
> > is there something like that available out there, or do I have to
> > start implementing it myself?
> >
> > Chris
> >
> > --------------------------------------------------------------------
> > -
> >
> >
> | http://mail-archives.apache.org/mod_mbox/maven-users/201303.mbox/%3C66E38C42347D6446BF7FCB22C3D3878072EB8CD176@ECCR06PUBLIC.exchange.local%3E | CC-MAIN-2015-40 | refinedweb | 1,072 | 55.74 |
But the reported values are always numbers and never letters (like 5A), so for one distance I can get 58,58,59,58,59,60,60,59 etc. But since this is in hex there is a gap from -89 to -96 dBm. Is it suppose to be like this? Is it really in hex? What can I do about it?
I also first set "rssi" as an int and sent it to the other XBee, and I thought this was the problem. I changed my code and instead of sending "rssi" I'm sending rx16.getRssi(), but I then get this error: warning: invalid conversion from 'uint8_t {aka unsigned char}' to 'uint8_t* {aka unsigned char*}' [-fpermissive]
tx16=Tx16Request(0xFFFF, rx16.getRssi(),sizeof(rx16.getRssi()));
Code: Select all
#include <XBee.h> XBee xbee = XBee(); Rx16Response rx16 = Rx16Response(); Tx16Request tx16; //int rssi; void setup() { Serial.begin(9600); xbee.begin(Serial); } void loop() { xbee.readPacket(100); if (xbee.getResponse().isAvailable()){ if (xbee.getResponse().getApiId() == RX_16_RESPONSE) { xbee.getResponse().getRx16Response(rx16); Serial.println(""); Serial.println(rx16.getRssi()); //rssi = rx16.getRssi(); //tx16=Tx16Request(0xFFFF, rssi,sizeof(rssi)); tx16=Tx16Request(0xFFFF, rx16.getRssi(),sizeof(rx16.getRssi())); xbee.send(tx16); // delay(50); } } } | https://forum.sparkfun.com/viewtopic.php?f=13&t=46059 | CC-MAIN-2018-09 | refinedweb | 195 | 70.5 |
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also
#include <sys/types.h> #include <dirent.h> void seekdir(DIR *dirp, long int loc);
The seekdir() function sets the position of the next readdir(3C) operation on the directory stream specified by dirp to the position specified by loc. The value of loc should have been returned from an earlier call to telldir(3C). The new position reverts to the one associated with the directory stream when telldir() was performed.
If the value of loc was not obtained from an earlier call to telldir() or if a call to rewinddir(3C) occurred between the call to telldir () and the call to seekdir(), the results of subsequent calls to readdir() are unspecified.
The seekdir() function returns no value.
No errors are defined.
See attributes(5) for descriptions of the following attributes:
opendir(3C), readdir(3C), rewinddir(3C), telldir(3C), attributes(5), standards(5)
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also | http://docs.oracle.com/cd/E19082-01/819-2243/seekdir-3c/index.html | CC-MAIN-2014-10 | refinedweb | 160 | 55.03 |
On Mon, 2004-09-20 at 10:07, Donovan Preston wrote: >). The twisted 1.3 release will available for download for the forseeable future, for those who need it. I don't believe we should release stuff with subtle, broken semantics in twisted 2.0 - the release number implies that we are changing a few things, so this would be a good opportunity to abandon support for the old code. >. If there is to be no backwards compatibility, why bother with the separate namespace? Do we want to do this with other modules in the future, e.g. twisted.spread2? Our last discussion of this was inconclusive because I don't think we foresaw the web rewrite being so radically different. I guess we have to drag that dead horse out here to beat it one more time ;). Even if the namespace changes, there should only be one of these in svn at a time. If further development on twisted web 1 is going to continue it should be in a maintenance branch, and then we need to discuss how we're going to manage maintenance releases. > We already figured out which web server we ("we" being those developers > who actually care about the web) are going to be supporting. > twisted.web2. web should have a deprecation warning in __init__ for a > release, and then should be terminated with extreme prejudice. I would terminate before providing a deprecation warning. If you want to provide a backwards-compatibility release for using web1 with the new reactor core, that's fine, but I don't see any reason to include web1 with the new download. > As far as the nevow dependency, [...] you're not going to get much out of the box. It really sounds like nevow and web2 ought to be merged. Traffic on the twisted-web list suggests that anyone who is using one is using the other, at least in some capacity. The marketing aspect of this certainly requires some discussion, but it seems that one's utility is greatly reduced without the other, and the dependency is circular. 
For specialized cases, such as using nevow for CGI scripts, there is no problem with having the other code around as long as it can avoid being imported. | http://twistedmatrix.com/pipermail/twisted-python/2004-September/008660.html | CC-MAIN-2016-26 | refinedweb | 378 | 62.48 |
CStrings are a useful data type. They greatly simplify a lot of operations in MFC, making it much more convenient to do string manipulation. However, there are some special techniques for using CStrings, which can be particularly hard for people coming from a pure-C background to learn. This essay discusses some of these techniques.
Much of what you need to do is pretty straightforward. This is not a complete tutorial on CStrings, but captures the most common basic questions.
One of the very convenient features of CString is the ability to concatenate two strings. For example, if we have
CString gray("Gray");
CString cat("Cat");
CString graycat = gray + cat;
is a lot nicer than having to do something like:
char gray[] = "Gray";
char cat[] = "Cat";
char * graycat = malloc(strlen(gray) + strlen(cat) + 1);
strcpy(graycat, gray);
strcat(graycat, cat);
Rather than using sprintf or wsprintf, you can do formatting for a CString by using the Format method:
CString s;
s.Format(_T("The total is %d"), total);
The advantage here is that you don't have to worry about whether or not the buffer is large enough to hold the formatted data; this is handled for you by the formatting routines.
Use of formatting is the most common way of converting from non-string data types to a CString, for example, converting an integer to a CString:
CString s;
s.Format(_T("%d"), total);
I always use the _T( ) macro because I design my programs to be at least Unicode-aware, but that's a topic for some other essay. The purpose of _T( ) is to compile a string for an 8-bit-character application as:
#define _T(x) x // non-Unicode version
whereas for a Unicode application it is defined as
#define _T(x) L##x // Unicode version
so in Unicode the effect is as if I had written
s.Format(L"%d", total);
If you ever think you might ever possibly use Unicode, start coding in a Unicode-aware fashion. For example, never, ever use sizeof( ) to get the size of a character buffer, because it will be off by a factor of 2 in a Unicode application. We cover Unicode in some detail in Win32 Programming. When I need a size, I have a macro called DIM, which is defined in a file dim.h that I include everywhere:
#define DIM(x) ( sizeof((x)) / sizeof((x)[0]) )
This is not only useful for dealing with Unicode buffers whose size is fixed at compile time, but any compile-time defined table.
class Whatever { ... };
Whatever data[] = {
{ ... },
...
{ ... },
};
for(int i = 0; i < DIM(data); i++) // scan the table looking for a match
Beware of those API calls that want genuine byte counts; using a character count will not work.
TCHAR data[20];
lstrcpyn(data, longstring, sizeof(data) - 1); // WRONG!
lstrcpyn(data, longstring, DIM(data) - 1); // RIGHT
WriteFile(f, data, DIM(data), &bytesWritten, NULL); // WRONG!
WriteFile(f, data, sizeof(data), &bytesWritten, NULL); // RIGHT
This is because lstrcpyn wants a character count, but WriteFile wants a byte count.
Using _T does not create a Unicode application. It creates a Unicode-aware application. When you compile in the default 8-bit mode, you get a "normal" 8-bit program; when you compile in Unicode mode, you get a Unicode (16-bit-character) application. Note that a CString in a Unicode application is a string that holds 16-bit characters.
The simplest way to convert a CString to an integer value is to use one of the standard string-to-integer conversion routines.
While generally you will suspect that _atoi is a good choice, it is rarely the right choice. If you plan to be Unicode-ready, you should call the function _ttoi, which compiles into _atoi in ANSI code and _wtoi in Unicode code. You can also consider using _tcstoul (for unsigned conversion to any radix, such as 2, 8, 10 or 16) or _tcstol (for signed conversion to any radix). Here are some examples:
CString hex = _T("FAB");
CString decimal = _T("4011");
ASSERT(_tcstoul(hex, 0, 16) == _ttoi(decimal));
This is the most common set of questions beginners have on the CString data type. Due largely to serious C++ magic, you can largely ignore many of the problems. Things just "work right". The problems come about when you don't understand the basic mechanisms and then don't understand why something that seems obvious doesn't work.
For example, having noticed the above example you might wonder why you can't write
CString graycat = "Gray" + "Cat";
or
CString graycat("Gray" + "Cat");
In fact the compiler will complain bitterly about these attempts. Why? Because the + operator is defined as an overloaded operator on various combinations of the CString and LPCTSTR data types, but not between two LPCTSTR data types, which are underlying data types. You can't overload C++ operators on base types like int and char, or char *. What will work is
CString graycat = CString("Gray") + CString("Cat");
or even
CString graycat = CString("Gray") + "Cat";
If you study these, you will see that the + always applies to at least one CString and one LPCSTR.
So you have a char *, or a string. How do you create a CString? Here are some examples:
char * p = "This is a test"
or, in Unicode-aware applications
TCHAR * p = _T("This is a test")
LPTSTR p = _T("This is a test");
you can write any of the following:
CString s = "This is a test"; // 8-bit only
CString s = _T("This is a test"); // Unicode-aware
CString s("This is a test"); // 8-bit only
CString s(_T("This is a test")); // Unicode-aware
CString s = p;
CString s(p);
Any of these readily convert the constant string or the pointer to a CString value. Note that the characters assigned are always copied into the CString so that you can do something like
TCHAR * p = _T("Gray");
CString s(p);
p = _T("Cat");
s += p;
and be sure that the resulting string is "GrayCat".
There are several other methods for CString constructors, but we will not consider most of these here; you can read about them on your own.
This is a slightly harder transition to find out about, and there is lots of confusion about the "right" way to do it. There are quite a few right ways, and probably an equal number of wrong ways.
The first thing you have to understand about a CString is that it is a special C++ object which contains three values: a pointer to a buffer, a count of the valid characters in the buffer, and a buffer length. The count of the number of characters can be any size from 0 up to the maximum length of the buffer minus one (for the NUL byte). The character count and buffer length are cleverly hidden.
NUL
Unless you do some special things, you know nothing about the size of the buffer that is associated with the CString. Therefore, if you can get the address of the buffer, you cannot change its contents. You cannot shorten the contents, and you absolutely must not lengthen the contents. This leads to some at-first-glance odd workarounds.
The operator LPCTSTR (or more specifically, the operator const TCHAR *), is overloaded for CString. The definition of the operator is to return the address of the buffer. Thus, if you need a string pointer to the CString you can do something like
const TCHAR *
CString s("GrayCat");
LPCTSTR p = s;
and it works correctly. This is because of the rules about how casting is done in C; when a cast is required, C++ rules allow the cast to be selected. For example, you could define (float) as a cast on a complex number (a pair of floats) and define it to return only the first float (called the "real part") of the complex number so you could say
Complex c(1.2f, 4.8f);
float realpart = c;
and expect to see, if the (float) operator is defined properly, that the value of realpart is now 1.2.
realpart
This works for you in all kinds of places. For example, any function that takes an LPCTSTR parameter will force this coercion, so that you can have a function (perhaps in a DLL you bought):
LPCTSTR
BOOL DoSomethingCool(LPCTSTR s);
and call it as follows
CString file("c:\\myfiles\\coolstuff")
BOOL result = DoSomethingCool(file);
This works correctly because the DoSomethingCool function has specified that it wants an LPCTSTR and therefore the LPCTSTR operator is applied to the argument, which in MFC means that the address of the string is returned.
DoSomethingCool
But what if you want to format it?
CString graycat("GrayCat");
CString s;
s.Format("Mew! I love %s", graycat);
Note that because the value appears in the variable-argument list (the list designated by "..." in the specification of the function)that there is no implicit coercion operator. What are you going to get?
...
Well, surprise, you actually get the string
"Mew! I love GrayCat"
because the MFC implementers carefully designed the CString data type so that an expression of type CString evaluates to the pointer to the string, so in the absence of any casting, such as in a Format or sprintf, you will still get the correct behavior. The additional data that describes a CString actually lives in the addresses below the nominal CString address.
What you can'tdo is modify the string. For example, you might try to do something like replace the "." by a "," (don't do it this way, you should use the National Language Support features for decimal conversions if you care about internationalization, but this makes a simple example):
CString v("1.00"); // currency amount, 2 decimal places
LPCTSTR p = v;
p[lstrlen(p) - 3] = ',';
If you try to do this, the compiler will complain that you are assigning to a constant string. This is the correct message. It would also complain if you tried
strcat(p, "each");
because strcat wants an LPTSTR as its first argument and you gave it an LPCTSTR.
strcat
LPTSTR
Don't try to defeat these error messages. You will get yourself into trouble!
The reason is that the buffer has a count, which is inaccessible to you (it's in that hidden area that sits below the CString address), and if you change the string, you won't see the change reflected in the character count for the buffer. Furthermore, if the string happens to be just about as long as the buffer physical limit (more on this later), an attempt to extend the string will overwrite whatever is beyond the buffer, which is memory you have no right to write (right?) and you'll damage memory you don't own. Sure recipe for a dead application.
A special method is available for a CString if you need to modify it. This is the operation GetBuffer. What this does is return to you a pointer to the buffer which is considered writeable. If you are only going to change characters or shorten the string, you are now free to do so:
GetBuffer
CString s(_T("File.ext"));
LPTSTR p = s.GetBuffer();
LPTSTR dot = strchr(p, '.'); // OK, should have used s.Find...
if(p != NULL)
*p = _T('\0');
s.ReleaseBuffer();
This is the first and simplest use of GetBuffer. You don't supply an argument, so the default of 0 is used, which means "give me a pointer to the string; I promise to not extend the string". When you call ReleaseBuffer, the actual length of the string is recomputed and stored in the CString. Within the scope of a GetBuffer/ReleaseBuffer sequene, and I emphasize this: You Must Not, Ever, Use Any Method Of CString on the CString whose buffer you have!The reason for this is that the integrity of the CString object is not guaranteed until the ReleaseBuffer is called. Study the code below:
0
ReleaseBuffer
GetBuffer/ReleaseBuffer
CString s(...);
LPTSTR p = s.GetBuffer();
//... lots of things happen via the pointer p
int n = s.GetLength(); // BAD!!!!! PROBABLY WILL GIVE WRONG ANSWER!!!
s.TrimRight(); // BAD!!!!! NO GUARANTEE IT WILL WORK!!!!
s.ReleaseBuffer(); // Things are now OK
int m = s.GetLength(); // This is guaranteed to be correct
s.TrimRight(); // Will work correctly
Suppose you want to actually extend the string. In this case you must know how large the string will get. This is just like declaring
char buffer[1024];
knowing that 1024 is more than enough space for anything you are going to do. The equivalent in the CString world is
LPTSTR p = s.GetBuffer(1024);
This call gives you not only a pointer to the buffer, but guarantees that the buffer will be (at least) 1024 bytes in length.
Also, note that if you have a pointer to a const string, the string value itself is stored in read-only memory; an attempt to store into it, even if you've done GetBuffer, you have a pointer to read-only memory, so an attempt to store into the string will fail with an access error. I haven't verified this for CString, but I've seen ordinary C programmers make this error frequently.
const
A common "bad idiom" left over from C programmers is to allocate a buffer of fixed size, do a sprintf into it, and assign it to a CString:
CString:
char buffer[256];
sprintf(buffer, "%......", args, ...); // ... means "lots of stuff here"
CString s = buffer;
while the better form is to do
CString s;
s.Format(_T("%....", args, ...);
Note that this always works; if your string happens to end up longer than 256 bytes you don't clobber the stack!
Another common error is to be clever and realize that a fixed size won't work, so the programmer allocates bytes dynamically. This is even sillier:
int len = lstrlen(parm1) + 13 + lstrlen(parm2) + 10 + 100;
char * buffer = new char[len];
sprintf(buffer, "%s is equal to %s, valid data", parm1, parm2);
CString s = buffer;
....
delete [] buffer;
Where it can be easily written as
CString s;
s.Format(_T("%s is equal to %s, valid data"), parm1, parm2);
Note that the sprintf examples are not Unicode-ready (although you could use tsprintf and put _T() around the formatting string, but the basic idea is still that you are doing far more work than is necessary, and it is error-prone.
tsprintf
_T()
A very common operation is to pass a CString value in to a control, for example, a CTreeCtrl. While MFC provides a number of convenient overloads for the operation, but in the most general situation you use the "raw" form of the update, and therefore you need to store a pointer to a string in the TVITEM which is included within the TVINSERTITEMSTRUCT:
CTreeCtrl
TVITEM
TVINSERTITEMSTRUCT:
TVINSERTITEMSTRUCT tvi;
CString s;
// ... assign something to s
tvi.item.pszText = s; // Compiler yells at you here
// ... other stuff
HTREEITEM ti = c_MyTree.InsertItem(&tvi);
Now why did the compiler complain? It looks like a perfectly good assignment! But in fact if you look at the structure, you will see that the member is declared in the TVITEM structure as shown below:
TVITEM
LPTSTR pszText;
int cchTextMax;
Therefore, the assignment is not assigning to an LPCTSTR and the compiler has no idea how to cast the right hand side of the assignment to an LPTSTR.
OK, you say, I can deal with that, and you write
tvi.item.pszText = (LPCTSTR)s; // compiler still complains!
What the compiler is now complaining about is that you are attempting to assign an LPCTSTR to an LPTSTR, an operation which is forbidden by the rules of C and C++. You may not use this technique to accidentally alias a constant pointer to a non-constant alias so you can violate the assumptions of constancy. If you could, you could potentially confuse the optimizer, which trusts what you tell it when deciding how to optimize your program. For example, if you do
const int i = ...;
//... do lots of stuff
... = a[i]; // usage 1
// ... lots more stuff
... = a[i]; // usage 2
Then the compiler can trust that, because you said const, that the value of i at "usage1" and "usage2" is the same value, and it can even precompute the address of a[i] at usage1 and keep the value around for later use at usage2, rather than computing it each time. If you were able to write
i
a[i]
const int i = ...;
int * p = &i;
//... do lots of stuff
... = a[i]; // usage 1
// ... lots more stuff
(*p)++; // mess over compiler's assumption
// ... and other stuff
... = a[i]; // usage 2
The the compiler would believe in the constancy of i, and consequently the constancy of the location of a[i], and the place where the indirection is done destroys that assumption. Thus, the program would exhibit one behavior when compiled in debug mode (no optimizations) and another behavior when compiled in release mode (full optimization). This Is Not Good. Therefore, the attempt to assign the pointer to i to a modifiable reference is diagnosed by the compiler as being bogus. This is why the (LPCTSTR) cast won't really help.
(LPCTSTR)
Why not just declare the member as an LPCTSTR? Because the structure is used both for reading and writing to the control. When you are writing to the control, the text pointer is actually treated as an LPCTSTR but when you are reading from the control you need a writeable string. The structure cannot distinguish its use for input from its use for output.
LPCTSTR?
Therefore, you will often find in my code something that looks like
tvi.item.pszText = (LPTSTR)(LPCTSTR)s;
This casts the CString to an LPCTSTR, thus giving me that address of the string, which I then force to be an LPTSTR so I can assign it. Note that this is valid only if you are using the value as data to a Set or Insert style method! You cannot do this when you are trying to retrieve data!
You need a slightly different method when you are trying to retrieve data, such as the value stored in a control. For example, for a CTreeCtrl using the GetItem method. Here, I want to get the text of the item. I know that the text is no more than MY_LIMIT in size. Therefore, I can write something like
GetItem
MY_LIMIT
TVITEM tvi;
// ... assorted initialization of other fields of tvi
tvi.pszText = s.GetBuffer(MY_LIMIT);
tvi.cchTextMax = MY_LIMIT;
c_MyTree.GetItem(&tvi);
s.ReleaseBuffer();
Note that the code above works for any type of Set method also, but is not needed because for a Set-type method (including Insert) you are not writing the string. But when you are writing the CString you need to make sure the buffer is writeable. That's what the GetBuffer does. Again, note that once you have done the GetBuffer call, you must not do anything else to the CString until the ReleaseBuffer call.
Set
Insert
When programming with ActiveX, you will sometimes need a value represented as a type BSTR. A BSTR is a counted string, a wide-character (Unicode) string on Intel platforms and can contain embedded NUL characters.
BSTR
You can convert at CString to a BSTR by calling the CString method AllocSysString:
b
:.
B
Str "code-string" href="%3Cspan">"#VARIANT to CString">VARIANT type, which is a type returned by various COM and Automation calls.
"code-string" href="%3Cspan">"#VARIANT to CString">VARIANT
For example, if you do, in an ANSI application,
BSTR b;
b = ...; // whatever
CString s(b == NULL ? L"" :.
LPCWSTR
NULL
Remember, according to the rules of C/C++, if you have an LPWSTR it will match a parameter type of LPCWSTR (it doesn't work the other way!).
LPWSTR.
lstrlen
Note that the conversion from Unicode to ANSI uses the ::WideCharToMultiByte conversion with specific arguments that you may not like. If you want a different conversion than the default, you have to write your own.
::WideCharToMultiByte
If you are compiling as UNICODE, then it is a simple assignment:
UNICODE change.
Actually, I've never done this; I don't work in COM/OLE/ActiveX where this is an issue. But I saw a posting by Robert Quirk on the microsoft.public.vc.mfc newsgroup on how to do this, and it seemed silly not to include it in this essay, so here it is, with a bit more explanation and elaboration. Any errors relative to what he wrote are my fault.
microsoft.public.vc.mfc
A VARIANT is a generic parameter/return type in COM programming. You can write methods that return a type VARIANT, and which type the function returns may (and often does) depend on the input parameters to your method (for example, in Automation, depending on which method you call, IDispatch::Invoke may return (via one of its parameters) a VARIANT which holds a BYTE, a WORD, an float, a double, a date, a BSTR, and about three dozen other types (see the specifications of the VARIANT structure in the MSDN). In the example below, it is assumed that the type is known to be a variant of type BSTR, which means that the value is found in the string referenced by bstrVal. This takes advantage of the fact that there is a constructor which, in an ANSI application, will convert a value referenced by an LPCWCHAR to a CString (see "code-string" href="%3Cspan">"#BSTR to CString">BSTR-to-CString). In Unicode mode, this turns out to be the normal CString constructor. See the caveats about the default ::WideCharToMultibyte conversion and whether or not you find these acceptable (mostly, you will).
VARIANT
IDispatch::Invoke
BYTE
WORD
float
double
bstrVal
LPCWCHAR
"code-string" href="%3Cspan">"#BSTR to CString">BSTR
::WideCharToMultibyte
VARIANT vaData;
vaData = m_com.YourMethodHere();
ASSERT(vaData.vt == VT_BSTR);
CString strData(vaData.bstrVal);
Note that you could also make a more generic conversion routine that looked at the vt field. In this case, you might consider something like:
vt
CString VariantToString(VARIANT * va)
{
CString s;
switch(va->vt)
{ /* vt */
case VT_BSTR:
return CString(vaData->bstrVal);
case VT_BSTR | VT_BYREF:
return CString(*vaData->pbstrVal);
case VT_I4:
s.Format(_T("%d"), va->lVal);
return s;
case VT_I4 | VT_BYREF:
s.Format(_T("%d"), *va->plVal);
case VT_R8:
s.Format(_T("%f"), va->dblVal);
return s;
... remaining cases left as an Exercise For The Reader
default:
ASSERT(FALSE); // unknown VARIANT type (this ASSERT is optional)
return CString("");
} /* vt */
}
If you want to create a program that is easily ported to other languages, you must not include native-language strings in your source code. (For these examples, I'll use English, since that is my native language (aber Ich kann ein bischen Deutsch sprechen). So it is verybad practice to write
CString s = "There is an error";
Instead, you should put all your language-specific strings (except, perhaps, debug strings, which are never in a product deliverable). This means that is fine to write
s.Format(_T("%d - %s"), code, text);
in your program; that literal string is not language-sensitive. However, you must be very careful to not use strings like
// fmt is "Error in %s file %s"
// readorwrite is "reading" or "writing"
s.Format(fmt, readorwrite, filename);
I speak of this from experience. In my first internationalized application I made this error, and in spite of the fact that I know German, and that German word order places the verb at the end of a sentence, I had done this. Our German distributor complained bitterly that he had to come up with truly weird error messages in German to get the format codes to do the right thing. It is much better (and what I do now) to have two strings, one for reading and one for writing, and load the appropriate one, making them string parameter-insensitive, that is, instead of loading the strings "reading" or "writing", load the whole format:
// fmt is "Error in reading file %s"
// "Error in writing file %s"
s.Format(fmt, filename);
Note that if you have more than one substitution, you should make sure that if the word order of the substitutions does not matter, for example, subject-object, subject-verb, or verb-object, in English.
For now, I won't talk about FormatMessage, which actually is better than sprintf/Format, but is poorly integrated into the CString class. It solves this by naming the parameters by their position in the parameter list and allows you to rearrange them in the output string.
FormatMessage
So how do we accomplish all this? By storing the string values in the resource known as the STRINGTABLE in the resource segment. To do this, you must first create the string, using the Visual Studio resource editor. A string is given a string ID, typically starting IDS_. So you have a message, you create the string and call it IDS_READING_FILE and another called IDS_WRITING_FILE. They appear in your .rc file as
STRINGTABLE
IDS_
IDS_READING_FILE
IDS_WRITING_FILE
STRINGTABLE
IDS_READING_FILE "Reading file %s"
IDS_WRITING_FILE "Writing file %s"
END
Note: these resources are always stored as Unicode strings, no matter what your program is compiled as. They are even Unicode strings on Win9x platforms, which otherwise have no real grasp of Unicode (but they do for resources!). Then you go to where you had stored the strings
// previous code
CString fmt;
if(...)
fmt = "Reading file %s";
else
fmt = "Writing file %s";
...
// much later
CString s;
s.Format(fmt, filename);
and instead do
// revised code
CString fmt;
if(...)
fmt.LoadString(IDS_READING_FILE);
else
fmt.LoadString(DS_WRITING_FILE);
...
// much later
CString s;
s.Format(fmt, filename);
Now your code can be moved to any language. The LoadString method takes a string ID and retrieves the STRINGTABLE value it represents, and assigns that value to the CString.
LoadString
STRINGTABLE
There is a clever feature of the CString constructor that simplifies the use of STRINGTABLE entries. It is not explicitly documented in the CString::CString specification, but is obscurely shown in the example usage of the constructor! (Why this couldn't be part of the formal documentation and has to be shown in an example escapes me!). The feature is that if you cast a STRINGTABLE ID to an LPCTSTR it will implicitly do a LoadString. Thus the following two examples of creating a string value produce the same effect, and the ASSERT will not trigger in debug mode compilations:
CString::CString
ASSERT
CString s;
s.LoadString(IDS_WHATEVER);
CString t( (LPCTSTR)IDS_WHATEVER);
ASSERT(s == t);
Now, you may say, how can this possibly work? How can it tell a valid pointer from a STRINGTABLE ID? Simple: all string IDs are in the range 1..65535. This means that the high-order bits of the pointer will be 0. Sounds good, but what if I have valid data in a low address? Well, the answer is, you can't. The lower 64K of your address space will never, ever, exist. Any attempt to access a value in the address range 0x00000000 through 0x0000FFFF (0..65535) will always and forever give an access fault. These addresses are never, ever valid addresses. Thus a value in that range (other than 0) must necessarily represent a STRINGTABLE ID.
0x00000000
0x0000FFFF
I tend to use the MAKEINTRESOURCE macro to do the casting. I think it makes the code clearer regarding what is going on. It is a standard macro which doesn't have much applicability otherwise in MFC. You may have noted that many methods take either a UINT or an LPCTSTR as parameters, using C++ overloading. This gets us around the ugliness of pure C where the "overloaded" methods (which aren't really overloaded in C) required explicit casts. This is also useful in assigning resource names to various other structures.
MAKEINTRESOURCE
UINT
CString s;
s.LoadString(IDS_WHATEVER);
CString t( MAKEINTRESOURCE(IDS_WHATEVER));
ASSERT(s == t);
Just to give you an idea: I practice what I preach here. You will rarely if ever find a literal string in my program, other than the occasional debug output messages, and, of course, any language-independent string.
Here's a little problem that came up on the microsoft.public.vc.mfc newsgroup a while ago. I'll simplify it a bit. The basic problem was the programmer wanted to write a string to the Registry. So he wrote:
I am trying to set a registry value using RegSetValueEx() and it is the value that I am having trouble with. If I declare a variable of char[] it works fine. However, I am trying to convert from a CString and I get garbage. "ÝÝÝÝ...ÝÝÝÝÝÝ" to be exact. I have tried GetBuffer, typecasting to char*, LPCSTR. The return of GetBuffer (from debug) is the correct string but when I assign it to a char* (or LPCSTR) it is garbage. Following is a piece of my code:
RegSetValueEx()
char[]
char*
char* szName = GetName().GetBuffer(20);
RegSetValueEx(hKey, "Name", 0, REG_SZ,
(CONST BYTE *) szName,
strlen (szName + 1));
The Name string is less then 20 chars long, so I don't think the GetBuffer parameter is to blame. It is very frustrating and any help is appreciated.
Name
Dear Frustrated,
You have been done in by a fairly subtle error, caused by trying to be a bit too clever. What happened was that you fell victim to knowing too much. The correct code is shown below:
CString Name = GetName();
RegSetValueEx(hKey, _T("Name"), 0, REG_SZ,
(CONST BYTE *) (LPCTSTR)Name,
(Name.GetLength() + 1) * sizeof(TCHAR));
Here's why my code works and yours didn't. When your function GetName returned a CString, it returned a "temporary object". See the C++ Reference manual §12.2.
In some circumstances it may be necessary or convenient for the compiler to generate a temporary object. Such introduction of temporaries is implementation dependent. When a compiler introduces a temporary object of a class that has a constructor it must ensure that a construct is called for the temporary object. Similarly, the destructor must be called for a temporary object of a class where a destructor is declared.
The compiler must ensure that a temporary object is destroyed. The exact point of destruction is implementation dependent....This destruction must take place before exit from the scope in which the temporary is created.
Most compilers implement the implicit destructor for a temporary at the next program sequencing point following its creation, that is, for all practical purposes, the next semicolon. Hence the CString existed when the GetBuffer call was made, but was destroyed following the semicolon. (As an aside, there was no reason to provide an argument to GetBuffer, and the code as written is incorrect since there is no ReleaseBuffer performed). So what GetBuffer returned was a pointer to storage for the text of the CString. When the destructor was called at the semicolon, the basic CString object was freed, along with the storage that had been allocated to it. The MFC debug storage allocator then rewrites this freed storage with 0xDD, which is the symbol "Ý". By the time you do the write to the Registry, the string contents have been destroyed.
There is no particular reason to need to cast the result to a char * immediately. Storing it as a CString means that a copy of the result is made, so after the temporary CString is destroyed, the string still exists in the variable's CString. The casting at the time of the Registry call is sufficient to get the value of a string which already exists.
In addition, my code is Unicode-ready. The Registry call wants a byte count. Note also that the call lstrlen(Name+1) returns a value that is too small by 2 for an ANSI string, since it doesn't start until the second character of the string. What you meant to write was lstrlen(Name) + 1 (OK, I admit it, I've made the same error!). However, in Unicode, where all characters are two bytes long, we need to cope with this. The Microsoft documentation is surprisingly silent on this point: is the value given for REG_SZ values a byte count or a character count? I'm assuming that their specification of "byte count" means exactly that, and you have to compensate.
One problem of CString is that it hides certain inefficiencies from you. On the other hand, it also means that it can implement certain efficiencies. You may be tempted to say of the following code
CString s = SomeCString1;
s += SomeCString2;
s += SomeCString3;
s += ",";
s += SomeCString4;
that it is horribly inefficient compared to, say
char s[1024];
lstrcpy(s, SomeString1);
lstrcat(s, SomeString2);
lstrcat(s, SomeString 3);
lstrcat(s, ",");
lstrcat(s, SomeString4);
After all, you might think, first it allocates a buffer to hold SomeCString1, then copies SomeCString1 to it, then detects it is doing a concatenate, allocates a new buffer large enough to hold the current string plus SomeCString2, copies the contents to the buffer and concatenates the SomeCString2 to it, then discards the first buffer and replaces the pointer with a pointer to the new buffer, then repeats this for each of the strings, being horribly inefficient with all those copies.
The truth is, it probably never copies the source strings (the left side of the +=) for most cases.
In VC++ 6.0, in Release mode, all CString buffers are allocated in predefined quanta. These are defined as 64, 128, 256, and 512 bytes. This means that unless the strings are very long, the creation of the concatenated string is an optimized version of a strcat operation (since it knows the location of the end of the string it doesn't have to search for it, as strcat would; it just does a memcpy to the correct place) plus a recomputation of the length of the string. So it is about as efficient as the clumsier pure-C code, and one whole lot easier to write. And maintain. And understand.
Those of you who aren't sure this is what is really happening, look in the source code for CString, strcore.cpp, in the mfc\src subdirectory of your vc98 installation. Look for the method ConcatInPlace which is called from all the += operators.
Aha! So CString isn't really "efficient!" For example, if I create
CString cat("Mew!");
then I don't get a nice, tidy little buffer 5 bytes long (4 data bytes plus the terminal NUL). Instead the system wastes all that space by giving me 64 bytes and wasting 59 of them.
If this is how you think, be prepared to reeducate yourself. Somewhere in your career somebody taught you that you always had to use as little space as possible, and this was a Good Thing.
This is incorrect. It ignores some seriously important aspects of reality.
If you are used to programming embedded applications with 16K EPROMs, you have a particular mindset for doing such allocation. For that application domain, this is healthy. But for writing Windows applications on 500MHz, 256MB machines, it actually works against you, and creates programs that perform far worse than what you would think of as "less efficient" code.
For example, size of strings is thought to be a first-order effect. It is Good to make this small, and Bad to make it large. Nonsense. The effect of precise allocation is that after a few hours of the program running, the heap is cluttered up with little tiny pieces of storage which are useless for anything, but they increase the storage footprint of your application, increase paging traffic, can actually slow down the storage allocator to unacceptable performance levels, and eventually allow your application to grow to consume all of available memory. Storage fragmentation, a second-order or third-order effect, actually dominates system performance. Eventually, it compromises reliability, which is completely unacceptable.
Note that in Debug mode compilations, the allocation is always exact. This helps shake out bugs.
Assume your application is going to run for months at a time. For example, I bring up VC++, Word, PowerPoint, FrontPage, Outlook Express, Forté Agent, Internet Explorer, and a few other applications, and essentially never close them. I've edited using PowerPoint for days on end (on the other hand, if you've had the misfortune to have to use something like Adobe FrameMaker, you begin to appreciate reliability; I've rarely been able to use this application without it crashing four to six times a day! And always because it has run out of space, usually by filling up my entire massive swap space!) Precise allocation is one of the misfeatures that will compromise reliability and lead to application crashes.
By making CStrings be multiples of some quantum, the memory allocator will end up cluttered with chunks of memory which are almost always immediately reusable for another CString, so the fragmentation is minimized, allocator performance is enhanced, application footprint remains almost as small as possible, and you can run for weeks or months without problem.
Aside: Many years ago, at CMU, we were writing an interactive system. Some studies of the storage allocator showed that it had a tendency to fragment memory badly. Jim Mitchell, now at Sun Microsystems, created a storage allocator that maintained running statistics about allocation size, such as the mean and standard deviation of all allocations. If a chunk of storage would be split into a size that was smaller than the mean minus one s than the prevailing allocation, he didn't split it at all, thus avoiding cluttering up the allocator with pieces too small to be usable. He actually used floating point inside an allocator! His observation was that the long-term saving in instructions by not having to ignore unusable small storage chunks far and away exceeded the additional cost of doing a few floating point operations on an allocation operation. He was right.
Never, ever think about "optimization" in terms of small-and-fast analyzed on a per-line-of-code basis. Optimization should mean small-and-fast analyzed at the complete application level (if you like New Age buzzwords, think of this as the holistic approach to program optimization, a whole lot better than the per-line basis we teach new programmers). At the complete application level, minimum-chunk string allocation is about the worst method you could possibly use.
If you think optimization is something you do at the code-line level, think again. Optimization at this level rarely matters. Read my essay on Optimization: Your Worst Enemy for some thought-provoking ideas on this topic.
Note that the += operator is special-cased; if you were to write:
CString s = SomeCString1 + SomeCString2 + SomeCString3 + "," + SomeCString4;
then each application of the + operator causes a new string to be created and a copy to be done (although it is an optimized version, since the length of the string is known and the inefficiencies of strcat do not come into play).
These are just some of the techniques for using CString. I use these every day in my programming. CString is not a terribly difficult class to deal with, but generally the MFC materials do not make all of this apparent, leaving you to figure it out on your own.
Special thanks to Lynn Wallace for pointing out a syntax error in one of the examples, Brian Ross for his comments on BSTR conversions, and Robert Quirk for his example of VARIANT-to-BSTR conversion.
The views expressed in these essays are those of the author, and in no way represent, nor are they endorsed by, Microsoft.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
CString
AfxGetResourceHandle()
afxCurrentResourceHandle == NULL
afxSetResourceHandle()
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/542/CString-Management?msg=3356698 | CC-MAIN-2014-41 | refinedweb | 6,654 | 61.16 |
A constructor is called implicitly when an object is created in Java. A constructor assigns properties to an object at the moment of its creation; otherwise, separate methods are required to assign properties after the object is created. Both styles are shown below.
Observe the code where properties for a Student object are given using a Java constructor at the time of object creation.
public class Student
{
  int marks;
  String name;

  public Student(int marks, String name)
  {
    this.marks = marks;
    this.name = name;
  }

  public static void main(String args[])
  {
    Student std1 = new Student(50, "Jyostna"); // while std1 is being created, marks and name are given
    System.out.println(std1.marks + " : " + std1.name);
  }
}
public Student(int marks, String name)
This is called a constructor and is discussed below.
Through the constructor, the std1 object is given the properties marks and name (in Spring, this is known as constructor injection: injecting properties into an object through its constructor).
Let us repeat the same code with methods, say setter methods.
public class Student
{
  int marks;
  String name;

  public void setMarks(int marks)
  {
    this.marks = marks;
  }

  public void setName(String name)
  {
    this.name = name;
  }

  public static void main(String args[])
  {
    Student std1 = new Student();  // first the object is created
    std1.setMarks(50);             // then properties are assigned
    std1.setName("Jyostna");
    System.out.println(std1.marks + " : " + std1.name);
  }
}
See how much the code grows with setter methods; this is the importance of the constructor in Java. In Java, the String constructor is overloaded many times over, as discussed in Java String Constructors.
The Java constructor is an important concept and should be studied thoroughly: how to write a constructor, constructor overloading, calling a same-class constructor and a superclass constructor, etc. All of these are discussed clearly with code examples in Constructors and Constructor Overloading.
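As an illustration of overloading and same-class constructor chaining (the Rectangle class here is hypothetical, not from the original article):

```java
// A no-arg and a one-arg constructor both delegate to the
// two-arg constructor using this(...), so the initialization
// logic lives in exactly one place.
class Rectangle {
    int width, height;

    Rectangle() {            // no-arg constructor: 1x1 rectangle
        this(1, 1);
    }

    Rectangle(int side) {    // overloaded constructor: a square
        this(side, side);
    }

    Rectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }

    int area() { return width * height; }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new Rectangle().area());      // 1
        System.out.println(new Rectangle(3).area());     // 9
        System.out.println(new Rectangle(2, 5).area());  // 10
    }
}
```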
Other constructor-related topics (read these only when you are comfortable with the basic constructor examples):
The linked list is one of the most important concepts and data structures to learn while preparing for interviews. Having a good grasp of Linked Lists can be a huge plus point in a coding interview.
Problem Statement
Given two linked lists, our task is to check whether the first list is present in the second list or not.
Problem Statement Understanding
Let’s try to understand the problem statement with the help of examples.
According to the problem statement, we will be given two linked lists list1 and list2, and we need to check whether list1 is present in list2 or not.
- If list1 is present in list2, we will output Found.
- Otherwise, we will output Not Found.
If the given linked lists are list1: 1→2→4 and list2: 1→2→1→2→4→3.
- As we can see, in list2 starting from the 3rd index and up to 5th index (1→2→4) (considering 1 based indexing), list2 contains all the elements of list 1 in the same order as they are present in list1. So we can say that list2 contains list1.
- We will output Found, as we found our list1 in list2.
Say, if the list1: 1→2→4 and list2: 1→2→1→2→3→4.
- As we can see that list2 does not contain all the elements of list1 in the same order as they were present in list1, so we will output Not Found.
Some more examples
Sample Input 1: list1 = 3→5, list2 =5→3→5.
Sample Output 1: Found
Sample Input 2: list1 = 1→3→4, list2 = 1→2→1→3→5.
Sample Output 2: Not Found
Remember: Here we are doing a sublist search, so only if all the elements of list1 are present in list2 in the same order as they are present in list1, and these elements are consecutive in list2, can we say that we found list1 in list2.

Approach and Algorithm
- Start traversing through both the lists.
- Match every node of the 2nd list (list2) with the first node of the 1st list (list1).
- If the first node of the 1st list matches with the current node of the 2nd list.
- Then, we have to check whether the remaining nodes of 1st List matches the nodes of 2nd list or not.
- If all nodes of 1st list (list1) are matched with 2nd list (list2), then return true.
- If all nodes of list1 didn’t match with list2 nodes, we will move forward in list2 and repeat the above process from step 2.
- Until any of the list1 or list2 becomes empty, we will repeat the above process.
- If our list1 got empty, then we can say that list1 found in list2, else not.
Code Implementation
#include <bits/stdc++.h>
using namespace std;

// Linked List node structure
struct Node {
    int data;
    Node* next;
};

// This searchList function will return true if list1 is present in list2
bool searchList(Node* list1, Node* list2)
{
    Node* p1 = list1, *p2 = list2;
    if (list1 == NULL && list2 == NULL)
        return true;
    if (list1 == NULL || (list1 != NULL && list2 == NULL))
        return false;
    while (list2 != NULL) {
        p2 = list2;
        while (p1 != NULL) {
            if (p2 == NULL)
                return false;
            else if (p1->data == p2->data) {
                p1 = p1->next;
                p2 = p2->next;
            }
            else
                break;
        }
        if (p1 == NULL)
            return true;
        p1 = list1;
        list2 = list2->next;
    }
    return false;
}

/* This function is used to print the nodes of a given linked list */
void printList(Node* node)
{
    while (node != NULL) {
        printf("%d ", node->data);
        node = node->next;
    }
}

// This function is used to add a new node to a linked list
Node *newNode(int key)
{
    Node *temp = new Node;
    temp->data = key;
    temp->next = NULL;
    return temp;
}

int main()
{
    Node *list1 = newNode(1);
    list1->next = newNode(2);
    list1->next->next = newNode(4);

    Node *list2 = newNode(1);
    list2->next = newNode(2);
    list2->next->next = newNode(1);
    list2->next->next->next = newNode(2);
    list2->next->next->next->next = newNode(4);
    list2->next->next->next->next->next = newNode(3);

    searchList(list1, list2) ? cout << "Found" : cout << "Not Found";
    return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct Node {
    int data;
    struct Node* next;
};

// Returns true if first list is present in second list
bool findList(struct Node* first, struct Node* second)
{
    struct Node* ptr1 = first, *ptr2 = second;

    // If both linked lists are empty, return true
    if (first == NULL && second == NULL)
        return true;

    // Else if one is empty and the other is not, return false
    if (first == NULL || (first != NULL && second == NULL))
        return false;

    // Traverse the second list by picking nodes one by one
    while (second != NULL) {
        // Initialize ptr2 with current node of second
        ptr2 = second;

        // Start matching first list with second list
        while (ptr1 != NULL) {
            // If second list becomes empty and first not, return false
            if (ptr2 == NULL)
                return false;
            // If data part is same, go to next of both lists
            else if (ptr1->data == ptr2->data) {
                ptr1 = ptr1->next;
                ptr2 = ptr2->next;
            }
            // If not equal then break the loop
            else
                break;
        }

        // Return true if first list gets traversed completely,
        // which means it is matched
        if (ptr1 == NULL)
            return true;

        // Initialize ptr1 with first again
        ptr1 = first;

        // And go to next node of second list
        second = second->next;
    }
    return false;
}

/* Function to print nodes in a given linked list */
void printList(struct Node* node)
{
    while (node != NULL) {
        printf("%d ", node->data);
        node = node->next;
    }
}

// Function to add new node to linked lists
struct Node *newNode(int key)
{
    struct Node* temp = (struct Node*)malloc(sizeof(struct Node));
    temp->data = key;
    temp->next = NULL;
    return temp;
}

/* Driver program to test above functions */
int main()
{
    /* Let us create two linked lists to test the above functions.
       Created lists shall be a: 1->2->3->4 b: 1->2->1->2->3->4 */
    struct Node *a = newNode(1);
    a->next = newNode(2);
    a->next->next = newNode(3);
    a->next->next->next = newNode(4);

    struct Node *b = newNode(1);
    b->next = newNode(2);
    b->next->next = newNode(1);
    b->next->next->next = newNode(2);
    b->next->next->next->next = newNode(3);
    b->next->next->next->next->next = newNode(4);

    findList(a, b) ? printf("LIST FOUND") : printf("LIST NOT FOUND");
    return 0;
}
class Node:
    def __init__(self, value=0):
        self.value = value
        self.next = None

# Returns true if first list is present in second list
def findList(first, second):
    if not first and not second:
        return True
    if not first or not second:
        return False
    ptr1 = first
    ptr2 = second
    while ptr2:
        ptr2 = second
        while ptr1:
            if not ptr2:
                return False
            elif ptr1.value == ptr2.value:
                ptr1 = ptr1.next
                ptr2 = ptr2.next
            else:
                break
        if not ptr1:
            return True
        ptr1 = first
        second = second.next
    return False

node_a = Node(1)
node_a.next = Node(2)
node_a.next.next = Node(4)

node_b = Node(1)
node_b.next = Node(2)
node_b.next.next = Node(1)
node_b.next.next.next = Node(2)
node_b.next.next.next.next = Node(4)
node_b.next.next.next.next.next = Node(3)

if findList(node_a, node_b):
    print("LIST FOUND")
else:
    print("LIST NOT FOUND")
Output
Found
Time Complexity: O(M * N), where M is the size of list1 and N is the size of list2.
So, in this blog, we have learned how to search for a linked list in another list. If you want to solve more questions on linked lists, curated by our expert mentors at PrepBytes, you can follow this link: Linked List.
Investors in BCE Inc (Symbol: BCE) saw new options become available today, for the November 15th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the BCE options chain for the new November 15th contracts. One put contract of particular interest is at the $45.00 strike, which could represent an attractive alternative to paying $48.10/share today: selling-to-open that put commits the investor to purchase the stock at $45.00 if it is exercised, while collecting a premium that represents a 0.11% return on the cash commitment, or 0.76% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for BCE Inc, and highlighting in green where the $45.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $50.00 strike price has a current bid of 20 cents. If an investor was to purchase shares of BCE stock at the current price level of $48.10/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $50.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 4.37% if the stock gets called away at the November 15th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if BCE shares really soar, which is why looking at the trailing twelve month trading history for BCE Inc, as well as studying the business fundamentals becomes important. Below is a chart showing BCE's trailing twelve month trading history, with the $50.00 strike highlighted in red:
Considering that the $50.00 strike represents an approximate 4% premium to the current trading price of the stock (in other words, it is out-of-the-money by that percentage), there is also the possibility that the covered call contract would expire worthless, in which case the investor would keep both their shares and the premium collected; the current analytical data suggest the odds of that happening are 80%. Should the covered call expire worthless, the premium would represent a 0.42% boost of extra return to the investor, or 2.86% annualized, which we refer to as the YieldBoost.
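The arithmetic behind these annualized figures is simple scaling of the holding-period return by 365 over the days remaining to expiration. A quick sketch (the 53-day window is an assumption: it is the gap implied by a late-September publication date, and it reproduces the quoted numbers):

```python
from datetime import date

def annualize(period_return, days_to_expiry, year_days=365):
    # Simple (non-compounded) scaling of a holding-period return.
    return period_return * year_days / days_to_expiry

# Assumed publication date; chosen because it makes the quoted figures line up.
days = (date(2019, 11, 15) - date(2019, 9, 23)).days  # 53

call_boost = 0.20 / 48.10  # 20-cent premium against the $48.10 share price
print(round(call_boost * 100, 2))                    # 0.42, the quoted boost
print(round(annualize(call_boost, days) * 100, 2))   # 2.86, the annualized figure
print(round(annualize(0.0011, days) * 100, 2))       # 0.76, the put side's figure
```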
The implied volatility in the put contract example is 30%, while the implied volatility in the call contract example is 15%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 250 trading day closing values as well as today's price of $48.10) to be 13%.
Introduction
What if your customer asks you to integrate Dynamics AX with your mobile application? In that case you need to be smart. If you have worked with WCF, you will know that when you host a WCF service you get a WSDL link which you can use to add a service reference.
Dynamics AX has AIF (Application Integration Framework), which is used to integrate Microsoft Dynamics AX with external software systems.
MSDN says:
"Application Integration Framework (AIF) provides an extensible framework that supports multiple asynchronous transports, as well as synchronous transport using Web services, to reliably exchange documents in XML format with trading partners or other
systems.
An exchange starts with a document, that is, a document class defined using Microsoft Dynamics AX business logic. The document is serialized into XML and header information is added to create a message, which may then be transferred into or
out of your Microsoft Dynamics AX system (called the local endpoint within AIF)."
Now the issue here is that you need to integrate different tables of Microsoft Dynamics AX and need to GET and POST data to them. Two options here:
Once we are done with developing our ASP.NET Web API, we are going to host it using an IIS server and use a public IP to let our mobile application communicate with Microsoft Dynamics AX through the Web API. Being a mobile guy, I didn't have much of an idea of Dynamics AX; it took me a few hours to understand it, and in this tutorial I will start from scratch as if you have never interacted with Dynamics AX before. So, let's start.
Let's start with a simple table: I will first create a simple table having no more than three attributes.
We need to create a unique index field as well, because we are targeting the READ and FIND methods of the document service. In the AOT, under the Query node, create a new query named AxdStudentInfo, create a new data source, and add StudentInfoTable. Right-click on the Fields node and set the Dynamic property to Yes.
Now right-click on the "StudentInfoTable" node and set its Update property to Yes, so that update methods will be created against this query object.
Now, in the development environment, from the top menu, click on the AIF item and then select Create Document Service. This launches a wizard.
Select the following options in the wizard; pick the query we created above in the drop-down shown.
Right-click on AxdStudentInfo and compile again. You have to delete the two methods cacheObject and cacheRecordRecord in AxStudentInfoTable and compile the whole project.
Now right-click on StudentInfoService and set the namespace.
Now right-click on the Service Groups node and create a new service group.
Drag and drop the service node under the StudentInfoService group, so the student service will be available inside the service group.
Now generate incremental CIL, or right-click on StudentInfoServiceGroup and click Deploy.
After successful deployment you will see the following info box.
Now open a new workspace, go to the System Administration module, and open Inbound ports under Application Integration Framework.
Copy the WSDL URL; now our real work starts.
Create a new ASP.NET project and select Web API from the templates.
Create a new controller and create a POST method in it.
Now we need to consume our WSDL link. For that, we first add a service reference in Solution Explorer: enter your WSDL link and complete the wizard.
After you have added the service reference, you need to consume it and wire it up to the POST method of your Web API.
public void Post([FromBody]Person value)
{
ServiceReference2AxdStudentQuery _TableQuery = new ServiceReference2AxdStudentQuery();
ServiceReference2AxdEntity_StudentsTable_1 _StudentTable = new ServiceReference2AxdEntity_StudentsTable_1();
_StudentTable.Name = value.Name;
_StudentTable.ID = value.ID;
_TableQuery.StudentsTable_1 = new ServiceReference2AxdEntity_StudentsTable_1[1] { _StudentTable };
ServiceReference2StudentQueryServiceClient _Client = new ServiceReference2StudentQueryServiceClient();
ServiceReference2CallContext _callContext = new ServiceReference2CallContext();
_callContext.Company = "USMF";
ServiceReference2EntityKey[] entityKeys = _Client.create(_callContext, _TableQuery);
}
Now what you need to do is open the Web API in Google Chrome, as it renders JSON more easily. We are using JSON as the data format, so I need to add two lines to change the default from XML to JSON. Add these lines to the WebApiConfig.cs file:
var appXmlType = config.Formatters.XmlFormatter.SupportedMediaTypes.FirstOrDefault(t => t.MediaType == "application/xml");
config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType);
To test it, you just need to post to the URL of your Web API using any tool; I personally use Fiddler. When you open the table in the Table Browser, you will notice that your POST has added the data to the Dynamics student table. We are done with the integration; the next part is to integrate it with mobile.
All the work I have done here is in a server environment. To use the Web API, I need to host it using IIS; here is a blog that you can follow.
Once you have hosted your Web API, you will have a URL that you can use to hit it.
You can consume that API on any of the platform that supports JSON. e.g. Windows , Android, iOS or any platform including JAVA.
Let's look at the procedure to consume it in a Universal Windows Platform application.
Whenever you are parsing JSON, you need to request the data using a client; it's the same whether you parse the Json.NET way or the native way.
First you need to initialize a client to request the JSON data:
var client = new HttpClient();
Then you need to supply the request URL for the JSON data:
HttpResponseMessage response = await client.GetAsync(new Uri("")); // put your hosted Web API URL inside the Uri
It's time to get the JSON string so that we can parse it:
var jsonString = await response.Content.ReadAsStringAsync();
Now here things become a bit tricky. My JSON consists of a simple JSON array; your JSON may look a bit different from mine. An easy way to see its shape is to paste the JSON (or its URL) into an online JSON-to-C# converter, which will return the corresponding C# class for your JSON; you can use that to cast your JSON object.
As this JSON is a simple array, I just need to cast it to a JsonArray and get the objects out of it. I have saved my JSON's corresponding C# class as "RootObject".
JsonArray root = JsonValue.Parse(jsonString).GetArray();
for (uint i = 0; i < root.Count; i++)
{
    // parse each element accordingly (for example, add it to obs_mydata)
}
Here obs_mydata is the ObservableCollection to which we are adding the items, whereas RootObject is the C# class corresponding to my JSON. Now you just need to bind your ObservableCollection to the UI. We would strongly suggest you follow the MVVM pattern and get the most out of Blend for Visual Studio.
To get the corresponding C# class for your JSON, you can use an online JSON-to-C# generator tool.
Some MSDN documentation has been used as part of this blog.
Before you can run your trainer application with Cloud Machine Learning Engine, your code and any dependencies must be placed in a Google Cloud Storage location that your Google Cloud Platform project can access. This page shows you the steps to package your code and stage it in the cloud. You can find a detailed description of Cloud ML Engine's packaging requirements on the training concepts page.
Before you begin
Packaging your trainer code is one step in the model training process. You should have completed the following steps before you move your application to the cloud:
Configure your development environment.
Develop your trainer application with TensorFlow.
In addition, you'll get the best results if you:
Know all of the Python libraries that your trainer depends on, whether custom or freely available through PyPI.
Test your trainer locally; training with Cloud ML Engine incurs charges to your account for the resources used.
Packaging and uploading your code and dependencies
How you get your training application to its destination Google Cloud Storage location depends on these factors:
Will you use the gcloud tool (recommended) or code your own solution?
Do you need to create your package manually?
Do you have additional dependencies that aren't included in the Cloud ML Engine runtime that you are using?
Gather required information
You need to gather the following information to package your trainer:
- Package path
- If you are using the gcloud command-line tool to package your trainer, you must include the local path to your trainer source code. Refer to the recommended trainer project structure for more details.
- Job directory
- This is the root directory for your job's output. It must be a Cloud Storage path to a location that your project has write access to.
- Dependency paths
- If you have custom dependencies you need to have the URI to the package of each one. If you use the gcloud tool to run your training job, you can specify local directories and the tool will stage them in the cloud for you. If you run training jobs using the Cloud ML Engine API directly, you must stage your dependency packages in a Cloud Storage location yourself and then use the paths to them there.
- Module name
- The name of your trainer's main module. This is the Python file that you run to start your trainer. This name must use the namespace notation for your package. If you use the recommended trainer project structure, your module name is trainer.task.
- Staging bucket
- The Google Cloud Storage location where your trainer is staged so that the training service can copy it to the training instances needed to run your job. If you package and stage yourself, you just need to copy your trainer package here and remember the path for when you start the training job. If you are using the gcloud tool to package your trainer, you specify this value and the tool copies your package for you. In the gcloud tool case, you can omit this value if you specify a job directory to have the tool use the output directory for staging.
To use the gcloud tool to package and upload (recommended)
The simplest way to package your trainer and upload it along with its dependencies is to use the gcloud tool:
As part of your gcloud ml-engine jobs submit training command:
Set the --package-path flag to the path to the root directory of your trainer application.
Set the --module-name flag to the name of your application's main module, using your package's namespace dot notation (for example, in the recommended case of your main module being .../my_application/trainer/task.py, the module name is trainer.task).
Set the --staging-bucket flag to the Cloud Storage location that you want the tool to use to stage your training and dependency packages.
It can be helpful to define your configuration values as environment variables:
TRAINER_PACKAGE_PATH="/path/to/your/application/sources"
MAIN_TRAINER_MODULE="trainer.task"
PACKAGE_STAGING_PATH="gs://your/chosen/staging/path"
The example training job submission command below packages a trainer application using the environment variables just defined, as well as these, which are explained more fully in the how-to page about starting training jobs:
- Job name
The name for your job, which must be unique within your project. A common approach to ensuring meaningful, unique job names is to append the current date and time to the model name. For example, in BASH:
now=$(date +"%Y%m%d_%H%M%S")
JOB_NAME="census_$now"
- Job directory
The Cloud Storage location that you want to use for your training outputs. Remember to use a location in a bucket in the same region that you run the job.
JOB_DIR="gs://your/chosen/job/output/path"
Here is the example command:
gcloud ml-engine jobs submit training $JOB_NAME \
    --job-dir $JOB_DIR \
    --package-path $TRAINER_PACKAGE_PATH \
    --module-name $MAIN_TRAINER_MODULE \
    --region us-central1 \
    -- \
    --trainer_arg_1 value_1 \
    ...
    --trainer_arg_n value_n
To use the gcloud tool to upload your existing package
If you build your package yourself, you can upload it with the gcloud tool:
To use the gcloud tool to use an existing package already in the cloud
If you build your package yourself and upload it to a Cloud Storage location, you can use it with gcloud:
Working with dependencies
Your trainer may have many dependencies (packages that you import in your code) that you need to make it work. Your training job runs on training instances (specially-configured virtual machines) that have many common Python packages already installed. Check the packages included in the runtime version that you use for training and note any of your dependencies that are not already installed.
Extra dependencies come in two types: common Python packages available on PyPI, and custom packages (such as packages that you developed yourself, or those internal to an organization). There is a different procedure for each type.
To include additional PyPI dependencies
If your trainer relies on common Python packages that aren't part of the training instance image, you can include them as dependencies of your trainer package. Cloud ML Engine uses pip to install your package, which looks for configured dependencies and installs them.
Create a file called setup.py in the root directory of your trainer application (one directory up from your trainer directory if you follow the recommended pattern). Enter the following script, inserting your own values:

from setuptools import find_packages
from setuptools import setup

REQUIRED_PACKAGES = ['some_PyPI_package>=1.0']

setup(
    name='trainer',
    version='0.1',
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True,
    description='My trainer application package.'
)
If you are using the gcloud command-line tool to submit your training job, it will automatically use your setup.py file to make the package. If you submit the training job without using the tool, you need to run this script yourself using the following command:
python setup.py sdist
See the section about manually packaging your trainer on this page for more information.
To include custom dependencies with your package
If your application uses libraries that aren't included in the training instance for the runtime version you are using, you can upload them along with your application package:
As part of your gcloud ml-engine jobs submit training command:
Specify your training application either as a path to the directory where your source code is stored or as the path to a built package.
Set the --packages argument to include each dependency in a comma-separated list.
This example training jobs submit command uses a path to the application's sources and includes packaged dependencies named dep1.tar.gz and dep2.whl (one each of the supported package types):
gcloud ml-engine jobs submit training $JOB_NAME \
    --staging-bucket $PACKAGE_STAGING_PATH \
    --package-path /Users/mlguy/models/faces/trainer \
    --module-name $MAIN_TRAINER_MODULE \
    --packages dep1.tar.gz,dep2.whl \
    --region us-central1 \
    -- \
    --trainer_arg_1 value_1 \
    ...
    --trainer_arg_n value_n
This example training jobs submit command uses a built training application package and includes packaged dependencies named dep1.tar.gz and dep2.whl (one each of the supported package types):
gcloud ml-engine jobs submit training $JOB_NAME \
    --staging-bucket $PACKAGE_STAGING_PATH \
    --module-name $MAIN_TRAINER_MODULE \
    --packages trainer-0.0.1.tar.gz,dep1.tar.gz,dep2.whl \
    --region us-central1 \
    -- \
    --trainer_arg_1 value_1 \
    ...
    --trainer_arg_n value_n
Building your trainer package manually
Packaging Python code is an expansive topic that is largely beyond the scope of this documentation. For convenience, this section provides an overview of using Setuptools, though there are other libraries you can use to do the same thing.
Packaging steps
In each directory of your application package, include a file named __init__.py, which may be empty or may contain code that runs when that package (any module in that directory) is imported.
In the root directory of your package, include the Setuptools file named setup.py that includes:
Import statements for setuptools.find_packages and setuptools.setup.
A call to setuptools.setup with include_package_data set to True.
Run python setup.py sdist to create your package.
Recommended project structure
You can structure your training application however you like. However, the following structure is commonly used in Cloud ML Engine samples, and having your project's organization be similar to the samples can make them easier to follow.
Use a main project directory, containing your setup.py file.
Use a subdirectory named trainer to store your main application module.
Name your main trainer application module task.py.
Make whatever other subdirectories in your main project directory that you need to implement your application.
Create an __init__.py file in every subdirectory. These files are used by Setuptools to identify directories with code to package, and may be empty.
In the samples, the trainer directory usually contains two source files in addition to task.py: model.py and util.py. The breakdown of code is:
task.py contains the trainer logic that manages the job.
model.py contains the TensorFlow graph code—the logic of the model.
util.py, if present, contains code to run the trainer.
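Put together, the recommended layout looks like this (directory names other than trainer/ and the helper file names are illustrative):

```
my_application/        # main project directory; run gcloud from here
    setup.py
    trainer/
        __init__.py    # may be empty
        task.py        # main module, run as trainer.task
        model.py       # TensorFlow graph code
        util.py        # optional helper code
    other_subpackage/
        __init__.py
```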
If you use the gcloud tool to package your application, you don't need to create a setup.py or any __init__.py files. When you run gcloud ml-engine jobs submit training, you can set the --package-path argument to the path of your main project directory, or you can run the tool from that directory and omit the argument altogether.
To manually upload packages
You can upload your packages manually if you have a reason to. The most common reason is calling the Cloud ML Engine API directly to start your training job. The easiest way to manually upload your package and any custom dependencies to your Google Cloud Storage bucket is to use the gsutil tool:
gsutil cp /local/path/to/package.tar.gz gs://bucket/path/
However, if you can use the command line for this operation, you should just use gcloud ml-engine jobs submit training to upload your packages as part of setting up a training job. If you can't use the command line, you can use the Google Cloud Storage client library to upload programmatically.
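As a sketch of the programmatic route (function and path names are illustrative; the upload itself assumes the google-cloud-storage package is installed and default credentials are configured):

```python
def staging_uri(bucket_name, dest_path):
    # The gs:// path you would later reference when submitting the job.
    return "gs://{}/{}".format(bucket_name, dest_path)

def stage_package(local_path, bucket_name, dest_path, project=None):
    # Deferred import so the helper can be imported even where the
    # client library is not installed.
    from google.cloud import storage
    client = storage.Client(project=project)
    client.bucket(bucket_name).blob(dest_path).upload_from_filename(local_path)
    return staging_uri(bucket_name, dest_path)
```

For example, stage_package("dist/trainer-0.1.tar.gz", "my-bucket", "staging/trainer-0.1.tar.gz") would upload the built package and return its gs:// URI.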
Remember that you can still use the gcloud tool to run training with a training application package that is already in the cloud.
What's next
Perform the next steps of the training process:
Configure and run a training job.
Monitor your training job while it runs.
Get more detail about packaging and uploading your trainer. | https://cloud.google.com/ml-engine/docs/packaging-trainer | CC-MAIN-2017-43 | refinedweb | 1,886 | 53.51 |
#include <rte_crypto_sym.h>
Authentication / Hash transform data.
This structure contains data relating to an authentication/hash crypto transforms. The fields op, algo and digest_length are common to all authentication transforms and MUST be set.
Definition at line 367 of file rte_crypto_sym.h.
Authentication operation type
Definition at line 368 of file rte_crypto_sym.h.
Authentication algorithm selection
Definition at line 370 of file rte_crypto_sym.h.
pointer to key data
Definition at line 374 of file rte_crypto_sym.h.
key length in bytes
Definition at line 375 of file rte_crypto_sym.h.
Length of valid IV data.
Authentication key data. The authentication key length MUST be less than or equal to the block size of the algorithm. It is the caller's responsibility to ensure that the key length is compliant with the standard being used (for example RFC 2104, FIPS 198a).
Starting point for Initialisation Vector or Counter, specified as number of bytes from start of crypto operation (rte_crypto_op); the number of valid IV bytes is given by the length field in the iv struct.
For optimum performance, the data pointed to SHOULD be 8-byte aligned.
Definition at line 385 of file rte_crypto_sym.h.
Initialisation vector parameters
Length of the digest to be returned. If the verify option is set, this specifies the length of the digest to be compared for the session.
It is the caller's responsibility to ensure that the digest length is compliant with the hash algorithm being used. If the value is less than the maximum length allowed by the hash, the result shall be truncated.
Definition at line 421 of file rte_crypto_sym.h. | https://doc.dpdk.org/api-20.11/structrte__crypto__auth__xform.html | CC-MAIN-2022-27 | refinedweb | 253 | 59.8 |
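As a usage sketch, the transform is typically filled in as part of an rte_crypto_sym_xform; the values here (HMAC-SHA1 with a 20-byte key and a 12-byte truncated digest) are illustrative, so check your PMD's reported capabilities:

```c
#include <rte_crypto_sym.h>   /* requires DPDK development headers */

static uint8_t hmac_key[20];  /* illustrative key buffer; fill with real key material */

static struct rte_crypto_sym_xform auth_xform = {
    .type = RTE_CRYPTO_SYM_XFORM_AUTH,
    .next = NULL,
    .auth = {
        .op            = RTE_CRYPTO_AUTH_OP_GENERATE,  /* generate, not verify */
        .algo          = RTE_CRYPTO_AUTH_SHA1_HMAC,
        .key           = { .data = hmac_key, .length = sizeof(hmac_key) },
        .iv            = { .offset = 0, .length = 0 },  /* HMAC-SHA1 uses no IV */
        .digest_length = 12,                            /* truncated digest */
    },
};
```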
I have the following plot, built with Python and Seaborn using the factorplot() method. Is it possible to use the line style as the legend key, replacing the legend based on line color on the right?
graycolors = sns.mpl_palette('Greys_r', 4)
g = sns.factorplot(x="k", y="value", hue="class", palette=graycolors, data=df, linestyles=["-", "--"])
I have been looking for a way to put the line style in the legend, as in plain matplotlib, but I have not yet found how to do this in seaborn. However, to make the data clear in the legend, I have used different markers:
import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# creating some data
n = 11
x = np.linspace(0, 2, n)
y = np.sin(2 * np.pi * x)
y2 = np.cos(2 * np.pi * x)
df = pd.DataFrame({'x': np.append(x, x),
                   'y': np.append(y, y2),
                   'class': np.append(np.repeat('sin', n), np.repeat('cos', n))})

# palette definition (from the question above)
graycolors = sns.mpl_palette('Greys_r', 4)

# plot the data with the markers
# note that I put legend=False to move it up (otherwise it was blocking the graph)
g = sns.factorplot(x="x", y="y", hue="class", palette=graycolors, data=df,
                   linestyles=["-", "--"], markers=['o', 'v'], legend=False)

# placing the legend up
g.axes[0][0].legend(loc=1)

# showing graph
plt.show()
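A plain-matplotlib alternative worth noting: build the legend from proxy Line2D artists, so the entries are keyed by line style rather than color (the labels and saved filename here are illustrative; with factorplot you would pass the same handles to g.axes[0][0].legend(...)):

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering; drop this line in an interactive session
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], color="0.3", linestyle="-")
ax.plot([0, 1], [1, 0], color="0.3", linestyle="--")

# Proxy artists: legend entries distinguished by line style, not color
handles = [Line2D([0], [0], color="k", linestyle="-", label="class A"),
           Line2D([0], [0], color="k", linestyle="--", label="class B")]
ax.legend(handles=handles, loc="best")
fig.savefig("linestyle_legend.png")
```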