text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Thats good :) randomising is fairly easy to put in. In case you're interested, this is what I know about generating random numbers:
use this to seed the random number generator at the beginning of the program
#include <time.h>
srand(time(NULL));
Or if you are making a Windows app, you can use
srand((int)GetTickCount());
use this to get just any random number
rand();
To get a random number from "start" to "end", do this:
int start = 5;
int end = 10;
int randomNumber = (rand() % (end - start + 1)) + start;
To make it easier, make a macro:
Hope it helps :)Hope it helps :)Code:
#define RANDOM(start,end) ((rand() % ((end) - (start) + 1)) + (start)) | https://cboard.cprogramming.com/game-programming/23851-first-windows-game-finally-done-3-print.html | CC-MAIN-2017-09 | refinedweb | 113 | 52.87 |
Thanks in advance for your help. The problem is this: I am supposed to read an array from a file and ask the user to input a name to search for within the file. If the name is there then return the position number in the file, if not output an error message. I get my program to read the file but when it searches for the name it always comes back false. What am I missing? Here is my code:
#include <iostream> #include <fstream> using namespace std; int binarySearch(string[], int, string); const int SIZE = 20; int main() { //define array and input string names[SIZE]; string peopleSearch; int results; char ag; ifstream dataIn; dataIn.open("sortedNames.txt"); do { //get user data cout << "Enter the name of the person you would like to search for (Last, First): \n"; cin.ignore(); getline(cin, peopleSearch); cout << endl; //search for the person results = binarySearch(names, SIZE, peopleSearch); //return -1 if person not found if (results = -1) { cout << "That person does not exist in this array. You might try \n"; cout << ". That is a great site for finding people."; cout << endl; cout << endl; } else { //Otherwise results contain info cout << "Congratulations! You have found " << peopleSearch << " in the array \n"; cout << "in position " << results << "of the array."; cout << endl; cout << endl; } cout << "Would you like to run another person search? Y/N: "; cin >> ag; cout << endl; }while (ag == 'y' || ag == 'Y'); dataIn.close(); system("pause"); return 0; } int binarySearch(string array[], int size, string value) { int first, last = size - 1, middle, position = -1; bool found = false; string s1, s2; int index; int pos; while (!found && first <= last) { middle = (first + last) / 2; if (array[middle] == value) { for(int i = 0; i < size; i++) { for(int h = 0; h < size; h++) { if (s1 == s2) { found = true; position = middle; } } } } else if (array[middle] > value) last = middle - 1; else first = middle + 1; } return position; } | https://www.daniweb.com/programming/software-development/threads/180152/binary-search-array-from-a-file-help | CC-MAIN-2019-04 | refinedweb | 314 | 69.52 |
Tag: Windows7 Phone Application
Reading coordinate values using Accelerometer API in Windows Phone 7
In this post we will see how to read different axis values using Accelerometer API. To work with Accelerometer API first you need to add reference of Microsoft.Devices.Sensors Then below namespace, Now to capture X Axis, Y Axis and Z Axis value you need to follow below steps Step 1 Create task object of Accelerometer […] […]
Smooth Streaming on Windows Phone 7
Read […]
Changing Application Tile in Windows Phone 7
When you create a Windows Phone 7 application, by default newly created app will have Title Background Image Icon Value set to default as below , If you will notice by default Title would be the same as of name of the Windows Phone Application project. On running you will get Application tile as below, […]
Viewing Flickr Images on Windows 7.1 Phone or Mango Phone […]
Fetching Mobile Operator Name in Windows 7.1 Phone [Mango]
In this quick post I will show you, How to fetch Mobile operator name in Windows 7 Phone? Design the page In content Grid, I have put a button. On click event of button, in message box we will display Mobile Operator Name Write Code Behind Add Namespace On the click event of Button we […]
Get Address from Contact in Windows Phone 7.1 [Mango] […] | https://debugmode.net/tag/windows7-phone-application/ | CC-MAIN-2022-40 | refinedweb | 225 | 52.02 |
i have dataset :
recency;frequency;monetary
21;156;41879955
13;88;16850284
8;74;79150488
2;74;26733719
9;55;16162365
...;...;...
DataFrame
df = pd.DataFrame(datas, columns=['userid', 'recency', 'frequency', 'monetary'])
df['recency'] = df['recency'].astype(float)
df['frequency'] = df['frequency'].astype(float)
df['monetary'] = df['monetary'].astype(float)
df['recency'] = pd.qcut(df['recency'].values, 5).codes + 1
df['frequency'] = pd.qcut(df['frequency'].values, 5).codes + 1
df['monetary'] = pd.qcut(df['monetary'].values, 5).codes + 1
df['frequency'] = pd.qcut(df['frequency'].values, 5).codes + 1
ValueError: Bin edges must be unique: array([ 1., 1., 2., 4., 9., 156.])
I ran this in Jupyter and placed the exampledata.txt to the same directory as the notebook.
Please note that the first line:
df = pd.DataFrame(datas, columns=['userid', 'recency', 'frequency', 'monetary'])
loads the colums
'userid' when it isn't defined in the data file. I removed this column name.
import pandas as pd def pct_rank_qcut(series, n): edges = pd.Series([float(i) / n for i in range(n + 1)]) f = lambda x: (edges >= x).argmax() return series.rank(pct=1).apply(f) datas = pd.read_csv('./exampledata.txt', delimiter=';') df = pd.DataFrame(datas, columns=['recency', 'frequency', 'monetary']) df['recency'] = df['recency'].astype(float) df['frequency'] = df['frequency'].astype(float) df['monetary'] = df['monetary'].astype(float) df['recency'] = pct_rank_qcut(df.recency, 5) df['frequency'] = pct_rank_qcut(df.frequency, 5) df['monetary'] = pct_rank_qcut(df.monetary, 5)
The problem you were seeing was a result of pd.qcut assuming 5 bins of equal size. In the data you provided,
'frequency' has more than 28% number 1's. This broke
qcut.
I provided a new function
pct_rank_qcut that addresses this and pushes all 1's into the first bin.
edges = pd.Series([float(i) / n for i in range(n + 1)])
This line defines a series of percentile edges based on the desired number of bins defined by
n. In the case of
n = 5 the edges will be
[0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
f = lambda x: (edges >= x).argmax()
this line defines a helper function to be applied to another series in the next line.
edges >= x will return a series equal in length to
edges where each element is
True or
False depending on whether
x is less than or equal to that edge. In the case of
x = 0.14 the resulting
(edges >= x) will be
[False, True, True, True, True, True]. By the taking the
argmax() I've identified the first index where the series is
True, in this case
1.
return series.rank(pct=1).apply(f)
This line takes the input
series and turns it into a percentile ranking. I can compare these rankings to the edges I've created and that's why I use the
apply(f). What's returned should be a series of bin numbers numbered 1 to n. This series of bin numbers is the same thing you were trying to get with:
pd.qcut(df['recency'].values, 5).codes + 1
This has consequences in that the bins are no longer equal and that bin 1 borrows completely from bin 2. But some choice had to be made. If you don't like this choice, use the concept to build your own ranking.
print df.head() recency frequency monetary 0 3 5 5 1 2 5 5 2 2 5 5 3 1 5 5 4 2 5 5 | https://codedump.io/share/Hn0p1BFYpOg5/1/why-use-pandas-qcut-return-valueerror-bin-edges-must-be-unique | CC-MAIN-2016-50 | refinedweb | 574 | 61.83 |
Every application has a special method called main(). The main() method marks the starting point in the program for the Java Virtual Machine. Here's a short example program:
public class Hello{
public static void main(String arg[]){
System.out.println("Hello");
}
}
When this program is compiled using javac then run with the command
>java Hello
the JVM loads Hello.class and looks in it for the main() method, then starts executing the code inside main()'s code block.
the JVM loads Hello.class and looks in it for the main() method, then starts executing the code inside main()'s code block.
Now, you'll notice there's a bunch of other stuff with main():
public static void main(String arg[]){
Because of how Java works, that stuff has to be there in a Java application. You can't just put "main(){" on the line by itself. The other stuff has a purpose, though it all looks very confusing, and certainly it looks like a lot of extra junk in a short program.
public allows the method to be accessed from outside the class (and its package--we'll get into that later.) If you leave out public, the JVM can't access main() since it's not available outside of Hello.
static says that this is the one and only main() method for this entire class (or program). You can't have multiple main()s for a class. If you leave static out, you'll get an error.
void says that main() doesn't pass back any data. Since the JVM wouldn't know what to do with any data, since it's not set up to accept data from main(), main() has to be of type void, which is to say that it's a method that doesn't pass any data back to the caller. If you leave this out main() won't have a data type, and all methods have to have a data type, even if it's void.
Inside main()'s parentheses is String arg[]. This is a way for the program to accept data from the host system when it's started. It's required that main() be able to accept data from the system when starting. And the data must be in the form of an array of Strings (or a variable-length list of Strings as of Java 5, but that's something we'll save for later.) The name "arg" can be whatever you want to make it. It's just a name I've given the array. It could just as well be:
public static void main(String fred[]){
I'd just have to be sure to use fred whenever I wanted to access the information that the system has passed to my application when it started, instead of arg.
Finally, after the parentheses, comes the open curly brace { that marks the start of main()'s code block.
* There are other types as well, but I'm limiting my discussion to Java SE/client side stuff for now.
* There are other types as well, but I'm limiting my discussion to Java SE/client side stuff for now. | http://beginwithjava.blogspot.com/2008/06/main.html | CC-MAIN-2018-39 | refinedweb | 529 | 79.19 |
point in projective 2D space More...
#include <vgl/vgl_vector_2d.h>
#include <vgl/vgl_point_2d.h>
#include <vgl/vgl_fwd.h>
#include <vcl_iosfwd.h>
#include <vcl_cassert.h>
Go to the source code of this file.
point in projective 2D space
Modifications Peter Vanroose - 4 July 2001 - Added geometric interface like vgl_point_2d Peter Vanroose - 1 July 2001 - Renamed data to x_ y_ w_, inlined constructors Peter Vanroose - 27 June 2001 - Added operator==
Definition in file vgl_homg_point_2d.h.
Definition at line 260 of file vgl_homg_point_2d.h.
Return the point at the centre of gravity of two given points.
Identical to midpoint(p1,p2). Invalid when both points are at infinity. If only one point is at infinity, that point is returned.
Definition at line 239 of file vgl_homg_point_2d.h.
Return the point at the centre of gravity of a set of given points.
There are no rounding errors when T is e.g. int, if all w() are 1.
Definition at line 251 of file vgl_homg_point_2d.h.
Are three points collinear, i.e., do they lie on a common line?.
Definition at line 198 of file vgl_homg.
Return true iff the point is at infinity (an ideal point).
The method checks whether |w| <= tol * max(|x|,|y|)
Definition at line 126 of file vgl_homg_point_2d.h.
Return the point at a given ratio wrt two other points.
By default, the mid point (ratio=0.5) is returned. Note that the third argument is T, not double, so the midpoint of e.g. two vgl_homg_point_2d<int> is not a valid concept. But the reflection point of p2 wrt p1 is: in that case f=-1.
Definition at line 227 of file vgl_homg_point_2d.h.
Adding a vector to a point gives a new point at the end of that vector.
If the point is at infinity, nothing happens. Note that vector + point is not defined! It's always point + vector.
Definition at line 145 of file vgl_homg_point_2d.h.
Adding a vector to a point gives the point at the end of that vector.
If the point is at infinity, nothing happens.
Definition at line 153 of file vgl_homg_point_2d.h.
The difference of two points is the vector from second to first point.
This function is only valid if the points are not at infinity.
Definition at line 132 of file vgl_homg_point_2d.h.
Subtracting a vector from a point is the same as adding the inverse vector.
Definition at line 160 of file vgl_homg_point_2d.h.
Subtracting a vector from a point is the same as adding the inverse vector.
Definition at line 167 of file vgl_homg_point_2d.h.
Write "<vgl_homg_point_2d (x,y,w) >" to stream.
Read x y w from stream. T, since the ratio of e.g. two vgl_vector_2d<int> need not be an int.
Definition at line 216 of file vgl_homg_point_2d.h. | http://public.kitware.com/vxl/doc/release/core/vgl/html/vgl__homg__point__2d_8h.html | crawl-003 | refinedweb | 464 | 79.56 |
Difference between revisions of "OpenSCAD ResizeMeshFeature"
Latest revision as of 12:35, 23 August 2021
Previous: Scale Mesh Feature
Description
Creates a new resized mesh object with independent sizing for each axis.
Usage
- Select the mesh object to be resized.
- Click the OpenSCAD → Scale Resize Feature... menu.
- Select the desired axis in the dialog, or enter your own custom axis to use and click OK.
- A new mesh object is created and resized, the original object is rendered hidden.
Limitations
- The new mesh object is not parametric to the original mesh object, which means any changes to the original object do not get reflected in the new mirrored object.
Notes
- The function does not modify the existing mesh, but returns a new mesh.
- The function can be accessed via python:
import OpenSCADUtils import Mesh #this assumes an existing object in the document named "Mesh" that you wish to mirror original_mesh = App.ActiveDocument.Mesh resized_mesh = OpenSCADUtils.resizemesh(original_mesh.Mesh, FreeCAD.Base.Vector(100,50,40)) #New mesh would be 100 mm on the x axis, 50 mm on the y axis, and 40 mm on the z axis. Mesh.show(resized_mesh)
Previous: Scale Mesh Feature | https://wiki.freecadweb.org/index.php?title=OpenSCAD_ResizeMeshFeature&diff=prev&oldid=979749 | CC-MAIN-2022-21 | refinedweb | 194 | 54.73 |
You might also want to get a better IDE. Notepad isn't particularly that good for writing code.
You might also want to get a better IDE. Notepad isn't particularly that good for writing code.
Teacher: "You connect with Internet Explorer, but what is your browser? You know, Yahoo, Webcrawler...?" It's great to see the educational system moving in the right direction;}
Okay, we have very good news. The calculator has came to life! YAY!
There's just two things to take care of with this.There's just two things to take care of with this.Code:/* C-Calculator by Matthew Nicola */ /* Mid-Term project of M. Fagan's C Programming Class */ #include <stdio.h> #include <conio.h> /*This is where the coding takes place:*/ #pragma argsused int main(void) { double mainval, /* The starting value for the program */ memory, /* Holds memory value for memory functions */ newval, /* This modifies the main value by operation */ exit1; /* Prevents breaking of loop */ char opr; /* Term for basic functions */ exit1 = 0; mainval = 0; memory = 0; /* Calculator Introduction */ printf("Welcome to Matt Nicola's C Calculator!\n"); printf("Enter an operator and then enter the value\n"); printf("to correspond with the operator. Enjoy!\n\n"); printf("Enter H in operator for basic commands.\n\n"); while (exit1 == 0) { printf("\nMEMORY: %lf", memory); printf("\nThe current value is %lf", mainval); printf("\n\nEnter the operator to begin: "); scanf("%c", &opr); switch (opr) { case '+': /* Addition */ printf("\nEnter a value to add to the main value. "); scanf("%lf", &newval); mainval = mainval + newval; break; case '-': /* Subtraction */ printf("\nEnter a value to subtract from the main value. "); scanf("%lf", &newval); mainval = mainval - newval; break; case '*': /* Multiplication */ printf("\nEnter a value to multiply the main value. "); scanf("%lf", &newval); mainval = mainval * newval; break; case '/': /* Division */ printf("\nEnter a value to divide the main value. "); scanf("%lf", &newval); mainval = mainval / newval; break; case 'C': /* Clear */ case 'c': mainval = 0.0; printf("\nThe main value has been cleared.\n"); break; case 'A': /* Add to memory */ case 'a': printf("\nEnter a value to add to the memory. "); scanf("%lf", &newval); memory = memory + newval; break; case 'D': /* Subtract from memory */ case 'd': printf("\nEnter a value to deduct from memory. "); scanf("%lf", &newval); memory = memory - newval; break; case 'R': /* Return the memory */ case 'r': mainval = memory; printf("\nMemory has returned to the main value.\n"); break; case 'H': /* Gives basic commands to user */ case 'h': printf("\nHELP CONTENTS\n\n"); printf("Basic operations comply to their sign.\n"); printf("c = Clear the main value.\n"); printf("a = Add value to memory.\n"); printf("d = Subtract a value from memory.\n"); printf("r = Place the memory value into the main value.\n"); printf("x = Clear the value of the memory.\n"); printf("e = Exit the program.\n\n"); break; case 'X': /* Clear the memory */ case 'x': memory = 0.0; printf("\nThe memory has been cleared.\n"); break; case 'E': /* Exit program */ case 'e': printf("\nThank you for using the calculator. Enter any key"); printf("\nand press ENTER to exit the program."); scanf(" "); exit1 = 1; break; default: printf("\nThat case does not comply to the operator.\n"); break; } /* End of Switch statement. */ } /* End of While statement. */ return (0); }
- Reduce the value to its hundredths place (o.oo)
- Take care of the bug that's making my program repeat something once everytime I enter a value.
At least it still works, but I just have one bug to take care of.
You may want to look into your use of scanf. scanf is nasty. I bet it's filling your buffer with newlines that you probably don't want.
Teacher: "You connect with Internet Explorer, but what is your browser? You know, Yahoo, Webcrawler...?" It's great to see the educational system moving in the right direction | https://cboard.cprogramming.com/c-programming/84901-basic-calculator-2.html | CC-MAIN-2017-47 | refinedweb | 632 | 59.7 |
Basics of hardware/firmware interface codesign
Editor’s Note: In this article, excerpted from Hardware/Firmware Interface Design, by Gary Stringham, the author provides seven principles of embedded hardware/firmware codesign that will ensure that such collaborations are a success.
Hardware and firmware engineering design teams often run into problems and conflicts when trying to work together. They come from different development environments, have different tool sets and use different terminology. Often they are in different locations within the same company or work for different companies.
The two teams have to work together, but often have conflicting differences in procedures and methods. Since their resulting hardware and firmware work have to integrate successfully to build a product, it is imperative that the hardware/firmware interface – including people, technical disciplines, tools and technology – be designed properly
This article provides seven principles hardware/firmware codesign that if followed will ensure that such collaborations are a success. They are:
- Collaborate on the Design;
- Set and Adhere to Standards;
- Balance the Load;
- Design for Compatibility;
- Anticipate the Impacts;
- Design for Contingencies; and
- Plan Ahead.
Collaborate on the Design
Designing and producing an embedded product is a team effort. Hardware engineers cannot produce the product without the firmware team; likewise, firmware engineers cannot produce the product without the hardware team.
Even though the two groups know that the other exists, they sometimes don’t communicate with each other very well. Yet it is very important that the interface where the hardware and firmware meet—the registers and interrupts—be designed carefully and with input from both sides.
Collaborating implies proactive participation on both sides. Figure 2.1 shows a picture of a team rowing a boat. Some are rowing on the right side and some on the left. There is a leader steering the boat and keeping the team rowing in unison. Both sides have to work and work together. If one side slacks off, it is very difficult for the other side and the leader to keep the boat going straight.
In order to collaborate, both the hardware and firmware teams should get together to discuss a design or solve a problem. Collaboration needs to start from the very early stages of conceptual hardware design all the way to the late stages of final firmware development. Each side has a different perspective, that is, a view from their own environment, domain, or angle.
Collaboration helps engineers increase their knowledge of the system as a whole, allowing them to make better decisions and provide the necessary features in the design. The quality of the product will be higher because both sides are working from the same agenda and specification.
Documentation is the most important collaborative tool. It ranges from high-level product specification down to low-level implementation details. The hardware specification written by hardware engineers with details about the bits and registers forming the hardware/ firmware interface is the most valuable tool for firmware engineers. They have to have this to correctly code up the firmware. Of course, it goes without saying that this specification must be complete and correct.
Software tools are available on the market to assist in collaborative efforts. In some, the chip specifications are entered and the tool generates a variety of hardware (Verilog, VHDL. . . ), firmware (C, C++ . . . ), and documentation (*.rtf, *.xls, *.txt . . . ) files. Other collaborative tools aid parallel development during the hardware design phase, such as co-simulation, virtual prototypes, FPGA-based prototype boards, and modifying old products.
Collaboration needs to happen, whether it is achieved by walking over to the desk on the same floor, or by using email, phone, and video conferencing, or by occasional trips to another site in the same country or halfway around the world.
This principle, collaboration, is the foundation to all of the other principles. As we shall see, all of the other principles require some amount of collaboration between the hardware and firmware teams to be successful.
Set and Adhere to Standards
Standards need to be set and followed within the organization. I group standards into industry standards and internal standards.
Industry standards exist in many areas, such as ANSI C, POSIX, PCI Express, and JTAG. Stay true to industry standards. Don’t change them. Changing a standard will break the protocol, interoperability, and any off-the-shelf components, such as IP, device drivers, and test suites.
For example, USB is widely known and used for connecting devices to computers. If this standard is adhered to, any USB-enabled device can plug into any computer and a well-defined behavior will occur (even if it is “unknown USB device installed”).
Industry standards evolve but still behave in a well-defined manner. USB has evolved, from 1.1, to 2.0, and now 3.0, but it still has a well-defined behavior when plugging one version into another.
By internal standards, I mean that you have set standards, rules, and guidelines that everybody must follow within your organization. Modules are written in a certain fashion, specific quality checks are performed, and documentation is written in a specified format. Common practices and methods are defined to promote reuse and avoid the complexity of multiple, redundant ways of doing the same thing.
In the same way that industry standards allow many companies to produce similar products, following internal standards allows many engineers to work together and encourages them to make refinements to the design. It provides consistency among modules, creation of common test suites and debugging tools, and it spreads expertise among all the engineers.
Look at the standards within your organization. Look for best practices that are being used and formalize them to make them into standards that everybody abides by. There are many methods and techniques in the industry that help with this, such as CMMI (capability maturity model integration, an approach for improving processes; sei.cmu.edu/cmmi), ISO (International Organization for Standardization, international standards for business, government, and society; iso.org), and Agile (software development methods promoting regular inspection and adaptation; agilealliance.org).
Adapt and change your internal standards as necessary. If a change needs to be made, it needs to go through a review and approval process by all interested parties.
Once such a change has been approved, make sure that it is published within your organization. Apply version numbers if necessary. There is no such thing as a “customized standard.” Something is either a standard or customized, but not both. If you break away from a standard, be sure you have a good reason.
Balance the Load
Hardware and firmware each have their strengths and weaknesses when it comes to performing tasks. The challenge is to achieve the right balance between the two. What applies in one embedded system will not necessarily apply in another. Differences exist in CPU performance, bus architectures, clock speeds, memory, firmware load, and other parameters.
Proper balance between hardware and firmware depends on the given product and constraints. It requires studying what the tradeoffs will be for a given situation and adjusting as necessary.
An embedded system without a proper balance between hardware and firmware may have bottlenecks, performance issues, and stability problems. If firmware has too much work, it might be slow responding to hardware and/or it might not be able to keep hardware busy.
Alternatively, hardware might have too big of a load, processing and moving data excessively, which may impact its ability to keep up with firmware requests. The quality of the system is also impacted by improper load balancing. The side with the heavier load may be forced to take shortcuts, fall behind, or lose some work.
A simple example to illustrate this point is to calculate the parity of a byte, a task often required in communication and storage applications. A firmware routine has to use a for() loop to look at each bit in the byte to calculate its parity. Listing 2.1 is an example in C of a for() loop to calculate parity by exclusive-ORing each bit.
// Generate the parity of a byte
char generate_parity (char byte);
{
char parity; // Contains the current parity value
char bit; // Contains the bit being looked at
char pos; // Bit position in the byte
parity = 0;
for (pos=0; pos<8; pos++) // For each bit in the byte
{
bit = byte >> pos; // Shift bit into position
bit &= 0x1; // Mask out the rest of the byte
parity ^= bit; // Exclusive OR with parity
}
return (parity);
}
The four-step for() loop translates into several assembly language steps that must be repeated eight times requiring multiple CPU cycles to do so. Other algorithms exist with various impacts on time and memory, but none can get close to the performance that can be achieved using a hardware implementation. Figure 2.2 and Listing 2.2 illustrate how hardware can exclusive-OR all eight bits together in a single clock cycle.
module parity(
Data,
Parity
);
input [7:0] Data;
output Parity;
assign Parity = Data[0]^Data[1]^Data[2]^Data[3]^
Data[4]^Data[5]^Data[6]^Data[7];
endmodule
Like the firmware version, the hardware version has several steps (levels of logic) to it. But since it can access all eight bits at once, the parity is generated in a single clock cycle. In fact, the parity can be generated while the byte is being transferred on the bus.
Another example is the tradeoff for calculating floating-point numbers. It is faster to calculate them in hardware with a floating-point coprocessor but it adds to the material costs of the product. Performing the calculations in firmware is slower but reduces parts costs.
The lesson here is to consider the tradeoffs between the hardware and firmware tasks and to determine whether the balance needs adjusting. Do the I/O buffers need to be expanded? Do interrupt priorities need to be adjusted? Are there firmware tasks that could be performed faster in hardware? Are there hardware tasks that require the flexibility of firmware? Are there handshaking protocols between hardware and firmware that could be better tuned? Are there ways to reduce material costs by having firmware do more?
Balancing the load between hardware and firmware requires collaboration between the hardware and firmware engineers. Engineers understand the load impact associated with a task in their own domain but may not fully realize how it impacts the other. A principle of economics applies here—two parties will be more productive if they work together, but with each working on the tasks that suits them best.
Most Read
Most Commented
Currently no items | http://www.embedded.com/design/mcus-processors-and-socs/4418036/Basics-of-hardware-firmware-interface-codesign | CC-MAIN-2016-07 | refinedweb | 1,756 | 54.02 |
Before you start
Learn what these tutorials can teach you and how you can get the most from them.
About this series
The Linux Professional Institute (LPI) certifies Linux), you should:
- Have several years of experience in installing and maintaining Linux on a number of computers for various purposes
- Have integration experience with diverse technologies and operating systems
- Have professional experience as, or training to be, an enterprise-level Linux professional (including having experience as a part of another role)
- Know advanced and enterprise levels of Linux administration including installation, management, security, troubleshooting, and maintenance.
- Be able to use open source tools to measure capacity planning and troubleshoot resource problems
- Have professional experience using LDAP to integrate with UNIX® services and Microsoft® Windows® services, including Samba, Pluggable Authentication Modules (PAM), e-mail, and Active Directory
- Be able to plan, architect, design, build, and implement a full environment using Samba and LDAP as well as measure the capacity planning and security of the services
- Be able create scripts in Bash or Perl or have knowledge of at least one system programming language (such as C)
The Linux Professional Institute doesn't endorse any third-party exam preparation material or techniques in particular.
About this tutorial
Welcome to "Installation and development," the second of six tutorials designed to prepare you for LPI exam 301. In this tutorial, you learn about LDAP server installation and configuration, and how to use Perl to access your new LDAP server.
This tutorial is organized according to the LPI objectives for this topic. Very roughly, expect more questions on the exam for objectives with higher weights.
Objectives
Table 2 shows the detailed objectives for this tutorial.
Table 2. Installation and development: Exam objectives covered in this tutorial
Prerequisites
To get the most these tutorials, you'll need a Linux workstation with the OpenLDAP package and support for PAM. Most modern distributions meet these requirements.
Compiling and installing OpenLDAP
This section covers material for topic 302.1 for the Senior Level Linux Professional (LPIC-3) exam 301. This topic has a weight of 3.
In this section, learn how to:
- Compile and configure OpenLDAP from source
- Understand OpenLDAP backend databases
- Manage OpenLDAP daemons
- Troubleshoot errors during installation
OpenLDAP is an open source application that implements an LDAP server and associated tools. Because it's open source, you can download the source code free of charge. The OpenLDAP project doesn't distribute binaries directly, but most major distributions package it themselves. In this tutorial, you learn how to install OpenLDAP from both source and packages.
Compiling from source
The first order of business is to download the latest version of OpenLDAP from the project site (see the Resources section for a download link). The project generally has two active versions available: One is a stable version, and the other is a test version. This tutorial was written using the stable versions 2.3.30 and 2.3.38. If you're following along, some of the directory names may be different depending on which version you have.
To extract the source code from the downloaded tarball, enter
tar -xzf openldap-stable-20070831.tgz. This
decompresses and untars the downloaded file into a directory. Change into the
new directory with
cd openldap-2.3.38 (substituting
your version of OpenLDAP as appropriate).
At this point, you're in the source directory. You must now configure the build
environment for your system and then build the software. OpenLDAP uses a script
called
configure that performs these actions. Type
./configure --help to see all the options available
to you. Some define where the files are installed to (such as
--prefix); others define the OpenLDAP features you
wish to build. Listing 1 lists the features and their defaults.
Listing 1. Configuration options relating to OpenLDAP features
SLAPD (Standalone LDAP Daemon) Options: --enable-slapd enable building slapd [yes] --enable-aci enable per-object ACIs (experimental) [no] --enable-cleartext enable cleartext passwords [yes] --enable-crypt enable crypt(3) passwords [no] --enable-lmpasswd enable LAN Manager passwords [no] --enable-spasswd enable (Cyrus) SASL password verification [no] --enable-modules enable dynamic module support [no] --enable-rewrite enable DN rewriting in back-ldap and rwm overlay [auto] --enable-rlookups enable reverse lookups of client hostnames [no] --enable-slapi enable SLAPI support (experimental) [no] --enable-slp enable SLPv2 support [no] --enable-wrappers enable tcp wrapper support [no] SLAPD Backend Options: --enable-backends enable all available backends no|yes|mod --enable-bdb enable Berkeley DB backend no|yes|mod [yes] --enable-dnssrv enable dnssrv backend no|yes|mod [no] --enable-hdb enable Hierarchical DB backend no|yes|mod [yes] --enable-ldap enable ldap backend no|yes|mod [no] --enable-ldbm enable ldbm backend no|yes|mod [no] --enable-ldbm-api use LDBM API auto|berkeley|bcompat|mdbm|gdbm [auto] --enable-ldbm-type use LDBM type auto|btree|hash [auto] --enable-meta enable metadirectory backend no|yes|mod [no] --enable-monitor enable monitor backend no|yes|mod [yes] --enable-null enable null backend no|yes|mod [no] --enable-passwd enable passwd backend no|yes|mod [no] --enable-perl enable perl backend no|yes|mod [no] --enable-relay enable relay backend no|yes|mod [yes] --enable-shell enable shell backend no|yes|mod [no] --enable-sql enable sql backend no|yes|mod [no] SLAPD Overlay Options: --enable-overlays enable all available overlays no|yes|mod --enable-accesslog In-Directory Access Logging overlay no|yes|mod [no] --enable-auditlog Audit Logging overlay no|yes|mod [no] --enable-denyop Deny Operation overlay no|yes|mod [no] --enable-dyngroup Dynamic Group overlay no|yes|mod [no] --enable-dynlist Dynamic List overlay no|yes|mod [no] --enable-lastmod Last Modification overlay no|yes|mod [no] --enable-ppolicy Password Policy overlay no|yes|mod [no] --enable-proxycache Proxy Cache overlay no|yes|mod [no] --enable-refint Referential Integrity overlay no|yes|mod [no] --enable-retcode Return Code testing overlay no|yes|mod [no] --enable-rwm Rewrite/Remap overlay no|yes|mod [no] --enable-syncprov Syncrepl Provider overlay no|yes|mod [yes] --enable-translucent Translucent Proxy overlay no|yes|mod [no] --enable-unique Attribute Uniqueness overlay no|yes|mod [no] --enable-valsort Value Sorting overlay no|yes|mod [no] SLURPD (Replication Daemon) Options: --enable-slurpd enable building slurpd [auto] Optional Packages: --with-PACKAGE[=ARG] use PACKAGE [ARG=yes] --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no) --with-subdir=DIR change default subdirectory used for installs --with-cyrus-sasl with Cyrus SASL support [auto] --with-fetch with fetch(3) URL support [auto] --with-threads with threads [auto] --with-tls with TLS/SSL support [auto] --with-yielding-select with implicitly yielding select [auto] --with-odbc with specific ODBC support iodbc|unixodbc|auto [auto] --with-gnu-ld assume the C compiler uses GNU ld [default=no] --with-pic try to use only PIC/non-PIC objects [default=use both] --with-tags[=TAGS] include additional configurations [automatic]
In Listing 1, you can see that many features are disabled by default, such as metadirectories and modules. In addition, many options are marked as "auto," which turns on features if the proper libraries are present on your system. Instead of relying on this automatic behavior, it's best to make a list of the required features and enable them. If you're missing any libraries, you'll get an error to this effect at compile time, rather than some time later.
Some configuration options can be passed either
no,
yes, or
mod.
no disables the option,
yes causes the option to be statically linked to the
final binary, and
mod builds the option as a separate
shared library. Shared libraries are loaded into the server at runtime (see
"Server parameters (global)" below). By default, the
modules are statically linked; that is, they are part of the binary and
inseparable. If you wish to use dynamic modules, you will also need the
--enable-modules option. The benefits of dynamic
modules are that you can test various options without bloating your binary, and
you can package the modules separately.
Listing 2 shows a configuration line, based on the configuration that ships in
Fedora 7, that enables many helpful features. For the most part, the chosen
options will enable features that will be required in later tutorials, such as
--enable-slurpdand
--enable-multimaster for replication, and
--enable-meta for meta-directories. Other options
enable various backends, such as ldab, bdb, null, and monitor.
Listing 2. A sample build configuration
./configure --enable-plugins --enable-modules --enable-slapd --enable-slurpd \ --enable-multimaster --enable-bdb --enable-hdb --enable-ldap --enable-ldbm \ --enable-ldbm-api=berkeley --enable-meta --enable-monitor --enable-null \ --enable-shell --enable-sql=mod --disable-perl \ --with-kerberos=k5only --enable-overlays=mod --prefix=/tmp/openldap
Listing 2 enables plug-ins and multiple backends, including Structured Query Language (SQL) based backends and Berkeley Database files. Backends are OpenLDAP's way of storing and retrieving data, and are examined in more detail under "Backends and databases," and in later tutorials.
Listing 2 also builds both the stand-alone daemon
slapd and the replication daemon
slurpd. Overlays, which allow easier customization of
the backend data, are also enabled for testing. Because this is a test setup,
the installation prefix has been changed to
/tmp/openldap, so the resulting binaries end up in
/tmp/openldap/libexec.
When you execute the
configure script, it checks for
the necessary libraries and then generates the build environment. If
configure completes successfully, compile OpenLDAP
with
make depend; make.
After the code has compiled, you can install OpenLDAP with
make install. This copies all the binaries, manpages,
and libraries to their place in
/tmp/openldap.
Installing from packages
If you were daunted by the previous section on compiling from source, you aren't alone. Compiling from source is time consuming and can be aggravating if you don't have the proper development libraries available. If your C development experience is limited or nonexistent, then you'll likely have trouble interpreting any build errors. Fortunately, most distributions package OpenLDAP as a set of binaries with a preset configuration. Usually these binaries have all the features you'll ever need.
RPM-based distributions
Fedora and CentOS use the
yum tool to install RedHat
packages (RPMs) from repositories. To find out which packages are available, use
the
yum list command, passing an optional regular
expression that filters the list of packages returned. Listing 3 shows a search
for all packages containing the term
openldap.
Listing 3. Determining which packages are available through
yum
# yum list \*openldap\* Loading "installonlyn" plugin Setting up repositories Reading repository metadata in from local files Installed Packages openldap.i386 2.3.30-2.fc6 installed openldap-clients.i386 2.3.30-2.fc6 installed openldap-devel.i386 2.3.30-2.fc6 installed openldap-servers.i386 2.3.30-2.fc6 installed openldap-servers-sql.i386 2.3.30-2.fc6 installed Available Packages compat-openldap.i386 2.3.30_2.229-2.fc6 updates
In a large application such as OpenLDAP, the client and server tools are often
split into two separate packages. In addition, you may find some compatibility
libraries (to ensure applications linked against much older versions of the
software still work). To install a package, use
yum install with the name of the package, such as
yum install openldap-clients openldap-servers; this
downloads and installs both the client and server packages, along with any
needed dependencies.
For Red Hat Enterprise Linux, the command to search packages for
openldap is
up2date --showall | grep openldap. To install a
package, supply the package names as arguments to
up2date, such as
up2date openldap-clients openldap-servers.
To make sure the OpenLDAP server starts on boot, use
chkconfig ldap on.
Debian-based distributions
Debian-based distributions, such as Ubuntu, use the Advanced Packaging
(
APT) tools to install packages. First, to search for
OpenLDAP packages, use
apt-cache search openldap, as
shown in Listing 4.
Listing 4. Listing the available OpenLDAP packages in Ubuntu Linux
notroot@ubuntu:~$ apt-cache search openldap libldap2 - OpenLDAP libraries libldap2-dev - OpenLDAP development libraries python-ldap - A LDAP interface module for Python. [dummy package] python-ldap-doc - Documentation for the Python LDAP interface module python2.4-ldap - A LDAP interface module for Python 2.4 ldap-utils - OpenLDAP utilities libldap-2.2-7 - OpenLDAP libraries slapd - OpenLDAP server (slapd)
Listing 4 shows several packages available. The
slapd package provides the server, and any
dependencies will be resolved at install time. Run
sudo apt-get install slapd to install the server. You
may also include the
ldap-utils package, which
contains the command-line clients.
Configuring the software
Once you've installed OpenLDAP, you must configure it. For testing purposes, you need to specify only a few things; but for the real world (and the LPIC 3 exam), you must be well acquainted with the various options.
Two configuration files govern the behavior of OpenLDAP; both are in
/etc/openldap/ by default. The first is ldap.conf, which controls the global
behavior of LDAP clients. The configuration file for all LDAP servers is called
slapd.conf. Despite the name, slapd.conf also has the configuration for
slurpd, the replication daemon. The focus of this
article is on slapd.conf, specifically pertaining to the
slapd daemon.
slapd.conf has an easy format: a single keyword followed by one or more arguments, subject to the following conditions:
- The keyword must start at column 0—that is, no spaces may exist in front of it.
- If an argument has spaces in it, the argument must be quoted with double quotes ("").
- If a line begins with a space, it's considered a continuation of the previous line.
- Keywords aren't case sensitive, but the arguments may be, depending on which keyword is used.
As with most UNIX® tools, the hash symbol (#) denotes a comment. Anything after the hash is ignored.
slapd.conf is divided into two sections: global options and backend database
options. Although this ordering isn't enforced, you must be careful where you
place your directives, because some directives alter the context in which
subsequent directives are processed. For instance, if no
backend or
database
keywords have been encountered, an option is considered global. Once a
database directive is read, all further options apply
to that database. This continues until another
database directive is read, at which point the next
commands apply to the new database.
Some of the global options will be covered in later tutorials in this 301 series, such as those dealing with access controls and replication. A description of the commonly used configuration directives follows.
Server parameters (global)
Several parameters limit the work that the
slapd
process can do, which prevents resource starvation.
conn_max_pending accepts an integer that dictates how
many anonymous requests can be pending at any given time.
You'll learn about binding to the LDAP server in a
later tutorial in this 301 series; simply put, you can make requests from the
server by logging in as a user (an authenticated session) or without any
credentials (anonymous session). Requests beyond the
conn_max_pending limit are dropped by the server.
Similarly,
conn_max_pending_auth is the same as
conn_max_pending but refers to authenticated
sessions.
The
idletimeout parameter (specified in seconds)
tells
slapd how long idle clients can be held before
they should be disconnected. If this number is 0, no disconnections happen.
The
sizelimit parameter limits the number of search
results that can come back from a single query, and
timelimit limits how long the server spends
searching. These two parameters can take either an integer, the keyword
unlimited, or more complex hard and soft limits. This
would allow you to set a default (soft) timeout or result-set size; but if a
client requests a larger number of rows or a longer timeout, it can be
accommodated up to the hard limit. For example,
sizelimit sizesoft=400 size.hard=1000 specifies that
by default, 400 rows are returned. Clients can request that this limit be
increased up to 1,000. This format can be applied to groups of users so that
some people or applications can perform large searches, and others can perform
only small searches
When a client performs a search on the tree, it usually specifies a node
(called the search base, or base) from which the search should
start—the Distinguished Names (DNs) of all the results have the search
base in them. This allows for faster searching (because fewer nodes need to be
searched) and easier client implementation (because searching only part of a
tree is a simple but effective filter). If the client doesn't specify a base,
the value of
defaultsearchbase is used. This is a
good parameter to set to avoid surprises with misconfigured clients down the
road. Depending on the layout of your LDAP tree, you may wish to use either your
users container or the root of the tree. (Trees and distinguished names are
covered in the
previous tutorial.)
Three commands govern various features supported by your server, such as legacy
support and security requirements by clients. These commands are
allow,
disallow, and
require. Each command takes a series of whitespace
keywords that enable, disable, or require a feature. The keywords are shown in
Table 3.
Table 3. Keywords used with
allow,
disallow, and
require
Even though certain types of login may be allowed by some commands from Table 3, the connections are still subject to access controls. For example, an anonymous bind may be granted read-only access to part of the tree. The nature of your application and the capabilities of your clients dictate how you allow or disallow various authentication methods.
If you wish to maintain a higher level of availability, then enable
gentlehup. With this command enabled,
slapd stops listening on the network when it receives
a
SIGHUP signal, but it doesn't drop any open
connections. A new instance of
slapd can then be
started, usually with an updated configuration.
To get more verbose logging, adjust the value of
loglevel. This command accepts an integer, multiple
integers, or a series of keywords, which enable logging for a particular
function. Consult the slapd.conf manpage for the full list of keywords and
values. For example, connection tracing has a value of 8 and a keyword of
conns, and synchronization has a value of 4096 and a
keyword of
sync. To enable logging of these two items
logging 5004,
logging 8 4096, or
logging conns sync will achieve the same result.
If you compiled OpenLDAP from source, you may have enabled some modules.
Alternatively, you may have downloaded extra modules from your package manager,
such as the
openldap-server-sql package, which
includes the SQL backend module. The
modulepath and
moduleload options are used to load dynamic modules
into
slapd.
modulepath
specifies the directory (or list of directories) that contains the shared
libraries, and each instance of
moduleload specifies
a module to load. It's not necessary to specify the module's version number or
extension, because
slapd looks for a shared library.
For example, for a library called
back_sql-2.3.so.0.2.18, use
moduleload back_sql. Alternatively,
moduleload can be given the full path (without the
version and extension) to the library, such as
moduleload /usr/share/openldap/back_sql.
Some scripts expect the process id of a process to be held in a certain file.
pidfile tells
slapd where
to write its process id.
Schema parameters
A handful of commands let you add schema items to your tree, either by including a schema file or by defining the object in slapd.conf. Recall from the previous tutorial that the schema provides the attributes and object classes that can be used by your LDAP tree.
To add a new schema file to your server, use the
include command followed by the full path to the
schema file (usually found in /etc/openldap/schema). If one schema makes
reference to another (such as
inetOrgPerson
inheriting from
organizationalPerson), you need to
include all the necessary files in the proper order, with the base objects
included first. OpenLDAP parses each schema file as it's included, so order of
inclusion is important.
You can add new schema items directly through slapd.conf with the
attributetype and
objectclass commands for attributes and object
classes, respectively. This is the same as putting the information in a schema
file and including it with the
include command.
Similarly, you can define object identifiers (OIDs) with
objectidentifier.
Backends and databases
Backends and databases are two separate but closely related concepts. A
database represents part of a tree, such as
dc=ertw,dc=com. A backend describes the method by
which
slapd retrieves the data. (The
dc=ertw,dc=com tree has been the primary example in
this series.)
In many cases, the backend is a file on disk (in some format; more on this later); or it can be a method to get data from another source, from a SQL database, to DNS, and even through a script. Each database is handled by one backend, and the same backend type can be used by multiple databases.
As noted earlier, slapd.conf starts with global directives. Backend mode then
starts at the first instance of the
backend
directive. All directives in this backed mode apply to the particular backend
being configured. Any options that were set globally apply to the backend,
unless they're overridden at the backend level. Similarly, you configure
databases with the
database keyword. A database is
tied to a backend type, which inherits any global or backend level
configurations. You can override any options at the database level, too.
OpenLDAP splits the backends into three types:
- Those that store data:
- bdb—Uses the Berkeley database engine (such as Sleepycat, now owned by Oracle)
- hdb—An improvement on back-ldb, which adds some indexing improvements
- Those that proxy data:
- ldap—Proxies another LDAP server
- meta—Proxies several LDAP servers for different parts of the tree
- sql—Returns data from a SQL database
- Those that generate data:
- dnssrv—Returns LDAP referrals based on data in DNS SRV records
- monitor—Returns statistics from the LDAP server
- null—A testing module; returns nothing
- passwd—Returns data from the password file
- perl—Returns data generated from a Perl script
- shell—Returns data generated from a shell script
Configuration options are specific to each backend, and can be found in the
relevant manpage (such as
slapd-bdb for the bdb
backend).
Databases represent the tree and its data. The
dc=ertw,dc=com tree is an example of a database. All
data under this DN would be stored in a similar fashion if it were part of the
same database. It's also possible to have
ou=people,dc=ertw,dc=com in one database, with
anything else under
dc=ertw,dc=com in another.
Finally, an LDAP server can serve more than one tree, such as
dc=ertw,dc=com and
dc=lpi,dc=org. Each database has its own way of
handling the request by way of its own backend.
Specify
database followed by the database type to
start database configuration mode. The commonly used form is the Berkeley
database, so
database bdb creates a BDB database. The
next command you need is
suffix, which specifies the
root of the tree the database is serving.
rootdn and
rootpw allow
you to specify a user with all privileges (a root user) for the database.
This user isn't even subject to access controls. The
rootdn should be within the specified suffix and may
or may not have a password. If a
rootpw is specified,
this is used. Otherwise, the behavior is to look for the
rootdn's record in the tree and authenticate against
the
userPassword attribute. If no root user is
specified, then all users are subject to the access controls configured.
If you specify
lastmod on, OpenLDAP keeps several
hidden attributes (called operational attributes), such as the name of
the person who created the record and when it was modified. Some of these
attributes are required for replication to work, so it's smart to leave
lastmod enabled (which is the default). These
operational attributes aren't shown to clients unless specifically requested.
You can further restrict what can be done to the database through the
restrict command. This command takes parameters
corresponding to LDAP operations, such as
add,
bind,
compare,
delete,
rename, and
search. To block users from deleting nodes in the
tree, use
restrict delete. If the tree contains
users, but for some reason you don't want them to be able to bind to the tree,
use
restrict bind. Additionally,
read and
write are
available to block any reading and writing to the tree, respectively, rather
than having to spell out all the relevant operations. Alternatively, you can use
the command
readonly to make the database read only.
Different parts of the same tree can be handled by different databases. If
properly configured, OpenLDAP glues all the parts together. The database
containing the other is called the superior database; the database being
contained is the subordinate database. First, define the subordinate
database and add the
subordinate command on a line of
its own. Then, define the superior database. With this configuration, OpenLDAP
can treat multiple databases as one, with some data stored locally and some
pulled from other sources (a special case of this is when all the data is on
remote LDAP servers, which is where a metadirectory is used). Note that if you
define the superior database before the subordinate database, you'll get errors
that you're trying to redefine part of your tree. Listing 5 shows the
dc=ertw,dc=com tree split into a superior and a
subordinate database.
Listing 5. Configuration for a subordinate and superior database
# Subordinate database bdb suffix "ou=people,dc=ertw, dc=com" rootdn "cn=Sean Walberg,ou=people,dc=ertw,dc=com" rootpw mysecret directory /var/db/openldap/ertw-com-people subordinate # Superior database bdb suffix "dc=ertw, dc=com" rootdn "cn=Sean Walberg,dc=ertw,dc=com" rootpw mysecret directory /var/db/openldap/ertw-com
Also note that two
rootdns are configured. If you
want to define a password, the
rootdn must fall
within the database. To build the tree, the second root account must be used to
define the
dc=ertw,dc=com entry, and the first root
account defines the people organizational unit (OU) and any objects underneath
it. Once users have been added, you can authenticate as a different user in
order to get access to the whole tree.
If you're using the bdb backend, you also need to use the
directory command to specify where the database files
are stored. Each database instance needs a separate directory.
Setting up a new database is fairly simple, because there are only a few commands to worry about. Much of the complexity comes in when you try to tune the backend, which is the subject of the next tutorial in this 301 series.
Overlays
Overlays are an extension of the database. If you want to add a feature to a
database, you can often add it as an overlay rather than forking the database
code. For example, if you want all writes to be logged to a file, you can attach
the
auditlog overlay to the relevant database.
Overlays operate as a stack. After configuring the database, you specify one or
more databases. Then, define each overlay with the
overlay command, followed by the name of the overlay.
Each overlay has its own configuration parameters.
If you've configured multiple overlays, they're run in the reverse order that
you define them. The database is accessed only after all the overlays have run.
After the database returns the data, the overlays are run again in the same
order before
slapd returns the data to the client.
At each step, an overlay can perform an action such as logging, it can modify the request or response, or it can stop processing.
Developing for LDAP with Perl/C++
This section covers material for topic 302.2 for the Senior Level Linux Professional (LPIC-3) exam 301. This topic has a weight of 1.
In this section, learn how to:
- Use Perl's
Net::LDAPmodule
- Write Perl scripts to bind, search, and modify directories
- Develop in C/C++
Although OpenLDAP includes command-line clients, it's often helpful to use LDAP
information in your own scripts. Perl is a popular language for scripting. Perl
has a module called
Net::LDAP that is used to connect
to and use an LDAP server.
Getting started
Net::LDAP doesn't ship with Perl, but your
distribution may include it as a package. See
"Installing from packages" for more information
on searching for and installing packages.
If your distribution doesn't have the
Net::LDAP
package, then you can download it from the Comprehensive Perl Archive Network
(CPAN). As root, run
perl -MCPAN -e "install Net::LDAP", which downloads
and installs
Net::LDAP and any dependencies.
Using Net::LDAP
Using
Net::LDAP is fairly simple:
- Create a new
Net::LDAPobject.
- Bind to the desired server.
- Perform your LDAP operations.
Create a new object
In typical Perl fashion, you must create an instance of the
Net::LDAP module through the
new method. All further operations will be on this
instance.
new requires, at a minimum, the name of the
server you want to connect to. For example:
my $ldap = Net::LDAP->new('localhost') or die "$@";
Here, a new
Net::LDAP object is created with the
new method and is passed the string
localhost. The result is assigned to the
$ldap variable. If the function fails, the program
exits and prints an error message describing the problem.
$@ is a Perl internal variable that contains the
status of the last operation.
You can proceed to perform LDAP operations with the new
Net::LDAP object. Each function returns a
Net::LDAP::Message object that contains the status of
the operation, any error messages, and any data returned from the server.
Binding to the tree
The first operation you should do is to log in or bind to the tree. Listing 6 shows a bind operation and associated error checking.
Listing 6. Perl code to bind to the tree
my $message = $ldap->bind( "cn=Sean Walberg,ou=people,dc=ertw,dc=com", password=>"test" ); if ($message->code() != 0) { die $message->error(); }
Listing 6 starts by calling the
bind method of the
previously created object. The first parameter to the function is the DN you're
binding as. If you don't specify a DN, you bind anonymously. Further parameters
are in the format of
key=>value; the one
you'll use most often is the password.
Each
Net::LDAP method returns a
Net::LDAP::Message object, which has the results of
the function. The error code is retrieved through the
code method. A code of 0 means success, so the code
in Listing 6 exits the program with the error message if the result isn't 0.
Note that the error is retrieved from
$message->error rather than
$@, like the earlier example. This is because the
error isn't a Perl error; it's internal to
Net::LDAP.
Once the bind is successful, you can do anything you want, subject to the
server's access controls. To log out, call the
unbind
method.
Searching the tree
Searching is done through the
search method. Like
the
bind method, you must pass some parameters and
check the result of your query. However, the returned object now contains your
data, so this must be parsed. With the
search
operation, the result is a
Net::LDAP::Search object,
which inherits all the methods from
Net::LDAP::Message (such as
code and
error) and adds
methods to help you parse the data. Listing 7 shows a search of the tree.
Listing 7. Searching the tree with
search
$message = $ldap->search(base => "dc=ertw,dc=com", filter=> "(objectClass=*)"); if ($message->code() != 0) { print $message->error(); } else { foreach my $entry ($message->entries()) { print $entry->dn() . ": "; print join ", ", $entry->get_value("objectClass"); print "\n"; } }
Listing 7 begins by calling the
search method,
passing two parameters: the base and a filter. The base tells the server where
in the tree to begin searching. A complementary option,
scope, tells the server how far to search:
- base—Only the base object
- one—Only the children of the base object (and not the base object itself)
- sub—The base object and all its children (the default)
The filter is a string describing the objects you're interested in. You can
search on attributes and perform complex AND/OR queries.
objectClass=* returns any object.
The result of the search is checked, and an error is printed if a problem happened. Because the script could still recover from an error, it just prints the error and continues, rather than exiting.
The
entries function returns an array of
Net::LDAP::Entry objects, each with a single result.
First the entry's DN is printed, and then all the object classes. If you'd
rather have a text version of the whole record, the
dump method prints the entire entry in text format.
Adding a new entry
You add an entry to the tree through the
add method.
You must pass the function the DN of the entry you wish to add, along with the
attributes. The attributes are an array of
key => value pairs. The value can also be an
array in the case of multiple instances of the same attribute. Listing 8 shows
an entry being added to the tree.
Listing 8. Adding an entry using
Net::LDAP
$message = $ldap->add( "cn=Fred Flintstone,ou=people,dc=ertw,dc=com", attr => [ cn => "Fred Flintstone", sn => "Flintstone", objectclass => [ "organizationalPerson", "inetOrgPerson" ], ] ); if ($message->code() != 0) { print $message->error(); }
The first parameter to
add is either the DN or a
Net::LDAP::Entry object. If the DN is passed, you
must pass an arrayref through the
attr method. Even
though the
key => value format is used as in a
hashref,
Net::LDAP is expecting an arrayref, so be
careful!
More about
Net::LDAP
Net::LDAP provides an interface to all the LDAP
functions, such as
compare,
delete, and
moddn. They're
all used similarly to the previous examples and are fully documented in the
Net::LDAP manpage.
All the examples shown operate in blocking mode, which means the function returns after the response has been received from the server. You can also operate in asynchronous mode, which involves giving a callback function that is called as packets are received.
By using
Net::LDAP, you can use the data stored in
your LDAP tree from within your scripts. Perl is already used in a wide variety
of software, so the opportunities for integration are unlimited.
Developing in C/C++
Using the C libraries is more involved than the Perl libraries. The ldap(3)
manpage contains a detailed description of how to use the library, and has
pointers to the other manpages describing each function. To use the LDAP C
libraries, your code must first include the ldap.h include file, such as with
#include <ldap.h>. Your object files
must then be linked with libldap using the
-lldap
option to the linker.
Summary
In this tutorial, you learned about installing and configuring the OpenLDAP
stand-alone server. When configuring
slapd, use the
slapd.conf file. You must take care to keep your global options at the top
of the file and then progress to backend and database configurations, because
slapd is dependent on the order of the directives.
When in doubt, consult the slapd.conf manpage.
Perl code can make use of an LDAP server through the
Net::LDAP module. First you create an object, and
then you call methods of the object that correspond with the LDAP operation you
want. Generally, you first
bind and then perform your
queries. It's important to check the results of your functions through the
code and
error functions.
Resources
Learn
- Review the previous tutorial in this 301 series, "LPI exam 301 prep, Topic 301: Concepts, architecture, and design" (developerWorks, October 2007).
- Take the developerWorks tutorial "Linux Installation and Package Management" (developerWorks, September 2005) to brush up on your package management commands.
- Review the entire LPI exam prep tutorial series on developerWorks to learn Linux fundamentals and prepare for system administrator certification.
- At the LPIC Program, find task lists, sample questions, and detailed objectives for the three levels of the Linux Professional Institute's Linux system administration certification.
- Find answers to What is a backend? and What is a database? in the OpenLDAP FAQ.
- Learn more about building overlays in the developer's documentation. Overlays are a complex but powerful concept.
- Consult the OpenLDAP Administrator's Guide if the manpages for slapd.conf or your particular backend don't help. You might also find the OpenLDAP FAQ to be helpful.
- The online book LDAP for Rocket Scientists is excellent, despite being a work in progress.
- The Perl-LDAP page has lots of documentation and advice on using
Net::LDAP.
- In the developerWorks Linux zone, find more resources for Linux developers, and scan our most popular articles and tutorials.
- See all Linux tips and Linux tutorials on developerWorks.
- Stay current with developerWorks technical events and Webcasts.
Get products and technologies
- Download OpenLDAP.
- The IBM Tivoli Directory Server is a competing LDAP server that integrates well with other IBM products.
- phpLDAPadmin is a Web-based LDAP administration tool. If the GUI is more your style, Luma is a good one to look. | http://www.ibm.com/developerworks/linux/tutorials/l-lpic3302/index.html | CC-MAIN-2014-42 | refinedweb | 6,260 | 53.81 |
Kubernetes Namespace Stuck in Terminating State
Thursday morning IST and Slack started buzzing. I suspected something wrong and I was right, Nginx Ingress on K8s was throwing 503 Service Unavailable.
Started debugging and suddenly I pushed myself into more trouble as by mistake I deleted the namespace (Please be very careful don't ever do this in Prod K8s cluster, I was lucky I made mistake but it was my QA cluster but again we always think about the end user in case of QA environment its QA Engineering team.)
After waiting for few minutes I found out that namespace is in Terminating state from so long and which means I cannot recreate the same namespace and cannot deploy the Nginx Ingress.
Cause of this might be some Extensions which are not managed by namespace not getting deleted and hence the name space deletion got stuck.
Solution:
Delete the namespace manually
- Get the namespace that is stuck in terminating state
kubectl get namespaces
2. Create a temporary json
kubectl get namespace aaa -o json >tmp.json
3. Edit the tmp.json. Remove the
kubernetes from the finalizers and save the file.
"spec": {
"finalizers": [ "kubernetes" ]
},
Read more about the Finalizers .
4. Connect the Proxy
kubectl proxy
5. Make an API call with your proxy IP and port
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json
6. Check the namespace back. You will see not more namespace stuck in terminating state.
kubectl get namespaces | https://abhishekamralkar.medium.com/kubernetes-namespace-stuck-in-terminating-state-68a008e36e5d?readmore=1&source=user_profile---------1------------------------------- | CC-MAIN-2022-21 | refinedweb | 247 | 63.7 |
Hope you learned lot of useful information regarding debugging. Today we are going to continue sessions with debugging as it is very important to understand the framework of debugging. We use them unknowingly so let us know them a little more so that we do not run into issues when circumstances are not in under control. Let us continue the session with this sub-section.
13.4.1 Introduction
Lint is a code scanning tool. It is provided by Android SDK. Now, there is a need for such a scanning tool as structure or code is very important in certain scenarios. Reliability and efficiency of android app is influenced by the structure of code. It could be very difficult at a certain point of time when structure of code has loopholes. Using lint can maintain the quality of code as it is easy to identify and rectify problems. The best part is you need not execute or write any test case in order to accomplish this task. This tool identifies the problem and publishes a problem description message along with the threat on security level. This in turn helps us to prioritize primary and secondary requirements of code. It can be integrated with the automatic testing process as it has a command-line interface. Lint can be run from eclipse or from command-line interface. lint tool is automatically installed when we talk about Android SDK Tools revision 16 or higher. When we talk about Eclipse we have to install Android Development Tools (ADT) Plugin for Eclipse revision 16 or higher.
13.4.2 Workflow
Let us have an overview on the workflow of this tool.
Figure workflow of lint
Let us have a brief idea regarding these jargons:
- App source files: These are the source files which our project consists of. They include java files, xml files, icons, etc.
- Lint.xml file: This is a configuration file. We can use it to specify lint checks which you might want to remove or configuring and customizing the security level.
- Lint tool: This is the code scanning tool which we can run on our android project no matter from where you run it (Eclipse or command-line interface) The structural code errors are detected by this tool so that quality and security level of code is maintained. Structural code examples could be using API calls that are not supported by the target API versions or the XML resource file that contain unused namespaces which in turn consumes space or unnecessary code processing, etc.
Lint tool’s results can be viewed in console. It can be viewed in Eclipse as Lint Warnings. The concern is identified by the location of problem inside source files accompanied by description of concern in form of an error message.
The lint tool runs automatically when we run it from Eclipse when any one of the following task is accomplished and they are as follows:
- When xml file is edited and saved in project like manifest or resource file
- Apk file is exported. lint automatically runs to keep a check on fatal errors and export is terminated if fatal errors are discovered. It can be turned off using Lint error checking page.
- Layout editor is used in Eclipse to make changes
Following path is used to view lint warnings on EclipseWindow> Show View > Other > Lint Warnings.
FigureWindow > Show View > Other
Figure Lint Warnings
Figure Example of Lint Warnings
When we talk about command line interface, lint tool can be run against a list of files in project directory. The command usage is as follows:
lint [flags] <project_directory>
We canissue the following command to scan the files under the my_project directory and its subdirectories and the issue ID MissingPrefix asks lint to scan the xml attributes which are missing in Android namespace prefix. The command looks similar to following syntax:
lint --check MissingPrefixmy_project.
We can see the full listing of flags and command line arguments which are supported by lint. The command is as follows:
lint–help
Figure snapshot of lint tool (command line)
We can customize the restriction on issues for lint to check and assign the severity level for those issues. Lint checking can be configured at different levels of security which include globally (all projects), particular project, particular file and particular java class or method. We can specify our lint checking preferences in the lint.xml file. This fil should be placed in the root directory of our android project. When we use eclipse, configuring the lint preferences creates an lint.xml and is added to corresponding Android project automatically.
<?xml version="1.0" encoding="UTF-8"?> <lint> <!—configuration list of issues --> </lint>
Figure lint.xml
Congratulations ladies and gentlemen!! We are done with this sub-section. Looking forward to share another useful sub-section in terms of debugging, we hope you learned something from this section. See you in the next section. Till then keep practicing and wish you Happy App Developing!!! | http://www.wideskills.com/android/debuggingand-testing/android-debugging-with-lint | CC-MAIN-2019-35 | refinedweb | 828 | 62.48 |
Never thought of how to implement an "undo" function? Not that easy, huh? People in our architecture class today came up with quite creative solutions: two separate stacks storing operations, versioning of the object to go back etc... All quite complex. Well, I've thought about that already about a year ago, so it was quite easy for me and there you actually see how simple such a task becomes if you know the right pattern (I'll come to it immediately).The key is actually to encapsulate the operation and the object the operation acts on. If you encapsulate that within an object you're already pretty much done. Every time you perform an operation you create such an object encapsulating that operation.
If you knew about it already...yes, it's the Command pattern :) . Now the interface above is the standard implementation, but adding undo is not a major difficulty. You can either add the method to the ICommand interface or create another abstract class/interface UndoableCommand using ICommand. Take for instance the operation "make bold" of a word within a document. Applying the Command pattern and adapting it for undo and redo functions is quite simple
For each concrete command you implement the interface. So an example implementation of such a BoldCommand could look like
public class BoldCommand implements ICommand{I guess this should look now pretty obvious to you. Of course this is just a simple code for demonstrating the idea. You need some more sophisticated structures that take care of these command objects. For the undo/redo you'd probably have some list that tracks all of these objects, removes old ones etc...
private Word aWord;
public BoldCommand(Word aWord){
this.aWord = aWord;
}
public void execute(){
//call some appropriate object that knows how to perform
//the action, i.e.
aWord.setBold(true);
}
public void undo(){
//undoing is easy since we know here what we did previously and
//we have the reference to the object we acted upon
aWord.setBold(false);
}
public void redo(){
execute();
}
}
If you programmed already for the Eclipse platform, you probably came across the IAction interface and Action classes. Well that's one implementation of such a command pattern. They use it quite heavily there.
Now that you know the pattern, think about the solution you came up with previously (2 stacks, operations, undo operations etc..). Quite complicated :) I like this example because I think this example of the undo/redo functionality explains quite clearly the improvement of your code if you know the right - and obviously suitable - pattern. | https://juristr.com/blog/2009/12/power-and-simplicity-of-command-pattern/ | CC-MAIN-2018-43 | refinedweb | 427 | 55.54 |
Generic Types
A Class, Record or Interface can be generic, if it operates on one or more types that are not specified in concrete when the type is defined.
Why Generics?
Generic types are best understood by comparing them to regular, non-generic types and their limitations. For example, one could implement a "StringList" as a regular class that can contain a list of Strings. One could make that list as fancy as one likes, but it would always be limited to Strings and only Strings. To have a list of, say, Integers, the entire list logic would need to be implemented a second time.
Alternatively, one could implement an "ObjectList" that could hold any
Object. Since Strings and Integers can both be objects, both could be stored in this list. But now we're sacrificing type safety. Any given ObjectList instance might contain Strings, Integers, or indeed any other kind of object. At each access, one would need to cast, and type check.
By contrast, a generic "
List<T>" class can be written that holds an as-of-yet undefined type of object. The entire class can (and, indeed, must) be implemented without ever knowing what type of objects will be in the list, only referring to them as
T. But: When using the generic class for a concrete purpose, it can be instantiated with a concrete type replacing
T: as a
List<String> or a
List<Integer> for example.
Type Parameters
Any Class, Record or Interface type declaration can be made generic by applying one or more type parameters to its name, enclosed in angle brackets:
type List<T> = public Class end; List<Key,Value> = public class end;
The type parameters can be arbitrary identifiers. It is common to use short single-letter uppercase names such as
T,
U,
V, to make the type parameters stand out in the remainder of the code, but that is mere convention. Any valid identifier is allowed.
Once declared as part of the generic type's name, these type parameters become valid types that can be used throughout the declaration and implementation, as if they were regular, well-known type names. For example they can be used as type for Method Parameters or Results, or as variables inside a method body.
type List<T> = public class public method Add(aNewItem: T); property Items[aIndex: Integer]: T; end;
Of course, since little is know about what
T is, there are limitations to what the generic class can do with instances of
T in code. While some lists will contain Strings, others might contain Integers – so it would not be safe to, for example, call a string-specific method on
T.
This is where constraints come in.
Constraints
If a generic type needs more specific control over what subset of types are allowed for its generic parameters, it can declare one or more constraints, using the
where keyword.
Essentially, a constraint limits the generic class from being instantiated with a concrete type that does not fulfill the conditions.
There are four types of supported constraints:
is class— requires the concrete type to be a Class (i.e. disallows records or value types).
is record— requires the concrete type to be a Record or Value Type (i.e. disallows classes).
is
TypeName— requires the concrete type to implement the specified Interface or descend from the specified Class.
has constructor— requires the concrete type to have a parameter-less constructor.
Of course individual constraints can be combined. For example constraining the above list in two ways could give it additional capabilities:
type List<T> = public class where T is IComparable, T has constructor public method New: T; begin result := new T; // made possible because of `where T has constructor` Add(result); end; method Sort(); begin ... complex sorting code if self[a].CompareTo(self[b]) then // made possible by `where T is IComparable` Switch(a,b); ... more complex sorting code end; end;
The
where T has constructor constraint allows the new list code to create new instances of whatever type
T is, at runtime. And the
where T is IComparable constraint allows it to call members of that interface on
T (without cast, because
T is now assured to implement
IComparable).
Of course on the downside, the
List<T> class is now more restricted and can no longer be used with types that do not adhere to these constraints.
Adding constraints is a fine balance between giving a generic class more flexibility, on the one hand, and limiting its scope on the other. One possible solution for this is to declare additional constraints on an Extension, instead:
Constraints on Extensions
When declaring an Extension for a generic class, it is allowed to provide additional constraints that will applicable only on the extension members.
This keeps the original class free from being constrained, but limits the extension members to be available to those instances of the class that meet the constraints. For example, one could make the
List<T> from above more useful for strings:
List<T> = public class where T is String public method JoinedString(aSeparator: String): String; begin var sb := new StringBuilder(); for each s in self index i do begin if i > 0 then sb.Append(aSeparator); sb.Append(s); // we know s is a String, now end; end; end;
var x := List<String>; var xy:= List<Button>; x.JoinedString(); y.JoinedString(); // compiler error, no such member.
In this example, the new
JoinedString method would only be available on
List<String>, as a list with any other type would not satisfy the constraint.
Co- and Contra-Variance
A generic Interface can be marked as either co- or contra-variant on a type parameter, by prefixing it with the
out or
in keyword, respectively:
IReadOnlyList<out T> = public interface GetItemAt(aIndex: Integer): T; end; IWriteOnlyList<in T> = public interface SetItemAt(aIndex: Integer; aItem: T); end;
A co-variant generic parameter (marked with
out) makes a concrete type compatible with base types of the type parameter. For example, a
IReadOnlyList<Button> can be assigned to a
IReadOnlyList<Object>.
This makes sense, because any
Button is also an
Object. Since the
IReadOnlyList only uses the type
T outgoing, as method results (or
out Parameters), any call to a list of
Buttons can be assure to return an
Object.
The reverse would not be case, if the original
List class were co-variant, on could add arbitrary Objects to a list of Buttons – and that would be bad.
By contrast, a contra-variant generic parameter (marked with
in) makes a concrete type compatible with descendant types of the type parameter. For example, a
IWriteOnlyList<Object> can be assigned to a
IWriteOnlyList<Button>.
Once again, this makes sense, because
IWriteOnlyList only uses the type
T incoming, as method parameter. Because a
IWriteOnlyList<Object> can hold any object, it is perfectly safe to be treated as a
IWriteOnlyList<Button> – the only thing that can ever happen through this interface is that buttons get added to the list – and buttons are objects.
And again, the reverse would not be case. If the original
List class were contra-variant, one could retrieve arbitrary Objects from a List if Objects, from code that expects to get buttons.
Co- and Contra-Variance is allowed only on Interface types. Generic Classes or Records cannot be marked as variant.
See Also
- Classes, Records and Interfaces
- Extension
invs.
outMethod Parameters
- Method Results
- Type Casts and Type Checks
- Value Types vs Reference Types | https://docs.elementscompiler.com/Oxygene/Types/GenericTypes/ | CC-MAIN-2019-39 | refinedweb | 1,244 | 58.82 |
Uploading files
Serverpod has built-in support for handling file uploads. Out of the box, your server will be configured to use the database for storing files. This works well for testing but may not be performant in larger-scale applications. You should set up your server to use S3 or Google Cloud Storage in production scenarios.
caution
Caution: Currently, only S3 is supported, but Google Cloud Storage support is coming soon. If you want to use Google Cloud, please consider contributing an implementation.
How to upload a file
By default, a
public and a
private file storage are set up to use the database. If needed, you can replace these or add more configurations for other file storages.
Server-side code
There are a few steps required to upload a file. First, you need to create an upload description on the server and pass it to your app. The upload description grants access to the app to upload the file. If you want to grant access to any file, you can add the following code to one of your endpoints. However, in most cases, you may want to restrict which files can be uploaded.
Future<String?> getUploadDescription(Session session, String path) async {
return await session.storage.createDirectFileUploadDescription(
storageId: 'public',
path: path,
);
}
After the file is uploaded, you should verify that the upload has been completed. If you are uploading a file to a third-party service, such as S3 or Google Cloud Storage, there is no other way of knowing if the file was uploaded or if the upload was canceled.
Future<bool> verifyUpload(Session session, String path) async {
return await session.storage.verifyDirectFileUpload(
storageId: 'public',
path: path,
);
}
Client-side code
To upload a file from the app side, first request the upload description. Next, upload the file, you can upload from either
Stream or a
ByteData object. If you are uploading a larger file, using a
Stream is better because not all of the file needs to be held in RAM memory. Finally, you should verify the upload with the server.
var uploadDescription = await client.myEndpoint.getUploadDescription('myfile');
if (uploadDescription != null) {
var uploader = FileUploader(uploadDescription);
await uploader.upload(myStream);
var success = await client.myEndpoint.verifyUpload('myfile');
}
info
In a real-world app, you most likely want to create the file paths on your server. For your file paths to be compatible with S3, do not use a leading slash and only use standard characters and numbers. E.g.:
'profile/$userId/images/avatar.png'
Accessing stored files
It's possible to quickly check if an uploaded file exists or access the file itself. If a file is in a public storage, it is also accessible to the world through an URL. If it is private, it can only be accessed from the server.
To check if a file exists, use the
fileExists method.
var exists = await session.storage.fileExists(
storageId: 'public',
path: 'my/file/path',
);
If the file is in a public storage, you can access it through its URL.
var url = await session.storage.getPublicUrl(
storageId: 'public',
path: 'my/file/path',
);
You can also directly retrieve or store a file from your server.
var myByteData = await session.storage.retrieveFile(
storageId: 'public',
path: 'my/file/path',
);
Add a configuration for S3
This section shows how to set up a storage using S3. Before you write your Dart code, you need to setup a S3 bucket. Most likely, you will also want to setup a CloudFront for the bucket, where you will be able to use a custom domain and your own SSL certificate. Finally, you will need to get a set of AWS access keys and add them to your Serverpod password file.
When you are all set with the AWS setup, include the S3 package in your yaml file and your
server.dart file.
import 'package:serverpod_cloud_storage_s3/serverpod_cloud_storage_s3.dart'
as s3;
After creating your Serverpod, you add a storage configuration. If you want to replace the default public or private storages, set the storageId to
public or
private. Set the public host if you have configured your S3 bucket to be accessible on a custom domain through CloudFront. You should add the cloud storage before starting your pod.
pod.addCloudStorage(s3.S3CloudStorage(
serverpod: pod,
storageId: 'public',
public: true,
region: 'us-west-2',
bucket: 'my-bucket-name',
publicHost: 'storage.myapp.com',
));
For your S3 configuration to work, you will also need to add your AWS credentials to the
passwords.yaml file. You create the access keys from your AWS console when signed in as the root user.
shared:
AWSAccessKeyId: 'XXXXXXXXXXXXXX'
AWSSecretKey: 'XXXXXXXXXXXXXXXXXXXXXXXXXXX' | https://docs.serverpod.dev/0.9.8/concepts/file-uploads | CC-MAIN-2022-40 | refinedweb | 763 | 56.86 |
Problem
I created a plot using the Matplotlib library in a Python script. But the call to
show does not display the plot in a GUI window.
Solution
The rendering of a plot to a file or display is controlled by the backend that is set in Matplotlib. You can check the current backend using:
import matplotlib matplotlib.get_backend()
I got the default backend as
Agg. The possible values for GUI backends on Linux are
Qt4Agg,
GTKAgg,
WXagg,
TKAgg and
GTK3Agg. Since
Agg is not a GUI backend, nothing is being displayed.
I wanted to use the simple Tcl-Tk backend. So, I installed the necessary packages for Python:
$ sudo apt install tcl-dev tk-dev python-tk python3-tk
The backend is not set automatically after this. In my Python script, I set it explicitly:
import matplotlib matplotlib.rcParams["backend"] = "TkAgg"
The plot was displayed after this change.
However, this needs to be set immediately after the import line of Matplotlib and before importing
matplotlib.pyplot. Doing this in the import region of a Python script is quite ugly.
Instead, I like to switch the backend of the
matplotlib.pyplot object itself:
import matplotlib.pyplot as mplot mplot.switch_backend("TkAgg")
This too worked fine for me! 🙂
Reference: Matplotlib figures not showing up or displaying
Tried with: Ubuntu 14.04
One thought on “Matplotlib plot is not displayed in window”
Thanx for this. It was useful. I built a Python 2.7.14 environment on a couple of older Linux laptops, and I had to explicitly set the backend parameter to use “TkAgg”, after importing _tkinter and matplotlib. I built Python from source, and Tck/Tk 8.5 as well. The matplotlib backend was defaulting to using ‘agg’, which meant I could only see images using IPython Notebooks. But with the your example of setting it explicitly here, I was able to get my Fedora Linux stuff to work right, and show a modified image, directly on the Gnome desktop. Works nice. | https://codeyarns.com/2016/03/14/matplotlib-plot-is-not-displayed-in-window/?shared=email&msg=fail | CC-MAIN-2020-34 | refinedweb | 332 | 67.65 |
This is a case where "avoiding force unwrapping" has gotten you into a trap: you are no longer producing an md5 hash for
an empty data empty Data instances without storage.
This is a case where "avoiding force unwrapping" has gotten you into a trap: you are no longer producing an md5 hash for
Sorry for my ignorance but what's the point to generate the MD5 of an empty data?
If I'm commiting files into a git repo, I need to know the hash of each one so that they get stored (and deduplicated properly). That should work even if I try to check in an empty file!
(Git uses SHA-1 for now, not MD5, but you get the idea.)
Well, the code that I wrote above does not have the purpose to handle this case. I am not sure how to modify it to accept this case, I just modified the other examples above to make it "safe".
CC_MD5 takes an optional pointer; you should pass
baseAddress even if it is
nil. If it didn't, though, it's important to not silently produce a wrong answer:
guard let baseAddress: UnsafeRawPointer = buffer.baseAddress else { preconditionFailure("data must be non-empty") }
Alternately, you could be maximally correct and still call this hypothetical
CC_MD5_with_non_optional_param by providing your own dummy address. This ought to be safe because you pass 0 for the count.
guard let baseAddress: UnsafeRawPointer = buffer.baseAddress else { assert(buffer.count == 0) var dummy = 0 CC_MD5_with_non_optional_param(&dummy, CC_LONG(buffer.count), md5Buffer.bindMemory(to: UInt8.self).baseAddress) return } _ = CC_MD5_with_non_optional_param(baseAddress, CC_LONG(buffer.count), md5Buffer.bindMemory(to: UInt8.self).baseAddress)
Why not simply do this? I didn't know that CC_MD5 accepted
nil values. I tested it with a empty Data object and it's returning
1B2M2Y8AsgTpgAmY7PhCfg==.
import CommonCrypto func buildMD5(data: Data) -> String { var md5: Data = Data(count: Int(CC_MD5_DIGEST_LENGTH)) md5.withUnsafeMutableBytes { (md5Buffer: UnsafeMutableRawBufferPointer) in data.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) in _ = CC_MD5(buffer.baseAddress, CC_LONG(buffer.count), md5Buffer.bindMemory(to: UInt8.self).baseAddress) } } return md5.base64EncodedString() }
Yep, that's the best answer! I was trying to include the recommended alternatives if it wasn't supported, but I should have included the "happy path" first, my bad.
how to make an empty data instance without storage?
it actually doesn't accept nil values, and neither does the newer CC_SHA256. the function doesn't crash but it also doesn't do anything useful either. so you have to pass a non zero pointer to it for it to make the calculation even if the count is 0, so use a variation of that method above that uses a dummy variable or a force unwrap if you are not afraid of it.
Or just stop using Common Crypto for this stuff (-:
import CryptoKit let md5 = CryptoKit.Insecure.MD5.hash(data: Data()) print(md5) // -> MD5 digest: d41d8cd98f00b204e9800998ecf8427e let sha256 = CryptoKit.SHA256.hash(data: Data()) print(sha256) // -> SHA256 digest: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Share and Enjoy
Quinn “The Eskimo!” @ DTS @ Apple
in fact - no need to be afraid: always unwrap here, as Data's withUnsafeBytes can never give a buffer that has nil baseAddress:
let bp = UnsafeMutableRawBufferPointer(start: nil, count: 0) assert(bp.baseAddress == nil) let data = Data(bp) data.withUnsafeBytes { p in assert(p.baseAddress != nil) p.baseAddress! // safe }
and indeed, what Quinn says if you can tolerate
@available(iOS 13.2, macOS 10.15, watchOS 6.1, tvOS 13.2, *)
If it’s not documented, it’s an implementation detail and may change in future releases. Apple Developer Documentation | https://forums.swift.org/t/withunsafebytes-data-api-confusion/22142?page=2 | CC-MAIN-2021-31 | refinedweb | 586 | 58.38 |
Summary
To what extent does the mindset encouraged by a language and its surrounding culture influence people's perceived productivity when they use that language? In this weblog post, I take a look at this question in the context of the static versus dynamic typing debate.
There has been much debate concerning the relative merits of static and dynamic languages. Dynamic language enthusiasts feel they are far more productive when using languages such as Smalltalk, Python, and Ruby compared to static languages like Java or C++. Static language enthusiasts feel that although dynamic languages are great for quickly building a small prototype, languages such as Java and C++ are better suited to the job of building large, robust systems. I recently noticed that in my own day-to-day programming with Java and Python, I approach programming with a different mindset depending upon the language I'm using. I began to wonder to what extent the mindset encouraged by a language and its surrounding culture influences people's perceived productivity when they use that language.
A year ago at Artima I had a large number of JSPs that weren't organized in an model-view-controller (MVC) architecture. In these JSPs, the business logic was mixed with logic responsible for generating the HTML view. I wanted to start making incremental improvements to these JSPs by refactoring each JSP into an MVC architecture whenever I needed to make a change to that JSP. I didn't want to change any existing URLs, and wasn't pleased with the frameworks I investigated such as Struts and WebWork, so I adopted an interim approach I called "poor man's struts."
For each JSP I refactored to the poor man's struts architecture, I created a
Page class (the controller)
with a static
process method. I moved the business logic from the JSP into
the
process method, and called the
process method from
the top of the JSP, like this:
// Imagine this is the top of a JSP,... ExamplePage.Context context = ExamplePage.process(request, response, session);
After executing the business logic, the
process method returned a context object filled with objects
that the remainder of the JSP needed to render the page. Because the business logic often included explicit redirects, I decided to
let the context object indicate whether or not the process method had already redirected
the client. If the so, the JSP simply returned:
// Imagine this is the top of a JSP,... ExamplePage.Context context = ExamplePage.process(request, response, session); if (context.isRedirected()) { return; }The remainder of the JSP, the view portion, usually depended on variables that were declared in the business logic portion. Now that the business logic had been moved to the
processmethod, I needed to redeclare those variables. Therefore, I declared the missing variables next and initialized them with values and objects extracted from the context object, like this:
// Imagine this is the top of a JSP,... ExamplePage.Context context = ExamplePage.process(request, response, session); if (context.isRedirected()) { return; } long forumID = context.getForumID(); boolean reply = context.isReply(); String subject = context.getSubject(); String body = context.getBody();
ContextClasses
processmethod to the JSP. In many web MVC frameworks, this information is moved by placing the information in a context object, whose sole purpose in life is to move the information from the controller to whatever entity is rendering the view. Similarly, in the case of poor man's struts, the
processmethod populates a context object and returns it to the JSP. Because all poor man's struts context objects would need to indicate to the JSP whether or not the
processmethod had performed a redirect, I created
ControllerContext, a superclass extended by all context classes:
public class ControllerContext { private boolean redirected; public ControllerContext(boolean redirected) { this.redirected = redirected; } public boolean isRedirected() { return redirected; } }
For each
Page class, I created a nested class called
Context.
The constructor of this class accepted a boolean
redirected flag
and a parameter for each value or object needed by the JSP. Here's an example:
// Declared inside the ExamplePage controller class public static class Context extends ControllerContext { private long forumID; private boolean reply; private String subject; private String body; public Context(boolean redirected, long forumID, boolean reply, String subject, String body) { super(redirected); if (forumID < 0) { throw new IllegalArgumentException(); } if (subject == null || body == null) { throw new NullPointerException(); } this.forumID = forumID; this.reply = reply; this.subject = subject; this.body = body; } public long getForumID() { return forumID; } public boolean isReply() { return reply; } public String getSubject() { return subject; } public String getBody() { return body; } }
The context object is a vehicle that enables the
process method to return multiple values to the JSP. In several Java MVC frameworks, the context object is essentially a
Map. The controller places objects into the context
Map identified by named keys. Had I taken this approach in poor man's struts, for example, the
process method of
ExamplePage could have placed the subject string
"A Tale of Two Cities" into the
Map with the key
"subject". I rejected this approach primarily because I felt creating a specific
Context class for each controller would allow me to employ the type system to enforce constraints. The
ControllerContext superclass, for example, enforces that all context objects supply a boolean
redirected value. The
ExamplePage.Context subclass enforces that
ExamplePage.process always provides a non-negative
forumID. My theory was that such constraint checking would help me achieve robust code.
In practice, however, I found I was always in a hurry when I refactored a JSP to poor man's struts, because I only performed this kind of refactoring when I otherwise had some enhancement or bug fix to make to the JSP. I also found that
this approach to the context object required a lot of code. To speed up the process,
I wrote a Python script that, given a simple list of types and variable names, generated much of the needed Java code. The Python script did not generate any validation of input parameters in the
Context constructor. I
needed to write those by hand, and I discovered that I rarely felt it worth the time to
do so.
Given this experience, I ultimately decided that the added type safety wasn't really worth
the extra effort it required in this situation. In our new MVC architecture, the controllers return essentially a
Map context, and we enforce constraints with unit tests. The wierd thing is that I realized that if I had been designing poor man's struts in Python, I probably wouldn't have thought twice about just returning all that information in a tuple, as in:
(redirected, forumID, reply, subject, body) = ExamplePage.process(req, resp, session)
From a safety perspective, a tuple seems even more error prone than a
Map, because I have to get the order correct on both sides, not just the names.
If the thirteenth element in the tuple is a message ID, then I have to make sure the thirteenth element in both the return and assignment tuples is
messageID.
While order is generally random, names of keys and variables are usually similar, such as:
String subject = context.get("subject");In Java, the multi-valued approach equivalent to Python's tuple would be toss everything into an array of
Objectand pull it out by index on the other side. I would never do that in Java, but I don't hesitate to do it in Python. Why?
One reason is that that can be combined to build large, dependable systems. The Python culture encourages me to glide smoothly and quickly to a solution.
This realization got me wondering, to what extent is this perceived increase in productivity with languages such as Python, Ruby, and Smalltalk due to the culture of those communities, and the mindset that the culture engenders in programmers, versus the actual languages themselves? Do you find yourself performing acts with reckless abandon in one language that would make you feel guilty to do in another? How easily can you switch mindsets when you switch languages? Do you find yourself fighting the mindset in one language, and feeling at home in another? What is the real source of the differences in the static versus dynamic language debate?
Have an opinion? Readers have already posted 22 comments about this weblog entry. Why not add yours?
If you'd like to be notified whenever Bill Venners adds a new entry to his weblog, subscribe to his RSS feed. | https://www.artima.com/weblogs/viewpost.jsp?thread=92979 | CC-MAIN-2017-51 | refinedweb | 1,404 | 52.19 |
UART Access Not Working - ME SDK 8l_stanton May 4, 2014 3:20 PM
I am attempting to receive data over the UART on a Raspberry Pi with Java ME 8 (fresh install of the newest release).
I modified the /etc/inittab file on the Pi:
#Spawn a getty on Raspberry Pi serial line
# below is commented out to support UART access from Java ME
#T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100
as well as the /boot/cmdline.txt:
dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait
as outlined in the Raspberry Pi Getting Started Guide.
I tried to create a new UART instance via DeviceManager:
uart = (UART) DeviceManager.open(40);
This generates a DeviceNotFoundException ("Device 40 not found"). I used 40 as that is the UART Device ID listed in the Getting Started Guide.
I also tried to create a new UART instance using a UARTConfig:
public void startApp() {
System.out.println("startApp() ++");
uartEventHandler = new UARTDataAvailableEventHandler();
// create a basic UART config
UARTConfig uartAdHocTemplate = new UARTConfig(
DeviceConfig.DEFAULT,
DeviceConfig.DEFAULT,
9600,
UARTConfig.DATABITS_8,
UARTConfig.PARITY_NONE,
UARTConfig.STOPBITS_1,
UARTConfig.FLOWCONTROL_NONE,
100, 100);
try {
uart = (UART) DeviceManager.open(uartAdHocTemplate);
if (uart != null) {
uart.setEventListener(UARTEvent.INPUT_DATA_AVAILABLE, uartEventHandler);
}
} catch (IOException ex) {
Logger.getLogger((TestUART.class.getName())).log(Level.SEVERE, null, ex);
}
System.out.println("startApp() --");
}
The event handler is defined as:
public class UARTDataAvailableEventHandler implements UARTEventListener {
@Override
public void eventDispatched(UARTEvent event) {
System.out.println("Received an UART event!!");
}
}
This runs but the event handler's eventDispatch method is never called.
The application has a UART permission set: jdk.dio.uart.UARTPermission "*:*" "open"
The serial data is sent from an XBee radio module - I verified through a logic analyzer that the expected data is being sent to to the RXD pin on the Pi Cobbler breakout board.
Thank you,
Luther
1. Re: UART Access Not Working - ME SDK 8l_stanton May 4, 2014 4:01 PM (in response to l_stanton)
The application was being launched on EmbeddedDevice1 emulator. Running on the registered Raspberry Pi fired the INPUT_DATA_AVAILABLE event.
2. Re: UART Access Not Working - ME SDK 8f7b44acd-0f9e-4d38-acd1-2348e6d5969d May 19, 2014 1:56 AM (in response to l_stanton)
I am trying to do exactly the same thing except in my case the UART is talking to an Adafruit GPS module (with exactly the same comms parameters, conveniently). When I call DeviceManager.open() I get an IOException with a null message, but the Device Log shows "VM - [SERIAL] iso=-1:[UART] Can't open /dev/ttyAMA0 file errno 13" which is a permission denied error. This makes no sense since (1) I'm running usertest.sh under sudo, and (2) I don't even need root to access /dev/ttyAMA0 (I can, for example do "cat /dev/ttyAMA0" from BASH command line logged in as pi, and see a bunch of NMEA-ish looking strings fly by).
I've done the magic to disable the UART as a console and prevent the getty on it, I've given my MIDlet the [jdk.dio.uart.UARTPermission "*:*" "open"] permission. The JVM security manager seems quite content to let me open this device... but the OS isn't. Suggestions?
3. Re: UART Access Not Working - ME SDK 8Sergey.N-Oracle May 19, 2014 9:45 AM (in response to f7b44acd-0f9e-4d38-acd1-2348e6d5969d)
Did you follow Getting Started Guide and disable serial console?
4. Re: UART Access Not Working - ME SDK 8f7b44acd-0f9e-4d38-acd1-2348e6d5969d May 21, 2014 2:38 PM (in response to Sergey.N-Oracle)
Yes, as I stated in my original post, "I've done the magic to disable the UART as a console". Also I can just do "cat /dev/ttyAMA0" from the BASH prompt and I see output from the GPS module. I also can open and read from the UART from a Perl script and I get valid messages from the GPS module.
5. Re: UART Access Not Working - ME SDK 8Sergey.N-Oracle May 21, 2014 3:27 PM (in response to f7b44acd-0f9e-4d38-acd1-2348e6d5969d)
I can't help here since those error are reported by system call of open("/dev/ttyAMA0"), not by java code | https://community.oracle.com/message/12445460 | CC-MAIN-2016-50 | refinedweb | 698 | 55.44 |
Hello All,
I am a novice in Java programming. I am learning Java by taking an online class out of my own curiosity. I am solving basic level problems from many websites. So far, I was doing good. I get stuck as soon as I get a problem that asked for count occurrences of a character in a string. I’ve completed the string topic but the problem is going over my head. I can write a program that counts all the characters in a string but how to solve this particular problem? Do you guys have any idea? Please, help me to solve the problem.
Thanks.
Finding occurrences in a string is indeed a good problem to solve. In real-time developing it helps a lot. We can easily find out a particular character’s occurrences in a string using an iteration over the string and inside a method.
public class MyClass{
public static int count(String str, char ch){
int result = 0;
for (int i=0; i<str.length(); i++){
if (str.charAt(i) == ch)
result++;
}
return result;
}
public static void main(String args[]){
String str= "I Love Programming";
char ch = 'o';
System.out.println(count(str, ch));
}
}
Give the above program a try. It should count the occurrences of a particular character. | https://kodlogs.com/38747/how-to-count-occurrences-of-character-in-string-java | CC-MAIN-2021-21 | refinedweb | 215 | 76.42 |
I'm about to finish my first iOS app, but I'm facing a problem that I don't know how to solve.
The initial ViewController of the app contains a MKMapView, that has few annotations. When clicking any annotation, a popup appears with some info and then, there's an info button that leads to a new ViewController, with detailed information related to the selected annotation.
The thing is that second ViewController(DetailedViewController), has few labels (title, description), an image and few links, which should be loaded before the ViewController itself is shown. I'm reading this values from a JSON but there's no problem with that.
I'm reading all values and setting them in the ViewDidLoad()
So the thing is, when that second ViewController(DetailedViewController) is loaded in the simulator or in the physical iPhone, all fields has their default value (the one set in the storyboard) and spends few seconds to update with the desired values, even they're read so fast from the JSON (according to logs).
Also, an error appears in the console:
This application is modifying the autolayout engine from a background thread after the engine was accessed from the main thread. This can lead to engine corruption and weird crashes.
Stack:(
0 CoreFoundation 0x000000018a0ca1d8 <redacted> + 148
1 libobjc.A.dylib 0x0000000188b0455c objc_exception_throw + 56
2 CoreFoundation 0x000000018a0ca108 <redacted> + 0
3 Foundation 0x000000018acb1ea4 <redacted> + 192
4 Foundation 0x000000018acb1be4 <redacted> + 76
5 Foundation 0x000000018aafd54c <redacted> + 112
6 Foundation 0x000000018acb0880 <redacted> + 112
7 UIKit 0x000000018ff2140c <redacted> + 1688
8 QuartzCore 0x000000018d3e1188 <redacted> + 148
9 UIKit 0x000000019056de90 <redacted> + 64
10 QuartzCore 0x000000018d3d5e64 <redacted> + 292
11 QuartzCore 0x000000018d3d5d24 <redacted> + 32
12 QuartzCore 0x000000018d3527ec <redacted> + 252
13 QuartzCore 0x000000018d379c58 <redacted> + 512
14 QuartzCore 0x000000018d37a124 <redacted> + 660
15 libsystem_pthread.dylib 0x000000018915efbc <redacted> + 572
16 libsystem_pthread.dylib 0x000000018915ece4 <redacted> + 200
17 libsystem_pthread.dylib 0x000000018915e378 pthread_mutex_lock + 0
18 libsystem_pthread.dylib 0x000000018915dda4 start_wqthread + 4
)
dispatch_async(dispatch_get_main_queue(){
// code here
})Object! = "anyBarPhoneNumber"
var barWebsite: AnyObject! = "anyBarWebsite"
var barName: AnyObject! = "anyBarName"
var pinCoordinates : CLLocationCoordinate2D!
var pinId : String!
override func viewDidLoad() {
super.viewDidLoad())
let barName: AnyObject! = json["name"]
self.BarNameLabel.text = String(barName)
self.barName = String(barName)
self.BarDescriptionLabel.text = (json["description"] as! String)
self.barPhoneNumber = json["telf"]
self.barWebsite = json["website"] as! String()
}
}
Apple uses queues to represent threads, as @Adam says in the comments you you should look at Concurrency Programming Guide to get a better understanding of your app's behavior.
However it might be useful to explain what is going on with your app that it creates your crash as it is a very common pattern in iOS.
You are making a request to some web service. Unless explicitly this reuqest is going to happen on a background queue. If this was not the case your app's UI would freeze while the request to the web service was made. The point to remember here is that ALL ui work must be done on the main queue! @Leon Guo has shown you how to access the main queue from a different queue.
So how does this effect your web service request. Well when the response comes back and you parse it and you are ready to assign the data to ui elements such as labels and textfields, you want to do that on the main queue like so:
// comment: example method on DetailedViewController class func prepareUI(data: MyDataType){ // Swift 3 DispatchQueue.main.async { titleLabel.text = data.titleString descriptionLabel.text = data.descriptionText } }
Or other wise you will be accessing the ui elements in the DetailedViewController from a queue other than the main queue, hence your crash. So to solve your problem right now wrap your assign of the UI elements in the
DispatchQueue.main.async block of code where ever you are doing it. This should get you back up and running again.
In the long term you need to develop an understanding of how concurrency is implemented in iOS. The App Programming Guide for iOS is a great place to start!
Edit
Here is your code using the
DispatchQueue.main.async function. I would state that this is just to get you up and running...
import UIKitBarPhoneNumber" var barWebsite = "anyBarWebsite" var barName = "anyBarName" var pinCoordinates : CLLocationCoordinate2D! var pinId : String! override func viewDidLoad() { super.viewDidLoad() } override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) // For example purposes only networkCall() } func networkCall() {) // Here we get back on the main queue since we are altering UI elements dispatch_async(dispatch_get_main_queue()) { let barName: AnyObject! = json["name"] self.barNameLabel.text = String(barName) self.barName = String(barName) if let text = json["description"] as? String { self.barDescriptionLabel.text = text } if phoneNumber = json["telf"] as? String { self.barPhoneNumber = phoneNumber } if webSite = json["website"] as? String { self.barWebsite = barWebsite }() } } | https://codedump.io/share/A3beKSNaSWwG/1/modify-view-controller-properties-before-view-controller-is-displayed | CC-MAIN-2016-50 | refinedweb | 776 | 57.27 |
- Training Library
- Microsoft Azure
- Courses
- Processing IoT Hub Events and Data
Writing an Event Processor for IoT Hub with C# I’ll show you how to create an event processor using C#. Writing your own event processor is like having your Lego bricks which allow you to create any model, architecture, or algorithm you want.
This gives you the most flexibility, however, it also has the most amount of addition effort.
We’re here in our Azure portal. And I’ve already created an IoT Hub and an Azure storage account.
Let’s head into IoT Hub. Looking at the device explorer...you can see that we have one device named "dev1" which we’ll use for our simulated device.
Under shared access policies, I’ve created a policy for our event processor, and it has service permissions; which allow us to retrieve a message from the message endpoint.
On the endpoint I’ve configured a consumer group named processor. It's a good practice to create a consumer group for each processor.
Let's switch over to our Visual Studio project.
We have "IoTHubDeviceClient" and "IoTHubEventProcessor". Let's look at the IoTHubDeviceClient. This is a basic device simulator written in C#.
Then there is only a command implemented, which is for telemetry.
In the telemetry implementation there is a loop that generates one event per second.
The event itself is simple, it has a DeviceId; Index that is a sequence number; and the Data is our simulated telemetry . Then Date is when that event is generated.
The value for data is adjustable by using the up and down arrows on the keyboard. This allows me to adjust the data up and down by increments of 1. This will let me simulate different data patterns.
Okay, then there is the event processor which is a class that implements the IEventProcessor interface from the EventHubs namespace.
It contains four methods. Open, Close, ProcessError, and Process Events. And ProcessEvents is the interesting method.
Let's see how it is implemented.
In this example the open, close and process error, only write to the console.
The ProcessEvents method has arguments for the context as well as an IEnumerable of EventData.
Here I’m looping over the messages and passing them into another method that grabs the event data and hands it off to an event delegate.
So, let's see how the event processor works. If you look at the Processor class, there’s a Main method.
In order to instantiate a new EventProcessorHost you need a few settings.
The ConnectionString represents the connection to the IoT Hub namespace.
The Path represents the specific EventHub-compatible endpoint for IoT Hub.
Then there is the ConsumerGroupName. Again, specify a consumer group for each process you run.
Then there is the storage and a container. Since we have multiple partitions processed by the event processor, each event processor has a state that will be persisted in this storage, inside the specified container.
I have implemented commands for a couple of scenarios: logging and average.
First, let’s look at logging.
The first thing to do is call "RegisterEventProcessorAsync". We don't control how the processor is created here. You only stop and wait for a key to finish, while eventProcessorHost does the work for you.
This loop is just a sort of clock to gauge time.
So, if we go to the command line, I have one here. Let’s start another command line instance.
On the first one I’ll run a batch files that runs the simulated device.
On the second I’ll run a logging batch file that runs the eventProcessorHost process.
Notice that there are the events that are sent by the device. Here, you see that the processor is starting.
The processing is quite simple, it’s just writing to the console, you can see the messages being processed by the logging processor.
If I increase the data value by clicking the up key on the keyboard, here we see that we have a 22.
If I click the down arrow, the data value going down and the processor is receiving the events. Great. So, let’s stop the device, and processor. So that’s a simple example of using an event processor to just log events. Now, logging them to the console isn’t very useful, however persisting the records to DocumentDB, SQL, or anywhere else is easy from here.
Alright, let's do something more interesting. What is far more interesting than logging is control. That means looking at the data, finding a pattern, and then sending feedback to the device based on what we find.
Here in the Processor class I have another command for "average". I’m calculating the average of the value in the Data property for a 10 second window. Pretend that the data is something meaningful, maybe something like temperature.
If the average value hits above 24 and there are at least 8 messages, we send a “SWITCH-ON” command. You can imagine that in this case there’s a device listening with a fan that will listen for the “SWITCH-ON” command.
Let’s run this to see it in action. I’ll start by running the dev1 batch file.
Okay, then I’ll run on a second command prompt, an "average" batch file that launches the same processor with some other arguments.
So far it looks like the previous demo. Let’s increase the Data property. Notice it going up 23, 26. Okay. Now that we have some events, I’ll decrease the value. Notice here that we have some feedback. This command here to “SWITCH-ON” was triggered because our command condition was met. Obviously, this is a demo, so we’re not actually controlling anything, though you can imagine using this with an actual device.
Keep in mind that when it comes to controlling devices in production you’ll need to consider how many commands you send, and or make sure that they’re executed idempotently on the device
Alright, that’s going to wrap up this lesson, so let’s recap.
The advantage of implementing your own Event Processor is that you have full control over the processing. You can implement any processing model you want.
You can write a stateless processor. You can implement a stateful processor; you can do whatever you want in code and use any supporting library you need.
However the drawbacks are that you’ll need to manage the infrastructure; you’ll also need to write a lot of code that isn’t specific to your business logic.
So raw event processing requires a lot more development effort!
In the next lesson, we’ll consider how to handle messages with Azure Functions. | https://cloudacademy.com/course/processing-azure-iot-hub-events/writing-an-event-processor-1/ | CC-MAIN-2022-33 | refinedweb | 1,129 | 66.64 |
I am sort of on an SCA kick here. I just saw this post from Dick Weisinger of Formtek. In it he questions the approach OASIS is taking with the SCA family of specifications. Dick implies that SOA is already too complex for most people, and creating six new standards isn’t going to simplify anything. I would take that one further and question the very structure of SCA itself. It seems that there are a couple of loosely-related technology areas covered in the SCA specs, why are they all smashed together?
Why isn’t the portable communications API for Java being specified in the JCP, where such things are “supposed to’ be specified?
Why is that portable API combined in a single spec with a description language for composite artifacts (SCDL)?
SCA is the first standardization effort for a component-level programming model based on dependency injection. At its core, a dependency injection framework does not provide an api. Talking about SCA as an API kind of misses the point about what SCA is about. There are API:s, but these should be considered shortcuts for users without SCA tooling or without concerns for the full value of dependency injection. The Spring framework is the de-facto-standard for dependency injection for the Java platform. JavaEE 5 is a standard, but it’s dependency injection framework is intrusive (it depends on JavaEE-specific annotations). SCA – like Spring – is a zero-footprint (code-wise) DI-framework – at least on the Java platform. Unlike Spring, SCA is a standard. SCA (unlike Spring) also provides structures in terms of assembly contracts, that add enterprise-level properties like namespaces, promotion of references and services to the next level of composition etc. In my opinion, SCA is a much more mature model than Spring, when it comes supporting enterprise aspects of component library management. WCF is nothing of this. It is not a composition model based on dependency injection, which is the core value of SCA. SCA also provides bindings. From a WCF perspective, this may be viewed as competing feature. The major problem with SCA – as I see it – is that each new binding requires an extension to the standard. WCF is the opposite – a plug-in architecture for transports. From an SCA perspective, they are complementary. If SCA should standardize a single binding, it would be something like WCF. There is no standard for that in the Java world (not at the service-level). There are however open source products like Apache XCF – not as feature-rich as WCF, but on its way. But again – DI makes it less important to have something like WCF. DI abstracts even the fact that you need communication. It then becomes less important to standardize an api, since your code is not bound to any api. It’s fine to use Spring remoting today, Apache XCF tomorrow and something else after that. The code remains unaffected. Going away from WCF, means changing api. Going away from SCA (i.e. to Spring) does not affect your code, nor does changing transport (in case there is communication going on in the first place).
What I think is required for contemporary development of service-oriented solutions, is a programming language with the qualities of Java and C#, a zero-footprint dependency injection framework and a service-level protocol abstraction with plug-gable transports. The later should provide glue for dependency injection frameworks. Java is a good language for SCA, SCA is an excellent (by far the most well-engineered, in my opinion) service-level dependency injection framework and its standard. SCA provides transport binding in a rigid way (not plug-gable). WCF provides the later – but only that. Although SCA is rigid, it probably covers 90% of all requirements (both SOAP-based web-services, JMS and Spring are them selves transport abstractions). A final comment on limitations of an SCA domain: A domain may be universal – and thus imposing no restrictions on the scale of the assembly model. In case an enterprise require partitioning into multiple service domains, SCA offers a concept for expressing such domains.
Thanks for the thoughtful comment.
I am not sure why you are selecting this post to offer your comment on the comparison between SCA and WCF. This post was just about OASIS’ standardization efforts, and one person’s opinion about it (you know what they say about opinions).
I did write yesterday () about a comparison. And in that post, I offered that the understanding of SCA within the industry has not yet converged. Your description is something new to me, something I had not seen before. I take that as additional evidence for my prior point about an absence of convergence.
You seem to bring quite a bit of DI perspective to the SCA party. Not sure SCA has ever been described in quite that way but yes I see your point. From my perspective DI is not a good in its own right, but is an interesting potential design pattern, especially when assembling or composing distributed systems.
You also say something provocative about DI – that DI lessens the need for WCF. This seems like magical thinking to me. If I have DI, then I no longer have to program the communication links? Seriously? “DI abstracts even the fact that you need communication.”?? I think we tried this, it was called DCOM and CORBA, and it didn’t work. Abstracting away the network has proven to be a minefield repeatedly. The fact is, the network is not invisible in many respects, and abstracting it away is dangerous and leads to many harmful effects.
You point out, as I did (separately), that SCA does something different than WCF in its attempt to define a model for distributed systems. I get that. Thanks for being clear.
You also offered some comment on the SCA domain, and I think that may be a response to something I did not write here (maybe in a different post). Beside being off the point, I don’t get why your comment changes anything. The key problem with the SCA domain is that it is a single vendor artifact. Not that it cannot be “large” (you mentioned there are “no restrictions on the scale of the model”). It’s not that the theoretical limit on a domain is too small. My concern, although I did not articulate it in the original post here, is that the domain is a single-vendor artifact and there is no cross-domain integration, for example, between a domain hosted by vendor X’s SCA container, and a domain hosted in an open source SCA container. A domain is all managed by a single vendor’s SCA “container”. | https://blogs.msdn.microsoft.com/dotnetinterop/2007/09/18/commentary-oasis-is-on-the-wrong-path-with-the-sca-standardization-effort/ | CC-MAIN-2016-50 | refinedweb | 1,124 | 64.41 |
Uncyclopedia:VFH/Detailed rules
From Uncyclopedia, the content-free encyclopedia
< Uncyclopedia:VFH
These are the rules of VFH. They might change at any time, just to keep you on your toes.
Voting
- A registered user's vote always counts as one (1), even for weak and strong votes.*
- IP votes carry half the weight of registered users' votes.*
- Nonsigned votes and votes without timestamps will be struck out, so make sure you sign your votes with ~~~~ or press the "your signature with timestamp" button (
) above the edit box.*
- When voting for or against, please change the score number accordingly — this saves a lot of time later.*
- Voters: don't be dicks and do be constructive with criticism. Writers: Don't be prima donnas. Be open to criticism.
Nominating
- Nominate and vote for articles you find excellent.
- Any article from any namespace (except userspace) is eligible for nomination.
- You are allowed to nominate articles you wrote, but don't overdo it.
- Nominations do not count as a vote for. Please also add a for vote in the correct section if you wish to vote for your nomination. This is both allowed and encouraged.
- When nominating an article, the VFH template should be added to the article using {{VFH}}. Anything you add {{VFH}} to also shows up in Category:Feature nomination.
*Note that when you use the auto-voting script, these actions are carried out automatically. (Scores are automatically updated, votes are signed, etc.) | http://uncyclopedia.wikia.com/wiki/Uncyclopedia:VFH/rules?oldid=5733028 | CC-MAIN-2015-48 | refinedweb | 242 | 67.55 |
To accept user input from someone using your software, Java provides the Scanner class, which is part of the java.util package. We will look at how to use the java.util package, the Scanner class, and the nextLine() method in this Java programming tutorial for developers.
Below is an example program showing how to use the nextLine() method in Java to accept user input:
// How to import the Scanner class
import java.util.Scanner;
class Main {
public static void main(String[] args) {
// Create the Scanner
Scanner superName = new Scanner(System.in);
System.out.println("Enter a Super Hero Name:");
// Read User Input and store in a String
String heroName = superName.nextLine();
// Print heroName
System.out.println("Your Super Hero Name is: " + heroName);
}
}
The output from running this program would be:
Enter a Super Hero Name:
Your Super Hero Name Is:
The exact output will be slightly different depending upon which name the user input at the prompt.
While the nextLine() method is used to accept user input in the form of String data types, Java also provides a way to accept the following data types, including the following methods:
nextBoolean(): This method is used to read boolean values from a user. nextByte(): This method is used to read a byte value from a user. nextDouble(): This method is used to read double values from a user. nextFloat(): This method is used to read floating point numbers or float values from a user. nextInt(): This method is used to read integer or int values from a user. nextLine(): As discussed earlier in this article, this method is used to read String values from a user. nextLong(): This method is used to read long values from a user. nextShort(): This method is used to read short values from a user.
Advertiser Disclosure: | http://www.devx.com/tips/how-to-accept-user-input-in-java.html | CC-MAIN-2021-49 | refinedweb | 301 | 61.77 |
Why not use XML for configuration or DSLs?
XML suffers from overcomplication much like vanilla YAML does - although to an ever greater degree, thanks to the committee driven design. Doctypes and namespaces are horrendous additions to the language, for instance. XML is not only not really human readable (beyond a very basic subset of the language), it’s often barely programmer readable despite being less expressive than most turing complete languages. It’s a flagrant violation of the rule of least power.
The language was, in fact, so overcomplicated that it ended up increasing the attack surface of the parser itself to the point that it led to parsers with security vulnerabilities.
Unlike JSON and YAML, XML’s structure also does not map well on to the default data types used by most languages, often requiring a third language to act as a go between - e.g. either XQuery or XPath.
XML’s decline in favor of JSON as a default API format is largely due to these complications and the lack of any real benefit drawn from them. The associated technologies (e.g. XSLT) also suffered from design by committee.
Using it as a configuration language will all but ensure that you need to write extra boilerplate code to manage its quirks. | https://hitchdev.com/strictyaml/why-not/xml/ | CC-MAIN-2019-09 | refinedweb | 213 | 53.92 |
Design a Jenkins pipeline
Bajet $30-250 USD
Technology: Jenkins, K8, groovy
I need to design a Jenkins pipeline to deploy helm (from GitHub repo) on K8 cluster (ICP) in particular namespace.
7 pekerja bebas membida secara purata $177 untuk pekerjaan ini
Hello, I am devops engineer who worked with jenkins and helm for years. Please contact and I will help you to design the pipeline.
Hello there, I can help you to design your ci-cd pipeline through Jenkins to deploy on k8s Feel free to contact me Regards Gihan
Hi, i'm a kubernetes administrator with more then 3 yars of experience. please feel free to contact me order to start the task | https://www.my.freelancer.com/projects/jenkins/design-jenkins-pipeline/?ngsw-bypass=&w=f | CC-MAIN-2021-04 | refinedweb | 115 | 57.2 |
C# Graphics - A GDI+ world
Hi all,
GDI+ is the next version of GDI. GDI+ drawing has graphics classes that can be used
to write graphics on screen.GDI+ resides in the assembly System.Drawing.dll.
The namespace
System.Drawing.dll is the assembly to add in VS.net.
Add a reference to the System.Drawing.dll using Project->Add Reference menu.
System.Drawing is the namespace that has to be added at the beginning.
using System.Drawing;
There are few more advanced classes that come under System.Drawing. Following are
the main 3 classes.
1. System.Drawing.Drawing2D
2. System.Drawing.Imaging
3. System.Drawing.Text
The surface object.
Before writing anything to the screen, we must need a surface. The graphics class
provides the surface to write on it.
protected override void OnPaint(PaintEventArgs e)
{
Graphics g = e.Graphics;
}
After creating the graphics references, you can write graphics on it. Graphics in
the sense,
lines, arcs, circle, points etc....
Here is a list of such methods under graphics class
DrawLine - To draw line.
DrawPie - To draw pie diagram.
DrawPolygon - To draw polygon.
DrawRectangle - To draw rectangle.
DrawString - To draw string.
DrawArc - To draw arc in the ellipse.
DrawCurve - To draw a curves from array of points.
DrawEllipse - To draw ellipse.
DrawImage - To draw an image.
FillEllipse - To fill the inner part of an ellipse.
FillPie - To fill the inner part of a pie.
FillPolygon - To fill the inner part a polygon.
FillRectangle - To fill the inner part of a rectangle.
FillRegion - To fill the inner part of a Region.
After creating a Graphics object, you can use it to draw lines, draw strings, fill
rectangles and so on.
Following are some of the commonly used objects:
Brush Used to fill enclosed surfaces.
Brushes Brushes for all the standard colors.
Pen Used to draw lines and polygons, just like pen
Pens Pens for all the standard colors.
Font Font class to render text
Color Color class that has all colors objects.
Bitmap Bitmap is an object used to work with images defined by pixel.
FontFamily Defines a group of types font faces.
SolidBrush Brush of a single color
PEN class
A pen draws a line between specified points with specified width and color.
Pen pen = new Pen(Color.Black);
Pen pen = new Pen(Brushes.Blue);
Pen pen = new Pen(Color.Black, 2);
Pen pen = new Pen(Brushes.Blue, 4.7);
Brush class
Brush class is used to draw/fill inner part of any circle or rectangle area.
It has some derived classes
1. SolidBrush - brush of a single color
2. TextureBrush - used to fill the interior of a shape
3. RectangleGradientBrush
4. LinearGradientBrush.
example for brush class
Brush brsh = new SolidBrush(Color.Brown);
Brush brsh = Brushes.DarkBlue;
Rectangle class
The Rectangle class is used to draw a rectangle on the screen.
Rectangle rect = new Rectangle(2, 6, 20, 25);
Draws a rectangle starts from (2,6) with height and width of (20,25).
Point Class
Point class draws a point on the screen
Point p = new Point(5,5);
Drawing a line
DrawLine method of the Graphics class draws a line.
protected override void OnPaint(PaintEventArgs e)
{
Graphics g = e.Graphics ;
Pen pen = new Pen( Color.Red );
Point point1 = new Point( 5, 10);
Point point2 = new Point( 50, 60);
g.DrawLine( pen, point1, point2 );
}
Drawing an Ellipse
DrawEllipse method is used to draw ellipse or circle.
protected override void OnPaint(PaintEventArgs e)
{
Graphics g = e.Graphics ;
Pen pen = new Pen( Color.Black,3 );
Rectangle rect = new Rectangle(4, 4, 100, 100);
g.DrawEllipse( pen, rect );
}
Drawing strings
OnPaint method can be overridden to draw text/string on to the screen.
protected override void OnPaint(PaintEventArgs e)
{
Font f = new Font("Veranda", 25);
Graphics g = e.Graphics;
g.DrawString("Sample Text", f, new SolidBrush(Color.Cyan), 50,50);
}
This gives a basic ground knowledge on GDI+ graphics.
Hope this will help to explore more.
SAN | http://www.eggheadcafe.com/tutorials/csharp/387702a9-0a40-43a4-9e82-6f6d8eb86205/c-graphics--a-gdi-world.aspx | crawl-003 | refinedweb | 655 | 70.29 |
Across
Down
19. It's hard work turning limo (4)
21. Lout blotto after imbibing (3)
23. Just over a yard of ale knocked back, not one but fifty (3)
24. Bird brain, addled after double frontal removal (3)
25. It flows through the heart of boozer (4)
26. What the cow jumped over detailed this sound? (3)
27. Jock's conscientious one, accepted into paradise before time (6)
29. Man travelled by 'orse, reaching Germany worn out (6)
30. Highlights half of lion manes, oddly (5)
31. Essential to Balkan economy, leader entered currency (3)
32. Geek marginalised by neckbeard (4)
34. Request build at regular intervals (3)
35. Material restricted by blacklist sent back (4)
36. Before ultimate in master key, one broke out of jail (7)
40. Particle observer reportedly working (3)
42. Look at second base, then note number three (3)
43. More stretched ResNet backed up (6)
46. Caribbean music in Southern California eschews Latin underpinning (4)
48. Listening to Kenny, Warren, Ali, and Dario for example, results in expression of annoyance (4)
49. Authorities not working in 2½ minutes, for Roman? (6)
50. Heartless skier put edgy beginner atop glacial ridge (5)
52. Grease carried by this Calhoun performance, with utterly hammy leads to setter (6)
53. Lance, one lap back, took drug - about a thousand (6)
55. Miss is audibly disappointed with Romney (4)
57. Turbulent? Orient with yardarms (6)
58. Gorilla head's monkey stare (4)
61. Archaeological mound discovered in Metelen (3)
62. Army standing down for back-up rally (5)
64. Woman's with German recluse (6)
67. Eg Tommy gun concealed for emissary (6)
70, 132. TT race rehearsal (3,3)
71. Administrative assistant embraces left wing (3)
72. Complex problem eliminated from Nagpur - wild Indian bison (4)
73. Female monster giant grows internally, takes drug but loses zero weight initially (6)
74. Thirds of book Sigmund Freud returned, a third of his psychic apparatus (3)
75. Seabass in aquarium regularly missing and not suitable for all ages, armband having been ripped apart (10)
76. Come to before funeral (4)
78. Cricket fielder long gone, regularly disappearing (4,2)
79. Welcome back Perón, for example (3)
82. Steal information from ThunderCats character (5)
83. 20 participant has 68 first instead of filling starter at this time (6)
85. 50 to zero, 101 Latin places (4)
87. Thematic figure's fake broadcast during airing of Verhoeven film gets policeman off the trail (10)
89. "Hey Ya" - year Outkast blew up? Agreed (4)
90. Fated four having been removed, disfigured foot featured this? (3)
91. Quietly skimmed top of lip in west coast airport, giving kiss of peace (3)
92. Caustic soda, a likely story we hear (3)
93. Intern loses computer datum, to be paid for parochially (3)
95. Provoke Superdrug to go bust (4)
96. Buddhist laws - mark as confusing (6)
97. Japanese wrestler puts top three in reverse order - he is musical enthusiast (4)
99. Joyous exclamations at jovian satellites (3)
100. Falafel partner, from around North 147? (6)
102. I do karaoke for audiences - put on Cake (5)
105. Aria with no coda for male child (3)
106. Further along, as doctor might be? (6)
109. Rebound heard - switching sides of dartboard throwing line (4)
111. Juiced lemon's slew of succulence (10)
119. Variable eggs, unknown animal developed from egg (4)
120. Antelope stumbling a la ungainly nyala having been struck (6)
122. Foster child in the highlands wanted alternative housing (4)
124. Walker, perhaps, of the rovers (3)
125. Coin once used in Paris? Oui! (3)
127. After operating room schedule is returned, one might speak out (6)
130. Pairs of pants, 100 pairs ruined (6)
131. Two letters read out make moderate-length composition (5)
132. see 70
134. Spins, reverses, cuts up (5)
135. Indulge difficult part that concerns us - penny drops (6)
138. Vital cosmetic ingredient (4)
139. Frightfully testy about church secondary's appropriations of funds, in the past (6)
141. Interfere to get gold, reportedly (6)
142. Spanish beach, variable, becoming ultimate public space (5)
143. Theme park capital goes crazy for Indonesian product (6)
145. Horse-kick casualty back in rest and recuperation (4)
146. Education without working is not so great (4)
148. Well-connected actor hosts 50 on old verandah (6)
150. Not or nor, for Cicero, included in connectives (3)
152. Epoch references beginnings: a new time (3)
153. Lost hospital support for veteran (3,4)
157. She is South African research associate (4)
159. Party with a skip and jump (3)
161. Tedious in Scotland to prepare reed (4)
163. Irate, having lost at Frustration (3)
165. Villi exam regularly neglected sections of the intestine (4)
166. Austere bishop returning to the bosom of his diocese (6)
167. Cycling fraternity taken in by whopper (6)
168. Supporter at golf course stifles second half of giggle (3)
169. Italian has ankle bones (4)
170. Bow to audience from Bangkok (3)
171. She is limitlessly venal (3)
172. Spooner - pre-haze in church (3)
173. Home-phoner recalled ringing about branching structure (4)
1. New diet influenced by attraction to heavenly body (4)
2. Sharpen fashion, lacking the French virtue (7)
3. Ate crumbly snack (3)
4. Uses fan to cook, 83 at first (7)
5. Fat man's empty bowls sent back (4)
6. To a northerner, butter-bur makes Spanglish noise? (5)
7. Regretted sounding insulting (4)
8. More upsetting to put snake head on snake (6)
9. One who provides words following Leo, for example (5)
10. They can be found in gym at school (4)
11. Developed vacant land, developed vacant land in file (8)
12. Makes furrows in cash registers (5)
13. Slog in cricket match, finally, by neanderthal (4)
14. Indeterminate part that's typical of setter (4)
15. Affirmatives? Not one? On the contrary (3)
16. Virus, with last of antidote gone, can be used as weapon (4)
17. Uncle once starting off meme (3)
18. Archaic contrivance is permissible in archaic article (4)
20. On loan for 40 days (4)
22. For example US tennis player, not for a second hesitant, picks up aluminium (7)
28. Audible static could come after pop (4)
33. Once more construct cryptic teaser without hint of answer (5)
37. Ms Black on blind date is after spade for herb (6)
38. Machine part is placed at bottom of machine when in reverse (3)
39. Ape goes crazy for vegetable (3)
41. Acting up or in film, perhaps (4)
44. Point to invade sea walls (5)
45. Fashionable teletubby is dangerous animal (5)
46. Treated stray goat-like creature (5)
47. Music for Airports artist played backwards Metallica song (3)
48. Joint to attach picture (4)
51. Kingly wings, aerodynamic head and braw tail - that's Scottish daw (4)
54. Powerful midfielder got a deuce on the rebound (4)
56. Very large storage unit - ace! (4)
58. Styles hair on return (3)
59. Maiden provides assistance (3)
60. Cheer NATO character (5)
62. Founding father loses capital to banks (4)
63. Power comes from mother, and grandmother to the same extent (4)
65. Teen almost returns to Maidenhead for rendezvous (4)
66. Charybdis lethally surrounded small archipelago (5)
68. For example goose's principal offering... (3)
69. ...goose first, then ducks, producing sickly sentimental stuff (3)
71. Topless demos outside McDonalds, perhaps (6)
77. Maghrebi castle featured in Sahara skyline, going back a bit! (4)
80. High cards cap off the three below them? (4)
81. Multinational takes away poor people's money - that's disgusting! (4)
84. Hedges apparently in blossom (4)
85. Heartless Lenny hugging mouse, no use to author (6)
86. Procedure on Americans is important undertaking (4)
88, 159. Hercules, for example, in Homeric poem, doesn't want mom to get involved (4,4)
94. She is nothing without one (5)
98. Passable notes (2-2)
101. He got an album - one by the Beatles (4)
103. Endless silly mess of problems (4)
104. Gor blimey ultimately could mean bloody horrible (4)
107. Sarcastic coda to return of large quantity (3)
108. Court's beliefs based on flimsy evidence (3)
110. Salute bad weather (4)
112. Child's Play a 15? Ridiculous! (4)
113. Conditions snowier; with no hint of interruption deteriorate further (6)
114. Members of the lower class sense revolution (5)
115. Shame to downscale initially, as Jack did to house (5)
116. 164 counterpart ejected number 1 from mound (3)
117. Lettuce function (3)
118. Crustacean raw starter packed in ice (5)
121. Going north, for example, consuming average fruit (5)
123. Territorial Army recruitment may be seen hoofing through the Himalayas (4)
126. Mouthpiece, and where to find it (8)
128. Puzzle breakthrough moment leads to another hard-fought answer (3)
129. Bar regular, as he is in here? (7)
130. More appealing to skip surgery (5)
133. Add first to last fillers (3)
136. Honorific overwhelms worker, like they are capable of creating transformation (7)
137. Acquire vice, ere perversion (7)
140. Just a little grunge band (3)
142. Long distance adventuring capers (6)
144. Ireland, poetically, flanked by Northern Ireland from the east (4)
146. Loath, in Glasgow, to put strip of wood around one (5)
147. One put hat on one in country (5)
148. Altered Images perhaps owned by Damon Albarn’s band (5)
149. Assist in the medical treatment of an animal, as one should not do to a neighbour's ox (5)
151. Swedish money trader opening German wine (4)
154. Readily inform falsely of tip off (4)
155. Trendy to gather a collection (4)
156. It might be dingley or use dongle (4)
157. Be blown away by report of low-price event (4)
158. Hails goalkeeper's effort, start to finish (4)
159. See 88
160. Shut up Charlie Brown vehicle, trashed down under (4)
162. Space station over a star of Cetus (4)
164. 116 counterpart would be bankrupt if involved in bet (3)
167. Crib sheet covers it (3)
Romantic composer born on 12/11 (7)
Slang for currency (4)
____ account, buy now pay later agreement (6)
Secret Millionaire S02 location (7)
They clear pencil marks (7)
Number of fluid ounces in a gill (4)
Mad (7)
Attach, rhymes with niche (5)
Played Kevin Ball in Shameless (5)
First name of actress who appeared on S01 of Celebrity Fit Club (3)
Nearest country to the south (6)
Belonging to leader with the dubious honour of being the first to have to resign (6)
Blow ___, to stand up (3)
Past winner of I'm A Celebrity...(8)
Boss of The Office (5)
Pedestrian surface (8)
Past winner of celebrity dancing competition (5)
Played Jenny in Cold Feet (6)
Elusive striped character (5)
Old school slang for a police vehicle (5)
And of the morning
Arid
Asian luminant
Backyard, in time
Bares face displaying this?
Bear gut
Beep, for example, in the UK
Brad, perhaps
Buts
Can be block-faced
Characters from the west bask
Cheesy
Could be rock or tea
Cow gets air through this
Deduction that used to accompany tart
Expected snore
Female Celts
For keeping snag at night
From
Gerald's cry, maybe
Gross crops
Hump preparation
In this, something has chanced place
It creates list
It dares
It doesn't run your man, but it might run your mobile
It might fall out if you do a lot of press ups
It might share trespassers
It recognises Claude, amongst other things
It used to be overlaid
Italian capitol
Knowing when Waugh is appropriate
Lurk for a book purchase
Mend-alterer
Might live in hull
Mild spear in Scotland
More gappy
Mother
Mum or muck
Not a naval strongpoint
Now fake
Of a hacky sack with no beats
One in plight
One leaves a shop that's going under
Opposite of out
Palm in the face of frustration
Part of a word out
Pasta
People are often seen on one at the beach
Pillocks
Politician's kelp
Produced by file
Quade with this?
Quiz
Raved
Sells off
Set out
Shapeless dolour
She walked like a man
Tally of witches
To a canter
To be the cruse
Triad
Turned ground
Two in Dijon
Ural in Asia is one
Vessel fuelled by wine
Weed
Why me?
With boy, older dog
With tweet, she'll be in Paddy's Pub | http://www.mit.edu/~puzzle/2013/coinheist.com/feynman/sams_your_uncle/index.html | CC-MAIN-2018-05 | refinedweb | 2,074 | 77.64 |
tabs.query for active tab after tab.create
Not reproducible Issue #12093789
Steps to reproduce
- call
tabs.createand open a new tab (‘about:blank’)
- call
tabs.query({active: true}, cb)in callback you receive an empty tabs array. But you expect to see an array with an active tab.
If you call
tabs.query({}, cb)- you will see that newly created tab has
status: 'loading', active: false
Microsoft Edge Team
Changed Assigned To to “Steven K.”
Hi Igor,
Is this code being used in an extension?
I wanted to make you aware of the namespace requirement for Edge Extensions supported APIs. See the “Note” at the top of the following page.
“For Microsoft Edge, all extension APIs are under the browser namespace, e.g. browser.browserAction.disable().”
If this was not your main issue, send a repro that we can test.
Also, the example “QR code” demo extension shows the tabs API with a non-empty tabs array being returned.
The MS Edge Team
Microsoft Edge Team
Changed Status to “Not reproducible”
You need to sign in to your Microsoft account to add a comment. | https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/12093789/ | CC-MAIN-2017-26 | refinedweb | 185 | 67.35 |
Apr 24, 2015 03:44 PM|Muncher39|LINK
Hi all,
I have ImpersonateList.ashx.cs file and i would like to debug the code so i can step through. However, i am unable to hit any of the break points? I place a breakpoint in and it appears red (no problem with sysmbols loading etc) but when i attach to process i am unable to hit it and inspect whats happening
Please someone help
My class inherits from ihttphandler
/// <summary> /// Returns a json string of impersonate users to populate the impersonate dropdown /// </summary> public class ImpersonateList : IHttpHandler
Cheers
Paul
Jun 29, 2015 02:59 PM|rstrahl|LINK
Make sure your Web project is the startup project and that this project has ASP.NET debugging enabled in the Debug options. Also double check the project's output path and make sure you rebuild the project. You might want to clean project and then rebuild to ensure the latest code is there.
+++ Rick ---
1 reply
Last post Jun 29, 2015 02:59 PM by rstrahl | https://forums.asp.net/t/2047472.aspx?How+to+debug+a+ashx+cs+file+ | CC-MAIN-2021-17 | refinedweb | 173 | 67.79 |
Upgrading to Play 2: Anorm and Testing
This time last year, I decided I wanted to learn Scala. I chose the Play Framework as my vehicle for learning and I added CoffeeScript and Jade to the mix. I packaged it all up, learned a bunch and presented it at Devoxx 2011.
In January, I added SecureSocial, JSON Services and worked a bit on the mobile client. I presented my findings at Jfokus shortly after. As part of my aforementioned post, I wrote:.
I had some complications (a.k.a. too much vacation) with Devoxx France and wasn't able to attend. To make up for it, I submitted the talk to ÜberConf. It got accepted and I started working on my app a couple weeks ago. So far, I've spent about 8 hours upgrading it to Play 2 and I hope to start re-writing the mobile client later this week. I plan on using Cordova, jQTouch and releasing it in the App Store sometime this month.
Upgrading to Play 2
When I heard about Play 2, I thought it was a great thing. The developers were re-writing the framework to use Scala at the core and I was already using Scala in my app. Then I learned they were going to throw backwards compatibility out the window and I groaned. "Really? Another web framework (like Tapestry of old) screwing its users and making them learn everything again?!", I thought. "Maybe they should call it Run instead of Play, leaving the old framework that everyone loves intact."
However, after hearing about it at Devoxx and Jfokus, I figured I should at least try to migrate. I downloaded Play 2.0.1, created a new project and went to work.
The first thing I learned about upgrading from Play 1.x to Play 2.x is there's no such thing. It's like saying you upgraded from Struts 1 to Struts 2 or Tapestry 4 to Tapestry 5. It's a migration, with a whole new project.
Evolutions
I started by looking around to see if anyone had documented a similar migration. I found two very useful resources right off the bat:
- Play 2.0 with Scala and Scaml, Part1: Setup of test infrastructure, model and persistence with Anorm by Jan Helwich
- Tutorial: Play Framework 2 with Scala, Anorm, JSON, CoffeeScript, jQuery & Heroku by James Ward
From Jan's Blog, I learned to copy my evolutions from my Play 1.x project into conf/evolutions/default. I changed my application.conf to use PostgreSQL and wrote an EvolutionsTest.scala to verify creating the tables worked.
import org.specs2.mutable._ import play.api.db.DB import play.api.Play.current import anorm._ import play.api.test._ import play.api.test.Helpers._ class EvolutionsTest extends Specification { "Evolutions" should { "be applied without errors" in { evolutionFor("default") running(FakeApplication()) { DB.withConnection { implicit connection => SQL("select count(1) from athlete").execute() SQL("select count(1) from workout").execute() SQL("select count(1) from comment").execute() } } success } } }
Then I began looking for how to load seed data with Play 2.x. In Play 1.x, you could use a BootStrap job that would load sample data with YAML.
import play.jobs._ import play.Play @OnApplicationStart) } } } } }
This is no longer a recommended practice in Play 2. Instead, they recommend you turn your YAML into code. 10 minutes later, I had a Global.scala that loaded seed data.
import models._ import play.api._ import play.api.Play.current import anorm._ object Global extends GlobalSettings { override def onStart(app: Application) { InitialData.insert() } } /** * Initial set of data to be loaded */ object InitialData { def date(str: String) = new java.text.SimpleDateFormat("yyyy-MM-dd").parse(str) def insert() { if (Athlete.count() == 0) { Seq( Athlete(Id(1), "mraible@gmail.com", "beer", "Matt", "Raible"), Athlete(Id(2), "trishmcginity@gmail.com", "whiskey", "Trish", "McGinity") ).foreach(Athlete.create) Seq( Workout(Id(1), "Chainsaw Trail", """ A beautiful fall ride: cool breezes, awesome views and yellow leaves. Would do it again in a heartbeat. """, 7, 90, date("2011-10-13"), 1), Workout(Id(2), "Monarch Lake Trail", "Awesome morning ride through falling yellow leaves and cool fall breezes.", 4, 90, date("2011-10-15"), 1), Workout(Id(3), "Creekside to Flume to Chainsaw", "Awesome morning ride through falling yellow leaves and cool fall breezes.", 12, 150, date("2011-10-16"), 2) ).foreach(Workout.create) Seq( Comment(1, "Jim", "Nice day for it!"), Comment(2, "Joe", "Love that trail."), Comment(2, "Jack", "Where there any kittens there?") ).foreach(Comment.create) } } }
Anorm's Missing Magic
Before starting with Play 2, I knew it had lost some of its magic. After all, the developers had mentioned they wanted to get ride of the magic and moving to Scala allowed them to do that. However, I didn't think I'd miss Magic[T] as much as I do. Like Martin Fowler, I like ORMs and having to use SQL again seems painful. It seems like a strange shift for Play to reduce type-safety on the backend, but increase it in its templates. Oh well, to each their own. I may eventually move to Squery, but I wanted to do a side-by-side comparison as part of my migration.
Using the aforementioned tutorial from James and Jan's blog posts, as well as Guillaume's Play 2.0/Anorm, I set about creating new model objects. I wrote a bunch of SQL, typed up some new finders and migrated my tests from ScalaTest to the new default, specs2. The Mosh Pit's Migrating a Play 1.2 website to Play 2.0 was a great help in migrating tests.
That's when I started having issues with Anorm and figuring out how its parser syntax works. After struggling for a few days, I finally found yabe-play20-scala. Since I'd used the yabe tutorial from Play 1.x, it was familiar and helped me get past my problems. Now, things aren't perfect (Workouts aren't ordered by their posted date), but everything compiles and tests pass.
To illustrate how little code was required for Anorm 1.x, checkout Workout.scala in Play 1.x vs. Play 2.x. The Play 1.x version is 66 lines; Play 2.x requires 193 lines. I don't know about you, but I kinda like a little magic in my frameworks to reduce the amount of code I have to maintain.
I was pleasantly surprised by specs2. First of all, it was an easy migration from ScalaTest. Secondly, Play's FakeApplication made it very easy to write unit tests. The line count on my UnitTests.scala in Play 1.x vs. Play 2.x is almost identical.
Summary
The first few hours of developing with Play 2 were frustrating, mostly because I felt like I had to learn everything over again. However, I was pleased to find good references on migrating from Play 1.x. Last night, I migrated all my Controllers, integrated Scalate and got most of my views rendering. I still have issues rendering validation errors, but I hope to figure that out soon. The last 2 hours have been much more fun and I feel like my Scala skills are coming along. I think if the Play Team could eliminate those first few hours of struggling (and provide almost instant joy like Play 1.x) they'd really be onto something.
As soon as I figure out how to validation and how to add a body class based on the URL, I'll write another post on the rest of my migration. A Play 2-compatible version of SecureSocial just came out this evening, so I may integrate that as well. In the meantime, I'll be working on the iPhone app and finishing up a Grails 2 application for James Ward and my Grails vs. Play Smackdown.
Thanks for sharing this. I am hesitating on upgrading to Play2 as the "total rewrite" of my Play1 Java app to a more frustrating framework feels pointless.
If no clear upgrade path is present and I have to throw away every piece of code, I could also just migrate to Django or Rails and have the same enjoyable development that I had with Play1 instead of frustrating and slow development experience that is Scala.
Posted by Sakuraba on June 06, 2012 at 05:46 AM MDT #
@Sakuraba - while there is no "clear" upgrade path, it doesn't mean folks haven't tried. Similar to my experience, I'm sure there's articles out there that show how to migrate Play Java applications. I was able to make minor adjustments to most of my controller/view code to get it to work. Of course, it helped that I used Scalate so I didn't have to migrate my views. I'd say I was able to migrate about 60% of my code w/o rewriting.
As far as Scala being slow, I didn't experience this. I used IntelliJ 11 for development/testing and "play ~run" to run. It could help that I have the latest and greatest MacBook Pro with 16GB of RAM and an SSD.
Posted by Matt Raible on June 06, 2012 at 06:05 AM MDT #
Posted by Donal on June 07, 2012 at 09:06 PM MDT #
Posted by Matt Raible on June 07, 2012 at 09:10 PM MDT #
@Matt - as a Grails devotee, I hope you batter this upstart into submission :)
Seriously, if you're looking to squeeze some extra performance out of your app, there's a new caching plugin that should help:
Don't forget to use the query cache with [cache: true] for dynamic finders or cache(true) for critiera queries.
If you care about UI performance, the following plugins (which themselves depend on the resources plugin) might help:
- zipped-resources
- minify-resources
Will the results of the duel be available online in video format?
Posted by Donal on June 08, 2012 at 02:26 AM MDT #
Posted by 59.160.207.20 on June 12, 2012 at 10:33 PM MDT #
Posted by Raible Designs on June 25, 2012 at 11:07 AM MDT # | http://raibledesigns.com/rd/entry/upgrading_to_play_2_anorm | CC-MAIN-2016-40 | refinedweb | 1,705 | 75.61 |
Code is often written in a serialized (or sequential) fashion. What
is meant by the term serialized? Ignoring instruction level parallelism (ILP),
code is executed sequentially, one after the next in a monolithic
fashion, without regard to possibly more available processors the program
could exploit. Often, there are potential parts of a program where
performance can be improved through the use of threads.
With increasing popularity of machines with symmetric multiprocessing (largely due in part to the rise of multicore processors), programming with threads is a valuable skill set worth learning.
Why is it that most programs are sequential? One guess would be that students are not taught how to program in a parallel fashion until later or in a difficult-to-follow manner. To make matters worse, multithreading non-trivial code is difficult. Careful analysis of the problem, and then a good design is not an option for multithreaded programming; it is an absolute must.
We will dive into the world of threads with a little bit of background first. We will examine thread synchronization primitives and then a tutorial on how to use POSIX pthreads will be presented.
Isn't that something you put through an eye of a sewing needle?
Yes.
How does it relate to programming then?
Think of sewing needles as the processors and the threads in a program as the thread fiber. If you had two needles but only one thread, it would take longer to finish the job (as one needle is idle) than if you split the thread into two and used both needles at the same time. Taking this analogy a little further, if one needle had to sew on a button (blocking I/O), the other needle could continue doing other useful work even if the other needle took 1 hour to sew on a single button. If you only used one needle, you would be ~1 hour behind!
Before we can dive in depth into threading concepts, we need to get familiarized with a few terms related to threads, parallelism and concurrency.
Threads can provide benefits... for the right applications! Don't
waste your time multithreading a portion of code or an entire program
that isn't worth multithreading.
Gene Amdahl argued the theoretical maximum improvement that is possible for a computer program that is parallelized, under the premise that the program is strongly scaled (i.e. the program operates on a fixed problem size). His claim is a well known assertion known as Amdahl's Law. Essentially, Amdahl's law states that the speedup of a program due to parallelization can be no larger than the inverse of the portion of the program that is immutably sequential. For example, if 50% of your program is not parallelizable, then you can only expect a maximum speedup of 2x, regardless the number of processors you throw at the problem. Of course many problems and data sets that parallel programs process are not of fixed size or the serial portion can be very close to zero. What is important to the reader here, is to understand that most interesting problems that are solved by computer programs tend to have some limitations in the amount of parallelism that can be effectively expressed (or introduced by the very mechanism to parallelize) and exploited as threads or some other parallel construct.
It must be underscored how important it is to understand the problem the computer program is trying to solve first, before simply jumping in head first. Careful planning and consideration of not only what the program must attack in a parallel fashion and the means to do so by way of the algorithms employed and the vehicle for which they are delivered must be performed.
There is a common saying: "90% of processor cycles are spent in 10% of the code." This is more formally known as the Pareto Principle. Carefully analyze your code or your design plan; don't spend all of your time optimizing/parallelizing the 90% of the code that doesn't matter much! Code profiling and analysis is outside of the scope of this document, but it is recommended reading left to those unfamiliar with the subject.
There are different ways to use threads within a program. Here, three common thread design patterns are presented. There is no hard and fast rule on which is the best. It depends on what the program is intended to tackle and in what context. It is up to you to decide which best pattern or patterns fit your needs.
One thread dispatches other threads to do useful work which are usually part of a worker thread pool. This thread pool is usually pre-allocated before the boss (or master) begins dispatching threads to work. Although threads are lightweight, they still incur overhead when they are created.
The peer model is similar to the boss/worker model except once the worker pool has been created, the boss becomes the another thread in the thread pool, and is thus, a peer to the other threads.
Similar to how pipelining works in a processor, each thread is part of a long chain in a processing factory. Each thread works on data processed by the previous thread and hands it off to the next thread. You must be careful to equally distribute work and take extra steps to ensure non-blocking behavior in this thread model or you could experience pipeline "stalls."
Threads may operate on disparate data, but often threads may have to touch the same data. It is unsafe to allow concurrent access to such data or resources without some mechanism that defines a protocol for safe access! Threads must be explicitly instructed to block when other threads may be potentially accessing the same resources.
Mutual exclusion is the method of serializing access to shared
resources. You do not want a thread to be modifying a variable that is
already in the process of being modified by another thread! Another
scenario is a dirty read where the value is in the process of being
updated and another thread reads an old value.
Mutual exclusion allows the programmer to create a defined protocol for serializing access to shared data or resources. Logically, a mutex is a lock that one can virtually attach to some resource. If a thread wishes to modify or read a value from a shared resource, the thread must first gain the lock. Once it has the lock it may do what it wants with the shared resource without concerns of other threads accessing the shared resource because other threads will have to wait. Once the thread finishes using the shared resource, it unlocks the mutex, which allows other threads to access the resource. This is a protocol that serializes access to the shared resource. Note that such a protocol must be enforced for the data or resource a mutex is protecting across all threads that may touch the resource being protected. If the protocol is violated (e.g., a thread modifies a shared resource without first requesting a mutex lock), then the protocol defined by the programmer has failed. There is nothing preventing a thread programmer, whether unintentionally (most often the case, i.e., a bug -- see race conditions below) or intentionally from implementing a flawed serialization protocol.
As an analogy, you can think of a mutex as a safe with only one key (for a standard mutex case), and the resource it is protecting lies within the safe. Only one person can have the key to the chest at any time, therefore, is the only person allowed to look or modify the contents of the chest at the time it holds the key.
The code between the lock and unlock calls to the mutex, is referred to as a critical section. Minimizing time spent in the critical section allows for greater concurrency because it potentially reduces the amount of time other threads must wait to gain the lock. Therefore, it is important for a thread programmer to minimize critical sections if possible.
There are different types of locks other than the standard simple blocking kind.
Depending upon the thread library or interface being used, only a subset of the additional types of locks may be available. POSIX pthreads allows recursive and reader/writer style locks.
An important problem associated with mutexes is the possibility of
deadlock. A program can deadlock if two (or more) threads have
stopped execution or are spinning permanently. For example, a simple
deadlock situation: thread 1 locks lock A, thread 2 locks lock B,
thread 1 wants lock B and thread 2 wants lock A. Instant deadlock. You
can prevent this from happening by making sure threads acquire locks
in an agreed order (i.e. preservation of lock ordering). Deadlock
can also happen if threads do not unlock mutexes properly.
A race condition is when non-deterministic behavior results from threads accessing shared data or resources without following a defined synchronization protocol for serializing such access. This can result in erroneous outcomes that cause failure or inconsistent behavior making race conditions particularly difficult to debug. In addition to incorrectly synchronized access to shared resources, library calls outside of your program's control are common culprits. Make sure you take steps within your program to enforce serial access to shared file descriptors and other external resources. Most man pages will contain information about thread safety of a particular function, and if it is not thread-safe, if any alternatives exist (e.g.,
gethostbyname() and
gethostbyname_r()).
Another problem with mutexes is that contention for a mutex can lead to priority inversion. A higher priority thread can wait behind a lower priority thread if the lower priority thread holds a lock for which the higher priority thread is waiting. This can be eliminated/reduced by limiting the number of shared mutexes between different priority threads. A famous case of priority inversion occurred on the Mars Pathfinder.
Atomic operations allow for concurrent algorithms and access to certain
shared data types without the use of mutexes. For example, if there is
sufficient compiler and system support, one can modify some variable
(e.g., a 64-bit integer) within a multithreaded context without having to
go through a locking protocol. Many atomic calls are non-portable and
specific to the compiler and system. Intel Threading Building Blocks
(see below), contains semi-portable atomic support
under C++. The C++1x and C1x standards will also include atomic
operations support. For gcc-specific atomic support, please see
this
and this.
Lock-free algorithms can provide highly concurrent and scalable operations. However, lock-free algorithms may be more complex than their lock-based counterparts, potentially incurring additional overhead that may induce negative cache effects and other problems. Careful analysis and performance testing is required for the problem under consideration.
As we have just discussed, mutexes are one way of synchronizing access to shared resources. There are other mechanisms available for not only coordinating access to resources but synchronizing threads.
A thread join is a protocol to allow the programmer to collect all relevant threads at a logical synchronization point. For example, in fork-join parallelism, threads are spawned to tackle parallel tasks and then join back up to the main thread after completing their respective tasks (thus performing an implicit barrier at the join point). Note that a thread that executes a join has terminated execution of their respective thread function.
Barriers are a method to synchronize a set of threads at some point in time by having all participating threads in the barrier wait until all threads have called the said barrier function. This, in essence, blocks all threads participating in the barrier until the slowest participating thread reaches the barrier call.
Spinlocks are locks which spin on mutexes. Spinning refers to
continuously polling until a condition has been met. In the case of
spinlocks, if a thread cannot obtain the mutex, it will keep polling the
lock until it is free. The advantage of a spinlock is that the thread is
kept active and does not enter a sleep-wait for a mutex to become
available, thus can perform better in certain cases than typical
blocking-sleep-wait style mutexes. Mutexes which are heavily contended
are poor candidates for spinlocks.
Spinlocks should be avoided in uniprocessor contexts. Why is this?
Semaphores are another type of synchronization primitive that come in two
flavors: binary and counting. Binary semaphores act much like simple
mutexes, while counting semaphores can behave as recursive mutexes.
Counting semaphores can be initialized to any arbitrary value which
should depend on how many resources you have available for that particular
shared data. Many threads can obtain the lock simultaneously until the
limit is reached. This is referred to as lock depth.
Semaphores are more common in multiprocess programming (i.e. it's usually used as a synch primitive between processes).
Now that we have a good foundation of thread concepts, lets talk about a
particular threading implementation, POSIX pthreads. The pthread library
can be found on almost any modern POSIX-compliant OS (and even under
Windows, see
pthreads-win32).
Note that it is not possible to cover more than an introduction on pthreads within the context of this short overview and tutorial. pthreads concepts such as thread scheduling classes, thread-specific data, thread canceling, handling signals and reader/writer locks are not covered here. Please see the Resources section for more information.
If you are programming in C++, I highly recommend evaluating the Boost C++ Libraries. One of the libraries is the Thread library which provides a common interface for portable multithreading.
It is assumed that you have a good understanding of the C programming language. If you do not or need to brush up, please review basic C (especially pointers and arrays). Here are some resources.
Before we begin, there are a few required steps you need to take before starting any pthreads coding:
#include <pthread.h>to your source file(s).
-pthreadwhich will set all proper defines and link-time libraries. On other compilers, you may have to define
_REENTRANTand link against
-lpthread.
_POSIX_PTHREAD_SEMANTICSfor certain function calls like
sigwait().
A pthread is represented by the type
pthread_t. To create a
thread, the following function is available:
Let's digest the arguments required for
pthread_create():
pthread_t *thread: the actual thread object that contains pthread id
pthread_attr_t *attr: attributes to apply to this thread
void *(*start_routine)(void *): the function this thread executes
void *arg: arguments to pass to thread function above
Before we dive into an example, let's first look at two other important thread functions:
pthread_exit() terminates the thread and provides the pointer
*value_ptr available to any
pthread_join()
call.
pthread_join() suspends the calling thread to wait for
successful termination of the thread specified as the first argument
pthread_t thread with an optional
*value_ptr
data passed from the terminating thread's call to
pthread_exit().
Let's look at an example program exercising the above pthread functions:
This program creates
NUM_THREADS threads and prints their
respective user-assigned thread id. The first thing to notice is the
call to
pthread_create() in the main function. The syntax
of the third and fourth argument are particularly important. Notice that
the
thr_func is the name of the thread function, while the
fourth argument is the argument passed to said function. Here we are
passing a thread function argument that we created as a
thread_data_t struct. Of course, you can pass simple
data types as pointers if that is all that is needed, or
NULL
if no arguments are required. However, it is good practice to be able
to pass arguments of arbitrary type and size, and is thus illustrated
for this purpose.
A few things to mention:
pthread_create()is
NULLindicating to create a thread with default attributes. The defaults vary depend upon the system and pthread implementation.
pthread_join()from the
pthread_create(). Why is it that you should not integrate the
pthread_join()in to the thread creation loop?
pthread_exit()at the end of the thread function, it is good practice to do so, as you may have the need to return some arbitrary data back to the caller via
pthread_join().
Threads can be assigned various thread attributes at the time of thread
creation. This is controlled through the second argument to
pthread_create(). You must first pass the
pthread_attr_t variable through:
Some attributes that can be set are:
Attributes can be retrieved via complimentary
get functions.
Consult the man pages for the effect of each of these attributes.
pthread mutexes are created through the following function:
The
pthread_mutex_init() function requires a
pthread_mutex_t variable to operate on as the first
argument. Attributes for the mutex can be given through the second
parameter. To specify default attributes, pass
NULL as the
second parameter. Alternatively, mutexes can be initialized to default
values through a convenient macro rather than a function call:
Here a mutex object named
lock is initialized to the default
pthread mutex values.
To perform mutex locking and unlocking, the pthreads provides the following functions:
Each of these calls requires a reference to the mutex object. The
difference between the lock and trylock calls is that lock is blocking
and trylock is non-blocking and will return immediately even if gaining
the mutex lock has failed due to it already being held/locked. It is
absolutely essential to check the return value of the trylock call to
determine if the mutex has been successfully acquired or not. If it has
not, then the error code
EBUSY will be returned.
Let's expand the previous example with code that uses mutexes:
In the above example code, we add some shared data called
shared_x and ensure serialized access to this variable
through a mutex named
lock_x. Within the
thr_func() we call
pthread_mutex_lock()
before reading or modifying the shared data. Note that we continue
to maintain the lock even through the
printf() function
call as releasing the lock before this and printing can lead to
inconsistent results in the output. Recall that the code in-between
the lock and unlock calls is called a critical section. Critical sections
should be minimized for increased concurrency.
pthread condition variables are created through the following function call or initializer macro similar to mutexes:
Similar to the mutex initialization call, condition variables can be
given non-default attributes through the second parameter. To specify
defaults, either use the initializer macro or specify
NULL
in the second parameter to the call to
pthread_cond_init().
Threads can act on condition variables in three ways: wait, signal or broadcast:
pthread_cond_wait() puts the current thread to sleep. It
requires a mutex of the associated shared resource value it is waiting
on.
pthread_cond_signal() signals one thread out of the
possibly many sleeping threads to wakeup.
pthread_cond_broadcast() signals all threads waiting
on the
cond condition variable to wakeup. Here is an
example on using pthread condition variables:
In
thr_func1(), we are locking the
count_lock
mutex so we can read the value of count without entering a potential race
condition. The subsequent
pthread_cond_wait() also requires a
locked mutex as the second parameter to avoid a race condition where a
thread prepares to wait on a condition variable and another thread
signals the condition just before the first thread actually waits on it
(as explained from the man page on
pthread_cond_wait). Notice
how a
while loop is used instead of an
if
statement for the
pthread_cond_wait() call. This is because
of spurious wakeups problem mentioned previously. If a thread has been
woken, it does not mean it was due to a
pthread_cond_signal() or
pthread_cond_broadcast() call.
pthread_cond_wait() if awoken, automatically
tries to re-acquire the mutex, and will block if it cannot.
Locks that other threads could be waiting on should be released
before you signal or broadcast.
pthreads can participate in a barrier to synchronize to some point in time. Before a barrier can be called, a pthread barrier object must be initialized first:
Barrier objects are initialized like mutexes or condition variables,
except there is one additional parameter,
count. The count
variable defines the number threads that must join the barrier for the
barrier to reach completion and unblock all threads waiting at the barrier.
If default barrier attributes are used (i.e.
NULL for
the second parameter), one can use the initializer macro with the
specified
count.
The actual barrier call follows:
This function would be inside thread code where the barrier is to take
place. Once
count number of threads have called
pthread_barrier_wait() then the barrier condition is met
and all threads are unblocked and progress continues.
Here are some suggestions and issues you should consider when using pthreads:
breakwhen it is deemed necessary).
Additional useful pthread calls:
pthread_kill()can be used to deliver signals to specific threads.
pthread_self()returns a handle on the calling thread.
pthread_equal()compares for equality between two pthread ids
pthread_once()can be used to ensure that an initializing function within a thread is only run once.
The performance gains from using threads can be substantial when done properly and in the right problem context, but can it be even better? You should consider the following when analyzing your program for potential bottlenecks:
There are various template libraries available that ease implementation of multithreading in a (semi-)portable fashion. For those programming in C++, you may want to look at Boost, Intel Threading Building Blocks (TBB) and POCO.
This tutorial has explored the very basics of multithreaded programming.
What about multiprocess programming?
These topics are beyond the scope of this document, but to perform cross-process synchronization, one would use some form of IPC: pipes, semaphores, message queues, or shared memory. Of all of the forms of IPC, shared memory is usually the fastest (excluding doors). You can use
mmap(), POSIX (e.g.,
shm_open()) or SysV
(e.g.,
shmget()) semantics when dealing with cross-process
resource management, IPC and synchronization. For those interested
in shared memory programming in C++, I recommend looking at
Boost.Interprocess
first.
OpenMP is a portable interface for implementing fork-join parallelism on shared memory multi-processor machines. It is available for C/C++ and Fortran. For a quick introduction, please see the slides here.
The Message
Passing Interface (MPI) is the de-facto standard for distributed
memory parallel processing. Data can be sent/received from distinct
computing machines with support for vectored I/O (scatter/gather),
synchronization and collectives.
It is not uncommon to see programs that are both multithreaded and contain MPI calls to take advantage of shared memory within a node and MPI to perform processing across nodes.
It is difficult to cover more than an introduction to threads with this short tutorial and overview. For more in-depth coverage on threads (like thread scheduling classes, thread-specific data (thread local storage), thread canceling, handling signals and reader/writer locks) and pthreads programming, I recommend these books:
There are many excellent online resources regarding pthreads on the web. Use your favorite search engine to find these.. | http://randu.org/tutorials/threads/ | CC-MAIN-2014-42 | refinedweb | 3,827 | 53.31 |
I'm trying to build a simple website with login functionality very similar to the one here on SO. The user should be able to browse the site as an anonymous user and there will be a login link on every page. When clicking on the login link the user will be taken to the login form. After a successful login the user should be taken back to the page from where he clicked the login link in the first place. I'm guessing that I have to somehow pass the url of the current page to the view that handles the login form but I can't really get it to work.
EDIT: I figured it out. I linked to the login form by passing the current page as a GET parameter and then used 'next' to redirect to that page. Thanks!
EDIT 2: My explanation did not seem to be clear so as requested here is my code: Lets say we are on a page foo.html and we are not logged in. Now we would like to have a link on foo.html that links to login.html. There we can login and are then redirected back to foo.html. The link on foo.html looks like this:
<a href='/login/?next={{ request.path }}'>Login</a>
Now I wrote a custom login view that looks somewhat like this:
def login_view(request): redirect_to = request.REQUEST.get('next', '') if request.method=='POST': #create login form... if valid login credentials have been entered: return HttpResponseRedirect(redirect_to) #... return render_to_response('login.html', locals())
And the important line in login.html:
<form method="post" action="./?next={{ redirect_to }}">
So yeah thats pretty much it, hope that makes it clear.
You do not need to make an extra view for this, the functionality is already built in.
First each page with a login link needs to know the current path, and the easiest way is to add the request context preprosessor to settings.py (the 4 first are default), then the request object will be available in each request:
settings.py:
TEMPLATE_CONTEXT_PROCESSORS = ( "django.core.context_processors.auth", "django.core.context_processors.debug", "django.core.context_processors.i18n", "django.core.context_processors.media", "django.core.context_processors.request", )
Then add in the template you want the Login link:
base.html:
<a href="{% url django.contrib.auth.views.login %}?next={{request.path}}">Login</a>
This will add a GET argument to the login page that points back to the current page.
The login template can then be as simple as this:
registration/login.html:
{% block content %} <form method="post" action=""> {{form.as_p}} <input type="submit" value="Login"> </form> {% endblock %} | https://pythonpedia.com/en/knowledge-base/806835/django--redirect-to-previous-page-after-login | CC-MAIN-2020-29 | refinedweb | 437 | 58.99 |
I'm trying to program modules with ESP8266 chip using Arduino IDE. So far, I have tried the ESP-01 module and I am just trying to program the ESP-01S. I came across a problem with this module waking up from deep sleep mode. I have this simple code:
#include <ESP8266WiFi.h>
void setup () {
Serial.begin (74880);
Serial.println ("Test");
ESP.deepSleep (60 * 1e6);
}
void loop () {
}
Of course I have GPIO16 connected to the RST pin. When the power supply is connected, the program starts and the text "TEST" is displayed in the console. Then the module sleeps for 60 seconds. After waking up, a message will appear and this will end:
ets Jan 8 2013, rst cause: 2, boot mode: (3,6)
I have tested this code on ESP-01 and it works. Would anyone please advise me where the problem might be? | https://www.esp8266.com/viewtopic.php?f=6&t=22372 | CC-MAIN-2021-43 | refinedweb | 145 | 76.93 |
Android system images in QML
Is there any way to use system Android resources (images) in QML code?
Like images I see in %ANDROID_SDK_DIR%\platforms\android-xx\data\res\drawable-yyyy?
@import QtQuick 2.3
import QtQuick.Controls 1.2
ApplicationWindow {
title: qsTr("Hello World")
width: Screen.width
height: Screen.height
visible: true
Image {
source: "" // don't work
}
}
@
It seems that not all drawable can be used by Android native app.
From this discussion:
the best approach is to copy the image into your app bundle (even in the case of Android native app).
So, copy it as a Qt resource and reference it as: "qrc://... " | https://forum.qt.io/topic/49940/android-system-images-in-qml | CC-MAIN-2018-09 | refinedweb | 106 | 69.58 |
Step!
When I started to learn it, I couldn’t find blogs that show “Which part of React Redux to build first?” or how to generally approach building any React-Redux apps. So I went through several example and blogs and came out with general steps as to how to approach building most React Redux Apps.
Please Note: I am using “Mocks” to keep it at a high level and not get into the weeds. I am using the classic Todo list app as the basis for building ANY app. If your app has multiple screens, simply repeat the process for each screen.
Why
BTW, There are8 steps for a simple Todo App. The theory is that, earlier frameworks made building Todo apps simple but real apps hard. But React Redux make building Todo apps hard but real productions apps simple.
Let’s get started:
STEP 1 — Write A Detailed Mock of the Screen
Mock should include all the data and visual effects (like strikethrough the TodoItem, or “All” filter as a text instead of a link)
Please Note: You can click on the pictures to Zoom
STEP 2 — Divide The App Into Components
Try to divide the app into chunks of components based on their overall “purpose” of each component.
We have 3 components “AddTodo”, “TodoList” and “Filter” component.
Redux Terms: “Actions” And “States”
Every component does two things:
1. Render DOM based on some data. This data is called as a“state”.
2. Listen to the user and other events and send them to JS functions. These are called “Actions”.
STEP 3 — List State and Actions For Each Component
Make sure to take a careful look at each component from STEP 2, and list of States and Actions for each one of them.
We have 3 components “AddTodo”, “TodoList” and “Filter” component. Let’s list Actions and States for each one of them.
3.1 AddTodo Component — State And Actions
In this component, we have no state since the component look and feel doesn’t change based on any data but it needs to let other components know when the user creates a new Todo. Let’s call this action “ADD_TODO”.
Please Note: You can click on the pictures to Zoom
3.2 TodoList Component — State And Actions
TodoList component needs an array of Todo items to render itself, so it need a state, let’s call it Todos (Array). It also needs to know which “Filter” is turned on to appropriately display (or hide) Todo items, it needs another state, let’s call it “VisibilityFilter” (boolean).
Further, it allows us to toggle Todo item’s status to completed and not completed. We need to let other components know about this toggle as well. Let’s call this action “TOGGLE_TODO”
3.3 Filter Component — State And Actions
Filter component renders itself as a Link or as a simple text depending on if it’s active or not. Let’s call this state as “CurrentFilter”.
Filter component also needs to let other components know when a user clicks on it. Let’s call this actions, “SET_VIBILITY_FILTER”
Redux Term: “Action Creators”
Action Creators are simple functions who job is to receive data from the DOM event, format it as a formal JSON “Action” object and return that object (aka “Action”). This helps us to formalize how the data/payload look.
Further, it allows any other component in the future to also send(aka “dispatch”) these actions to others.
STEP 4 — Create Action Creators For Each Action
We have total 3 actions: ADD_TODO, TOGGLE_TODO and SET_VISIBILITY_FILTER. Let’s create action creators for each one of them.
//1. Takes the text from AddTodo field and returns proper “Action” JSON to send to other components.
export const addTodo = (text) => {
return {
type: ‘ADD_TODO’,
id: nextTodoId++,
text, //<--ES6. same as text:text, in ES5
completed: false //<-- initially this is set to false
}
}
//2. Takes filter string and returns proper “Action” JSON object to send to other components.
export const setVisibilityFilter = (filter) => {
return {
type: ‘SET_VISIBILITY_FILTER’,
filter
}
}
//3. Takes Todo item’s id and returns proper “Action” JSON object to send to other components.
export const toggleTodo = (id) => {
return {
type: ‘TOGGLE_TODO’,
id
}
}
Redux Term: “Reducers”
Reducers are functions that take “state” from Redux and “action” JSON object and returns a new “state” to be stored back in Redux.
1. Reducer functions are called by the “Container” containers when there is a user action.
2. If the reducer changes the state, Redux passes the new state to each component and React re-renders each component
For example the below function takes Redux’ state(an array of previous todos), and returns a **new** array of todos(new state) w/ the new Todo added if action’s type is “ADD_TODO”.
const todo = (state = [], action) => {
switch (action.type) {
case ‘ADD_TODO’:
return
[…state,{id: action.id, text: action.text, completed:false}];
}
STEP 5 — Write Reducers For Each Action
Note: Some code has been stripped for brevity. Also I’m showing SET_VISIBILITY_FILTER along w/ ADD_TODO and TOGGLE_TODO for simplicity.
const todo = (state, action) => {
switch (action.type) {
case ‘ADD_TODO’:
return […state,{id: action.id, text: action.text,
completed:false}]
case ‘TOGGLE_TODO’:
return state.map(todo =>
if (todo.id !== action.id) {
return todo
}
return Object.assign({},
todo, {completed: !todo.completed})
)
case ‘SET_VISIBILITY_FILTER’: {
return action.filter
}
default:
return state
}
}
Redux Term: “Presentational” and “Container” Components
Keeping React and Redux logic inside each component can make it messy, so Redux recommends creating a dummy presentation only component called “Presentational” component and a parent wrapper component called “Container” component that deals w/ Redux, dispatch “Actions” and more.
The parent Container then passes the data to the presentational component, handle events, deal with React on behalf of Presentational component.
Legend: Yellow dotted lines = “Presentational” components. Black dotted lines = “Container” components.
STEP 6 — Implement Every Presentational Component
It’s now time for us to implement all 3 Presentational component.
6.1 — Implement AddTodoForm Presentational Component
Please Note: Click on the pictures to Zoom and read
6.2 — Implement TodoList Presentational Component
6.3 — Implement Link Presentational Component
Note: In the actual code, Link presentational component is wrapped in “FilterLink” container component. And then 3 “FilterLink” components are then displayed inside “Footer” presentational component.
STEP 7 — Create Container Component For Some/All Presentational Component
It’s finally time to wire up Redux for each component!
7.1 Create Container Component — AddTodo
Find the Actual code here
7.2 Create Container Component — TodoList Container
Find the Actual code here
7.3 Create Container Component — Filter Container
Find the Actual code here
Note: In the actual code, Link presentational component is wrapped in “FilterLink” container component. And then 3 “FilterLink” components are then arranged and displayed inside “Footer” presentational component.
STEP 8 — Finally Bring Them All Together
import React from ‘react’ // ← Main React library
import { render } from ‘react-dom’ // ← Main react library
import { Provider } from ‘react-redux’ //← Bridge React and Redux
import { createStore } from ‘redux’ // ← Main Redux library
import todoApp from ‘./reducers’ // ← List of Reducers we created
//Import all components we created earlier
import AddTodo from ‘../containers/AddTodo’
import VisibleTodoList from ‘../containers/VisibleTodoList’
import Footer from ‘./Footer’ // ← This is a presentational component that contains 3 FilterLink Container comp
//Create Redux Store by passing it the reducers we created earlier.
let store = createStore(reducers)
render(
<Provider store={store}> ← The Provider component from react-redux injects the store to all the child components
<div>
<AddTodo />
<VisibleTodoList />
<Footer />
</div>
</Provider>,
document.getElementById(‘root’) //<-- Render to a div w/ id "root"
)
That’s it!
My Other Blogs
ES6
WebPack
- Webpack — The Confusing Parts
- Webpack & Hot Module Replacement [HMR]
-. ()🎉🎉🎉
Thanks for reading!!😀🙏 | https://medium.com/@rajaraodv/step-by-step-guide-to-building-react-redux-apps-using-mocks-48ca0f47f9a | CC-MAIN-2016-40 | refinedweb | 1,264 | 56.25 |
Here are some mashup sites I created in 2007 to experiment with youtube api, flickr api:
flickr:
youtube:
blogs/rss/atom:
Tuesday, January 20, 2009
mashups
Here are some mashup sites I created in 2007 to experiment with youtube api, flickr api:
Scrum, XP, Agile
Lots of Scrum, XP, Agile, Lean, TOC, TQM resources:
A Free InfoQ book on Scrum and XP:
Thursday, April 03, 2008
Wordpress Plugin: Simple Sticky Posts
Plugin.
Thursday, March 27, 2008
simple bash stuff
Run applications from the linux terminal, as a new process:
> apptorun &
Other linux bash command line tips:
Wednesday, April 18, 2007
dev-c++ / wxdev-c++
dev:
Monday, November 06, 2006
stl fstream and macros on vs7
while working with visual studio 2003 / 7, c++, fstream, log files and macros, like the following:
#define LOGMACRO (format_string, ...) log (__FILE__, __LINE__, format_string, __VA_ARGS__)
got some errors:
error C2059: syntax error : '...'
error C2061: syntax error: identifier '_DebugHeapTag'
xdebug(29): _CRTIMP2 void * __cdecl operator new(size_t, const std::_DebugHeapTag_t&, char *, int)
_THROW1(std::bad_alloc); // allocate from the debug CRT heap
the solution was in 4 steps:
1. replacing all stl include files matching the pattern:
#include <string.h>
#include <fstream.h>
with header files without '.h' in their names, ex:
#include <string>
#include <fstream>
2. putting some parantheses around the parameters in the macro def. value, like:
#define LOGMACRO (format_string, ...) log (__FILE__, __LINE__, (format_string), __VA_ARGS__)
3. replacing '...' and '__VA_ARGS__' with a simple parameter. vs7 does not support varargs in macros.
#define LOGMACRO (format_string, args) log (__FILE__, __LINE__, (format_string), (args))
4. putting the fstream include in the last line of stdafx.h, removing from any other location in the source code like below.
#include <fstream>
using namespace std;
In addition to the things described above, here is a nice macro tutorial:
Thursday, November 02, 2006
linux command line tips
This is a list of linux commands for common operations.
Here is a list of tar, cpio and rpm command samples.
Monday, September 04, 2006
merge .net assemblies
this is a microsoft tool to merge multiple .net assemblies:
ILMerge
usually there are lots of assemblies in one project and the release looks kind of messy, merging them in to a single file can be very usefull
| http://bitworker.blogspot.com/ | crawl-002 | refinedweb | 368 | 62.68 |
Big-Θ Notation
We compute the big-Θ of an algorithm by counting the number of iterations the algorithm always takes with an input of n. For instance, the loop in the pseudo code below will always iterate N times for a list size of N. The runtime can be described as Θ(N).
for each item in list: print item
Asymptotic Notation
Asymptotic Notation is used to describe the running time of an algorithm - how much time an algorithm takes with a given input, n. There are three different notations: big O, big Theta (Θ), and big Omega (Ω). big-Θ is used when the running time is the same for all cases, big-O for the worst case running time, and big-Ω for the best case running time.
Adding Runtimes
When an algorithm consists of many parts, we describe its runtime based on the slowest part of the program.
An algorithm with three parts has running times of Θ(2N) + Θ(log N) + Θ(1). We only care about the slowest part, so we would quantify the runtime to be Θ(N). We would also drop the coefficient of 2 since when N gets really large, the multiplier 2 will have a small effect.
Algorithmic Common Runtimes
The common algorithmic runtimes from fastest to slowest are:
- constant: Θ(1)
- logarithmic: Θ(log N)
- linear: Θ(N)
- polynomial: Θ(N^2)
- exponential: Θ(2^N)
- factorial: Θ(N!)
Big-O Notation
The Big-O notation describes the worst-case running time of a program. We compute the Big-O of an algorithm by counting how many iterations an algorithm will take in the worst-case scenario with an input of N. We typically consult the Big-O because we must always plan for the worst case. For example, O(log n) describes the Big-O of a binary search algorithm.
Big-Ω Notation
Big-Ω (Omega) describes the best running time of a program. We compute the big-Ω by counting how many iterations an algorithm will take in the best-case scenario based on an input of N. For example, a Bubble Sort algorithm has a running time of Ω(N) because in the best case scenario the list is already sorted, and the bubble sort will terminate after the first iteration.
Analyzing Runtime
The speed of an algorithm can be analyzed by using a while loop. The loop can be used to count the number of iterations it takes a function to complete.
def half(N): count = 0 while N > 1: N = N//2 count += 1 return count
Queue Versus Stack
A
Queue data structure is based on First In First Out order. It takes O(1) runtime to retrieve the first item in a
Queue. A
Stack data structure is based on First In Last Out order. Therefore, it takes O(N) runtime to retrieve the first value in a
Stack because it is all the way at the bottom.
Max Value Search in List
The big-O runtime for locating the maximum value in a list of size N is O(N). This is because the entire list of N members has to be traversed.
# O(N) runtime def find_max(linked_list): current = linked_list.get_head_node() maximum = current.get_value() while current.get_next_node(): current = current.get_next_node() val = current.get_value() if val > maximum: maximum = val return maximum
Bubble Sort with Linked List
Bubble Sort is the simplest sorting algorithm for a list. For every element in the list, it compares it with its subsequent neighbor and swaps them if they are in descending order. Each pass of the swap takes O(N). Since there are N elements in the list, it would take N*N swaps. The Big O runtime would be O(N^2). | https://www.codecademy.com/learn/technical-interview-practice-with-javascript/modules/asymptotic-notation-js/cheatsheet | CC-MAIN-2022-21 | refinedweb | 627 | 63.59 |
I'm using the below code to submit information into the database. Everything is working but the file itself. It appears to work but when I look in the database it shows an issue with the file (see attached picture). What am I missing?
Thanks for the help!
import wixData from 'wix-data'; import wixLocation from 'wix-location'; export function uploadButton3_change(event) { let file = $w("#uploadButton3").value[0].name; $w("#text26").text= String(file); } export function button1_click_1() { let toInsert = { "repeater_file1": $w("#uploadButton3").value, "repeaterFile1Title": $w("#text26").text, "repeater_date": $w("#datePicker1").value, "repeater_title": $w("#inputTitle").value, "repeater_description": $w("#textDescription").value, }; wixData.insert("Announcements", toInsert) .then(() => { wixLocation.to(""); }) .catch( (err) => { console.log(err); } ); }
I think I understand what I'm missing I'm just not sure how to incorporate it into my existing code. I missing that actually file upload function, something like this:
I think need to figure out how to get the "uploadFile.url" into the "repeater_file1" field in the database.
Hi Tim,
Indeed you have not uploaded the file but you are just passing the value of the upload button. Try this:
Shan, thanks for the response! Ok, so the code seems to be working for the most part. The file is now uploading correctly and is inserted into the database. I'm using this in a lightbox and the one issue I'm having now is it hangs on "Please wait..." and nothing ever happens. However if I manually close the lightbox and then refresh the page everything shows correctly.
@timstumbo Please wait means that the file is still uploading, it does take few seconds.
Can you screenshot your code
@shan It hangs for several minutes. That's when I finally just closed the lightbox did a page refresh and everything was there correctly. Here's the exact code I'm using on the page (except for the wixLocation address, I modified that before copying the code over): | https://www.wix.com/corvid/forum/community-discussion/file-document-upload-issue-using-wixdata-insert | CC-MAIN-2019-47 | refinedweb | 320 | 60.31 |
Your message has been sent.
This article has been saved to your account.
Go to my account
This article has been emailed to your Kindle.
Send this article
Complete the form below to send this article, wxPython 2.8: Window Layout and Design, to a friend (or to yourself). We will never share your details (or your friend's) with anyone. For more information, read our Privacy Policy..
In this article by Cody Precord, author of the book wxPython 2.8 Application Development Cookbook, we will cover:
- Using a BoxSizer
- Understanding proportions, flags, and borders
- Laying out controls with the GridBagSizer
- Standard dialog button layout
- Using XML resources
- Making a custom resource handler
- Using the AuiFrameManager
(For more resources on Python, see here.)
Introduction
Once you have an idea of how the interface of your applications should look, it comes the time to put it all together. Being able to take your vision and translate it into code can be a tricky and often tedious task. A window's layout is defined on a two dimensional plane with the origin being the window's top-left corner. All positioning and sizing of any widgets, no matter what it's onscreen appearance, is based on rectangles. Clearly understanding these two basic concepts goes a long way towards being able to understand and efficiently work with the toolkit.
Traditionally in older applications, window layout was commonly done by setting explicit static sizes and positions for all the controls contained within a window. This approach, however, can be rather limiting as the windows will not be resizable, they may not fit on the screen under different resolutions, trying to support localization becomes more difficult because labels and other text will differ in length in different languages, the native widgets will often be different sizes on different platforms making it difficult to write platform independent code, and the list goes on.
So, you may ask what the solution to this is. In wxPython, the method of choice is to use the Sizer classes to define and manage the layout of controls. Sizers are classes that manage the size and positioning of controls through an algorithm that queries all of the controls that have been added to the Sizer for their recommended best minimal sizes and their ability to stretch or not, if the amount of available space increases, such as if a user makes a dialog bigger. Sizers also handle cross-platform widget differences, for example, buttons on GTK tend to have an icon and be generally larger than the buttons on Windows or OS X. Using a Sizer to manage the button's layout will allow the rest of the dialog to be proportionally sized correctly to handle this without the need for any platform-specific code.
So let us begin our adventure into the world of window layout and design by taking a look at a number of the tools that wxPython provides in order to facilitate this task.
Using a BoxSizer
A BoxSizer is the most basic of Sizer classes. It supports a layout that goes in a single direction—either a vertical column or a horizontal row. Even though it is the most basic to work with, a BoxSizer is one of the most useful Sizer classes and tends to produce more consistent cross-platform behavior when compared to some of the other Sizers types. This recipe creates a simple window where we want to have two text controls stacked in a vertical column, each with a label to the left of it. This will be used to illustrate the most simplistic usage of a BoxSizer in order to manage the layout of a window's controls.
How to do it...
Here we define our top level Frame, which will use a BoxSizer to manage the size of its Panel:
class BoxSizerFrame(wx.Frame):
def __init__(self, parent, *args, **kwargs):
super(BoxSizerFrame, self).__init__(*args, **kwargs)
# Attributes
self.panel = BoxSizerPanel(self)
# Layout
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(self.panel, 1, wx.EXPAND)
self.SetSizer(sizer)
self.SetInitialSize()
The BoxSizerPanel class is the next layer in the window hierarchy, and is where we will perform the main layout of the controls:
class BoxSizerPanel(wx.Panel):
def __init__(self, parent, *args, **kwargs):
super(BoxSizerPanel, self).__init__(*args, **kwargs)
# Attributes
self._field1 = wx.TextCtrl(self)
self._field2 = wx.TextCtrl(self)
# Layout
self._DoLayout()
Just to help reduce clutter in the __init__ method, we will do all the layout in a separate _DoLayout method::")
# Make the first row by adding the label and field
# to the first horizontal sizer
field1_sz.AddSpacer(50)
field1_sz.Add(field1_lbl)
field1_sz.AddSpacer(5) # put 5px of space between
field1_sz.Add(self._field1)
field1_sz.AddSpacer(50)
# Do the same for the second row
field2_sz.AddSpacer(50)
field2_sz.Add(field2_lbl)
field2_sz.AddSpacer(5)
field2_sz.Add(self._field2)
field2_sz.AddSpacer(50)
# Now finish the layout by adding the two sizers
# to the main vertical sizer.
vsizer.AddSpacer(50)
vsizer.Add(field1_sz)
vsizer.AddSpacer(15)
vsizer.Add(field2_sz)
vsizer.AddSpacer(50)
# Finally assign the main outer sizer to the panel
self.SetSizer(vsizer)
How it works...
The previous code shows the basic pattern of how to create a simple window layout programmatically, using sizers to manage the controls. First let's start by taking a look at the BoxSizerPanel class's _DoLayout method, as this is where the majority of the layout in this example takes place.
First, we started off by creating three BoxSizer classes: one with a vertical orientation, and two with a horizontal orientation. The layout we desired for this window requires us to use three BoxSizer classes and this is why. If you break down what we want to do into simple rectangles, you will see that:
- We wanted two TextCtrl objects each with a label to the left of them which can simply be thought of as two horizontal rectangles.
- We wanted the TextCtrl objects stacked vertically in the window which is just a vertical rectangle that will contain the other two rectangles.
This is illustrated by the following screenshot (borders are drawn in and labels are added to show the area managed by each of Panel's three BoxSizers):
In the section where we populate the first horizontal sizer (field1_sz), we use two of the BoxSizer methods to add items to the layout. The first is AddSpacer, which does simply as its named and adds a fixed amount of empty space in the left-hand side of the sizer. Then we use the Add method to add our StaticText control to the right of the spacer, and continue from here to add other items to complete this row. As you can see, these methods add items to the layout from left to right in the sizer. After this, we again do the same thing with the other label and TextCtrl in the second horizontal sizer.
The last part of the Panel's layout is done by adding the two horizontal sizers to the vertical sizer. This time, since the sizer was created with a VERTICAL orientation, the items are added from top to bottom. Finally, we use the Panel's SetSizer method to assign the main outer BoxSizer as the Panel's sizer.
The BoxSizerFrame also uses a BoxSizer to manage the layout of its Panel. The only difference here is that we used the Add method's proportion and flags parameters to tell it to make the Panel expand to use the entire space available. After setting the Frame's sizer, we used its SetInitialSize method, which queries the window's sizer and its descendents to get and set the best minimal size to set the window to. We will go into more detail about these other parameters and their effects in the next recipe.
There's more...
Included below is a little more additional information about adding spacers and items to a sizer's layout.
Spacers
The AddSpacer will add a square-shaped spacer that is X pixels wide by X pixels tall to the BoxSizer, where X is the value passed to the AddSpacer method. Spacers of other dimensions can be added by passing a tuple as the first argument to the BoxSizer's Add method.
someBoxSizer.Add((20,5))
This will add a 20x5 pixel spacer to the sizer. This can be useful when you don't want the vertical space to be increased by as much as the horizontal space, or vice versa.
AddMany
The AddMany method can be used to add an arbitrary number of items to the sizer in one call. AddMany takes a list of tuples that contain values that are in the same order as the Add method expects.
someBoxSizer.AddMany([(staticText,),
((10, 10),),
(txtCtrl, 0, wx.EXPAND)]))
This will add three items to the sizer: the first two items only specify the one required parameter, and the third specifies the proportion and flags parameters.
(For more resources on Python, see here.)
Understanding proportions, flags, and borders
Through the use of the optional parameters in a sizer's various Add methods, it is possible to control the relative proportions, alignment, and padding around every item that is managed by the sizer. Without using these additional settings, all the items in the sizer will just use their "best" minimum size and will be aligned to the top-left of the rectangle of space that the sizer provides. This means that the controls will not stretch or contract when the window is resized. Also, for example, if in a horizontal row of items in a BoxSizer one of the items has a greater height than some of the other items in that same row, they may not be aligned as desired (see the following diagram).
This diagram illustrates an alignment issue that can occur when some controls have a different-sized rectangle than the one next to it. This is a realistic example of a problem that can occur on GTK (Linux), as its ComboBoxes tend to be much taller than a StaticTextCtrl. So where on other platforms these two controls may appear to be properly center-aligned, they will look like this on Linux.
This recipe will re-implement the previous recipe's BoxSizerPanel, using these additional Add parameters to improve its layout, in order to show how these parameters can be used to influence how the sizer manages each of the controls that have been added to it.
Getting Started
Before getting started on this recipe, make sure you have reviewed the previous recipe, Using a BoxSizer, as we will be modifying its _DoLayout method in this recipe to define some additional behaviors that the sizers should apply to its layout.
How to do it...
Here, we will make some modifications to the SizerItems proportions, flags, and borders to change the behavior of the layout::")
# 1) HORIZONTAL BOXSIZERS
field1_sz.Add(field1_lbl, 0,
wx.ALIGN_CENTER_VERTICAL|wx.RIGHT, 5)
field1_sz.Add(self._field1, 1, wx.EXPAND)
field2_sz.Add(field2_lbl, 0,
wx.ALIGN_CENTER_VERTICAL|wx.RIGHT, 5)
field2_sz.Add(self._field2, 1, wx.EXPAND)
# 2) VERTICAL BOXSIZER
vsizer.AddStretchSpacer()
BOTH_SIDES = wx.EXPAND|wx.LEFT|wx.RIGHT
vsizer.Add(field1_sz, 0, BOTH_SIDES|wx.TOP, 50)
vsizer.AddSpacer(15)
vsizer.Add(field2_sz, 0, BOTH_SIDES|wx.BOTTOM, 50)
vsizer.AddStretchSpacer()
# Finally assign the main outer sizer to the panel
self.SetSizer(vsizer)
How it works...
This recipe just shows what we changed in the previous recipe's __DoLayout method to take advantage of some of these extra options. The first thing to notice in the section where we add the controls to the horizontal sizers is that we no longer have the AddSpacer calls. These have been replaced by specifying a border in the Add calls. When adding each of the labels we added two sizer flags, ALIGN_CENTER_VERTICAL and RIGHT. The first flag is an alignment flag that specifies the desired behavior of the alignment and the second is a border flag that specifies where we want the border parameter to be applied. In this case, the sizer will align the StaticText in the center of the vertical space and add a 5px padding to the right side of it.
Next, where we add the TextCtrl objects to the sizer, we specified a 1 for the proportion and EXPAND for the sizer flag. Setting the proportion greater than the default of 0 will tell the sizer to give that control proportionally more of the space in the sizer's managed area. A proportion value greater than 0 in combination with the EXPAND flag which tells the control to get bigger as space is available will let it stretch as the dialog is resized to a bigger size. Typically you will only need to specify 0 or 1 for the proportion parameter, but under some complex layouts it may be necessary to give different controls a relatively different amount of the total available space. For example, in a layout with two controls if both are given a proportion of 1, they would each get 50 percent of the space. Changing the proportion of one of the controls to 2 would change the space allocation to a 66/33 percent balance.
We also made some changes to the final layout with the vertical sizer. First, instead of using the regular AddSpacer function to add some static spacers to the layout, we changed it to use AddStretchSpacer instead. AddStretchSpacer is basically the equivalent of doing Add((-1,-1), 1, wx.EXPAND), which just adds a spacer of indeterminate size that will stretch as the window size is changed. This allows us to keep the controls in the center of the dialog as its vertical size changes.
Finally, when adding the two horizontal sizers to the vertical sizer, we used some flags to apply a static 50px of spacing around the LEFT, RIGHT, and TOP or BOTTOM of the sizers. It's also important to notice that we once again passed the EXPAND flag. If we did not do this, the vertical sizer would not allow those two items to expand which in turn would nullify us adding the EXPAND flag for the TextCtrl objects. Try running this and the previous sample side-by-side and resizing each window to see the difference in behavior.
The previous screenshot has had some lines drawn over it to show the five items that are managed by the main top level VERTICAL sizer vsizer.
There's more...
There are a number of flags that can be used to affect the layout in various ways. The following three tables list the different categories of these flags that can be combined in the flag's bitmask:
Alignment flags
This table shows a listing of all the alignment flags and a description of what each one does:
Border flags
The following flags can be used to control which side(s) of the control the border argument of the Sizer's Add method is applied to:
Behavior flags
The sizer flags in this table can be used to control how a control is resized within a sizer:
Laying out controls with the GridBagSizer
There are a number of other types of sizers in wxPython, besides the BoxSizer, that are designed to help simplify different kinds of layouts. The GridSizer, FlexGridSizer, and GridBagSizer can be used to lay items out in a grid-like manner. The GridSizer provides a fixed grid layout where items are added into different "cells" in the grid. The FlexGridSizer is just like the GridSizer, except that the columns in the grid can be different widths. Finally, the GridBagSizer is similar to the FlexGridSizer but also allows items to span over multiple "cells" in the grid, which makes it possible to achieve layouts that can usually only be achieved by nesting several BoxSizers. This recipe will discuss the use of the GridBagSizer, and use it to create a dialog that could be used for viewing the details of a log event.
How to do it...
Here we will create a custom DetailsDialog that could be used for viewing log messages or system events. It has two fields in it for displaying the type of message and the verbose message text:
class DetailsDialog(wx.Dialog):
def __init__(self, parent, type, details, title=""):
"""Create the dialog
@param type: event type string
@param details: long details string
"""
super(DetailsDialog, self).__init__(parent, title=title)
# Attributes
self.type = wx.TextCtrl(self, value=type,
style=wx.TE_READONLY)
self.details = wx.TextCtrl(self, value=details,
style=wx.TE_READONLY|
wx.TE_MULTILINE)
# Layout
self.__DoLayout()
self.SetInitialSize()
def __DoLayout(self):
sizer = wx.GridBagSizer(vgap=8, hgap=8)
type_lbl = wx.StaticText(self, label="Type:")
detail_lbl = wx.StaticText(self, label="Details:")
# Add the event type fields
sizer.Add(type_lbl, (1, 1))
sizer.Add(self.type, (1, 2), (1, 15), wx.EXPAND)
# Add the details field
sizer.Add(detail_lbl, (2, 1))
sizer.Add(self.details, (2, 2), (5, 15), wx.EXPAND)
# Add a spacer to pad out the right side
sizer.Add((5, 5), (2, 17))
# And another to the pad out the bottom
sizer.Add((5, 5), (7, 0))
self.SetSizer(sizer)
How it works...
The GridBagSizer's Add method of GridBagSizer takes some additional parameters compared to the other types of sizers. It is necessary to specify the grid position and optionally the number of columns and rows to span. We used this in our details dialog in order to allow the TextCtrl fields to span multiple columns and multiple rows in the case of the details field. The way this layout works can get a little complicated, so let's go over our __DoLayout method line-by-line to see how each of them affect the dialog's layout.
First, we create out GridBagSizer, and in its constructor we specify how much padding we want between the rows and columns. Next, we start adding our items to the sizer. The first item that we add is the type StaticText label, which we added at row 1, column 1. This was done to leave some padding around the outside edge. Next, we added the TextCtrl to the right of the label at row 1, column 2. For this item, we also specified the span parameter to tell the item to span 1 row and 15 columns. The column width is proportionally based upon the size of the first column in the grid.
Next we add the details fields, starting with the details label, which is added at row 2, column 1, in order to line up with the type StaticText label. Since the details text may be long, we want it to span multiple rows. Hence, for its span parameter we specified for it to span 5 rows and 15 columns.
Finally, so that the padding around our controls on the bottom and right-hand side matches the top and left, we need to add a spacer to the right and bottom to create an extra column and row. Notice that for this step we need to take into account the span parameters of the previous items we added, so that our items do not overlap. Items cannot occupy the same column or row as any other item in the sizer. So first we add a spacer to row 2, column 17, to create a new column on the right-hand side of our TextCtrl objects. We specified column 17 because the TextCtrl objects start at column 2 and span 15 columns. Likewise, we did the same when adding one to the bottom, to take into account the span of the details text field. Note that instead of offsetting the first item in the grid and then adding spacers, it would have been easier to nest our GridBagSizer inside of a BoxSizer and specify a border. The approach in this recipe was done just to illustrate the need to account for an item's span when adding additional items to the grid:
Standard dialog button layout
Each platform has different standards for how different dialog buttons are placed in the dialog. This is where the StdDialogButtonSizer comes into play. It can be used to add standard buttons to a dialog, and automatically take care of the specific platform standards for where the button is positioned. This recipe shows how to use the StdDialogButtonSizer to quickly and easily add standard buttons to a Dialog.
How to do it...
Here is the code for our custom message box class that can be used as a replacement for the standard MessageBox in cases where the application wants to display custom icons in their pop-up dialogs:
class CustomMessageBox(wx.Dialog):
def __init__(self, parent, message, title="",
bmp=wx.NullBitmap, style=wx.OK):
super(CustomMessageBox, self).__init__(parent, title=title)
# Attributes
self._flags = style
self._bitmap = wx.StaticBitmap(self, bitmap=bmp)
self._msg = wx.StaticText(self, label=message)
# Layout
self.__DoLayout()
self.SetInitialSize()
self.CenterOnParent()
def __DoLayout(self):
vsizer = wx.BoxSizer(wx.VERTICAL)
hsizer = wx.BoxSizer(wx.HORIZONTAL)
# Layout the bitmap and caption
hsizer.AddSpacer(10)
hsizer.Add(self._bitmap, 0, wx.ALIGN_CENTER_VERTICAL)
hsizer.AddSpacer(8)
hsizer.Add(self._msg, 0, wx.ALIGN_CENTER_VERTICAL)
hsizer.AddSpacer(10)
# Create the buttons specified by the style flags
# and the StdDialogButtonSizer to manage them
btnsizer = self.CreateButtonSizer(self._flags)
# Finish the layout
vsizer.AddSpacer(10)
vsizer.Add(hsizer, 0, wx.ALIGN_CENTER_HORIZONTAL)
vsizer.AddSpacer(8)
vsizer.Add(btnsizer, 0, wx.EXPAND|wx.ALL, 5)
self.SetSizer(vsizer)
How it works...
Here, we created a custom MessageBox clone that can accept a custom Bitmap to display instead of just the standard icons available in the regular MessageBox implementation. This class is pretty simple, so let's jump into the __DoLayout method to see how we made use of the StdDialogButtonSizer.
In __DoLayout, we first created some regular BoxSizers to do the main part of the layout, and then in one single line of code we created the entire layout for our buttons. To do this, we used the CreateButtonSizer method of the base wx.Dialog class. This method takes a bitmask of flags that specifies the buttons to create, then creates them, and adds them to a StdDialogButtonSizer that it returns. All we need to do after this is to add the sizer to our dialog's main sizer and we are done!
The following screenshots show how the StdDialogButtonSizer handles the differences in platform standards.
For example, the OK and Cancel buttons on a dialog are ordered as OK/Cancel on Windows:
On Macintosh OS X, the standard layout for the buttons is Cancel/OK:
There's more...
Here is a quick reference to the flags that can be passed as a bitmask to the CreateButtonSizer method in order to create the buttons that the button sizer will manage:
(For more resources on Python, see here.)
Using XML resources
XRC is a way of creating and design window layouts with XML resource files. The hierarchical nature of XML parallels that of an application's window hierarchy, which makes it a very sensible data format to serialize a window layout with. This recipe shows how to create and load a simple dialog with two CheckBoxe objects and two Button objects on it, from an XML resource file.
How to do it...
Here is the XML for our dialog that we have in a file called xrcdlg.xrc:
<?xml version="1.0" ?>
<resource>
<object class="wxDialog" name="xrctestdlg">
<object class="wxBoxSizer">
<orient>wxVERTICAL</orient>
<object class="spacer">
<option>1</option>
<flag>wxEXPAND</flag>
</object>
<object class="sizeritem">
<object class="wxCheckBox">
<label>CheckBox Label</label>
</object>
<flag>wxALL|wxALIGN_CENTRE_HORIZONTAL</flag>
<border>5</border>
</object>
<object class="spacer">
<option>1</option>
<flag>wxEXPAND</flag>
</object>
<object class="sizeritem">
<object class="wxBoxSizer">
<object class="sizeritem">
<object class="wxButton" name="wxID_OK">
<label>Ok</label>
</object>
<flag>wxALL</flag>
<border>5</border>
</object>
<object class="sizeritem">
<object class="wxButton" name="wxID_CANCEL">
<label>Cancel</label>
</object>
<flag>wxALL</flag>
<border>5</border>
</object>
<orient>wxHORIZONTAL</orient>
</object>
<flag>wxALIGN_BOTTOM|wxALIGN_CENTRE_HORIZONTAL</flag>
<border>5</border>
</object>
</object>
<title>Xrc Test Dialog</title>
<style>wxDEFAULT_DIALOG_STYLE|wxRESIZE_BORDER</style>
</object>
</resource>
When loaded, the above XML will generate the following dialog:
This is a minimal program to load this XML resource to make and show the dialog it represents:
import wx
import wx.xrc as xrc
app = wx.App()
frame = wx.Frame(None)
resource = xrc.XmlResource("xrcdlg.xrc")
dlg = resource.LoadDialog(frame, "xrctestdlg")
dlg.ShowModal()
app.MainLoop()
How it works...
The XML in this recipe was created with the help of xrced, which is an XML resource editor tool that is a part of the wxPython tools package. The object tag is used to represent a class object. Nesting other objects inside is how the parent child relationship is represented with the XML. The class attribute of the object tag is what is used to specify the type of class to create. The values should be a class name and in the case of wxPython provided classes, they use the wxWidgets names, which are prefixed with "wx". To work with XmlResource classes, it is highly recommended to use a tool like xrced to generate the XML.
In order to load the XML to create the object(s) that are used for representation, you need to import the wx.xrc package, which provides the XmlResource class. There are a few ways to use XmlResource to perform the transformations on the XML. In this example, we created our XmlResource object by passing the path to our xrc file in its constructor. This object has a number of load methods for instantiating different types of objects. We want to load a dialog, so we called its LoadDialog method, passing a parent window as the first argument and then the name of the dialog we want to load from the XML. It will then instantiate an instance of that dialog and return it so that we can show it.
There's more...
Included below are some additional references to features available when using the XRC library.
The XmlResource object has methods for loading many different kinds of resources from XML. Here is quick reference to some of the additional methods:
Specifying standard IDs
In order to give an object a standard ID in XRC, it should be specified in the object tag's name attribute, using the wxWidgets naming for the ID (that is, wxID_OK without the '.').
Making a custom resource handler
Although XRC has built-in support for a large number of the standard controls, any non-trivial application will use its own subclasses and/or custom widgets. Creating a custom XmlResource class will allow these custom classes to be loaded from an XML resource file. This recipe shows how to create an XML resource handler for a custom Panel class and then use that handler to load the resource.
Getting Started
This recipe discusses how to customize and extend the handling of XML resources. Please review the Using XML resources recipe in this article to learn the basics of how XRC works.
How to do it...
In the following code, we will show how to create a custom XML resource handler for a Panel and then how to use XRC to load that resource into a Frame:
import wx
import wx.xrc as xrc
# Xml to load our object
RESOURCE = r"""<?xml version="1.0"?>
<resource>
<object class="TextEditPanel" name="TextEdit">
</object>
</resource>
"""
Here, in our Frame subclass, we simply create an instance of our custom resource handler and use it to load our custom Panel:
class XrcTestFrame(wx.Frame):
def __init__(self, *args, **kwargs):
super(XrcTestFrame, self).__init__(*args, **kwargs)
# Attributes
resource = xrc.EmptyXmlResource()
handler = TextEditPanelXmlHandler()
resource.InsertHandler(handler)
resource.LoadFromString(RESOURCE)
self.panel = resource.LoadObject(self,
"TextEdit",
"TextEditPanel")
# Layout
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(self.panel, 1, wx.EXPAND)
self.SetSizer(sizer)
Here is the Panel class that our custom resource handler will be used to create. It is just a simple Panel with a TextCtrl and two Buttons on it:
class TextEditPanel(wx.Panel):
"""Custom Panel containing a TextCtrl and Buttons
for Copy and Paste actions.
"""
def __init__(self, *args, **kwargs):
super(TextEditPanel, self).__init__(*args, **kwargs)
# Attributes
self.txt = wx.TextCtrl(self, style=wx.TE_MULTILINE)
self.copy = wx.Button(self, wx.ID_COPY)
self.paste = wx.Button(self, wx.ID_PASTE)
# Layout
self._DoLayout()
# Event Handlers
self.Bind(wx.EVT_BUTTON, self.OnCopy, self.copy)
self.Bind(wx.EVT_BUTTON, self.OnPaste, self.paste)
def _DoLayout(self):
"""Layout the controls"""
vsizer = wx.BoxSizer(wx.VERTICAL)
hsizer = wx.BoxSizer(wx.HORIZONTAL)
vsizer.Add(self.txt, 1, wx.EXPAND)
hsizer.AddStretchSpacer()
hsizer.Add(self.copy, 0, wx.RIGHT, 5)
hsizer.Add(self.paste)
hsizer.AddStretchSpacer()
vsizer.Add(hsizer, 0, wx.EXPAND|wx.ALL, 10)
# Finally assign the main outer sizer to the panel
self.SetSizer(vsizer)
def OnCopy(self, event):
self.txt.Copy()
def OnPaste(self, event):
self.txt.Paste()
Finally, here is our custom XML resource handler class, where we just have to override two methods to implement the handling for our TextEditPanel class:
class TextEditPanelXmlHandler(xrc.XmlResourceHandler):
"""Resource handler for our TextEditPanel"""
def CanHandle(self, node):
"""Required override. Returns a bool to say
whether or not this handler can handle the given class
"""
return self.IsOfClass(node, "TextEditPanel")
def DoCreateResource(self):
"""Required override to create the object"""
panel = TextEditPanel(self.GetParentAsWindow(),
self.GetID(),
self.GetPosition(),
self.GetSize(),
self.GetStyle("style",
wx.TAB_TRAVERSAL),
self.GetName())
self.SetupWindow(panel)
self.CreateChildren(panel)
return panel
How it works...
The TextEditPanel is our custom class that we want to create a custom resource handler for. The TextEditPanelXmlHandler class is a minimal resource handler that we created to be able to load our class from XML. This class has two required overrides that need to be implemented for it to function properly. The first is CanHandle, which is called by the framework to check if the handler can handle a given node type. We used the IsOfClass method to check if the node was of the same type as our TextEditPanel. The second is DoCreateResource, which is what is called to create our class. To create the class, all of its arguments can be retrieved from the resource handler.
The XrcTestFrame class is where we made use of our custom resource handler. First, we created an EmptyXmlResource object and used its InsertHandler method to add our custom handler to it. Then we loaded the XML from the RESOURCE string that we defined using the handler's LoadFromString method. After that, all there was to do was load the object using the resource's LoadObject method, which takes three arguments: the parent window of the object to be loaded, the name of the object in the XML resource, and the classname.
Using the AuiFrameManager
The AuiFrameManager is part of the Advanced User Interface (wx.aui) library added to wxPython in 2.8. It allows a Frame to have a very user customizable interface. It automatically manages children windows in panes that can be undocked and turned into separate floating windows. There are also some built-in features to help with persisting and restoring the window's layout during running the application. This recipe will create a Frame base class that has AUI support and will automatically save its perspective and reload it when the application is next launched.
How to do it...
The following code will define a base class that encapsulates some of the usage of an AuiManager:
import wx
import wx.aui as aui
class AuiBaseFrame(wx.Frame):
"""Frame base class with builtin AUI support"""
def __init__(self, parent, *args, **kwargs):
super(AuiBaseFrame, self).__init__(*args, **kwargs)
# Attributes
auiFlags = aui.AUI_MGR_DEFAULT
if wx.Platform == '__WXGTK__' and \
aui.AUI_MGR_DEFAUL & aui.AUI_MGR_TRANSPARENT_HINT:
# Use venetian blinds style as transparent can
# cause crashes on Linux when desktop compositing
# is used. (wxAUI bug in 2.8)
auiFlags -= aui.AUI_MGR_TRANSPARENT_HINT
auiFlags |= aui.AUI_MGR_VENETIAN_BLINDS_HINT
self._mgr = aui.AuiManager(self, flags=auiFlags)
# Event Handlers
self.Bind(wx.EVT_CLOSE, self.OnAuiBaseClose)
OnAuiBaseClose will be called when the Frame closes. We use this as the point to get the current window layout perspective and save it for the next time the application is launched:
def OnAuiBaseClose(self, event):
"""Save perspective on exit"""
appName = wx.GetApp().GetAppName()
assert appName, "No App Name Set!"
config = wx.Config(appName)
perspective = self._mgr.SavePerspective()
config.Write("perspective", perspective)
event.Skip() # Allow event to propagate
AddPane simply wraps getting access to the Frame's AuiManager and adds the given pane and auiInfo to it:
def AddPane(self, pane, auiInfo):
"""Add a panel to be managed by this Frame's
AUI Manager.
@param pane: wx.Window instance
@param auiInfo: AuiInfo Object
"""
# Delegate to AuiManager
self._mgr.AddPane(pane, auiInfo)
self._mgr.Update() # Refresh the layout
The next method is simply a convenience method for creating and adding the main center pane to the managed window:
def SetCenterPane(self, pane):
"""Set the main center pane of the frame.
Convenience method for AddPane.
@param pane: wx.Window instance
"""
info = aui.AuiPaneInfo()
info = info.Center().Name("CenterPane")
info = info.Dockable(False).CaptionVisible(False)
self._mgr.AddPane(pane, info)
This final method is used to load the last saved window layout from the last time the window was opened:
def LoadDefaultPerspective(self):
appName = wx.GetApp().GetAppName()
assert appName, "Must set an AppName!"
config = wx.Config(appName)
perspective = config.Read("perspective")
if perspective:
self._mgr.LoadPerspective(perspective)
How it works...
In this recipe, we created a class to help encapsulate some of the AuiManager's functionality. So let's take a look at some of the functionality that this class provides, and how it works.
The __init__ method is where we create the AuiManager object that will manage the panes that we want to add to the Frame. The AuiManager accepts a number of possible flags to dictate its behavior. We employed a small workaround for a bug on Linux platforms that use desktop compositing. Using the transparent docking hints can cause an AUI application to crash in this scenario, so we replaced it with the venetian blind style instead.
OnAuiBaseClose is used as an event handler for when the Frame closes. We use this as a hook to automatically store the current layout of the AuiManager, which is called a perspective, for the next application launch. To implement this feature, we have created a requirement that the App object's SetName method was called to set the application name because we need this in order to use wx.Config. The wx.Config object is simply an interface used to access the Registry on Windows or an application configuration file on other platforms. SavePerspective returns a string encoded with all of the information that the AuiManager needs in order to restore the current window layout. The application can then simply call our LoadDefaultPerspective method when the application starts up, in order to restore the user's last window layout.
The other two methods in this class are quite simple and are provided simply for convenience to delegate to the AuiManager of the Frame. The AddPane method of the AuiManager is how to add panes to be managed by it. The pane argument needs to be a window object that is a child of the Frame. In practice, this is usually some sort of Panel subclass. The auiInfo argument is an AuiPaneInfo object. This is what the AuiManager uses to determine how to manage the pane. See the sample code that accompanies this recipe for an example of this class in action.
There's more...
Here is a quick reference to the flags that can be used in the flags bitmask for the AuiManager in order to customize its behavior and the styles of some of its components:
Summary
This article introduced to you a number of concepts and techniques for designing your user interfaces in wxPython. The majority of this article explained the use of Sizers to allow you to quickly implement cross-platform user interfaces.
Further resources on this subject:
- Python Image Manipulation [article]
- Creating Your Own Functions in MySQL for Python [article]
- Python Text Processing with NLTK 2.0: Creating Custom Corpora [article]
About the Author :
Cody Precord
Cody Precord is a Software Engineer based in Minneapolis, MN, USA. He has been designing and writing systems and application software for AIX, Linux, Windows, and Macintosh OSX for the last 10 years using primarily C, C++, Perl, Bash, Korn Shell, and Python. The constant need to be working on multiple platforms naturally led Cody to the wxPython toolkit, which he has been using intensely for that last 5 years. Cody has been primarily using wxPython for his open source project Editra which is a cross platform development tool. He is interested in promoting cross platform development practices and improving usability in software.
Post new comment | http://www.packtpub.com/article/wxpython-28-window-layout-design | CC-MAIN-2013-20 | refinedweb | 6,120 | 54.83 |
table of contents
- buster 4.16-2
- buster-backports 5.04-1~bpo10+1
- testing 5.07-1
- unstable 5.07-1
NAME¶gets - get a string from standard input (DEPRECATED)
SYNOPSIS¶
#include <stdio.h>
char *gets(char *s);
DESCRIPTION¶Never use this function.
gets() reads a line from stdin into the buffer pointed to by s until either a terminating newline or EOF, which it replaces with a null byte ('\0'). No check for buffer overrun is performed (see BUGS below).
RETURN VALUE¶gets() returns s on success, and NULL on error or when end of file occurs while no characters have been read. However, given the lack of buffer overrun checking, there can be no guarantees that the function will even return.
ATTRIBUTES¶For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶C¶Never | https://manpages.debian.org/buster/manpages-dev/gets.3.en.html | CC-MAIN-2020-34 | refinedweb | 142 | 68.67 |
Installation¶
Installing
astropy¶
If you are new to Python and/or do not have familiarity with Python virtual environments, then we recommend starting by installing the Anaconda Distribution. This works on all platforms (linux, Mac, Windows) and installs a full-featured scientific Python in a user directory without requiring root permissions.
Using pip¶
Warning
Users of the Anaconda Python distribution should follow the instructions for Using Conda.
To install
astropy with pip, run:
pip install astropy
If you want to make sure none of your existing dependencies get upgraded, you can also do:
pip install astropy --no-deps
On the other hand, if you want to install
astropy along with recommended
or even all of the available optional dependencies,
you can do:
pip install astropy[recommended]
or:
pip install astropy[all]. Note that in
this case you will need a C compiler (e.g.,
gcc or
clang) to be installed
(see Building from source below) for the installation to succeed.
If you get.
Alternatively, if you intend to do development on other software that uses
astropy, such as an affiliated package, consider installing
astropy
into a virtualenv.
Do not install
astropy or other third-party packages using
sudo
unless you are fully aware of the risks.
Using Conda¶
To install
astropy using conda run:
conda install astropy
astropy is installed by default with the Anaconda Distribution. To update to the latest version run:
conda update astropy
There may be a delay of a day or two between when a new version of
astropy
is released and when a package is available for conda. You can check
for the list of available versions with
conda search astropy.
If you want to install
astropy along with recommended or all of the
available optional dependencies, you can do:
conda install -c conda-forge -c defaults scipy matplotlib
or:
conda install -c conda-forge -c defaults scipy matplotlib \ h5py beautifulsoup4 html5lib bleach pandas sortedcontainers \ pytz setuptools mpmath bottleneck jplephem asdf pyarrow
To also be able to run tests (see below) and support Building Documentation use the
following. We use
pip for these packages to ensure getting the latest
releases which are compatible with the latest
pytest and
sphinx releases:
pip install pytest-astropy sphinx-astropy
Testing an Installed
astropy¶
The easiest way to test if your installed version of
astropy is running
correctly is to use the astropy.test() function:
import astropy astropy.test()
The tests should run and print out any failures, which you can report at the Astropy issue tracker.
This way of running the tests may not work if you do it in the
astropy source
distribution. See Testing a Source Code Build of astropy for how to run the tests from the
source code directory, or Running Tests for more details.
Requirements¶
astropy has the following strict requirements:
astropy also depends on a number of other packages for optional features.
The following are particularly recommended:
scipy >=1.3 or later: To power a variety of features in several modules.
matplotlib !=3.4.0,!=3.5.2,>=3.1 or later: To provide plotting functionality that
astropy.visualizationenhances.
The further dependencies provide more specific features:
h5py: To read/write
Tableobjects from/to HDF5 files.
BeautifulSoup: To read
Tableobjects from HTML files.
html5lib: To read
Tableobjects from HTML files using the pandas reader.
bleach: Used to sanitize text when disabling HTML escaping in the
TableHTML writer.
xmllint: To validate VOTABLE XML files. This is a command line tool installed outside of Python.
pandas: To convert
Tableobjects from/to pandas DataFrame objects. Version 0.14 or higher is required to use the Pandas I/O functions to read/write
Tableobjects..
setuptools: Used for discovery of entry points which are used to insert fitters into
astropy.modeling.fitting.
mpmath: Used for the ‘kraft-burrows-nousek’ interval in
poisson_conf_interval.
asdf >=2.10.0 or later: Enables the serialization of various Astropy classes into a portable, hierarchical, human-readable representation.
bottleneck: Improves the performance of sigma-clipping and other functionality that may require computing statistics on arrays with NaN values.
certifi: Useful when downloading files from HTTPS or FTP+TLS sites in case Python is not able to locate up-to-date root CA certificates on your system; this package is usually already included in many Python installations (e.g., as a dependency of the
requestspackage).
pyarrow >=5.0.0 or later: To read/write
Tableobjects from/to Parquet files.
However, note that these packages require installation only if those particular
features are needed.
astropy will import even if these dependencies are not
installed.
The following packages can optionally be used when testing:
pytest-astropy: See Testing a Source Code Build of astropy
pytest-xdist: Used for distributed testing.
pytest-mpl: Used for testing with Matplotlib figures.
objgraph: Used only in tests to test for reference leaks.
IPython >=4.2 or later: Used for testing the notebook interface of
Table.
coverage: Used for code coverage measurements.
skyfield: Used for testing Solar System coordinates.
spgp4: Used for testing satellite positions.
tox: Used to automate testing and documentation builds.
Building from Source¶
Prerequisites¶
You will need a compiler suite and the development headers for Python in order
to build
astropy. You do not need to install any other specific build
dependencies (such as Cython) since these are
declared in the
pyproject.toml file and will be automatically installed into
a temporary build environment by pip.
Prerequisites for Linux¶
On Linux, using the package manager for your distribution will usually be the
easiest route to making sure you have the prerequisites to build
astropy. In
order to build from source, you will need the Python development
package for your Linux distribution, as well as pip.
For Debian/Ubuntu:
sudo apt-get install python3-dev python3-numpy-dev python3-setuptools cython3 python3-pytest-astropy
For Fedora/RHEL:
sudo yum install python3-devel python3-numpy python3-setuptools python3-Cython python3-pytest-astropy
Note
Building the developer version of
astropy may require
newer versions of the above packages than are available in
your distribution’s repository. If so, you could either try
a more up-to-date distribution (such as Debian
testing),
or install more up-to-date versions of the packages using
pip or
conda in a virtual environment.
If you wish to participate in the development of
astropy, see the
Developer Documentation. The present document covers only the basics necessary to
installing
astropy.
Building and Installing¶
To build and install
astropy (from the root of the source tree):
pip install .
If you install in this way and you make changes to the code, you will need to re-run the install command for changes to be reflected. Alternatively, you can use:
pip install -e .
which installs
astropy in develop/editable mode – this then means that
changes in the code are immediately reflected in the installed version.
Troubleshooting¶
If you get an error mentioning that you do not have the correct permissions to
install
astropy into the default
site-packages directory, you can try
installing with:
pip set environment variables with the
pattern
ASTROPY_USE_SYSTEM_??? to
1 when building/installing
the package.
For example, to build
astropy using the system’s expat parser
library, use:
ASTROPY_USE_SYSTEM_EXPAT=1 pip install -e .
To build using all of the system libraries, use:
ASTROPY_USE_SYSTEM_ALL=1 pip install -e .
The C libraries currently bundled with
astropy include:
wcslib see
cextern/wcslib/READMEfor the bundled version. To use the system version, set
ASTROPY_USE_SYSTEM_WCSLIB=1.
cfitsio see
cextern/cfitsio/changes.txtfor the bundled version. To use the system version, set
ASTROPY_USE_SYSTEM_CFITSIO=1.
expat see
cextern/expat/READMEfor the bundled version. To use the system version, set
ASTROPY_USE_SYSTEM_EXPAT=1.
Installing
astropy into CASA¶
If you want to be able to use
astropy inside CASA, the easiest way is to do so from inside CASA.
First, we need to make sure pip is installed. Start up CASA as normal, and then type:
CASA <2>: from setuptools.command import easy_install CASA <3>: easy_install.main(['--user', 'pip'])
Now, quit CASA and re-open it, then type the following to install
astropy:
CASA <2>: import do not encounter your issue there, please post a new one.
Installing pre-built Development Versions of
astropy¶
Most nights a development snapshot of
astropy will be compiled.
This is useful if you want to test against a development version of astropy but
do not want to have to build it yourselves. You can see the
available astropy dev snapshots page
to find out what is currently being offered.
Installing these “nightlies” of
astropy can be achieved by using
pip:
$ pip install --extra-index-url= --pre astropy
The extra index URL tells
pip to check the
pip index on Azure Pipelines, where the
nightlies are built, and the
--pre command tells
pip to install pre-release
versions (in this case
.dev releases).. The easiest way to build the documentation is to use tox as detailed in
Building. If you are happy to do this, you can skip the rest
of this section.
On the other hand, if you wish to call Sphinx manually to build the documentation, you will need to make sure that a number of dependencies are installed. If you use conda, the easiest way to install the dependencies is with:
conda install -c conda-forge sphinx-astropy
Without conda, you install the dependencies by specifying
[docs] when
installing
astropy with pip:
pip install -e '.[docs]'
You can alternatively install the sphinx-astropy package with pip:
pip install used by
astropy
Graphviz - generate inheritance graphs (available as a conda package or a system install but not in pip)
Building¶
There are two ways to build the Astropy documentation. The easiest way is to
execute the following tox command (from the
astropy source directory):
tox -e build_docs
If you do this, you do not need to install any of the documentation dependencies
as this will be done automatically. The documentation will be built in the
docs/_build/html directory, and can be read by pointing a web browser to
docs/_build/html/index.html.
Alternatively, you can do:
cd docs make html
And the documentation will be generated in the same location. Note that this uses the installed version of astropy, so if you want to make sure the current repository version is used, you will need to install it with e.g.:
pip install -e .[docs]
before changing to the
docs directory.
In the second way, LaTeX documentation can be generated by using the command:
make latex
The LaTeX file
Astropy.tex will be created in the
docs/_build/latex
directory, and can be compiled using
pdflatex.
Reporting Issues/Requesting Features¶
As mentioned above, building the documentation depends on a number of Sphinx extensions and other packages. Since it is not always possible to know which package is causing issues or would need to have a new feature implemented, you can
The easiest way to run the tests in a source checkout of
astropy
is to use tox:
tox -e test-alldeps
There are also alternative methods of Running Tests if you would like more control over the testing process. | https://docs.astropy.org/en/latest/install.html | CC-MAIN-2022-27 | refinedweb | 1,843 | 52.9 |
I am really new to java and I signed up for an AP class which is being taught very badly, and so I have no idea of how to do this part of the assignment. It prampts you to add code that will do the following
Find and print the maximum sale. Print both the id of the salesperson with the max sale and the amount of the sale, e.g., “Salesperson 3 had the highest sale with $4500.” Note that you don’t need another loop for this; you can do it in the same loop where the values are read and the sum is computed.
the code we are given is the following
import java.util.Scanner;
public class Sales {
public static void main(String[] args) {
final int salesPeople = 5;
int[] sales = new int[salesPeople];
int sum;
Scanner scan = new Scanner(System.in);
for (int i=0; i<sales.length; i++) {
System.out.print("Enter sales for salesperson " + i + ":");
sales[i] = scan.nextInt();
}
System.out.println("\nSalesperson Sales");
System.out.println("------------------");
sum = 0;
for (int i=0; i<sales.length; i++) {
System.out.println(" " + " " + sales[i]);
sum += sales[i];
}
System.out.println("\nTotal sales: " + sum);
System.out.println("Average sale " + sum/salesPeople);
}
Because it's your homework, I won't give you the direct answer, but instead give you some hints. If you add two variables, say,
int heighestSale = -1; // assume the sales are not negative or at least one sale is not negative
int heighestPerson = -1;
then you update the two variables based on the current sale value in one of the loops and print out them at the end. | https://codedump.io/share/t3VdTJENowq8/1/how-to-find-the-max-value-of-an-unsorted-array | CC-MAIN-2016-50 | refinedweb | 273 | 74.29 |
I’m not ashamed to say it: I’m a bit of an ATMEL guy. AVR microcontrollers are virtually exclusively what I utilize when creating hobby-level projects. Wile I’d like to claim to be an expert in the field since I live and breathe ATMEL datasheets and have used many intricate features of these microchips, the reality is that I have little experience with other platforms, and have likely been leaning on AVR out of habit and personal convention rather than a tangible reason.
Although I was initially drawn to the AVR line of microcontrollers because of its open-source nature (The primary compiler is the free AVR-GCC) and longstanding ability to be programmed from non-Windows operating systems (like Linux), Microchip’s PIC has caught my eye over the years because it’s often a few cents cheaper, has considerably large professional documentation, and offers advanced integrated peripherals (such as native USB functionality in a DIP package) more so than the current line of ATTiny and ATMega microcontrollers. From a hobby standpoint, I know that ATMEL is popular (think Arduino), but from a professional standpoint I usually hear about commercial products utilizing PIC microcontrollers. One potential drawback to PIC (and the primary reason I stayed away from it) is that full-featured C compilers are often not free, and as a student in the medical field learning electrical engineering as a hobby, I’m simply not willing to pay for software at this stage in my life.
I decided to take the plunge and start gaining some experience with the PIC platform. I ordered some PIC chips (a couple bucks a piece), a PIC programmer (a Chinese knock-off clone of the Pic Kit 2 which is <$20 shipped on eBay), and shelved it for over a year before I got around to figuring it out today. My ultimate goal is to utilize its native USB functionality (something at ATMEL doesn’t currently offer in DIP packages). I’ve previously used bit-banging libraries like V-USB to hack together a USB interface on AVR microcontrollers, but it felt unnecessarily complex. PIC is commonly used and a bit of an industry standard, so I’m doing myself a disservice by not exploring it. My goal is USB functionality, but I have to start somewhere: blinking a LED.
Here’s my blinking LED. It’s a bit anticlimactic, but it represents a successful program design from circuit to writing the code to programming the microchip.
Based on my limited experience, it seems you need 4 things to program a PIC microcontroller with C:
- PIC microcontroller compatible with your programmer and your software (I’m using 18F2450)
- PIC programmer (I’m using a clone PicKit 2, $19.99 shipped on eBay) – get the PICkit2 installer here
- Install MPLAB IDE (programming environment for PIC) – has a free version
- Install a C compiler: I’m using PIC18 C Compiler for MPLAB Lite – has a free version
The first thing I did was familiarize myself with the pin diagram of my PIC from its datasheet. I’m playing with an 18F2450 and the datasheet is quite complete. If you look at the pin diagram, you can find pins labeled MCLR (reset), VDD (+5V), VSS (GND), PGC (clock), and PGD (data). These pins should be connected to their respective counterparts on the programmer. To test connectivity, install and run the PICkit2 installer software and it will let you read/verify the firmware on the chip, letting you know connectivity is solid. Once you’re there, you’re ready to start coding!
I wish I were friends with someone who programmed PIC, such that in 5 minutes I could be shown what took a couple hours to figure out. There are quite a few tutorials out there – borderline too many, and they all seem to be a bit different. To quickly get acquainted with the PIC programming environment, I followed the “Hello World” Program in C tutorial on PIC18F.com. Unfortunately, it didn’t work as posted, likely because their example code was based on a PIC 18F4550 and mine is an 18F2450, but I still don’t understand why such a small difference caused such a big problem. The problem was in their use of LATDbits and TRISDbits (which I tried to replace with LATBbits and TRISBbits). I got around it by manually addressing TRISB and LATB. Anyway, this is what I came up with:
#include <p18f2450.h> // load pin names #include <delays.h> // load delay library #pragma config WDT = OFF // disable watchdog timer #pragma config FOSC = INTOSCIO_EC // use internal clock void main() // this is the main program { TRISB=0B00000000; // set all pins on port B as output while(1) // execute the following code block forever { LATB = 0b11111111; // turn all port B pins ON Delay10KTCYx(1); // pause 1 second LATB = 0b00000000; // turn all port B pins OFF Delay10KTCYx(1); // pause 1 second } }
A couple notes about the code: the WDT=OFF disables the watchdog timer, which if left unchecked would continuously reboot the microcontroller. The FOSC=INTOSCIO_EC section tells the microcontroller to use its internal oscillator, allowing it to execute code without necessitating an external crystal or other clock source. As to what TRIS and LAT do, I’ll refer you to basic I/O operations with PIC.
Here is what the MPLAB IDE looked like after I successfully loaded the code onto the microcontroller. At this time, the LED began blinking about once per second. I guess that about wraps it up! This afternoon I pulled a PIC out of my junk box and, having never programmed a PIC before, successfully loaded the software, got my programmer up and running, and have a little functioning circuit. I know it isn’t that big of a deal, but it’s a step in the right direction, and I’m glad I’ve taken it. | http://www.swharden.com/wp/2012/06/ | CC-MAIN-2017-17 | refinedweb | 982 | 54.15 |
C++ Tutorial
C++ Flow Control
Switch case statement in C++
Switch statement is also used to control the flow of program and most of the cases it can be replaced by if else statement. Similarly, many if else statement can be replaced by the switch statement.
Let’s see the syntax of switch statement first.
// syntax of switch statement switch(expression){ case a: // this code will be executed when the expression is a break; case b: // this code will be executed when the expression is b break; case c: // this code will be executed when the expression is c break; ......................... ......................... default: // default code to execute when // the expression does not match with any case }
When the expression which we have passed through the switch statement will match with the first case, then the code bellow it will be executed. We have used break statement because if we don’t use a break statement the other case bellow the matched case will also be executed which is not expected. We will learn more about break statement later.
The code under default case will be executed when the expression does not match with any case.
Let’s see an executable program using C++ switch statement. In this bellow program we will make a simple calculator using switch statement.
C++ program using switch statement
// making a calculator using switch statement in C++ #include <iostream> using namespace std; int main(){ int selectedOperation; float firstNum, secondNum; cout << "Select which operation you want to do.\n"; cout << "1. Additon\n"; cout << "2. Subtraction\n"; cout << "3. Multiplication\n"; cout << "4. Division\n"; cout << "\nSelect operation : "; cin >> selectedOperation; cout << "Enter first number here : "; cin >> firstNum; cout << "Enter second number here : "; cin >> secondNum; switch (selectedOperation){ case 1: cout << firstNum << " + " << secondNum << " = " << firstNum + secondNum << endl; break; case 2: cout << firstNum << " - " << secondNum << " = " << firstNum - secondNum << endl; break; case 3: cout << firstNum << " * " << secondNum << " = " << firstNum * secondNum << endl; break; case 4: cout << firstNum << " / " << secondNum << " = " << firstNum / secondNum << endl; break; default: cout << "Error! operator not applicable\n"; break; } return 0; }
In this above program we will take the input from user that which operation he want to do.
After that we will take two number and perform that operation. Let’s see the output of above program.
Output of switch case program:
Assume a user has selected first operation. Then the output will be as bellow;
Similarly, if the user select the third operation then the output will be as follows;
The default keyword will work if the user gives other number without 1, 2, 3 or 4.
Select which operation you want to do. 1. Additon 2. Subtraction 3. Multiplication 4. Division Select operation : 5 Enter first number here : 34 Enter second number here : 21 Error! operator not applicable
Remember when working with switch statement
- We can use any valid expression as the case of switch statement.
float a; switch(a){ // error! switch quantity no an integer case 2.5: }
- Using float or double number as expression will through an error.
int x; switch(x){ case 3 + 5: // valid. 3 + 5 = 8 }
- Using character variable as expression is valid in C++ switch statement.
char c; switch(c){ case 'p': // valid // executable code case p: // invalid. character should be inside the single quote // executable code } | https://worldtechjournal.com/cpp-tutorial/switch-statement-in-cpp/ | CC-MAIN-2022-40 | refinedweb | 538 | 62.78 |
May 17, 2020 05:41 PM|dean4if|LINK
As part of my work on an ASP.NET Core 3.1 Razor Pages web application, I created several custom Tag Helpers. Once I had all of these working the way I wanted and expected (as part of the ASP.NET Core 3.1 application), I moved the Tag Helpers to a Razor Class Library (.NET Standard 2.1), so I could use the custom Tag Helpers in other applications.
That is where I ran into a problem with a Tag Helper to render a Partial Page using the PartialTagHelper class:
TypeLoadException: Could not load type 'Microsoft.AspNetCore.Mvc.ViewFeatures.Internal.IViewBufferScope' from assembly 'Microsoft.AspNetCore.Mvc.ViewFeatures, Version=3.1.3.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'.
The constructor for the PartialTagHelper class requires the IViewBufferScope parameter noted in this error and is passed into the custom Tag Helper code via Dependency Injection.
In the ASP.NET Core 3.1 Razor Page, the custom Tag Helper code requires a 'using' reference to the Microsoft.AspNetCore.Mvc.ViewFeatures.Buffers namespace.
In the Razor Class Library, the custom Tag Helper code requires a 'using' reference to the Microsoft.AspNetCore.Mvc.ViewFeatures.Internal namespace.
I also tried using .NET Standard 2.0 and 2.1 as well as .NET Core 3.1 Class Libraries. In all of these situations, the Class Library required references to Microsoft.AspNetCore.Mvc.ViewFeatures version 2.2.0 and Microsoft.AspNetCore.Razor version 2.2.0 in order to compile.
So, the error sounds like ASP.NET Core 3.1 Razor Page is injecting the 3.1.3 Microsoft.AspNetCore.Mvc.ViewFeatures assembly and this does not contain the IViewBufferScope parameter from the correct assembly.
Is there a way to resolve this?
Thanks
All-Star
56774 Points
May 18, 2020 03:21 PM|bruce (sqlwork.com)|LINK
while you could add the nuget abstraction projects, the easiest fix is to update the target framework in the project file.
<PropertyGroup> <TargetFramework>netcoreapp3.0</TargetFramework> </PropertyGroup>
May 21, 2020 07:21 AM|dean4if|LINK
Thanks for your response Bruce. I had tried several ways of setting my Class Library up, including manually editing the csproj file, updating the version in the Property pages and creating a new project from scratch.
In each of these cases, I had to add a references to Microsoft.AspNetCore.Mvc.TagHelpers, Microsoft.AspNetCore.Mvc.ViewFeatures and Microsoft.AspNetCore.Razor ... All of these were Version="2.2.0"
These all failed with the same problem reported in my original post.
May 21, 2020 07:44 AM|dean4if|LINK
A little over a week ago (on 5/20/2020), I found out how to create a package that would work for the ASP.NET Core 3.1 Razor pages, but had a different package that would work for lower version levels (for example, a .NET Framework 4.8 Razor Pages application). Let me first explain how I initially got things to work and then I will explain how I now have a single package that I can use for either the .NET 4.8 Razor Pages or the .NET Core 3.1 Razor Pages.
Since I was originally able to get the TagHelper for Partial Pages to work as part of my ASP.NET Core 3.1 Razor Pages application, I compared the csproj file from that web application to my ASP.NET Core 3.1 Class Library csproj file.
That is when I noticed the ASP.NET Core 3.1 Razor Pages csproj file started with:
<Project Sdk="Microsoft.NET.Sdk.Web>
But the various Class Library projects did not end with ".Web".
So, I updated the ASP.NET Core Class Library csproj file to the same Project Sdk value as the ASP.NET Core Razor Pages application.
When I saved the csproj file, it automatically updated the Output Type (on Project Properties Application tab) to Console Application. I changed this to Class Library and I am now able to use the Partial Page Tag Helper without any errors, so my problem is partially solved.
Here is my csproj file for the Class Library:
<Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>netcoreapp3.1</TargetFramework> <GeneratePackageOnBuild>true</GeneratePackageOnBuild> <IsPackable>true</IsPackable> <Version>1.0.5</Version> <OutputType>Library</OutputType> </PropertyGroup> </Project>
This reason I say partially solved my problem is that further changes are required to have a single package for the ASP.NET Core 3.1 Razor Pages as well as lower version Razor Pages web applications like .NET Framework 4.8 Razor Pages or even .NET Core 2.2 Razor Pages. A couple things are required.
First, replace the TargetFramework entry in the PropertyGroup to TargetFrameworks and add netstandard2.0:
<TargetFrameworks>netcoreapp3.1;netstandard2.0</TargetFrameworks>
Next, add a conditional ItemGroup for the PackageReferences needed by the .NET Standard 2.0 TFM (Target Framework):
<ItemGroup Condition="'$(TargetFramework)' == 'netstandard2.0'"> <PackageReference Include="Microsoft.AspNetCore.Mvc.TagHelpers" Version="2.2.0" /> <PackageReference Include="Microsoft.AspNetCore.Mvc.ViewFeatures" Version="2.2.0" /> <PackageReference Include="Microsoft.AspNetCore.Razor" Version="2.2.0" /> </ItemGroup>
Finally, all of the Class Library code is exactly the same to support ASP.NET Core 3.1 and .NET Standard 2.0, except one using statement in the custom Tag Helper for Partial pages, so add a conditional compile check as follows:
#if NETCOREAPP3_1 using Microsoft.AspNetCore.Mvc.ViewFeatures.Buffers; #else using Microsoft.AspNetCore.Mvc.ViewFeatures.Internal; #endif
3 replies
Last post May 21, 2020 07:44 AM by dean4if | https://forums.asp.net/t/2167032.aspx?Class+Library+Tag+Helper+for+Partial+Pages+not+working+in+ASP+NET+Core+Razor+Pages+application | CC-MAIN-2020-40 | refinedweb | 914 | 54.18 |
- Getting Started
-
-
-
Have an account?
Need an account?Create an account
Service Link is the Service Catalog module that provides integration with external systems. It supplies a framework for configuring interfaces that allow delivery tasks, authorizations or reviews defined within a Service Catalog workflow to be performed by other systems and a user interface for monitoring the operation of these interfaces.
The most common scenario for the use of Service Link is where data associated with a delivery plan task needs to be passed outside of Service Catalog in order to ensure that the service is delivered satisfactorily. For example, Service Link might invoke Cisco Process Orchestrator to fulfill a service request, a message might be passed to a hardware vendor for a procurement action or to an inventory or asset management system for a data record update. The external application may then send one or more messages back to Service Catalog. Each message, in turn, could update Service Catalog with the current status of the task within the external system, eventually indicating that the task has been completed and that the Service Catalog workflow (delivery plan) can continue with subsequent tasks.
Service Link provides a number of built-in adapters to facilitate communication with external applications using different transport mechanisms including the interchange of files; database updates; web communication via http post requests or web services; and queue-based messaging. In addition to these default adapters, developers may use the Service Link Adapter Development Kit (ADK) to develop and deploy custom adapters.
Developing Service Link integrations requires a range of technical skills. These include:
Service Catalog offers two approaches to designing integrations.
Once an integration has been added, it may be viewed and maintained through the advanced configuration capabilities available through Service Link. Advanced users may create even web services integrations using this functionality, bypassing the wizards if desired.
Administrators also use Service Link to administer and troubleshoot integrations in a production environment.
An integration consists of the following components:
An adapter is a logical representation of a transport component by which Service Catalog sends XML documents or other messages to third-party systems. Prepackaged adapters support different message transport protocols; including file, http/web service, JMS, IBM MQ, and database.
Adapters are composed of two components:
Inbound adapters manage messages coming from an external system. The external system message may be altered into a “standard” nsXML (formerly known as newScale XML) format through the use of transformations so that the data can be interpreted by Service Catalog.
There are two types of inbound adapters: pollers and listeners. A poller is a thread that periodically wakes up and looks for incoming messages, while a listener waits and is awakened by an incoming external message. An example of a poller is the inbound file adapter, which needs to periodically check for messages. An example of a listener adapter is the Web Services Listener Adapter which waits until an HTTP response is received.
Outbound adapters manage the XML messages coming out of Service Catalog and send them to the configured external system. A “standard” nsXML outbound message comes to Service Link which may then alter the message through the use of transformations, so that it meets the expected format of messages directed to the external system. The outbound adapters then apply the correct protocol and logic to send the messages to the external system.
An agent is a logical representation of a transport mechanism by which Service Catalog communicates to/from a third-party system. Agents may be used by service designers to direct tasks to their proper third-party destination. In addition to tasks, authorizations and reviews can be externalized by specifying an agent to direct this action to an external system.
An agent is composed of an inbound and outbound adapter, optional message transformation (XSLT) components, optional parameters, and other settings to address error conditions.
XML stylesheet (XSL) transformations transform outgoing messages into a format understood by a third-party system, and transform incoming messages into a format understood by Service Catalog.
An agent which includes an outbound adapter automatically s an nsXML message, containing information relevant to the current requisition and task. A transformation associated with the agent may then transform that message into an external message, which are delivered to the external system via the outbound adapter configured for the agent. Similarly, an inbound agent receives an external message via the associated adapter. A transformation must then transform the message into an incoming message type that is recognized and processed by the Business Engine, the Workflow Manager for Service Catalog.
A dictionary is the service design component that holds fields of data required to fulfill a specific service request. Agent parameters mapped to dictionary fields (or other data available in the service request) provide a standard outbound message format easily understood by external systems. Agent parameters in an inbound message received from an external system instruct Service Link to update the value of the dictionary fields mapped to those parameters. The changed form data is immediately available in the service form. The active form component in which the dictionary is included must, in turn, be included in the service that implements the Service Link integration.
The Integration Wizard automatically s an agent and transformation, as well as an integration dictionary and active form component to complete the agent configuration. Once these components have been created, they are maintained through Service Link and Service Designer.
The key to understanding Service Link is to understand its interaction with the Business Engine. The Business Engine is the component that is responsible for all workflow. Workflow actions include:
In a task plan that doesn't use Service Link (that is, where all tasks are internal to Service Catalog), the operation of the Business Engine is largely invisible. The use of the Business Engine becomes apparent in Service Link, because Service Link must handle or generate messages that the Business Engine understands in order for the status of external tasks to be changed.
When the Business Engine starts an external task (that is, a task which is to be handled by Service Link), it generates an outbound nsXML message. The Service Link agent that is handling the outbound task is then responsible for transforming that nsXML message into a format that can be understood by target system and delivering that message to the target system via the outbound transport mechanism (adapter) specified in the agent.
Similarly, if Service Link is configured to receive an inbound message from an external system, it must transform that message into an inbound nsXML message that can be understood by the Business Engine. Inbound messages are available to update the service form data for the current request; to complete the current task; or to add user comments to the current request.
Valid nsXML messages are discussed in more detail in the following section.
Cisco Prime Service Catalog offers two approaches for designing, developing and deploying Service Link integrations with third-party systems:
Service Link and Service Designer configuration uses the following methodology to design, develop and deploy integrations with third-party systems:
This process is discussed in more detail in the following sections.
The Integration Wizard is described in the Using the Integration Wizard in Service Designer.
To access Service Link, choose Service Link from the Module Menu. The Service Link home page appears.
The Service Link home page contains the elements shown in Service Link Home Page Elements below.
In addition to the configuration and testing work that are executed in Service Link, equivalent work must be undertaken by the technical resources responsible for the third-party system. A well considered design is essential if the interface is to operate robustly.
The interfacing capabilities of the system to be communicated to will normally dictate the basic design of the integration, that is, which adapters will be used.
File and database adapters are the simplest to configure. If JMS, MQ or http/web service adapters are deployed, some expertise from network management teams may be involved to ensure that connectivity issues do not prevent the data from moving from one system to the other. Data security concerns are likely to be a factor if the target system exists outside your company's network—for example, to use a SOAP message sent via http or https to communicate with an outside vendor.
Normally the data required for the outbound Service Link communication would be assessed first. Consideration needs to be given to what fields are readily available (via the nsXML outbound message) and what additional data needs to be provided (via form data and agent parameters). While outbound communications only occur once per task (when the task starts), multiple separate inbound communications can be supported. On receiving its instruction to perform work, a third-party system may issue a single (inbound) communication on completion. Alternatively, multiple updates could be sent before the work is complete. Examples include where a reference ID may be communicated, textual status updates have to be sent back and finally the completion confirmation is communicated.Managing Service Link Adapters
This section describes about managing integrations.
Service Link adapters are preconfigured for use. Additional adapters may be developed using the Adapter Development Kit. You may review the available adapters using the Adapters page of the Manage Integrations tab in Service Link.
Step 1
From the Service Link home page, click Manage Integrations. Then click the Adapters sub tab.
The Adapters page appears.
Step 2
In the Name column, click the desired adapter.
The Manage Adapter page appears for the chosen adapter. The details for the Database Adapter are shown below.
Figure 5-1 Manage Adapter Page
Most of these general properties should typically not be changed by Service Link developers. The “Polling Interval”, “Retry interval” and “Maximum Attempts” may need to be changed as per requirement. Any changes are inherited by all agents that use this adapter type.
Additional outbound and inbound properties are specified when the adapter is used in an agent. These properties are described in the Managing Service Link Adapters.
The Agent Wizard walks you through configuring an agent. The wizard consists of eight pages; some pages may be skipped, depending on options chosen on the previous pages.
The pages of the Agent Wizard are summarized in Agent Wizard Pages Table below.
Step 1
From the Common Tasks area of the Service Link home page, click Agent, or choose Manage Integrations > Agents > Agent.
The General Information page of the Agent wizard appears. This is the first of eight pages that comprise the wizard. Some pages might not be relevant for a particular agent configuration, and can be skipped.
Figure 5-2 General Information Page
Provide values for the fields described in Creating Agents – General Attributes Table below, then click Next.
The Failed Email notification can be generated in case an outbound message cannot be delivered. In addition to the standard sets of namespaces available for all task-related Emails, Service Link Failed Emails can include details about the message being generated at the time of the failure. Including these namespaces in the Email template may help in diagnosing the problem. Details on available namespaces are given in the Cisco Prime Service Catalog Designer Guide.
Failed email is not applicable to inbound messages; a notification is not generated when Service Link fails to process an inbound message.
The outbound adapter generates an nsXML message that is stored in the Service Catalog database. This message is then subject to a transformation, and the resultant external message is delivered via the specified adapter to the desired destination.
The format of the message is documented in Designing Integration with Adapter Development Kit and by the corresponding XML schema available on the application server at ISEE.war/WEB-INF/classes/nsxml.xsd. The complete message includes all information about the service request. Such messages can get quite large (easily exceeding 500 K, depending on the number of dictionaries and fields used in the service) and consequently consume large amounts of storage within the database, as well as consuming significant amounts of CPU to produce.
To reduce resource consumption, Service Catalog offers the following options.
The default message content is “Data and parameters; no Service Details (small),” which generates a nsXML message that does not include content nodes describing the service requested. Agent parameters and transformation must be designed with the outgoing content type in mind, to ensure that all required content is included in the nsXML message. Specifically, if eliminating the dictionary data from the outbound message, agent parameters must be mapped to appropriate form fields (or constants). In cases where a service has many form fields that are not needed for an external task, the XML size reduction and associated CPU utilization reduction are substantial.
Outgoing content options are summarized in Outgoing Message Content – Options Table below.
An agent may be configured to manage both outbound and inbound communications; just outbound communications; or just inbound data. It is possible to use different adapter types for each direction, for example, a database adapter could be used to write data outbound but that system would then respond by writing files into a directory that would be read by an inbound file adapter.
Once an adapter type is chosen, subsequent pages of the wizard are adjusted to display properties relevant to the adapter type and usage (outbound or inbound). Property values must be supplied as part of the agent definition.
The End Points page of the Agent wizard (page 2 of 8) allows you to designate the adapters to be used in the agent as well as any transformations to be applied to the outgoing or incoming message. Transformations must have previously been defined using the Transformations subtab of the Manage Integrations option.
Figure 5-3 End Points page of the Agent wizard
Properties applicable to each type of adapter are described later in this chapter. In fact, the next two pages of the wizard, dedicated to configuring the outbound and inbound adapters, will vary, depending on which adapter has been chosen. However it is worth discussing two special cases: Dummy adapters and the auto complete option.
Dummy adapters can be configured within an agent as either the outbound or inbound adapter. If a dummy adapter is chosen, it means that the agent is only operating in a single direction. For example, an agent configured with a dummy inbound adapter means that the agent is only responsible for outbound communications. In turn, there could be a separate (inbound only) agent configured that would be dedicated to updating and closing tasks.
The auto complete option is available only as the choice for the inbound adapter part of the agent. Its effect is similar to choosing “Dummy adapter”, that is, the agent will only be managing outbound communications. The key difference is that after the outbound communication associated with the task has been sent, the task will automatically be completed and the rest of the delivery plan will continue to be executed.
Agent parameters may be used in conjunction with both outbound and inbound adapters.
Parameter mappings specified as part of the agent definition provide default value to be used. These mappings can be overridden on a service-by-service basis by editing the task definition for the service in Service Designer.
Agent parameters used in an outbound message provide a way to supplement the content nodes in the standard nsXML outbound message with additional data and to organize content nodes in an easy-to-address format. The parameters are easily accessible via the XSL transformation, to allow their inclusion in the external message.
Figure 5-4 Outbound Request Parameters
A parameter mapping is assigned by typing the source elements in the Service Data Mapping area, or by building an expression by using the elements available in the drop-down lists to the left of the Parameter Mappings pane. A mapping may consist of a combination of:
Mapping an agent parameter has the following advantages:
To an outbound parameter:
Step 1
On the Outbound Request Parameters page, click Add Mapping.
The Edit Parameter Values dialog box appears at the bottom of the page.
Step 2
Enter a name for the parameter.
Parameter names can include spaces, but should not include special characters (such as “>” or “&”), which have significance in XML messages.
Step 3
Specify the value/mapping for the parameter, using the guidelines given below.
Sometimes a constant value that is not dependent on the requisition or service details must be passed to the external system. For example, if the system needs a name for the source of the external system, “Service Catalog” can simply be typed as the Service Data Mapping (without the quotation marks).
Selected elements of the standard nsXML outbound message are available to be mapped to agent parameters. These are summarized in the table below.
To map an agent parameter to a dictionary field:
Step 1
Expand the Dictionaries node so that the “ Select a dictionary” option appears.
Figure 5-5 Mapping Agent Parameter to Dictionary Field
Step 2
Click the Select a dictionary drop-down menu to display a list of all Service Catalog dictionaries.
Figure 5-6 Select a dictionary drop-down
Step 3
Choose the dictionary containing the field to be mapped to the agent parameter. A list of all fields in the dictionary appears.
Figure 5-7 Fields in Dictionary
Step 4
Click the field to be mapped to the agent parameter and drag it to the Service Data Mapping text area. When the drag icon changes to a green check mark, release the mouse. A lightweight namespace for the selected field appears.
Figure 5-8 Lightweight Namespace
Since the agent is defined independent of a service, with the exception of grid dictionaries, any dictionary field can be chosen. It is the responsibility of the service designer to ensure that the referenced dictionary is, in fact, included in the service in which this agent is used.
For any integration that passes the contents of one or more grid dictionaries, you will need a transformation to handle the rows in each grid dictionary, which are stored as multiple dictionary instances. They follow the naming convention of “DictionaryName-n”, where n is the grid row number, in the <data-values> section of the outbound nsXML. For example, if a grid dictionary named VMOperation has two rows of data in the service request, the values are represented as below:
There is no support for inbound agent parameter mapping and update to grid dictionary fields.
Prebuilt functions can be applied to the mapped elements, so that the parameter value fits the semantics or formatting requirements expected in the target system. For example, a field may be shortened, by applying a substring function, if the data definition for the field in the target system accommodates fewer characters than are maintained in Service Catalog. Prebuilt functions are summarized below and explained in more detail in the Prebuilt Functions.
To apply a prebuilt function to an agent parameter:
Step 1
Expand the Prebuilt Functions node so that the function names appear.
Step 2
Highlight the function you want to use—notice that Help appears, explaining function usage, at the bottom of the pane.
Figure 5-9 Prebuilt Functions
Step 3
Drag the function into the Service Data Mapping box for the parameter. When the drag icon changes to a green check mark, release the mouse.
Figure 5-10 Green Check Mark
Step 4
The function is defined, with $Parameter$ as a placeholder for the actual value.
Figure 5-11 Defined Function
Step 5
Replace the placeholder with the dictionary field or element of the nsXML message to be used. You need to drag the field or nsXML element to the Service Data Mapping text box, then manually edit the parameter definition.
Figure 5-12 Edit Parameter Values
Agent parameters are added to the end of the outbound nsXML message. For example, agent parameters shown below generate the nsXML snippet immediately following.
Outbound response parameters may be used in conjunction with an outbound http/web service adapter. If the adapter's Process Response setting is true, the response to the original request is processed. That response may include a “Send Parameters” message type. Parameters are defined as for Inbound Agent Parameters.
When used in conjunction with an inbound “Send Parameters” message type, agent parameters allow the external task to update the dictionary field to which the parameter is mapped.
To manage a transformation:
Step 1
On the Manage Integrations tab, click the Transformations subtab.
Step 2
Click Transformation.
The Transformation page appears.
A transformation may be applied to either an outbound and inbound message, by designating the Direction. Two transformations may need to be used for web services outbound messages—one is applied to the outbound request and the second may be applied to a response to that request, if the Process Response setting is turned on.
The Validate button parses the XSLT to ensure that the transformation is well-formed. If it is not (for example, if an XML tag is misspelled or missing), a diagnostic message appears. You need to fix the error before saving the transformation.
However, Service Link does not validate the transformation. For example, no error is detected if a transformation refers to an element that does not exist in the source message; a well-formed XML message would be produced, but it would not be valid for the target system. Therefore, a runtime error would be produced if Service Link produced an external message that was not recognizable by the target application or an inbound message that was not recognizable to the Business Engine.
If you have access to an XML development environment and are familiar with its usage, it may be efficient to use that environment to test the transformation. For an outbound transformation, simply copy the nsXML produced by the agent (before you apply a transformation) into the XML development environment and use this as the source XML. Once you have validated the transformation, copy and paste the XSLT code into a Service Link transformation and associate it with the appropriate agent.
Alternatively, use the "Test" function on this page to quickly preview the output XML from the transformation, especially if you are making a minor change to an existing transformation. Upon clicking the Test button, a popup window is shown where you can paste the nsXML into the source XML panel and exercise the transformation to get the output XML.
The process of manually developing and debugging a transformation is eliminated for outbound web service integrations that are developed using the Integration Wizard. A transformation is automatically created that will transform outbound nsXML into a format compatible with the specified WSDL for the web service. However, if the wsdl or integration requirements change, you will need to follow the steps outlined above to update the transformation.
Step 3
Provide values for the fields described in Transformation Settings Table below.
Step 4
Click Validate to check that the transformation contains well-formed XSL.
You can review or revise any agent definition, whether the agent was created in Service Link or via the Integration Wizard. Once the Agents subtab of the Manage Integrations tab appears, you can either:
The Agent entry in the list pane is expanded. Click the property sheet for that portion of the agent definition to be edited or reviewed. Once the property sheet appears, enter any changes and click Save to save them.
Figure 5-14 Agent Entries
Clicking on the agent name provides an overview of the agent definition:
Figure 5-15 Agent Information
This page is read-only except for the button to Reset Agent Parameters.
The procedure below shows the typical sequence of tasks required to deploy a Service Link integration using a file adapter. It can also be used to validate a Service Link installation.
Step 1
From the Common Tasks area of the Service Link home page, select an agent that uses an outbound file adapter by clicking the Agent wizard, filling in the location fields and supplying other outbound adapter properties. (Details on these properties are explained in the File Adapter.)
Step 2
Start the agent by navigating to the Control Agents tab, locating the agent, choosing it by clicking the mouse anywhere on that line except on the agent name (which is a link to the agent definition) and clicking Start Selected at the upper right of the page. Did it start? If not, one of your Service Link configuration settings is wrong or the Integration Server (ISEE) did not start correctly.
Step 3
Verify that the file directories you entered exist on the application server; if not, them. Assure that both Service Catalog and the external application have appropriate access (write or read) to the directories. If these conditions are not met, file transmission will fail at runtime.
Step 4
Go to Service Designer and a service to use this agent.
Step 5
Go to My Services and order the service.
Step 6
If the requisition is created successfully, congratulations! the ISEE outbound queue is working. If you get an “our apologies” page, the JMS queues are not working.
Step 7
Go to the Messages page, accessible from the View Transactions tab. If you see messages from the requisition you just created, congratulations. Your message should have status of “Message sent”.
Step 8
Go to the outbound files directory (for example, C:\cisco\SL\OutboundFiles). If there is an XML file there (verify the date time stamp of the XML file to make sure that it is a new one corresponding to your requisition), your outbound trip for the file agent is completed. Congratulations! The outbound XML file would be a valid nsXML message.
Step 9
For your requisition in the Message Type column, click the Execute Task link. The Message Details page appears.
Step 10
Verify that the Requisition ID is correct. Copy the “Channel ID” from the message details screen.
Step 11
an XML file named SampleInbound.xml as follows. Where it says “insert your Channel ID here”, paste the value of the Channel ID that you copied in the last step. (Leave the double-quotes intact).
For example, after pasting the Channel ID value, the SampleInbound.xml file would look something similar to the following:
Step 12
Put the SampleInbound.xml file in the inbound files directory (for example, C:\cisco\SL\InboundFiles).
Step 13
When the File Agent polls for input, it will automatically pick up the inbound file. (The default setting for the File Adapter's Polling Interval Time is every 10 seconds.) If the SampleInbound.xml file is processed successfully, it will disappear from the directory.
Step 14
Go to the Messages page of the View Transactions tab and look for your requisition. If there is another message for your requisition with the Type=Take Action, and the Status=Inbound Message Completed, then you have achieved a roundtrip.
Step 15
Click the Requisition ID link to open the Requisition Status page. Verify that your requisition has the status of “Closed (1 of 1 completed)”.
Once the agent has been defined, it can be used in a service by creating an external task whose workflow invokes the agent. Once the workflow has been configured, you may review or override any agent parameters defined for the included agent.
To direct a task to an external (third-party) application using a Service Link agent:
Step 1
Start Service Designer. Select the service that is to include the external task. Click the Plan tab.
Step 2
You can use either the Tasks subtab or the Graphical Designer subtab to specify the external task.
Step 3
Using the Workflow Type drop-down list on the General tab, select the desired action from the drop-down list. Note that it is the defined actions in the Service Link module and not the agent names that are listed in this drop-down.
Step 4
Save the workflow/task plan. If you were using the Workflow Designer, return to the Tasks subtab.
Step 5
Once you have saved the external task, an ellipsis button appears next to the Workflow Type. Clicking the ellipsis allows you to review the parameter mappings currently in effect for the agent (if any) or to change these mappings for this specific service. Click the ellipsis button.
Figure 5-16 Ellipsis Button Next to Workflow Type
Step 6
The Agent Parameter Override popup window appears. Review the agent parameters or change the mapping for one or more parameters. Be sure to click Apply as you change each parameter and Save before closing the window.
Figure 5-17 Agent Parameter Override
Step 7
You may wish to define a performer (person, queue, or functional position) on the Participants tab. The calendar of that performing entity will then be used to calculate the Due Date of the task. If you do not set a value on the Participants tab for external tasks, the calendar of the Default Service Queue is used to calculate the Due Date. In this fashion, due dates are set in the plan and Service Catalog can calculate delivery Operating Level Agreement (OLA) compliance for external tasks and compliance with the Service Level Agreement (SLA) for services containing such tasks.
When you and save a task that uses an agent, the agent parameter mappings specified for the agent are automatically inherited by that individual task. As described above, a service designer may override any of the agent mappings at the task level.
However, if the agent is subsequently modified, to include a different set of agent parameter mappings, such changes are not automatically inherited by tasks that were previously defined to use the agent. Such changes may include:
Propagating these changes to services that use the agent can be automated, by following the procedure below:
Step 1
From the Service Link Manage Integrations tab, click the Agents subtab.
Step 2
Click the name of the agent whose parameter mappings have been changed.
The Agent Information page appears.
Figure 5-18 Agent Information Page
Step 3
Choose the service or services whose agent parameter mappings need to be resynchronized with the updated agent definition.
Step 4
Click Reset Selected Tasks. This button automatically resets all parameter mappings to their agent defaults, so any task-specific mappings would need to be reapplied.
The transformation must not only contain well-formed XML, it must produce a well-formed and valid nsXML message. All nsXML messages must conform to the nsXML schema (an XML document that describes the structure of an XML document). The schema is available on the application server at ISEE.war/WEB-INF/classes/xsl/nsxml.xsd.
When an external task moves to a status of Ongoing, an outbound nsXML message is generated.
The generated nsXML for each message can be viewed in the Service Link module, by clicking on the nsXML message in the Message Details page. It contains task related data as well as data associated with the parent requisition.
The most important element within the nsXML is the channelId, an ID that uniquely identifies the external task. This ID is provided to the third-party system and needs to appear in their response if the corresponding data update is to be successfully applied by the business engine.
The channelId is formatted as a Globally Unique Identifier (GUID). GUIDs are most commonly written in text as a sequence of hexadecimal digits such as:
This text notation consists of 5 sets of data, each separated by a hyphen. The GUID consists of 32 characters plus 4 hyphens, for a total length of 36 characters.
There are two outbound message types.
The task-started message type is generated when an external task is started. A detailed description of the elements of the task-started message is available in Designing Integration with Adapter Development Kit .
The task-canceled message type is generated when the request which includes an external task is cancelled. If the user is not allowed to cancel a request once the external task has been performed (via the corresponding setting in the service's delivery plan), this message would never be generated. If, however, canceling the request is allowed, the transformation used in the agent responsible for the external task must be “smart enough” to handle both a task-started and task-canceled message. The transformation would need to test for the task-canceled message type and to send an appropriate message to the external system:
Two types of operations are supported for inbound messages from the third-party system—requisition operations and service item operations. Requisition operations are used for the update of request data and task status. Service item operations are used for the addition, modification, deletion, and retrieval of service items associated with the request. Certain operations may be combined in one inbound message, known as a “Composite Message”. The details and restrictions for each operation are described in the following sections.
The third-party system may send one or multiple inbound messages for an external task by referencing the channelId of the corresponding outbound message. The external task is completed when one of the take-action operations is sent and this allows the next task in the authorization/delivery plan to proceed.
A take-action message may be applied to an authorization or delivery task, to change the status of the task. The action attribute of the take-action tag identifies the action to be taken. Valid actions are summarized in Take-action Messages Table below.
When the last delivery task in a task plan is marked as done, the requisition is closed (completed). An approval task can be marked as Approved or Rejected, by setting the “action” attribute of the take-action tag to the corresponding value.
Parameters are data elements that are bound to dictionary fields within the agent definition. The send-parameters message type allows one or more specified parameters to be updated which, in turn, updates the corresponding dictionary fields in the service. Using this type of inbound message is the preferred way for the external system to update dictionary fields used in a service request.
An add-comments message is used to add comments to the System Comments section of the requisition.
Service items are entities defined in Service Item Managerr. Their lifecycles are associated with service requests—from the point the service item instances are provisioned, to the point when they are decommissioned. In the cases when the service item lifecycle events are handled by external systems, service item data can be synchronized with Lifecycle Center via Service Link service item, update, delete, and get messages. These message types are supported only through the web service-based Service Item Listener Adapter (see the Service Item Listener Adapter).
One or more service item types and service item instances can be included in these messages. Multiple service item operations cannot be combined in a message. In other words,, update, or delete operations have to be sent in separate inbound messages. Whenever an error condition is encountered, all the service item operations in the same message are rolled back.
Service item attributes of datetime type must be specified in the format YYYY-MM-DD HH:MI:SS or YYYY-MM-DD. All times are stored as UTC time.
Service item subscriptions can be included optionally at the time a service item instance is created. If no subscription information is provided in the operation, the item is assigned to the customer of the requisition and that person's Home Organizational Unit. If values for either the customer login ID or Organizational Unit name are specified in the subscription section of the message, those values are used to override the default service item assignment. For more details about subscription processing rules, see the Service Designer chapter in the Cisco Prime Service Catalog Designer Guid e.
In update messages, omitting a service item attribute results in no change to the attribute value. When an attribute is explicitly specified in the message but contains no value, the value of the attribute for the service item is set to blank for text fields and zero for numeric fields.
Delete service item requests require only the names for the service item type and instance. Additional service item attribute and subscription information is ignored.
The getRequest operation is used for retrieving service item instances. The channelId and topic-id attributes are optional, unlike the /update/delete service item requests. Each inbound message may contain only one getRequest operation and within it, only one service item type. There is no logging of the request as seen in the Service Link user interface.
Service item instances are retrieved according to the search filters specified in the request XML, using the service item attributes and subscription data (that is, Customer Login ID, Organizational Unit Name, Account ID, and Agreement). Up to five filters may be used in a getRequest and they are interpreted as AND joins. Search Filter Operators for getRequest Table below shows the operators that are supported in search filters.
The response data from the getRequest contains service item attribute names and values, as well as its subscription information. The maximum number of records returned in each getRequest operation is 100. The next set of records can be retrieved by specifying the ‘startRow’ and ‘count’ elements in the request. The startRow element indicates the beginning row number of the result set. The count element indicates the number of records to be returned. The ‘startRow’ and ‘count’ values are defaulted to 1 and 100, respectively, if they are absent in the request XML. The value for count is limited to 100 for performance reasons.
Here is an example of the getRequest XML:
Response for the above request:
The getDefinitionRequest operation is used for retrieving the metadata or definition of a service item type. Like the getRequest operation, the channelId and topic-id attributes are optional. Each inbound message may contain only one getRequestDefinition operation, and within it, only one service item type. There is no logging of the request as seen in the Service Link user interface.
Here is an example of the service item getDefinitionRequest:
Response for the above request:
The above message types can be combined in a single inbound message. Such a combination is known as a “composite” message. The order of execution matters; you must send the parameters or add comments before including the take-action tag, and place the service item operation tags last.
A Service Item Manager (SIM) Import message type supports importing service item and standards definitions and data from an external system into Service Catalog. Unlike the service item /update/delete operations, SIM import is based on the File Adapter protocol which polls for incoming files located in a specific directory. In addition to service item instance operations, SIM Import also supports the maintenance of service item groups and service item types. For details on Service Item Manager imports, see the Cisco Prime Service Catalog Designer Guide.
Outbound nsXML messages will typically be quite large and complex, often in excess of 500 KB. Although it is not mandatory to use transformations to alter the message format, it is unlikely that external systems would be configured to read nsXML. Consequently using transformations to alter the outbound message formats is normally unavoidable.
However as formats for inbound messages will probably be negotiated with those responsible for the third-party system, it is quite possible that a specification could be agreed that aligns closely to the nsXML message formats. If this is the case, the Inbound transformation could be much simpler than the corresponding outbound one.
Although we refer to XSL Transformations (XSLT), the technology used is actually called eXtensible Stylesheet Language and also includes XPATH. XPATH is a language for finding information and navigating through elements and attributes in an XML document. XPATH includes built-in functions for string values, numeric values, date and time comparison, sequence manipulation, Boolean values and other methods.
There are multiple ways to monitor Service Link usage:
All Service Link monitoring/administration pages are displayed using configurable “data tables”. The appearance of these tables (the columns displayed, the width of each column and the order in which data is presented) can be customized. In addition, Filter and Search capabilities allow administrators to view only those rows which are of interest.
Rate limiting easily throttle REST calls, and prevents distributed denial-of-service (DDoS) security threats from malicious users. Application level rate limiting also effectively balances load. See Configuring Rate Limits for REST API Requests.
The Recent Failed Messages pane of the Service Link home page displays Service Link messages that could not be delivered to their destination within the past 30 days. By default, messages are displayed in reverse chronological order based on the date and time when they were sent.
Figure 5-19 Failed Messages
Click one of the column links in the Failed Messages grid to view associated information:
The Messages page, available from the View Transactions tab, allows you to view all messages, both inbound and outbound, regardless of their status; to explicitly filter the messages that appear on the page; and to search messages which fit specified search criteria.
The Messages page displays all or selected Service Link messages, depending on which filters have been set. By default, completed messages are not displayed. To display the Messages page, click the View Transactions tab from the Service Link home page. Then click the Messages subtab. The View Failed Messages link in the Common Tasks area of the Service Link home page also displays the Messages page, with a filter set to show only messages with a status of “Failed”.
The Messages page appears, as shown below.
Figure 5-20 Messages Page
The Message Details popup pages allows you to view both the Service Catalog and external messages. This page also displays the channel Id, which uniquely identifies the task in this requisition. You can use this Id when working out issues with the third-party system.
Figure 5-21 Message Details Page
Click one of the tabs on the Message Details popup page to view associated information.
You can use the search functionality to view a subset of messages, for example, all messages with a Failed status. Search allows you to specify one of the columns in the Messages window as the search target and to select or type a value to be matched.
Click Filter and Search (at the top of the Messages page).
Figure 5-22 Filter and Search
The Filter and Search dialog box also allows you to:
The Filter and Search dialog box is non modal. You can fill out the desired criteria and click Apply to view the results of the current settings. If required, simply adjust the settings and Apply again. Remember that you can also display the messages in ascending or descending order by any column, or change the columns that are displayed by using the techniques.
During Service Link development, you may generate many messages that fail to be delivered because of errors in the agent or transformation configuration. These messages should not be resent. Similarly, messages generated via a Service Item Import task should not be resent—the import file format should be adjusted, and the import task tried again.
In a production environment, however, messages may fail to be delivered because of an outage of the external system or other external factor that can be corrected. Once the cause of the delivery failure has been corrected, failed messages can be resent.
To resend failed messages:
Step 1
In the Messages page of the View Transactions tab, click the row containing the Failed message or messages.
Step 2
In the bottom left corner of the Messages page, click Resend Message.
Service Link will attempt to resend the message to its designated destination. If the resend succeeds, the message status and date are updated, and the resend date is recorded and displayed in the Resent On column.
Transformations are not reapplied while resending a message. The agent tries to send the already transformed message to its destination.
Resending of failed inbound messages for service item operations is not supported. The process attempts to retry task actions. Hence the destination for those messages is the Business Engine, not the Service Item import processor.
Step 1
From the Service Link home page, click View Transactions. Then click the External Tasks subtab.
The External Tasks page appears, as shown below.
Figure 5-23 External Tasks Page
Step 2
Click one of the following column links to view associated information.
Like the Messages display, the External Tasks page offers the ability to customize the columns and order of data displayed in the data table and to filter and search on that data.
Figure 5-24 filter and Search
A task that has been started and is expecting to receive an inbound message is in an “Ongoing” state. The incoming message will typically update the task or change its status. No subsequent tasks in the requisition's delivery plan can be performed until a message is received and the task is completed. If you suspect (or can confirm by conferring with administrators of the external system) that the expected message has already been sent, but has somehow been “lost”, you can emulate receipt of the message by sending a manual message.
Manual messages cannot be used to emulate failed service item operations.
Note
Use this feature carefully. This feature overrides all the communication protocols in the system, and using it may leave artifacts in the third-party system to which Service Link may no longer be able to respond. Also, if you use this feature to cancel a requisition, for example, Service Link will not notify the interested parties, so you will have to follow up on your own.
To send a manual message to the Business Engine:
Step 1
From the Service Link Home page, click View Transactions. Then click External Tasks.
The External Tasks page appears.
Step 2
In the bottom left corner of the External Tasks page, click the line containing the task for which you want to send a manual message.
Step 3
Click Send Manual Message.
The Send Manual Message dialog box appears, as shown below.
Figure 5-25 Send Manual Message
Step 4
Click the button corresponding to the type of message you want to send— Add Comment, Add Parameter, Update Values or Take Action.
Step 5
Respond to the associated popup dialog boxes (turn off your popup blocker) for the message type chosen. This will populate the message window with a well-formed XML message of the appropriate type. An <add-comments> message will also be included, to indicate that this message was not received through normal channels, but manually generated.
Step 6
If desired, you may edit the generated message. When you have constructed the entire message, click Send. An inbound message is sent to the Business Engine.
In the rare occasion of extended outage or incorrect configurations of the Service Link application, external tasks might not have corresponding outbound messages created in Service Link.
Once the underlying issue is resolved in Service Link and the application is up and running again, the problem external tasks can be republished to Service Link to allow the outbound messages to be created and the delivery process to resume.
To republish outbound messages:
Step 1
From the Service Link Home page, click View Transactions. Then click Message Republish.
Step 2
On the left-hand pane, enter the Requisition ID for the requests which have one or more missing outbound Service Link messages. All authorization and delivery tasks associated with the requisition are evaluated and only those tasks that require republishing are processed for outbound message creation. Up to 20 requisitions can be entered at a time.
Step 4
Review the processing status on the right-hand pane once the republish process is completed.
All Service Link adapters support nsXML as the data exchange format. For more information about the nsXML format, see Designing Integration with Adapter Development Kit .
All poller-based adapters support processing on only one message per invocation.
The Service Link Adapters installed in all application instances are:
In addition to these adapters, Service Link supports an Auto-Complete Adapter.
Additional adapters may be installed and configured using the Service Link Adapter Development Kit (ADK). Any such custom adapters also appear on the Adapters page, and their properties may be reviewed. For details on building and installing custom adapters, see Designing Integration with Adapter Development Kit .
The following sections describe these adapters.
The Auto-Complete adapter allows an agent to send an outbound message and to mark the task as complete without waiting to receive an acknowledgement from the external system. If the outbound message is successfully sent (for example, a file is written to the specified directory by an outbound file adapter), the auto-complete adapter generates an incoming message for the same task. That incoming message has the message type “take-action”. This message is processed normally by the Business Engine, marking the action as done and completing the external task.
The Dummy Adapter is a placeholder. It can be used in several processing scenarios:
The Database (DB) adapter uses one or more tables in a database to pass data between Service Catalog and external applications.
Inbound and outbound database adapters are capable of communicating with any JDBC-compliant relational database that supports ANSI-standard SQL. Valid connection criteria must be provided, as well as the JDBC URL, and a database driver. If the external database is SQLServer or Oracle, Cisco-provided drivers may be used. Drivers available from Cisco are:
The JDBC URL has the format:
A user-supplied driver may be used if supporting jar files are installed on the directory ISEE.war/WEB-INF/lib in the Service Catalog directory structure.
Step 1
Obtain the appropriate third-party JDBC driver. For example, the Sybase JDBC Driver can be downloaded from Sybase's website.
Step 2
Copy any required jars to the ISEE.war/WEB-INF/lib folder.
Step 3
Modify the Agent settings to use the custom driver and the correct JDBC URL format. For example, the format for the JDBC URL for the Sybase driver is:
Step 4
Restart the Service Link and Service Catalog services.
The format of the JDBC Url may also be influenced by the application server on which Service Link is deployed. For example, a possible JDBC URL to establish a connection to SQLServer database from a WebSphere application server would be:
When the database adapter is used as an inbound adapter, the agent properties include a SQL statement to be executed against the specified database connection. The SQL is typically a select command which returns a set of rows. These rows are then formatted into an external XML message. The message must be transformed via an inbound transformation (specified in the agent) into a valid nsXML inbound message. That message is, in turn, passed to the Business Engine. If the Business Engine finds an open task identified by the Channel ID specified in the inbound message, the inbound message is processed and the specified action taken.
The Property sheet for the database inbound adapter prefixes the property names given below with “DBInboundAdapter”.
The process flow for the inbound database adapter is shown below:
Figure 5-26 Process Flow for Inbound Database Adapter
For each row in the result set, the adapter generates an XML message with the following structure:
For example, the SQL statement
might yield an XML stream like the following:
A transformation must then be applied to this XML stream to produce a valid nsXML inbound message. For example, a transformation which would complete an ongoing task might include the following code:
The Business Engine processes the resultant nsXML message. If the message was applied successfully, the SuccessSQL specified in the agent is executed. The SuccessSQL typically updates the columns in the source table that caused the row to be selected for processing, so that the row will not be found again in the next polling interval. To specify that Service Link should update the current row, identify the column or columns that comprise the row's unique identifier. Those columns must have been included in the inbound SQL statement. For example:
Similarly, the FailureSQL is executed if the Business Engine failed to apply the nsXML message-for example, if an error occurred during processing of the message. The FailureSQL typically updates the status of the current row to indicate that the row was not correctly processed. For example:
When the database adapter is used as an outbound adapter, it provides a “staging table” style interface between Service Catalog and the external system. The nsXML outbound message which is provided to the agent by the Business Engine must be transformed into an external message containing one or more SQL statements. These SQL statements are then executed in the specified database, using the specified connection.
The Property sheet for the DB outbound adapter prefixes the property names given below with “DBOutboundAdapter”.
The outbound message produced by the XSLT transformation must have the format:
The message can contain multiple SQL statements, each within an <execute-sql> tag. These statements typically insert or update rows in SQL tables. Any SQL statement supported by the JDBC driver specified for the adapter can be used. Stored procedures (in SQLServer Transact-SQL or Oracle PL/SQL) are not supported, although the SQL statement can include user-defined functions. Since each external task is uniquely identified by a Channel ID, the target table for the outbound SQL statement must include a column for the Channel ID in order for that task to be updateable by an inbound message.
The File Adapter provides support for reading files from a specified directory or writing files to a specified directory.
Following are the properties with the default values for the File Adapter.
The Property sheet for the File inbound adapter prefixes the property names given below with “FileInboundAdapter”.
The outbound file adapter produces an XML file on the specified file location. The name of the file contains the channelId, a unique identifier for the external task that included the agent and created the message. The file name ends with the date format specified as an outbound property.
The Property sheet for the File outbound adapter prefixes the property names given below with “FileOutboundAdapter”.
The HTTP/WS adapter is used to send or receive HTTP requests or web service requests and responses. HTTPS is also supported.
The use of a proxy server in connecting to the web service is not supported.
When used to call web services, only synchronous calls are possible. The outbound transformation must be written in a way to produce an external message that is compliant to the web service standard. For SOAP-based web services, appropriate SOAP header and SOAP body elements should be included
The HTTP/WS Adapter outbound properties specify the behavior of the outbound adapter.
The HTTP/WS Adapter Outbound Properties table for the http/ws outbound adapter prefixes the property names given below with “HttpOutboundAdapter”.
Some properties of the outbound http/ws adapter are required only for certain authentication schemes and, then, perhaps only for web servers with customized authentication. Authentication Schemes table summarizes authentication schemes supported by the outbound http/ws adapter.
When a request is posted to a web site or a message sent to a web service, the target site typically sends a response to the message originator. If that response is unlikely to contain information useful to Service Link, you may set the Process Response property to false, to instruct Service Link to ignore any such messages. However, such responses might include additional information, such as the external system's ticket number or case number assigned to the task that originated in Service Catalog. In this case, you can set both the Process Response and Save Ref Field properties to true and specify the xpath for the Reference field for Service Link to capture the reference from the web service response. In addition, a transformation can be applied to the response to invoke actions to update the service form with information from the external system.
Outbound properties may now contain agent parameter namespace.This allows a single agent to be used for multiple operations if they are routed to the same external system and differ only in the routing URL segments or request header values. The syntax for agent parameter namespace is $ParamName$
External systems generally have their own means for identifying incidents, requests, or other objects, whether opened by a third-party system or maintained via the product's user interface. A designated Reference Field (TopicID) allows Service Catalog to maintain a cross-reference between the external system's unique identifier and the Service Catalog channelId. Once the TopicID is identified in the initial response to the web service request and saved, further messages from the external system, received via the web services listener adapter, can use the TopicID to identify the Service Catalog external task.
No properties may be specified for an inbound http adapter. All http posts should be directed to the Integration Server’s URL:
<ServerName>:<Port>/IntegrationServer/ishttplistener?channelId=<channelId>
A web service is not-so-simply “XML over HTTP”. For an outbound adapter, an XML message is sent via http (or https) to a web service. The message, created by application of a transformation to the outbound message, must be enclosed within a SOAP envelope. A sample XML message to a web service might look like the following:
The JMS inbound adapter is a listener adapter and does not support polling based invocations.
The JMS adapter can read and write messages from a queue or publish/subscribe to a particular topic. Only one JMS inbound agent should be configured for a given queue. It is not possible to use the same agent to subscribe to multiple topics. The topic must be fully specified; for example, “topic.sample.exported”.
The Property sheet for the JMS inbound adapter prefixes the property names given below with “JMSInboundAdapter”.
The Property sheet for the JMS outbound adapter prefixes the property names given below with “JMSOutboundAdapter”.
The MQ inbound adapter is a poller adapter which uses the IBM WebSphere Message Queue (MQ) system. The adapter supports IBM MQ Series versions 5.x and above. It uses IBM MQ Series Java API for the integration. IBM MQ software is not included with Service Catalog, and a license must be obtained from IBM.
The Property sheet for the MQ inbound adapter prefixes the property names given below with “MQInboundAdapter”.
The Property sheet for the MQ outbound adapter prefixes the property names given below with “MQOutboundAdapter”.
Similar to the Web Service Listener Adapter (see the Web Service Listener Adapter), the Service Item Listener Adapter provides a Web service (SOAP) end point to be used by external systems to send updates to external tasks. In addition to task updates, the adapter allows the creation, update, and deletion of service items in Lifecycle Center as part of the inbound SOAP message. The adapter also allows the retrieval of service item metadata and the data for service item instances.
The SOAP message sent by an external system must invoke the “processMessage” operation. The message content within the soap body is transformed into a message that Service Link understands, then segregated based on the operation type, and forwarded to the Business Engine and Service Item Import processor, respectively. Up to two messages may result in the View Transactions page for an inbound SOAP message—one for task update operations (take-action, add-comments, send-parameters) and one for service item operations (, update, delete). The latter has “Service Item” as the message type.
Authentication for inbound messages can be enabled optionally by turning on the site setting "Inbound HTTP Requests Authentication" in the Administration module. For more information, see Web Service Listener Adapter.
The Property sheet for the Service Item Listener inbound adapter prefixes the property names given below with “ServiceItemListenerInboundAdapter”.
The Service Item Listener Adapter is unidirectional—inbound only. Therefore, there are no Outbound Properties.
The Web Service Listener Adapter provides a Web service (SOAP) end point to be used by external systems to send updates to external tasks. The SOAP message sent by an external system must invoke the “processMessage” operation. The message content within the soap body is transformed into a message that Service Link understands, then forwarded to the HTTP/WS inbound adapter to be processed further.
The Web service Listener Adapter uses an underlying Web Service Listener. Authentication for inbound messages can be enabled optionally by turning on the site setting "Inbound HTTP Request Authentication" in the Administration module. Once enabled, a valid username and the corresponding password are required to be passed in the request header for the inbound message to be processed. If desired, the "Accept Encrypted Password" setting can be enabled to enforce the use of encrypted password only. An encryption utility is available for users with the Site Administrator role to obtain the encrypted value of a password. To access this utility, open the browser page:
http://<server>:<port>/RequestCenter/EncryptedPassword.jsp
The Property sheet for the Web Service Listener inbound adapter prefixes the property names given below with “WSListenerInboundAdapter”.
The Web Service Listener Adapter is unidirectional—inbound only. Therefore, there are no Outbound Properties.
Prime Service Catalog often contains data that must be secured when it is viewed/accessed within the product as well as when it is exchanged with external systems. Therefore, you can use the encryption methods during secure data transaction.
For information about encrypting dictionary attributes within Prime Service Catalog, see “Configuring Dictionaries” in Cisco Prime Service Catalog Designer Guide.
Encrypt attributes sent out through Service Link agents based on HTTP/WS and AMQP task adapters are secured or encrypted using the 1024/2048-bit RSA Public Key of the external system that is configured in the agents. You must configure the Public Key and EncryptStringFormat of the external system in Http/WS outbound properties page of the adapter and then configure the agents properties. For more information, see Managing Adapters.
For example, the encrypt attribute of the outbound message to Process Orchestrator (PO) is converted in the format below:
When there is a public key update in the external systems, then you must update the agent accordingly. For example, when there is a public key update in PO, then the PO will accept the old key pair for 3 days as grace period so the old messages in Service Link could still pass through. After three days of grace period PO will return an error if the agent in Prime Service Catalog has not been updated. Therefore the public key needs to be manually re-configured in agents once again when re-key happens in the external systems.
Note
If the Service Link agent does not have the public key of the external system configured, then dictionary field attributes that are defined to be stored and encrypted are decrypted and clear-text values for these attributes are sent over the outbound message only when the agent uses HTTPs/SSL protocol. If SSL is not configured, then the dictionary field attributes are sent as encrypted format only.
Note Prime Service Catalog supports AES algorithm and 128/256 bit symmetric keys. Therefore, ensure that you install “Java Unlimited Strength Crypto Policy” file manually on Java Development Kit (JDK) during Service Catalog installation. Java Unlimited Strength Crypto Policy file enables the external system to perform 2048 bit encryption. If you do not install Unlimited Strength Crypto Policy the installer displays an error message during server startup. For instructions about installing the unlimited strength crypto policy file, see Oracle Website. For more information about installing Cisco Prime Service Catalog, see Cisco Prime Service Catalog Installation and Upgrade Guide.
The Cloud Resource Manager adapter supports outbound communications from Service Link to UCSD/ ICFB. The Cloud Resource Manager adapter supports polling interval attribute which determines the frequency at which the inbound poller is activated.
Note
Cloud Resource Manager adapter is provided with predefined configuration and it is recommended that you deploy this adapter using the default configuration.
The Integration Wizard automates many of the steps involved in creating an integration between Service Catalog and SOAP web services. The Integration Wizard works by retrieving the wsdl and operation to be invoked by the web service integration. Based on that definition of the integration, the integration wizard displays all components required to support the integration.
The Integration Wizard uses some default options in defining the authentication method and behavior of the integration. If these settings are not appropriate, or if the integration must be modified after it has been created, the advanced configuration options available in Service Link Manage Integration pages can be used to edit the agent definition.
The Integration Wizard is available only to those service designers who have been granted a role that allows creation of Service Link agents and transformations.
Wsdl's to be accessed by the Integration Wizard must comply with Web Services Operability (WS-I) best practices.
To use the Integration Wizard:
Step 1
Edit the service in Service Designer.
Step 2
Go to the General subtab of the Plan tab for the service. Optionally fill in other data relating to the task.
The first page of the Integration Wizard appears. The wizard may consist of up to eight pages, depending on how the agent is configured. As each page is completed, click Next to advance to the next page, or Previous to return to a previous page. When you are finished, click Save to save the definition of the agent (and other design components) or Cancel to exit without saving your work.
Start by specifying general information about the agent:
The dictionary and active form component to be created will have the same name as the agent. Therefore, since naming standards for dictionaries are more stringent than for agents, the agent name can contain only letters, numbers and the underscore, and cannot start with a number.
All other settings on this screen match those available in the Agents page of Service Link.
Click Next to proceed to the next page of the wizard.
Figure 5-29 Create Agent Outbound Properties
Enter the location of the WSDL containing the operation to be performed by the integration. This will typically be the URL where the WSDL resides.
The Integration Wizard reads the wizard and displays a list of supported operations. Select the desired operation. The attributes specified for the operation will drive the definition of agent parameters on subsequent pages of the wizard.
If the wsdl includes a routing url, that, too, is displayed.
If desired, click the Advanced Properties drop-down button to display additional settings for the integration. These may be entered now or specified later via Service Link. Only basic authentication can be specified via the wizard.
Figure 5-30 Outbound Request Parameter Mappings
The wizard parses the wsdl and, in the sample shown, determines that it includes two attributes that must be used in the web service outbound request. Therefore, it s two agent parameters whose names match the names of the attributes in the wsdl.
The agent parameters are mapped to dictionary fields. The field names match the names of attributes in the wsdl, and the dictionary name matches the agent name. This dictionary is automatically created when you save the agent.
If desired, you can change the Service Data Mapping to refer to a dictionary and field that have previously been defined in Service Designer. This effectively changes the agent parameter mapping. However, the dictionary created by the wizard will still contain the original field. You may remove this by editing the dictionary definition.
A short digression might be useful here about structuring and using dictionaries in services. The primary purpose of a Service Catalog dictionary is to structure the data to be shown to users on a service form. Therefore, service designers typically design dictionaries with the user interface in mind, grouping and arranging fields to optimize the experience of both customers and service team members.
In principle, an outbound message might need to include data that has been entered (or defaulted or computed) in fields in many dictionaries. However, this would make maintaining the agent parameter mappings more complicated and prone to error—integration designers would have to be well acquainted with the design of the service form and its dictionaries. Therefore, it is recommended practice to a dictionary solely for the purpose of containing integration data. Some fields in the dictionary may be redundant with fields displayed on the service form. In this case, the service designer should supply conditional rules to copy the value of the field from the displayed dictionary to the integration dictionary. Further, the integration dictionary is not displayed on the service form (it is typically hidden via an active form rule); however, it can be kept visible during development to facilitate debugging.
By default, the Integration Wizard assumes that a response received from the target system will be processed. Any attributes sent in the response have corresponding agent attributes that are mapped to dictionary fields.
Figure 5-31 Outbound Response Parameter Mappings
In addition to a agent-to-field correspondence, the mapping may include simple XSLT operations, available via the Prebuilt Functions drop-down arrow to the left of the page.
As for outbound parameters, the inbound parameter could also be mapped to an alternative dictionary field. All dictionaries can be browsed via the Dictionaries drop-down arrow to the left of the page.
The last page of the wizard summarizes the integration as defined. You may return to any previous page to make corrections or click Save to save the agent and all other integration components created. By default, the agent is started when the integration is saved. You can alter this behavior by unchecking the “Start agent upon saving” check box.
Figure 5-32 Integration Summary
All components are now available for editing via Service Link and Service Designer screens. These components are shown in Integration Components table.
The starting point for checking the operational status of Service Link is the Service Link Status display. The Service Link status is always displayed beneath the Common Tasks area of the Service Link Home page.
Figure 5-33 Service Link Status
This feature helps you verify that Service Link is communicating with the Service Catalog service via its assigned port.
The Service Link Status display indicates whether the Service Link connection status is operational, and shows the port and protocol being used.
You must stop a service link agent modify the agent properties and restart again for it to function. Agents can be started and stopped individually by using the Control Agents page.
Note
If the Service Link service is stopped and restarted, all agents that were running when the service was stopped are automatically restarted.
All adapters log their activities into the server log file.
In addition, each standard adapter has its own log file on the Service Link\log directory. The degree of detail written to the log is configurable; instructions for doing so are application-server specific.
For details on managing both server and adapter-specific log files, see the Cisco Prime Service Catalog Administration and Operations Guide.
In a JBoss installation, Service Link adapter logs can be segregated from the server log into separate files by modifying the logging.properties file under the “<JBOSS_DIR>\standalone\configuration” directory. Examples of such configurations can be found in the sample property files “\preinstall\jboss\templates” directory in the product package.
WebLogic does not allow separation of log files per adapter, and the IntegrationServer component is configured to use the WebLogic logger by default. If separation of logs is desired, edit the file newscalelog.properties under ISEE.war/WEB-INF/classes. Uncomment the line that specifies commons logging as the logging mechanism. It is also very important that you uncomment and set a valid value for logger.directory to a valid and existing directory in the system, where the user that is used to run IntegrationServer has full write access. The file newscalelog.properties has additional instructions. In addition, if additional settings for other adapters are desired, edit the file log4j.xml and use the FILE_ADAPTER appender and category as a base and adjust the appender name and reference, the package of the appender and the file name.
WebSphere logging of Service Link is based by default on log4j as included in the WebSphere application server. The log4j implementation in WebSphere is powerful and configurable through the administration console and other tools. However, it does not allow for easy separation of log files. If you want to separate log files per adapter in WebSphere, follow the steps below:
Step 1
Under “ISEE.war/WEB-INF/classes/config”, locate the file newscalelog.properties and open it with an editor.
Step 2
Uncomment the line:
Step 3
Locate the line for logger.directory. Specify the log directory; for example:
It is very important that you enter a valid directory where these log files reside and the user that is used to run IntegrationServer has full write access to it.
Step 4
Under the “ISEE.war/META-INF/” directory, manually a folder named “services”.
Step 5
Under this services folder, manually a text file named “org.apache.commons.logging.LogFactory”. Within the file, add one line as follows:
As Service Link messages may occupy a significant amount of space in the database and they are no longer referenced once a request is completed, there are benefits in purging them from the database on a regular basis to reduce the database size. The purge utility can be accessed in the Administration module under the Utilities tab. The utility does not actually remove the message record entries. Instead it replaces the XML content of the message with "Message has been purged" for all completed or failed message older than the retention period specified. For more information, see the Purge Utilities section in the Cisco Prime Service Catalog Administration and Operations Guide.
You can analyze the following files when troubleshooting.
ISEEOutbound.JndiProviderUrl
ISEEOutbound.JmsTopicFactory
ISEEOutbound.JmsQueueFactory
In addition to the server log file and adapter-specific log files, any errors detected by Service Link can also be viewed online. The message text for a failed message shown on the Messages page is a hyperlink to the detailed error for that message.
The error messages are exactly those that appear in the server logs and may be highly technical. Some sample error messages, and an explanation, are given below.
An outbound web services message was sent, but the inbound response could not be processed because the specified referenced field was not in the response message.
The transformation produced an invalid XML message.
Prebuilt functions provide the ability to manipulate the values of agent parameters included in a nsXML message.
Prebuilt functions were developed using the FreeMarker template engine, version 2.3.12, available as open source software and developed by the Visigoth Software Society. Cisco has certified only those functions documented below and available in the drop-down list when building agent parameters. Other functions supported by the FreeMarker framework may be used, but should be extensively tested.
Basic function usage consists of applying to the function to an expression, specifying an argument list for the function if required. In general terms:
For Service Link, the expression is typically either a dictionary field, specified via lightweight namespace syntax, or an nsXML element. It must be enclosed in quotes:
Two or more functions can be chained-applied to the same expression-by using the syntax:
For example, the service data mapping below first trims any leading or trailing spaces from the designated dictionary field, then converts the result to lower case.
Figure 5-34 Edit Parameter Values
Multiple elements can be combined in one mapping, as shown below. The elements are implicitly concatenated together to form one string.
Figure 5-35 Inbound Parameter Mappings
This scenario also shows another coding technique—the use of “temporary” fields to hold input values so they can be used in a mapping expression.
The substring function has the syntax: is used. 1st parameter does not occur as a substring in this string (starting from the given index, if you use the second parameter), then it returns first parameter does not occur as a substring in this string (before the given index, if you use the second parameter), then it returns –1.
The number of characters in the string.
The lower case version of the string. For example, “ GrEeN MoUsE ” becomes “ green mouse ”.
It is used to replace all occurrences of a string in the original string with another string. It does not deal with word boundaries. For example:
The replacing occurs in left-to-right order. This means that this:
If the first parameter is an empty string, then all occurrences of the empty string are replaced, like “foo”?replace(“”,"|") will evaluate to “|f|o|o|”.
replace accepts an optional flags parameter, as its third parameter.
The upper case version of the string. For example, “ GrEeN MoUsE ” becomes “ GREEN MOUSE ”. | https://www.cisco.com/c/en/us/td/docs/net_mgmt/datacenter_mgmt/intel_auto/service_portal/v_11_0/integration/guide/CiscoPrimeServiceCatalog-11-0-IntegrationGuide/ServiceLink.html | CC-MAIN-2021-10 | refinedweb | 12,883 | 52.8 |
Authors: Aritra Roy Gosthipaty, Ritwik Raha
Date created: 2021/08/09
Last modified: 2021/08/09
View in Colab •
GitHub source
Description: Minimal implementation of volumetric rendering as shown in NeRF.
In this example, we present a minimal implementation of the research paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis by Ben Mildenhall et. al. The authors have proposed an ingenious way to synthesize novel views of a scene by modelling the volumetric scene function through a neural network.
To help you understand this intuitively, let's start with the following question: would it be possible to give to a neural network the position of a pixel in an image, and ask the network to predict the color at that position?
The neural network would hypothetically memorize (overfit on) the image. This means that our neural network would have encoded the entire image in its weights. We could query the neural network with each position, and it would eventually reconstruct the entire image.
A question now arises, how do we extend this idea to learn a 3D volumetric scene? Implementing a similar process as above would require the knowledge of every voxel (volume pixel). Turns out, this is quite a challenging task to do.
The authors of the paper propose a minimal and elegant way to learn a 3D scene using a few images of the scene. They discard the use of voxels for training. The network learns to model the volumetric scene, thus generating novel views (images) of the 3D scene that the model was not shown at training time.
There are a few prerequisites one needs to understand to fully appreciate the process. We structure the example in such a way that you will have all the required knowledge before starting the implementation.
# Setting random seed to obtain reproducible results. import tensorflow as tf tf.random.set_seed(42) import os import glob import imageio import numpy as np from tqdm import tqdm from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt # Initialize global variables. AUTO = tf.data.AUTOTUNE BATCH_SIZE = 5 NUM_SAMPLES = 32 POS_ENCODE_DIMS = 16 EPOCHS = 20
The
npz data file contains images, camera poses, and a focal length.
The images are taken from multiple camera angles as shown in
Figure 3.
To understand camera poses in this context we have to first allow ourselves to think that a camera is a mapping between the real-world and the 2-D image.
Consider the following equation:
Where x is the 2-D image point, X is the 3-D world point and P is the camera-matrix. P is a 3 x 4 matrix that plays the crucial role of mapping the real world object onto an image plane.
The camera-matrix is an affine transform matrix that is
concatenated with a 3 x 1 column
[image height, image width, focal length]
to produce the pose matrix. This matrix is of
dimensions 3 x 5 where the first 3 x 3 block is in the camera’s point
of view. The axes are
[down, right, backwards] or
[-y, x, z]
where the camera is facing forwards
-z.
The COLMAP frame is
[right, down, forwards] or
[x, -y, -z]. Read
more about COLMAP here.
# Download the data if it does not already exist. file_name = "tiny_nerf_data.npz" url = "" if not os.path.exists(file_name): data = keras.utils.get_file(fname=file_name, origin=url) data = np.load(data) images = data["images"] im_shape = images.shape (num_images, H, W, _) = images.shape (poses, focal) = (data["poses"], data["focal"]) # Plot a random image from the dataset for visualization. plt.imshow(images[np.random.randint(low=0, high=num_images)]) plt.show()
Downloading data from 12730368/12727482 [==============================] - 0s 0us/step
Now that you've understood the notion of camera matrix and the mapping from a 3D scene to 2D images, let's talk about the inverse mapping, i.e. from 2D image to the 3D scene.
We'll need to talk about volumetric rendering with ray casting and tracing, which are common computer graphics techniques. This section will help you get to speed with these techniques.
Consider an image with
N pixels. We shoot a ray through each pixel
and sample some points on the ray. A ray is commonly parameterized by
the equation
r(t) = o + td where
t is the parameter,
o is the
origin and
d is the unit directional vector as shown in Figure 6.
In Figure 7, we consider a ray, and we sample some random points on
the ray. These sample points each have a unique location
(x, y, z)
and the ray has a viewing angle
(theta, phi). The viewing angle is
particularly interesting as we can shoot a ray through a single pixel
in a lot of different ways, each with a unique viewing angle. Another
interesting thing to notice here is the noise that is added to the
sampling process. We add a uniform noise to each sample so that the
samples correspond to a continuous distribution. In Figure 7 the
blue points are the evenly distributed samples and the white points
(t1, t2, t3) are randomly placed between the samples.
Figure 8 showcases the entire sampling process in 3D, where you can see the rays coming out of the white image. This means that each pixel will have its corresponding rays and each ray will be sampled at distinct points.
These sampled points act as the input to the NeRF model. The model is then asked to predict the RGB color and the volume density at that point.
def encode_position(x): """Encodes the position into its corresponding Fourier feature. Args: x: The input coordinate. Returns: Fourier features tensors of the position. """ positions = [x] for i in range(POS_ENCODE_DIMS): for fn in [tf.sin, tf.cos]: positions.append(fn(2.0 ** i * x)) return tf.concat(positions, axis=-1) def get_rays(height, width, focal, pose): """Computes origin point and direction vector of rays. Args: height: Height of the image. width: Width of the image. focal: The focal length between the images and the camera. pose: The pose matrix of the camera. Returns: Tuple of origin point and direction vector for rays. """ # Build a meshgrid for the rays. i, j = tf.meshgrid( tf.range(width, dtype=tf.float32), tf.range(height, dtype=tf.float32), indexing="xy", ) # Normalize the x axis coordinates. transformed_i = (i - width * 0.5) / focal # Normalize the y axis coordinates. transformed_j = (j - height * 0.5) / focal # Create the direction unit vectors. directions = tf.stack([transformed_i, -transformed_j, -tf.ones_like(i)], axis=-1) # Get the camera matrix. camera_matrix = pose[:3, :3] height_width_focal = pose[:3, -1] # Get origins and directions for the rays. transformed_dirs = directions[..., None, :] camera_dirs = transformed_dirs * camera_matrix ray_directions = tf.reduce_sum(camera_dirs, axis=-1) ray_origins = tf.broadcast_to(height_width_focal, tf.shape(ray_directions)) # Return the origins and directions. return (ray_origins, ray_directions) def render_flat_rays(ray_origins, ray_directions, near, far, num_samples, rand=False): """Renders the rays and flattens it. Args: ray_origins: The origin points for rays. ray_directions: The direction unit vectors for the rays. near: The near bound of the volumetric scene. far: The far bound of the volumetric scene. num_samples: Number of sample points in a ray. rand: Choice for randomising the sampling strategy. Returns: Tuple of flattened rays and sample points on each rays. """ # Compute 3D query points. # Equation: r(t) = o+td -> Building the "t" here. t_vals = tf.linspace(near, far, num_samples) if rand: # Inject uniform noise into sample space to make the sampling # continuous. shape = list(ray_origins.shape[:-1]) + [num_samples] noise = tf.random.uniform(shape=shape) * (far - near) / num_samples t_vals = t_vals + noise # Equation: r(t) = o + td -> Building the "r" here. rays = ray_origins[..., None, :] + ( ray_directions[..., None, :] * t_vals[..., None] ) rays_flat = tf.reshape(rays, [-1, 3]) rays_flat = encode_position(rays_flat) return (rays_flat, t_vals) def map_fn(pose): """Maps individual pose to flattened rays and sample points. Args: pose: The pose matrix of the camera. Returns: Tuple of flattened rays and sample points corresponding to the camera pose. """ (ray_origins, ray_directions) = get_rays(height=H, width=W, focal=focal, pose=pose) (rays_flat, t_vals) = render_flat_rays( ray_origins=ray_origins, ray_directions=ray_directions, near=2.0, far=6.0, num_samples=NUM_SAMPLES, rand=True, ) return (rays_flat, t_vals) # Create the training split. split_index = int(num_images * 0.8) # Split the images into training and validation. train_images = images[:split_index] val_images = images[split_index:] # Split the poses into training and validation. train_poses = poses[:split_index] val_poses = poses[split_index:] # Make the training pipeline. train_img_ds = tf.data.Dataset.from_tensor_slices(train_images) train_pose_ds = tf.data.Dataset.from_tensor_slices(train_poses) train_ray_ds = train_pose_ds.map(map_fn, num_parallel_calls=AUTO) training_ds = tf.data.Dataset.zip((train_img_ds, train_ray_ds)) train_ds = ( training_ds.shuffle(BATCH_SIZE) .batch(BATCH_SIZE, drop_remainder=True, num_parallel_calls=AUTO) .prefetch(AUTO) ) # Make the validation pipeline. val_img_ds = tf.data.Dataset.from_tensor_slices(val_images) val_pose_ds = tf.data.Dataset.from_tensor_slices(val_poses) val_ray_ds = val_pose_ds.map(map_fn, num_parallel_calls=AUTO) validation_ds = tf.data.Dataset.zip((val_img_ds, val_ray_ds)) val_ds = ( validation_ds.shuffle(BATCH_SIZE) .batch(BATCH_SIZE, drop_remainder=True, num_parallel_calls=AUTO) .prefetch(AUTO) )
The model is a multi-layer perceptron (MLP), with ReLU as its non-linearity.
An excerpt from the paper:
"We encourage the representation to be multiview-consistent by
restricting the network to predict the volume density sigma as a
function of only the location
x, while allowing the RGB color
c to be
predicted as a function of both location and viewing direction. To
accomplish this, the MLP first processes the input 3D coordinate
x
with 8 fully-connected layers (using ReLU activations and 256 channels
per layer), and outputs sigma and a 256-dimensional feature vector.
This feature vector is then concatenated with the camera ray's viewing
direction and passed to one additional fully-connected layer (using a
ReLU activation and 128 channels) that output the view-dependent RGB
color."
Here we have gone for a minimal implementation and have used 64 Dense units instead of 256 as mentioned in the paper.
def get_nerf_model(num_layers, num_pos): """Generates the NeRF neural network. Args: num_layers: The number of MLP layers. num_pos: The number of dimensions of positional encoding. Returns: The [`tf.keras`]() model. """ inputs = keras.Input(shape=(num_pos, 2 * 3 * POS_ENCODE_DIMS + 3)) x = inputs for i in range(num_layers): x = layers.Dense(units=64, activation="relu")(x) if i % 4 == 0 and i > 0: # Inject residual connection. x = layers.concatenate([x, inputs], axis=-1) outputs = layers.Dense(units=4)(x) return keras.Model(inputs=inputs, outputs=outputs) def render_rgb_depth(model, rays_flat, t_vals, rand=True, train=True): """Generates the RGB image and depth map from model prediction. Args: model: The MLP model that is trained to predict the rgb and volume density of the volumetric scene. rays_flat: The flattened rays that serve as the input to the NeRF model. t_vals: The sample points for the rays. rand: Choice to randomise the sampling strategy. train: Whether the model is in the training or testing phase. Returns: Tuple of rgb image and depth map. """ # Get the predictions from the nerf model and reshape it. if train: predictions = model(rays_flat) else: predictions = model.predict(rays_flat) predictions = tf.reshape(predictions, shape=(BATCH_SIZE, H, W, NUM_SAMPLES, 4)) # Slice the predictions into rgb and sigma. rgb = tf.sigmoid(predictions[..., :-1]) sigma_a = tf.nn.relu(predictions[..., -1]) # Get the distance of adjacent intervals. delta = t_vals[..., 1:] - t_vals[..., :-1] # delta shape = (num_samples) if rand: delta = tf.concat( [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, H, W, 1))], axis=-1 ) alpha = 1.0 - tf.exp(-sigma_a * delta) else: delta = tf.concat( [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, 1))], axis=-1 ) alpha = 1.0 - tf.exp(-sigma_a * delta[:, None, None, :]) # Get transmittance. exp_term = 1.0 - alpha epsilon = 1e-10 transmittance = tf.math.cumprod(exp_term + epsilon, axis=-1, exclusive=True) weights = alpha * transmittance rgb = tf.reduce_sum(weights[..., None] * rgb, axis=-2) if rand: depth_map = tf.reduce_sum(weights * t_vals, axis=-1) else: depth_map = tf.reduce_sum(weights * t_vals[:, None, None], axis=-1) return (rgb, depth_map)
The training step is implemented as part of a custom
keras.Model subclass
so that we can make use of the
model.fit functionality.
class NeRF(keras.Model): def __init__(self, nerf_model): super().__init__() self.nerf_model = nerf_model def compile(self, optimizer, loss_fn): super().compile() self.optimizer = optimizer self.loss_fn = loss_fn self.loss_tracker = keras.metrics.Mean(name="loss") self.psnr_metric = keras.metrics.Mean(name="psnr") def train_step(self, inputs): # Get the images and the rays. (images, rays) = inputs (rays_flat, t_vals) = rays with tf.GradientTape() as tape: # Get the predictions from the model. rgb, _ = render_rgb_depth( model=self.nerf_model, rays_flat=rays_flat, t_vals=t_vals, rand=True ) loss = self.loss_fn(images, rgb) # Get the trainable variables. trainable_variables = self.nerf_model.trainable_variables # Get the gradeints of the trainiable variables with respect to the loss. gradients = tape.gradient(loss, trainable_variables) # Apply the grads and optimize the model. self.optimizer.apply_gradients(zip(gradients, trainable_variables)) #()} def test_step(self, inputs): # Get the images and the rays. (images, rays) = inputs (rays_flat, t_vals) = rays # Get the predictions from the model. rgb, _ = render_rgb_depth( model=self.nerf_model, rays_flat=rays_flat, t_vals=t_vals, rand=True ) loss = self.loss_fn(images, rgb) #()} @property def metrics(self): return [self.loss_tracker, self.psnr_metric] test_imgs, test_rays = next(iter(train_ds)) test_rays_flat, test_t_vals = test_rays loss_list = [] class TrainMonitor(keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): loss = logs["loss"] loss_list.append(loss) test_recons_images, depth_maps = render_rgb_depth( model=self.model.nerf_model, rays_flat=test_rays_flat, t_vals=test_t_vals, rand=True, train=False, ) # Plot the rgb, depth and the loss plot. fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(20, 5)) ax[0].imshow(keras.preprocessing.image.array_to_img(test_recons_images[0])) ax[0].set_title(f"Predicted Image: {epoch:03d}") ax[1].imshow(keras.preprocessing.image.array_to_img(depth_maps[0, ..., None])) ax[1].set_title(f"Depth Map: {epoch:03d}") ax[2].plot(loss_list) ax[2].set_xticks(np.arange(0, EPOCHS + 1, 5.0)) ax[2].set_title(f"Loss Plot: {epoch:03d}") fig.savefig(f"images/{epoch:03d}.png") plt.show() plt.close() num_pos = H * W * NUM_SAMPLES nerf_model = get_nerf_model(num_layers=8, num_pos=num_pos) model = NeRF(nerf_model) model.compile( optimizer=keras.optimizers.Adam(), loss_fn=keras.losses.MeanSquaredError() ) # Create a directory to save the images during training. if not os.path.exists("images"): os.makedirs("images") model.fit( train_ds, validation_data=val_ds, batch_size=BATCH_SIZE, epochs=EPOCHS, callbacks=[TrainMonitor()], steps_per_epoch=split_index // BATCH_SIZE, ) def create_gif(path_to_images, name_gif): filenames = glob.glob(path_to_images) filenames = sorted(filenames) images = [] for filename in tqdm(filenames): images.append(imageio.imread(filename)) kargs = {"duration": 0.25} imageio.mimsave(name_gif, images, "GIF", **kargs) create_gif("images/*.png", "training.gif")
Epoch 1/20 16/16 [==============================] - 15s 753ms/step - loss: 0.1134 - psnr: 9.7278 - val_loss: 0.0683 - val_psnr: 12.0722
Epoch 2/20 16/16 [==============================] - 13s 752ms/step - loss: 0.0648 - psnr: 12.4200 - val_loss: 0.0664 - val_psnr: 12.1765
Epoch 3/20 16/16 [==============================] - 13s 746ms/step - loss: 0.0607 - psnr: 12.5281 - val_loss: 0.0673 - val_psnr: 12.0121
Epoch 4/20 16/16 [==============================] - 13s 758ms/step - loss: 0.0595 - psnr: 12.7050 - val_loss: 0.0646 - val_psnr: 12.2768
Epoch 5/20 16/16 [==============================] - 13s 755ms/step - loss: 0.0583 - psnr: 12.7522 - val_loss: 0.0613 - val_psnr: 12.5351
Epoch 6/20 16/16 [==============================] - 13s 749ms/step - loss: 0.0545 - psnr: 13.0654 - val_loss: 0.0553 - val_psnr: 12.9512
Epoch 7/20 16/16 [==============================] - 13s 744ms/step - loss: 0.0480 - psnr: 13.6313 - val_loss: 0.0444 - val_psnr: 13.7838
Epoch 8/20 16/16 [==============================] - 13s 763ms/step - loss: 0.0359 - psnr: 14.8570 - val_loss: 0.0342 - val_psnr: 14.8823
Epoch 9/20 16/16 [==============================] - 13s 758ms/step - loss: 0.0299 - psnr: 15.5374 - val_loss: 0.0287 - val_psnr: 15.6171
Epoch 10/20 16/16 [==============================] - 13s 779ms/step - loss: 0.0273 - psnr: 15.9051 - val_loss: 0.0266 - val_psnr: 15.9319
Epoch 11/20 16/16 [==============================] - 13s 736ms/step - loss: 0.0255 - psnr: 16.1422 - val_loss: 0.0250 - val_psnr: 16.1568
Epoch 12/20 16/16 [==============================] - 13s 746ms/step - loss: 0.0236 - psnr: 16.5074 - val_loss: 0.0233 - val_psnr: 16.4793
Epoch 13/20 16/16 [==============================] - 13s 755ms/step - loss: 0.0217 - psnr: 16.8391 - val_loss: 0.0210 - val_psnr: 16.8950
Epoch 14/20 16/16 [==============================] - 13s 741ms/step - loss: 0.0197 - psnr: 17.2245 - val_loss: 0.0187 - val_psnr: 17.3766
Epoch 15/20 16/16 [==============================] - 13s 739ms/step - loss: 0.0179 - psnr: 17.6246 - val_loss: 0.0179 - val_psnr: 17.5445
Epoch 16/20 16/16 [==============================] - 13s 735ms/step - loss: 0.0175 - psnr: 17.6998 - val_loss: 0.0180 - val_psnr: 17.5154
Epoch 17/20 16/16 [==============================] - 13s 741ms/step - loss: 0.0167 - psnr: 17.9393 - val_loss: 0.0156 - val_psnr: 18.1784
Epoch 18/20 16/16 [==============================] - 13s 750ms/step - loss: 0.0150 - psnr: 18.3875 - val_loss: 0.0151 - val_psnr: 18.2811
Epoch 19/20 16/16 [==============================] - 13s 755ms/step - loss: 0.0141 - psnr: 18.6476 - val_loss: 0.0139 - val_psnr: 18.6216
Epoch 20/20 16/16 [==============================] - 14s 777ms/step - loss: 0.0139 - psnr: 18.7131 - val_loss: 0.0137 - val_psnr: 18.7259
100%|██████████| 20/20 [00:00<00:00, 57.59it/s]
Here we see the training step. With the decreasing loss, the rendered
image and the depth maps are getting better. In your local system, you
will see the
training.gif file generated.
In this section, we ask the model to build novel views of the scene.
The model was given
106 views of the scene in the training step. The
collections of training images cannot contain each and every angle of
the scene. A trained model can represent the entire 3-D scene with a
sparse set of training images.
Here we provide different poses to the model and ask for it to give us the 2-D image corresponding to that camera view. If we infer the model for all the 360-degree views, it should provide an overview of the entire scenery from all around.
# Get the trained NeRF model and infer. nerf_model = model.nerf_model test_recons_images, depth_maps = render_rgb_depth( model=nerf_model, rays_flat=test_rays_flat, t_vals=test_t_vals, rand=True, train=False, ) # Create subplots. fig, axes = plt.subplots(nrows=5, ncols=3, figsize=(10, 20)) for ax, ori_img, recons_img, depth_map in zip( axes, test_imgs, test_recons_images, depth_maps ): ax[0].imshow(keras.preprocessing.image.array_to_img(ori_img)) ax[0].set_title("Original") ax[1].imshow(keras.preprocessing.image.array_to_img(recons_img)) ax[1].set_title("Reconstructed") ax[2].imshow( keras.preprocessing.image.array_to_img(depth_map[..., None]), cmap="inferno" ) ax[2].set_title("Depth Map")
Here we will synthesize novel 3D views and stitch all of them together to render a video encompassing the 360-degree view.
def get_translation_t(t): """Get the translation matrix for movement in t.""" matrix = [ [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, t], [0, 0, 0, 1], ] return tf.convert_to_tensor(matrix, dtype=tf.float32) def get_rotation_phi(phi): """Get the rotation matrix for movement in phi.""" matrix = [ [1, 0, 0, 0], [0, tf.cos(phi), -tf.sin(phi), 0], [0, tf.sin(phi), tf.cos(phi), 0], [0, 0, 0, 1], ] return tf.convert_to_tensor(matrix, dtype=tf.float32) def get_rotation_theta(theta): """Get the rotation matrix for movement in theta.""" matrix = [ [tf.cos(theta), 0, -tf.sin(theta), 0], [0, 1, 0, 0], [tf.sin(theta), 0, tf.cos(theta), 0], [0, 0, 0, 1], ] return tf.convert_to_tensor(matrix, dtype=tf.float32) def pose_spherical(theta, phi, t): """ Get the camera to world matrix for the corresponding theta, phi and t. """ c2w = get_translation_t(t) c2w = get_rotation_phi(phi / 180.0 * np.pi) @ c2w c2w = get_rotation_theta(theta / 180.0 * np.pi) @ c2w c2w = np.array([[-1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]) @ c2w return c2w rgb_frames = [] batch_flat = [] batch_t = [] # Iterate over different theta value and generate scenes. for index, theta in tqdm(enumerate(np.linspace(0.0, 360.0, 120, endpoint=False))): # Get the camera to world matrix. c2w = pose_spherical(theta, -30.0, 4.0) # ray_oris, ray_dirs = get_rays(H, W, focal, c2w) rays_flat, t_vals = render_flat_rays( ray_oris, ray_dirs, near=2.0, far=6.0, num_samples=NUM_SAMPLES, rand=False ) if index % BATCH_SIZE == 0 and index > 0: batched_flat = tf.stack(batch_flat, axis=0) batch_flat = [rays_flat] batched_t = tf.stack(batch_t, axis=0) batch_t = [t_vals] rgb, _ = render_rgb_depth( nerf_model, batched_flat, batched_t, rand=False, train=False ) temp_rgb = [np.clip(255 * img, 0.0, 255.0).astype(np.uint8) for img in rgb] rgb_frames = rgb_frames + temp_rgb else: batch_flat.append(rays_flat) batch_t.append(t_vals) rgb_video = "rgb_video.mp4" imageio.mimwrite(rgb_video, rgb_frames, fps=30, quality=7, macro_block_size=None)
120it [00:12, 9.24it/s]
Here we can see the rendered 360 degree view of the scene. The model
has successfully learned the entire volumetric space through the
sparse set of images in only 20 epochs. You can view the
rendered video saved locally, named
rgb_video.mp4.
We have produced a minimal implementation of NeRF to provide an intuition of its core ideas and methodology. This method has been used in various other works in the computer graphics space.
We would like to encourage our readers to use this code as an example and play with the hyperparameters and visualize the outputs. Below we have also provided the outputs of the model trained for more epochs. | https://keras.io/examples/vision/nerf/ | CC-MAIN-2021-39 | refinedweb | 3,448 | 54.18 |
How do I use rqt_plot to subscribe to std_msgs
I do have an Array from
#include "std_msgs/Float64MultiArray.h" and it is constituted as followed:
layout: dim: [] data_offset: 0 data: [0.0073274714, -0.019451106, -0.009992006] ---
Now, if I use rqt_plot it does not show any data when selecting the topic. I cannot figure out why? If I hover over the
+ button in the GUI it just says no plottable fields.
Are you sure that empty
dimfield is correct? I wouldn't be surprised if
rqt_plotcannot work with
*Arraymsgs that have no valid dimensions associated with them.
how do I fill in the dim field? What i do so far is | https://answers.ros.org/question/226584/how-do-i-use-rqt_plot-to-subscribe-to-std_msgs/ | CC-MAIN-2021-17 | refinedweb | 111 | 76.52 |
Modularity in Java: Java 9 Modularity versus Prior Versions
Modularity is the basic tenet of good software engineering practices. This is a technique to harness complexity of both in design and maintenance of a software product. Until Java 9, it had been the responsibility of the developer to imbibe this principle with least help from the working tool chain. Java 9 used this idea right from the build-up of JRE and JDK. This is going to change the paradigm of programming by bringing this feature as a recipe right from the beginning of the development process. This article is an attempt to look back on the principles of modular programming from the threshold of a new era that is dawning with Java 9.
Overview
Modular programming is basically a software design technique emphasizing that the program be written as individual components with their distinct functionality in a manner that these individual components can be combined to create an useful program. These individual components are called modules. According to Mark Reinhold (The State of the module System), a module is a named, self-describing collection of code and data. The key idea behind modular programming is to break a complex system into manageable parts. The individual parts should be separated in a manner that least concerns other parts and perform discrete functions. In contrast to this, the programs that are generally built in an undivided single module are called monolithic programs. Monolithic programs may be efficient in performance yet they are unmaintainable as the complexity and size of the program increases. Modern applications are huge and complex; they are not suitable to be written as a monolithic program. Programs that are short and compact, like implementing an algorithm, are generally good candidates for a monolithic program.
Advantages
If the development of software follows a modular design and, of course, built in the correct fashion, it has several advantages both in the long and the short run. A few of them are as follows:
- Because it comprises individual modules, it leverages reusability. Some (or all) components may be reused in any other program.
- A modular code is more readable than a monolithic code.
- It is easy to maintain and upgrade because individual components have separate concerns. It is easy to pick one and make necessary changes, causing rippled changes to other modules as minimally as possible.
- Modular programs are comparatively easy to debug because their decoupled nature isolates individual components for a quick unit test. Also, in integration testing the problem may be localised and rectified in an efficient manner.
Modular Programming with Java 8 and Its Predecessor
The idea of modularity is closer to using packages and JAR archives. And, perhaps the most easily available example is the libraries that are part of Java. Library modules are built by assembling several parts together; each part performs a discrete function. So, to get an idea what a module mean in Java is—it is but a collection of classes and interfaces of common interest typically clubbed into a package and distributed as a JAR file. Now, every module has a public face, meaning, a set of exported APIs that provides the means by which the outside world communicates with the module. Therefore, those classes or interfaces that are designed to be the external interface of the module are designated with a public access modifier. The main function of these classes and interfaces is to serve as an available API for interaction with other modules. The private members, on the other hand, are inaccessible and concerned with the internal processing or agenda of the particular part only.
One must understand that Java was not build to be modular from the ground up, yet one can achieve some aspects of modularity with packages and JAR archives. Modularity with JARs has many inherent problems, although they are often designated to be discreet. One of them is dependency. This means that a module in Java has a set of dependencies on one or more other modules. This dependency is indispensable on most occasions and is required as a functional requirement for proper execution of the module.
This is the basis of modular programming, as close one can achieve, pre Java 9. Modular programming gave a new drift of thought when it was most needed and the mess of object-oriented programming became manageable to some extent. Since then, almost two decades later, programmers realised that there is still something missing. Problems cropped up frequently, there were complaints, and so forth (a few of them are discussed later in this article). The marriage between modularity and OOP Java needed a fresh impetus.
The Jigsaw project was conceived sometime back, prior to the release of Java 8, and was supposed to be a part of its final release. But, the sheer complexity of its construct struck the way and it was decided that it would be a feature slated to be released with the later version. Even the public release of Java 9 was slated many times and, as of writing this article, it is still awaited for a few more days.
The Problems
Most Java developers have encountered a knotty little problem, infamously called JAR-hell or Classpath-hell. In fact, it is so infamous that one can almost distinguish a Java programmer in a group by just asking this simple question, "What is JAR-hell?" One who answers with an emotional outburst is surely a Java programmer.
There are several problems with modular programming in Java. Here is a list of frequently encountered ones. This specific problem is associated with Java's class loading mechanism.
- The classes and interfaces contained in the JAR file often overlap with other classes in other packages to function properly. This makes them dependent on one another. Therefore, one JAR file may be dependent on another to execute. The Java runtime simply loads the JARs in the classpath first encountered without considering that a copy of the same class may exists in the multiple JARs.
- Sometimes, there are missing classes or interfaces. The worst part of it is that it is only found out during execution. In most cases, this crashes the application inadvertently, giving a runtime error message.
- Another annoying problem is the version mismatch. A JAR module that is dependent on another JAR often does not work because one or many of its dependent modules may have to be upgraded or downgraded to make it compatible with the built version of the dependent module to work with. (Respite is to use external tools such as Maven and OSGi. They are popularly used to manage such dependencies.)
Also, there were other problems such as the Java package having no access modifier. Therefore, a public class in a package is visible to any other packages. So, modularity with the package has little value in view of global visibility with respect to packages. Until Java 7, JRE and JDK were available as a monolithic artefact, shipping increases memory footprint, expensive in terms of space and download time.
Modularity in Java 9
The module system of Java 9 is developed in view of keeping three core ideas:
- Leveraging strong encapsulation
- Establishing well-defined interfaces
- Defining explicit dependencies
To appreciate what Java developers did with JRE, one needs to understand the idea that actually formed as part of Java 8. Java 8 introduced the idea of modularity in JRE in the form of compact profiles. A compact profile is a subset of the API library designated as compact1, compact2, and compact3. Each of the libraries includes a subset of the library in the following order: compact2 includes all of compact1 and more, compact3 includes all compact2 and more. Hence, each profile is actually built upon its previous one. The advantage is that one can download and use the part of library that one needs and does not have to use the whole JRE as a monolithic artefact. In Java 8, one can refer to the profile at compile time as follows:
$ javac -profile compact1 TestProg.java
Java 9 took this idea to a level beyond and dropped the techniques of compacted profiles altogether. Instead, it gave the user full control over the list of modules one wants to include in the custom JRE.
Java 9 also introduced a new program component called the module. This new component is used to demarcate well-defined boundaries between interacting and dependent modules. This is in response to solving the classpath problem. A defined module must declare its dependencies on other modules explicitly. The module system will verify the dependencies stated in all three phases—during compilation, linking, and also during runtime. Therefore, this will eradicate the problem of missing dependencies well before crashing the application.
Accessibility of modules has been refined to leverage strong encapsulation by drawing a clear line between code that is exposed to the public and the code used for internal implementation. This decouples the concern of the module and prevents making unwanted or accidental changes. A module can clearly state which public types it wants to expose to other modules.
With the inception of modules, Java 9 leverages flexibility as well as reusability.
Conclusion
Prior to Java 9, JAR files were something that were closest one could get to modules. But, there were many problems associated with the use of JAR files that needs to be rectified to embrace pure modularity in Java. The expectation on Java 9 is high in this regard. It has twofold responsibility; one, it must replenish JRE and JDK with the new idea of modularity; two, it must not break the existing system. This means that, apart from giving a new impetus to a Java project, it should also be backward compatible. The existing project also should be able to upgrade seamlessly. Whatever the end result may be, it is by nature a massive effort. This flagship feature is surely going to rejuvenate the core of Java and pave the way for new type of product development, deployment, and packaging.
| https://www.developer.com/java/other/modularity-in-java-java-9-modularity-versus-prior-versions.html | CC-MAIN-2017-43 | refinedweb | 1,673 | 53.1 |
This section outlines a few application scenarios to help illustrate the capabilities enabled by JNDI.
The examples below are not meant to be prescriptive. There are often several ways to solve a problem, and JNDI is designed with flexibility in mind., "password") of the user and verify the authenticity using the information supplied by the user.
DirContext ctx = new InitialDirContext(); Attribute attr = ctx.getAttributes(userName).get("password"); String password = (String)attr.get();
A useful feature of an electronic mail system is a directory service that provides a mapping between users and email addresses. This allows mail users to search for the email address of a particular user. This is analogous to searching for an individual's telephone number in the phone book in order to dial his phone number. For example, when I want to send mail to John Smith in my department, I search for "John Smith" in the directory using a "search"'s email address, I might not want to search the directory again each time I send him mail. Instead, I can create a personal subtree in the directory in which I maintain entries that I frequently use, possibly by creating links to the existing entries.
Database applications can use the directory to locate database servers. For example, a financial application needs to get the stock quotes from a stock quote server using JDBC. This application can enable the user to select the stock quote server based on specification of some attributes (such as coverage of which markets and frequency of quote updates). The application searches the directory for quote servers that meet these attributes, and then retrieves the "location""); ... }
When using almost any kind of interactive application that asks a user to input names, the user's job is made easier if a namespace browser is available to him. The browser can either be built into the application and tailored to suit that application in particular, or it can be more general-purpose such as a typical web browser.
A very simple example of
a JNDI browser allows a user to "walk" through a
namespace, viewing the atomic names at each step along the way. The
browser prints a "*" to highlight the name of each
Context , thus telling the user where he can go next.
1
// network. user interface. printers on the second floor of building 5 in the Mountain View
campus, or search for all color laser printers with 600dpi
resolution. From the application's perspective, just as
lookup() returned a
Printer object, the
list and search operations also provide the same capability of
returning
Printer objects that the application could
use to submit print requests. | http://docs.oracle.com/javase/7/docs/technotes/guides/jndi/spec/jndi/jndi.7.html | CC-MAIN-2015-27 | refinedweb | 441 | 50.67 |
Programming Books
books and download these books for future reference. These Free Computer Books...
Free Java Books
Free JSP Books...
Programming Books
A Collection of Large number questions
upload and download files - JSP-Servlet
upload and download files HI!!
how can I upload (more than 1 file) and download the files using jsp.
Is any lib folders to be pasted? kindly... and download files in JSP visit
Introduction to the JSP Java Server Pages
it offline.
Free JSP Books
Download the following JSP books.
Free
JSP Download
BooksFree...;
Accessing database fro JSP
In This article I am going
Ajax Books
over ASP, PHP, JSP, etc.) - the coding on the client does become more...
Ajax Books
AJAX - Asynchronous JavaScript and XML - some books and resource links
These books2ME Books
J2ME Books
Free
J2ME Books
J2ME programming camp... a list of all the J2ME related books I know about. The computer industry is fast - JSP-Servlet
jsp image preview when uploading using jsp? Hi friend,
I am sending you a link. This link will help you.
Please visit for more information. - JSP-Servlet
; Hi friend,
I am sending a link. This link will help you.
please visit for more information. sir, can we include more java code asif we do in servlet
java/jsp code to download a video
java/jsp code to download a video how can i download a video using jsp/servlet
JSP
;jsp:param ..../></jsp:forward>
For more information, visit...what is JSP forward tag for what is JSP forward tag for
It forwards the current request to another JSP page. Below is the syntax
Free Java Download
Free Java Download Hi,
I am beginner in Java and trying to find the url to download Java. Can anyone provide me the url of Free Java Download ?
Is Java
EL error - JSP-Servlet
jsp error Hello, my name is sreedhar. i wrote a jsp application... out of this problem as soon as possible. Hi friend,
I am... in detail.
Visit for more information.
Thanks
download - JSP-Servlet
download here is the code in servlet for download a file.
while...;
/**
*
* @author saravana
*/
public class download extends HttpServlet... if an I/O error occurs
*/
@Override
protected void doPost
jsp - JSP-Interview Questions
for developing webapplication?Actually i m developing a jsp project,ur ans for my quest is very useful for my viva presentation. Hi
JavaServer Pages (JSP....
-----------------------------------------------
Read for more information with example.
Deployment on Server that can be used simultaneously by more than one user
Deployment on Server that can be used simultaneously by more than one user Sir, I have deployed my web application developed using JSP & sevlet. how could i access it on the network using xp os. I have deployed it on Tomcat
searching books
searching books how to write a code for searching books in a library through
CORBA and RMI Books
CORBA and RMI
Books
... and CORBA
The standard by which all other CORBA books are judged, Client..., JSP, and Servlets
CodeNotes provides the most succinct, accurate
Perl Programming Books
This book is a work in progress. I have some ideas about what will go into the next few chapters, but I am open to suggestions. I am looking for interesting...
Perl Programming Books
| http://roseindia.net/tutorialhelp/comment/4685 | CC-MAIN-2014-42 | refinedweb | 540 | 76.32 |
3. Users and directories¶
Right now your Django project is at
/root, or maybe at
/home/joe. The first thing we are going to fix is put your Django
project in a proper place.
I will be using
$DJANGO_PROJECT as the name of your Django
project.
3.1. Creating a user and group¶
It’s a good idea to not run Django as root. We will create a user
specifically for that, and we will give the user the same name as the
Django project, i.e.
$DJANGO_PROJECT. However, in principle it can
be different, and I will be using
$DJANGO_USER to denote the user
name, so that you can distinguish when I’m talking about the user and
when about the project.
Execute this command:
adduser --system --home=/var/opt/$DJANGO_PROJECT \ --no-create-home --disabled-password --group \ --shell=/bin/bash $DJANGO_USER
Here is why we use these parameters:
--system
- This tells
adduserto create a system user, as opposed to creating a normal user. System users are intended to run programs, whereas normal users are people. Because of this parameter,
adduserwill assign a user id less than 1000, which is only a convention for knowing that this is a system user. Otherwise there isn’t much difference.
--home=/var/opt/$DJANGO_PROJECT
- This specifies the home directory for the user. For system users, it doesn’t really matter which directory we will choose, but by convention we choose the one which holds the program’s data. We will talk about the
/var/opt/$DJANGO_PROJECTdirectory later.
--no-create-home
- We tell
adduserto not create the home directory. We could allow it to create it, but we will create it ourselves later on, for instructive purposes.
--disabled-password
- The password will be, well, disabled. This means that you won’t be able to become this user by using a password. However, the root user can always become another user (e.g. with
su) without using a password, so we don’t need one.
--group
- This tells
adduserto not only add a new user, but to also add a new group, having the same name as the user, and make the new user a member of the new group. We will see further below why this is useful. I will be using
$DJANGO_GROUPto denote the new group. In principle it could be different than
$DJANGO_USER(but then the procedure of creating the user and the group would be slightly different), but the most important thing is that I want it to be perfectly clear when we are talking about the user and when we are talking about the group.
--shell=/bin/bash
- By default,
adduseruses
/bin/falseas the shell for system users, which practically means they are disabled;
/bin/falsecan’t run any commands. We want the user to have the most common shell used in GNU/Linux systems,
/bin/bash.
3.2. The program files¶
Your Django project should be structured either like this:
$DJANGO_PROJECT/ |-- manage.py |-- requirements.txt |-- your_django_app/ `-- $DJANGO_PROJECT/
or like this:
$REPOSITORY_ROOT/ |-- requirements.txt `-- $DJANGO_PROJECT/ |-- manage.py |-- your_django_app/ `-- $DJANGO_PROJECT/
I prefer the former, but some people prefer the extra repository root directory.
We are going to place your project inside
/opt. This is a standard
directory for program files that are not part of the operating system.
(The ones that are installed by the operating system go to
/usr.)
So, clone or otherwise copy your Django project in
/opt/$DJANGO_PROJECT or in
/opt/$REPOSITORY_ROOT. Do
this as the root user. Create the virtualenv for your project as
the root user as well:
virtualenv --system-site-packages --python=/usr/bin/python3 \ /opt/$DJANGO_PROJECT/venv /opt/$DJANGO_PROJECT/venv/bin/pip install \ -r /opt/$DJANGO_PROJECT/requirements.txt
While it might seem strange that we are creating these as the root user
instead of as
$DJANGO_USER, it is standard practice
for program files to belong to the root user. If you check, you will see
that
/bin/ls belongs to the root user, though you may be running it
as joe. In fact, it would be an error for it to belong to joe, because
then joe would be able to modify it. So for security purposes it’s
better for program files to belong to root.
This poses a problem: when
$DJANGO_USER attempts to execute your
Django application, it will not have permission to write
the compiled Python files in the
/opt/$DJANGO_PROJECT directory,
because this is owned by root. So we need to pre-compile
these files as root:
/opt/$DJANGO_PROJECT/venv/bin/python -m compileall \ -x /opt/$DJANGO_PROJECT/venv/ /opt/$DJANGO_PROJECT
The option
-x /opt/$DJANGO_PROJECT/venv/ tells compileall to exclude
directory
/opt/$DJANGO_PROJECT/venv from compilation. This is
because the virtualenv takes care of its own compilation and we should
not interfere.
3.3. The data directory¶
As I already hinted, our data directory is going to be
/var/opt/$DJANGO_PROJECT. It is standard policy for programs
installed in
/opt to put their data in
/var/opt. Most notably,
we will store media files in there (in a later chapter). We will also
store the SQLite file there. Usually in production we use a
different RDBMS, but we will deal with this in a later chapter as well.
So, let’s now prepare the data directory:
mkdir -p /var/opt/$DJANGO_PROJECT chown $DJANGO_USER /var/opt/$DJANGO_PROJECT
Besides creating the directory, we also changed its owner to
$DJANGO_USER. This is necessary because Django will be needing to
write data in that directory, and it will be running as that user, so it
needs permission to do so.
3.4. The log directory¶
Later we will setup our Django project to write to log files in
/var/log/$DJANGO_PROJECT. Let’s prepare the directory.
mkdir -p /var/log/$DJANGO_PROJECT chown $DJANGO_USER /var/log/$DJANGO_PROJECT
3.5. The production settings¶
Debian puts configuration files in
/etc. More specifically, the
configuration for programs that are installed in
/opt is supposed to
go to
/etc/opt, which is what we will do:
mkdir /etc/opt/$DJANGO_PROJECT
For the time being this directory is going to have only
settings.py;
later it will have a bit more. Your
/etc/opt/$DJANGO_PROJECT/settings.py file should be like this:
from DJANGO_PROJECT.settings import * DEBUG = True ALLOWED_HOSTS = ['$DOMAIN', ''] DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': '/var/opt/$DJANGO_PROJECT/$DJANGO_PROJECT.db', } }
Note
The above is not valid Python until you replace
$DJANGO_PROJECT
with the name of your django project and
$DOMAIN with your
domain. In all examples until now you might have been able to copy
and paste the code from the book and use shell variables for
$DJANGO_PROJECT,
$DJANGO_USER,
$DJANGO_GROUP, and so on.
This is, indeed, the reason I chose this notation. However, in some
places, like in this Python, you have to actually replace it
yourself. (Occasionally I use DJANGO_PROJECT without the leading
dollar sign, in order to get the syntax highlighter to work.)
Note
These settings might give you the error “The SECRET_KEY setting must not be empty”, or “Unknown command: ‘collectstatic’”, or some other error that indicates a problem with the settings. If this happens, a likely explanation is that this line at the top of your production settings isn’t working correctly:
from DJANGO_PROJECT.settings import *
It may be that, in your Django project,
settings is a directory
that has no
__init__.py file or an empty
__init__.py file.
Maybe you have to change the line to this:
from DJANGO_PROJECT.settings.base import *
Check what your project’s settings file actually is, and import from that one.
Let’s now secure the production settings. We don’t want other users of the system to be able to read the file, because it contains sensitive information. Maybe not yet, but after a few chapters it is going to have the secret key, the password to the database, the password for the email server, etc. At this point, you are wondering: what other users? I am the only person using this server, and I have created no users. Indeed, now that it’s so easy and cheap to get small servers and assign a single job to them, this detail is not as important as it used to be. However, it is still a good idea to harden things a little bit. Maybe a year later you will create a normal user account on that server as an unrelated convenience for a colleague.
If your Django project has a vulnerability, an attacker might be able to
give commands to the system as the user as which the project runs (i.e.
as
$DJANGO_USER). Likewise, in the future you might install some
other web application, and that other web application might have a
vulnerability and could be attacked, and the attacker might be able to
give commands as the user running that application. In that case, if we
have secured our
settings.py, the attacker won’t be able to read it.
Eventually servers get compromised, and we try to set up the system in
such a way as to minimize the damage, and we can minimize it if we
contain it, and we can contain it if the compromising of an application
does not result in the compromising of other applications. This is why
we want to run each application in its own user and its own group.
Here is how to make the contents of
/etc/opt/$DJANGO_PROJECT
unreadable by other users:
chgrp $DJANGO_GROUP /etc/opt/$DJANGO_PROJECT chmod u=rwx,g=rx,o= /etc/opt/$DJANGO_PROJECT
What this does is make the directory unreadable by users other than
root and
$DJANGO_USER. The directory is owned by
root, and
the first command above changes the group of the directory to
$DJANGO_GROUP. The second command changes the permissions of the
directory so that:
- u=rwx
- The owner has permission to read (rx) and write (w) the directory (the
uin
u=rwxstands for “user”, but actually it means the “user who owns the directory”). The owner is
root. Reading a directory is denoted with
rxrather than simply
r, where the
xstands for “search”; but giving a directory only one of the
rand
xpermissions is an edge case that I’ve seen only once in my life. For practical purposes, when you want a directory to be readable, you must specify both
rand
x. (This applies only to directories; for files, the
xis the permission to execute the file as a program.)
- g=rx
- The group has permission to read the directory. More precisely, users who belong in that group have permission to read the directory. The directory’s group is
$DJANGO_GROUP. The only user in that group is
$DJANGO_USER, so this adjustment applies only to that user.
- o=
- Other users have no permission, they can’t read or write to the directory.
You might have expected that it would have been easier to tell the
system “I want
root to be able to read and write, and
$DJANGO_USER to be able to only read”. Instead, we did something
much more complicated: we made
$DJANGO_USER belong to a
$DJANGO_GROUP, and we made the directory readable by that group,
thus indirectly readable by the user. The reason we did it this way is
an accident of history. In Unix there has traditionally been no way to
say “I want
root to be able to read and write, and
$DJANGO_USER
to be able to only read”. In many modern Unixes, including Linux, it is
possible using Access Control Lists, but this is a feature added later,
it does not work the same in all Unixes, and its syntax is harder to
use. The way we use here works the same in FreeBSD, HP-UX, and all other
Unixes, and it is common practice everywhere.
Finally, we need to compile the settings file. Your settings file
and the
/etc/opt/$DJANGO_PROJECT directory is owned by root, and, as
with the files in
/opt, Django won’t be able to write the
compiled version, so we pre-compile it as root:
/opt/$DJANGO_PROJECT/venv/bin/python -m compileall \ /etc/opt/$DJANGO_PROJECT
Compiled files are the reason we changed the permissions of the
directory and not the permissions of
settings.py. When Python writes
the compiled files (which also contain the sensitive information), it
does not give them the permissions we want, which means we’d need to be
chgrping and chmoding each time we compile. By removing read permissions
from the directory, we make sure that none of the files in the directory
is readable; in Unix, in order to read file
/etc/opt/$DJANGO_PROJECT/settings.py, you must have permission to
read
/ (the root directory),
/etc,
/etc/opt,
/etc/opt/$DJANGO_PROJECT, and
/etc/opt/$DJANGO_PROJECT/settings.py.
You can check the permissions of a directory with the
-d option of
ls, like this:
ls -lhd / ls -lhd /etc ls -lhd /etc/opt ls -lhd /etc/opt/$DJANGO_PROJECT
(In the above commands, if you don’t use the
-d option it will show
the contents of the directory instead of the directory itself.)
Hint
Unix permissions
When you list a file or directory with the
-l option of
ls,
it will show you something like
-rwxr-xr-x at the beginning of
the line. The first character is the file type:
- for a file and
d for a directory (there are also some more types, but we won’t
bother with them). The next nine characters are the permissions:
three for the user, three for the group, three for others.
rwxr-xr-x means “the user has permission to read, write and
search/execute, the group has permission to read and search/execute
but not write, and so do others”.
rwxr-xr-x can also be denoted as 755. If you substitute 0 in
place of a hyphen and 1 in place of r, w and x, you get 111 101 101.
In octal, this is 755. Instead of
chmod u=rwx,g=rx,o= /etc/opt/$DJANGO_PROJECT
you can type
chmod 750 /etc/opt/$DJANGO_PROJECT
which means exactly the same thing. People use this latter version much more than the other one, because it is so much easier to type, and because converting permissions into octal becomes second nature with a little practice.
3.6. Managing production vs. development settings¶
How to manage production vs. development settings seems to be an eternal
question. Many people recommend, instead of a single
settings.py
file, a
settings directory containing
__init__.py and
base.py.
base.py is the base settings, those that are the same
whether in production or development or testing. The directory often
contains
local.py (alternatively named
dev.py), with common
development settings, which might or might not be in the repository.
There’s often also
test.py, settings that are used when testing.
Both
local.py and
test.py start with this line:
from .base import *
Then they go on to override the base settings or add more settings.
When the project is set up like this,
manage.py is usually modified
so that, by default, it uses
$DJANGO_PROJECT.settings.local instead
of simply
$DJANGO_PROJECT.settings. For more information on this
technique, see Section 5.2, “Using Multiple Settings Files”, in the book
Two Scoops of Django; there’s also a stackoverflow answer about it.
Now, people who use this scheme sometimes also have
production.py in
the settings directory of the repository. Call me a perfectionist (with
deadlines), but the production settings are the administrator’s job, not
the developer’s, and your django project’s repository is made by the
developers. You might claim that you are both the developer and the
administrator, since it’s you who are developing the project and
maintaining the deployment, but in this case you are assuming two roles,
wearing a different hat each time. Production settings don’t belong in
the project repository any more than the nginx or PostgreSQL
configuration does.
The proper place to store such settings is another repository—the
deployment repository. It can be as simple as holding only the
production
settings.py (along with
README and
.gitignore),
or as complicated as containing all your nginx, PostgreSQL, etc.,
configuration for several servers, along with the “recipe” for how to
set them up, written with a configuration management system such as
Ansible.
If you choose, however, to keep your production settings in your Django
project repository, then your
/etc/opt/$DJANGO_PROJECT/settings.py
file shall eventually be a single line:
from $DJANGO_PROJECT.settings.production import *
However, I don’t want you to do this now. We aren’t yet going to use our
real production settings, because we are going step by step. Instead,
create the
/etc/opt/$DJANGO_PROJECT/settings.py file as I explained
in the previous section.
3.7. Running the Django server¶
Warning
We are running Django with
runserver here, which is inappropriate
for production. We are doing it only temporarily, so that you
understand several concepts. We will run Django correctly in the
chapter about Gunicorn.
su $DJANGO_USER source /opt/$DJANGO_PROJECT/venv/bin/activate export PYTHONPATH=/etc/opt/$DJANGO_PROJECT:/opt/$DJANGO_PROJECT export DJANGO_SETTINGS_MODULE=settings python /opt/$DJANGO_PROJECT/manage.py migrate python /opt/$DJANGO_PROJECT/manage.py runserver 0.0.0.0:8000
You could also do that in an exceptionally long command (provided you
have already done the
migrate part), like this:
PYTHONPATH=/etc/opt/$DJANGO_PROJECT:/opt/$DJANGO_PROJECT \ DJANGO_SETTINGS_MODULE=settings \ su $DJANGO_USER -c \ "/opt/$DJANGO_PROJECT/venv/bin/python \ /opt/$DJANGO_PROJECT/manage.py runserver 0.0.0.0:8000"
Hint
su
You have probably heard of
sudo, which is a very useful program
on Unix client machines (desktops and laptops). On the server,
sudo is less common and we use
su instead.
su, like
sudo, changes the user that executes a program. If
you are user joe and you execute
su -c ls, then
ls is run as
root.
su will ask for the root password in order to proceed.
su alice -c ls means “execute
ls as user alice”.
su alice
means “start a shell as user alice”; you can then type commands as
user alice, and you can enter
exit to “get out” of
su, that
is, to exit the shell than runs as alice. If you are a normal user
su will ask you for alice’s password. If you are root, it will
become alice without questions. This should make clear how the
su
command works when you run the Django server as explained above.
sudo works very differently from
su. Instead of asking the
password of the user you want to become, it asks for your password,
and has a configuration file that describes which user is allowed to
become what user and with what constraints. It is much more
versatile.
su does only what I described and nothing more.
su
is guaranteed to exist in all Unix systems, whereas
sudo is an
add-on that must be installed. By default it is usually installed on
client machines, but not on servers.
su is much more commonly
used on servers and shell scripts than
sudo.
Do you understand that very clearly? If not, here are some tips:
- Make sure you have a grip on virtualenv and environment variables.
- Python reads the
PYTHONPATHenvironment variable and adds the specified directories to the Python path.
- Django reads the
DJANGO_SETTINGS_MODULEenvironment variable. Because we have set it to “settings”, Django will attempt to import
settingsinstead of the default (the default is
$DJANGO_PROJECT.settings, or maybe
$DJANGO_PROJECT.settings.local).
- When Django attempts to import
settings, Python looks in its path. Because
/etc/opt/$DJANGO_PROJECTis listed first in
PYTHONPATH, Python will first look there for
settings.py, and it will find it there.
- Likewise, when at some point Django attempts to import
your_django_app, Python will look in
/etc/opt/$DJANGO_PROJECT; it won’t find it there, so then it will look in
/opt/$DJANGO_PROJECT, since this is next in
PYTHONPATH, and it will find it there.
- If, before running
manage.py [whatever], we had changed directory to
/opt/$DJANGO_PROJECT, we wouldn’t need to specify that directory in
PYTHONPATH, because Python always adds the current directory to its path. This is why, in development, you just tell it
python manage.py [whatever]and it finds your project. We prefer, however, to set the
PYTHONPATHand not change directory; this way our setup will be clearer and more robust.
Instead of using
DJANGO_SETTINGS_MODULE, you can also use the
--settings parameter of
manage.py:
PYTHONPATH=/etc/opt/$DJANGO_PROJECT:/opt/$DJANGO_PROJECT \ su $DJANGO_USER -c \ "/opt/$DJANGO_PROJECT/venv/bin/python \ /opt/$DJANGO_PROJECT/manage.py runserver --settings=settings 0.0.0.0:8000"
(
manage.py also supports a
--pythonpath parameter which could be
used instead of
PYTHONPATH, however it seems that
--settings
doesn’t work correctly together with
--pythonpath, at least not in
Django 1.8.)
If you fire up your browser and visit, you should see your Django project in action.
3.8. Chapter summary¶
- Create a system user and group with the same name as your Django project.
- Put your Django project in
/opt, with all files owned by root.
- Put your virtualenv in
/opt/$DJANGO_PROJECT/venv, with all files owned by root.
- Put your data files in a subdirectory of
/var/optwith the same name as your Django project, owned by the system user you created. If you are using SQLite, the database file will go in there.
- Put your settings file in a subdirectory of
/etc/optwith the same name as your Django project, whose user is root, whose group is the system group you created, that is readable by the group and writeable by root, and whose contents belong to root.
- Precompile the files in
/opt/$DJANGO_PROJECTand
/etc/opt/$DJANGO_PROJECT.
- Run
manage.pyas the system user you created, after setting the environment variables
PYTHONPATH=/etc/opt/$DJANGO_PROJECT:/opt/$DJANGO_PROJECTand
DJANGO_SETTINGS_MODULE=settings. | https://djangodeployment.readthedocs.io/en/latest/03-users-and-directories.html | CC-MAIN-2019-35 | refinedweb | 3,634 | 55.03 |
[
]
Hiram Chirino resolved APLO-133.
--------------------------------
Resolution: Fixed
This is now fixed. The export file format is now compatible between the BDB and LevelDB stores
and uses a compressed tar instead of zip container to avoid file size limit issues.
> LevelDB Store import/export not working.
> ----------------------------------------
>
> Key: APLO-133
> URL:
> Project: ActiveMQ Apollo
> Issue Type: Bug
> Reporter: Hiram Chirino
> Assignee: Hiram Chirino
> Fix For: 1.0
>
>
> Export with:
> ./bin/apollo-broker store-export data.zip
> And then:
> rm -rf data/*
> ./bin/apollo-broker store-import data.zip
> but import did not restore the original data.
> Both the export and import command seem to not return (perhaps it's a karaf shell oddness)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/activemq-commits/201201.mbox/%3C1490400007.63915.1327185340685.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2014-52 | refinedweb | 136 | 52.56 |
Post Syndicated from Mohamed AboElKheir original
AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. Your HSMs are part of a CloudHSM cluster. CloudHSM automatically manages synchronization, high availability, and failover within a cluster.
CloudHSM is part of the AWS Cryptography suite of services, which also includes AWS Key Management Service (KMS) and AWS Certificate Manager Private Certificate Authority (ACM PCA). KMS and ACM PCA are fully managed services that are easy to use and integrate. You’ll generally use AWS CloudHSM only if your workload needs a single-tenant HSM under your own control, or if you need cryptographic algorithms that aren’t available in the fully-managed alternatives.
CloudHSM offers several options for you to connect your application to your HSMs, including PKCS#11, Java Cryptography Extensions (JCE), or Microsoft CryptoNG (CNG). Regardless of which library you choose, you’ll use the CloudHSM client to connect to all HSMs in your cluster. The CloudHSM client runs as a daemon, locally on the same Amazon Elastic Compute Cloud (EC2) instance or server as your applications.
The deployment process is straightforward if you’re running your application directly on your compute resource. However, if you want to deploy applications using the HSMs in containers, you’ll need to make some adjustments to the installation and execution of your application and the CloudHSM components it depends on. Docker containers don’t typically include access to an init process like systemd or upstart. This means that you can’t start the CloudHSM client service from within the container using the general instructions provided by CloudHSM. You also can’t run the CloudHSM client service remotely and connect to it from the containers, as the client daemon listens to your application using a local Unix Domain Socket. You cannot connect to this socket remotely from outside the EC2 instance network namespace.
This blog post discusses the workaround that you’ll need in order to configure your container and start the client daemon so that you can utilize CloudHSM-based applications with containers. Specifically, in this post, I’ll show you how to run the CloudHSM client daemon from within a Docker container without needing to start the service. This enables you to use Docker to develop, deploy and run applications using the CloudHSM software libraries, and it also gives you the ability to manage and orchestrate workloads using tools and services like Amazon Elastic Container Service (Amazon ECS), Kubernetes, Amazon Elastic Container Service for Kubernetes (Amazon EKS), and Jenkins.
Solution overview
My solution shows you how to create a proof-of-concept sample Docker container that is configured to run the CloudHSM client daemon. When the daemon is up and running, it runs the AESGCMEncryptDecryptRunner Java class, available on the AWS CloudHSM Java JCE samples repo. This class uses CloudHSM to generate an AES key, then it uses the key to encrypt and decrypt randomly generated data.
Note: In my example, you must manually enter the crypto user (CU) credentials as environment variables when running the container. For any production workload, you’ll need to carefully consider how to provide, secure, and automate the handling and distribution of your HSM credentials. You should work with your security or compliance officer to ensure that you’re using an appropriate method of securing HSM login credentials for your application and security needs.
Figure 1: Architectural diagram
Prerequisites
To implement my solution, I recommend that you have basic knowledge of the below:
- CloudHSM
- Docker
- Java
Here’s what you’ll need to follow along with my example:
- An active CloudHSM cluster with at least one active HSM. You can follow the Getting Started Guide to create and initialize a CloudHSM cluster. (Note that for any production cluster, you should have at least two active HSMs spread across Availability Zones.)
- An Amazon Linux 2 EC2 instance in the same Amazon Virtual Private Cloud in which you created your CloudHSM cluster. The EC2 instance must have the CloudHSM cluster security group attached—this security group is automatically created during the cluster initialization and is used to control access to the HSMs. You can learn about attaching security groups to allow EC2 instances to connect to your HSMs in our online documentation.
- A CloudHSM crypto user (CU) account created on your HSM. You can create a CU by following these user guide steps.
Solution details
- On your Amazon Linux EC2 instance, install Docker:
- Start the docker service:
- Create a new directory and step into it. In my example, I use a directory named “cloudhsm_container.” You’ll use the new directory to configure the Docker image.
- Copy the CloudHSM cluster’s CA certificate (customerCA.crt) to the directory you just created. You can find the CA certificate on any working CloudHSM client instance under the path /opt/cloudhsm/etc/customerCA.crt. This certificate is created during initialization of the CloudHSM Cluster and is needed to connect to the CloudHSM cluster.
- In your new directory, create a new file with the name run_sample.sh that includes the contents below. The script starts the CloudHSM client daemon, waits until the daemon process is running and ready, and then runs the Java class that is used to generate an AES key to encrypt and decrypt your data.
- In the new directory, create another new file and name it Dockerfile (with no extension). This file will specify that the Docker image is built with the following components:
- The AWS CloudHSM client package.
- The AWS CloudHSM Java JCE package.
- OpenJDK 1.8. This is needed to compile and run the Java classes and JAR files.
- Maven, a build automation tool that is needed to assist with building the Java classes and JAR files.
- The AWS CloudHSM Java JCE samples that will be downloaded and built.
- Cut and paste the contents below into Dockerfile.
Note: Make sure to replace the HSM_IP line with the IP of an HSM in your CloudHSM cluster. You can get your HSM IPs from the CloudHSM console, or by running the describe-clusters AWS CLI command.
- Now you’re ready to build the Docker image. Use the following command, with the name jce_sample_client. This command will let you use the Dockerfile you created in step 6 to create the image.
- To run a Docker container from the Docker image you just created, use the following command. Make sure to replace the user and password with your actual CU username and password. (If you need help setting up your CU credentials, see prerequisite 3. For more information on how to provide CU credentials to the AWS CloudHSM Java JCE Library, refer to the steps in the CloudHSM user guide.)
If successful, the output should look like this:
Conclusion
My solution provides an example of how to run CloudHSM workloads on Docker containers. You can use it as a reference to implement your cryptographic application in a way that benefits from the high availability and load balancing built in to AWS CloudHSM without compromising on the flexibility that Docker provides for developing, deploying, and running applications. If you have comments about this post, submit them in the Comments section below.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter. | https://noise.getoto.net/2019/04/02/how-to-run-aws-cloudhsm-workloads-on-docker-containers/ | CC-MAIN-2020-29 | refinedweb | 1,235 | 52.39 |
![if gte IE 9]><![endif]>
Flag
this post as spam/abuse.
Please use Formula Debugger ,
I tried putting 'return' and no use. When I try with the debugger it shows the correct value without any error. But I couldn't create this field as its saying there is an error in the formula
Any update on this?
This is what I am getting when debug the formula..
It seems like you have two values in ms, subtracting them and dividing dividing on number of ms in 24 hours. What seems to be wrong?
Debugger says nothing wrong but the system is not allowing to create this formula field saying error in the formula without telling what the error is.
If no value is set for Time field, corresponding template token will be resolved as empty string. That will cause an error in your formula. But if value is set, formula works just fine.
To resolve this try using parseInt:
parseInt("{!time}")
In my case I only need to calculate the value when the time is update other wise it should return 0.
So I tried with,
if ("{!End_Time_tt}" !== "" && "{!Start_Time_tt}" !== ""){
return ( ({!End_Time_tt} - {!Start_Time_tt}) / (60*60*1000) );
}else{
return 0;
}
But still I get the error when there is at least one record without time values. I wonder why it is trying to compute something inside a failed condition.I am still looking for a way to achieve this without using after update trigger.Ithrees
The server side replaces the tokens with actual values during validation. So, your formula becomes syntactically incorrect to process.
line 2 in your formula turns out as : return ( ( - ) ) / 60*60*1000); (if the values are empty)
Try assigning them to variables and use them. The forumla should validte successfully then.
var x = "{!End_Time_tt}";
var y = "{!Start_Time_tt}";
if (x !== "" && y !== ""){
return ( (x - y) / (60*60*1000) );
}else{
return 0;
}
Please let us know if this works.
I would recommend
parseInt(x) - parseInt(y)
Hi,
I think he is trying to return a Decimal, use parseFloat() to your start time and end time BUT you cannot directly parse it once the value is empty it will return NaN(see image below), I suggest to use conditional variable. See code below, just replace my tokens.
Images: Null value in using parseInt/parseFloat empty field
var sTime = "{!Start_Time}" == "" ? 0 : parseFloat("{!Start_Time}");
var eTime = "{!End_Time}" == "" ? 0 : parseFloat("{!End_Time}");
rbv_api.println(sTime);
rbv_api.println(eTime);
if (eTime !== 0 && sTime !== 0){
return ( (eTime - sTime) / (60*60*1000) );
}else{
return 0;
}
Hope this may help.
Regards,
Orchid | https://community.progress.com/community_groups/rollbase/f/25/p/17094/62190 | CC-MAIN-2018-34 | refinedweb | 422 | 66.94 |
Opened 3 years ago
Last modified 3 months ago
#8303 new bug
defer StackOverflow exceptions (rather than dropping them) when exceptions are masked
Description
See for a very simple program (main') that accidentally evades the stack size limit, running to completion even though it has allocated hundreds of megabytes of stack chunks, and my comment for an explanation of this behavior.
ryani suggested that when a thread exceeds its stack limit but it is currently blocking exceptions, the RTS shouldn't simply drop the StackOverflow exception, but rather deliver it when the mask operation completes. That sounds sensible to me and it would give a nice guarantee that when any individual mask operation uses a small amount of stack, the stack size limit is approximately enforced.
(I know that the default stack size limit may go away or essentially go away, but it can still be nice when developing to use a small stack size limit, so that one's system isn't run into the ground by infinite recursion quickly gobbling up tons of memory.)
Attachments (2)
Change History (23)
comment:1 Changed 3 years ago by ezyang
- Cc simonmar added
- difficulty changed from Unknown to Easy (less than 1 hour)
- Owner set to ezyang
- Version changed from 7.6.3 to 7.7
Changed 3 years ago by ezyang
Validated patch
comment:2 Changed 3 years ago by ezyang
- Status changed from new to patch
Posted a new patch that validates.
comment:3 Changed 3 years ago by simonmar
This is a very clever trick, and it has a certain devious beauty. I have no idea if we're going to trip up an assumption somewhere by having a throwTo message where the source and the destination are the same thread, but I'm happy to take the risk (and I'm sure we can fix up any breakage that results).
comment:4 Changed 3 years ago by simonpj
"a clever trick, with devious beauty". I love the sound of that, but Edward can you make sure there is a clear Note [Clever trick of devious beauty] to document the intent and explain how it works? Thanks.
Simon
comment:5 Changed 3 years ago by rwbarton
Here is a test, adapted from the original program. I tested that it fails currently and passes if I replace (hPutStrLn h "Hi") by (return ()); I haven't tested your patch, Edward.
Changed 3 years ago by rwbarton
comment:6 Changed 3 years ago by ezyang
=====> T8303(normal) 59 of 3801 [0, 0, 0] cd ./rts && '/home/hs01/ezyang/ghc-stackoverflow/inplace/bin/ghc-stage2' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history -o T8303 T8303.hs >T8303.comp.stderr 2>&1 cd ./rts && ./T8303 +RTS -K2K -RTS </dev/null >T8303.run.stdout 2>T8303.run.stderr OVERALL SUMMARY for test run started at Fri Oct 11 11:24:21 2013 PDT 0:00:02 spent to go through 3801 total tests, which gave rise to 14943 test cases, of which 14942 were skipped 0 had missing libraries 1 expected passes 0 expected failures
I'll go ahead and add extra docs, and then push.
comment:7 Changed 3 years ago by Edward Z. Yang <ezyang@…>
comment:8 Changed 3 years ago by ezyang
Hey Reid, I wanted to use your test case, but its use of /dev/null makes it not portable. Can you fix that?
comment:9 Changed 3 years ago by rwbarton
Hey Reid, I wanted to use your test case, but its use of /dev/null makes it not portable.
Does it really? I figured it would be okay since I noticed the testsuite driver redirected stdin from /dev/null anyways... does it do something different on Windows?
Can you fix that?
Hmm, maybe. I guess since I set the max stack size so low, I can reduce bignum to something like 500, set ignore_output, and have the failure mode of the test be that the program completes successfully... I'll try it.
comment:10 Changed 3 years ago by ezyang
I believe the situation is that msys will provide /dev/null, but MingW compiled things will not. I can pull it up on my Windows box and test it out if you like.
comment:11 Changed 2 years ago by simonmar
- Milestone set to 7.8.1
- Owner ezyang deleted
- Priority changed from normal to high
- Status changed from patch to new
comment:12 Changed 2 years ago by simonmar
- Owner set to thoughtpolice
comment:13 Changed 2 years ago by thoughtpolice
- Priority changed from high to low
comment:14 Changed 2 years ago by thoughtpolice
- Priority changed from low to high
comment:15 Changed 2 years ago by thoughtpolice
- Priority changed from high to low
This is merged, the testsuite needs investigation, but I believe this should be fine with MSYS.
comment:16 Changed 23 months ago by thoughtpolice
- Milestone changed from 7.8.3 to 7.8.4
Moving to 7.8.4.
comment:17 Changed 19 months ago by thoughtpolice
- Milestone changed from 7.8.4 to 7.10.1
Moving (in bulk) to 7.10.4
comment:18 Changed 16 months ago by thoughtpolice
- Milestone changed from 7.10.1 to 7.12.1
Moving to 7.12.1 milestone; if you feel this is an error and should be addressed sooner, please move it back to the 7.10.1 milestone.
comment:19 Changed 8 months ago by thoughtpolice
- Milestone changed from 7.12.1 to 8.0.1
Milestone renamed
comment:20 Changed 8 months ago by thomie
- Component changed from Runtime System to Test Suite
Bug is fixed. Just needs a test.
comment:21 Changed 3 months ago by thomie
- Milestone 8.0.1 deleted
Posted an untested patch I am validating. It is a little bit of a hack, but not very much. | https://ghc.haskell.org/trac/ghc/ticket/8303 | CC-MAIN-2016-18 | refinedweb | 983 | 70.02 |
What's the best way to extend the "starter" module to read from multiple boards at once?
Hi there, I'm a pretty inexperienced programmer (physics background but working in a HCI lab currently) and am working on a Unity app that can display the orientation of a human leg based on information streaming from 3 MMC sensors. So far I have modified the "starter" script to stream quaternions to a Unity app on an android phone in real time, and update the orientation of a cube accordingly.
The last step before I'm out of the woods programming-wise (and into the area I'm actually competent in) is extending this so that data is being taken from 3 sensors instead of 1. I spent most of today trying to modify the starter script but attempting to change certain parts to set up 3 sensors seemed to completely break the rest of the code. I'm sorry I don't have a concise/specific question here but if anyone could give me some tips on how to move forward with this it would save me a lot of hours. I know that there is a multimw script in the tutorial pack, but changing that to stream 3 quaternions seems more difficult than updating the starter script to connect to multiple boards (I think, anyways).
Thanks in advance.
For the record, this is my first time doing any Android programming whatsoever so it's been a challenge. I feel like my goal should be easy to accomplish if I knew what I was looking for so I'm hoping there's a simple solution.
I'm referring to the starter script located here:
My attempt so far involves keeping count of how many devices have been connected and if it is not yet the desired amount (3 in my case), then "startActivityForResult(navActivityIntent, REQUEST_START_APP);" is bypassed in the onDeviceSelected part of MainActivity. Once the required number of devices have been connected, the intent containing the 3 devices is passed to startActivityForResult.
To add to this I also made some changes to DeviceSetupActivityFragment, such as replacing "MetawearBoard metawear" with a list of such objects (similarly I have a list of SensorFusionBosch objects, and BluetoothDevice). Does this approach make sense in general? My main problem is that I have so little experience with android that I'm not even certain what many of the other methods are doing so it's hard to adjust them to work for multiple boards.
Looks go to me so far
This ended up being more trouble than its worth so I just wrote my own script. If anyone is trying to do something similar in the future (stream 3 quaternions simultaneously) my code is below. It's very bare bones but gets the job done. Basically it assigns the values to a static variable which is then accessed by an android service and streamed to my unity app. There are certainly much cleaner/more efficient ways to accomplish this but feel free to use my work as a starting point.
`public class MainActivity extends Activity implements ServiceConnection {
private BtleService.LocalBinder serviceBinder;
}`
Nice job! | https://mbientlab.com/community/discussion/comment/8454 | CC-MAIN-2021-25 | refinedweb | 527 | 56.49 |
In Web based Application, each of the users have different privileges and access rights based on their roles. Each of these roles can have Read or Write Access for different webpages.When the user logs into the System, the button are disabled/enabled or hidden/shown based on the role of the user. Generally we have to write a lot of code to provide this kind of functionality of disabling/enabling or hiding/showing the button and that too this code has to be implemented on each of the web page.
This can be avoided by overriding the default aspx button. A property called "AccessRight" is added to the default aspx button where we can pass the type of the access for the button. The button is automatically disabled/enabled or hidden/shown based on the access right of the user and access type of the button.
In this article, I will show you how to override the default button which saves us from writing a lot of code. The code is written in C#.
The following code shows you how to override the default aspx button and add a property called "AccessRight" which defines the type of access for the button.
namespace
We are required to make one c# class or vb class which can be included in the ASP.Net project itself or the class can be compiled separately into a dll which can be referred in the ASP.Net project.
Custom Namespace used
akhilpittu: The namespace of the control which can be changed accordingly.
Public Class used
The button is defined as a public C# class, which is inherited from System.Web.UI.WebControls.Button class. The button class contains one Property called AccessRight and one Protected Method.
Property UsedAccessRight: This property is used to set the access type (Read/Write) for the button during the design mode.
Protected Methods UsedOnPreRender: This method is used to disable/enable or hide/show the button before it is rendered to the webpage.
Compilation of ClassThis class is compiled using the command prompt. Command Prompt window can be opened from Programs->Microsoft Visual Studio.Net->Visual Studio Tools ->Visual Studio.Net Command Prompt
vbc /r:System.dll,System.Web.dll /t:library /out:button.dll C:/button.vb
csc /r:System.dll,System.Web.dll /t:library /out:button.dll C:/button.cs
Switches used
button.cs is the name of class file which is to be compiled. Here in this example, this file is contained in the C Drive but you can put this file in any folder you wish to. But you will have to give the full path of button.cs file while compiling.
Using the button Control in ASPX FileThis dll has to be included in the Reference of the Web Application before it can be used in an aspx page
Registering the Control in the ASPX FileThe control needs to be registered in the aspx page in which we wish to use this control using the Register directive.
<%@ Register TagPrefix="aspx" NameSpace="akhilpittu" Assembly="button"%>
Using Control in the ASPX File<aspx:button</aspx:button>
Here we have seen how to override the default button to make it disable/enable or hide/show based on the Access Right of the user who has logged into the System thus protecting the System from any action which is not allowed for a particular user. This will certainly save your time when you get down writing code.
Implementing Security Access Rights in ASP.NET Button
Display Alphabetically Sorted Data in a Data Grid
PLZ SEND ME THE LINK TO DOWNLODE ARTICLE | http://www.c-sharpcorner.com/UploadFile/Akhilesh%20Kumar/SecurityToASPNetButton11212005060917AM/SecurityToASPNetButton.aspx | crawl-003 | refinedweb | 603 | 63.49 |
Werner,
I think what you may want is something like this:
def OnButton1Button(self, event):
print 'done it'
### self.figure.clear()
### self.axes = self.figure.add_subplot(111)
self.axes.cla() # <-- clear the axes
t = arange(0.0,4.0,0.01)
s = sin(2*pi*t)
self.axes.plot(t,s)
self.axes.set_xlabel('Time 2 (s)')
self.axes.set_ylabel('Price 2 ($)')
self.canvas.draw() # <-- force a redraw
Is that OK? It works for me on Windows and Linux. I do the
same thing for 'make a fresh plot' in my own codes that I know
work OK on Mac OS X as well.
Cheers,
--Matt
View entire thread | https://sourceforge.net/p/matplotlib/mailman/message/9333975/ | CC-MAIN-2017-47 | refinedweb | 109 | 71 |
Urwid uses widgets to divide up the available screen space. This makes it easy to create a fluid interface that moves and changes with the user’s terminal and font size.
The result of rendering a widget is a canvas suitable for displaying on the screen. When we render the topmost widget:
Widgets (a), (b) and (e) are called container widgets because they contain other widgets. Container widgets choose the size and position their contained widgets.
Container widgets must also keep track of which one of their contained widgets is in focus. The focus is used when handling keyboard input. If in the above example (b) ‘s focus widget is (e) and (e) ‘s focus widget is (f) then keyboard input will be handled this way:
The size of a widget is measured in screen columns and rows. Widgets that are given an exact number of screen columns and rows are called box widgets. The topmost widget is always a box widget.
Much of the information displayed in a console user interface is text and the best way to display text is to have it flow from one screen row to the next. Widgets like this that require a variable number of screen rows are called flow widgets. Flow widgets are given a number of screen columns and can calculate how many screen rows they need.
Occasionally it is also useful to have a widget that knows how many screen columns and rows it requires, regardless of the space available. This is called a fixed widget.
It is an Urwid convention to use the variables maxcol and maxrow to store a widget’s size. Box widgets require both of (maxcol, maxrow) to be specified.
Flow widgets expect a single-element tuple (maxcol,) instead because they calculate their maxrow based on the maxcol value.
Fixed widgets expect the value () to be passed in to functions that take a size because they know their maxcol and maxrow values.
Basic and graphic widgets are the content with which users interact. They may also be used as part of custom widgets you create.
Decoration widgets alter the appearance or position of a single other widget. The widget they wrap is available as the original_widget property. If you might be using more than one decoration widget you may use the base_widget property to access the “most” original_widget. Widget.base_widget points to self on all non-decoration widgets, so it is safe to use in any situation.
Container widgets divide their available space between their child widgets. This is how widget layouts are defined. When handling selectable widgets container widgets also keep track of which of their child widgets is in focus. Container widgets may be nested, so the actual widget in focus may be many levels below the topmost widget.
Urwid’s container widgets have a common API you can use, regardless of the container type. Backwards compatibility is still maintained for the old container-specific ways of accessing and modifying contents, but this API is now the preferred way of modifying and traversing containers.
container.focus
is a read-only property that returns the widget in focus for this container. Empty containers and non-container widgets (that inherit from Widget) return None.
container.focus_position
is a read/write property that provides access to the position of the container’s widget in focus. This will often be a integer value but may be any object. Columns, Pile, GridFlow, Overlay and ListBox with a SimpleListWalker or SimpleFocusListWalker as its body use integer positions. Frame uses 'body', 'header' and 'footer'; ListBox with a custom list walker will use the positions the list walker returns.
Reading this value on an empty container or on any non-container widgets (that inherit from Widget) raises an IndexError. Writing to this property with an invalid position will also raise an IndexError. Writing a new value automatically marks this widget to be redrawn and will be reflected in container.focus.
container.contents
is a read-only property (read/write in some cases) that provides access to a mapping- or list-like object that contains the child widgets and the options used for displaying those widgets in this container. The mapping- or list-like object always allows reading from positions with the usual __getitem__() method and may support assignment and deletion with __setitem__() and __delitem__() methods. The values are (child widget, option) tuples. When this object or its contents are modified the widget is automatically flagged to be redrawn.
Columns, Pile and GridFlow allow assigning an iterable to container.contents to overwrite the values in with the ones provided.
Columns, Pile, GridFlow, Overlay and Frame support container.contents item assignment and deletion.
container.options(...)
is a method that returns options objects for use in items added to container.contents. The arguments are specific to the container type, and generally match the __init__() arguments for the container. The objects returned are currently tuples of strings and integers or None for containers without child widget options. This method exists to allow future versions of Urwid to add new options to existing containers. Code that expects the option tuples to remain the same size will fail when new options are added, so defensive programming with options tuples is strongly encouraged.
container.__getitem__(x) # a.k.a. container[x]
is a short-cut method behaving identically to: container.contents[x][0].base_widget. Which means roughly “give me the child widget at position x and skip all the decoration widgets wrapping it”. Decoration widgets include Padding, Filler, AttrMap etc.
container.get_focus_path()
is a method that returns the focus position for this container and all child containers along the path defined by their focus settings. This list of positions is the closest thing we have to the singular widget-in-focus in other UI frameworks, because the ultimate widget in focus in Urwid depends on the focus setting of all its parent container widgets.
container.set_focus_path(p)
is a method that assigns to the focus_position property of each container along the path given by the list of positions p. It may be used to restore focus to a widget as returned by a previous call to container.get_focus_path().
container.get_focus_widgets()
is a method that returns the .focus values starting from this container and proceeding along each child widget until reaching a leaf (non-container) widget.
Note that the list does not contain the topmost container widget (i.e, on which this method is called), but does include the lowest leaf widget.
container.__iter__() # typically for x in container: ... container.__reversed__() # a.k.a reversed(container)
are methods that allow iteration over the positions of this container. Normally the order of the positions generated by __reversed__() will be the opposite of __iter__(). The exception is the case of ListBox with certain custom list walkers, and the reason goes back to the original way list walker interface was defined. Note that a custom list walker might also generate an unbounded number of positions, so care should be used with this interface and ListBox.
Pile widgets are used to combine multiple widgets by stacking them vertically. A Pile can manage selectable widgets by keeping track of which widget is in focus and it can handle moving the focus between widgets when the user presses the UP and DOWN keys. A Pile will also work well when used within a ListBox.
A Pile is selectable only if its focus widget is selectable. If you create a Pile containing one Text widget and one Edit widget the Pile will choose the Edit widget as its default focus widget.
Columns widgets may be used to arrange either flow widgets or box widgets horizontally into columns. Columns widgets will manage selectable widgets by keeping track of which column is in focus and it can handle moving the focus between columns when the user presses the LEFT and RIGHT keys. Columns widgets also work well when used within a ListBox.
Columns widgets are selectable only if the column in focus is selectable. If a focus column is not specified the first selectable widget will be chosen as the focus column.
The GridFlow widget is a flow widget designed for use with Button, CheckBox and RadioButton widgets. It renders all the widgets it contains the same width and it arranges them from left to right and top to bottom.
The GridFlow widget uses Pile, Columns, Padding and Divider widgets to build a display widget that will handle the keyboard input and rendering. When the GridFlow widget is resized it regenerates the display widget to accommodate the new space.
The Overlay widget is a box widget that contains two other box widgets. The bottom widget is rendered the full size of the Overlay widget and the top widget is placed on top, obscuring an area of the bottom widget. This widget can be used to create effects such as overlapping “windows” or pop-up menus.
The Overlay widget always treats the top widget as the one in focus. All keyboard input will be passed to the top widget.
If you want to use a flow flow widget for the top widget, first wrap the flow widget with a Filler widget.
ListBox is a box widget that contains flow widgets. Its contents are displayed stacked vertically, and the ListBox allows the user to scroll through its content. One of the flow widgets displayed in the ListBox is its focus widget.
The ListBox is a box widget that contains flow widgets. Its contents are displayed stacked vertically, and the ListBox allows the user to scroll through its content. One of the flow widgets displayed in the ListBox is the focus widget. The ListBox passes key presses to the focus widget to allow the user to interact with it. If the focus widget does not handle a keypress then the ListBox may handle the keypress by scrolling and/or selecting another widget to become the focus widget.
The ListBox tries to do the most sensible thing when scrolling and changing focus. When the widgets displayed are all Text widgets or other unselectable widgets then the ListBox will behave like a web browser does when the user presses UP, DOWN, PAGE UP and PAGE DOWN: new text is immediately scrolled in from the top or bottom. The ListBox chooses one of the visible widgets as its focus widget when scrolling. When scrolling up the ListBox chooses the topmost widget as the focus, and when scrolling down the ListBox chooses the bottommost widget as the focus.
The ListBox remembers the location of the widget in focus as either an “offset” or an “inset”. An offset is the number of rows between the top of the ListBox and the beginning of the focus widget. An offset of zero corresponds to a widget with its top aligned with the top of the ListBox. An inset is the fraction of rows of the focus widget that are “above” the top of the ListBox and not visible. The ListBox uses this method of remembering the focus widget location so that when the ListBox is resized the text displayed will stay roughly aligned with the top of the ListBox.
When there are selectable widgets in the ListBox the focus will move between the selectable widgets, skipping the unselectable widgets. The ListBox will try to scroll all the rows of a selectable widget into view so that the user can see the new focus widget in its entirety. This behavior can be used to bring more than a single widget into view by using composite widgets to combine a selectable widget with other widgets that should be displayed at the same time.
While the ListBox stores the location of its focus widget, it does not directly store the actual focus widget or other contents of the ListBox. The storage of a ListBox‘s content is delegated to a “List Walker” object. If a list of widgets is passed to the ListBox constructor then it creates a SimpleListWalker object to manage the list.
When the ListBox is rendering a canvas or handling input it will:
This is the only way the ListBox accesses its contents, and it will not store copies of any of the widgets or position objects beyond the current rendering or input handling operation.
The SimpleListWalker stores a list of widgets, and uses integer indexes into this list as its position objects. It stores the focus position as an integer, so if you insert a widget into the list above the focus position then you need to remember to increment the focus position in the SimpleListWalker object or the contents of the ListBox will shift.
A custom List Walker object may be passed to the ListBox constructor instead of a plain list of widgets. List Walker objects must implement the List Walker Interface.
The fib.py example program demonstrates a custom list walker that doesn’t store any widgets. It uses a tuple of two successive Fibonacci numbers as its position objects and it generates Text widgets to display the numbers on the fly. The result is a ListBox that can scroll through an unending list of widgets.
The edit.py example program demonstrates a custom list walker that loads lines from a text file only as the user scrolls them into view. This allows even huge files to be opened almost instantly.
The browse.py example program demonstrates a custom list walker that uses a tuple of strings as position objects, one for the parent directory and one for the file selected. The widgets are cached in a separate class that is accessed using a dictionary indexed by parent directory names. This allows the directories to be read only as required. The custom list walker also allows directories to be hidden from view when they are “collapsed”.
The easiest way to change the current ListBox focus is to call the ListBox.set_focus() method. This method doesn’t require that you know the ListBox‘s current dimensions (maxcol, maxrow). It will wait until the next call to either keypress or render to complete setting the offset and inset values using the dimensions passed to that method.
The position object passed to set_focus() must be compatible with the List Walker object that the ListBox is using. For SimpleListWalker the position is the integer index of the widget within the list.
The coming_from parameter should be set if you know that the old position is “above” or “below” the previous position. When the ListBox completes setting the offset and inset values it tries to find the old widget among the visible widgets. If the old widget is still visible, if will try to avoid causing the ListBox contents to scroll up or down from its previous position. If the widget is not visible, then the ListBox will:
If you know exactly where you want to display the new focus widget within the ListBox you may call ListBox.set_focus_valign(). This method lets you specify the top, bottom, middle, a relative position or the exact number of rows from the top or bottom of the ListBox.
ListBox does not manage the widgets it displays directly, instead it passes that task to a class called a “list walker”. List walkers keep track of the widget in focus and provide an opaque position object that the ListBox may use to iterate through widgets above and below the focus widget.
A SimpleFocusListWalker is a list walker that behaves like a normal Python list. It may be used any time you will be displaying a moderate number of widgets.
If you need to display a large number of widgets you should implement your own list walker that manages creating widgets as they are requested and destroying them later to avoid excessive memory use.
List walkers may also be used to display tree or other structures within a ListBox. A number of the example programs demonstrate the use of custom list walker classes.
See also
ListWalker base class reference
This API will remain available and is still the least restrictive option for the programmer. Your class should subclass ListWalker. Whenever the focus or content changes you are responsible for calling ListWalker._modified().
return a (widget, position) tuple or (None, None) if empty
set the focus and call self._modified() or raise an IndexError.
return the (widget, position) tuple below position passed or (None, None) if there is none.
return the (widget, position) tuple above position passed or (None, None) if there is none.
This API is an attempt to remove some of the duplicate code that V1 requires for many users. List walker API V1 will be implemented automatically by subclassing ListWalker and implementing the V2 methods. Whenever the focus or content changes you are responsible for calling ListWalker._modified().
return widget at position or raise an IndexError or KeyError
return the position below passed position or raise an IndexError or KeyError
return the position above passed position or raise an IndexError or KeyError
set the focus and call self._modified() or raise an IndexError.
attribute or property containing the focus position, or define MyV1ListWalker.get_focus() as above
Widgets in Urwid are easiest to create by extending other widgets. If you are making a new type of widget that can use other widgets to display its content, like a new type of button or control, then you should start by extending WidgetWrap and passing the display widget to its constructor.
The Widget interface is described in detail in the Widget base class reference and is useful if you’re looking to modify the behavior of an existing widget, build a new widget class from scratch or just want a better understanding of the library.
One Urwid design choice that stands out is that widgets typically have no size. Widgets don’t store their size on screen, and instead are passed that information when they need it.
This choice has some advantages:
It also has disadvantages:
For determining a widget’s size on screen it is possible to look up the size(s) it was rendered at in the CanvasCache. There are plans to address some of the duplicated size handling code in the container widgets in a future Urwid release.
The same holds true for a widget’s focus state, so that too is passed in to functions that need it.
The easiest way to create a custom widget is to modify an existing widget. This can be done by either subclassing the original widget or by wrapping it. Subclassing is appropriate when you need to interact at a very low level with the original widget, such as if you are creating a custom edit widget with different behavior than the usual Edit widgets. If you are creating a custom widget that doesn’t need tight coupling with the original widget then wrapping is more appropriate.
The WidgetWrap class simplifies wrapping existing widgets. You can create a custom widget simply by creating a subclass of WidgetWrap and passing a widget into WidgetWrap’s constructor.
This is an example of a custom widget that uses WidgetWrap:
The above code creates a group of RadioButtons and provides a method to query the state of the buttons.
Widgets must inherit from Widget. Box widgets must implement Widget.selectable() and Widget.render() methods, and flow widgets must implement Widget.selectable(), Widget.render() and Widget.rows() methods.
The default Widget.sizing() method returns a set of sizing modes supported from self._sizing, so we define _sizing attributes for our flow and box widgets below.
The above code implements two widget classes. Pudding is a flow widget and BoxPudding is a box widget. Pudding will render as much “Pudding” as will fit in a single row, and BoxPudding will render as much “Pudding” as will fit into the entire area given.
Note that the rows and render methods’ focus parameter must have a default value of False. Also note that for flow widgets the number of rows returned by the rows method must match the number of rows rendered by the render method.
To improve the efficiency of your Urwid application you should be careful of how long your rows() methods take to execute. The rows() methods may be called many times as part of input handling and rendering operations. If you are using a display widget that is time consuming to create you should consider caching it to reduce its impact on performance.
It is possible to create a widget that will behave as either a flow widget or box widget depending on what is required:
MultiPudding will work in place of either Pudding or BoxPudding above. The number of elements in the size tuple determines whether the containing widget is expecting a flow widget or a box widget.
Selectable widgets such as Edit and Button widgets allow the user to interact with the application. A widget is selectable if its selectable method returns True. Selectable widgets must implement the Widget.keypress() method to handle keyboard input.
import urwid class SelectablePudding(urwid.Widget): _sizing = frozenset(['flow']) _selectable = True def __init__(self): self.pudding = "pudding" def rows(self, size, focus=False): return 1 def render(self, size, focus=False): (maxcol,) = size num_pudding = maxcol / len(self.pudding) pudding = self.pudding if focus: pudding = pudding.upper() return urwid.TextCanvas([pudding * num_pudding], maxcol=maxcol) def keypress(self, size, key): (maxcol,) = size if len(key) > 1: return key if key.lower() in self.pudding: # remove letter from pudding n = self.pudding.index(key.lower()) self.pudding = self.pudding[:n] + self.pudding[n+1:] if not self.pudding: self.pudding = "pudding" self._invalidate() else: return key
The SelectablePudding widget will display its contents in uppercase when it is in focus, and it allows the user to “eat” the pudding by pressing each of the letters P, U, D, D, I, N and G on the keyboard. When the user has “eaten” all the pudding the widget will reset to its initial state.
Note that keys that are unhandled in the keypress method are returned so that another widget may be able to handle them. This is a good convention to follow unless you have a very good reason not to. In this case the UP and DOWN keys are returned so that if this widget is in a ListBox the ListBox will behave as the user expects and change the focus or scroll the ListBox.
Widgets that display the cursor must implement the Widget.get_cursor_coords() method. Similar to the rows method for flow widgets, this method lets other widgets make layout decisions without rendering the entire widget. The ListBox widget in particular uses get_cursor_coords to make sure that the cursor is visible within its focus widget.
CursorPudding will let the user move the cursor through the widget by pressing LEFT and RIGHT. The cursor must only be added to the canvas when the widget is in focus. The get_cursor_coords method must always return the same cursor coordinates that render does.
A widget displaying a cursor may choose to implement Widget.get_pref_col(). This method returns the preferred column for the cursor, and is called when the focus is moving up or down off this widget.
Another optional method is Widget.move_cursor_to_coords(). This method allows other widgets to try to position the cursor within this widget. The ListBox widget uses Widget.move_cursor_to_coords() when changing focus and when the user pressed PAGE UP or PAGE DOWN. This method must return True on success and False on failure. If the cursor may be placed at any position within the row specified (not only at the exact column specified) then this method must move the cursor to that position and return True.
The Widget base class has a metaclass defined that creates a __super attribute for calling your superclass: self.__super is the same as the usual super(MyClassName, self). This shortcut is of little use with Python 3’s new super() syntax, but will likely be retained for backwards compatibility in future versions.
This metaclass also uses MetaSignal to allow signals to be defined as a list of signal names in a signals class attribute. This is equivalent to calling register_signal() with the class name and list of signals and all those defined in superclasses after the class definition.
See also
Widget metaclass WidgetMeta | http://urwid.org/manual/widgets.html | CC-MAIN-2014-41 | refinedweb | 4,042 | 54.12 |
Top Apps
- Audio & Video
- Business & Enterprise
- Communications
- Development
- Home & Education
- Games
- Graphics
- Science & Engineering
- Security & Utilities
- System Administration
Showing page 7 of 3344.
agi sqlite call log
Log your asterisk calls to an sqlite database using agi. * Track call length when answered. * Associate numbers with contact details. * get notified of calls on your pc.0 weekly downloads
.net/mono VoIP implementation
This project tries to implement the Session Initiation Protocol (SIP, RFC 3261) and related protocols for VoIP in .NET/Mono9 weekly downloads
.plan Handler
Utility to facilitate time-sensitive use of .plan and .project files. Curses-based interface allows user to easily view, edit, replace, and delete contents. A timestamp is automatically prepended to the file contents.1 weekly downloads
.po file editors
UpdatePoFile (for programmers) and PoNewEdit (for translators) are an alternative editors to Poedit, for multilingual programs that use gettext (with .po files). Ready for users with visual problems or blind. Look below.10 weekly downloads
.sol Editor (Flash Shared Object)
This tool opens or create a Macromedia Flash shared object file (.sol) displays the content of the file and allow you to change the values.836 weekly downloads
/env file system
envfs is a virtual file system that provides namespace access to environment variables.0 weekly downloads
0 A.D.
0 A.D. is a free, open-source, cross-platform real-time strategy game.4,357 weekly downloads
06f522 Set
06f522 distributed set game
101 Bourne Shell Utility Scripts
The project includes several Bourne Shell utility scripts. Detailed manual pages accompany each of the scripts. The goal of the project is to provide at least 101 useful scripts.2 weekly downloads.()1 weekly downloads | http://sourceforge.net/directory/natlanguage%3Aenglish/license%3Aosi/?sort=name&page=7 | CC-MAIN-2013-20 | refinedweb | 278 | 52.46 |
33973/how-to-i-clear-tkinter-canvas-using-python
When I draw a shape using:
canvas.create_rectangle(15, 15, 60, 60, color="blue")
Does Tkinter keep track of the fact that it was created?
I am making a game where my code has one Frame create a bunch of rectangles and then draw a big black rectangle to clear the screen and then draw another set of updated rectangles.
Am I creating thousands of rectangle objects in memory? What is the right way to do it? Appreciate some help here!
To clear a canvas, use the delete method.
This ensures you avoid memory leaks and not end up creating thousands of objects.
Give it the special parameter "all" to delete all items on the canvas (the string "all"" is a special tag that represents all items on the canvas):
canvas.delete("all")
The code that I've written below. The ...READ MORE
In Logic 1, try if i<int(length/2): instead of if i<int((length/2+1)):
In ...READ MORE
def add(a,b):
return a + b
#when i call ...READ MORE
down voteacceptTheeThe problem is that you're iterating ...READ MORE
suppose you have a string with a ...READ MORE
if you google it you can find. ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
You could scale your data to the ...READ MORE
In the easiest way, you can create ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/33973/how-to-i-clear-tkinter-canvas-using-python | CC-MAIN-2019-47 | refinedweb | 256 | 77.23 |
Anyone who has used Visual Studio to create more than one project or solution has used the "Start Page" to see the most recently used (MRU) projects or solutions. If you are like me, you often have more than one Visual Studio instance opened and have been frustrated by the fact that the list never seems to be "right". In addition I find myself often creating "temporary" projects to test a new idea and don't want to see these projects on my MRU Project List.
I have seen two articles on CodeProject discussing the "Start Page", but neither gave me an easy way to edit or maintain this list. The Stagner article speaks of maintaining the list by editing the registry, which in the end is all my tool does, but I wanted an interface to maintain the list.
Visual Studio stores the MRU Projects in the registry under the path HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\XX\ProjectMRUList where XX is the version number of Visual Studio.
There is a string value for each recent file named File1, File2, File3, etc. with a value of the full path to the file. Visual Studio updates this list on close, which is the root of some of my problems. Multiple versions of Visual Studio do not communicate about their open projects, nor do they read the current state of the registry and attempt to merge them. On close, Studio clears this key and re-writes out the list as it knows it. My scenario is as follows:
Since Visual Studio clears and rewrites the list on close, you must have all instances of Visual Studio before using this tool.
There is no real magic in this code; one listbox, six buttons and some Registry code.
Since we are using the Registry, we need to include the Microsoft.Win32 namespace.
Microsoft.Win32
using Microsoft.Win32;
On the form load we open the key in a writable mode:
private void MainForm_Load(object sender, System.EventArgs e) {
// Open a writable RegKey
m_ProjectMRUKey = Registry.CurrentUser.OpenSubKey(PROJECT_MRU_PATH, true);
}
/// <SUMMARY>
/// Loads (Reads) Registry Values into ListBox
/// </SUMMARY>
private void btnLoad_Click(object sender, System.EventArgs e) {
lstBoxMain.Items.Clear();
foreach (string key in m_ProjectMRUKey.GetValueNames()) {
lstBoxMain.Items.Add(m_ProjectMRUKey.GetValue(key));
}
}
/// <SUMMARY>
/// Save (Writes) Registry Values from ListBox
/// </SUMMARY>
private void btnSave_Click(object sender, System.EventArgs e) {
foreach (string key in m_ProjectMRUKey.GetValueNames())
m_ProjectMRUKey.DeleteValue(key);
for(int i=0; i < lstBoxMain.Items.Count; ++i) {
m_ProjectMRUKey.SetValue("File" + (i+1), lstBoxMain.Items[i]);
}
//Flush the object to force it to write the files to the Registry
m_ProjectMRUKey.Flush();
}
Form closing - Close the Registry key:
private void MainForm_Closed(object sender, System.EventArgs e) {
//Close and flush the registry value on close
m_ProjectMRUKey.Close();
}
The rest of the code deals with maintaining the ListBox by removing the selected item, or moving it up or down in order. There is also an Add button to create a new item in the list for unknown projects. Note that the OpenFileDialog called in the Add function only filters to look for a small subset of the total Visual Studio supported file types. It was enough to suit my needs and I added an "All Files" filter as a catch-all.
ListBox
OpenFileDialog
As the radio cliché goes, "I'm a long time listener first time caller to CodeProject". I hope this simple article is useful to those who take the time to read it. I use this tool all the time and have found it very helpful. Depending on how my experience with this article goes, I will hopefully be back to lend more tips and tricks to the coding community. I look forward to your comments and / or questions.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
private RegistryKey m_ProjectMRUKey;
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/articles/10831/visual-studio-project-mru-list-editor?fid=193850&df=30&mpp=50&noise=3&prof=true&sort=position&view=thread&spc=relaxed | CC-MAIN-2016-50 | refinedweb | 696 | 63.59 |
#include <Optika_GUI.hpp>
A class that allows the user to create and customize their Optika GUI.
Runs the GUI and gets the user input.
Gets the information to be added to the about dialog of the GUI.
Gets the file path describing the location of the file being used as the QT Style Sheet.
Gets the file path describing the location of the file being used for the window icon.
Gets the window title.
Adds the information specified to the about dialog of the GUI.
Sets the custom function to be used in the GUI. When ever the user hits submit, this function will be run.
Sets the QT style sheet that should be used for the GUI.
Sets the window icon to the image specified in the filePath.
Sets the title of the GUI window that is displayed to the user. | http://trilinos.sandia.gov/packages/docs/r10.6/packages/optika/doc/html/classOptika_1_1OptikaGUI.html | CC-MAIN-2013-48 | refinedweb | 142 | 76.42 |
From: Dave Abrahams (abrahams_at_[hidden])
Date: 2000-01-23 19:01:01
Valentin wrote:
> Dave Abrahams wrote:
>>
>> In generic programs, there are two ways to allow users to customize
>> behaviors of called functions for their own types:
>>
>> 1. We can use overloading and rely on Koenig lookup to help select specific
>> implementations.
>> 2. We can supply a function template and rely on the user to specialize it
>> for her own types.
>
> 3. For namespaces different from std, use overloading in only one
> namespace
> for things that can be overloaded (functions), and do not rely on
> Koenig lookup but, on the contrary, use qualified names; user
> code adds overloads directly in the library namespace
> use specialisations for things which can't (classes)
Okay, but let me get this straight (I think my sanity is back):
For template functions which are defined in std (e.g. swap) we are allowed
to FULLY specialize on a user-defined type, but we are not allowed to
"partially specialize" (write an overload which should be selected by
partial ordering) on a user-defined type.
For example:
namespace NS {
class X {...};
template <class T> class Y {...};
};
namespace std {
// 1. legal full specialization
template <> void swap(NS::X&, NS::X&);
// 2. illegal "partial" specialization (actually an overload)
template <class T> void swap(NS::Y<T>&, NS::Y<T>&) {};
}
But inside of boost, we are free to allow overloads like #2 above.
Furthermore, we have no idea why they are prohibited in std.
> In any cases, when adding something to a namespace, follow the
> requirements of the name of the thing being added.
For example, a full specialization of std::swap must swap the values of its
arguments.
> This works w/o reserving function names in every namespace,
> and allows complete parametrisation by library users.
right.
>> (e.g., how would you write an abs that works for unsigned?)
>
> Seems to me that
>
> namespace boost {
> unsigned abs (unsigned x) { return x; }
> unsigned long abs (unsigned long x) { return x; }
> }
>
> does the job.
My original quote was:
<<I think it's an open question whether option 2 can be applied effectively
in cases where there isn't a generalizable implementation for the function
(e.g., how would you write an abs that works for unsigned?)>>
Option 2 involved starting with a function template, and allowing the user
to specialize it for their own types.
What I meant was that it isn't clear that option 2 is appropriate for all
combinations of types and operations. For a better example, suppose you
wanted to supply a negation function - it wouldn't apply to unsigned.
Now that my sanity has returned I understand that we would probably just use
overloading inside of boost to handle that case.
So my current conclusion is that Koenig lookup is fine for code which
doesn't depend on template parameters (as a way to avoid writing too many
namespace qualifications). Code in templates, however, should be written to
avoid finding functions in the namespaces of template parameters through
Koenig lookup, since that practice effectively "reserves" the name in the
namespace of that template parameter, and can result in unexpected
ambiguities**. The exception is code in std, which (because of prohibitions
against overloading in std), MUST rely on Koenig lookup for functions like
abs or swap which it is expecting the user to customize. On the other hand,
implementors of the standard library should probably be cautious and
explicitly qualify most function calls to avoid causing conflicts with names
which the user reasonably believes he can use safely within his own
namespace.
-Dave
** having trouble coming up with a good, small example of this right now,
but have seen it recently in my work with the STLport, where there is a
separate namespace containing debug versions of many functions and debug
wrappers for container iterators. If someone can produce a small example of
the problem, I'd be indebted.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2000/01/1901.php | CC-MAIN-2019-35 | refinedweb | 674 | 59.64 |
js> function f() { } f.__proto__ = this; this.m setter = f; uneval(this); Assertion failure: vlength > n, at jsobj.c:938 The code is trying to remove "(function " and ")" from something that turns out to be a sharp variable. Security sensitive because it looks like the code might jump past the end of a string. /* * Remove '(function ' from the beginning of valstr and ')' from the * end so that we can put "get" in front of the function definition. */ if (gsop[j] && VALUE_IS_FUNCTION(cx, val[j])) { size_t n = strlen(js_function_str) + 2; JS_ASSERT(vlength > n); vchars += n; vlength -= n + 1; } Strange behavior in opt: js> function f() { } f.__proto__ = this; this.m setter = f; uneval(this); ({f:#1={prototype:{}}, set m #1#}) js> function f() { } f.hhhhhhhhh = this; this.m setter = f; uneval(this); #1={f:#2=function f() {}, set m #2#}
Btw, the sharp stuff I mentioned at the end of comment 0 doesn't play well with the "set" syntax when trying to eval again. js> a = {}; h = function() { }; a.b getter = h; a.c getter = h; print(uneval(a)); eval(uneval(a)); ({get b #1=() {}, get c #1#}) typein:23: SyntaxError: missing ( before formal parameters: typein:23: ({get b #1=() {}, get c #1#}) typein:23: ........^ (I didn't test this before because I thought sharp variables were output-only!)
This bug is getting in the way of my testing :(
Maybe sharped functions just need to use the "x getter: " syntax rather than the "get x" syntax.
(In reply to comment #3) > Maybe sharped functions just need to use the "x getter: " syntax rather than > the "get x" syntax. Yes, that's the ticket. Brian, can you detect a # at the front of the value string and switch to this syntax? /be
Created attachment 264243 [details] [diff] [review] use new "needOldStyleGetterSetter" logic There must be a decompiler version of this same bug, though what it is isn't obvious to me. This uses the logic added in bug 356083 with some refactoring to handle the value side of expressions (previously only calculated needOldStyleGetterSetter for the propid).
> There must be a decompiler version of this same bug, though what it is > isn't obvious to me. The decompiler seems to get it right: it collapses some uses of "getter" in object literals into the new "get" syntax, but leaves ones with sharps alone. js> function() { return { x getter: function(){} } } function () { return {get x() {}}; } js> function() { return { x getter: #1=function(){} } } function () { return {x getter:#1=function () {}}; } js> function() { return { x getter: #1# } } function () { return {x getter:#1#}; }
Comment on attachment 264243 [details] [diff] [review] use new "needOldStyleGetterSetter" logic Moving review to Igor
Comment on attachment 264243 [details] [diff] [review] use new "needOldStyleGetterSetter" logic > val[valcnt] = (jsval) ((JSScopeProperty *)prop)->getter; >-#ifdef OLD_GETTER_SETTER >- gsop[valcnt] = >+ gsopold[valcnt] = > ATOM_TO_STRING(cx->runtime->atomState.getterAtom); >-#else >- gsop[valcnt] = needOldStyleGetterSetter >- ? ATOM_TO_STRING(cx->runtime->atomState.getterAtom) >- : ATOM_TO_STRING(cx->runtime->atomState.getAtom); >-#endif Why the patch removes OLD_GETTER_SETTER ifdefs?
Comment on attachment 264243 [details] [diff] [review] use new "needOldStyleGetterSetter" logic #ifndef OLD_GETTER_SETTER + if (vchars[0] == '#') +#endif + needOldStyleGetterSetter = JS_TRUE; + We use that ifdef to make this decision later, so the ifdefs above are just old untidiness from the last time I worked in this code.
Comment on attachment 264243 [details] [diff] [review] use new "needOldStyleGetterSetter" logic > #ifndef OLD_GETTER_SETTER >+ if (vchars[0] == '#') >+#endif >+ needOldStyleGetterSetter = JS_TRUE; >+ >+ if (needOldStyleGetterSetter) >+ gsop[j] = gsopold[j]; >+ >+#ifndef OLD_GETTER_SETTER > /* > * Remove '(function ' from the beginning of valstr and ')' from the > * end so that we can put "get" in front of the function definition. > */ That if wrappend into #ifndef makes code hard to follow. I suggest to move it into the following +#ifndef OLD_GETTER_SETTER block and add a copy of "gsop[j] = gsopold[j];" as #else code. r+ with that fixed.
Created attachment 264918 [details] [diff] [review] implementation v2 This addresses Igor's comments, some bugs I encountered in testing this patch and cleans up an old preprocessor-guarded section of code, which really is no longer necessary. People defining OLD_GETTER_SETTER will suffer some bloat, but the code is tidier now.
Created attachment 264920 [details] [diff] [review] implementation v2a A minor tweak for OLD_GETTER_SETTER users
jsobj.c: 3.335
on the 1.8 branch I hit the original assertion using a debug xpcshell but not when I try to run the code in the browser. What's the potential security impact for Mozilla clients from this flaw? Do we need to land this on the 1.8 branch?
I think this is a relatively harmless (ie., crash, but nothing more) UMR on the branches.
Does it make decisions based on the UMR that might reveal sensitive information in Firefox's address space.
Ah, there might be a privacy problem, then, since this string can be printed and inspected for sensitive information.
Created attachment 265857 [details] [diff] [review] MOZILLA_1_8_BRANCH patch, roll-up This should pick up fixes from bug 358594, bug 381303, bug 356083 (pre-requisite for 358594), bug 381211 (another bug introduced here, fixed in bug 367629), and bug 380933. I think if we're going to do a branch patch here, we should get all of these. If that is undesirable, I will try to back out a few of them. That proposal is more painful than using this, though. (not ready for review yet, still needs more testing)
The bug in comment #1 is still not fixed here (on trunk or by my patch roll-up). Jesse, would you mind opening another bug for that?
I already filed bug 380831 for that.
Comment on attachment 265857 [details] [diff] [review] MOZILLA_1_8_BRANCH patch, roll-up This roll-up catches brokenness from all the testcases in this and the bugs mentioned in comment #19, except as noted in comment #20. I haven't had time to run the full JS test suite on it.
(In reply to comment #21) > I already filed bug 380831 for that. Which explains why I thought I'd fixed that.
Comment on attachment 265857 [details] [diff] [review] MOZILLA_1_8_BRANCH patch, roll-up Do we want 1.8.0.x for this, too? Will wait for both approval and test-suite blessing to land this.
Created attachment 266144 [details] comparison of before|after MOZILLA_1_8_BRANCH rollup patch There are some changes in decompilation that aren't cool.
Thanks, Bob. I will address these and upload another attempt in a day or so.
Created attachment 266313 [details] js1_7/regress/regress-358594.js This doesn't include any of the decompiler tests.
I'm not worried about the two performance bugs here; 101964 is only showing a delta because it's value is different. The orders of magnitude are the same. There's no reason this test should affect the performance of sort() (bug 99120), either. Not sure why there's a delta there, but it seems like perhaps just a change in the test environment or something (maybe the file moved?) I'll have a patch to address the others shortly.
It looks like the rest of these are addressed in the patch for bug 355736. That bug should be nominated for branch approval if we care, otherwise we should probably just land this. Bob, can I get you to try the test-suite again when you get a chance? If nothing has changed, I think we can land this... I have a sneaking suspicion a few more tweaks have happened since I did the roll-up.
I'll get to it this evening on linux at the least.
crowder, do you just want the vanilla trunk tested? From what time period? I already have vanilla 1.8, trunk tests running on all three platforms with builds from this morning. Is that sufficient for your needs?
I want a 1.8 engine, but run against the trunk testsuite (if they differ) to make sure that this patch doesn't regress anything and that it improves the decompilation/obj_toSource situation. I'm not worried about '' quoted keywords, though; as I mentioned that should be handled in another patch. The patch for bug 355736 applies cleanly on 1.8
crowder, I've been trying to get a good result for this on 1.8 with the patch but am having problems for which I can not conclusively blame this patch. I'll keep trying and see if I can get a definite answer for you tonight.
Comment on attachment 266313 [details] js1_7/regress/regress-358594.js js1_7 not required.
Created attachment 268516 [details] js1_5/extensions/regress-358594-01.js
Created attachment 268517 [details] js1_5/extensions/regress-358594-02.js
Created attachment 268518 [details] js1_5/extensions/regress-358594-03.js
Created attachment 268519 [details] js1_5/extensions/regress-358594-04.js
Created attachment 268520 [details] js1_5/extensions/regress-358594-05.js
Created attachment 268521 [details] js1_5/extensions/regress-358594-06.js
Created attachment 269533 [details] difference between 1.8.1 without patch and with patch These differences are all due to (from what I can tell) bugs which are fixed on the trunk but not branch. /me stamps approval fwiw
Comment on attachment 265857 [details] [diff] [review] MOZILLA_1_8_BRANCH patch, roll-up approved for 1.8.1.5, a=dveditz for release-drivers
Checking in jsobj.c; /cvsroot/mozilla/js/src/jsobj.c,v <-- jsobj.c new revision: 3.208.2.52; previous revision: 3.208.2.51 done Checking in jsopcode.c; /cvsroot/mozilla/js/src/jsopcode.c,v <-- jsopcode.c new revision: 3.89.2.72; previous revision: 3.89.2.71 done
bc, could you help verifying this fix on the latest 2.0.0.5 rc builds?
verified fixed 1.8, 1.9.0 windows, linux, macppc with 7/16 opt/debug shell/browser.
Created attachment 274461 [details] [diff] [review] roll-up for 1.8.0
(In reply to comment #46) > Created an attachment (id=274461) [details] > roll-up for 1.8.0 > Could you add a patch using the same cvs diff -u -p -8 options as the patch from comment 19 and also add a plain diff between patches to simplify the review?
Comment on attachment 274461 [details] [diff] [review] roll-up for 1.8.0 Sorry for a late review, I forgot about it. >Index: mozilla/js/src/jsopcode.c >=================================================================== >--- mozilla.orig/js/src/jsopcode.c 2007-07-16 16:36:40.000000000 +0200 >+++ mozilla/js/src/jsopcode.c 2007-07-18 12:22:30.000000000 +0200 >@@ -61,16 +61,17 @@ ... > if (lastop == JSOP_GETTER || lastop == JSOP_SETTER) { > rval += strlen(js_function_str) + 1; >- todo = Sprint(&ss->sprinter, "%s%s%s %s%.*s", >- lval, >- (lval[1] != '\0') ? ", " : "", >- (lastop == JSOP_GETTER) >- ? js_get_str : js_set_str, >- xval, >- strlen(rval) - 1, >- rval); >+ if (!atom || !ATOM_IS_STRING(atom) || >+ !ATOM_IS_IDENTIFIER(atom) || >+ !!ATOM_KEYWORD(js_AtomizeChars(cx, >+ ATOM_TO_STRING(atom), >+ sizeof(ATOM_TO_STRING(atom)), >+ 0))|| No need to re-atomize the atom here meaning that atom == js_AtomizeChars(cx, JSSTRING_CHARS(ATOM_TO_STRING(atom)), JSSTRING_LENGTH(ATOM_TO_STRING(atom)), 0). In any case js_AtomizeChars(cx, ATOM_TO_STRING(atom), sizeof(ATOM_TO_STRING(atom)), 0) is bogus and should generate at least warnings on the wrong pointer type of js_AtomizeChars argumnet. Now given that ATOM_IS_IDENTIFIER is: !ATOM_KEYWORD(atom) && js_IsIdentifier(ATOM_TO_STRING(atom)) Then !ATOM_IS_IDENTIFIER(atom) is ATOM_KEYWORD(atom) || !js_IsIdentifier(ATOM_TO_STRING(atom)) meaning that !!ATOM_KEYWORD() can be omitted. >Index: mozilla/js/src/jsobj.c >=================================================================== >@@ -806,91 +808,111 @@ > /* > * We have four local roots for cooked and raw value GC safety. Hoist the > * "argv + 2" out of the loop using the val local, which refers to the raw > * (unconverted, "uncooked") values. > */ > val = argv + 2; > > for (i = 0, length = ida->length; i < length; i++) { >+ JSBool idIsLexicalIdentifier, needOldStyleGetterSetter; >+ char *atomstrchars; >+ > /* Get strings for id and value and GC-root them via argv. */ > id = ida->vector[i]; > > #if JS_HAS_GETTER_SETTER >- > ok = OBJ_LOOKUP_PROPERTY(cx, obj, id, &obj2, &prop); > if (!ok) > goto error; >+#endif >+ >+ /* >+ * Convert id to a jsval and then to a string. Decide early whether we >+ * prefer get/set or old getter/setter syntax. >+ */ >+ atom = JSID_IS_ATOM(id) ? JSID_TO_ATOM(id) : NULL; >+ idstr = js_ValueToString(cx, ID_TO_VALUE(id)); >+ if (!idstr) { >+ ok = JS_FALSE; >+ OBJ_DROP_PROPERTY(cx, obj2, prop); >+ goto error; >+ } >+ *rval = STRING_TO_JSVAL(idstr); /* local root */ >+ idIsLexicalIdentifier = js_IsIdentifier(idstr); >+ >+ atomstrchars = ATOM_TO_STRING(atom); >+ needOldStyleGetterSetter = >+ !idIsLexicalIdentifier || >+ ATOM_KEYWORD(js_AtomizeChars(cx, >+ atomstrchars, >+ sizeof(atomstrchars), >+ 0)) != TOK_EOF; >+ Again, use ATOM_KEYWORD(atom) here.
/cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-01.js,v <-- regress-358594-01.js initial revision: 1.1 /cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-02.js,v <-- regress-358594-02.js initial revision: 1.1 /cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-03.js,v <-- regress-358594-03.js initial revision: 1.1 /cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-04.js,v <-- regress-358594-04.js initial revision: 1.1 /cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-05.js,v <-- regress-358594-05.js initial revision: 1.1 /cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-06.js,v <-- regress-358594-06.js initial revision: 1.1
Created attachment 282402 [details] [diff] [review] roll-up for 1.8.0 (with comments) updated according to comment #48 this time using: cvs diff -u -p -8 jsobj.c jsopcode.c
Comment on attachment 282402 [details] [diff] [review] roll-up for 1.8.0 (with comments) Igor did the last review, so setting the ? to him again.
Comment on attachment 282402 [details] [diff] [review] roll-up for 1.8.0 (with comments) Sorry for a late review, I missed the request 2 months ago.
Comment on attachment 282402 [details] [diff] [review] roll-up for 1.8.0 (with comments) a=asac for 1.8.0.15
MOZILLA_1_8_0_BRANCH: Checking in js/src/jsobj.c; /cvsroot/mozilla/js/src/jsobj.c,v <-- jsobj.c new revision: 3.208.2.12.2.28; previous revision: 3.208.2.12.2.27 done Checking in js/src/jsopcode.c; /cvsroot/mozilla/js/src/jsopcode.c,v <-- jsopcode.c new revision: 3.89.2.8.2.12; previous revision: 3.89.2.8.2.11 done | https://bugzilla.mozilla.org/show_bug.cgi?id=358594 | CC-MAIN-2017-17 | refinedweb | 2,284 | 59.19 |
Hallo, I am trying to understand how multidimensional arrays work but can't seem to figure it out. I would like help in understanding them. Yeah googled already but what I have seen so far seem to say "a multidimensional is an array of an array" which doesn't help much in understanding. I have a code here and its output. I would like someone to explain it to me the outcome well. Thanks in advance.
Output: 789456789123Output: 789456789123Code:
#include <iostream>
using namespace std;
intmain()
{
intar[]={123,456,789};
intind[]={2,1,2,0};
intk;
for (k=0;k<4;k=k+1) {
cout<< ar[ind[k]];
}
return 0;
}
OK I think I figure it out.
OK the loop produces the values 0, 1, 2 and 3. Putting this in the array the statement becomes:
cout<< ar[ind[0]]; and so on till array 3.
From here then it is easier. Because ind[0] = 2, so it becomes array[2]. Although I understand this but intar[] has only 3 values while ind has 4, so when intar runs out of values to loop over, what is supposed to happen? Does it go back to zero and start it all over again or? | http://cboard.cprogramming.com/cplusplus-programming/136182-multidimensional-arrays-printable-thread.html | CC-MAIN-2015-11 | refinedweb | 202 | 74.29 |
Opened 7 years ago
Closed 7 years ago
#18686 closed Uncategorized (duplicate)
Models with same name and common subpackage name clash
Description
In the following example, Test is an empty model declared in both base/a/test/models.py and base/b/test/models2.py:
from base.a.test.models import Test from base.b.test.models2 import Test as Test2 Test.__module__ # prints com.a.test.models Test2.__module__ # prints com.a.test.models, but should print base.b.test.models2
It appears that the commonly named 'test' subpackage in both 'base.a' and 'base.b' is the issue. Renaming either subpackage causes the issue to disappear.
# models.py/models2.py from django.db import models class Test(models.Model): pass
Change History (1)
comment:1 Changed 7 years ago by
Note: See TracTickets for help on using tickets.
This is a result of the models being associated with an implicit app_label that is derived from the path to the model.
because the app label namespace is flat, a subsequent attempt to import a model named "test" for an app with the label "test" will first check the appcache and return an existing model matching that pair of identifiers. This is a case of clashing app names/labels and as such is a duplicate of #3591
The real bug here is that no error is raised (in Django 1.4) with the following in installed_apps:
this results in two applications with the same label - which will confound the model registration process - such a case should raise an error pointing out the collision early on. | https://code.djangoproject.com/ticket/18686 | CC-MAIN-2019-39 | refinedweb | 265 | 59.19 |
__gc in the beginning of the class declaration ).
In these instance we'll call it
CMyThreads. Why not? Every program should
have a CMySomething as a class.
#pragma once __gc class CMyThreads { public: CMyThreads(void); ~CMyThreads(void); void MyThreadProc(); void AddArguments(void* pArg1, void* pArg2) void * m_FirstArgument ; void * m_SecondArgument ; };
One problem in managed C++ threads is the arguments. You
must create a function to call before starting the thread if you want
arguments. (See
AddArguments above)
Calling the thread from another class:
foo() { CMyThreads * pMyThread; pMyThread = new CMyThreads; pMyThread->AddArguments(Argument1, Argument2); ThreadStart * pThread = new ThreadStart(pMyThread, &CMyThreads::MyThreadProc); Thread *oThread = new Thread(pThread); oThread->Start(); }
Before we create
ThreadStart you must call
AddArguments if
you want arguments on this thread.
The thread will not begin until you call the member function
Start()
#include "StdAfx.h" #using <mscorlib.dll> using namespace System; using namespace System::Threading; #include <stdio.h> #include "mythreads.h" CMyThreads::CMyThreads(void) { } CMyThreads::~CMyThreads(void) { } void CMyThreads::MyThreadProc() { Console::WriteLine(S"Starting Thread... "); Thread::Sleep(5); pClass->ExternalFunction(/*Arguments*/); Console::WriteLine(S"Finishing Thread..."); } void CMyThreads::AddArguments(void* pArg1, void* pArg2) { m_FirstArgument = pArg1; m_SecondArgument = pArg2; }
Remember to
Sleep to allow the main process to continue.
Also you put anything you like in
MyThreadProc() you can also call a
function in another class. I hope you have fun!
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/threads/managedthreads.aspx | crawl-002 | refinedweb | 227 | 50.63 |
The Noiasca Rotary Encoder Library
The Noiasca Rotary Encoder Libary is a lightweight encoder library. This library doesn't use any interupts and you can use it on any two input pins you have available on your Arduino.
A very simple Arduino sketch will look like following:
# include <NoiascaEncoder.h> // download library from Encoder myEnc(2, 3); // Change these two numbers to the pins connected to your encoder. void setup() { Serial.begin(115200); Serial.println(F("Basic Encoder Test:")); myEnc.begin(); } long oldPosition = -42; void loop() { long newPosition = myEnc.getCounter(); if (newPosition != oldPosition) { oldPosition = newPosition; Serial.println(newPosition); } }
The default constructor takes two pins.
Additionally you can define two callback methods which will be called when the encoder will be moved.
Encoder myEnc(2, 3, callbackIncrease, callbackDecrease);
Don't forget to implement the callback functions in your user sketch!
.begin() : The begin() must be called in setup(). It will do all hardware specific tasks.
.getCounter() returns the internal counter.
.getDirection() returns 1 or -1 if the encoder was moved clockwise or counter clockwise..
.setCounter() can be used to set the internal counter (a signed 32 bit variable) to a specific value
.upClick() gets true if rotary encoder is moved clockwise.
.downClick() gets true if rotary encoder is moved counter clockwise.
.encode() is the internal "run" method. If you only use the defined callbacks, call encode() in your loop(). If you read events with getCounter(), getDirection(), upClick() or downClick() the encode() function will be called in the background.
Disclaimer
This encoder library is just a fast prototype - use it with caution. As the library is not using any interrupts it relies on a non blocking loop(). Don't use delay() in your user sketch! | https://werner.rothschopf.net/microcontroller/202202_noiasca_encoder_en.htm | CC-MAIN-2022-40 | refinedweb | 283 | 60.61 |
In this section you will learn about how to copy a content of one file to another file. In java, File API will not provide any direct way to copy a file. What we can do is, read a content of one file through FileInputStream and write it into another file through FileOutPutStream. There are some open source library available like "Apache commons IO " , in that there is one class called FileUtils, which provide file related operation. In the FileUtils class there is a method FileUtils.copyFile(source file , destinationfile), which allows you to copy a file. In another way we can copy using File I/o Stream.
Example: Java Code to copy a content of one file to another file.
import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.InputStream; import java.io.IOException; import java.io.OutputStream; public class CopyFile { public static void main(String args[])throws IOException { String src ="C://Documents and Settings//satya//Desktop//file//hello.txt"; // path of source file is assigned to src string. String dest = "C://Documents and Settings//satya//Desktop//file//hi.txt"; //path of destination file is assigned to dest. InputStream in = new FileInputStream(src); OutputStream out = new FileOutputStream(dest); byte[] b = new byte[1024]; // creating a byte type buffer to store the content of file. int len; while ((len = in.read(b)) > 0) { out.write(b, 0, len); // 0 here indicates the position to writing in the target file. } in.close(); out.close(); System.out.println("Done"); } }
When we execute the above program the content of "hello.txt" is copied to "hi.txt" as we have given path for the both source file and destination file which is stored in src and dest,. then reading the content using while loop storing in buffer. After that just write the content to target file using out.write() in which starting from 0 to the length len of the file.
Output: After compiling and executing copy a file in java
Post your Comment | http://roseindia.net/java/example/java/io/copy-a-file.shtml | CC-MAIN-2015-22 | refinedweb | 332 | 69.07 |
This is the mail archive of the cygwin mailing list for the Cygwin project.
Hi Warren, On Jun 1 12:57, Corinna Vinschen wrote: > If F_MDLCK is not such a bright idea, maybe another fcntl is? Something > along these lines: > > fd = open ("my.db", ...); > #ifdef __CYGWIN__ > fcntl (fd, F_SETLK_MAND, 1); > #endif > > As for the user, we could add the g+S,g-x stuff additionally at one point, > but I'm rather reluctant to provide these means at all. See below. > [...] > There's a lot to recommend not using mandatory locking at all, unless in > very limited circumstances where interoperability with native > applications using mandatory locking is required. [...] I just applied a patch to implement mandatory locking. It also supports F_GETLK, with limited usability due to Windows restrictions, as I explained in other mail. It does NOT yet support BSD flock locking, only POSIX fcntl locking. I dropped the F_MDLCK idea. Instead I implemented a specific fcntl code to switch to mandatory locking on a file descriptor: fcntl (fd, F_LCK_MANDATORY, 1); The name F_SETLK_MAND didn't seem right. Anyway, afterwards, you can use the usual locking fcntls, but with Windows mandatory locking semantics. I didn't add a way for the user to switch on mandatory locking for now, and I don't intend to do that for 1.7.19. Hope that helps, nevertheless. I'm just about to generate a 2013-06-02 developer snapshot for 32 bit, a 64 bit DLL will follow tomorrow. Please give the 32 bit snapshot a try ASAP. I intend to release 1.7.19 very soon, probably tomorrow or Tuesday. Thanks, Corinna -- Corinna Vinschen Please, send mails regarding Cygwin to Cygwin Maintainer cygwin AT cygwin DOT com Red Hat -- Problem reports: FAQ: Documentation: Unsubscribe info: | https://sourceware.org/legacy-ml/cygwin/2013-06/msg00012.html | CC-MAIN-2020-29 | refinedweb | 294 | 55.84 |
HighlightAppearance
Since: BlackBerry 10.2.0
#include <bb/cascades/HighlightAppearance>
Represents a highlight appearance for a CustomListItem.
You can use the HighlightAppearance class to indicate the type of highlighting that you want to use for a CustomListItem when it's selected. For example, the item could use full highlighting (the entire item is highlighted), frame highlighting (only a small frame on top of the item is highlighted), or no highlighting.
Overview
Public Types Index
Public Types
An enumeration of possible highlight appearances for a CustomListItem.
BlackBerry 10.2.0
- Default 0
Represents the default highlight appearance.
- Full 1
Represents the full highlight appearance.
When this highlight appearance is used, the whole item will be highlighted. Note that if the whole item is covered and no transparency is set on the content, the highlight isn't visible.Since:
BlackBerry 10.2.0
- Frame 2
Represents the frame highlight appearance.Note:
This highlight appearance has a similar effect as HighlightAppearance::Full; it does not highlight only the frame, but the entire item.Since:
BlackBerry 10.2.0
- None 3
Represents no highlight appearance.
When this highlight appearance is used, no highlight is shown. This can be useful when you want to implement and use your own highlight.Since:
BlackBerry 10.2.0
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/cascades/bb__cascades__highlightappearance.html | CC-MAIN-2014-49 | refinedweb | 226 | 52.56 |
Download presentation
Presentation is loading. Please wait.
Published byEric Weaver Modified about 1 year ago
1
Eschatology SCTR 19 – Religions of the Book Prepared by Matt Pham
2?)
3
4
5
6
1 Thessalonians 4:13-18 Paul’s first letter (ca. 50 CE) Wants Christians to keep faith and await Jesus’ return (soon) Answers people’s concerns about those who have died Don’t grieve or be afraid: ALL will see the Parousia. When Jesus returns, the dead will be raised first; And we who are still alive will be together with them. Note: Belief in “Rapture” based on literalist reading of 4:17
7
Later Pauline Eschatology Paul realizes that he himself may!
8)
10
The Book of Revelation a.k.a. The Apocalypse The Revelation to John Entire book is apocalyptic genre [Other Xn “apocalypses” not in NT] Purpose: Encourage Xns to preserve in faith in time of crisis (13:10; 14:12) Content: Initial vision – The Son of Man Letters to 7 Churches of Asia Main visions – Many Tribulations Final visions – The End Destruction of Satan, Evil, Death New Heavens, New Earth, New Jerusalem, Reign of God
12 New Earth New Jerusalem
13
17 but that all should come to repentance.” (3:9)
18
The End Is Coming! When? Is it near? Or far away? How can we define “time”? Is “time” relevant for God? How? What will happen at “The End”? End of what? My life? The USA? The world? Are the events depicted in the Book of Revelation meant to be interpreted literally or symbolically?
Similar presentations
© 2017 SlidePlayer.com Inc. | http://slideplayer.com/slide/4289522/ | CC-MAIN-2017-13 | refinedweb | 267 | 64.71 |
What is the point of oddCount?
Here is some pseudo-code/logic to your program. I hope it helps.
while still getting input
{
declare total variable
What is the point of oddCount?
Here is some pseudo-code/logic to your program. I hope it helps.
while still getting input
{
declare total variable
I wouldn't say static is deprecated.
It is used alot when you need data members or functions in a class accessible without having to instantiate an object.
#include <iostream>
using...
Once you get used to Object Oriented Programming, you wil appreciate its simplicity.
Java is much simpler than C++.
You usually control that kind of code generation via cmd-line flags.
However it is weird that it would put in debugging info without you telling it to.
Well I do use them in different places, however I think later on in the development of this I am going to have some other way of dealing with it.
I just wanted to know what people's opinion's...
Well it seems silly. I probably won't use it in the long run (just for the time being). I have a function that takes 6 arguments and I #define a small macro for expanding a ton of arguments. Here is...
I have recently come across a situation where it will be beneficial to use a #define macro inside my code. This addition will make the code much easier to understand and a little easier to edit...
Right? I mean, I posted in this thread and I can't even remember what I was doing at that time.
Essentially, arrays and linked-lists can be used interchangeably. It really depends on the situation though. You have to think of the pro's and con's of each:
Arrays
Pros
Easy pointer...
Yes, you can read the file (i.e. its a text file). It works fine now, though. It was an old bug in MSVC++ (I never updated). I compiled through g++ and it worked ok.
Fordy, I think you are...
The file is actually only one line long for now and is like so:
2 123456789 25 100
and the main code is like so:
......
Hey, all! It's been a while. I am having a strange problem which doesn't seem right. I have a class I created that I am trying overload the insertion operator (>>). I do so with this code:
...
Make sure that mount is in your /bin directory. Sometimes commands are just symbolically linked from the /bin and /sbin directory. It sounds like mount is in the /sbin dir (in the root users path)...
I only use KDE because it was smaller than GNOME in filesize. Window managers don't really matter too much to me.
Thanks for the reply. I wasn't sure if people still used them or if they just stuck with normal pipes/FIFO's or just network sockets. I'll give it some time, thanks.
In stdlib.h there is a function called system() which allows you to pass arguments to the shell. You use this injunction with the clear command for Bash like so:
system("clear");
Works...
Does anybody here use (or know anyone that uses) System V IPC's? I'm not even sure if they are POSIX'd. I am just wondering if I should invest time learning all about it.
thanks
RedHat isn't just for newbies. I use RH7.3 and it is perfect for me.
I think the key is to try a few and see which one makes you feel more comfortable. I used Mandrake for about a year and just...
You can do alot with calc and stats with just console programs. I got a rush when I finished a program that calculated a Riemann's Sum. Try it out. Do integrals and find out derivatives and just go...
Until you learn certain API's, you won't be impressed with much. Decide what you would like to specialize in. If you want to do network programming, look at socket API's. If you just want to do...
First and foremost, you can't use a character to hold a word. You need to have an array of characters (or a character pointer to an allocated amount of memory). You seem pretty new so I'm just going...
Yeah right.. Schools give you final projects and other little creative freedoms. Once you get into a company and start coding full time you will have to code to their standards and of course to their...
Here is a different, more standard version:
#include <iostream>
#include <cstdlib>
#include <string>
using namespace std;
Well, they did create C# and they did make Visual C++ b/c Visual C++ is just a program that makes it easier to program with MFC.
Also, they aren't trying to take over every language. They are,... | https://cboard.cprogramming.com/search.php?s=6e13793a39a89897ce865a81b7affe53&searchid=6042755 | CC-MAIN-2020-45 | refinedweb | 817 | 76.72 |
LoPy LoraWAN OTAA Deepsleep Example
Is there a working example how to send LoraWAN packets with OTAA in a deepsleep loop?
- join loraWAN
- send data
- deepsleep for minutes
- send data
- goto 3
so far i'm only able todo this loop with joining lora after each deepsleep cycle, but thas quite a bit of aditional airtime which i would like to avoid beside that packet count always is 1.
- papasmurph last edited by
@roadfox Late response, but this doesn't look like a correct approach, as the join (and send) might have failed. Next time it will detect that it comes out of deepsleep and assume that it has joined, which it hasn't, so "if boot" must be replaced by something like "if successfully joined and not boot".
Try these two to see if it changes things for you:
- make sure the socket is blocking
- add a
time.sleep(2)after sending, before saving and going to deep sleep.
I had the same problem and I only could solve it using confirmed messages.
@roadfox I also have the same problem, I do not know if they have already found a solution. it seems that the frame counter is not stored
@jcaron i'm using the default machine.deepslep, it's not for current it's to see if it all will work whenever i get a deepsleep shield. my understanding is that the lopy boots no matter what kind of deepsleep i'm using
thats how i detect the boot and send the data
n = LORA() # Setup network & sensors if machine.reset_cause() == machine.DEEPSLEEP_RESET: print('woke from a deep sleep') else: print('power on or hard reset') # Join LoRaWAN with OTAA n.connect(app_eui, app_key) # Send packet response = n.send(data) print("Received data:", response) # put the device to sleep machine.deepsleep(sleep_time*1000)
What kind of deep sleep are you using? Deep Sleep Shield? Pysense? Pytrack?
How do you detect whether you're coming from deep sleep or some other form of startup?
@jcaron ok thats what i have now:
def connect(self, app_eui, app_key): """ Connect device to LoRa. Set the socket and lora instances. """ app_eui = unhexlify(app_eui) app_key = unhexlify(app_key) # Disable blue blinking and turn LED off LED.heartbeat(False) LED.off() # Initialize LoRa in LORAWAN mode self.lora = LoRa(mode = LoRa.LORAWAN) # Join a network using OTAA (Over the Air Activation) self.lora.join(activation = LoRa.OTAA, auth = (app_eui, app_key), timeout = 0) #login for TheThingsNetwork see here: # Wait until the module has joined the network count = 1 while not self.lora.has_joined(): LED.blink(1, 2.5, 0xff0000) print("Trying to join LoraWAN with OTAA: " , count) count = count + 1 sleep(2.5) print ("LoraWAN joined! ") # save the LoRaWAN connection state self.lora.nvram_save() def send(self, data): """ Send data over the network. """ # Initialize LoRa in LORAWAN mode self.lora = LoRa(mode = LoRa.LORAWAN) # restore the LoRaWAN connection state self.lora.nvram_restore() # Create a LoRa socket print("Create LoRaWAN socket") self.s = socket.socket(socket.AF_LORA, socket.SOCK_RAW) # Set the LoRaWAN data rate self.s.setsockopt(socket.SOL_LORA, socket.SO_DR, 5) # Make the socket non-blocking self.s.setblocking(False) try: self.s.send(data) LED.blink(2, 0.1, 0x00ff00) print("Sending data:") print(data) # save the LoRaWAN connection state self.lora.nvram_save() except OSError as e: if e.errno == 11: print("Caught exception while sending") print("errno: ", e.errno)
i call connect after a hard reset or boot, otherwise i just call send
so far everything is working beside the fact that LoRaWAN frame counter is always at 1 and the packets are dropped in TTN unless i disable frame counter checks.
should nvram_save/restore not also save the framecounter?
You only need to join once, so you may do that manually at the same time you upload the code. Once you've done that, it's as simple as:
nvram_restore()
send your data
nvram_save()
deepsleep
@jcaron i was reading about nvram_save and restore and i also found a code snippet,how to detect if the system is booting or coming from deepsleep but its all puzzle pieces and new to me, so i was hopimg for a bit more complete example
If boot lorawan join else nvram_restore send data nvram_save deepslep 2min
Is this approach correct?
Are you using the most recent firmware? Are you using
nvram_save(before going to sleep) and
nvram_restore(after coming back from sleep)? | https://forum.pycom.io/topic/1784/lopy-lorawan-otaa-deepsleep-example | CC-MAIN-2022-21 | refinedweb | 734 | 65.01 |
Member
119 Points
All-Star
45439 Points
Microsoft
Jan 10, 2017 03:32 AM|Zhi Lv - MSFT|LINK
Hi mamoni.kol2017,
mamoni.kol2017How to manage signalr connection in database
You could refer to the following code to connect the database in signalr:
public class Startup { public void Configuration(IAppBuilder app) { // Any connection or hub wire up and configuration should go here string sqlConnectionString = "Connecton string to your SQL DB"; GlobalHost.DependencyResolver.UseSqlServer(sqlConnectionString); app.MapSignalR(); } }
More details, see:
Best regards,
Dillion
Member
119 Points
Jan 10, 2017 09:31 AM|mamoni.kol2017|LINK
@Zhi i roughly check all your link but none of the link is showing how to manage user connection in database instead of in memory.
see the screen shot please see because those area is not very clear
1) what is message_0 and message_0_id table ?
2) what is payload data in binary format storing in table ?
i got one link which show how to store user connection in db
thanks for your help.
2 replies
Last post Jan 10, 2017 09:31 AM by mamoni.kol2017 | https://forums.asp.net/t/2113395.aspx?How+to+manage+signalr+connection+in+database | CC-MAIN-2019-13 | refinedweb | 179 | 59.33 |
Prerequisite: Partition allocation methods
What is Next Fit ?
Next fit is a modified version of ‘first fit’. It begins as first fit to find a free partition but when called next time it starts searching from where it left off, not from the beginning. This policy makes use of a roving pointer. The pointer roves along the memory chain to search for a next fit. This helps in, to avoid the usage of memory always from the head (beginning) of the free block chain.
What are its advantage over first fit ?
- First fit is a straight and fast algorithm, but tends to cut large portion of free parts into small pieces due to which, processes that needs large portion of memory block would not get anything even if the sum of all small pieces is greater than it required which is so called external fragmentation problem.
- Another problem of first fit is that it tends to allocate memory parts at the begining of the memory, which may leads to more internal fragements at the begining. Next fit tries to address this problem by starting search for the free portion of parts not from the start of the memory, but from where it ends last time.
- Next fit is a very fast searching algorithm and is also comparatively faster than First Fit and Best Fit Memory Management Algorithms.
Example: Input : blockSize[] = {5, 10, 20}; processSize[] = {10, 20, 30}; Output: Process No. Process Size Block no. 1 10 2 2 20 3 3 30 Not Allocated
Algorithm:
- Input the number of memory blocks and their sizes and initializes all the blocks as free.
- Input the number of processes and their sizes.
- Start by picking each process and check if it can be assigned to current block, if yes, allocate it the required memory and check for next process but from the block where we left not from starting.
- If current block size is smaller then keep checking the further blocks.
// C/C++ program for next fit // memory management algorithm #include <bits/stdc++.h> using namespace std; // Function to allocate memory to blocks as per Next fit // algorithm void NextFit(int blockSize[], int m, int processSize[], int n) { // Stores block id of the block allocated to a // process int allocation[n], j = 0; // Initially no block is assigned to any process memset(allocation, -1, sizeof(allocation)); // pick each process and find suitable blocks // according to its size ad assign to it for (int i = 0; i < n; i++) { // Do not start from beginning while (j < m) { if (blockSize[j] >= processSize[i]) { // allocate block j to p[i] process allocation[i] = j; // Reduce available memory in this block. blockSize[j] -= processSize[i]; break; } // mod m will help in traversing the blocks from // starting block after we reach the end. j = (j + 1) % m; } } cout << "\nProcess No.\tProcess Size\tBlock no.\n"; for (int i = 0; i < n; i++) { cout << " " << i + 1 << "\t\t" << processSize[i] << "\t\t"; if (allocation[i] != -1) cout << allocation[i] + 1; else cout << "Not Allocated"; cout << endl; } } // Driver program int main() { int blockSize[] = { 5, 10, 20 }; int processSize[] = { 10, 20, 5 }; int m = sizeof(blockSize) / sizeof(blockSize[0]); int n = sizeof(processSize) / sizeof(processSize[0]); NextFit(blockSize, m, processSize, n); return 0; }
Process No. Process Size Block no. 1 10 2 2 20 3 3 5 1.
Improved By : ishita_thakkar
Recommended Posts:
- Working with Shared Libraries | Set 1
- Operating System | Peterson’s Algorithm (Using processes and shared memory)
- Program for Best Fit algorithm in Memory Management
- Program for Banker’s Algorithm | Set 1 (Safety Algorithm)
- Program for First Fit algorithm in Memory Management
-. | https://www.geeksforgeeks.org/program-next-fit-algorithm-memory-management/ | CC-MAIN-2018-34 | refinedweb | 603 | 58.42 |
Automate Hyperparameter Tuning for your models
When we create our machine learning models, a common task that falls on us is how to tune them.
People end up taking different manual approaches. Some of them work, and some don’t, and a lot of time is spent in anticipation and running the code again and again.
So that brings us to the quintessential question: Can we automate this process?
A while back, I was working on an in-class competition from the “How to win a data science competition” Coursera course. Learned a lot of new things, one among them being Hyperopt — A bayesian Parameter Tuning Framework.
And I was amazed. I left my Mac with hyperopt in the night. And in the morning I had my results. It was awesome, and I did avoid a lot of hit and trial.
This post is about automating hyperparameter tuning because our time is more important than the machine.
So, What is Hyperopt?
From the Hyperopt site:
Hyperopt is a Python library for serial and parallel optimization over awkward search spaces, which may include real-valued, discrete, and conditional dimensions
In simple terms, this means that we get an optimizer that could minimize/maximize any function for us. For example, we can use this to minimize the log loss or maximize accuracy.
All of us know how grid search or random-grid search works.
A grid search goes through the parameters one by one, while a random search goes through the parameters randomly.
Hyperopt takes as an input space of hyperparameters in which it will search and moves according to the result of past trials.
Thus, Hyperopt aims to search the parameter space in an informed way.
I won’t go in the details. But if you want to know more about how it works, take a look at this paper by J Bergstra. Here is the documentation from Github.
Our Dataset
To explain how hyperopt works, I will be working on the heart dataset from UCI precisely because it is a simple dataset. And why not do some good using Data Science apart from just generating profits?
This dataset predicts the presence of a heart disease given some variables.
This is a snapshot of the dataset :
This is how the target distribution looks like:
Hyperopt Step by Step
So, while trying to run hyperopt, we will need to create two Python objects:
An Objective function: The objective function takes the hyperparameter space as the input and returns the loss. Here we call our objective function
objective
A dictionary of hyperparams: We will define a hyperparam space by using the variable
spacewhich is actually just a dictionary. We could choose different distributions for different hyperparameter values.
In the end, we will use the
fmin function from the hyperopt package to minimize our
objective through the
space.
You can follow along with the code in this Kaggle Kernel.
1. Create the objective function
Here we create an objective function which takes as input a hyperparameter space:
We first define a classifier, in this case, XGBoost. Just try to see how we access the parameters from the space. For example
space[‘max_depth’]
We fit the classifier to the train data and then predict on the cross-validation set.
We calculate the required metric we want to maximize or minimize.
Since we only minimize using
fminin hyperopt, if we want to minimize
loglosswe just send our metric as is. If we want to maximize accuracy we will try to minimize
-accuracy
from sklearn.metrics import accuracy_score from hyperopt import hp, fmin, tpe, STATUS_OK, Trials import numpy as np import xgboost as xgb def objective(space): # Instantiate the classifier clf = xgb.XGBClassifier)] # Fit the classsifier clf.fit(X, y, eval_set=eval_set, eval_metric="rmse", early_stopping_rounds=10,verbose=False) # Predict on Cross Validation data pred = clf.predict(Xcv) # Calculate our Metric - accuracy accuracy = accuracy_score(ycv, pred>0.5) # return needs to be in this below format. We use negative of accuracy since we want to maximize it. return {'loss': -accuracy, 'status': STATUS_OK }
2. Create the Space for your classifier
Now, we create the search space for hyperparameters for our classifier.
To do this, we end up using many of hyperopt built-in functions which define various distributions.
As you can see in the code below, we use uniform distribution between 0.7 and 1 for our
subsample hyperparameter. We also give a label for the subsample parameter
x_subsample. You need to provide different labels for each hyperparam you define. I generally add a
x_ before my parameter name to create this label.) }
You can also define a lot of other distributions too. Some of the most useful stochastic expressions currently recognized by hyperopt’s optimization algorithms are:
hp.choice(label, options)— Returns one of the options, which should be a list or tuple.
hp.randint(label, upper)— Returns a random integer in the range [0, upper).
hp.uniform(label, low, high)— Returns a value uniformly between low and high.
hp.quniform(label, low, high, q)— Returns a value like round(uniform(low, high) / q) * q
hp.normal(label, mu, sigma)— Returns a real value that’s normally-distributed with mean mu and standard deviation sigma.
There are a lot of other distributions. You can check them out here.
3. And finally, Run Hyperopt
Once we run this, we get the best parameters for our model. Turns out we achieved an accuracy of 90% by just doing this on the problem.
trials = Trials() best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=10, trials=trials) print(best)
Now we can retrain our XGboost algorithm with these best params, and we are done.
Conclusion
Running the above gives us pretty good hyperparams for our learning algorithm. And that saves me a lot of time to think about various other hypotheses and testing them.
I tend to use this a lot while tuning my models. From my experience, the most crucial part in this whole procedure is setting up the hyperparameter space, and that comes by experience as well as knowledge about the models.
So, Hyperopt is an awesome tool to have in your repository but never neglect to understand what your models does. It will be very helpful in the long run.
You can get the full code in this Kaggle Kernel.
Continue Learning
If you want to learn more about practical data science, do take a look at the “How to win a data science competition” Coursera course. Learned a lot of new things from this course taught by one of the most prolific Kaggler.. | https://mlwhiz.com/blog/2019/10/10/hyperopt2/ | CC-MAIN-2020-40 | refinedweb | 1,097 | 56.45 |
0,3
The leading diagonal of its difference table is the sequence shifted, see Bernstein and Sloane (1995). - N. J. A. Sloane, Jul 04 2015
Also the number of equivalence relations that can be defined on a set of n elements. - Federico Arboleda (federico.arboleda(AT)gmail.com), Mar 09 2005
a(n) = number of nonisomorphic colorings of a map consisting of a row of n+1 adjacent regions. Adjacent regions cannot have the same color. - David W. Wilson, Feb 22 2005
If an integer is squarefree and has n distinct prime factors then a(n) is the number of ways of writing it as a product of its divisors. - Amarnath Murthy, Apr 23 2001
Consider rooted trees of height at most 2. Letting each tree 'grow' into the next generation of n means we produce a new tree for every node which is either the root or at height 1, which gives the Bell numbers. - Jon Perry, Jul 23 2003
Begin with [1,1] and follow the rule that [1,k] -> [1,k+1] and [1,k] k times, e.g., [1,3] is transformed to [1,4], [1,3], [1,3], [1,3]. Then a(n) is the sum of all components: [1,1] = 2; [1,2], [1,1] = 5; [1,3], [1,2], [1,2], [1,2], [1,1] = 15; etc. - Jon Perry, Mar 05 2004
Number of distinct rhyme schemes for a poem of n lines: a rhyme scheme is a string of letters (e.g., 'abba') such that the leftmost letter is always 'a' and no letter may be greater than one more than the greatest letter to its left. Thus 'aac' is not valid since 'c' is more than one greater than 'a'. For example, a(3)=5 because there are 5 rhyme schemes: aaa, aab, aba, abb, abc; also see example by Neven Juric. - Bill Blewett, Mar 23 2004
In other words, number of length-n restricted growth strings (RGS) [s(0),s(1),...,s(n-1)] where s(0)=0 and s(k)<=1+max(prefix) for k>=1, see example (cf. A080337 and A189845). - Joerg Arndt, Apr 30 2011
Number of partitions of {1, ...,n+1} into subsets of nonconsecutive integers, including the partition 1|2|...|n+1. E.g., a(3)=5: there are 5 partitions of {1,2,3,4} into subsets of nonconsecutive integers, namely, 13|24, 13|2|4, 14|2|3, 1|24|3, 1|2|3|4. - Augustine O. Munagi, Mar 20 2005
Triangle (addition) scheme to produce terms, derived from the recurrence, from Oscar Arevalo (loarevalo(AT)sbcglobal.net), May 11 2005:
1
1 2
2 3 5
5 7 10 15
15 20 27 37 52
... [This is Aitken's array A011971]
With P(n) = the number of integer partitions of n, p(i) = the number of parts of the i-th partition of n, d(i) = the number of different parts of the i-th partition of n, p(j,i) = the j-th part of the i-th partition of n, m(i,j) = multiplicity of the j-th part of the i-th partition of n, one has: a(n) = Sum_{i=1..P(n)} (n!/(Product_{j=1..p(i)}p(i,j)!)) * (1/(Product_{j=1..d(i)} m(i,j)!)) - Thomas Wieder, May 18 2005
a(n+1) is the number of binary relations on an n-element set that are both symmetric and transitive. - Justin Witt (justinmwitt(AT)gmail.com), Jul 12 2005
If the rule from Jon Perry, Mar 05 2004, is used, then a(n-1) = [number of components used to form a(n)] / 2. - Daniel Kuan (dkcm(AT)yahoo.com), Feb 19 2006
a(n) is the number of functions f from {1,...,n} to {1,...,n,n+1} that satisfy the following two conditions for all x in the domain: (1) f(x)>x; (2)f(x)=n+1 or f(f(x))=n+1. E.g., a(3)=5 because there are exactly five functions that satisfy the two conditions: f1={(1,4),(2,4),(3,4)}, f2={(1,4),(2,3),(3,4)}, f3={(1,3),(2,4),(3,4)}, f4={(1,2),(2,4),(3,4)} and f5={(1,3),(2,3),(3,4)}. - Dennis P. Walsh, Feb 20 2006
Number of asynchronic siteswap patterns of length n which have no zero-throws (i.e., contain no 0's) and whose number of orbits (in the sense given by Allen Knutson) is equal to the number of balls. E.g., for n=4, the condition is satisfied by the following 15 siteswaps: 4444, 4413, 4242, 4134, 4112, 3441, 2424, 1344, 2411, 1313, 1241, 2222, 3131, 1124, 1111. Also number of ways to choose n permutations from identity and cyclic permutations (1 2), (1 2 3), ..., (1 2 3 ... n) so that their composition is identity. For n=3 we get the following five: id o id o id, id o (1 2) o (1 2), (1 2) o id o (1 2), (1 2) o (1 2) o id, (1 2 3) o (1 2 3) o (1 2 3). (To see the bijection, look at Ehrenborg and Readdy paper.) - Antti Karttunen, May 01 2006
a(n) is the number of permutations on [n] in which a 3-2-1 (scattered) pattern occurs only as part of a 3-2-4-1 pattern. Example: a(3) = 5 counts all permutations on [3] except 321. See "Eigensequence for Composition" reference a(n) = number of permutation tableaux of size n (A000142) whose first row contains no 0's. Example: a(3)=5 counts {{}, {}, {}}, {{1}, {}}, {{1}, {0}}, {{1}, {1}}, {{1, 1}}. - David Callan, Oct 07 2006
Take the series 1^n/1! + 2^n/2! + 3^n/3! + 4^n/4! ... If n=1 then the result will be e, about 2.71828. If n=2, the result will be 2e. If n=3, the result will be 5e. This continues, following the pattern of the Bell numbers: e, 2e, 5e, 15e, 52e, 203e, etc. - Jonathan R. Love (japanada11(AT)yahoo.ca), Feb 22 2007
From Gottfried Helms, Mar 30 2007: (Start)
This sequence is also the first column in the matrix-exponential of the (lower triangular) Pascal-matrix, scaled by exp(-1): PE = exp(P) / exp(1) =
1
1 1
2 2 1
5 6 3 1
15 20 12 4 1
52 75 50 20 5 1
203 312 225 100 30 6 1
877 1421 1092 525 175 42 7 1
First 4 columns are A000110, A033306, A105479, A105480. The general case is mentioned in the two latter entries. PE is also the Hadamard-product Toeplitz(A000110) (X) P:
2 1 1
5 2 1 1
15 5 2 1 1 (X) P
52 15 5 2 1 1
203 52 15 5 2 1 1
877 203 52 15 5 2 1 1
(End)
The terms can also be computed with finite steps and precise integer arithmetic. Instead of exp(P)/exp(1) one can compute A = exp(P - I) where I is the identity-matrix of appropriate dimension since (P-I) is nilpotent to the order of its dimension. Then a(n)=A[n,1] where n is the row-index starting at 1. - Gottfried Helms, Apr 10 2007
Define a Bell pseudoprime to be a composite number n such that a(n) == 2 (mod n). W. F. Lunnon recently found the Bell pseudoprimes 21361 = 41*521 and C46 = 3*23*16218646893090134590535390526854205539989357 and conjectured that Bell pseudoprimes are extremely scarce. So the second Bell pseudoprime is unlikely to be known with certainty in the near future. I confirmed that 21361 is the first. - David W. Wilson, Aug 04 2007 and Sep 24 2007
This sequence and A000587 form a reciprocal pair under the list partition transform described in A133314. - Tom Copeland, Oct 21 2007
Starting (1, 2, 5, 15, 52, ...), equals row sums and right border of triangle A136789. Also row sums of triangle A136790. - Gary W. Adamson, Jan 21 2008
This is the exponential transform of A000012. - Thomas Wieder, Sep 09 2008
From Abdullahi Umar, Oct 12 2008: (Start)
a(n) is also the number of idempotent order-decreasing full transformations (of an n-chain).
a(n) is also the number of nilpotent partial one-one order-decreasing transformations (of an n-chain).
a(n+1) is also the number of partial one-one order-decreasing transformations (of an n-chain). (End)
From Peter Bala, Oct 19 2008: (Start)
Bell(n) is the number of n-pattern sequences [Cooper & Kennedy]. An n-pattern sequence is a sequence of integers (a_1,...,a_n) such that a_i = i or a_i = a_j for some j < i. For example, Bell(3) = 5 since the 3-pattern sequences are (1,1,1), (1,1,3), (1,2,1), (1,2,2) and (1,2,3).
Bell(n) is the number of sequences of positive integers (N_1,...,N_n) of length n such that N_1 = 1 and N_(i+1) <= 1 + max{j = 1..i} N_j for i >= 1 (see the comment by B. Blewett above). It is interesting to note that if we strengthen the latter condition to N_(i+1) <= 1 + N_i we get the Catalan numbers A000108 instead of the Bell numbers.
Equals the eigensequence of Pascal's triangle, A007318; and starting with offset 1, = row sums of triangles A074664 and A152431. - Gary W. Adamson, Dec 04 2008
The entries f(i, j) in the exponential of the infinite lower-triangular matrix of binomial coefficients b(i, j) are f(i, j) = b(i, j) e a(i - j). - David Pasino, Dec 04 2008
Equals Lim_{k->inf.} A071919^k. - Gary W. Adamson, Jan 02 2009
Equals A154107 convolved with A014182, where A014182 = expansion of exp(1-x-exp(-x)), the eigensequence of A007318^(-1). Starting with offset 1 = A154108 convolved with (1,2,3,...) = row sums of triangle A154109. - Gary W. Adamson, Jan 04 2009
Repeated iterates of (binomial transform of [1,0,0,0,...]) will converge upon (1, 2, 5, 15, 52,...) when each result is prefaced with a "1"; such that the final result is the fixed limit: (binomial transform of [1,1,2,5,15,...] = (1,2,5,15,52,...). - Gary W. Adamson, Jan 14 2009
From Karol A. Penson, May 03 2009: (Start)
Relation between the Bell numbers B(n) and the n-th derivative of 1/Gamma(1+x) of such derivatives through seq(subs(x=0, simplify((d^n/dx^n)GAMMA(1+x)^(-1))), n=1..6);
b) leave them expressed in terms of digamma (Psi(k)) and polygamma (Psi(k,n)) functions and unevaluated;
Examples of such expressions, for n=1..5, are:
n=1: -Psi(1),
n=2: -(-Psi(1)^2+Psi(1,1)),
n=3: -Psi(1)^3+3*Psi(1)*Psi(1,1)-Psi(2,1),
n=4: -(-Psi(1)^4+6*Psi(1)^2*Psi(1,1)-3*Psi(1,1)^2-4*Psi(1)*Psi(2,1)+Psi(3, 1)),
n=5: -Psi(1)^5 +10*Psi(1)^3*Psi(1,1) -15*Psi(1)*Psi(1,1)^2 -10*Psi(1)^2*Psi(2,1) +10*Psi(1,1)*Psi(2,1) +5*Psi(1)*Psi(3,1) -Psi(4,1);
c) for a given n, read off the sum of absolute values of coefficients of every term involving digamma or polygamma functions.
This sum is equal to B(n). Examples: B(1)=1, B(2)=1+1=2, B(3)=1+3+1=5, B(4)=1+6+3+4+1=15, B(5)=1+10+15+10+10+5+1=52;
d) Observe that this decomposition of the Bell number B(n) apparently does not involve the Stirling numbers of the second kind explicitly.
The numbers given above by Penson lead to the multinomial coefficients A036040. - Johannes W. Meijer, Aug 14 2009
Column 1 of A162663. - Franklin T. Adams-Watters, Jul 09 2009
Asymptotic expansions (0!+1!+2!+...+(n-1)!)/(n-1)! = a(0) + a(1)/n + a(2)/n^2 + ... and (0!+1!+2!+...+n!)/n! = 1 + a(0)/n + a(1)/n^2 + a(2)/n^3 + .... - Michael Somos, Jun 28 2009
Starting with offset 1 = row sums of triangle A165194. - Gary W. Adamson, Sep 06 2009
a(n+1) = A165196(2^n); where A165196 begins: (1, 2, 4, 5, 7, 12, 14, 15, ...). such that A165196(2^3) = 15 = A000110(4). - Gary W. Adamson, Sep 06 2009
The divergent series g(x=1,m) = 1^m*1! - 2^m*2! + 3^m*3! - 4^m*4! + ..., m >= -1, which for m=-1 dates back to Euler, is related to the Bell numbers. We discovered that g(x=1,m) = (-1)^m * (A040027(m) - A000110(m+1) * A073003). We observe that A073003 is Gompertz's constant and that A040027 was published by Gould, see for more information A163940. - Johannes W. Meijer, Oct 16 2009
a(n)= E(X^n), i.e., the n-th moment about the origin of a random variable X that has a Poisson distribution with (rate) parameter, lambda = 1. - Geoffrey Critzer, Nov 30 2009
Let A000110 = S(x), then S(x) = A(x)/A(x^2) when A(x) = A173110; or (1, 1, 2, 5, 15, 52, ...) = (1, 1, 3, 6, 20, 60, ...) / (1, 0, 1, 0, 3, 0, 6, 0, 20, ...). - Gary W. Adamson, Feb 09 2010
The Bell numbers serve as the upper limit for the number of distinct homomorphic images from any given finite universal algebra. Every algebra homomorphism is determined by its kernel, which must be a congruence relation. As the number of possible congruence relations with respect to a finite universal algebra must be a subset of its possible equivalence classes (given by the Bell numbers), it follows naturally. - Max Sills, Jun 01 2010
For a proof of the o.g.f. given in the R. Stephan comment see, e.g., the W. Lang link under A071919. - Wolfdieter Lang, Jun 23 2010
Let B(x) = (1 + x + 2x^2 + 5x^3 + ...). Then B(x) is satisfied by A(x)/A(x^2) where A(x) = polcoeff A173110: (1 + x + 3x^2 + 6x^3 + 20x^4 + 60x^5 + ...) = B(x) * B(x^2) * B(x^4) * B(x^8) * .... - Gary W. Adamson, Jul 08 2010
Consider a set of A000217(n) balls of n colors in which, for each integer k = 1 to n, exactly one color appears in the set a total of k times. (Each ball has exactly one color and is indistinguishable from other balls of the same color.) a(n+1) equals the number of ways to choose 0 or more balls of each color without choosing any two colors the same positive number of times. (See related comments for A000108, A008277, A016098.) - Matthew Vandermast, Nov 22 2010
A binary counter with faulty bits starts at value 0 and attempts to increment by 1 at each step. Each bit that should toggle may or may not do so. a(n) is the number of ways that the counter can have the value 0 after n steps. E.g., for n=3, the 5 trajectories are 0,0,0,0; 0,1,0,0; 0,1,1,0; 0,0,1,0; 0,1,3,0. - David Scambler, Jan 24 2011
No Bell number is divisible by 8, and no Bell number is congruent to 6 modulo 8; see Theorem 6.4 and Table 1.7 in Lunnon, Pleasants and Stephens. - Jon Perry, Feb 07 2011, clarified by Eric Rowland, Mar 26 2014
a(n+1) is the number of (symmetric) positive semidefinite n X n 0-1 matrices. These correspond to equivalence relations on {1,...,n+1}, where matrix element M[i,j] = 1 if and only if i and j are equivalent to each other but not to n+1. - Robert Israel, Mar 16 2011
a(n) is the number of monotonic-labeled forests on n vertices with rooted trees of height less than 2. We note that a labeled rooted tree is monotonic-labeled if the label of any parent vertex is greater than the label of any offspring vertex. See link "Counting forests with Stirling and Bell numbers". - Dennis P. Walsh, Nov 11 2011
a(n) = D^n(exp(x)) evaluated at x = 0, where D is the operator (1+x)*d/dx. Cf. A000772 and A094198. - Peter Bala, Nov 25 2011
B(n) counts the length n+1 rhyme schemes without repetitions. E.g., for n=2 there are 5 rhyme schemes of length 3 (aaa, aab, aba, abb, abc), and the 2 without repetitions are aba, abc. This is basically O. Munagi's result that the Bell numbers count partitions into subsets of nonconsecutive integers (see comment above dated Mar 20 2005). - Eric Bach, Jan 13 2012
Number n is prime if mod(a(n)-2,n) = 0. -Dmitry Kruchinin, Feb 14 2012
Right and left borders and row sums of A212431 = A000110 or a shifted variant. - Gary W. Adamson, Jun 21 2012
Number of maps f: [n] -> [n] where f(x)<=x and f(f(x))=f(x) (projections). - Joerg Arndt, Jan 04 2013
Permutations of [n] avoiding any given one of the 8 dashed patterns in the equivalence classes (i) 1-23, 3-21, 12-3, 32-1, and (ii) 1-32, 3-12, 21-3, 23-1. (See Claesson 2001 reference.) - David Callan, Oct 03 2013
Conjecture: No a(n) has the form x^m with m > 1 and x > 1. - Zhi-Wei Sun, Dec 02 2013
Sum_{n>=0} a(n)/n! = e^(e-1) = 5.57494152476... , see A234473. - Richard R. Forberg, Dec 26 2013 (This is the e.g.f. for x=1. - Wolfdieter Lang, Feb 02 2015)
Sum_{j=0..n} binomial(n,j)*a(j) = (1/e)*Sum_{k>=0} (k+1)^n/k! = (1/e) Sum_{k=1..infinity} k^(n+1)/k! = a(n+1), n >= 0, using the Dobinski formula. See the comment by Gary W. Adamson, Dec 04 2008 on the Pascal eigensequence. - Wolfdieter Lang, Feb 02 2015
In fact it is not really an eigensequence of the Pascal matrix; rather the Pascal matrix acts on the sequence as a shift. It is an eigensequence (the unique eigensequence with eigenvalue 1) of the matrix derived from the Pascal matrix by adding at the top the row [1, 0, 0, 0 ...]. The binomial sum formula may be derived from the definition in terms of partitions: label any element X of a set S of N elements, and let X(k) be the number of subsets of S containing X with k elements. Since each subset has a unique coset, the number of partitions p(N) of S is given by p(N) = Sum_{k=1..N} (X(k) p(N-k)); trivially X(k) = N-1 choose k-1. - Mason Bogue, Mar 20 2015
a(n) is the number of ways to nest n matryoshkas (Russian nesting dolls): we may identify {1, 2, ..., n} with dolls of ascending sizes and the sets of a set partition with stacks of dolls. - Carlo Sanna, Oct 17 2015
Number of permutations of [n] where the initial elements of consecutive runs of increasing elements are in decreasing order. a(4) = 15: `1234, `2`134, `23`14, `234`1, `24`13, `3`124, `3`2`14, `3`24`1, `34`12, `34`2`1, `4`123, `4`2`13, `4`23`1, `4`3`12, `4`3`2`1. - Alois P. Heinz, Apr 27 2016
Taking with alternating signs, the Bell numbers are the coefficients in the asymptotic expansion (Ramanujan): (-1)^n*(A000166(n) - n!/exp(1)) ~ 1/n - 2/n^2 + 5/n^3 - 15/n^4 + 52/n^5 - 203/n^6 + O(1/n^7). - Vladimir Reshetnikov, Nov 10 2016
Number of treeshelves avoiding pattern T231. See A278677 for definitions and examples. - Sergey Kirgizov, Dec 24 2016
Presumably this satisfies Benford's law, although the results in Hürlimann (2009) do not make this clear. - N. J. A. Sloane, Feb 09 2017
a(n) = Sum(# of standard immaculate tableaux of shape m, m is a composition of n), where this sum is over all integer compositions m of n > 0. This formula is easily seen to hold by identifying standard immaculate tableaux of size n with set partitions of { 1, 2, ..., n }. For example, if we sum over integer compositions of 4 lexicographically, we see that 1+1+2+1+3+3+3+1 = 15 = A000110(4). - John M. Campbell, Jul 17 2017
a(n) is also the number of independent vertex sets (and vertex covers) in the (n-1)-triangular honeycomb bishop graph. - Eric W. Weisstein, Aug 10 2017
Even-numbered entries represent the numbers of configurations of identity and non-identity for alleles of a gene in n diploid individuals with distinguishable maternal and paternal alleles. - Noah A Rosenberg, Jan 28 2019
Number of partial equivalence relations (PERs) on a set with n elements (offset=1), i.e., number of symmetric, transitive (not necessarily reflexive) relations. The idea is to add a dummy element D to the set, and then take equivalence relations on the result; anything equivalent to D is then removed for the partial equivalence relation. - David Spivak, Feb 06 2019
Number of words of length n+1 with no repeated letters, when letters are unlabeled. - Thomas Anton, Mar 14 2019
Stefano Aguzzoli, Brunella Gerla and Corrado Manara, Poset Representation for Goedel and Nilpotent Minimum Logics, in Symbolic and Quantitative Approaches to Reasoning with Uncertainty, Lecture Notes in Computer Science, Volume 3571/2005, Springer-Verlag. [Added by N. J. A. Sloane, Jul 08 2009]
S. Ainley, Problem 19, QARCH, No. IV, Nov 03, 1980.
J.-P. Allouche and J. Shallit, Automatic Sequences, Cambridge Univ. Press, 2003, p. 205.
W. Asakly, A. Blecher, C. Brennan, A. Knopfmacher, T. Mansour, S. Wagner, Set partition asymptotics and a conjecture of Gould and Quaintance, Journal of Mathematical Analysis and Applications, Volume 416, Issue 2, Aug 15 2014, Pages 672-682.
J. Balogh, B. Bollobas and D. Weinreich, A jump to the Bell numbers for hereditary graph properties, J. Combin. Theory Ser. B 95 (2005), no. 1, 29-48.
R. E. Beard, On the coefficients in the expansion of exp(exp(t)) and exp(-exp(t)), J. Institute Actuaries, 76 (1951), 152-163.
H. W. Becker, Abstracts of two papers related to Bell numbers, Bull. Amer. Math. Soc., 52 (1946), p. 415.
E. T. Bell, The iterated exponential numbers, Ann. Math., 39 (1938), 539-557.
C. M. Bender, D. C. Brody and B. K. Meister, Quantum Field Theory of Partitions, J. Math. Phys., 40,7 (1999), 3239-45.
E. A. Bender and J. R. Goldman, Enumerative uses of generating functions, Indiana Univ. Math. J., 20 (1971), 753-765.
G. Birkhoff, Lattice Theory, Amer. Math. Soc., Revised Ed., 1961, p. 108, Ex. 1.
M. T. L. Bizley, On the coefficients in the expansion of exp(lambda exp(t)), J. Inst. Actuaries, 77 (1952), p. 122.
J. M. Borwein, D. H. Bailey and R. Girgensohn, Experimentation in Mathematics, A K Peters, Ltd., Natick, MA, 2004. x+357 pp. See p. 41.
Carlier, Jacques; and Lucet, Corinne; A decomposition algorithm for network reliability evaluation. In First International Colloquium on Graphs and Optimization (GOI), 1992 (Grimentz). Discrete Appl. Math. 65 (1996), 141-156.
M. E. Cesaro, Sur une equation aux differences melees, Nouvelles Annales de Math. (3), Tome 4, (1885), 36-40.
Anders Claesson, Generalized Pattern Avoidance, European Journal of Combinatorics, 22 (2001) 961-971.
L. Comtet, Advanced Combinatorics, Reidel, 1974, p. 210.
J. H. Conway et al., The Symmetries of Things, Peters, 2008, p. 207.
De Angelis, Valerio, and Dominic Marcello. "Wilf's Conjecture." The American Mathematical Monthly 123.6 (2016): 557-573.
N. G. de Bruijn, Asymptotic Methods in Analysis, Dover, 1981, Sections 3.3. Case b and 6.1-6.3.
J.-M. De Koninck, Ces nombres qui nous fascinent, Entry 52, p. 19, Ellipses, Paris 2008.
G. Dobinski, Summierung der Reihe Sum(n^m/n!) für m = 1, 2, 3, 4, 5, ..., Grunert Archiv (Arch. f. Math. und Physik), 61 (1877) 333-336.
L. F. Epstein, A function related to the series for exp(exp(z)), J. Math. and Phys., 18 (1939), 153-173.
G. Everest, A. van der Poorten, I. Shparlinski and T. Ward, Recurrence Sequences, Amer. Math. Soc., 2003; see esp. p. 255.
Flajolet, Philippe and Schott, Rene, Nonoverlapping partitions, continued fractions, Bessel functions and a divergent series, European J. Combin. 11 (1990), no. 5, 421-432.
Martin Gardner, Fractal Music, Hypercards and More (Freeman, 1992), Chapter 2.
H. W. Gould, Research bibliography of two special number sequences, Mathematica Monongaliae, Vol. 12, 1971.
R. L. Graham, D. E. Knuth and O. Patashnik, Concrete Mathematics, Addison-Wesley, 2nd ed., p. 493.
Silvia Heubach and Toufik Mansour, Combinatorics of Compositions and Words, CRC Press, 2010.
M. Kauers and P. Paule, The Concrete Tetrahedron, Springer 2011, p. 26.
D. E. Knuth, The Art of Computer Programming, vol. 4A, Combinatorial Algorithms, Section 7.2.1.5 (p. 418).
Christian Kramp, Der polynomische Lehrsatz (Leipzig: 1796), 113.
Lehmer, D. H. Some recursive sequences. Proceedings of the Manitoba Conference on Numerical Mathematics (Univ. Manitoba, Winnipeg, Man., 1971), pp. 15--30. Dept. Comput. Sci., Univ. Manitoba, Winnipeg, Man., 1971. MR0335426 (49 #208)
J. Levine and R. E. Dalton, Minimum periods, modulo p, of first-order Bell exponential integers, Math. Comp., 16 (1962), 416-423.
Levinson, H.; Silverman, R. Topologies on finite sets. II. Proceedings of the Tenth Southeastern Conference on Combinatorics, Graph Theory and Computing (Florida Atlantic Univ., Boca Raton, Fla., 1979), pp. 699--712, Congress. Numer., XXIII-XXIV, Utilitas Math., Winnipeg, Man., 1979. MR0561090 (81c:54006)
S. Linusson, The number of M-sequences and f-vectors, Combinatorica, 19 (1999), 255-266.
L. Lovasz, Combinatorial Problems and Exercises, North-Holland, 1993, pp. 14-15.
W. F. Lunnon, P. A. B. Pleasants and N. M. Stephens, Arithmetic properties of Bell numbers to a composite modulus I, Acta Arithmetica 35 (1979) 1-16.
M. Meier, On the number of partitions of a given set, Amer. Math. Monthly, 114 (2007), p. 450.
Merris, Russell, and Stephen Pierce. "The Bell numbers and r-fold transitivity." Journal of Combinatorial Theory, Series A 12.1 (1972): 155-157.
Moser, Leo, and Max Wyman. An asymptotic formula for the Bell numbers. Trans. Royal Soc. Canada, 49 (1955), 49-53.
A. O. Munagi, k-Complementing Subsets of Nonnegative Integers, International Journal of Mathematics and Mathematical Sciences, 2005:2, (2005), 215-224.
A. Murthy, Generalization of partition function, introducing Smarandache factor partition, Smarandache Notions Journal, Vol. 11, No. 1-2-3, Spring 2000.
Amarnath Murthy and Charles Ashbacher, Generalized Partitions and Some New Ideas on Number Theory and Smarandache Sequences, Hexis, Phoenix; USA 2005. See Section 1.4,1.8.
P. Peart, Hankel determinants via Stieltjes matrices. Proceedings of the Thirty-first Southeastern International Conference on Combinatorics, Graph Theory and Computing (Boca Raton, FL, 2000). Congr. Numer. 144 (2000), 153-159.
A. M. Robert, A Course in p-adic Analysis, Springer-Verlag, 2000; p. 212.
G.-C. Rota, Finite Operator Calculus.
Frank Ruskey, Jennifer Woodcock and Yuji Yamauchi, Counting and computing the Rand and block distances of pairs of set partitions, Journal of Discrete Algorithms, Volume 16, October 2012, Pages 236-248.; see Section 1.4 and Example 5.2.4.
Abdullahi Umar, On the semigroups of order-decreasing finite full transformations, Proc. Roy. Soc. Edinburgh 120A (1992), 129-142.
Abdullahi Umar, On the semigroups of partial one-to-one order-decreasing finite transformations, Proc. Roy. Soc. Edinburgh 123A (1993), 355-363.
Simon Plouffe, Table of n, a(n) for n = 0..500
M. Aigner, A characterization of the Bell numbers, Discr. Math., 205 (1999), 207-210.
M. Aigner, Enumeration via ballot numbers, Discrete Math., 308 (2008), 2544-2563.
S. Ainley, Problem 19, QARCH, No. IV, Nov 03, 1980. [Annotated scanned copy]
Tewodros Amdeberhan, Valerio de Angelis and Victor H. Moll, Complementary Bell numbers: arithmetical properties and Wilf's conjecture
R. Aldrovandi and L. P. Freitas, Continuous iteration of dynamical maps, arXiv:physics/9712026 [math-ph], 1997.
Horst Alzer, On Engel's Inequality for Bell Numbers, J. Int. Seq., Vol. 22 (2019), Article 19.7.1.
Joerg Arndt, Matters Computational (The Fxtbook), p.151, p.358, and p. 368.
Joerg Arndt, Subset-lex: did we miss an order?, arXiv:1405.6503 [math.CO], 2014-2015.
Juan S. Auli, Sergi Elizalde, Wilf equivalences between vincular patterns in inversion sequences, arXiv:2003.11533 [math.CO], 2020.
E. Baake, M. Baake, M. Salamat, The general recombination equation in continuous time and its solution, arXiv preprint arXiv:1409.1378 [math.CA], 2014-2015.
Pat Ballew, Bell Numbers
C. Banderier, M. Bousquet-Mélou, A. Denise, P. Flajolet, D. Gardy and D. Gouyou-Beauchamps, Generating Functions for Generating Trees, Discrete Mathematics 246(1-3), March 2002, pp. 29-55.
Elizabeth Banjo, Representation theory of algebras related to the partition algebra, Unpublished Doctoral thesis, City University London, 2013.
S. Barbero, U. Cerruti, N. Murru, A Generalization of the Binomial Interpolated Operator and its Action on Linear Recurrent Sequences , J. Int. Seq. 13 (2010) # 10.9.7
J.-L. Baril, T. Mansour, and A. Petrossian, Equivalence classes of permutations modulo excedances, 2014.
Jean-Luc Baril, Sergey Kirgizov, Vincent Vajnovszki, Patterns in treeshelves, arXiv:1611.07793 [cs.DM], 2016.
P. Barry, Invariant number triangles, eigentriangles and Somos-4 sequences, arXiv preprint arXiv:1107.5490 [math.CO], 2011.
D. Barsky, Analyse p-adique et suites classiques de nombres, Sem. Loth. Comb. B05b (1981) 1-21.
R. E. Beard, On the Coefficients in the Expansion of e^(e^t) and e^(-e^t), J. Institute of Actuaries, 76 (1950), 152-163. [Annotated scanned copy]
H. W. Becker, Abstracts of two papers from 1946 related to Bell numbers [Annotated scanned copy]
H. W. Becker, Rooks and rhymes, Math. Mag., 22 (1948/49), 23-26.
H. W. Becker, Rooks and rhymes, Math. Mag., 22 (1948/49), 23-26. [Annotated scanned copy]
H. W. Becker and D. H. Browne, Problem E461 and solution, American Mathematical Monthly, Vol. 48 (1941), pp. 701-703.
E. T. Bell, Exponential numbers, Amer. Math. Monthly, 41 (1934), 411-419.
E. T. Bell, Exponential polynomials, Ann. Math., 35 (1934), 258-277.
E. A. Bender and J. R. Goldman, Enumerative uses of generating functions, Indiana Univ. Math. J., 20 (1971), 753-765. [Annotated scanned copy]
Beáta Bényi, José L. Ramírez, Some Applications of S-restricted Set Partitions, arXiv:1804.03949 [math.CO], 2018.
M. Bernstein and N. J. A. Sloane, Some canonical sequences of integers, arXiv:math/0205301 [math.CO], 2002; Linear Alg. Applications, 226-228 (1995), 57-72; erratum 320 (2000), 210. [Link to arXiv version]
M. Bernstein and N. J. A. Sloane, Some canonical sequences of integers, Linear Alg. Applications, 226-228 (1995), 57-72; erratum 320 (2000), 210. [Link to Lin. Alg. Applic. version together with omitted figures]
Daniel Birmajer, Juan B. Gil, Michael D. Weiner, A family of Bell transformations, arXiv:1803.07727 [math.CO], 2018.
M. T. L. Bizley, On the coefficients in the expansion of exp(lambda exp(t)), J. Inst. Actuaries, 77 (1952), p. 122. [Annotated scanned.
Tobias Boege, Thomas Kahle, Construction Methods for Gaussoids, arXiv:1902.11260 [math.CO], 2019.
Tommaso Bolognesi, Vincenzo Ciancia, Exploring nominal cellular automata, Journal of Logical and Algebraic Methods in Programming, vol 93 (2017), see p 26.
D. Borwein, S. Rankin, S. and L. Renner, Enumeration of injective partial transformations, Discrete Math. (1989), 73: 291-296.
J. M. Borwein, Adventures with the OEIS: Five sequences Tony may like, Guttman 70th [Birthday] Meeting, 2015, revised May 2016.
J. M. Borwein, Adventures with the OEIS: Five sequences Tony may like, Guttman 70th [Birthday] Meeting, 2015, revised May 2016. [Cached copy, with permission]
H. Bottomley, Illustration of initial terms
Lukas Bulwahn, Spivey's Generalized Recurrence for Bell Numbers, Archive of Formal Proofs, 2016.
A. Burstein and I. Lankham, Combinatorics of patience sorting piles, arXiv:math/0506358 [math.CO], 2005-2006.
David Callan, A Combinatorial Interpretation of the Eigensequence for Composition, Vol. 9 (2006), Article 06.1.4.
David Callan, Cesaro's integral formula for the Bell numbers (corrected), arXiv:0708.3301 [math.HO], 2007.
David Callan and Emeric Deutsch, The Run Transform, arXiv preprint arXiv:1112.3639 [math.CO], 2011.
David Callan, On Ascent, Repetition and Descent Sequences, arXiv:1911.02209 [math.CO], 2019.
P. J. Cameron, Sequences realized by oligomorphic permutation groups, J. Integ. Seqs. Vol. 3 (2000), #00.1.5.
S. D. Chatterji, The number of topologies on n points, Kent State University, NASA Technical report, 1966 [Annotated scanned copy]
K.-W. Chen, Algorithms for Bernoulli numbers and Euler numbers, J. Integer Sequences, 4 (2001), #01.1.6.
B. Chern, P. Diaconis, D. M. Kane, and R. C. Rhoades, Central limit theorems for some set partition statistics, 2014.
Shane Chern, On 0012-avoiding inversion sequences and a Conjecture of Lin and Ma, arXiv:2006.04318 [math.CO], 2020.
Ali Chouria, Vlad-Florin Drǎgoi, Jean-Gabriel Luque, On recursively defined combinatorial classes and labelled trees, arXiv:2004.04203 [math.CO], 2020.
Johann Cigler, Christian Krattenthaler, Hankel determinants of linear combinations of moments of orthogonal polynomials, arXiv:2003.01676 [math.CO], 2020.
A. Claesson and T. Mansour, Counting patterns of type (1,2) or (2,1), arXiv:math/0110036 [math.CO], 2001.
Martin Cohn, Shimon Even, Karl Menger, Jr., and Philip K. Hooper, On the Number of Partitionings of a Set of n Distinct Objects, Amer. Math. Monthly 69 (1962), no. 8, 782--785. MR1531841.
Martin Cohn, Shimon Even, Karl Menger, Jr., and Philip K. Hooper, On the Number of Partitionings of a Set of n Distinct Objects, Amer. Math. Monthly 69 (1962), no. 8, 782--785. MR1531841. [Annotated scanned copy]
C. Coker, A family of eigensequences, Discrete Math. 282 (2004), 249-250.
Laura Colmenarejo, Rosa Orellana, Franco Saliola, Anne Schilling, Mike Zabrocki, An insertion algorithm on multiset partitions with applications to diagram algebras, arXiv:1905.02071 [math.CO], 2019.
CombOS - Combinatorial Object Server, Generate set partitions
C. Cooper and R. E. Kennedy, Patterns, automata and Stirling numbers of the second kind, Mathematics and Computer Education Journal, 26 (1992), 120-124.
Éva Czabarka, Péter L. Erdős, Virginia Johnson, Anne Kupczok and László A. Székely, Asymptotically normal distribution of some tree families relevant for phylogenetics, and of partitions without singletons, arXiv preprint arXiv:1108.6015 [math.CO], 2011.
Gesualdo Delfino and Jacopo Viti, Potts q-color field theory and scaling random cluster model, arXiv preprint arXiv:1104.4323 [hep-th], 2011.
R. M. Dickau, Bell number diagrams. - From N. J. A. Sloane, Feb 08 2013
Robert W. Donley Jr, Binomial arrays and generalized Vandermonde identities, arXiv:1905.01525 [math.CO], 2019.
Tomislav Došlic, Darko Veljan, Logarithmic behavior of some combinatorial sequences, Discrete Math. 308 (2008), no. 11, 2182--2212. MR2404544 (2009j:05019) - N. J. A. Sloane, May 01 2012
Branko Dragovich, On Summation of p-Adic Series, arXiv:1702.02569 [math.NT], 2017.
Branko Dragovich, Andrei Yu. Khrennikov, Natasa Z. Misic, Summation of p-Adic Functional Series in Integer Points, arXiv:1508.05079, 2015
B. Dragovich, N. Z. Misic, p-Adic invariant summation of some p-adic functional series, P-Adic Numbers, Ultrametric Analysis, and Applications, October 2014, Volume 6, Issue 4, pp 275-283.
R. Ehrenborg and M. Readdy, Juggling and applications to q-analogues, Discrete Math. 157 (1996), 107-125.
L. F. Epstein, A function related to the series for exp(exp(z)), J. Math. and Phys., 18 (1939), 153-173. [Annotated scanned copy]
M. Erné, Struktur- und Anzahlformeln für Topologien auf Endlichen Mengen, Manuscripta Math., 11 (1974), 221-259.
M. Erné, Struktur- und Anzahlformeln für Topologien auf Endlichen Mengen, Manuscripta Math., 11 (1974), 221-259. (Annotated scanned copy)
FindStat - Combinatorial Statistic Finder, Set partitions
John Fiorillo, GENJI-MON
P. Flajolet and R. Sedgewick, Analytic Combinatorics, 2009; see page 109, 110
H. Fripertinger, The Bell Numbers
O. Furdui, T. Trif, On the Summation of Certain Iterated Series, J. Int. Seq. 14 (2011) #11.6.1.
Daniel L. Geisler, Combinatorics of Iterated Functions
A. Gertsch and A. M.Robert, Some congruences concerning the Bell numbers
Robert Gill, The number of elements in a generalized partition semilattice, Discrete mathematics 186.1-3 (1998): 125-134. See Example 2.
Jekuthiel Ginsburg, Iterated exponentials, Scripta Math., 11 (1945), 340-353. [Annotated scanned copy]
H. W. Gould, J. Quaintance, Implications of Spivey's Bell Number formula, JIS 11 (2008) 08.3.7
Adam M. Goyt and Lara K. Pudwell, Avoiding colored partitions of two elements in the pattern sense, arXiv preprint arXiv:1203.3786 [math.CO], 2012. - From N. J. A. Sloane, Sep 17 2012
W. S. Gray and M. Thitsa, System Interconnections and Combinatorial Integer Sequences, in: System Theory (SSST), 2013 45th Southeastern Symposium on, Date of Conference: 11-Mar 11 2013.
M. Griffiths, Generalized Near-Bell Numbers, JIS 12 (2009) 09.5.7
M. Griffiths, I. Mezo, A generalization of Stirling Numbers of the Second Kind via a special multiset, JIS 13 (2010) #10.2.5.
Song He, Fei Teng, Yong Zhang, String Correlators: Recursive Expansion, Integration-by-Parts and Scattering Equations, arXiv:1907.06041 [hep-th], 2019. Also in Journal of High Energy Physics (2019), 2019:85.
Gottfried Helms, Bell Numbers, 2008.
A. Hertz and H. Melot, Counting the Number of Non-Equivalent Vertex Colorings of a Graph, Les Cahiers du GERAD G-2013-82
M. E. Hoffman, Updown categories: Generating functions and universal covers, arXiv preprint arXiv:1207.1705 [math.CO], 2012. - From N. J. A. Sloane, Dec 22 2012
A. Horzela, P. Blasiak, G. E. H. Duchamp, K. A. Penson and A. I. Solomon, A product formula and combinatorial field theory, arXiv:quant-ph/0409152, 2004.
W. Hürlimann, Generalizing Benford's law using power laws: application to integer sequences. International Journal of Mathematics and Mathematical Sciences, Article ID 970284 (2009).
Greg Hurst, Andrew Schultz, An elementary (number theory) proof of Touchard's congruence, arXiv:0906.0696 [math.CO], (2009)
INRIA Algorithms Project, Encyclopedia of Combinatorial Structures 15, Encyclopedia of Combinatorial Structures 65, Encyclopedia of Combinatorial Structures 73, Encyclopedia of Combinatorial Structures 291
Giovanni Cerulli Irelli, Xin Fang, Evgeny Feigin, Ghislain Fourier, Markus Reineke, Linear degenerations of flag varieties: partial flags, defining equations, and group actions, arXiv:1901.11020 [math.AG], 2019.
R. Jakimczuk, Integer Sequences, Functions of Slow Increase, and the Bell Numbers, J. Int. Seq. 14 (2011) #11.5.8
M. Janjic, Determinants and Recurrence Sequences, Journal of Integer Sequences, 2012, Article 12.3.5. - N. J. A. Sloane, Sep 16 2012
F. Johansson, Computing Bell numbers, Aug 06 2015
J. Katriel, On a generalized recurrence for Bell Numbers, JIS 11 (2008) 08.3.8
A. Kerber, A matrix of combinatorial numbers related to the symmetric groups<, Discrete Math., 21 (1978), 319-321. [Annotated scanned copy]
M. Klazar, Counting even and odd partitions, Amer. Math. Monthly, 110 (No. 6, 2003), 527-532.
M. Klazar, Bell numbers, their relatives and algebraic differential equations, J. Combin. Theory, A 102 (2003), 63-87.
A. Knutson, Siteswap FAQ, Section 5, Working backwards, defines the term "orbit" in siteswap notation.
Nate Kube and Frank Ruskey, Sequences That Satisfy a(n-a(n))=0, Journal of Integer Sequences, Vol. 8 (2005), Article 05.5.5.
Kazuhiro Kunii, Genji-koh no zu [Japanese page illustrating a(5) = 52]
Jacques Labelle, Applications diverses de la théorie combinatoire des espèces de structures, Ann. Sci. Math. Québec, 7.1 (1983): 59-94.
G. Labelle et al., Stirling numbers interpolation using permutations with forbidden subsequences, Discrete Math. 246 (2002), 177-195.
W. Lang, On generalizations of Stirling number triangles, J. Integer Seqs., Vol. 3 (2000), #00.2.4.
J. W. Layman, The Hankel Transform and Some of its Properties, J. Integer Sequences, 4 (2001), #01.1.5.
Jack Levine, A binomial identity related to rhyming sequences, Mathematics Magazine, 32 (1958): 71-74.
Zhicong Lin, Sherry H. F. Yan, Vincular patterns in inversion sequences, Applied Mathematics and Computation (2020), Vol. 364, 124672.
Peter Luschny, Set partitions and Bell numbers
T. Mansour, A. Munagi, M. Shattuck, Recurrence Relations and Two-Dimensional Set Partitions , J. Int. Seq. 14 (2011) # 11.4.1
T. Mansour, M. Shattuck, Counting Peaks and Valleys in a Partition of a Set , J. Int. Seq. 13 (2010), 10.6.8.
Toufik Mansour and Mark Shattuck, A recurrence related to the Bell numbers, INTEGERS 11 (2011), #A67.
Toufik Mansour, Matthias Schork and Mark Shattuck, The Generalized Stirling and Bell Numbers Revisited, Journal of Integer Sequences, Vol. 15 (2012), #12.8.3.
R. J. Marsh and P. P. Martin, Pascal arrays: counting Catalan sets, arXiv:math/0612572 [math.CO], 2006.
Richard J. Mathar, 2-regular Digraphs of the Lovelock Lagrangian, arXiv:1903.12477 [math.GM], 2019.
MathOverflow, Ordinary Generating Function for Bell Numbers
Victor Meally, Comparison of several sequences given in Motzkin's paper "Sorting numbers for cylinders...", letter to N. J. A. Sloane, N. D.
N. S. Mendelsohn, Number of equivalence relations for n elements, Problem 4340, Amer. Math. Monthly 58 (1951), 46-48.
Romeo Mestrovic, Variations of Kurepa's left factorial hypothesis, arXiv preprint arXiv:1312.7037 [math.NT], 2013-2014.
Istvan Mezo, The Dual of Spivey's Bell Number Formula, Journal of Integer Sequences, Vol. 15 (2012), #12.2.4.
I. Mezo and A. Baricz, On the generalization of the Lambert W function with applications in theoretical physics, arXiv preprint arXiv:1408.3999 [math.CA], 2014-2015.
M. Mihoubi, H. Belbachir, Linear Recurrences for r-Bell Polynomials, J. Int. Seq. 17 (2014) # 14.10.6.
Janusz Milek, Quantum Implementation of Risk Analysis-relevant Copulas, arXiv:2002.07389 [stat.ME], 2020.
N. Moreira and R. Reis, On the Density of Languages Representing Finite Set Partitions, Journal of Integer Sequences, Vol. 8 (2005), Article 05.2.8.
Leo Moser and Max Wyman, An asymptotic formula for the Bell numbers, Trans. Royal Soc. Canada, 49 (1955), 49-53. [Annotated scanned copy]
T. S. Motzkin, Sorting numbers for cylinders and other classification numbers, in Combinatorics, Proc. Symp. Pure Math. 19, AMS, 1971, pp. 167-176. [Annotated, scanned copy]
A. O. Munagi, k-Complementing Subsets of Nonnegative Integers, International Journal of Mathematics and Mathematical Sciences, 2005:2 (2005), 215-224.
Emanuele Munarini q-Derangement Identities, J. Int. Seq., Vol. 23 (2020), Article 20.3.8.
Norihiro Nakashima, Shuhei Tsujie, Enumeration of Flats of the Extended Catalan and Shi Arrangements with Species, arXiv:1904.09748 [math.CO], 2019.
Pierpaolo Natalini, Paolo Emilio Ricci, New Bell-Sheffer Polynomial Sets, Axioms 2018, 7(4), 71.
A. M. Odlyzko, Asymptotic enumeration methods, pp. 1063-1229 of R. L. Graham et al., eds., Handbook of Combinatorics, 1995; see Examples 5.4 and 12.2. (pdf, ps)
OEIS Wiki, Sorting numbers
Igor Pak, Complexity problems in enumerative combinatorics, arXiv:1803.06636 [math.CO], 2018.
K. A. Penson, P. Blasiak, G. Duchamp, A. Horzela and A. I. Solomon, Hierarchical Dobinski-type relations via substitution and the moment problem, arXiv:quant-ph/0312202, 2003.
K. A. Penson, P. Blasiak, A. Horzela, G. H. E. Duchamp and A. I. Solomon, Laguerre-type derivatives: Dobinski relations and combinatorial identities, J. Math. Phys. vol. 50, 083512 (2009)
K. A. Penson and J.-M. Sixdeniers, Integral Representations of Catalan and Related Numbers, J. Integer Sequences, 4 (2001), #01.2.5.
G. Pfeiffer, Counting Transitive Relations, Journal of Integer Sequences, Vol. 7 (2004), Article 04.3.2.
Simon Plouffe, Table of n, a(n) for n = 0..3015
Simon Plouffe, Bell numbers (first 1000 terms)
T. Prellberg, On the asymptotic analysis of a class of linear recurrences (slides).
R. A. Proctor, Let's Expand Rota's Twelvefold Way for Counting Partitions!, arXiv:math/0606404 [math.CO], 2006-2007.
Feng Qi, An Explicit Formula for Bell Numbers in Terms of Stirling Numbers and Hypergeometric Functions, arXiv:1402.2361 [math.CO], 2014.
Feng Qi, On sum of the Lah numbers and zeros of the Kummer confluent hypergeometric function, 2015.
Feng Qi, Some inequalities for the Bell numbers, Proc. Indian Acad. Sci. Math. Sci. 127:4 (2017), pp. 551-564.
Jocelyn Quaintance and Harris Kwong, A combinatorial interpretation of the Catalan and Bell number difference tables, Integers, 13 (2013), #A29.
C. Radoux, Déterminants de Hankel et théorème de Sylvester, Séminaire Lotharingien de Combinatoire, B28b (1992), 9 pp.
S. Ramanujan, Notebook entry
M. Rayburn, On the Borel fields of a finite set, Proc. Amer. Math. Soc., 19 (1968), 885-889.
M. Rayburn, On the Borel fields of a finite set, Proc. Amer. Math.. Soc., 19 (1968), 885-889. [Annotated scanned copy]
M. Rayburn and N. J. A. Sloane, Correspondence, 1974
C. Reid, The alternative life of E. T. Bell, Amer. Math. Monthly, 108 (No. 5, 2001), 393-402.
H. P. Robinson, Letter to N. J. A. Sloane, Jul 12 1971
Ivo Rosenberg; The number of maximal closed classes in the set of functions over a finite domain, J. Combinatorial Theory Ser. A 14 (1973), 1-7.
N. A. Rosenberg, Cover image of the American Journal of Human Genetics, Feb 2011.
A. Ross, PlanetMath.org, Bell number
G.-C. Rota, The number of partitions of a set, Amer. Math. Monthly 71 1964 498-504.
Eric Rowland, Bell numbers modulo 8, in Combinatorics and Algorithmics of Strings, 2014, page 42.
T. Sillke, Bell numbers
A. I. Solomon, P. Blasiak, G. Duchamp, A. Horzela and K. A. Penson, Combinatorial physics, normal order and model Feynman graphs, arXiv:quant-ph/0310174, 2003.
A. I. Solomon, P. Blasiak, G. Duchamp, A. Horzela and K. A. Penson, Partition functions and graphs: A combinatorial approach, arXiv:quant-ph/0409082, 2004.
Yüksel Soykan, İnci Okumuş, On a Generalized Tribonacci Sequence, Journal of Progressive Research in Mathematics (JPRM, 2019) Vol. 14, Issue 3, 2413-2418.
Michael Z. Spivey, A generalized recurrence for Bell numbers, J. Integer Sequences, Vol. 11 (2008), Article 08.2.5.
M. Z. Spivey, On Solutions to a General Combinatorial Recurrence, J. Int. Seq. 14 (2011) # 11.9.7.
Z.-W. Sun, Conjectures involving arithmetical sequences, Number Theory: Arithmetic in Shangrila (eds., S. Kanemitsu, H.-Z. Li and J.-Y. Liu), Proc. the 6th China-Japan Sem. Number Theory (Shanghai, August 15-17, 2011), World Sci., Singapore, 2013, pp. 244-258. - From N. J. A. Sloane, Dec 28 2012
Karl Svozil, Faithful orthogonal representations of graphs from partition logics, arXiv:1810.10423 [quant-ph], 2018.
Szilárd Szalay, The classification of multipartite quantum correlation, arXiv:1806.04392 [quant-ph], 2018.
Paul Tarau, A Hiking Trip Through the Orders of Magnitude: Deriving Efficient Generators for Closed Simply-Typed Lambda Terms and Normal Forms, arXiv preprint arXiv:1608.03912 [cs.PL], 2016.
E. A. Thompson, Gene identities and multiple relationships, Biometrics 30 (1974), 667-680.
Michael Torpey, Semigroup congruences: computational techniques and theoretical applications, Ph.D. Thesis, University of St. Andrews (Scotland, 2019).
J. Touchard, Nombres exponentiels et nombres de Bernoulli, Canad. J. Math., 8 (1956), 305-320.
C. G. Wagner, Letter to N. J. A. Sloane, Sep 30 1974
D. P. Walsh, Counting forests with Stirling and Bell numbers
Yi Wang and Bao-Xuan Zhu, Proofs of some conjectures on monotonicity of number-theoretic and combinatorial sequences, arXiv preprint arXiv:1303.5595 [math.CO], 2013.
F. V. Weinstein, Notes on Fibonacci partitions, arXiv:math/0307150 [math.NT], 2003-2015 (see page 16).
Eric Weisstein's World of Mathematics, Bell Number
Eric Weisstein's World of Mathematics, Bell Triangle
Eric Weisstein's World of Mathematics, Binomial Transform
Eric Weisstein's World of Mathematics, Independent Vertex Set
Eric Weisstein's World of Mathematics, Stirling Transform
Eric Weisstein's World of Mathematics, Subfactorial
Eric Weisstein's World of Mathematics, Vertex Cover
H. S. Wilf, Generatingfunctionology, 2nd edn., Academic Press, NY, 1994, pp. 21ff.
Roman Witula, Damian Slota and Edyta Hetmaniok, Bridges between different known integer sequences, Annales Mathematicae et Informaticae, 41 (2013) pp. 255-263.
The Wolfram Functions Site, Generalized Incomplete Gamma Function.
Dekai Wu, K. Addanki and M. Saers, Modeling Hip Hop Challenge Response Lyrics as Machine Translation, in Sima'an, K., Forcada, M.L., Grasmick, D., Depraetere, H., Way, A. (eds.) Proceedings of the XIV Machine Translation Summit (Nice, September 2-6, 2013), p. 109-116.
D. Wuilquin, Letters to N. J. A. Sloane, August 1984
Chunyan Yan, Zhicong Lin, Inversion sequences avoiding pairs of patterns, arXiv:1912.03674 [math.CO], 2019.
Winston Yang, Bell numbers and k-trees, Disc. Math. 156 (1996) 247-252. MR1405023 (97c:05004)
Karen Yeats, A study on prefixes of c_2 invariants, arXiv:1805.11735 [math.CO], 2018.
Alexander Yong, The Joseph Greenberg problem: combinatorics and comparative linguistics, arXiv preprint arXiv:1309.5883 [math.CO], 2013.
Abdelmoumène Zekiri, Farid Bencherif, Rachid Boumahdi, Generalization of an Identity of Apostol, J. Int. Seq., Vol. 21 (2018), Article 18.5.1.
Index entries for "core" sequences
Index entries for sequences related to juggling
Index entries for sequences related to partitions
Index entries for sequences related to rooted trees
Index entries for sequences related to Benford's law
E.g.f.: exp(exp(x) - 1).
Recurrence: a(n+1) = Sum_{k=0..n} a(k)*binomial(n, k).
a(n) = Sum_{k=0..n} Stirling2(n, k).
a(n) = Sum_{j=0..n-1} (1/(n-1)!)*A000166(j)*binomial(n-1, j)*(n-j)^(n-1). - André F. Labossière, Dec 01 2004
G.f.: (Sum_{k=0..infinity} 1/((1-k*x)*k!))/exp(1) = hypergeom([ -1/x], [(x-1)/x], 1)/exp(1) = ((1-2*x)+LaguerreL(1/x, (x-1)/x, 1)+x*LaguerreL(1/x, (2*x-1)/x, 1))*Pi/(x^2*sin(Pi*(2*x-1)/x)), where LaguerreL(mu, nu, z) =( GAMMA(mu+nu+1)/GAMMA(mu+1)/GAMMA(nu+1))* hypergeom([ -mu], [nu+1], z) is the Laguerre function, the analytic extension of the Laguerre polynomials, for mu not equal to a nonnegative integer. This generating function has an infinite number of poles accumulating in the neighborhood of x=0.- Karol A. Penson, Mar 25 2002
a(n) = exp(-1)*Sum_{k >= 0} k^n/k! [Dobinski]. - Benoit Cloitre, May 19 2002
a(n) is asymptotic to n!*(2 Pi r^2 exp(r))^(-1/2) exp(exp(r)-1) / r^n, where r is the positive root of r exp(r) = n. See, e.g., the Odlyzko reference.
a(n) is asymptotic to b^n*exp(b-n-1/2)*sqrt(b/(b+n)) where b satisfies b*log(b) = n - 1/2 (see Graham, Knuth and Patashnik, Concrete Mathematics, 2nd ed., p. 493). - Benoit Cloitre, Oct 23 2002, corrected by Vaclav Kotesovec, Jan 06 2013
Lovasz (Combinatorial Problems and Exercises, North-Holland, 1993, Section 1.14, Problem 9) gives another asymptotic formula, quoted by Mezo and Baricz. - N. J. A. Sloane, Mar 26 2015
G.f.: Sum_{k>=0} x^k/(Product_{j=1..k} 1-jx) (see Klazar for a proof). - Ralf Stephan, Apr 18 2004
a(n+1) = exp(-1)*Sum_{k>=0} (k+1)^(n)/k!. - Gerald McGarvey, Jun 03 2004
For n>0, a(n) = Aitken(n-1, n-1) [i.e., a(n-1, n-1) of Aitken's array (A011971)]. - Gerald McGarvey, Jun 26 2004
a(n) = Sum_{k=1..n} (1/k!)*(Sum_{i=1..k} (-1)^(k-i)*binomial(k, i)*i^n+0^n). - Paul Barry, Apr 18 2005
a(n) = A032347(n) + A040027(n+1). - Jon Perry, Apr 26 2005
a(n) = 2*n!/(Pi*e)*Im( integral_{0}^{Pi} e^(e^(e^(ix))) sin(nx) dx ) where Im denotes imaginary part [Cesaro]. - David Callan, Sep 03 2005
O.g.f.: 1/(1-x-x^2/(1-2*x-2*x^2/(1-3*x-3*x^2/(.../(1-n*x-n*x^2/(...)))))) (continued fraction due to Ph. Flajolet). - Paul D. Hanna, Jan 17 2006
From Karol A. Penson, Jan 14 2007: (Start)
Representation of Bell numbers B(n), n=1,2..., as special values of hypergeometric function of type (n-1)F(n-1), in Maple notation: B(n)=exp(-1)*hypergeom([2,2,...,2],[1,1,...,1],1), n=1,2..., i.e. having n-1 parameters all equal to 2 in the numerator, having n-1 parameters all equal to 1 in the denominator and the value of the argument equal to 1.
Examples:
B(1)=exp(-1)*hypergeom([],[],1)=1
B(2)=exp(-1)*hypergeom([2],[1],1)=2
B(3)=exp(-1)*hypergeom([2,2],[1,1],1)=5
B(4)=exp(-1)*hypergeom([2,2,2],[1,1,1],1)=15
B(5)=exp(-1)*hypergeom([2,2,2,2],[1,1,1,1],1)=52
(Warning: this formula is correct but its application by a computer may not yield exact results, especially with a large number of parameters.)
a(n+1) = 1 + Sum_{k=0..n-1} Sum_{i=0..k} binomial(k,i))*(2^(k-i))*a(i). - Yalcin Aktar, Feb 27 2007
a(n) = [1,0,0,...,0] T^(n-1) [1,1,1,...,1]', where T is the n X n matrix with main diagonal {1,2,3,...,n}, 1's on the diagonal immediately above and 0's elsewhere. [Meier]
a(n) = ((2*n!)/(Pi * e)) * ImaginaryPart(Integral[from 0 to Pi](e^e^e^(i*theta))*sin(n*theta) dtheta). - Jonathan Vos Post, Aug 27 2007
From Tom Copeland, Oct 10 2007: (Start)
a(n) = T(n,1) = Sum_{j=0...n} S2(n,j) = Sum_{j=0...n} E(n,j) * Lag(n,-1,j-n) = Sum_{j=0...n} [ E(n,j)/n! ] * [ n!*Lag(n,-1, j-n) ] where T. Note that E(n,j)/n! = E(n,j) / (Sum_{k=0...n} E(n,k)).
The Eulerian numbers count the permutation ascents and the expression [n!*Lag(n,-1, j-n)] is A086885 with a simple combinatorial interpretation in terms of seating arrangements, giving a combinatorial interpretation to n!*a(n) = Sum_{j=0..n} E(n,j) * [n!*Lag(n,-1, j-n)].
Define f_1(x), f_2(x), ... such that f_1(x)=e^x and for n=2,3,... f_{n+1}(x) = (d/dx)(x*f_n(x)). Then for Bell numbers B_n we have B_n=1/e*f_n(1). - Milan Janjic, May 30 2008
a(n) = (n-1)! Sum_{k=1..n} a(n-k)/((n-k)! (k-1)!) where a(0)=1. - Thomas Wieder, Sep 09 2008
a(n+k) = Sum_{m=0..n} Stirling2(n,m) Sum_{r=0..k} binomial(k,r) m^r a(k-r). - David Pasino (davepasino(AT)yahoo.com), Jan 25 2009. (Umbrally, this may be written as a(n+k) = Sum_{m=0..n} Stirling2(n,m) (a+m)^k. - N. J. A. Sloane, Feb 07 2009.)
From Thomas Wieder, Feb 25 2009: (Start)
a(n) = Sum_{k_1=0..n+1} Sum_{k_2=0..n}...Sum_{k_i=0..n-i}...Sum_{k_n=0..1}
delta(k_1,k_2,...,k_i,...,k_n)
where delta(k_1,k_2,...,k_i,...,k_n) = 0 if any k_i > k_(i+1) and k_(i+1) <> 0
and delta(k_1,k_2,...,k_i,...,k_n) = 1 otherwise.
Let A be the upper Hessenberg matrix of order n defined by: A[i,i-1]=-1, A[i,j]:=binomial(j-1,i-1), (i<=j), and A[i,j]=0 otherwise. Then, for n>=1, a(n)=det(A). - Milan Janjic, Jul 08 2010
G.f. satisfies A(x)=(x/(1-x))*A(x/(1-x)) + 1. - Vladimir Kruchinin, Nov 28 2011
G.f.: 1 / (1 - x / (1 - 1*x / (1 - x / (1 - 2*x / (1 - x / (1 - 3*x / ... )))))). - Michael Somos, May 12 2012
a(n+1) = Sum_{m=0..n} Stirling2(n, m)*(m+1), n >= 0. Compare with the third formula for a(n) above. Here Stirling2 = A048993. - Wolfdieter Lang, Feb 03 2015
G.f.: (-1)^(1/x)*((-1/x)!/e + (!(-1-1/x))/x) where z! and !z are factorial and subfactorial generalized to complex arguments. - Vladimir Reshetnikov, Apr 24 2013
The following formulas were proposed during the period Dec 2011 - Oct 2013 by Sergei N. Gladkovskii. (Start)
E.g.f.: exp(exp(x)-1) = 1+x/(G(0)-x); G(k) = (k+1)*Bell(k)+x*Bell(k+1)-x*(k+1)*Bell(k)*Bell(k+2)/G(k+1) (continued fraction).
G.f.: W(x)=(1-1/(G(0)+1))/exp(1) ; G(k)= x*k^2 + (3*x-1)*k - 2 + x - (k+1)*(x*k+x-1)^2/G(k+1); (continued fraction Euler's kind, 1-step).
G.f.: W(x)= (1 + G(0)/(x^2-3*x+2))/exp(1); G(k)= 1- (x*k+x-1)/( ((k+1)!)- (((k+1)!)^2)*(1-x-k*x+(k+1)!)/( ((k+1)!)*(1-x-k*x+(k+1)!) - (x*k+2*x-1)*(1-2*x-k*x+(k+2)!)/G(k+1))); (continued fraction).
G.f.: A(x)= 1/(1 - x/(1-x/(1 + x/G(0)))); G(k)= x - 1 + x*k + x*(x-1+x*k)/G(k+1); (continued fraction, 1-step).
G.f.: -1/U(0) where U(k)= x*k - 1 + x - x^2*(k+1)/U(k+1); (continued fraction, 1-step).
G.f.: 1 + x/U(0) where U(k) = 1 - x*(k+2) - x^2*(k+1)/U(k+1); (continued fraction, 1-step).
G.f.: 1 + 1/(U(0) - x) where U(k) = 1 + x - x*(k+1)/(1 - x/U(k+1)); (continued fraction, 2-step).
G.f.: 1 + x/(U(0)-x) where U(k) = 1 - x*(k+1)/(1 - x/U(k+1)); (continued fraction, 2-step).
G.f.: 1/G(0) where G(k) = 1 - x/(1 - x*(2*k+1)/(1 - x/(1 - x*(2*k+2)/G(k+1) ))); (continued fraction).
G.f.: G(0)/(1+x) where G(k) = 1 - 2*x*(k+1)/((2*k+1)*(2*x*k-1) - x*(2*k+1)*(2*k+3)*(2*x*k-1)/(x*(2*k+3) - 2*(k+1)*(2*x*k+x-1)/G(k+1) )); (continued fraction).
G.f.: -(1+2*x) * Sum(k >= 0, x^(2*k)*(4*x*k^2-2*k-2*x-1) / ((2*k+1) * (2*x*k-1)) * A(k) / B(k) where A(k) = prod(p=0...k, (2*p+1)), B(k) = prod(p=0..k, (2*p-1) * (2*x*p-x-1) * (2*x*p-2*x-1)).
G.f.: (G(0) - 1)/(x-1) where G(k) = 1 - 1/(1-k*x)/(1-x/(x-1/G(k+1) )); (continued fraction).
G.f.: 1 + x*(S-1) where S=sum(k>=0, ( 1 + (1-x)/(1-x-x*k) )*(x/(1-x))^k/prod(i=0..k-1, (1-x-x*i)/(1-x) ) ).
G.f.: (G(0) - 2)/(2*x-1) where G(k) = 2 - 1/(1-k*x)/(1-x/(x-1/G(k+1) )); (continued fraction).
G.f.: -G(0) where G(k) = 1 - (x*k - 2)/(x*k - 1 - x*(x*k - 1)/(x + (x*k - 2)/G(k+1) )); (continued fraction).
G.f.: G(0) where G(k) = 2 - (2*x*k - 1)/(x*k - 1 - x*(x*k - 1)/(x + (2*x*k - 1)/G(k+1) )); (continued fraction).
G.f.: (G(0) - 1)/(1+x) where G(k) = 1 + 1/(1-k*x)/(1-x/(x+1/G(k+1) )); (continued fraction).
G.f.: 1/(x*(1-x)*G(0)) - 1/x where G(k) = 1 - x/(x - 1/(1 + 1/(x*k-1)/G(k+1) )); (continued fraction).
G.f.: 1 + x/( Q(0) - x ) where Q(k) = 1 + x/( x*k - 1 )/Q(k+1); (continued fraction).
G.f.: 1+x/Q(0), where Q(k)= 1 - x - x/(1 - x*(k+1)/Q(k+1)); (continued fraction).
G.f.: 1/(1-x*Q(0)), where Q(k)= 1 + x/(1 - x + x*(k+1)/(x - 1/Q(k+1))); (continued fraction).
G.f.: Q(0)/(1-x), where Q(k) = 1 - x^2*(k+1)/( x^2*(k+1) - (1-x*(k+1))*(1-x*(k+2))/Q(k+1) ); (continued fraction).
a(n) ~ exp(exp(W(n))-n-1)*n^n/W(n)^(n+1/2), where W(x) is the Lambert W-function. - Vladimir Reshetnikov, Nov 01 2015
a(n) ~ n^n * exp(n/LambertW(n)-1-n) / (sqrt(1+LambertW(n)) * LambertW(n)^n). - Vaclav Kotesovec, Nov 13 2015
a(n) are the coefficients in the asymptotic expansion of -exp(-1)*(-1)^x*x*Gamma(-x,0,-1), where Gamma(a,z0,z1) is the generalized incomplete Gamma function. - Vladimir Reshetnikov, Nov 12 2015
a(n) = 1 + floor(exp(-1) * Sum_{k=1..2*n} k^n/k!). - Vladimir Reshetnikov, Nov 13 2015
a(p^m) ≡ m+1 (mod p) when p is prime and m >= 1 (see Lemma 3.1 in the Hurst/Schultz reference). - Seiichi Manyama, Jun 01 2016
Sum_{n>=0} (-1)^n*a(n)/n! = exp(exp(-1)-1). - Ilya Gutkovskiy, Jun 01 2016
G.f. = 1 + x + 2*x^2 + 5*x^3 + 15*x^4 + 52*x^5 + 203*x^6 + 877*x^7 + 4140*x^8 + ...
From Neven Juric, Oct 19 2009: (Start)
The a(4)=15 rhyme schemes for n=4 are
aaaa, aaab, aaba, aabb, aabc, abaa, abab, abac, abba, abbb, abbc, abca, abcb, abcc, abcd
The a(5)=52 rhyme schemes for n=5 are
aaaaa, aaaab, aaaba, aaabb, aaabc, aabaa, aabab, aabac, aabba, aabbb, aabbc, aabca, aabcb, aabcc, aabcd, abaaa, abaab, abaac, ababa, ababb, ababc, abaca, abacb, abacc, abacd, abbaa, abbab, abbac, abbba, abbbb, abbbc, abbca, abbcb, abbcc, abbcd, abcaa, abcab, abcac, abcad, abcba, abcbb, abcbc, abcbd, abcca, abccb, abccc, abccd, abcda, abcdb, abcdc, abcdd, abcde
From Joerg Arndt, Apr 30 2011: (Start)
Restricted growth strings (RGS):
For n=0 there is one empty string;
for n=1 there is one string [0];
for n=2 there are 2 strings [00], [01];
for n=3 there are a(3)=5 strings [000], [001], [010], [011], and [012];
for n=4 there are a(4)=15 strings
1: [0000], 2: [0001], 3: [0010], 4: [0011], 5: [0012], 6: [0100], 7: [0101], 8: [0102], 9: [0110], 10: [0111], 11: [0112], 12: [0120], 13: [0121], 14: [0122], 15: [0123].
These are one-to-one with the rhyme schemes (identify a=0, b=1, c=2, etc.).
Consider the set S = {1, 2, 3, 4}. The a(4) = 1 + 3 + 6 + 4 + 1 = 15 partitions are: P1 = {{1}, {2}, {3}, {4}}; P21 .. P23 = {{a,4}, S\{a,4}} with a = 1, 2, 3; P24 .. P29 = {{a}, {b}, S\{a,b}} with 1 <= a < b <= 4; P31 .. P34 = {S\{a}, {a}} with a = 1 .. 4; P4 = {S}. See the Bottomley link for a graphical illustration. - M. F. Hasler, Oct 26 2017
A000110 := proc(n) option remember; if n <= 1 then 1 else add( binomial(n-1, i)*A000110(n-1-i), i=0..n-1); fi; end; # version 1
A := series(exp(exp(x)-1), x, 60); A000110 := n->n!*coeff(A, x, n); # version 2
with(combinat); A000110:=n->sum(stirling2(n, k), k=0..n): seq(A000110(n), n=1..22); # version 3, from Zerinvary Lajos, Jun 28 2007
A000110 := n -> combinat[bell](n): # version 4, from Peter Luschny, Mar 30 2011
a:=array(0..200); a[0]:=1; a[1]:=1; lprint(0, 1); lprint(1, 1); M:=200; for n from 2 to M do a[n]:=add(binomial(n-1, i)*a[n-1-i], i=0..n-1); lprint(n, a[n]); od:
with(combstruct); spec := [S, {S=Set(U, card >= 1), U=Set(Z, card >= 1)}, labeled]; [seq(combstruct[count](spec, size=n), n=0..40)]; G:={P=Set(Set(Atom, card>0))}: combstruct[gfsolve](G, unlabeled, x): seq(combstruct[count]([P, G, labeled], size=i), i=0..22); # Zerinvary Lajos, Dec 16 2007
A000110 := proc(n::integer) local k, Resultat; if n = 0 then Resultat:=1: return Resultat; end if; Resultat:=0: for k from 1 to n do Resultat:=Resultat+A000110(n-k)/((n-k)!*(k-1)!): od; Resultat:=Resultat*(n-1)!; return Resultat; end proc; # Thomas Wieder, Sep 09 2008
f[n_] := Sum[ StirlingS2[n, k], {k, 0, n}]; Table[ f[n], {n, 0, 40}] (* Robert G. Wilson v *)
Table[BellB[n], {n, 0, 40}] (* Harvey P. Dale, Mar 01 2011 *)
B[0] = 1; B[n_] := 1/E Sum[k^(n - 1)/(k-1)!, {k, 1, Infinity}] (* Dimitri Papadopoulos, Mar 10 2015, edited by M. F. Hasler, Nov 30 2018 *)
BellB[Range[0, 40]] (* Eric W. Weisstein, Aug 10 2017 *)
b[1] = 1; k = 1; Flatten[{1, Table[Do[j = k; k += b[m]; b[m] = j; , {m, 1, n-1}]; b[n] = k, {n, 1, 40}]}] (* Vaclav Kotesovec, Sep 07 2019 *)
(PARI) {a(n) = my(m); if( n<0, 0, m = contfracpnqn( matrix(2, n\2, i, k, if( i==1, -k*x^2, 1 - (k+1)*x))); polcoeff(1 / (1 - x + m[2, 1] / m[1, 1]) + x * O(x^n), n))}; /* Michael Somos */
(PARI) {a(n) = polcoeff( sum( k=0, n, prod( i=1, k, x / (1 - i*x)), x^n * O(x)), n)}; /* Michael Somos, Aug 22 2004 */
(PARI) a(n)=round(exp(-1)*suminf(k=0, 1.0*k^n/k!)) \\ Gottfried Helms, Mar 30 2007 - WARNING! For illustration only: Gives silently a wrong result for n = 42 and an error for n > 42, with standard precision of 38 digits. - M. F. Hasler, Nov 30 2018
(PARI) {a(n) = if( n<0, 0, n! * polcoeff( exp( exp( x + x * O(x^n)) - 1), n))}; /* Michael Somos, Jun 28 2009 */
(PARI) Vec(serlaplace(exp(exp('x+O('x^66))-1))) \\ Joerg Arndt, May 26 2012
(PARI) A000110(n)=sum(k=0, n, stirling(n, k, 2)) \\ M. F. Hasler, Nov 30 2018
(Sage) from sage.combinat.expnums import expnums2; expnums2(30, 1) # Zerinvary Lajos, Jun 26 2008
(Sage) [bell_number(n) for n in (0..40)] # G. C. Greubel, Jun 13 2019
(Python) # The objective of this implementation is efficiency.
# m -> [a(0), a(1), ..., a(m)] for m > 0.
def A000110_list(m):
A = [0 for i in range(m)]
A[0] = 1
R = [1, 1]
for n in range(1, m):
A[n] = A[0]
for k in range(n, 0, -1):
A[k-1] += A[k]
R.append(A[0])
return R
A000110_list(40) # Peter Luschny, Jan 18 2011
(Python)
# requires python 3.2 or higher. Otherwise use def'n of accumulate in python docs.
from itertools import accumulate
A000110, blist, b = [1, 1], [1], 1
for _ in range(20):
blist = list(accumulate([b]+blist))
b = blist[-1]
A000110.append(b) # Chai Wah Wu, Sep 02 2014, updated Chai Wah Wu, Sep 19 2014
(MAGMA) [Bell(n): n in [0..40]]; // Vincenzo Librandi, Feb 07 2011
(Maxima) makelist(belln(n), n, 0, 40); /* Emanuele Munarini, Jul 04 2011 */
(Haskell)
type N = Integer
n_partitioned_k :: N -> N -> N
1 `n_partitioned_k` 1 = 1
1 `n_partitioned_k` _ = 0
n `n_partitioned_k` k = k * (pred n `n_partitioned_k` k) + (pred n `n_partitioned_k` pred k)
n_partitioned :: N -> N
n_partitioned 0 = 1
n_partitioned n = sum $ map (\k -> n `n_partitioned_k` k) $ [1 .. n]
-- Felix Denis, Oct 16 2012
a000110 = sum . a048993_row -- Reinhard Zumkeller, Jun 30 2013
Equals row sums of triangle A008277 (Stirling subset numbers).
Partial sums give A005001. a(n) = A123158(n, 0).
See A061462 for powers of 2 dividing a(n).
Rightmost diagonal of triangle A121207. A144293 gives largest prime factor.
Cf. A000045, A000108, A000166, A000204, A000255, A000311, A000296, A003422, A024716, A029761, A049020, A058692, A060719, A084423, A087650, A094262, A103293, A165194, A165196, A173110, A227840.
Equals row sums of triangle A152432.
Row sums, right and left borders of A212431.
A diagonal of A011971. - N. J. A. Sloane, Jul 31 2012
Cf. A054767 (period of this sequence mod n).
Row sums are A048993. - Wolfdieter Lang, Oct 16 2014
Sequences in the Erné (1974) paper: A000110, A000798, A001035, A001927, A001929, A006056, A006057, A006058, A006059.
Bell polynomials B(n,x): A001861 (x=2), A027710 (x=3), A078944 (x=4), A144180 (x=5), A144223 (x=6), A144263 (x=7), A221159 (x=8).
Cf. A243991 (sum of reciprocals), A085686 (inv. Euler Transf.).
Sequence in context: A203645 A203646 A292935 * A336022 A303924 A336021
Adjacent sequences: A000107 A000108 A000109 * A000111 A000112 A000113
core,nonn,easy,nice
N. J. A. Sloane
Edited by M. F. Hasler, Nov 30 2018
approved | http://oeis.org/A000110 | CC-MAIN-2020-40 | refinedweb | 11,367 | 68.36 |
I'm trying to create a bit of code that can read information from a file, and based off this information, create a number of objects. However... I want every object created to be called U001 where 001 increments each time an object is created.
How do I go about this?
for reference, here's the code I already have:
All help is appreciated! :)All help is appreciated! :)Code:
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
class Unit
{
public:
int HP;
int Xcord;
int Ycord;
char ident[8];
char house[8];
char weap1[8];
char weap2[8];
char dispnam[32];
};
int main ()
{
cout << "Available sides:\n* ASIDE -- Side #A\n* BSIDE -- Side #B\n* CSIDE -- Side #C\n\n";
// Defining Variables used, creating an fstream object called Fileop
char LineT[256];
int LineN; LineN = 0;
fstream Fileop;
// Opening file Units.dat for input & Output
Fileop.open ("Units.dat",ios::in);
// Read and display file, 1 line at a time
while (!Fileop.eof ( ) ) {
Fileop.getline (LineT,256);
cout << LineT; cout << "\n";
if (strncmp (LineT,"-",1) == 0) {
Unit U001;
U001.HP = 400;
};
LineN = LineN + 1;
};
// Close the opened file
Fileop.close();
cin.get();
return 0;
} | http://cboard.cprogramming.com/cplusplus-programming/84839-variable-object-names-printable-thread.html | CC-MAIN-2014-10 | refinedweb | 195 | 81.12 |
Gets the next message off a stream.
#include <stropts.h>
int getmsg (fd, ctlptr, dataptr, flags) int fd; struct strbuf *ctlptr; struct strbuf *dataptr; int *flags;
The getmsg system call retrieves from a STREAMS file the contents of a message located at the stream-head read queue, and places the contents into user-specified buffers. The message must contain either a data part, a control part, or both. The data and control parts of the message are placed into separate buffers, as described in the "Parameters" section. The semantics of each part are defined by the STREAMS module that generated the message.
Note: To use the getmsg system call you must import the /lib/pse.exp file during the compile. For example, compile the system call by entering the following command:cc -bI:/lib/pse.exp
Failure to import this file means that the getmsg system call will be unresolved during the link edit phase.
The ctlptr and dataptr parameters each point to a strbuf structure that contains the following members:
int maxlen; /* maximum buffer length */ int len; /* length of data */ char *buf; /* ptr to buffer */
In the strbuf structure, the maxlen field indicates the maximum number of bytes this buffer can hold, the len field contains the number of bytes of data or control information received, and the buf field points to a buffer in which the data or control information is to be placed.
If the ctlptr (or dataptr) parameter is null or the maxlen field is -1, the following events occur:
If the maxlen field is set to 0 and there is a zero-length control (or data) part, the following events occur:
If the maxlen field is set to 0 and there are more than 0 bytes of control (or data) information, the following events occur:
If the maxlen field in the ctlptr or dataptr parameter is less than, respectively, the control or data part of the message, the following events occur:
By default, the getmsg system call processes the first priority or nonpriority message available on the stream-head read queue. However, a user may choose to retrieve only priority messages by setting the flags parameter to RS_HIPRI. In this case, the getmsg system call processes the next message only if it is a priority message.
If the O_NDELAY flag has not been set, the getmsg system call blocks until a message of the types specified by the flags parameter (priority only or either type) is available on the stream-head read queue. If the O_DELAY flag has been set and a message of the specified types is not present on the read queue, the getmsg system call fails and sets the errno global variable to EAGAIN.
If a hangup occurs on the stream from which messages are to be retrieved, the getmsg system call continues to operate until the stream-head read queue is empty. Thereafter, it returns 0 in the len fields of both the ctlptr and dataptr parameters.
Upon successful completion, the getmsg system call returns a nonnegative value. The possible values are:
On return, the len field contains one of the following:
If information is retrieved from a priority message, the flags parameter is set to RS_HIPRI on return.
The getmsg system call fails if one or more of the following is true:
The getmsg system call can also fail if a STREAMS error message had been received at the stream head before the call to the getmsg system call. The error returned is the value contained in the STREAMS error message.
This system call is part of the STREAMS Kernel Extensions.
The poll subroutine, read subroutine, write subroutine.
The getpmsg system call, putmsg system call, putpmsg system call.
List of Streams Programming References and STREAMS Overview in AIX Version 4.3 Communications Programming Concepts. | http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/commtrf2/getmsg.htm | CC-MAIN-2022-27 | refinedweb | 634 | 55.78 |
Have you ever wanted to be able to control all the devices around you with just the the touch of a screen? Well, thanks to the ever growing Internet of Things, you’re getting closer and closer to making that dream a reality. In this tutorial, you’ll learn how to build a simulated “smart home” using a Raspberry Pi and an Apple TV. The first part of this endeavor, in true IoT style, is wiring all of the necessary hardware components, so that when you program your AppleTV you’ll be able to control real devices. In Part 1, you’ll learn how to control an LED, a waterproof temperature sensor, and an 8×8 LED matrix to simulate turning on lights, boiling water, and adjusting the thermostat. In Part 2, we’ll show you how to program an AppleTV to control all of these devices remotely, with just the touch of a finger. Using a Raspberry Pi, an AppleTV, and PubNub’s global data stream network, you’ll be able to simulate the smart home of your dreams.
The full code repository is available on GitHub.
Hardware You’ll Need
- 1 Raspberry Pi 2, Model B
- 1 USB keyboard
- 1 USB mouse
- 1 external monitor
- 1 HDMI cable
- 1 micro-SD card
- 1 micro-SD card adapter
- 1 2.5A power supply
- 1 USB WiFi dongle
- 1 breadboard
- 1 LED
- 1 330Ω resistor
- 1 4.7kΩ resistor
- 1 Waterproof DS18B20 Temperature Sensor
- 1 MAX7219 LED Dot Matrix
- Male/male jumper wires
- Male/female jumper wires
- Female/female jumper wires
Setting Up the Raspberry Pi
Before you can really get to work wiring all your hardware components, you need to set up your Raspberry Pi. If you already have Rasbpian or a comparable operating system running on your Pi, you can skip to the last step of this section, Installing and Updating Libraries. If you’re new to the wonderful world of Raspberry Pi, no worries! We’ll have you up and running in no time.
Formatting Your SD Card and Downloading NOOBS
The first step to using your Raspberry Pi is making sure that you have an operating system (OS) to work with. If you bought your Pi as part of a kit, your micro-SD card might already have this installed, so double check whether it’s blank before continuing. If you don’t have anything installed, there are a couple steps you need to complete. First, format your micro-SD card by installing SD Formatter 4.0 on your computer and plugging in your card with an adapter. After opening up the software, select your card and click Format as shown below.
Next, you’ll need to download NOOBS (New Out of Box Software) through the Raspberry Pi downloads page. Once you’ve unzipped the file, transfer all the files inside to your newly formatted micro-SD card. After the process has completed, safely remove the card and insert it into your Raspberry Pi. You’re ready to fire it up!
Booting Up Your Raspberry Pi
Plug your keyboard, mouse, monitor, WiFi dongle, and power supply into the Raspberry Pi. You should see a rainbow screen on your monitor, after which a raspberry will appear, followed by a screen to install Raspbian. Select the recommended version of Raspbian and press Install. Once you reach the screen that says Raspberry Pi Software Configuration Tool, press the right arrow twice to select <finish> , then press the Return key.
When the command line appears, if it asks for a login, type in pi and press Return. When it asks for a password, type raspberry. When the line that says pi@raspberrypi~$ appears, type in startx and press the Return key. The GUI should launch, and now you’re ready to go!
Connecting to WiFi
To connect your WiFi dongle to the internet, go to Menu > Preferences > WiFi Configuration. Press Scan to view available networks. Then, double click your network and enter the network password in the box labeled PSK. Click Add and you should connect to the network.
Getting Started with the Script
Now that your Pi is up and running, you’re ready to create your project. In order for your Smart Home to react to commands from the AppleTV app, you need to utilize PubNub’s global data stream network to send messages between the app and the Raspberry Pi. After some initial setup, it only takes a couple lines of code to make it happen.
Creating Your Project
First, you’ll need to create a folder so that you can store all your project files in one place. Open the File Manager by clicking its icon in the toolbar at the top of the screen and navigate to the folder called Pi. The go to File > Create New… > Folder to create your directory. Give it a simple name, as you’ll have to refer to it in order to run your script. Next, you need to create your main script. Go to Menu > Accessories > Text Editor to open up a blank document, then save it to your folder as smarthome.py.
Updating and Installing Libraries and Drivers
Next, you need to install the driver you’ll need to operate your 8×8 LED matrix and microcontroller. Type in the following commands to install the MAX7219 driver.
git clone
sudo python max7219/setup.py install
Then, open up File Manager again and move the contents of the newly installed folder to your Smart Home folder, so that the folder called max7219 is in the same folder as your script, and you’re good to go.
Starting Your Script
After opening up your script again, you’re ready to get started! This project requires a lot of different libraries, so before you begin make sure to import everything you need at the top of your code.
import RPi.GPIO as GPIO import os import glob import time import sys import max7219.led as led from pubnub import Pubnub
Also set the GPIO mode so that you refer to the pins by their GPIO number, just to minimize confusion.
GPIO.setmode (GPIO.BCM)
Initializing PubNub
To start using PubNub, initialize it using your publish and subscribe keys, as well as the name of the channel that you plan on using in your AppleTV app.
pubnub = Pubnub(publish_key='demo', subscribe_key='demo') channel = 'Smart_Home'
Next, you need to define callback and error functions, so that you can analyze the messages sent through PubNub. For now, the callback function is empty, but don’t worry, you’ll fill it in later. The last step is to place a subscribe call to receive the data, and you’re done! PubNub is all set up and ready to roll.
def _error(m): print(m) def _callback(m, channel): pubnub.subscribe(channels=channel, callback=_callback, error=_error)
Running Your Script
If at any point you want to run your script, to test the different parts of your code or to view your finished product, you need to open up your terminal and change the directory to the folder you created, as shown below. Then, type in the next command to run your Python script.
If you want, you can include print statements in your code so that you can log the activities of your Raspberry Pi in the terminal window, or you can use the PubNub Developer Console to monitor the channel. Either way, finding ways to visualize the output of your Raspberry Pi will be very useful when it comes to debugging.
Adding the Hardware
Now that you’ve set everything up, you’re ready to start connecting the hardware components of your project. You can do these in any order, just be sure to keep track of which GPIO pins you use for each component so as to avoid confusion later.
Controlling an LED
Let’s start with the LED. You’ll first need to put together the circuit, then edit your script so that the LED can be controlled remotely by your app via PubNub.
Putting Together the LED Circuit
The LED circuit is really, really simple, as you can tell from the picture above. All you need is an LED, a 330Ω resistor, a breadboard, and two male/male wires to put it together. Wire a GPIO pin and a Ground pin on your Raspberry Pi to the breadboard. Then, connect one end of your resistor to the Ground wire, and the other to the short leg of the LED. The long leg of the LED connects to the GPIO wire, and you’re done!
Controlling the LED through PubNub
For this project, you want to be able to turn the LED on and off using specific commands sent from the AppleTV to your PubNub channel. First, initialize the GPIO pin you used to connect the LED and specify the pinmode.
LED_PIN = 17 GPIO.setup(LED_PIN,GPIO.OUT)
Now it’s time to start filling in your PubNub callback function. You’ll want the app to send the messages in the form of a dictionary with the key “light” and a value of either “off” or “on” so that they’re easily decoded. You just have to search for the key and turn the light off or on based on its value. Easy!
def _callback(m, channel): if m.get("light") == "on": GPIO.output(LED_PIN, True) if m.get("light") == "off": GPIO.output(LED_PIN, False)
Reading a Waterproof Temperature Sensor
Next hurdle: attaching a sensor to measure the water temperature inside a kettle. Before you get started, you have to strip the ends of the DS18B20 to expose the wire underneath using either wire strippers or scissors. If you use scissors, it’s easy to accidently cut through the whole wire, so be careful!
Hooking Up the DS18B20 Sensor
To connect your sensor to the Pi, you’ll need to attach female/male wires from each of the three wires in the sensor to your breadboard – you might have to use tape to hold them together. Then, connect the row with the black wire to a Ground pin on your Pi. Attach a 4.7kΩ resistor between the rows with the red and yellow wires, then connect the end of the row with the red wire to 3.3v and the end of the row with the yellow wire to a GPIO pin.
Initializing the DS18B20 Sensor
To use this sensor in your script, you have to first alter your system folders to include the driver you need for the sensor.
os.system('modprobe w1-gpio') os.system('modprobe w1-therm') Base_dir = '/sys/bus/w1/devices/' Device_folder = glob.glob(base_dir + '28*')[0] Device_file = device_folder + '/w1_slave'
Next, to get the temperature data from the sensor, you need to access it from the device file where it’s stored. Write your script so that it opens the file and returns the lines where the data was recorded.
def read_temp_raw(): f = open(device_file, 'r') lines = f.readlines() f.close() return lines
Lastly, you need to manipulate the data so that it’s in the form you want: degrees Celsius. To do that, you just have to strip the data out of the lines, convert it to a float, and divide by 1000.
def read_temp(): lines = read_temp_raw() while lines[0].strip()[-3:] != 'YES': time.sleep(0.1) lines = read_tempt_raw() equals_pos = lines[1].find('t=') if equals_pos != -1: temp_string = lines[1][equals_pos+2:] temp = float(temp_string) / 1000.0 return temp
Finally, your data is in the right form. You’re ready to start publishing it to your AppleTV.
Controlling the Sensor through PubNub
To publish your data, you need to send it to PubNub. But because you only want to send the data when it changes, so as not to clog up your channel to much, you need to write a function that detects changes in the temperature reading. To do this, simply read the data from the sensor twice and store those values in separate variables. Check if the values are the same, and, if they aren’t, publish the new value to the channel.
def sendTemp(): c = int(read_temp()) t = int(read_temp()) if c != t: pubnub.publish(channel=channel, message = {"temp":str(t)}, error=_error)
Now, in order to constantly check if the temperature has changed, you need to continually run the sendTemp() function, which you can do by putting it in a never-ending while loop.
while True: sendTemp()
And, you’re done! You can now remotely monitor the temperature of the water in your kettle through PubNub.
Displaying Numbers on an 8×8 LED Matrix
For this project, your 8×8 LED Matrix serves as a simulation of a thermostat, and it’s your job to program it so that you can change the numbers on it by sending a command through your app, thus increasing the “temperature” of your simulated Smart Home.
Wiring and Initializing the Matrix
To connect the matrix to your Raspberry Pi, you can just directly plug each of the pins on the matrix into the pins on the Pi using five female/female wires. Or, if you want, you can use 10 male/female wires and a breadboard. Like you can see in the diagram below, VCC should go to 5V, GND to Ground, DIN to GPIO 10 (MOSI), CS to GPIO 8 (SPI CS0), and CLK to GPIO 11 (SPI CLK).
To set it up in your script, you just need to write one line of code assigning it to a variable.
device = led.matrix(cascaded=1)
Writing the Number Functions
In order to display numbers on your LED matrix, you first have to hard code those functions in your program. If you want to figure out the pixel arrangements on your own, go for it! If that doesn’t seem like your cup of tea, I’ve done the grunt work for you. Just check out the print functions like the one below in the Github documentation for this project.
def print8(): device.pixel(2, 3, 1, redraw=True) device.pixel(2, 2, 1, redraw=True) device.pixel(2, 1, 1, redraw=True) device.pixel(3, 3, 1, redraw=True) device.pixel(4, 1, 1, redraw=True) device.pixel(4, 2, 1, redraw=True) device.pixel(4, 3, 1, redraw=True) device.pixel(5, 1, 1, redraw=True) device.pixel(6, 3, 1, redraw=True) device.pixel(6, 2, 1, redraw=True) device.pixel(6, 1, 1, redraw=True) device.pixel(5, 3, 1, redraw=True) device.pixel(3, 1, 1, redraw=True)
After you have all your number functions written, you just need to write a simple function that will take two input digits and display them on the matrix when the function is called.
def writeTherm(t,d): t = int(t) d = int(d) if t==5: print50() if t==6: print60() if t==7: print70() if t==8: print80() if d==0: print0() if d==1: print1() if d==2: print2() if d==3: print3() if d==4: print4() if d==5: print5() if d==6: print6() if d==7: print7() if d==8: print8() if d==9: print9()
Communicating with PubNub
Now it’s time to add to the callback function again. This time search the received message for a key called “thermostat.” If it has a value, meaning the app has signaled a desired change in temperature, simply call the function to show that number on the matrix.
def _callback(m, channel): if m.get("thermostat") != None: device.clear() writeTemp(str(m.get("thermostat"))[0],str(m.get("thermostat"))[1])
After finishing this step, you should be able to control the numbers on your matrix from the PubNub Debug Console. Try it out!
Incorporating Presence
The last part of setting up the Raspberry Pi is incorporating PubNub’s presence feature. To make it easier to work with the AppleTV, you’ll need to send out the initial states of all the components when a new user joins the channel by opening up the app. To do this, simply use the presence() method and a different callback function to publish the states of each component. For our purposes, only the LED and the temperature sensor have unique states at the start of the program. The thermostat just has a default setting.
def pcallback(m, channel): if m['action'] == 'join': if GPIO.input(LED_PIN)==1: pubnub.publish(channel=channel, message={"light":"on","thermostat":"73", "temp":str(int(read_temp()))}) elif GPIO.input(LED_PIN)==0: pubnub.publish(channel=channel, message={"light":"off","thermostat":"73", "temp":str(int(read_temp()))}) pubnub.presence(channel=channel, callback=pcallback, error=_error)
And, you’re done! Congratulations, you’ve just completed the first part of your simulated smart home.
Next Steps
Now that you’ve finished with the hardware, it’s time to program your AppleTV to control the Raspberry Pi. Keep an eye out for Part 2 of this tutorial so that you can finish building your simulated home of the future, and be one step closer to designing the smart home of your dreams. | https://www.pubnub.com/blog/building-a-smart-home-part-1-the-hardware/ | CC-MAIN-2021-31 | refinedweb | 2,846 | 71.14 |
Dominique Devienne wrote:
> On 6/15/06, Greg Irvine <greg.irvine@thalesatm.com> wrote:
>> Well, I managed to sort this out using the <subant> task and
>> <propertyset>
>> to pass only the required properties along.
>>
>> No more hard-coded folder lists or duplicated build files now!
>
>?
Also, what is this about
> // Return the full target banner (toString() in bugged in JDK
1.4.1)
> return _buffer.substring(0);
performance?
>
> In fact, here's the logger. Probably doesn't deal with <parallel> and
> multiple threads properly, but since I was using none of that, it was
> good enough for me. --DD
>
> import java.util.Stack;
>
> import org.apache.tools.ant.Target;
> import org.apache.tools.ant.Project;
> import org.apache.tools.ant.BuildEvent;
> import org.apache.tools.ant.DefaultLogger;
> import org.apache.tools.ant.util.StringUtils;
>
> /**
> * Build logger that allows to make sense of the nesting structure
> * generated by the use of <ant> and <subant> in Ant build
> files.
> * <p>
> * The target banner (the target name followed by a colon, all in its own
> line)
> * will not be printed until that target's tasks output any kind of message.
> * This greatly simplifies the build output for all those targets that do
> not
> * execute, either because they are prevented to from their 'if' or 'unless'
> * attributes, or because all their input files are up-to-date versus their
> * output files.
> * <p>
> * In addition, the target banner (when output) will be postfixed with the
> * project path that lead to its execution, i.e. the list of project names
> * that were started using either <ant> and <subant>. Assuming
> * one calls the build target of 3 different sub-builds called A, B, and C
> * all called from a master build, one could get an output as follows:
> * <pre>
> * Buildfile: master.xml
> *
> * build: [@master/A]
> * Compiling 19 source file to /acme/A/classes
> *
> * build: [@master/B]
> * Compiling 15 source file to /acme/B/classes
> *
> * build: [@master/C]
> * Compiling 12 source file to /acme/C/classes
> *
> * BUILD SUCCESSFUL
> * Total time: 8 seconds
> * </pre>
> * <p>
> * Inspired from NoBannerLogger by Peter Donald.
> */
> public class NoBannerSubBuildLogger
> extends DefaultLogger {
>
> /** The cached current target name, awaiting to be possibly printed. */
> private String _targetName;
>
> /** The stack of nesting Ant projects. */
> private Stack _projects = new Stack();
>
> /** The private buffer of this logger. */
> protected StringBuffer _buffer = new StringBuffer(128);
>
> /**
> * Gets the target banner for a given target name.
> *
> * @param targetName the target name to get the banner for.
> * @return the full target banner name.
> */
> protected String getTargetBanner(String targetName) {
> _buffer.setLength(0);
>
> // Target banner as usual
> _buffer.append(StringUtils.LINE_SEP);
> _buffer.append(targetName);
> _buffer.append(':');
>
> // Postfix the project path
> fillToIndex(_buffer, 16, ' ', 1);
> _buffer.append('[');
> appendProjectPath(_buffer, '/');
> _buffer.append(']');
>
> // Return the full target banner (toString() in bugged in JDK 1.4.1)
> return _buffer.substring(0);
> }
>
> /**
> * Appends the current project path to a given buffer.
> *
> * @param buffer the string buffer to append to.
> * @param separator the project path separator to use.
> */
> protected void appendProjectPath(StringBuffer buffer, char separator) {
> final int count = _projects.size();
> for (int i = 0; i < count; ++i) {
> Project project = (Project)_projects.get(i);
> buffer.append(project.getName());
> buffer.append(separator);
> }
> if (count > 0) {
> buffer.setLength(_buffer.length()-1);
> }
> }
>
> /**
> * Fills a string buffer with a given character to reach a known length.
> *
> * @param buffer the string buffer to fill (Cannot be
> <code>null</code>).
> * @param column the column index to fill up to.
> * @param c the char to fill up with.
> * @param minLength the mininum number of character to add in case the
> * string buffer is already longer than <code>column</code>
> * @return the number of characters actually added.
> */
> protected static int fillToIndex(StringBuffer buffer, int column,
> char c, int minLength) {
> final int fillCount = Math.max(column - buffer.length(), minLength);
> for (int i = 0; i < fillCount; ++i) {
> buffer.append(c);
> }
> return fillCount;
> }
>
> /**
> * Records/caches the target name and its project just started.
> *
> * @param event the build event to extract the target from.
> */
> public void targetStarted(BuildEvent event) {
> Target target = event.getTarget();
> _targetName = target.getName();
> _projects.push(target.getProject());
> }
>
> /**
> * Cleans up the record/cache of the target name and its project.
> *
> * @param event the (ignored here) build event.
> */
> public void targetFinished(BuildEvent event) {
> _targetName = null;
> _projects.pop();
> }
>
> /**
> * Logs a task message, possibly displaying the target and project path
> * that led to its execution, if they were not displayed earlier.
> *
> * @param event the build event containing message information.
> * Must not be <code>null</code>.
> */
> public void messageLogged(BuildEvent event) {
> if (event.getPriority() > msgOutputLevel
> || null == event.getMessage()
> || (_targetName != null &&
> "".equals(event.getMessage().trim()))) {
> return;
> }
>
> if (_targetName != null) {
> out.println(getTargetBanner(_targetName));
> _targetName = null;
> }
>
> super.messageLogged(event);
> }
>
> }
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@ant.apache.org
> For additional commands, e-mail: user-help@ant.apache.org
>
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@ant.apache.org
For additional commands, e-mail: user-help@ant.apache.org | http://mail-archives.apache.org/mod_mbox/ant-user/200606.mbox/%3C4492C524.3010205@apache.org%3E | CC-MAIN-2014-52 | refinedweb | 797 | 52.36 |
This is a C Program merge the elements of 2 sorted array.
This problem will implement two one-dimentional arrays and will sort them separately. Then these sorted arrays are merged into one single sorted array.
1. Create two arrays of some fixed size and define their elements in sorted fashion.
2. Take two variables i and j, which will be at the 0th position of these two arrays.
3. Elements will be compared one by one using i and j in for loop, and whichever element is smaller than the other, that element will get inserted to final array and the position(either i or j) will move by one, whereas the other array’s track position will remain in that same place.
4. Above work will be done till we reach the end of either array. After that, one of the array whose elements are still to be added, its elements will get straightaway added to the final array.
Here is source code of the C Program to merge the elements of 2 sorted array. The program is successfully compiled and tested using Turbo C compiler in windows environment. The program output is also shown below.
* C Program to Merge the Elements of 2 Sorted Array
#include <stdio.h>
void main()
int array1[50], array2[50], array3[100], m, n, i, j, k = 0;
printf("\n Enter size of array Array 1: ");
scanf("%d", &m);
printf("\n Enter sorted elements of array 1: \n");
for (i = 0; i < m; i++)
scanf("%d", &array1[i]);
printf("\n Enter size of array 2: ");
scanf("%d", &n);
printf("\n Enter sorted elements of array 2: \n");
for (i = 0; i < n; i++)
scanf("%d", &array2[i]);
i = 0;
j = 0;
while (i < m && j < n)
if (array1[i] < array2[j])
array3[k] = array1[i];
i++;
else
array3[k] = array2[j];
j++;
k++;
if (i >= m)
while (j < n)
array3[k] = array2[j];
j++;
k++;
if (j >= n)
while (i < m)
array3[k] = array1[i];
i++;
k++;
printf("\n After merging: \n");
for (i = 0; i < m + n; i++)
printf("\n%d", array3[i]);
1. Declare 2 1D arrays of some fixed size, then take size of the arrays from user and define all the elements of the array according to the size in sorted fashion.
2. Take two variables, i and j as iterators which will track the position of elements in arrays.
3. Running a while loop till we reach the end of either array, the element at ith and jth position of two arrays are compared.
4. The smaller element gets inserted into final array (third array, whose size is the sum of the size of these two arrays) and the track position gets incremented by 1.
5. This process continues, till we reach the end of either array.
6. After finishing the loop above, one of the array’s tracker(i.e either i or j) will not be at the last position of the corresponding array, in that case we will have to add all the remaining elements of that array to the final array as it is one by one
Enter size of array Array 1: 4 Enter sorted elements of array 1: 12 18 40 60 Enter size of array 2: 4 Enter sorted elements of array 2: 47 56 89 90 After merging: 12 18 40 47 56 60 89 90
- Apply for C Internship
- Buy C Books
- Practice BCA MCQs
- Practice Computer Science MCQs | https://www.sanfoundry.com/c-program-merge-sorted-array/ | CC-MAIN-2022-21 | refinedweb | 581 | 60.58 |
Introduction
The world is becoming lazier. It gets easier to do certain tasks, and certain previously tedious tasks now get done for us. Technology is great! With new technology comes new solutions, as in the case of today's topic. Yes, TinyURL has been around a very long time, but you may not have heard about it before, or, you may have been wondering how to shorten your long URLs quickly as TinyURL does. Today, I will show you. It is quite easy and not a lot of work, so let's get started, shall we?
Practical
Create a new C# or Visual Basic.NET Windows Forms Application and design your form to resemble Figure 1.
Figure 1: Design
Add the ShrinkURL method.
C#
private string ShrinkURL(string strURL) { string URL; URL = "" + strURL.ToLower(); System.Net.HttpWebRequest objWebRequest; System.Net.HttpWebResponse objWebResponse; System.IO.StreamReader srReader; string strHTML; objWebRequest = (System.Net.HttpWebRequest)System.Net .WebRequest.Create(URL); objWebRequest.Method = "GET"; objWebResponse = (System.Net.HttpWebResponse)objWebRequest .GetResponse(); srReader = new System.IO.StreamReader(objWebResponse .GetResponseStream()); strHTML = srReader.ReadToEnd(); srReader.Close(); objWebResponse.Close(); objWebRequest.Abort(); return (strHTML); }
VB.NET
Private Function ShrinkURL(ByVal strURL As String) As String Dim URL As String URL = "" + _ strURL.ToLower Dim objWebRequest As Net.HttpWebRequest Dim objWebResponse As Net.HttpWebResponse Dim srReader As IO.StreamReader Dim strHTML As String objWebRequest = CType(Net.WebRequest.Create(URL), _ Net.HttpWebRequest) objWebRequest.Method = "GET" objWebResponse = CType(objWebRequest.GetResponse(), _ Net.HttpWebResponse) srReader = New IO.StreamReader(objWebResponse _ .GetResponseStream) strHTML = srReader.ReadToEnd srReader.Close() objWebResponse.Close() objWebRequest.Abort() Return (strHTML) End Function
The ShrinkURL function generates a shortened URL with the help of the 'api-create' method in its URL. You supply the long URL that was entered in one of the Textboxes; then, you need to create WebRequest objects to obtain the returned shortened URL and a StreamReader object to interpret the URL and return a properly formed string to be returned to the calling method or procedure.
Add the code for the Process and Copy buttons.
C#
private void btnProcess_Click(object sender, EventArgs e) { System.Threading.Thread.Sleep(1000); txtOutput.Text = ShrinkURL(txtURL.Text); txtURL.Text = ""; } private void btnCopy_Click(object sender, EventArgs e) { Clipboard.SetText(txtOutput.Text); }
VB.NET
Private Sub btnProcess_Click(sender As Object, e As EventArgs) _ Handles btnProcess.Click Threading.Thread.Sleep(1000) txtOutput.Text = ShrinkURL(txtURL.Text) txtURL.Text = "" End Sub Private Sub btnCopy_Click(sender As Object, e As EventArgs) _ Handles btnCopy.Click Clipboard.SetText(txtOutput.Text) End Sub
The Process button waits a second, and then adds the shrunken URL to the Output Textbox. The Copy button simply copies the URL to the Clipboard.
Figure 2 shows a long URL that was entered. Figure 3 shows the shortened URL.
Figure 2: Long URL
Figure 3: Short URL
Conclusion
Quick and dirty. I just want to thank everyone for reading my articles. Some articles are quite long, some are short, but I do hope you benefit from them. My aim with these articles is to help you learn funky tricks, or interesting things, or simply learn something new. I know what it is like to struggle and not know what to do and where to go and what to look for. CodeGuru has helped so much! I just hope my ideas and my experience help you as well. | https://mobile.codeguru.com/csharp/.net/net_general/internet/shortening-your-long-urls-with-the-tinyurl-api-and-.net.html | CC-MAIN-2018-47 | refinedweb | 554 | 52.56 |
NAME
ipfw - IP firewall
SYNOPSIS
#include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <linux/ip.h> #include <linux/tcp.h> #include <linux/udp.h> #include <linux/icmp.h> #include <linux/if.h> #include <linux/ip_fw.h> int setsockopt (int socket, IPPROTO_IP, int command, void *data, int length)
DESCRIPTION
The IP firewall facilities in the Linux kernel provide mechanisms for accounting IP packets, for building firewalls based on packet-level filtering, for building firewalls using transparent proxy servers (by redirecting packets to local sockets), and for masquerading forwarded packets. The administration of these functions is maintained in the kernel as a series of separate lists (hereafter referred to as chains) each containing zero or more rules. There are three builtin chains which are called input, forward and output which always exist. All other chains are user defined. A chain is a sequence of rules; each rule contains specific information about source and destination addresses, protocols, port numbers, and some other characteristics. Information about what to do if a packet matches the rule is also contained. A packet will match with a rule when the characteristics of the rule match those of the IP packet. A packet always traverses a chain starting at rule number 1. Each rule specifies what to do when a packet matches. If a packet does not match a rule, the next rule in that chain is tried. If the end of a builtin chain is reached the default policy for that chain is returned. If the end of a user defined chain is reached then the rule after the rule which branched to that chain is tried. The purpose of the three builtin chains are Input firewall These rules regulate the acceptance of incoming IP packets. All packets coming in via one of the local network interfaces are checked against the input firewall rules (locally-generated packets are considered to come from the loopback interface). A rule which matches a packet will cause the rule’s packet and byte counters to be incremented appropriately. Forwarding firewall These rules define the permissions for forwarding IP packets. All packets sent by a remote host having another remote host as destination are checked against the forwarding firewall rules. A rule which matches will cause the rule’s packet and byte counters to be incremented appropriately. Output firewall These rules define the permissions for sending IP packets. All packets that are ready to be be sent via one of the local network interfaces are checked against the output firewall rules. A rule which matches will cause the rule’s packet and byte counters to be incremented appropriately. Each of the firewall rules contains either a branch name or a policy, which specifies what action has to be taken when a packet matches with the rule. There are five different policies possible: ACCEPT (let the packet pass the firewall), REJECT (do not accept the packet and send an ICMP host unreachable message back to the sender as notification), DENY (sometimes referred to as block; ignore the packet without sending any notification), REDIRECT (redirected to a local socket - input rules only) and MASQ (pass the packet, but perform IP masquerading - forwarding rules only). The last two are special; for REDIRECT, the packet will be received by a local process, even if it was sent to another host and/or another port number. This function only applies to TCP or UDP packets. For MASQ, the sender address in the IP packets is replaced by the address of the local host and the source port in the TCP or UDP header is replaced by a locally generated (temporary) port number before being forwarded. Because this administration is kept in the kernel, reverse packets (sent to the temporary port number on the local host) are recognized automatically. The destination address and port number of these packets will be replaced by the original address and port number that was saved when the first packet was masqueraded. This function only applies to TCP or UDP packets. There is also a special target RETURN which is equivalent to falling off the end of the chain. This paragraph describes the way a packet goes through the firewall. Packets received via one of the local network interfaces will pass the following chains: input firewall (incoming device) Here, the device (network interface) that is used when trying to match a rule with an IP packet is listed between brackets. After this step, a packet will optionally be redirected to a local socket. When a packet has to be forwarded to a remote host, it will also pass the next set of rules: forwarding firewall (outgoing device) After this step, a packet will optionally be masqueraded. Responses to masqueraded packets will never pass the forwarding firewall (but they will pass both the input and output firewalls). All packets sent via one of the local network interfaces, either locally generated or being forwarded, will pass the following sets of rules: output firewall (outgoing device) When a packet enters one of the three above chains rules are traversed from the first rule in order. When analysing a rule one of three things may occur. Rule unmatched: If a rule is unmatched then the next rule in that chain is analysed. If there are no more rules for that chain the default policy for that chain is returned (or traversal continues back at the calling chain, in the case of a user-defined chain). Rule matched (with branch to chain): When a rule is matched by a packet and the rule contains a branch field then a jump/branch to that chain is made. Jumps can only be made to user defined chains. As described above, when the end of a builtin chain is reached then a default policy is returned. If the end of a used defined chain is reached then we return to the rule from whence we came. There is a reference counter at the head of each chain which determines the number of references to that chain. The reference count of a chain must be zero before it can be deleted to ensure that no branches are effected. To ensure the builtin chains are never deleted their reference count is initialised to one. Also since no branches to builtin chains can be made, their reference counts are always one. The reference count on user defined chains are initialised to zero and are changed accordingly when rules are inserted, deleted etc. Multiple jumps to different chains are possible which unfortunately make loops possible. Loop detection is therefore provided. Loops are detected when a packet tries to re-enter a chain it is already traversing. An example of a simple loop that could be created is if we set up two user defined chains called "test1" and "test2". We firstly insert a rule in the "input" chain which jumps to "test1". We then create a rule in the "test1" chain which points to "test2" and a rule in "test2" which points to "test1". Here we have obviously created a loop. When a packet then enters the input chain it will branch to the "test1" chain and then to the "test2" chain. From here it will try to branch back to the "test1" chain. A message in the syslog will be recorded along with the path which the packet traversed, to assist in debugging firewall rules. Rule matched (special branch): The special labels ACCEPT, DENY, REJECT, REDIRECT, MASQ or RETURN can be given which specify the immediate fate of the packet as discussed above. If no label is specified then the next rule in the chain is analysed. Using this last option (no label) an accounting chain can be created. If each of the rules in this accounting chain have no branch or label then the packet will always fall through to the end of the chain and then return to the calling chain. Each rule that matches in the accounting chain will have its byte and packet counters incremented as expected. This accounting chain can be branched to from any other chain (eg input, forward or output chain). This is a very neat way of performing packet accounting. The firewall administration can be changed via calls to setsockopt(2). The existing rules can be inspected by looking at two files in the /proc/net directory: ip_fwchains, ip_fwnames. These two files are readable only by root. The current administration related to masqueraded sessions can be found in the file ip_masquerade in the same directory.
COMMANDS
Command for changing and setting up chains and rules is ipchains(8) Most commands require some additional data to be passed. A pointer to this data and the length of the data are passed as option value and option length arguments to setsockopt. The following commands are available: IP_FW_INSERT This command allows a rule to be inserted in a chain at a given position (where 1 is considered the start of the chain). If there is already a rule in that position, it is moved one slot, as are any following rules in that chain. The reference count of any chains referenced by this inserted rule are incremented appropriately. The data passed with this command is an ip_fwnew structure, defining the position, chain and contents of the new rule. IP_FW_DELETE Remove the first rule matching the specification from the given chain. The data passed with this command is an ip_fwchange structure, defining the rule to be deleted and its chain. The reference count of any chains referenced by this deleted rule are decremented appropriately. Note that the fw_mark field is currently ignored in rule comparisons (see the BUGS section). IP_FW_DELETE_NUM Remove a rule from one of the chains at a given rule number (where 1 means the first rule). The data passed with this command is an ip_fwdelnum structure, defining the rule number of the rule to be deleted and its chain. The reference count of any chains referenced by this deleted rule are decremented appropriately. IP_FW_ZERO Reset the packet and byte counters in all rules of a chain. The data passed with this command is an ip_chainlabel which defines the chain which is to be operated on. See also the description of the /proc/net files for a way to atomically list and reset the counters. IP_FW_FLUSH Remove all rules from a chain. The data passed with this command is an ip_chainlabel which defines the chain to be operated on. IP_FW_REPLACE Replace a rule in a chain. The new rule overwrites the rule in the given position. Any chains referenced by the new rule are incremented and chains referenced by the overwritten rule are decremented. The data passed with this command is an ip_fwnew structure, defining the contents of the new rule, the the chain name and the position of the rule in that chain. IP_FW_APPEND Insert a rule at the end of one of the chains. The data passed with this command is an ip_fwchange structure, defining the contents of the new rule and the chain to which it is to be appended. Any chains referenced by this new rule have their refcount incremented. IP_FW_MASQ_TIMEOUTS Set the timeout values used for masquerading. The data passed with this command is a structure containing three fields of type int, representing the timeout values (in jiffies, 1/HZ second) for TCP sessions, TCP sessions after receiving a FIN packet, and UDP packets, respectively. A timeout value 0 means that the current timeout value of the corresponding entry is preserved. IP_FW_CHECK Check whether a packet would be accepted, denied, rejected, redirected or masqueraded by a chain. The data passed with this command is an ip_fwtest structure, defining the packet to be tested and the chain which it is to be test on. Both builtin and user defined chains can be tested. IP_FW_CREATECHAIN Create a chain. The data passed with this command is an ip_chainlabel defining the name of the chain to be created. Two chains can not have the same name. IP_FW_DELETECHAIN Delete a chain. The data passed with this command is an ip_chainlabel defining the name of the chain to be deleted. The chain must not be referenced by any rule (ie. refcount must be zero). The chain must also be empty which can be achieved using IP_FW_FLUSH. IP_FW_POLICY Changes the default policy on a builtin rule. The data passed with this command is an ip_fwpolicy structure, defining the chain whose policy is to be changed and the new policy. The chain must be a builtin chain as user-defined chains don’t have default policies.
STRUCTURES
The ip_fw structure contains the following relevant fields to be filled in for adding or replacing a rule: struct in_addr fw_src, fw_dst Source and destination IP addresses. struct in_addr fw_smsk, fw_dmsk Masks for the source and destination IP addresses. Note that a mask of 0.0.0.0 will result in a match for all hosts. char fw_vianame[IFNAMSIZ] Name of the interface via which a packet is received by the system or is going to be sent by the system. If the option IP_FW_F_WILDIF is specified, then the fw_vianame need only match the packet interface up to the first NUL character in fw_vianame. This allows wildcard-like effects. The empty string has a special meaning: it will match with all device names. __u16 fw_flg Flags for this rule. The flags for the different options can be bitwise or’ed with each other. The options are: IP_FW_F_TCPSYN (only matches with TCP packets when the SYN bit is set and both the ACK and RST bits are cleared in the TCP header, invalid with other protocols), The option IP_FW_F_MARKABS is described under the fw_mark entry. The option IP_FW_F_PRN can be used to list some information about a matching packet via printk(). The option IP_FW_F_FRAG can be used to specify a rule which applies only to second and succeeding fragments (initial fragments can be treated like normal packets for the sake of firewalling). Non-fragmented packets and initial fragments will never match such a rule. Fragments do not contain the complete information assumed for most firewall rules, notably ICMP type and code, UDP/TCP port numbers, or TCP SYN or ACK bits. Rules which try to match packets by these criteria will never match a (non-first) fragment. The option IP_FW_F_NETLINK can be specified if the kernel has been compiled with CONFIG_IP_FIREWALL_NETLINK enabled. This means that all matching packets will be sent out the firewall netlink device (character device, major number 36, minor number 3). The output of this device is four bytes indicating the total length, four bytes indicating the mark value of the packet (as described under fw_mark above), a string of IFNAMSIZ characters containing the interface name for the packet, and then the packet itself. The packet is truncated to fw_outputsize bytes if it is longer. __u16 fw_invflg This field is a set of flags used to negate the meaning of other fields, eg. to specify that a packet must NOT be on an interface. The valid flags are IP_FW_INV_SRCIP (invert the meaning of the fw_src field) IP_FW_INV_DSTIP (invert the meaning of fw_dst) IP_FW_INV_PROTO (invert the meaning of fw_proto) IP_FW_INV_SRCPT (invert the meaning of fw_spts) IP_FW_INV_DSTPT (invert the meaning of fw_dpts) IP_FW_INV_VIA (invert the meaning of fw_vianame) IP_FW_INV_SYN (invert the meaning of fw_flg & IP_FW_F_TCPSYN) IP_FW_INV_FRAG (invert the meaning of fw_flg & IP_FW_F_FRAG). It is illegal (and useless) to specify a rule that can never be matched, by inverting an all-inclusive set. Note also, that a fragment will never pass any test on ports or SYN, even an inverted one. __u16 fw_proto The protocol that this rule applies to. The protocol number 0 is used to mean ‘any protocol’. __u16 fw_spts[2], fw_dpts[2] These fields specify the range of source ports, and the range of destination ports respectively. The first array element is the inclusive minimum, and the second is the inclusive maximum. Unless the rule specifies a protocol of TCP, UDP or ICMP, the port range must be 0 to 65535. For ICMP, the fw_spts field is used to check the ICMP type, and the fw_dpts field is used to check the ICMP code. __u16 fw_redirpt This field must be zero unless the target of the rule is "REDIRECT". Otherwise, if this redirection port is 0, the destination port of a packet will be used as the redirection port. __u32 fw_mark This field indicates a value to mark the skbuff with (which contains the administration data for the matching packet). This is currently unused, but could be used to control how individual packets are treated. If the IP_FW_F_MARKABS flag is set then the value in fw_mark simply replaces the current mark in the skbuff, rather than being added to the current mark value which is normally done. To subtract a value, simply use a large number for fw_mark and 32-bit wrap-around will occur. __u8 fw_tosand, fw_tosxor These 8-bit masks define how the TOS field in the IP header should be changed when a packet is accepted by the firewall rule. The TOS field is first bitwise and’ed with fw_tosand and the result of this will be bitwise xor’ed with fw_tosxor. Obviously, only packets which match the rule have their TOS effected. It is the responsibility of the user that packets with invalid TOS bits are not created using this option. The ip_fwuser structure, used when calling some of the above commands contains the following fields: struct ip_fw ipfw See above ip_chainlabel label This is the label of the chain which is to be operated on. The ip_fwpkt structure, used when checking a packet, contains the following fields: struct iphdr fwp_iph The IP header. See <linux/ip.h> for a detailed description of the iphdr structure. struct tcphdr fwp_protoh.fwp_tcph struct udphdr fwp_protoh.fwp_udph struct icmphdr fwp_protoh.fwp_icmph The TCP, UDP, or ICMP header, combined in a union named fwp_protoh. See <linux/tcp.h>, <linux/udp.h>, or <linux/icmp.h> for a detailed description of the respective structures. struct in_addr fwp_via The interface address via which the packet is pretended to be received or sent.
CHANGES
The ability to add in extra chains other than just the standard input, output and forward chains is very powerful. The ability to branch to any chain makes the replication of rules unnecessary. Accounting becomes automatic as a single chain can be referenced by all builtin chains to do the accounting. Fragments must now be handled explicitly; previously second and succeeding fragments were passed automatically. The lowest TOS bit (MBZ) could not be effected previously; the kernel used to silently mask out any attempted manipulation of the lowest TOS bit. (‘‘So now you know how to do it - DON’T.’’). The packet and byte counters are now 64-bit on 32-bit machines (actually presented as two 32-bit values). The ability to specify an interface by an IP address was obsoleted by the ability to specify it by name; the combination of the two was error-prone and so only an interface name can now be used. The old IP_FW_F_TCPACK flag was made obsolete by the ability to invert the IP_FW_F_TCPSYN flag. The old IP_FW_F_BIDIR flag made the kernel code complex and is no longer supported. The ability to specify several ports in one rule was messy and didn’t win much, so has been removed.
RETURN VALUE
On success (or a straightforward packet accept for the CHECK options), zero is returned. On error, -1 is returned and errno is set appropriately. See setsockopt(2) for a list of possible error values. ENOENT indicates that the given chain name doesn’t exist. When the check packet command is used, zero is returned when the packet would be accepted without redirection or masquerading. Otherwise, -1 is returned and errno is set to ECONNABORTED (packet would be accepted using redirection), ECONNRESET (packet would be accepted using masquerading), ETIMEDOUT (packet would be denied), ECONNREFUSED (packet would be rejected), ELOOP (packet got into a loop), ENFILE (packet fell off end of chain; only occurs for user defined chains).
LISTING RULES
In the directory /proc/net there are two entries to list the currently defined rules and chains: ip_fwnames (for IP firewall chain names) One line per chain. Each line contains the chain name, policy, the number of references to that chain and the packet and byte counters which have matched the policy (represented as two pairs of 32-bit numbers; most significant 32-bits first). ip_fwchains (for IP firewall chains) One line per rule; rules are listed one chain at a time (from first to last as they appear in /proc/net/ip_fwnames) and in order from first to last down each chain. The fields are: the chain name for that rule, source address and mask, destination address and mask, interface name (or "-"), the fw_flg field, the fw_invflg field, protocol number, packet and byte counters, the source and destination port ranges, the TOS and-mask, the TOS xor-mask, the fw_redirpt field, the fw_mark field, the fw_outputsize field, and the target (label). The IP addresses and masks are listed as eight hexadecimal digits, the TOS masks are listed as two hexadecimal digits preceded by the letters A and X, respectively, the fw_mark, fw_flg and fw_invflg fields are listed in hex, and the other values are represented in decimal format. The packet and bytes counters are represented as two space-separated 32-bit numbers, representing the most and least significant words respectively. Individual fields are separated by white space, by a "/" (the address and the corresponding mask), by "->" (the source and destination address/mask pairs), or "-" (the ranges for source and destination ports). These files may also be opened in read/write mode. In that case, the packet and byte counters in all the rules of that category will be reset to zero after listing their current values. The file /proc/net/ip_masquerade contains the kernel administration related to masquerading. After a header line, each masqueraded session is described on a separate line with the following entries, separated by white space or by ’:’ (the address/port number pairs): protocol name ("TCP" or "UDP"), source IP address and port number, destination IP address and port number, the new port number, the initial sequence number for adding a delta value, the delta value, the previous delta value, and the expire time in jiffies (1/HZ second). All addresses and numeric values are in hexadecimal format, except the last three entries, being represented in decimal format.
FILES
/proc/net/ip_fwchains /proc/net/ip_fwnames /proc/net/ip_masquerade
BUGS
The setsockopt(2) interface is a crock. This should be put under /proc/sys/net/ipv4 and the world would be a better place. There is no way to read and reset a single chain; stop packets traversing the chain and then list, reset and restore traffic. The packet and byte counters should be presented in /proc as a single 64-bit value, not two 32-bit values. The "fw_mark" field isn’t used for deletions of matching rules. This is to facilitate the ipfwadm compatibility script. Similarly, the IP_FW_F_MARKABS flag is ignored in comparisons.
SEE ALSO
setsockopt(2), socket(2), ipchains(8) February 9, 1999 IPFW(4) | http://manpages.ubuntu.com/manpages/dapper/man4/ipfw_chains.4.html | CC-MAIN-2014-35 | refinedweb | 3,872 | 61.56 |
10.1 Creating a library with the GNU archiver
The GNU archiver
ar combines a collection of object files into a
single archive file, also known as a library. An archive file is
simply a convenient way of distributing a large number of related object
files together (as described earlier in section 2.7 Linking with external libraries).
To demonstrate the use of the GNU archiver we will create a small
library ‘libhello.a’ containing two functions
hello and
bye.
The first object file will be generated from the source code for the
hello function, in the file ‘hello_fn.c’ seen earlier:
#include <stdio.h> #include "hello.h" void hello (const char * name) { printf ("Hello, %s!\n", name); }
The second object file will be generated from the source file
‘bye_fn.c’, which contains the new function
bye:
#include <stdio.h> #include "hello.h" void bye (void) { printf ("Goodbye!\n"); }
Both functions use the header file ‘hello.h’, now with a prototype
for the function
bye():
void hello (const char * name); void bye (void);
The source code can be compiled to the object files ‘hello_fn.o’ and ‘bye_fn.o’ using the commands:
$ gcc -Wall -c hello_fn.c $ gcc -Wall -c bye_fn.c
These object files can be combined into a static library using the following command line:
$ ar cr libhello.a hello_fn.o bye_fn.o
The option
cr stands for "create and replace".(33) If
the library does not exist, it is first created. If the library already
exists, any original files in it with the same names are replaced by the new
files specified on the command line. The first argument
‘libhello.a’ is the name of the library. The remaining arguments
are the names of the object files to be copied into the library.
The archiver
ar also provides a "table of contents" option
t to list the object files in an existing library:
$ ar t libhello.a hello_fn.o bye_fn.o
Note that when a library is distributed, the header files for the public functions and variables it provides should also be made available, so that the end-user can include them and obtain the correct prototypes.
We can now write a program using the functions in the newly created library:
#include "hello.h" int main (void) { hello ("everyone"); bye (); return 0; }
This file can be compiled with the following command line, as described in section 2.7 Linking with external libraries, assuming the library ‘libhello.a’ is stored in the current directory:
$ gcc -Wall main.c libhello.a -o hello
The main program is linked against the object files found in the library file ‘libhello.a’ to produce the final executable.
The short-cut library linking option
-l can also be used to
link the program, without needing to specify the full filename of the
library explicitly:
$ gcc -Wall -L. main.c -lhello -o hello
The option
-L. is needed to add the current directory to the
library search path. The resulting executable can be run as usual:
$ ./hello Hello, everyone! Goodbye!
It displays the output from both the
hello and
bye
functions defined in the library. | http://www.network-theory.co.uk/docs/gccintro/gccintro_79.html | crawl-001 | refinedweb | 520 | 67.25 |
Content-type: text/html
Standard C Library (libc.a)
#include <sys/timers.h>
int getclock(
int clktyp,
struct timespec *tp) ;
Identifies a system-wide clock. Points to a timespec structure space where the current value of the system-wide clock is stored.
The getclock() function sets the current value of the clock specified by clktyp into the location pointed to by the tp parameter.
The clktyp parameter is given as a symbolic constant name, as defined in the sys/timers.h include file. Only the TIMEOFDAY symbolic constant, which specifies the normal time-of-day clock to access for system-wide time, is supported.
For the clock specified by TIMEOFDAY, the value returned by this function is the elapsed time since the epoch. The epoch is referenced to 00:00:00 CUT (Coordinated Universal Time) 1 Jan 1970.
The getclock() function returns a timespec structure, which is defined in the sys/timers.h header file. It has the following members:
The time interval expressed by the members of this structure is ((tv_sec * 10^9) + (tv_nsec)) nanoseconds.
Trial use
Upon successful completion, the getclock() function returns a value of 0 (zero). Otherwise, getclock() returns a value of -1 and sets errno to indicate the error.
If the getclock() function fails, errno is to one of the following values: The clktyp parameter does not specify a known system-wide clock. An error occurred when the system-wide clock specified by the clktyp parameter was accessed.
Functions: gettimeofday(2), gettimer(3), setclock(3), time(3) delim off | http://backdrift.org/man/tru64/man3/getclock.3.html | CC-MAIN-2017-09 | refinedweb | 254 | 58.48 |
5.3. Deferred Initialization¶
In the previous examples we played fast and loose with setting up our networks. In particular we did the following things that shouldn’t work:
We defined the network architecture with no regard to the input dimensionality.
We added layers without regard to the output dimension of the previous layer.
We even “initialized” these parameters without knowing how many parameters were to initialize.
All of those things sound impossible and indeed, they are. After all, there do not know exist.
5.3.1. Instantiating a Network¶
Let’s see what happens when we instantiate a network. We start with our trusty MLP as before.
from mxnet import init, np, npx from mxnet.gluon import nn npx.set_np() def getnet(): net = nn.Sequential() net.add(nn.Dense(256, activation='relu')) net.add(nn.Dense(10)) return net net = getnet()
At this point the network does not really know yet what the dimensionalities of the various parameters should be. All one could tell at this point is that each layer needs weights and bias, albeit of unspecified dimensionality. If we try accessing the parameters, that is exactly what happens.
print(net.collect_params) print(net.collect_params())
<bound method Block.collect_params of Sequential( (0): Dense(-1 -> 256, Activation(relu)) (1): Dense(-1 -> 10, linear) )> sequential0_ ( Parameter dense0_weight (shape=(256, -1), dtype=float32) Parameter dense0_bias (shape=(256,), dtype=float32) Parameter dense1_weight (shape=(10, -1),, -1), dtype=float32) Parameter dense0_bias (shape=(256,), dtype=float32) Parameter dense1_weight (shape=(10, -1), dtype=float32) Parameter dense1_bias (shape=(10,), dtype=float32) )
As we can see, nothing really changed. Only once we provide the network with some data do we see a difference. Let’s try it out.
x = np.random.uniform(size= bind all the dimensions as they become available. Once this is known, we can proceed by initializing parameters. This is the solution to the three problems outlined above.
5.3.2. = np.random.uniform(size=(2, 20)) y = net(x)
Init dense2_weight (256, 20) Init dense3_weight (10, 256).
5.3.3. Forced Initialization¶
Deferred initialization does not occur if the system knows the shape of
all parameters when calling the
initialize function. This can occur
in two cases:
We have already seen some data and we just want to reset the parameters.
We specified all input and output dimensions of the network when defining it.
The first case works just fine, as illustrated below.
net.initialize(init=MyInit(), force_reinit=True)
Init dense2_weight (256, 20) Init dense3_weight (10, 256)())
Init dense4_weight (256, 20) Init dense5_weight (10, 256)
5.3.4. Summary¶
Deferred initialization is a good thing. It allows Gluon to set many things automatically. | https://d2l.ai/chapter_deep-learning-computation/deferred-init.html | CC-MAIN-2019-51 | refinedweb | 440 | 58.48 |
Adaptive loading with service workers
Modifying the assets that you serve to users based on their device and network conditions.
Users access websites through a wide variety of devices and network connections. Even in major cities, where mobile networks are fast and reliable, one can end up experiencing slower load times, for example, when commuting in the subway, in a car, or just when moving around. In regions like emerging markets, this phenomenon is even more common, not only due to unreliable networks, but also because devices tend to have less memory and CPU processing power.
Adaptive loading is a web performance pattern that lets you adapt your site based on the user's network and device conditions.
The adaptive loading pattern is made possible by service workers, the Network Information API, the Hardware Concurrency API, and the Device Memory API. In this guide we explore how you can use service workers and the Network Information API to achieve an adaptive loading strategy.
Production case
Terra is one of the biggest media companies in Brazil. It has a large user base, coming from a wide variety of devices and networks.
To provide a more reliable experience to all their users, Terra combines service workers and the Network Information API to deliver lower quality images to users on 2G or 3G connections.
The company also found that the scripts and assets (like banners) loaded by ad networks were especially detrimental to users navigating in 3G or slower connections.
As is the case with many publishers, Terra serves AMP versions of their pages to users coming from search engines and other link sharing platforms. AMP pages are usually lightweight and help mitigate the impact of ads in performance by deprioritizing their load with respect to the main content of the page.
Taking that into consideration, Terra decided to start serving AMP versions of their pages not only to users coming from search engines, but also to those navigating their site in 3G connections or slower.
To achieve that, they use the Network Information API in the service worker to detect if the request comes from 3G or slower. If that's the case, they change the URL of the page to request the AMP version of the page instead.
Thanks to this technique, they send 70% less bytes to users on slower connections. The time spent in AMP pages is higher for 3G users and ads in AMP pages have a better CTR (click-through-rate) for that group.
Implement adaptive loading with Workbox
In this section we'll explore how Workbox can be used to implement adaptive loading strategies.
Workbox provides several runtime caching strategies out of the box. They are used to indicate how the service worker generates a response after receiving a
fetch event.
For example, in a Cache First strategy the
Request will be fulfilled using the cached response (if available). If there isn't a cached response, the
Request will be fulfilled by a network request and the response will be cached.
import {registerRoute} from 'workbox-routing';
import {CacheFirst} from 'workbox-strategies';
registerRoute(
new RegExp('/img/'),
new CacheFirst()
);
Caching strategies can be customized with Workbox plugins. These allow you to add additional behaviors by manipulating requests and responses during the lifecycle of a request. Workbox has several built-in plugins for common cases and APIs, but you can also define a custom plugin, and introduce some custom logic of your choice.
To achieve adapting loading, define a custom plugin, called, for example,
adaptiveLoadingPlugin:;
},
};
The previous code does the following:
- Implements a
requestWillFetch()callback: This is called whenever a network request is about to be made, so you can alter the
Request.
- Checks the connection type, by using the Network Information API. Based on the status of the network, it creates a new URL part, indicating the quality of the image to fetch (e.g.
q_30for 3G users).
- Creates a new URL based on the dynamic
newPartvalue, and returns the new
Requestto be made, based on that URL.,
}),
],
}),
);
As a result, when requests for images are intercepted, the runtime caching strategy will try to fulfill the request from the cache. If it's not available, it will run the logic in the plugin, to decide which image quality to fetch from the network.
Finally the response will be persisted in the cache, and sent back to the page.
Cloudinary Workbox Plugin
Cloudinary, a video and image hosting service, has a Workbox Plugin that encapsulates the functionality explained in the previous section, making it even easier to implement.
The plugin is designed to work with the Workbox webpack plugin. To implement it, use the
GenerateSW() class:
new workboxPlugin.GenerateSW({
swDest: 'sw.js',
importScripts: ['./cloudinaryPlugin.js'],
runtimeCaching: [
{
urlPattern: new RegExp('^.*/image/upload/'),
handler: 'CacheFirst',
options: {
cacheName: 'cloudinary-images',
plugins: [
{
requestWillFetch: async ({request}) =>
cloudinaryPlugin.requestWillFetch(request),
},
],
},
},
],
});
The previous code does the following:
- Uses the
GenerateSW()class to configure webpack to generate a service worker in the destination indicated in
swDest.
- Imports the cloudinary plugin script.
- Defines a Cache First runtime caching strategy for requests for images to the Cloudinary CDN.
- Passes the Cloudinary Workbox Plugin to adjust the image quality according to the network conditions.
Explore more adaptive loading strategies
You can go beyond this, by mapping device signals, like hardware concurrency and device memory to device categories and then serving different assets depending on the device type (low-, mid- or high-end). | https://web.dev/adaptive-loading-with-service-workers/ | CC-MAIN-2020-29 | refinedweb | 902 | 51.68 |
Understanding: Program Stack and Recursion
Carlos García
・5 min read
Recently, I've been doing a lot of research in "advance topics" of C++, I am very interested in things like memory management and optimization (I'm currently reading "Introduction to Algorithms" to improve my skills).
In my research, I found the topic of recursion. I studied this topic in my "Data Structures" class in university, but I wanted to understand in a deep way how the computer handles recursion and how it can be used instead loops.
In this post I will be explaining how I understood the role that the stack plays in recursion and how you can apply recursion in linear algebra to obtain the determinant of a matrix.
So... what is the Stack?
The stack is a LIFO data structure (Last in, first out). It is literally what you imagine: when you "push" data, this data will go into the stack and it will sit above the last pushed element converting itself in the new "top" element. This process will repeat each time you "push" data to the stack.
When you want to read an element from the stack, you "pop" the top element of the stack. That is why the last element pushed into the stack is the first element to get out of the stack.
Relation between the Stack and Recursion
Well, when you are running a code (Let's say, a C++ code) the runtime of the language manages the "program-stack" this stack will be the structure used the store the variables used in your code and the calls to the different functions that you invoke in your code.
(Dynamic memory spaces created with the 'new' operator are stored in the "Heap", which is a different structure than the Program-Stack)
Keeping track of the function calls with the Stack makes possible things like passing parameters, returning values and of course: Recursion.
Every time you call a function in your code, the runtime will push a "stack frame" into the stack. The stack frame is a block of information that contains the information of the subrutine. This information includes: the parameters, a return address and the local variables of the function.
When you use recursion, you are pushing a stack frame each time your function calls itself. The Call Stack (or "Program-Stack" as mentioned before) has a finite size per program run (The size is calculated before the program execution), this can lead to a dangerous situation because you can actually surpass the quantity of information that the stack can hold: this is the famous "Stack-Overflow".
But don't hold yourself, the runtime will not be pushing stack frames to the stack all the time. When you use the "Return" argument or when the function terminates its execution, the program will return to the "Return Address" and the stack frame will be pop out of the stack.
This is why is so important to have a base case for the recursion. The base case is the case in which the function will not call itself but return an specific value to the previous call.
The base case assure us that at some point of the recursion the functions will start to "roll back" and this will start to pop the stack frames of the recursion out of the Stack.
If you don't have a base case, you will definitely cause a Stack Overflow.
Using recursion to get the determinant of a Matrix.
So, in the next part I will try to demonstrate how to use recursion to get the determinant of a matrix. (This code is a first draft, so there is room for improvement, feel free to give suggestions about the code).
This code will be using the "Standard Method" for solving determinants, if you don't know how it works, you can find how to do it here.
Now lets see the code:
//We will pass the matrix and the size n of the matrix(nxn) int determinant(int** matrix, int size){ if(size == 2){ /*This one is the base case! This case will return the result for the smallest sub-matrix (2x2)*/ return (matrix[0][0] * matrix[1][1]) - (matrix[0][1] * matrix[1][0]); } else { //This bool is used for the additions of determinants of the sub-matrices. //Remember that we will add in a pattern of (+ - + - ...) bool isNegativeAddition = false; int determinantResult = 0; for(int x = 0; x < size; x++){ int i = 0; //This is a temporary place to store the values for the new matrix int *numbersForNewRecursiveCall = new int[(size - 1) * (size - 1)]; //We fill the temp with the values of the new sub-matrix for(int sy = 0; sy < size; sy++){ for(int sx = 0; sx < size; sx++){ if(sy != 0 && sx != x){ numbersForNewRecursiveCall[i] = matrix[sy][sx]; i++; } } } //Then we fill an Array[][] i = 0; int **subMatrix = declareArray(size-1); for(int sy = 0; sy < size-1; sy++){ for(int sx = 0; sx < size-1; sx++){ subMatrix[sy][sx] = numbersForNewRecursiveCall[i]; i++; } } /*This is the important part: The if is to determine when we will add or when we will rest the determinants of the sub-matrix We will add the value of the previous determinant results with the multiplication of the value that determines the sub-matrix and the determinant of that particular sub-matrix*/ if(isNegativeAddition == true){ determinantResult = determinantResult + (matrix[0][x] * determinant(subMatrix, size-1) * -1); isNegativeAddition = false; } else { determinantResult = determinantResult + (matrix[0][x] * determinant(subMatrix, size-1)); isNegativeAddition = true; } } return determinantResult; } }
As we can see here, the recursion helps us to go to the smallest matrix possible in the method, get its determinant (base case) and then go up all the way to the addition of the different determinants of the n sub-matrices of the biggest (nxn) matrix.
This is literally my second post trying to give back some knowledge to the community so I will appreciate all the feedback that you guys can give to me. Also, I want to mention that the goal of this was not to code the best method for determinant solving, the goal of this was to explain the Stack and how Recursion works.
Thank you very much for reading this post ~
Health issues you face being a Developer 🏥
You may be a JavaScript Developer or a Ruby Developer, but there is one thing ...
| https://practicaldev-herokuapp-com.global.ssl.fastly.net/jcharliegarciam/understanding-program-stack--recursion-2ii | CC-MAIN-2019-43 | refinedweb | 1,063 | 50.7 |
Java Program for Printing Mirrored Rhombus Star Pattern
Printing Mirrored Rhombus Star Pattern
In this problem we’re going to code a Java Program for printing mirrored rhombus star pattern.
For doing so we’ll take a number input from user and store it in variable rows and then run the for loop start from i=0 to ii which print the spaces and then take a another loop to print star start from j=0 to j
Algorithm:
- Take the number of rows as input from the user (length of side of rhombus) and store it in any variable.(‘row‘ in this case).
- Run a loop ‘row’ number of times to iterate through each of the rows. From i=0 to i<row. The loop should be structured as for(int i=0;i<rows;i++)
- Run a nested loop inside the main loop to print the spaces before the rhombus. From j=row to j>i. The loop should be structured as for(int j=rows;j>i;j–)
- Run another nested loop inside the main loop after the previous loop to print the stars in each column of a row. From j=0 to j<row. The loop should be structured as for(int j=0;j<rows;j++) inside this loop print System.out.println(“*”);
- Move to the next line by printing a new line System.out.println();
Code in Java:
import java.util.Scanner; public class Pattern1 { public static void main(String[] args) { Scanner sc = new Scanner(System.in); System.out.println("Enter No"); int rows = sc.nextInt(); for(int i=0;i<rows;i++) //loop controlling number of rows { for(int j=rows;j>i;j--) //inner loop for spaces System.out.print(" "); //printing spaces for(int j=0;j<rows;j++) //inner loop for printing the stars in each column of a row System.out.print("*"); //printing stars System.out.println(); // printing a new line after each row } } } This code is contributed by Shubham Nigam (Prepinsta Placement Cell Student)
Login/Signup to comment
One comment on “Java Program for Printing Mirrored Rhombus Star Pattern”
Thank you prepinsta for publishing my code… | https://prepinsta.com/java-program/mirrored-rhombus-star-pattern/ | CC-MAIN-2022-21 | refinedweb | 358 | 71.65 |
This codelab will give you a quick tour of a few machine learning APIs. You'll use:
You'll construct a pipeline that compares an audio recording with an image and determines their relevance to each other. Here is a sneak peek of how you'll accomplish.
You can click on this link to enable all the necessary APIs. After you do so, feel free to ignore instructions for setting up authentication; we'll do that in a moment. Alternatively, you can enable each API individually. To do so,:
Repeat the same process to enable the Cloud Speech, Cloud Translation and Cloud Natural Language APIs. machine learning APIs.
To get started with Cloud Shell, click on the "Activate Google Cloud Shell"
icon in the top right hand corner of the header bar
A Cloud Shell session opens inside a new frame at the bottom of the console and displays a command-line prompt. Wait until the user@project:~$ prompt appears.
Depending with your comfort with the command line, you may want to click on "Launch Code Editor"
icon in the top right hand corner of the Cloud Shell bar
You will need a service account to authenticate. To make one, replace [NAME] with desired name of service account and run the following command in Cloud Shell:
gcloud iam service-accounts create [NAME]
Now you'll need to generate a key to use that service account. Replace [FILE_NAME] with desired name of key, [NAME] with the service account name from above and [PROJECT_ID] with the ID of your project. The following command will create and download the key as [FILE_NAME].json:
gcloud iam service-accounts keys create [FILE_NAME].json --iam-account [NAME]@[PROJECT_ID].iam.gserviceaccount.com
To use the service account, you'll have to set the variable GOOGLE_APPLICATION_CREDENTIALS to the path of the key. To do this, run the following command after replacing [PATH_TO_FILE] and [FILE_NAME]:
export GOOGLE_APPLICATION_CREDENTIALS=[PATH_TO_FILE]/[FILE_NAME].json
You'll need the Python client for Cloud Vision. To install, type the following into cloud shell:
pip install --upgrade google-cloud-vision --user
Let's take a look at the code samples for the Cloud Vision API. We're interested in finding out what's in a specified image.
detect.py seems to be useful for this so let's grab that. One way is to copy the contents of detect.py, create a new file in Cloud Shell called
vision.py and paste all the code into
vision.py. You can do this manually in Cloud Shell code editor or you can run this curl command in Cloud Shell:
curl -o vision.py
After you've done that, use the API by running the following in Cloud Shell:
python vision.py labels-uri gs://cloud-samples-data/ml-api-codelab/birds.jpg
You should see an output about birds and ostriches as this was the image analysed:
You passed 2 arguments to
vision.py:
detect_labels_uri()function to run
detect_labels_uri()
Let's take a closer look at
detect_labels_uri(). Note the additional comments that have been inserted.
def detect_labels_uri(uri): """Detects labels in the file located in Google Cloud Storage or on the Web.""" # relevant import from above # from google.cloud import vision # create ImageAnnotatorClient object client = vision.ImageAnnotatorClient() # create Image object image = vision.types.Image() # specify location of image image.source.image_uri = uri # get label_detection response by passing image to client response = client.label_detection(image=image) # get label_annotations portion of response labels = response.label_annotations print('Labels:') for label in labels: # print the label descriptions print(label.description)
You'll need the Python client for Cloud Speech-to-Text. To install, type the following into cloud shell:
sudo pip install --upgrade google-cloud-speech
Let's head to the code samples for Cloud Speech-to-Text. We're interested in transcribing speech audio.
transcribe.py looks to be a good place to get started so let's use that. Copy the contents of transcribe.py, create a new file in Cloud Shell called
speech2text.py and paste all the code into
speech2text.py. You can do this manually in Cloud Shell code editor or you can run this curl command in Cloud Shell:
curl -o speech2text.py
After you've done that, use the API by running the following in Cloud Shell:
python speech2text.py gs://cloud-samples-data/ml-api-codelab/tr-ostrich.wav
There should be errors complaining about the wrong encoding and sample hertz rate. Don't worry, go into
transcribe_gcs()in the code and delete
encoding and
sampe_hertz_rate settings from
RecognitionConfig(). While you're at it, change the language code to 'tr-TR' as
tr-ostrich.wav is a speech recording in Turkish.
config = types.RecognitionConfig(language_code='tr-TR')
Now, run
speech2text.py again. The output should be some Turkish text as this was the audio analysed:
You passed gs://cloud-samples-data/ml-api-codelab/tr-ostrich.wav, the location of an audio file on Google Cloud Storage to
speech2text.py, which is then passed as gcs_uri into
transcribe_uri()
Let's take a closer look at our modified
transcribe_uri().
def transcribe_gcs(gcs_uri): """Transcribes the audio file specified by the gcs_uri.""" from google.cloud import speech # enums no longer used # from google.cloud.speech import enums from google.cloud.speech import types # create ImageAnnotatorClient object client = speech.SpeechClient() # specify location of speech audio = types.RecognitionAudio(uri=gcs_uri) # set language to Turkish # removed encoding and sample_rate_hertz config = types.RecognitionConfig(language_code='tr-TR') # get response by passing config and audio settings to client response = client.recognize(config, audio) # Each result is for a consecutive portion of the audio. Iterate through # them to get the transcripts for the entire audio file. for result in response.results: # The first alternative is the most likely one for this portion. # get the transcript of the first alternative print(u'Transcript: {}'.format(result.alternatives[0].transcript))
You'll need the Python client for Cloud Translation. To install, type the following into Cloud Shell:
sudo pip install --upgrade google-cloud-translate
Now let's check out the code samples for Cloud Translation. For the purpose of this codelab, we want to translate text to English.
snippets.py looks like what we want. Copy the contents of snippets.py, create a new file in Cloud Shell called
translate.py and paste all the code into
translate.py. You can do this manually in Cloud Shell code editor or you can run this curl command in Cloud Shell:
curl -o translate.py
After you've done that, use the API by running the following in Cloud Shell:
python translate.py translate-text en '你有沒有帶外套'
The translation should be "Do you have a jacket?".
You passed 3 arguments to
translate.py:
translate_text()function to run
translate_text()and serves to specify the language to be translated to
translate_text()
Let's take a closer look at
translate_text(). Note the comments that have been added.
def translate_text(target, text): """Translates text into the target language. Target must be an ISO 639-1 language code. See """ # relevant imports from above # from google.cloud import translate # import six # create Client object translate_client = translate.Client() # decode text if it's a binary type # six is a python 2 and 3 compatibility library if isinstance(text, six.binary_type): text = text.decode('utf-8') # get translation result by passing text and target language to client # Text can also be a sequence of strings, in which case this method # will return a sequence of results for each text. result = translate_client.translate(text, target_language=target) # print original text, translated text and detected original language print(u'Text: {}'.format(result['input'])) print(u'Translation: {}'.format(result['translatedText'])) print(u'Detected source language: {}'.format( result['detectedSourceLanguage']))
You'll need the Python client for Cloud Natural Language. To install, type the following into cloud shell:
sudo pip install --upgrade google-cloud-language
Finally, let's look at the code samples for the Cloud Natural Language API. We want to detect entities in the text.
snippets.py seems to contain code that does that. Copy the contents of snippets.py, create a new file in Cloud Shell called
natural_language.py and paste all the code into
natural_language.py. You can do this manually in Cloud Shell code editor or you can run this curl command in Cloud Shell:
curl -o natural_language.py
After you've done that, use the API by running the following in Cloud Shell:
python natural_language.py entities-text 'where did you leave my bike'
The API should identify "bike" as an entity. Entities are can be proper nouns (public figures, landmarks, etc.) or common nouns (restaurant, stadium, etc.).
You passed 2 arguments to
natural_language.py:
entities_text()function to run
entities_text()
Let's take a closer look at
entities_text(). Note the new comments that have been inserted.
def entities_text(text): """Detects entities in the text.""" # relevant imports from above # from google.cloud import language # from google.cloud.language import enums # from google.cloud.language import types # import six # create LanguageServiceClient object client = language.LanguageServiceClient() # decode text if it's a binary type # six is a python 2 and 3 compatibility library if isinstance(text, six.binary_type): text = text.decode('utf-8') # Instantiates a plain text document. document = types.Document( content=text, type=enums.Document.Type.PLAIN_TEXT) # Detects entities in the document. You can also analyze HTML with: # document.type == enums.Document.Type.HTML entities = client.analyze_entities(document).entities # entity types from enums.Entity.Type entity_type = ('UNKNOWN', 'PERSON', 'LOCATION', 'ORGANIZATION', 'EVENT', 'WORK_OF_ART', 'CONSUMER_GOOD', 'OTHER') # print information for each entity found for entity in entities: print('=' * 20) print(u'{:<16}: {}'.format('name', entity.name)) print(u'{:<16}: {}'.format('type', entity_type[entity.type])) print(u'{:<16}: {}'.format('metadata', entity.metadata)) print(u'{:<16}: {}'.format('salience', entity.salience)) print(u'{:<16}: {}'.format('wikipedia_url', entity.metadata.get('wikipedia_url', '-')))
Let's remind ourselves what you're building.
Now let's put everything together. Create a
solution.py file; copy and paste
detect_labels_uri(),
transcribe_gcs(),
translate_text() and
entities_text() from previous steps into
solution.py.
Uncomment and move the import statements to the top. Note that both
speech.types and
language.types are being imported. This is going to cause conflict, so let's just remove them and change each individual occurrence of
types in
transcribe_gcs() and
entities_text() to
speech.types and
language.types respectively. You should be left with:
from google.cloud import vision from google.cloud import speech from google.cloud import translate from google.cloud import language from google.cloud.language import enums import six
Instead of printing, have the functions return the results. You should have something similar to:
# import statements def detect_labels_uri(uri): # code # we only need the label descriptions label_descriptions = [] for label in labels: label_descriptions.append(label.description) return label_descriptions def transcribe_gcs(gcs_uri): # code # naive assumption that audio file is short return response.results[0].alternatives[0].transcript def translate_text(target, text): # code # only interested in translated text return result['translatedText'] def entities_text(text): # code # we only need the entity names entity_names = [] for entity in entities: entity_names.append(entity.name) return entity_names
After all that hard work, you get to call those functions. Go ahead, do it! Here's an example:
def compare_audio_to_image(audio, image): """Checks whether a speech audio is relevant to an image.""" # speech audio -> text transcription = transcribe_gcs(audio) # text of any language -> english text translation = translate_text('en', transcription) # text -> entities entities = entities_text(translation) # image -> labels labels = detect_labels_uri(image) # naive check for whether entities intersect with labels has_match = False for entity in entities: if entity in labels: # print result for each match print('The audio and image both contain: {}'.format(entity)) has_match = True # print if there are no matches if not has_match: print('The audio and image do not appear to be related.')
We previously hardcoded Turkish into
transcribe_gcs(). Let's change that so the language is specifiable from
compare_audio_to_image(). Here are the changes required:
def transcribe_gcs(language, gcs_uri): ... config = speech.types.RecognitionConfig(language_code=language)
def compare_audio_to_image(language, audio, image): transcription = transcribe_gcs(language, audio)
The final code can be found in solution.py of this GitHub repository. Here is a curl command to grab that:
curl -O
The version on GitHub contains argparse, which allows the following from command line:
python solution.py tr-TR gs://cloud-samples-data/ml-api-codelab/tr-ball.wav gs://cloud-samples-data/ml-api-codelab/football.jpg
For each item found, the code should output "The audio and image both contain: ". In the example above, it would be "The audio and image both contain: ball".
Here are more audio and image file locations to try.
You've explored and integrated four machine learning APIs to determine whether a speech sample is talking about the provided image. This is just the beginning as there are many more ways for this pipeline to improve! | https://codelabs.developers.google.com/codelabs/cloud-ml-apis/index.html?index=..%2F..supercomputing | CC-MAIN-2020-10 | refinedweb | 2,121 | 51.14 |
This library integrates the low-level driver and the BSEC software library which provides high-level sensor control to obtain relaible air quality sensor data.
This library provides the necessary glue code for both to work under Mongoose OS and a number of helper functions.
When sensor output is ready,i an
MGOS_EV_BME680_BSEC_OUTPUT event is triggered which receives a structure containing sensor outputs (see
struct mgos_bsec_output definition in [mgos_bme680.h](include/struct mgos_bsec_output)).
Currently only supported on ESP8266 and ESP32 platforms, ARM support is a
TODO.
The library is configured through the
bme680 configuration section, defined here.
Several things need to be done to obtain readings from the sensor:
i2c.enable=true(currently only I2C interface is supported).
bme680.i2c_addrmust be set to the correct address. It's either 0x76 or 0x77 depending on the state of the address selection pin.
bme680.enableneeds to be set ot
truefor library to be initalized at all.
With these and the rest of the settings left in their default state, you should get readings from all the sensors at 3 second interval.
IAQ sensor requires calibration before producting accurate values. Values with accuracy value less than 3 are unreliable.
By default the library will perform calibration automatically (still may take up to 30 minutes to complete).
A number of options are provided for more advanced control of the sensor behavior.
bme680.bsec.enable: normally it is advisable to use the BSEC library to process raw values returned by the sensor. Turning this off will enable you to either use the sensor directly (reference to the dev ice can be obtained via
mgos_bme680_get_global()) or initialize drive the BSEC library yourself (e.g. for managing multiple sensors).
bme680.bsec.config_file: BSEC library comes with a number of pre-generated configuration profiles that can be loaded to improve accurcy of the measurements. These are contained in the config subdirectory and come as binary blobs, CSV files or C source code. Take the
bsec_iaq.configfile from the appropriate subdirectory and copy it to the device filesystem (or include in your firmware's initial filesystem image). You can also include several and switch between them by adjusting the value of this setting.
bme680.bsec.state_file,
bme680.bsec.state_save_interval: BSEC library performs estimations over long periods of time and the accuracy of its output relies on long-term state that it keeps. It is therefore necessary to make sure it is persisted across device restarts. Mos integration code will load BSEC state from the
state_fileon initialization and save it every
state_save_intervalseconds. Set
state_fileto empty to disable loading of state, set interval to a negative value to disable automatically saving it. You can still use
mgos_bsec_set_state_from_file()and
mgos_bsec_save_state_to_file()to load and save the state to a file manually.
bme680.bsec.{iaq,temp,rh,ps}_sample_rate: Set sampling rates for different parts of the BME680 multi-sensor. Each can be individually disabled (empty string), sampled at 3s interval (
LP) or every 300s (
ULP). In particular, since gas sensor uses heater extensively, setting it to
ULPwill save considerable amount of power.
bme680.bsec.iaq_auto_cal: if IAQ sensor is enabled (
bme680.bsec.iaq_sample_rateis not empty) and this option is enabled, mos will automatically raise sampling rate of the IAQ sensor to 3s until accuracy reaches 3 (and stays there for a while). It will then return the sampling rate to whatever it was set to previously. So in practice this only matters if IAQ sensor is confiugred for ULP rate.
With mOS library providing the integration, getting samples from the sensor is very simple - all you need to do is subscribe to the event:
#include "mgos.h" #include "mgos_bme680.h" static void bme680_output_cb(int ev, void *ev_data, void *arg) { const struct mgos_bsec_output *out = (struct mgos_bsec_output *) ev_data; double ts = out->temp.time_stamp / 1000000000.0; float ps_kpa = out->ps.signal / 1000.0f; float ps_mmhg = out->ps.signal / 133.322f; if (out->iaq.time_stamp > 0) { LOG(LL_INFO, ("%.2f IAQ %.2f (acc %d) T %.2f RH %.2f P %.2f kPa (%.2f mmHg)", ts, out->iaq.signal, out->iaq.accuracy, out->temp.signal, out->rh.signal, ps_kpa, ps_mmhg)); } else { LOG(LL_INFO, ("%.2f T %.2f RH %.2f P %.2f kPa (%.2f mmHg)", ts, out->temp.signal, out->rh.signal, ps_kpa, ps_mmhg)); } (void) ev; (void) arg; } enum mgos_app_init_result mgos_app_init(void) { mgos_event_add_handler(MGOS_EV_BME680_BSEC_OUTPUT, bme680_output_cb, NULL); return MGOS_APP_INIT_SUCCESS; }
Output:
[Aug 26 23:00:59.324] mgos_i2c_gpio_maste:250 I2C GPIO init ok (SDA: 4, SCL: 5, freq: 100000) [Aug 26 23:00:59.348] mgos_bme680.c:466 BME680 @ 0/0x77 init ok [Aug 26 23:00:59.353] mgos_bme680.c:396 BSEC 1.4.7.4 initialized [Aug 26 23:00:59.364] mgos_bme680.c:404 Failed to load BSEC config from bsec_iaq.config: -33, will use defaults [Aug 26 23:00:59.377] mgos_bme680.c:414 Failed to load BSEC state from bsec.state: -33, will use defaults ... [Aug 26 23:01:00.337] mgos_init.c:36 Init done, RAM: 51152 total, 41996 free, 42000 min free [Aug 26 23:01:00.352] mgos_bme680.c:281 IAQ sensor requires calibration [Aug 26 23:01:00.356] main.c:13 0.68 IAQ 25.00 (acc 0) T 27.07 RH 57.51 P 101.76 kPa (763.28 mmHg) [Aug 26 23:01:02.641] main.c:13 3.68 IAQ 25.00 (acc 0) T 26.98 RH 57.87 P 101.76 kPa (763.29 mmHg) [Aug 26 23:01:05.645] main.c:13 6.69 IAQ 25.00 (acc 0) T 26.99 RH 57.88 P 101.76 kPa (763.28 mmHg) [Aug 26 23:01:08.650] main.c:13 9.69 IAQ 25.00 (acc 0) T 27.01 RH 57.84 P 101.76 kPa (763.28 mmHg) ... [Aug 26 23:05:54.095] main.c:13 295.11 IAQ 25.00 (acc 0) T 26.72 RH 58.46 P 101.76 kPa (763.27 mmHg) [Aug 26 23:05:56.868] mgos_bme680.c:368 BSEC state saved (bsec.state) [Aug 26 23:05:57.100] main.c:13 298.11 IAQ 25.00 (acc 0) T 26.72 RH 58.46 P 101.76 kPa (763.26 mmHg) ... [Aug 26 23:08:12.326] main.c:13 433.32 IAQ 51.91 (acc 1) T 26.70 RH 58.46 P 101.76 kPa (763.25 mmHg) [Aug 26 23:08:15.331] main.c:13 436.33 IAQ 250.00 (acc 2) T 26.71 RH 58.41 P 101.76 kPa (763.27 mmHg) ... [Aug 26 23:16:46.136] main.c:13 947.09 IAQ 49.84 (acc 3) T 26.72 RH 58.90 P 101.76 kPa (763.23 mmHg) [Aug 26 23:16:49.140] mgos_bme680.c:289 IAQ sensor calibration complete [Aug 26 23:16:49.144] main.c:13 950.10 IAQ 50.42 (acc 3) T 26.72 RH 58.92 P 101.76 kPa (763.23 mmHg) [Aug 26 23:16:52.145] main.c:13 953.10 IAQ 50.01 (acc 3) T 26.73 RH 58.88 P 101.76 kPa (763.23 mmHg) [Aug 26 23:16:55.150] main.c:13 956.10 IAQ 50.95 (acc 3) T 26.73 RH 58.87 P 101.76 kPa (763.23 mmHg) ... | https://mongoose-os.com/docs/mongoose-os/api/misc/bme680.md | CC-MAIN-2022-05 | refinedweb | 1,213 | 72.53 |
15 April 2011 10:18 [Source: ICIS news]
SHANGHAI (ICIS)--Taiwan’s Kuokuang Petrochemical Technology Co (KPTC) must await the results of an environmental impact assessment (EIA) before it can make any further decisions on the status of its proposed mega petrochemical complex, an official from Taiwan's Environmental Protection Administration (EPA) said on Friday.
“The EIA team will hold a meeting on 21 April. The opinions achieved in this meeting will be submitted to EPA. Then EPA will open a final conference to ask all the members to vote. This conference will lay out the final decision for this project,” the official told ICIS.
However, the official declined to disclose when this final meeting will be held.
Farmers, residents and environmentalists have opposed the mega complex, which KPTC intends to build on reclaimed land near the wetlands of Changhua county. They say the project will harm the ecosystem and pollute the surrounding air and water, according to media reports. Many of the country’s oyster and eel farms as well as a habitat of humpback dolphins are situated along the coast.
Following the raising of these concerns, KPTC has pledged to downsize its planned facility by reducing the proposed number of plants at the complex from 41 to 25. This will include a 300,000 bbl/day refinery, a 1.2m tonne/year cracker and other downstream plants such as polyethylene (PE) and polypropylene (PP), a company spokeswoman said on 7 April.
The results of the EIA will result in three options for KPTC, ?xml:namespace>
"If the project passes the EIA, we'll do it. If it fails, we will not do it. If it passes conditionally, then it would be up to the investors of the Kuokuang project to decide," Wu said.
Asked about the possibility of moving the project overseas, Wu said the Middle East is too far, but he noted that
Additional reporting by Pearl Bant | http://www.icis.com/Articles/2011/04/15/9452800/taiwans-kptc-awaits-eia-results-for-mega-petrochemical-project.html | CC-MAIN-2014-41 | refinedweb | 321 | 61.46 |
Ajax
00_0132272679_FM.qxd 7/17/06 8:57 AM Page i
00_0132272679_FM.qxd 7/17/06 8:57 AM Page ii
Ajax
Creating Web Pages with Asynchronous
JavaScript and XML
Edmond Woychowsky
00_0132272679_FM.qxd 7/17/06 8:57 AM Page iii inci-
dental
This Book Is Safari Enabled
The Safari‚ Enabled icon on the cover of your favorite technology book means the book is avail-
able through Safari Bookshelf.When you buy this book,you get free access to the online edi-
tion for 45 days.Safari Bookshelf is an electronic reference library that lets you easily search
thousands of technical books,find code samples,download chapters,and access technical information when-
ever and wherever you need it.
• To gain 45-day Safari Enabled access to this book:
• Go to
• Complete the brief registration form
• Enter the coupon code WZM8-GZEL-ZTEE-4IL7-W2R5
If you have difficulty registering on Safari Bookshelf or accessing the online edition,please e-mail customer-
service@safaribooksonline.com.
Visit us on the Web:
Library of Congress Cataloging-in-Publication Data:
Woychowsky,Edmond.
Ajax :creating Web pages with asynchronous JavaScript and XML / Edmond Woychowsky.
p.cm.
ISBN 0-13-227267-9 (pbk.:alk.paper) 1.Web sites—Design—Computer programs.2.Ajax (Web site
development technology) 3.JavaScript (Computer program language) 4.XML (Document markup lan-
guage) I.Title.
TK5105.8885.A52W69 2006
006.7’86—dc22
2006017743
This material may be distributed only subject to the terms and conditions set forth in the Open Publication
License,v1.0 or later (the latest version is presently available at).
ISBN 0-13-227267-9
Text printed in the United States on recycled paper at R.R.Donnelley in Crawfordsville,Indiana.
First printing,August 2006
00_0132272679_FM.qxd 7/17/06 8:57 AM Page iv
This book is dedicated to my wife,Mary Ann,and my children,
Benjamin and Crista.Without their constant support,the book
that you hold in your hands would definitely not exist.
00_0132272679_FM.qxd 7/17/06 8:57 AM Page v
00_0132272679_FM.qxd 7/17/06 8:57 AM Page vi
Contents
About the Author xiii
Preface xv
Acknowledgments xxi
1 Types of Web Pages 1
1.1 Static Web Pages 2
1.2 Dynamic Web Pages 3
1.2.1 HTML 4
1.2.2 CSS 5
1.2.3 JavaScript 6
1.3 Web Browsers 7
1.3.1 Microsoft Internet Explorer 8
1.3.2 Mozilla-Based Browsers (Netscape,Mozilla,and Firefox) 9
1.3.3 Linux Browsers (Konqueror,Ephiphany,Galeon,Opera,
and Firefox) 10
1.3.4 The Others (Opera,Safari) 10
1.4 A Brief Introduction to Cross-Browser Development 11
1.4.1 Casualties of the Browser Wars 12
1.4.2 Market Share Does Not Equal Right 12
1.4.3 The World Wide Web Consortium,Peacekeepers 13
1.5 The Server Side of Things 13
1.5.1 Apache 14
1.5.2 Internet Information Server 14
1.5.3 The Remaining Players 14
1.6 We Learn by Doing 15
1.6.1 Coding by Hand 15
1.6.2 Tools to Make Tools 16
1.7 Summary 17
vii
00_0132272679_FM.qxd 7/17/06 8:57 AM Page vii
2 Introducing Ajax 19
2.1 Not a Mockup 20
2.2 A Technique Without a Name 20
2.2.1 Names 20
2.3 What Is Ajax?21
2.3.1 The Ajax Philosophy 21
2.3.2 Meddling with Unnatural Forces 22
2.4 An Ajax Encounter of the First Kind 23
2.4.1 A World Unseen 27
2.4.2 Enter JavaScript 27
2.5 An Ajax Encounter of the Second Kind 28
2.5.1 XML 28
2.5.2 The XMLHttpRequest Object 31
2.6 An Ajax Encounter of the Third Kind 33
2.6.1 XSLT 33
2.6.2 Variations on a Theme 36
2.7 The Shape of Things to Come 38
2.8 Summary 38
3 HTML/XHTML 41
3.1 The Difference Between HTML and XHTML 42
3.1.1 Not Well Formed 42
3.1.2 Well Formed 43
3.1.3 A Well-Formed Example 43
3.2 Elements and Attributes 44
3.2.1 A Very Brief Overview of XHTML Elements and Their
Attributes 44
3.2.2 Frames Both Hidden and Visible 57
3.2.3 Roll Your Own Elements and Attributes 58
3.2.4 A Little CSS 59
3.3 Summary 62
4 JavaScript 63
4.1 Data Types 63
4.1.1 Numeric 64
4.1.2 String 64
4.1.3 Boolean 68
4.1.4 Miscellaneous 69
4.1.5 Arrays 69
4.1.6 Object 70
4.2 Variables 70
4.3 Operators 71
4.4 Flow-Control Statements 72
4.4.1 Conditionals 73
4.4.2 Looping 75
4.5 Functions 77
viii Contents
00_0132272679_FM.qxd 7/17/06 8:57 AM Page viii
4.6 Recursion 78
4.7 Constructors 80
4.8 Event Handling 84
4.9 Summary 86
5 Ajax Using HTML and JavaScript 89
5.1 Hidden Frames and iframes 90
5.2 Cross-Browser DOM 91
5.2.1 JavaScript,ECMAScript,and JScript 96
5.2.2 A Problem to Be Solved 102
5.3 Tabular Information 105
5.3.1 Read Only 109
5.3.2 Updateable 117
5.4 Forms 122
5.4.1 Read Only 122
5.4.2 Updateable 127
5.5 Advantages and Disadvantages 134
5.6 Summary 134
6 XML 135
6.1 Elements 136
6.2 Attributes 138
6.3 Handling Verboten Characters 139
6.3.1 Entities 139
6.3.2 CDATA Sections 140
6.4 Comments 140
6.5 Expectations 141
6.5.1 Namespaces 141
6.5.2 DTD 142
6.5.3 Schema 142
6.6 XML Declaration 144
6.7 Processing Instructions 144
6.8 XML Data Islands 144
6.8.1 Internet Explorer 145
6.8.2 Firefox 145
6.9 Summary 149
7 XMLHttpRequest 151
7.1 Synchronous 152
7.2 Asynchronous 153
7.3 Microsoft Internet Explorer 155
7.4 XML Document Object Model 156
7.5 RSS 166
7.6 Web Services 168
7.6.1 What Is a Web Service?168
7.6.2 SOAP 170
7.7 Summary 173
Contents ix
00_0132272679_FM.qxd 7/17/06 8:57 AM Page ix
8 Ajax Using XML and XMLHttpRequest 175
8.1 Traditional Versus Ajax Websites 176
8.2 XML 178
8.2.1 Well Formed 179
8.2.2 Data Islands for Internet Explorer 182
8.2.3 Data Islands for All!184
8.2.4 Binding 187
8.3 The XMLHttpRequest Object 192
8.3.1 Avoiding the Unload/Reload Cycle 192
8.3.2 Browser Differences 193
8.3.3 Cleaning Up with SOAP 202
8.4 A Problem Revisited 203
8.5 Tabular Information and Forms 207
8.5.1 Read Only 216
8.5.2 Updateable 219
8.6 Advantages and Disadvantages 221
8.7 Summary 221
9 XPath 225
9.1 Location Paths 227
9.2 Context Node 228
9.3 Parent Nodes 228
9.4 Attribute Nodes 228
9.5 Predicates 228
9.6 XPath Functions 230
9.6.1 Boolean Functions 230
9.6.2 Numeric Functions 230
9.6.3 Node Set Functions 231
9.6.4 String Functions 231
9.7 XPath Expressions 233
9.8 XPath Unions 234
9.9 Axis 234
9.9.1 Ancestor Axis Example 236
9.9.2 ancestor-or-self Axis Example 236
9.9.3 attribute Axis Example 236
9.9.4 child Axis Example 237
9.9.5 descendant Axis Example 237
9.9.6 descendant-or-self Axis Example 238
9.9.7 following Axis Example 238
9.9.8 following-sibling Axis Example 239
9.9.9 namespace Axis Example 239
9.9.10 parent Axis Example 240
9.9.11 preceding Axis Example 240
9.9.12 preceding-sibling Axis Example 241
9.9.13 self Axis Example 241
9.10 Summary 242
x Contents
00_0132272679_FM.qxd 7/17/06 8:57 AM Page x
10 XSLT 243
10.1 Recursive Versus Iterative Style Sheets 244
10.1.1 Scope 248
10.1.2 Nonvariable Variables 248
10.2 XPath in the Style Sheet 249
10.3 Elements 250
10.3.1 In the Beginning 253
10.3.2 Templates and How to Use Them 255
10.3.3 Decisions,Decisions 260
10.3.4 Sorting Out Looping 260
10.4 XSLT Functions 262
10.5 XSLT Concepts 262
10.6 Client-Side Transformations 265
10.6.1 XSLT in Microsoft Internet Explorer 265
10.7 Summary 268
11 Ajax Using XSLT 269
11.1 XSLT 269
11.1.1 XML Magic 270
11.1.2 How Microsoft Shot Itself in the Foot 270
11.1.3 XPath,or I Left It Around Here Someplace 271
11.1.4 What I Learned from the Gecko 274
11.2 Tabular Information 277
11.2.1 Read Only 278
11.2.2 Updateable 281
11.3 Advantages and Disadvantages 282
11.4 Summary 283
12 Better Living Through Code Reuse 285
12.1 Reuse = Laziness 286
12.1.1 Paid by the Line 286
12.1.2 Paid by the Page 287
12.2 JavaScript Objects 287
12.2.1 Collections 289
12.2.2 XML 291
12.2.3 XSLT 303
12.2.4 Serialization Without Berries 307
12.3 Generic XSLT 307
12.3.1 Forms 308
12.3.2 Tabular 309
12.4 Summary 311
13 Traveling with Ruby on Rails 313
13.1 What Is Ruby on Rails?314
13.1.1 Ruby 314
13.1.2 Ruby on Rails 314
Contents xi
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xi
13.2 Installation 315
13.3 A Little Ruby on Rails Warm-Up 317
13.4 A Problem Revisited 320
13.5 Whither Ajax?324
13.6 Summary 326
14 Traveling Farther with Ruby 327
14.1 Data Types 328
14.1.1 Numeric 328
14.1.2 String 330
14.1.3 Boolean 330
14.1.4 Objects 330
14.2 Variables 331
14.3 Operators 332
14.4 Flow-Control Statements 333
14.4.1 Conditions 333
14.4.2 Looping 334
14.5 Threads 335
14.6 Ajax 336
14.7 Summary 340
15 The Essential Cross-Browser HTML DOM 341
15.1 Interfaces 342
15.1.1 Window 344
15.2 Document 344
15.3 Frames 349
15.4 Collections 349
15.5 Summary 350
16 Other Items of Interest 351
16.1 Sarissa 352
16.1.1 A Brief Overview of Sarissa 352
16.2 JSON and JSON-RPC 356
16.2.1 JavaScript Object Notation 356
16.3 ATLAS 357
16.3.1 A Picture of ATLAS 358
16.4 The World Wide Web Consortium 358
16.5 Web Browsers 358
16.6 Summary 359
xii Contents
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xii
About the Author
xiii
A graduate of Middlesex Country College and Penn State,Edmond
Woychowsky began his professional life at Bell Labs as a dinosaur writing
recursive assembly-language programs for use in their DOSS order entry sys-
tem.Throughout his career,Ed has worked in the banking,insurance,phar-
maceutical,and manufacturing industries,slowly sprouting feathers and
evolving into a web developer.He is best known for his often unique articles
on the TechRepublic website.
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xiii
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xiv
Preface
xv
The purpose of the book that you hold in your hands,Ajax:Creating Web Pages
with Asynchronous JavaScript and XML,is simply to show you the fundamen-
tals of developing Ajax applications.
W
HAT
T
HIS
B
OOK
I
S
A
BOUT
For the last several years,there has been a quiet revolution taking place in
web application development.In fact,it was so quiet that until February 2005,
this revolution didn’t have a name,even among the revolutionaries them-
selves.Actually,beyond the odd mention of phrases such as
XMLHttpRequest
object,XML,or SOAP,developers didn’t really talk about it much at all,prob-
ably out of some fear of being burned for meddling in unnatural forces.But
now that the cat is out of the bag,there is no reason not to show how Ajax
works.
Because I am a member of the “we learn by doing” cult (no Kool Aid
required),you’ll find more code examples than you can shake a stick at.So
this is the book for those people who enjoyed the labs more than the lectures.
If enjoyed is the wrong word,feel free to substitute the words “learned more
from.”
Until around 2005,the “we learn by doing” group of developers was
obscured by the belief that a piece of paper called a certification meant more
than hands-on knowledge.I suppose that,in a way,it did.Unfortunately,when
jobs became fewer and farther between,developers began to collect certifica-
tions the way that Imelda Marcos collected shoes.Encyclopedic knowledge
might have helped in getting interviews and subsequent jobs,but it really
didn’t help very much in keeping those jobs.However,now that the pendulum
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xv
has begun to swing in the other direction,it is starting to become more impor-
tant to actually know a subject than to be certified in it.This leads to the ques-
tion of “Why learn Ajax?”
The answer to that question can be either short and sweet or as rich and
varied as the concept of Ajax itself.Let’s start with the first answer because it
looks good on the resumé.We all know that when something looks good on the
resumé,it helps to keep us in the manner in which we have become accus-
tomed,living indoors and eating regularly.Couple this with the knowledge of
actually having hands-on knowledge,and the odds of keeping the job are
greatly increased.
The rich and varied answer is that,to parrot half of the people writing
about web development trends,Ajax is the wave of the future.Of course,this
leads to the statement,“I heard the same thing about DHTML,and nobody
has talked about that for five years.” Yes,some of the same things were said
about DHTML,but this time it is different.
The difference is that,this time,the technology has evolved naturally
instead of being sprung upon the world just so developers could play buzzword
bingo with their resumés.This time,there are actual working examples
beyond the pixie dust following our mouse pointers around.This time,the
companies using these techniques are real companies,with histories extend-
ing beyond last Thursday.This time,things are done with a reason beyond the
“it’s cool” factor.
W
HAT
Y
OU
N
EED TO
K
NOW
B
EFORE
R
EADING
T
HIS
B
OOK
This book assumes a basic understanding of web-development techniques
beyond the WYSIWYG drag and drop that is the current standard.It isn’t nec-
essary to have hand-coded HTML;it is only necessary to know that HTML
exists.This book will hopefully fill in the gaps so that the basics of what goes
where can be performed.
Beyond my disdain for the drag-and-drop method of web development,
there is a logical reason for the need to know something about HTML—
basically,we’re going to be modifying the HTML document after it is loaded in
the browser.Nothing really outrageous will be done to the document—merely
taking elements out,putting elements in,and modifying elements in place.
For those unfamiliar with JavaScript,it isn’t a problem;I’ve taken care
to explain it in some depth because there is nothing worse than needing a sec-
ond book to help understand the first book.Thinking about it now,of course,I
missed a wonderful opportunity to write a companion JavaScript volume.Doh!
If you’re unfamiliar with XML,don’t be put off by the fact that Ajax is
short hand Asynchronous JavaScript and XML because what you need to
xvi Preface
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xvi
know is in here,too.The same is also true of XSLT,which is a language used to
transform XML into other forms.Think of Hogwarts,and you get the concept.
In this book,the evolution (or,if you prefer,intelligent design) of Ajax is
described from the beginning of web development through the Dynamic
HTML,right up to Asynchronous JavaScript and XML.Because this book
describes a somewhat newer technique of web development,using a recent
vintage web browser such as Firefox or Flock is a good idea.You also need an
Internet connection.
H
OW
T
HIS
B
OOK
I
S
L
AID
O
UT
Here is a short summary of this book’s chapters:
+ Chapter 1,“Types of Web Pages,” provides a basic overview of the various
ways that web pages have been coded since the inception of the Web.The
history of web development is covered beginning with static web pages
through dynamic web pages.In addition,the various technologies used in
web development are discussed.The chapter closes with a discussion on
browsers and the browser war.
+ Chapter 2,“Introducing Ajax,” introduces Ajax with an account of what
happened when I demonstrated my first Ajax application.The concepts
behind Ajax are described and then are introduced in a step-by-step
manner,from the first primordial Ajax relatives to the current evolution.
+ Chapter 3,“HTML/XHTML,” describes some of the unmentioned basic
building blocks of Ajax,HTML/XHTML,and Cascading Style Sheets.
+ Chapter 4,“JavaScript,” serves as an overview of JavaScript,including
data types,variables,and operators.Also covered are flow-control state-
ments,recursive functions,constructors,and event handlers.
+ Chapter 5,“Ajax Using HTML and JavaScript,” describes one of the ear-
lier ancestors of Ajax.Essentially,this is how to fake it using stone
knives and bear skins.Although the technique described is somewhat
old-fashioned,it demonstrates,to a degree,how processing flows in an
Ajax application.In addition,the “dark art” of communicating informa-
tion between frames is covered.Additionally,in an effort to appease those
who believe that this is all old hat,the subject of stored procedures in
MySQL is covered.
+ Chapter 6,“XML,” covers XML,particularly the parts that come into play
when dealing with Ajax.Elements,attributes and entities,oh my;the
various means of describing content,Document Type Definitions,and
Schema are covered.Also included are cross-browser XML data islands.
Preface xvii
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xvii
+ Chapter 7,“XMLHttpRequest,” dissects the
XMLHttpRequest
object by
describing its various properties and methods.Interested in making it
synchronous instead of asynchronous? You’ll find the answer in this
chapter.In addition,both web services and SOAP are discussed in this
chapter.
+ Chapter 8,“Ajax Using XML and XMLHttpRequest,” covers what some
might consider pure Ajax,with special attention paid to the
XMLHttpRequest
object that makes the whole thing work.Additionally,var-
ious back ends are discussed,ranging from PHP to C#.Also covered are
two of the more popular communication protocols:RPC and SOAP.
+ Chapter 9,“XPath,” covers XPath in detail.Starting with the basics of
what is often considered XSLT’s flunky,this chapter describes just how to
locate information contained in an XML document.Included in this chap-
ter is a detailed description of XPath axis,which is at least worth a look.
+ Chapter 10,“XSLT,” goes into some detail about the scary subject of
XSLT and how it can be fit into a cross-browser Ajax application.
Starting with the basics and progressing to the more advanced possibili-
ties,an attempt is made to demystify XSLT.
+ Chapter 11,“Ajax Using XSLT,” takes the material covered in the first
four chapters the next logical step with the introduction of XSLT.Until
relatively recently,this was typically considered a bad idea.However,
with some care,this is no longer the case.XSLT is one of those tools that
can further enhance the site visitor’s experience.
+ Chapter 12,“Better Living Through Code Reuse,” introduces a home-
grown client-side JavaScript library that is used throughout the exam-
ples shown in this book.Although this library doesn’t necessarily have to
be used,the examples provide an annotated look at what goes on behind
the scenes with most of the Ajax libraries currently in existence.
+ Chapter 13,“Traveling with Ruby on Rails,” is a gentle introduction to
the open source Ruby on Rails framework.Beginning with where to
obtain the various components and their installation,the chapter shows
how to start the WEBrick web server.Following those examples,a simple
page that accesses a MySQL database is demonstrated.
+ Chapter 14,“Traveling Farther with Ruby,” looks a little deeper into
Ruby on Rails,with the introduction of a simple Ajax application that
uses the built-in Rails JavaScript library.
+ Chapter 15,“The Essential Cross-Browser HTML DOM,” describes the
dark and mysterious realm of the cross-browser HTML Document Object
Model.Another unmentioned part of Ajax,the HTML DOM is essentially
xviii Preface
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xviii
how the various parts of an HTML or XHTML document are accessed.
This is what makes the “only update part of a document” feature of Ajax
work.
+ Chapter 16,“Other Items of Interest,” describes some of the resources
available via the World Wide Web.These resources range from pre-
written Ajax-capable JavaScript libraries to some of the numerous
browsers available for your personal computer.
C
ONVENTIONS
U
SED IN
T
HIS
B
OOK
Listings,code snippets,and code in the text in this book are in
monospaced
font.
This means that the code could be typed in the manner shown using your edi-
tor of choice,and the result would appear as follows:
if(enemy = ‘troll’)
runaway();
Preface xix
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xix
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xx
Acknowledgments
xxi
Even though this book is essentially “my” book,it has been influenced in many
ways (all of them good) by multiple individuals.Because the roles that each of
these individuals played in the creative process were very significant,I would
like to take the time to thank as many of them as I can remember here.
Mary Ann Woychowsky,for understanding my “zoning out” when writing
and for asking,“I guess the book is finished,right?” after catching me playing
Morrowind when I should have been writing.Benjamin Woychowsky,for ask-
ing,“Shouldn’t you be writing?” whenever I played a computer game.Crista
Woychowsky,for disappearing with entire seasons of Star Gate SG-1,after
catching me watching them when I should have been writing.
My mother,Nan Gerling,for sharing her love of reading and keeping me
in reading materials.
Eric Garulay,of Prentice Hall,for marketing this book and putting me in
touch with Catherine Nolan.Catherine Nolan,of Prentice Hall,for believing
in this book and for her assistance in getting started with a book.Bruce
Perens,for his belief that because I use Firefox,I had not tread too far down
the path that leads to the dark side.Denise Mickelson,of Prentice Hall,for
making sure that I kept sending in chapters.Chris Zahn,of Prentice Hall,for
his editing,for answering my often bizarre questions,and for his knowledge of
things in general.Thanks to George Nedeff for managing the editorial and
production workflow and Heather Fox for keeping this project in the loop and
on track.Any errors remaining are solely my own.
I would like to thank the late Jack Chalker for his assistance with what
to look for in writing contracts and for essentially talking me through the
process using words that I could understand.Also for his writing a number of
science-fiction novels that have influenced the way that I look upon the world.
After all,in the end,everything is about how we look upon the world.
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xxi
Dossy Shiobara,for answering several bizarre questions concerning
MySQL.
Richard Behrens,for his assistance in formulating my thoughts.
Joan Susski,for making sure that I didn’t go totally off the deep end
when developing many of the techniques used in this book.
Premkumar Ekkaladevi,who was instrumental in deciding just how far
to push the technology.
Jon (Jack) Foreman,for explaining to me that I can’t know everything.
David Sarisohn,who years ago gave a very understandable reason for
why code shouldn’t be obscure.
Finally,to Francis Burke,Shirley Tainow,Thomas Dunn,Marion
Sackrowitz,Frances Mundock,Barbara Hershey,Beverly Simon,Paul Bhatia,
Joseph Muller,Rick Good,Jane Liefert,Joan Litt,Albert Nicolai,and Bill
Ricker for teaching me how to learn.
xxii Acknowledgments
00_0132272679_FM.qxd 7/17/06 8:57 AM Page xxii
C H A P T E R
1
Types of Web Pages
While late-
twentieth sum--
ting num-
bers,they are a mathematical representation of the increase in the numbers of
immortal bunnies in a garden with no predators.Assume an infinite supply of
carrots and,well,you get the idea—it was that kind of growth.Unfortunately,
1
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 1
growth at that rate cannot be maintained forever;eventually,that many bun-
n dis-
cussion a little beyond those simple ingredients,though,to consider the only
two additional factors that can affect the end result:the browser and the web
server.
1.1 S
TATIC
W
EB
P
AGES
Static web pages are the original type (and for what seemed like about 10 min-
utes the only type) of web pages.When dealing with the distribution of techni-
cal documents,there aren’t very many changes to the original document.What
you actually see more of is a couple of technical documents getting together,
settling down,and producing litter after litter of little technical documents.
However,the technical documents didn’t have this fertile landscape com-
pletely to themselves for very long.
If you’ve ever traveled anywhere in the United States by automobile,you
might be familiar with one of the staples of the driving vacation:the travel
brochure.Often describing places like Endless Caverns,Natural Bridge,
Mystic Aquarium,or Roadside America,they’re a staple of the American land-
scape.Designed to catch attention and draw the traveler in to spend some
cash,they’ve been around seemingly forever.
The web equivalent,sometimes referred to as brochure-ware,also is
designed to draw in the virtual traveler.This type of website is usually used to
inform the visitor about subjects as varied as places to visit,cooking,children,
or my nephew Nick and niece Ashley’s 2002 visit to Walt Disney World.This is
actually a great medium for information that is relatively unchanging.
Allow me to digress for a little computer history lesson.Back in the old
days when dinosaurs—eh,mainframes—ruled computing,there were pseudo-
conversational systems that faked some of the functionality seen in web appli-
cations.These applications essentially displayed a form on what was called a
dumb terminal.It was called a dumb terminal because it had no real process-
ing power of its own.The user then filled out the form and hit a program func-
tion key,which transferred the input data to the mainframe.The mainframe
2 Types of Web Pages Chapter 1
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 2
processed the data,based upon content and the specific program function key,
and the results,if any,were displayed on the user’s dumb terminal.End of his-
tory lesson.
Static web pages offer the same functionality as those monster com-
puters of old,in much the same way.The only real changes are form “buttons”
instead of program function keys,the presence of a mouse,and the price tags
for the equipment involved.Well,maybe that isn’t entirely true;a dumb termi-
nal will set you back about as much as one of today’s off-the-shelf computers.
The real difference lies in the price difference between a web server and a
mainframe:thousands of dollars vs.millions of dollars.Those dinosaurs didn’t
come cheap.
1.2 D
YNAMIC
W
EB
P
AGES
Static com-
puter per-
formed devel-
oper dif-
ferences is that things happened on dynamic web pages.
There were events.No,not events like the grand opening of the Wal-Mart
Super Center down the road—browser events.When the mouse pointer was
moved around the page,things happened,and not just the pointer changing
1.2 Dynamic Web Pages 3
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 3?
1.2.1 HTML dep-
recated features,however,were more than made up for by the addition of the
new features.
The big question is,who decides which features stay,which are depre-
cated,and which are added? The answer is that all of these decisions are made
4 Types of Web Pages Chapter 1
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 4
by the World Wide Web Consortium,which,in secret midnight meetings,
dances around a bonfire,drinks mead,and listens to Jethro Tull CDs.Alright,
the truth is that committees meet periodically in a conference room and dis-
cuss modifications to HTML.However,my explanation accounts for the exis-
tence of the marquee tag better than the official explanation.
The World Wide Web Consortium is the governing body that issues
“Recommendations” concerning the more technical aspects of the Web.Start-
ing.
1.2.2 CSS
The problem with HTML is that it was never intended to deal with anything
beyond the structure of a page.Unfortunately,early on,somebody new to
HTML asked the question,“Hey,how do I make text bold?” and the pure struc-
tural language called HTML was polluted by presentation.The end result of
this was documents with more HTML than text.Mostly consisting of
b some-
what like being a Roman emperor:“The text in the anchor tags amuses me—
make it bold and Tahoma!”
Cascading Style Sheets work by associating style rules to the elements of
an HTML document.These rules can be applied to single tags,tags of a spe-
cific
1.2 Dynamic Web Pages 5
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 5
defined.The problem,for me,at least,is remembering cascade sequence.One
method of keeping the cascade straight is equating it to something else,some-
thing a bit more familiar,as in the winning hands of poker.In poker,the win-
ning hands,from high to low,are:
1.Royal flush
2.Straight flush
3.Four of a kind
4.Full house
5.Flush
With Cascading Style Sheets,the “winning” hands are as follows:
1.Inline CSS defined in the element’s
style
attribute
2.Internal CSS defined using the
style
tag
3.External CSS defined using the
style
tag
4.External CSS defined using the
link
tag
5.The default built into the web browser
As with poker,when there is a winning hand,any other hands are all for
naught.
1.2.3 JavaScript
6 Types of Web Pages Chapter 1
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 6
browser currently available.This means that visitors to websites that use
JavaScript,as opposed to any of the alternatives,can jump right into shopping
or whatever without waiting for a download to complete.
1.3 W
EB
B
ROWSERS
Without a web browser,though,web pages are rather useless.The majority of
people wandering around the Internet wouldn’t fully appreciate them.Yes,
there is the indentation,but without a browser,there is no scripting or pic-
tures win-
dows? hangers-
on from earlier ages.Sometimes these holdovers exist in isolated communi-
ties,and sometimes they’re lone individuals living among us unnoticed.
However,unlike in the natural world,evolution in web browsers is
driven by an intelligence,or,at least,I’d like to think so.Behind every feature
there are individuals who decide what features to include and how to imple-
ment those features.Because of this,web browsers can be both very similar to
and very different from one another.Let’s now take the opportunity to explore
some of those similarities and differences.
1.3 Web Browsers 7
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 7
1.3.1 Microsoft Internet Explorer
Love it or hate it,there is no denying that Microsoft Internet Explorer is cur-
rently the most used web browser.In fact,according to one website that meas-
ures browser statistics,Internet Explorer comes in both first and third.Huh?
Sounds a little like the 1960s version of The Love Bug,doesn’t it? This incredi-
ble feat can be attributed to the estimated 5 percent of people who are still
running some incarnation of version 5,which can be versions 5.0,5.01,or
5.5—your Inter-
net Explorer version 5 is that the machine simply doesn’t have the resources
for version 6.I know that this can happen;I’ve seen it with my own eyes.In
fact,it was quite some time before Mary Ann,my wife,let me near her com-
puter connec-
tion.
8 Types of Web Pages Chapter 1
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 8.
1.3.2 Mozilla-Based Browsers (Netscape,Mozilla,and Firefox)zilla—eh,Mozilla—web enhanc-
ing—well,maybe just a little sleep.Which is prob-
ably how my twisted mind came up with a logical method of how they did it.
Because the majority of web browsers are produced by corporations,they
are limited in the number of potential developers to employees and consult-
ants of the corporation.Firefox,on the other hand,is open source.This means
1.3 Web Browsers 9
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 9
that although there is still a limited potential pool of developers,the pool is
much larger—say,about the population of the planet,minus two (Bill Gates
and Steve Baulmer).
This line of reasoning makes the most sense,far more than my other pos-
sible explanation.Open source has better-trained Bit-Gnomes,little people
that live in the computer and move the data around.But this theory really
makes sense only after the better part of a bottle of Scotch,so I’ll stop here.
1.3.3 Linux Browsers (Konqueror,Ephiphany,Galeon,Opera,and Firefox) prob-
ablyhead-
ing—I’ll wait.Alright,notice anything? Yeah,Firefox is listed there.Being
open source,Firefox really gets around,which is really comforting.It is a bit
like visiting a city far away,feeling lonely,and finding an old friend there.
1.3.4 The Others (Opera,Safari)
10 Types of Web Pages Chapter 1
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 10-
packed browser.Although Apple is currently only a minor player in the com-
puting.
1.4 A B
RIEF
I
NTRODUCTION TO
C
ROSS
-B
ROWSER
D
EVELOPMENT
Knowledge of different browsers,their capabilities,or merely their existence is
often an aid in a discipline called cross-browser development.Cross-browser
development can be one of the most exciting programming disciplines;unfor-
tunately,in programming,“exciting” isn’t usually a good thing.The problem is
that,in most instances,cross-browser development is essentially writing the
same routines two or more times,slightly different each time.Personally,I get
a feeling of satisfaction whenever I get a routine to work,but when coding a
cross-browser,getting it to work in one browser is only half the job.
The issue with cross-browser development is that some “features” that
are available on one browser either aren’t available on another or have
slightly different syntax.Imagine the feeling of satisfaction of solving a partic-
ularly thorny problem in Firefox only to have the same page crash and burn in
Internet Explorer.Take,for example,the serialization of XML in Firefox;it
works great,but try the same code in Internet Explorer,and here be monsters!
To avoid the monsters,it is necessary to understand where they usually
hang around waiting for the unsuspecting developer.But first let’s establish
where the monsters don’t reside;for example,the standard data types such as
Boolean,numeric,and string are pretty safe.The same can be said for the
statements,such as flow-control statements and assignment statements.
It is just too bad the same cannot be said for objects and event handlers.
At least for me,this is where most of the problems arise.Everything will be
going along fine,with the page working perfectly right up to point that either
there is a spectacular failure,or worse,the page just simply stops working.
Fortunately,with a little knowledge and a little planning,it is possible to
1.4 A Brief Introduction to Cross-Browser Development 11
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 11
avoid these web development monsters that live where the standards don’t
quite mesh with reality.
1.4.1 Casualties of the Browser Wars
Cross-browser compatibility was probably the first casualty of the Browser
Wars that began about 20 minutes after the second web browser was devel-
oped.In those days,browser developers had a tendency to play fast and loose
with things in an effort to pack features into their browser before the competi-
tion.In the rush to be the first with a new feature,or to play catch-up,no
thought was given to the web developers who would actually have to program
for these browsers.
Because of this,it wasn’t unusual to see two browsers with essentially
the same functionality,but having entirely different approaches.Look at how
the
XMLHttpRequest
object is implemented in Microsoft Internet Explorer and
in Gecko-based browsers such as Firefox.Internet Explorer,which was the
first to implement this object,made it part of ActiveX.This means that to cre-
ate an instance of this object in Internet Explorer,the following syntax is used:
var objXMLHTTP = new ActiveXObject(‘Microsoft.XMLHTTP’);
With Firefox and any other browser that implements the
XMLHttpRequest
object,the syntax is as follows:
var objXMLHTTP = new XMLHttpRequest();
The reason for this is that ActiveX is a Microsoft-only technology,which
means that short of trying to license it from Microsoft,which I can’t imagine
would come cheap,it was necessary to find another way.And,when found,this
other way became the standard for all non-Microsoft web browsers.
1.4.2 Market Share Does NotEqual Right
While I’m on the subject of proprietary technologies,I’d like to point out that
market share does not equate to being right.History is full of cases in which
the leader,the one with the largest market share,was blindsided by some-
thing that he or she didn’t realize was a threat until too late.Does anybody
remember Digital Research’s CP/M? If you haven’t,CP/M was the premier
operating systems in the days when 64K was considered a lot of memory.In a
fractured landscape of operating systems,it had more than half of the operat-
ing system market.
12 Types of Web Pages Chapter 1
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 12
Then there was the release of the IBM PC,which offered a choice of three
operating systems:CP/M-86,PC DOS,and UCSD D-PASCAL.At the time,
everybody thought that Digital Research had the new landscape of the Intel
8086 as theirs for the foreseeable future.Unfortunately,because Microsoft’s
DOS was $50 less,market share yielded to economic pressure.Microsoft went
on to become the leader in computer operating systems,while Digital
Research faded into history.
1.4.3 The World Wide Web Consortium,Peacekeepers
During the height of the Browser Wars,there was the definite feeling that web
browser technology was advancing at a breakneck pace,so much so that the
World Wide Web Consortium seemed to be playing catch-up.It was a case of
putting the cart before the horse,with the web browsers getting features and
then the recommendations being published,which explains the weirdness
with the
XMLHttpRequest
object.
Now the war is,if not over,at least at intermission,giving us time to get
some popcorn and a soda.In addition,whether by accident or by design,this
break has given the World Wide Web Consortium time to move once more
into the lead.Unfortunately,the damage is done and we’re all forced to code
around the little differences in the various browsers.
1.5 T
HE
S
ERVER
S
IDE OF
T
HINGS
The conver-
sions fea-
tures of each and every server available,it is also important to take into con-
sideration.
1.5 The Server Side of Things 13
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 13.
1.5.1 Apache
First and foremost,Apache is not a web server developed by Native Ameri-
cans.
1.5.2 Internet Information Server
IIS,as it is known to those of us who use it,is Microsoft’s answer to Apache.In
fact,most of the examples in this book use IIS on the server side.Don’t get
excited—it isn’t because it is better;it is only because it comes bundled with
Windows XP Pro.It comes down to the whole Internet Explorer thing;I’m lazy,
and I use it at my day job.
1.5.3 The Remaining Players
Yes,there are other web servers beyond the big two.For example,there is the
CERN Server,brought to you by the same people who created the World Wide
Web.Another choice is NCSA HTTPd,from the National Center for Super-
computing Applications at the University of Illinois in Urbana,Illinois.
Unfortunately it is no longer under development,which is too bad;I,for one,
would like a web server from HAL’s hometown.
I’d like to mention another “minor” server:WEBrick.Technically consid-
ered.
14 Types of Web Pages Chapter 1
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 14
1.6 W
E
L
EARN BY
D
OING
The problem with working in the computing field is that technology insists on
advancing.Learn something new today,and 2 years down the road,it is obso-
lete.Because of this,it’s necessary to continue learning the latest new technol-
ogy,which means lots of reading and lots of training.While at Bell Labs,I
formulated two rules of training that I’d like to share with you:
1.Training will be given far enough in advance of the project that there is
sufficient time to forget everything learned.
2.If sufficient time does not exist for the first rule,the training will take
place a minimum of 6 months after the project has been completed.
These rules have proved true every place that I have ever worked
throughout my career.Banks,insurance,manufacturing,whatever—it doesn’t
matter.These rules have always held true.
There is,however,a way to skirt these rules.Simply try the examples,
play with them,alter the code,make it better,break it,and fix it.There is no
substitute for immersing yourself in any subject to learn that subject.It might
be difficult at first,and sometimes it might even be painful,but the easiest
way to learn is by doing.
1.6.1 Coding by Hand
Currently,coding web applications by hand has fallen out of favor,and rightly
so,replaced by packaged components that can be dragged and dropped.Unfor-
tunately,although the practice of using components means that individual
pages are developed quicker,it also means that it isn’t always easy to deter-
mine what the components are actually doing behind the scenes.This is espe-
cially true when the underlying code isn’t fully understood because the
developers skipped ahead to the parts that will keep them employed.
However,when learning something new,or trying to explain it to some-
one else,I have a strong tendency to code an application by hand.In part,the
reason for this is that it gives me a better feel for the new subject.Of course,
the other part is that I coded classic ASP for quite some time and spend a
great deal of time writing client-side workarounds for managers who insisted
on the use of design-time controls.Although it improved developers’
JavaScript skills considerably,it had the same effect upon those developers
that mercury had upon hat makers in the nineteenth century.Don’t believe
me? Go ask Alice.
Seriously,though,the idea of coding at least the first couple of applica-
tions by hand is to attempt to get a feel for the technology.Feel free to ignore
1.6 We Learn by Doing 15
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 15
my advice on this subject.What does matter,however,is making it easier for
us in the end,which is why tools are important.
1.6.2 Tools to Make Tools
If the idea of coding by hand is repugnant to you,consider this:On some level,
somebody coded something by hand.It is a pretty sure bet that there are no
software tool trees,although I have used several that weren’t quite ripe yet.
Many developers have issues with the very concept of creating their own
common tools for web development.The first issue probably relates to the idea
of job security;after all,if a company has a “developer in a box,” why would it
pay for the real thing? The answer to this is relatively simple:What if they
want changes to what’s in the box? Let me put it another way:Have you ever
written some code and played the “I bet you can’t guess what this does” game?
I have,and not only is it good for feeding the old ego,but it is a blast,too! Of
course,there is the tendency to strut around like Foghorn Leghorn afterward,
but as long as you avoid the young chicken hawk developer and the old dog
developer,everything will be fine.Also remember that,by himself,the weasel
isn’t a real threat.
Another issue is the “I can tell you,but then I’ll have to kill you” mindset.
A while back,I had a manager with this mindset;she seemed to withhold
required information just for fun from every assignment.For example,she
once gave me the assignment to produce a report from a payroll file and then
told me that I didn’t have high enough security to see either the file or the file
layout.Somebody once said that information is power,and some people take it
to heart.The danger with this philosophy is that information can literally be
taken to the grave,or it is so out-of-date that it no longer applies.
Finally,there’s what I believe to be the biggest issue,which I call “The
Wonder Tool”;it dices,it slices,and it even makes julienne fries.Similar to the
“feature creep” that we’re all familiar with,but with a difference,it starts out
unrealistic.“The Wonder Tool” is a mouse designed to government specifica-
tions,more commonly called an elephant.For the interest of sanity (yeah,
right,me talking about sanity),it makes far more sense to break up the tool
into more manageable pieces.For example,let’s say that we need common
tools to do X and Y,both of which need a routine to do Z.Rather than code Z
twice as part of X and Y,it makes more sense to code a separate tool to do Z
and have X and Y use this tool.And who knows? Sometime in the future,you
might need a few Zs,and you’ll already have them.
16 Types of Web Pages Chapter 1
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 16
1.7 S
UMMARY
The intention behind this chapter is that it serve as something of an explana-
tion of the humble beginnings of the World Wide Web,starting with a single
server and growing into the globe-spanning network that it is today.
First there was a brief explanation of both static and dynamic web pages,
including the components that go into building each type of page.Components
such as HTML,CSS,and JavaScript were briefly covered.Several examples of
“DHTML out of control” were also mentioned;I,for one,can’t wait for the
video.
There was also a brief description,or,in some cases,an honorable men-
tion,of several different web browsers.These browsers included some of the
more popular web browsers for Linux,Windows,and Mac OS X.In addition,
mention was made of some of the more annoying problems with cross-browser
development.
The server side of things was briefly covered,to illustrate that there are
always alternatives to whatever is being used currently.Also,I mentioned how
it might be possible to mix and match technology,such as ASP.NET on Linux.
Finally,I covered the biggest problem with technical training today:how
to apply it and how to circumvent it.Regardless of who we are,we learn by
doing,and that information is like cookies;it’s meant to be shared.
1.7 Summary 17
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 17
01_0132272679_ch01.qxd 7/17/06 8:58 AM Page 18
C H A P T E R
2
Introducing Ajax
A little more than a year ago,an article by Jesse James Garrett was published
describing an advanced web development technique that,even though individ-
ual components of it have existed for years,few web developers had ever stum-
bled scien-
tist.
19
02_0132272679_ch02.qxd 7/17/06 8:58 AM Page 19
2.1 N
OT A
M
OCK data-
base was actually being updated without the page “blinking,” as he referred
to it.
2.2 A T
ECHNIQUE
W
ITHOUT A
N
AME
Now tech-
nology both Ajax’s strengths and issues.
2.2.1 Names
An old idea dates back to the dawn of human civilization that to know some-
one resumé.If ever a document held names
of power,a resumé is it.Not very long ago,resumés invoking words such as
JavaScript,DHTML,and XML were looked upon with envy,perhaps even
20 Introducing Ajax Chapter 2
02_0132272679_ch02.qxd 7/17/06 8:58 AM Page 20 nam-
ing.”
2.3 W
HAT
I
S
A
JAX
?
As stated previously,Ajax stands for Asynchronous JavaScript And XML,but
what exactly does that mean? Is the developer limited to only those technolo-
gies named? Thankfully,no,the acronym merely serves as a guideline and not
a rule.In some ways,Ajax is something of an art,as with cooking.Consider,for
a moment,the dish called shrimp scampi;I’ve had it in restaurants up and
down the East Coast of the United States,and it was different in every restau-
rant.Of course,there were some common elements,such as shrimp,butter,
and garlic,but the plethora of little extras added made each dish unique.
The same can be said of Ajax.Starting with a few simple ingredients,
such as HTML and JavaScript,it is possible to cook up a web application with
the feel of a Windows or,if you prefer,a Linux application.You might have
noticed earlier that my ingredients list omitted XML;the reason for that omis-
sion is that XML is one of those optional ingredients.
2.3 What Is Ajax?21
02_0132272679_ch02.qxd 7/17/06 8:58 AM Page 21
This means that the page won’t “blink,” as the peasant—er,client—so ele-
g liv-
ing? applica-
tions need to be able to not only find objects on the HTML page but also,if nec-
essary advan-
tages compatibil-
ity.
22 Introducing Ajax Chapter 2
02_0132272679_ch02.qxd 7/17/06 8:58 AM Page 22 elec-
tronic | https://www.techylib.com/el/view/scaredbacon/ajax_00_0132272679_fm.qxd_71706_857_am_page_i | CC-MAIN-2018-34 | refinedweb | 8,815 | 67.76 |
[
]
Filip Maj commented on CB-560:
------------------------------
Do not think CB-298 has anything to do with this. We are missing adding the cordova/exec JS
module on the {{Cordova}} and {{PhoneGap}} globals; for the {{cordova}} global, this is done
[via the common.js platform definition|],
but that is missing for {{Cordova}} and {{PhoneGap}}.
> Cordova breaking iOS plugins
> ----------------------------
>
> Key: CB-560
> URL:
> Project: Apache Callback
> Issue Type: Bug
> Components: CordovaJS
> Affects Versions: 1.6.1
> Environment: From
someone apparently tried upgrading from PhoneGap 1.1 to 1.6.
> Reporter: Chris Brody
> Assignee: Filip Maj
> Fix For: 1.7.0
>
> Original Estimate: 4h
> Remaining Estimate: 4h
>
> From 1.4 to 1.5, the namespace was changed from PhoneGap to Cordova which was breaking
all of the existing plugins. I eventually provided a shim class but it was too late to stop
the pain. Then in Javascript only Cordova was changed to cordova, breaking the iOS plugins
yet again. I noticed in cordova (1.6.0) JS:
> if (!window.PhoneGap) {
> window.PhoneGap = cordova;
> }
> This should have been done for Cordova like:
> if (!Cordova) {
> Cordova = cordova;
> }
> Yes we should be deprecating the old namespaces for removal in another major release.
Any API changes made before a major release should be made with a workaround, to be deprecated,
and tested with some plugins before shipping.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/incubator-callback-dev/201205.mbox/%3C347532631.14174.1335901615997.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2015-48 | refinedweb | 245 | 59.3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.