text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
If you annotate some servlet, filter, or listener with @EJB
annotations at the class level this will get all those ejbs into the
java:comp/env jndi namespace so spring components can look them up
there. (Possibly the annotation is @EJBS, I haven't looked at this
in a while). You could also set this up in web.xml.
thanks
david jencks
On Oct 11, 2007, at 6:27 AM, ptriller wrote:
>
> Hello !
>
> I have a simple scenario.
>
> I am deploying an EAR with some EJB3 Jars and a Webapp.
> The Webapp is using spring, and I want to inject the EJB services
> into the Spring context.
>
> Unfortunately I have no Idea how to do that.
> Spring does have a support Class for stuff like that, but the
> LocalStatelessSessionProxyFactoryBean needs the JNDI
> name for the EJB to use, and I cant figre out how to do that.
> I know that the EJB gets a lobal JNDI name but that one includes
> the name of
> the
> EAR and EJB jar which is not acceptable, since theese contain the
> version
> numbers of the jars
> (I am using Maven2 to build).
>
> Is there any way around this ?
>
>
> Thanks
>
>
> Peter
> --
> View this message in context:-
> stateless-session-beans-and-spring-tf4607255s134.html#a13155843
> Sent from the Apache Geronimo - Users mailing list archive at
> Nabble.com.
> | http://mail-archives.apache.org/mod_mbox/geronimo-user/200710.mbox/%3C861CA508-24E3-43D6-B18C-D2E9404FE5AD@yahoo.com%3E | CC-MAIN-2017-26 | refinedweb | 221 | 72.16 |
L.S. I am trying to learn how to program in C++. At the moment I work my way trough B. Preiss his book on data structures. I run here into an excersize which won't compile with g++. A dressed down version (in reality both Base1 and Base2 inherit from the same class, so I have a diamond in the inheritance diagram for example) is included below. #include <iostream> using namespace std; class Base1 { public: virtual ~Base1() {} virtual Base1& SomeFunc() =0; }; class Base2 { public: virtual ~Base2() {} virtual void DoSomething() {cout << "Hallo world\n";} }; class Derived: virtual public Base1, virtual public Base2 { public: Derived& SomeFunc() {return *this;} }; int main() { Derived deriv; return 0; } The error message that I get is (I just use g++ -c test.cpp with version 3.3.4 of gcc on slackware 10): "test.cpp:16: sorry, unimplemented: adjusting pointers for covariant returns". Line 16 is the line declaring the Derived class. The error goes away if I either use ordinary inheritance instead of virtual in line 16, or if the return type of SomeFunc in class Derived is Base1& instead of Derived&. I don't understand what is wrong, can someone please explain. My reason to post on this list is that the Borland compiler doesn't complain. Regards, Michiel Nauta | http://lists.gnu.org/archive/html/help-gplusplus/2005-01/msg00051.html | CC-MAIN-2016-22 | refinedweb | 216 | 65.01 |
stackedwidget for different class which they have their own .cpp , .h, and .ui
I have created different classes(.cpp, .h, .ui) and I wish to combine all of their .ui's in a stackedwidget. I created an extra class specifically to add the widgets into the stackedwidget. For the stackedwidget cpp I have used
QStackedWidget *stackedWidget = new QStackedWidget; stackedWidget->addWidget(page1)
page1 would be the widget I create for one cpp/ui file and on the stackedwidget cpp I made sure to include the .h file for page_1. On the page_1 .cpp (I made it into QDialog) and wrote the following:
QWidget *page1 = new QWidget; . . QVBoxLayout *finalLayout = new QVBoxLayout(page1);
However, the stackedwidget cpp does not recognize the QWidget page1.
Hi,
Can you show the complete code where you setup your QStackedWidget ? Where do you declare/instantiate page1 ?
Page_1.cpp has
QWidget *page1 = new QWidget; ui->setupUi(this); ui->page1Layout = new QVBoxLayout(page1);
Page_stackedWidget.cpp has the '#include Page_1.h'
ui->setupUi(this);
stackedWidget = new QStackedWidget; stackedWidget->addWidget(page1); QVBoxLayout *vLayout = new QVBoxLayout; vLayout->addWidget(stackedWidget); setLayout(vLayout);
page1 is a local variable used in Page_1.cpp, you can't access it like that in Page_stackedWidget.cpp (and don't try or you are going to write pretty ugly code).
Shouldn't you be doing something like:
Page_1 *page_1 = new Page_1; stackedWidget->addWidget(page_1);
?
The thing is there were going to be multiple pages and I wanted to stack the widgets in a different file. I was able to achieve that with Page_stackedWidget.ui. However, I would like it so when I press a button that is in page 1 it takes me to page 2. This is what I have for Page_stackedWidget.cpp
#include "dialog.h" #include "ui_dialog.h" Page_stackedWidget::Page_stackedWidget(QWidget *parent) : QDialog(parent), ui(new Ui::Dialog) } ui->setupUi(this); connect(_signin->nextButton, SIGNAL(clicked()),this,SLOT(goToPage2())); } void Dialog::goToPage2() { ui->stackedWidget->setCurrentIndex(1); }
However, the "nextButton" I created on the .ui file for Page1. I have looked for forums and i have found some similar to my problem but the difference is that their button is created on this file..
- mrjj Qt Champions 2017
Hi
If I understand you correctly,
you can maybe use the promote feature.
Say you have
UI A,B,C
B and C contains widgets like buttons etc.
on A you put a stacked widget
and on page 1 it has all of B, and on page 2, it has all of C
and they still live in their own UI files. ?
That you can do via promote. In short you place a QWidget in the stacked paged and it will be replaced
by all widget from an UI file. When you run the program.
Note: select QWidget as parent, not dialog when you create B,C.
Please see
If its not what you mean, then sorry :)
@mrjj I did put a widget on the stackedWidget. Even though it is a QDialog, I created a widget inside of it. I was able to call the files and add them into the stackwidget. My question now is how do I call a qpushbutton created from Page1?
- mrjj Qt Champions 2017
@marlenet15
well , say the place holder widget is called Wid2, (the one you promoted)
then you can access any widget using the normal
ui->Wid2->SomeButton
its being created in the ui_xx file for the UI that contains the promoted widgets.
Each widget should be self-contained so what exactly do you want to do with that button from another widget in your stack ?
I want to press that button to go the next page that is in the stack
In that case and from a design point of view, does it really belong to a widget inside the stack ?
Yes it does.
Then add a signal to that widget that will forward the button clicked signal. Something like nextPageRequested. That will be cleaner and easier to understand when you read the code.
I know but how am I supposed to access that widget when it is in another class?
connect(Page1->nextButton, SIGNAL(clicked()),this,SLOT(goToPage2()));
Where Page1 is the widget that contains the button "nextButton" and when you press it will go to Page2
That's what I wrote: you don't access the button, you only connect Page1.
Are you trying to re-implement QWizard ?
Yes but I do not want to use QWizard since I don't want it in that style. The only similarity is that I can go from one page to another. | https://forum.qt.io/topic/59523/stackedwidget-for-different-class-which-they-have-their-own-cpp-h-and-ui/2 | CC-MAIN-2018-17 | refinedweb | 758 | 65.83 |
Welcome to the Errata page for Core Python Programming!
This page is broken down into 2 main sections of errata, the text of the book and the source code. Each errata item follows this format:
Page :: Section (may also have Figure, Table, or Example number) : Correction
xxiv :: Optional Sections : edit to 2nd paragraph: "... can skip the first chapter and go straight to Chapter 2... absorb Python and be off to the races."
>>> print "%s is number %d!" % ("Python", 1) Python is number 1!
27 :: 2.2 : reword parenthesized segment in the 3rd sentence to:
(Use the
string.atoi() function for Python versions
older than 1.5.)
27 :: 2.3 : code does not accurately reflect interpreter; change code to:
>>> # one comment ... print 'Hello World!' # another comment Hello World!
32 :: 2.8 :
"
aLlist"
at bottom of first interactive example should be changed to
"
aList"
35 :: 2.12 :
"
loop #5"
of interactive example should be deleted
36 :: 2.13 :
In the final
... (who, ((what + ' ') * 4))
40 (top) :: 2.16 : first sentence from preceding page should be changed to read: "... declared before they can be called." (Functions do not have types in Python.)
44 (top) :: 2.17 : Interactive example is missing "My name is..." as in:
>>> foo2.showname() Your name is Jane Smith My name is __main__.FooClass
68-71 :: 3.6 : Example (not Listing) 3.1 :
On line 25, there is a cryptic reference to
sys.exc_info()[1]
and no explanation on p. 71. As you will discover,
sys.exc_info()
is a function which returns a list containing an exception class, an exception
object, and a traceback object. The
[1] simply refers to the
2nd item (exception object) of the list. It is no different than an operation
similiar to:
>>> aList = [123, 'xyz', 45.67] >>> print aList[1] >>> xyz
73 :: 3.7 : Exercise 3-8 :
1st sentence should refer to
string.find()
the function, not module
77 :: 4.2 : Core Note : "... wrapping a type around a Python class..." should be "... wrapping a class around a Python type...." Also, the last sentence should say, "See Sections 13.15 and 13.16," not 13.18.
84-85 :: 4.5.2 :: Examples 1, 2, 3 and Figures 4-1 and 4-2 : Python caches low integers (and some strings), so using integer 4 in these examples (and in the figures) may result in the same object being referenced. If this is your experience, change the integers to floats and try again, i.e.,
>>> foo1 = 4.0 >>> foo2 = 1.0 + 3.0 >>> id(foo1) 134668004 >>> id(foo2) 134667956
86 :: 4.5.2 :: This "Core Note" sidebar is missing at the end of section 4.5.2:
CORE NOTE: Interning
Smaller integers, identifier names, and shorter strings which are similar to identifier names which may be used often in the code will be cached internally, a process known as interning. If an object is interned, then multiple references to such objects will be identical. (In other words, common objects such as these will only be created once and shared, otherwise it is a waste of resources to have multiple objects with the same value which are created and referenced often.)
If you have tried the example in this section with your interpreter,
you will discover that "foo1" and "foo2" would refer to the same
object (when you use
a is b or
id(a) ==
id(b))! Don't worry if you get that... it is not a mistake
by the interpreter. Change the integer 4 to float 4.0 and see if
you get the expected result. Also see section 14.3.6 for more about
interning.
91 :: 4.6.3 :
"
>>> type(Abc)"
of interactive example assumes a declared class named
Abc
107 :: 5.4 : separate the 2nd and 3rd to last complex numbers in the final line of Section 5.4. 0+1j and 9.80665-8.31441J are are different complex numbers.
109 :: 5.5.1 : text reference should be the Python Language Reference Guide.
110 :: 5.5.1 :
output on the second interactive example should be
999998000001L, not
99999B000001L.
118 :: 5.6.2 :
ndig is the optional second argument, not third.
134 :: 6.1.1 : Figure 6-2 :
There should be no space between
sequence and
[:] in the caption.
145 :: 6.3.2 : Core Tip :
First
while loop.
147 :: 6.4.1 : In the 2nd sentence, "pack" should really be "lack."
152 :: 6.4.2 : There should be no "our RE matched:" strings in either of the statements in the interactive output at the end of this section.
153 :: 6.4.3 :
There should be no space between
ur and the quoted Hello World string.
155 :: 6.6 : Table 6.6 :
string.alnum is misspelled with a
1
rather than an "L".
176 :: 6.11.2 :
There should be no string
'park' in the concatenation
of
str_list + num_list in the output near the bottom
of the page.
182 :: 6.13 : Table 6.10 :
The description for the
pop() list method is incorrect.
Rather than giving the object to remove, you give the index
of the object to remove, defaulting to the last item of the list if
no argument is given. So the row should read as:
list.pop(index=-1) -- removes and returns
object at given or last index from list
186 :: 6.14.1 : Example 6.2 :
call to
showmenu() on line 47 should be indented.
193 :: 6.14.2 : final sentence should refer to Section 13.16 rather than 13.18.
193 :: 6.15 : first sentence in "How to Create and Assign Tuples" section should start as "Creating and assigning tuples..." not lists.
194 :: 6.15 :
first line of interactive code should read:
>>> aTuple[1:4]
194 :: 6.15 : final sentence should refer to removing an entire "tuple," not list, keeping in mind again, that most of the time, programmers will just let an object go out-of-scope rather than explicitly deleting an object.
201 :: 6.18 : description of UserList module should refer to Section 13.16 rather than 13.18
209 :: 6.20 : Exercise 6-9 : 1st sentence should refer to Exercise 5-13, not 6-13
>>> nameList = ['Walter', "Nicole", 'Steven', 'Henry']
244 :: 8.5.4 :
In the
for-loop at the bottom of the page, the C
variable
i is misspelled and should be
eachVal:
for (eachVal = 2; eachVal < 19; eachVal += 3) {...
252 :: 8.10 : Exercise 8-9 : The actual problem does not follow the description of Fibonacci numbers. The rest should read: Write a routine such that given N, display the value of the Nth Fibonacci number. For example, the first Fibonacci number is 1, the 6th is 8, and so on.
flush()method," sentence is incomplete. Generally data is stored or queued temporarily until it is written to the file. So the sentence should read, "Rather than waiting for the (contents of the) output buffer to be written to disk, calling the
flush()method will cause the contents of the internal buffer to be written (or flushed) to the file immediately."
267 :: 9.5 :
In the paragraph for
sys.stdout.write() near the middle
of the page, the 2nd sentence should read that "
readline()
executed on
sys.stdin preserves the NEWLINE..."
and not "readline."
268 :: 9.5 : The 2nd sentence at the top of the page should read: 'The names "argc" and "argv"....'
275 :: 9.7 :
The pseudocode in the 2nd section is missing one line with a
call to
os.chdir():
>>> os.mkdir('example') >>> os.chdir('example') >>> cwd :
Also, the exception with
os.listdir() is intentional, to
show you the error which results when no directory name is
given to the function.
276 :: 9.7 :
The pseudocode in the 2nd section is missing the assignment
of the
allLines variable:
>>> file = open(path) >>> allLines = file.readlines() >>> file.close() :
# executes regardless of exceptions" comment should be on a single line of the
finallyclause.
315 :: 10.6.1 : first sentence of 2nd to last paragraph should italicize without any parameters.
320 :: 10.8 :: Table 10.2 :
TableError should be
TabError
grabweb.pyimproperly displayed on 2 lines
350 :: 11.6.2 :
dictVarArgs() function should read as:
print 'format arg1:', arg1
... and not ...
print 'format arg1:', dictVarArgs
350 :: 11.6.2 :
in example execution of
dictVarArgs(), 2nd invocation should
set
arg1 not
a to
'mystery'
353 (top) :: 11.6.3 :
add final new sentence before Section 11.7:
Prior to 1.6, variable objects could only be passed to
to
apply() with the function object for invocation.
362 :: 11.7.2 (
filter()) :
pseudocode is improperly indented... the code should look like:
def filter(bool_func, sequence): filtered_seq = [] for eachItem in sequence: if apply(bool_func, (eachItem,)): filtered_seq.append(eachItem) return filtered_seq
363 :: 11.7.2 (
filter()) :
first round code for
oddnogen.py is also indented improperly...
it should appear as:
from random import randint def odd(n): return n % 2 def main(): allNums = [] for eachNum in range(10): allNums.append(randint(1, 101)) oddNums = filter(odd, allNums) print len(oddNums), oddNums if __name__ == '__main__': main()
367 :: 11.7.2 (
map()) :
change the reference of "Zip()" to
zip() (font should also be Courier
in another reference on the bottom of p. 369).
372 :: 11.7.2 (
reduce()) :
mathematical operation for the example is missing 2 left parentheses:
((((0 + 1) + 2) + 3) + 4) => 10.
C.foodown at the bottom should output
100not
0
441 :: 13.10.2 (Core Note) : output of instanantiation is incorrect... it should be:
>>> c = C() calling C's constructor
449 :: 13.12 : section title should be "Types vs. Classes/Instances"
454 :: 13.13.1 : output of last interactive example should not be indented:
>>> print myPair (-5, 9)
463 (top) :: 13.13.2 : first paragraph is for Lines 42-54
Thu Feb 15 17:46:04 2007::uzifzf@dpyivihw.gov::117159036" and another box around "
4-6-8", and in Figure 15-3, the boxes should around "
Thu Feb 15 17:46:04 2007::uzifzf@dpyivihw.gov::" and "
1171590364-6-8".
tkhello4.py(see Example 18.4 on p. 630) starts is 12.
664 :: 19.5.1 : "CN" Core Note logo should be "CT", meaning a Core Tip logo.
682 :: 19.5.1 : Example 19.6 : The first two words of the description of this code should read, "This script..." rather than "The crawler...".
684 :: 19.5.1 : Example 19.6 :
or on line 108 of
advcgi.py should
be
for. (The source code is correct though.)
690-691 :: 19.7.1 :
output from the
myhttpd.py webserver wraps...
lines of output should be on single lines.
args.py
tsTserv.py(new script) (diffs)
ctime(time())and
data; i.e., it must be a tuple. (critical patch)
listdir.py (alternative version only)(new script) (diffs)
crawl.py(new script) (diffs)
662 :: 19.5.1 :: Example 19.2 :
friends.htm
(new HTML file)
(diffs)
Missing default value of "NEW USER".
(minor patch)
666 :: 19.5.2 :: Example 19.4 :
friends2.py
(new script)
(diffs)
Missing default value of "NEW USER".
(minor patch) | http://cpp.wesc.webfactional.com/cpp1e/errata.htm | CC-MAIN-2017-22 | refinedweb | 1,819 | 77.94 |
A system to track, version and audit Machine Learning models
ModelDB
ModelDB is an end-to-end system for tracking, versioning and auditing machine learning models. It ingests models and associated metadata as models are being trained, stores model data in a structured format, and surfaces it through a web-frontend for rich querying and the python client.
Architecture
At a high level the architecture of ModelDB in a Kubernetes cluster or a Docker application looks as below:
- ModelDB Client developed in Python which can instantiated in the user's model building code and exposes functions to log related information to ModelDB.
- ModelDB Frontend developed in JavaScript and typescript is the visual reporting module of ModelDB. It also acts as an entry point for the ModelDB cluster.
- It receives the request from client (1) and the browser and route them to the appropriate container.
- The gRPC calls (2) for creating, reading,updating or deleting Projects, Experiments, ExperimentRuns, Dataset, DatasetVersions or their metadata are routed to ModelDB Proxy.
- The HTTP calls (3) for storing and retrieving binary artifacts are forwarded directly to backend.
- ModelDB Backend Proxy developed in golang is a light weight gRPC to Http convertor.
- It receives the gRPC request from the front end (2) and sends them to backend (4). In the other direction it converts the response from backend and sends it to the frontend.
- ModelDB Backend developed in java is module which stores, retrieves or deletes information as triggered by user via the client or the front end.
- It exposes gRPC endpoints (4) for most of the operations which is used by the proxy.
- It has http endpoints (3) for storing, retrieving and deleting artifacts used directly by the frontend.
- Database ModelDB Backend stores (5) the information from the requests it receive into a Relational database.
- Out of the box ModelDB is configured and verified to work against PostgreSQL, but since it uses Hibernate as a ORM and liquibase for change management, it should be easy to configure ModelDB to run on another SQL Database supported by the the tools.
Volumes : The relational database and the artifact store in backend need volumes attached to enable persistent storage.
Setup and Installation
There are multiple way to bring up ModelDB.
Docker Setup
Deploy pre published images
If you have Docker Compose installed, you can bring up a ModelDB server with just a single command.
docker-compose -f docker-compose-all.yaml up
This command will fetch the published images from Docker hub and setup the multi container environment. The webapp can be accessed at.
Logs will have an entry similar to
Backend server started listening on 8085 to indicate backend is up. During the first run backend will have to run the liquibase scripts so it will take a few extra minutes to come up. The progress can be monitored in the logs.
Once the command finishes it might take a couple of minutes for the proxy, backend and frontend to establish connection. During this time any access through frontend or client may result in 502.
Build images from source and deploy
To build the images you need Docker and jdk(1.8) installed. Each of the modules has a script to build its Docker image. This flow can be triggered by running from the root of the repository
./build_all_no_deploy_modeldb.sh
This will build the Docker images locally.
To use these images run steps in Deploy pre published images, but this time since there will be locally built images , those will be used instead of pulling the images from remote repository.
A utility script to combine the two steps is available and can be run as
./build_modeldb.sh
Kubernetes SetUp
Helm chart is available at
chart/modeldb. ModelDB can be brought up on a Kubernetes cluster by running:
cd chart/modeldb helm install . --name <release-name> --namespace <k8s namespace>
By default, the
default namespace on your Kubernetes cluster is used.
release-name is a arbitrary identifier user picks to perform future helm operations on the cluster.
To bring a cluster down, run:
helm del --purge <release-name-used-install-cmd>
Repo Structure
Each module in the architecture diagram has a designated folder in this repository, and has their own README covering in depth documentation and contribution guidelines.
- protos has the protobuf definitions of the objects and endpoint used across ModelDB. More details here.
- backend has the source code and tests for ModelDB Backend. It also holds the proxy at backend/proxy. More details here.
- client has the source code and tests for ModelDB client. More details here.
- webapp has the source and tests for ModelDB frontend. More details here.
Other supporting material for deployment and documentation is at:
- chart has the helm chart to deploy ModelDB onto your Kubernetes cluster. More details here.
- doc-resources has images for documentation. | https://pythonawesome.com/a-system-to-track-version-and-audit-machine-learning-models/ | CC-MAIN-2020-16 | refinedweb | 801 | 55.64 |
+ 29 comments
Posting an alternate solution to this..
Consider all the cities to be a chain. Ex, with 8 cities, it would look like this 0-1-2-3-4-5-6-7-8.
Now, each space station will break the chain. What we are trying to achieve are chains of cities without space stations. If we have stations in 2 and 6, we just take them out. We are left with three chains here. 0-1, 3-4-5, 7-8
We just find the longest chain and the answer would be (length+1)/2. This is the length from the middle of the chain to a station.
There are two special cases when the longest chain without station is leading or trailing. I would leave it here...
+ 7 comments
A solution in C# with a bit of explanation.
using System; using System.Collections.Generic; using System.IO; using System.Linq; class Solution { static void Main(String[] args) { string[] tokens_n = Console.ReadLine().Split(' '); int nCities = Convert.ToInt32(tokens_n[0]); int mSpaceStations = Convert.ToInt32(tokens_n[1]); string[] c_temp = Console.ReadLine().Split(' '); int[] spaceStationArr = Array.ConvertAll(c_temp,Int32.Parse); /*The array has to be sorted. According to MSDN arrays that are sorted by using the Heapsort and Quicksort algorithms, in the worst case, this method is an O(n log n) operation, where n is the Length of array.*/ Array.Sort(spaceStationArr); //Set an initial max distance which can sensibly be the distance between City 0 and first space station int maxDistance = spaceStationArr[0] ; /*We are interested in the cities that are in the middle of two space stations as this city will be furthest for that set of two space stations City : 0,1,2,3,5,6,7,8 SpaceStation : .,.,2,.,.,.,7,. In the example above, 5 is in the middle of space station 2 and 7 and the distance to the closest one is 2 and can be calculated as (7-2)/2 = 2 */ for(int i=1 ; i<spaceStationArr.Length; i++) { int distance = (spaceStationArr[i] - spaceStationArr[i-1])/2; if(distance > maxDistance) maxDistance = distance; } //Check the distance of last spaceStation and last city. int lastSpaceStationDistance = nCities-1 - spaceStationArr[mSpaceStations-1] ; if( lastSpaceStationDistance > maxDistance) { maxDistance = lastSpaceStationDistance; } Console.WriteLine(maxDistance); } }
+ 4 comments
My Java Solution
private static int solution(int[] arr, int n){ Arrays.sort(arr); int maxDistance = arr[0]; for(int i = 1; i < arr.length; i++){ int distance = (arr[i] - arr[i-1]) / 2; if(maxDistance < distance) maxDistance = distance; } int lastGap = (n-1) - arr[arr.length - 1]; return (lastGap < maxDistance) ? maxDistance : lastGap; }
+ 4 comments
Python solution: Sort the array in incresing order. 1. Take the maximum distance from the starting city (0) and from the last city from the nearest station. Also eliminate the case of only 1 staion. 2. Now take the maximum distance from the given station to any city using formula (a[i+1]-a[i])/2. Just draw the diagrams and you will get the formula. 3. The maximum value of distance is the answer.
def flatlandSpaceStations(n, c): c.sort() maxd = max(c[0], n-1 - c[-1]) for i in range(len(c)-1): maxd = max((c[i+1]-c[i])//2, maxd) return maxd
Sort 714 Discussions, By:
Please Login in order to post a comment | https://www.hackerrank.com/challenges/flatland-space-stations/forum | CC-MAIN-2021-39 | refinedweb | 546 | 59.4 |
477335 SR-mf-003 Change Request 7
Bütçe N/A
SR-mf-003, Change Request 7..
My teacher wants a histogram as listed below:
// Purpose: A Graphing class..
import javax.swing.*;
public class Histogram
{
public static void main(String args[])
{
int n[] = {19, 3, 20, 7, 13, 19, 3, 19,1};
String output = "";
output += "Element \tValue\tHistogram";
for (int i = 0; i < [url removed, login to view]; i++)
{
output += "\n" + i + "\t" + n + "\t";
//prints the bar
for (int j = 1; j <= n; j++)
output += '*';
}
JTextArea outputArea = new JTextArea(11, 30);
[url removed, login to view](output);
[url removed, login to view](null , outputArea,
"Histogram Class",
JOptionPane.INFORMATION_MESSAGE);
[url removed, login to view](0);
}
}
Please provide this completed in a text or java file. | https://www.tr.freelancer.com/projects/java-anything-goes/change-request.2223242/ | CC-MAIN-2018-05 | refinedweb | 124 | 60.48 |
I have the following code:
private Handler m_scraHandler = new Handler(new SCRAHandlerCallback());
private class SCRAHandlerCallback : Java.Lang.Object, Handler.ICallback
{
public IntPtr Handle
{
get;
private set;
}
public void Dispose() { } public bool HandleMessage(Message msg) { return true; }
protected override void OnCreate(Bundle bundle)
{
base.OnCreate(bundle);
// Set our view from the "main" layout resource SetContentView(Resource.Layout.Main); MTSCRA mtscrea = new MTSCRA(this, m_scraHandler); }
For some reason when I run the application I would encounter the following error message:
System.NotSupportedException: Unable to activate instance of type MagTekDriver.MainActivity+SCRAHandlerCallback from native handle 0xfffe6f7c (key_handle 0x8bc78cc).
What is the cause of this error and how do I resolve it?
If you inherit from
Java.Lang.Object, you shouldn't have to add your own implementation of
Handler. I would guess this is where the issue is happening.
It should really look like this:
public class MyHandler : Java.Lang.Object, Handler.ICallback { public bool HandleMessage(Message msg) { throw new NotImplementedException(); } }
Answers
If you inherit from
Java.Lang.Object, you shouldn't have to add your own implementation of
Handler. I would guess this is where the issue is happening.
It should really look like this:
Thank you so much for your help again Jon, that solved my issue.
Hi, @JonDouglas when I try this, I get a ClassNotFoundException for
MyHandler. I can stop this exception from happening by putting
[Register("<CallBack Class>")]however the application proceeds to crash when the call back is registered. The error is
Fatal signal 6 (SIGABRT), code -6 in tid 4396. I have spent the whole day trying different things, but to no avail.
Hi, I found my issue, I am using Xamarin Forms. I tried it in the Xamarin Android project and it worked fine. | https://forums.xamarin.com/discussion/comment/263344/ | CC-MAIN-2019-43 | refinedweb | 289 | 50.33 |
Spend some time thinking about why my view appears too zoomed in and chopped off when my camera moves from one object to another. I realized that it has to do with my main camera's settings before moving the camera to another object.
My main camera is orthographic and the size is very small (1.3) so that I can view all of my complex object. When I click on a part of my complex object, the camera moves to a new view that's far away to another object those scale is different than my complex object.
This view appears too zoomed and/or chopped off when I adjust the Z axis to try and fix it. However, it seems to be because of the size of my orthographic camera is set to 1.3 when it should be higher number for a normal view.
Now, I am just wondering how do I change the size of my main camera when my camera's view changes to another object on mouse down?
Here's the camera moving script:
using UnityEngine;
using System.Collections;
public class MoveCamToObject : MonoBehaviour {
public Camera mainCam;
void OnMouseUp() {
mainCam.transform.position = new Vector3(-33, 241, 362); // These values I adjusted to an orthographic camera with size 90.
}
}
I'm looking for a way to change the size in C#.
a wonderful explanation by @duck
Answer by Eowyn27
·
Jan 07, 2014 at 06:19 PM
Hi,
I figured it out and I decided to post the solution so that it can help others too:
Camera.main.orthographicSize = 100;
I put that script after my transform.position script.
Curious to see how other answers differ though (Perhaps, a non hard coded solution?)
It's jumps from default size to size that you put. How to change it smoothly? here is a solution. You'll have to use "GetComponent()" instead of "camera" because the latter was keep an Ortho camera in a specific range when the ortho changes?
3
Answers
Is smooth movement possible in Unity with a fixed camera?
3
Answers
Orthographic camera size question
1
Answer
Detect if a GameObject is partially behind another?
1
Answer
is it possible to set the view port rect at both ends
1
Answer | https://answers.unity.com/questions/611209/how-do-i-adjust-size-of-orthographic-camera-when-m.html | CC-MAIN-2019-43 | refinedweb | 375 | 72.66 |
Interval partitioning problem
In continuation of greedy algorithm problem, (earlier we discussed : even scheduling and coin change problems) we will discuss another problem today. Problem is known as interval partitioning problem and it goes like : There are n lectures to be schedules and there are certain number of classrooms. Each lecture has a start time
si and finish time
fi. Task is to schedule all lectures in minimum number of classes and there cannot be more than one lecture in a classroom at a given point of time. For example, minimum number of classrooms required to schedule these nine lectures is 4 as shown below.
/>
However, we can do some tweaks and manage to schedule same nine lectures in three classrooms as shown below.
/>
So, second solution optimizes the output.
Another variant of this problem is : You want to schedule jobs on a computer. Requests take the form (si , fi) meaning a job that runs from time si to time fi. You get many such requests, and you want to process as many as possible, but the computer can only work on one job at a time.
Interval partitioning : Line of thought
First thing to note about interval partitioning problem is that we have to minimize something, in this case, number of classrooms. What template this problem fits into? Greedy may be? Yes it fits into greedy algorithm template. In greedy algorithm we take decision on local optimum.
Before discussing the solution, be clear that what is resource and what needs to be minimized? In this problem, resource is classroom and total number of classroom needs to be minimized by arranging lectures in certain order.
There are few natural orders in which we can arrange all lectures or for sake of generality, tasks. First is to arrange them in order of finish time, second is to arrange in order of start time, third is to order them by smallest duration of task, fourth is by minimum number of conflicting jobs. Which one to chose?
You can come up with counter example when if lectures are arranged in classrooms by order of their end time, or smallest duration or minimum number of conflicting jobs, it does not end to optimal solution So, let’s pick lectures based on earliest start time. At any given pint of time, pick lecture with least start time and yet not scheduled and then assign it to first available class. Will it work? Sure it does. When you have assigned all lectures, total number of classrooms will be minimum number of classrooms required.
Interval partitioning algorithm
1. Sort all lectures based on start time in ascending order. 2. Number of initial classrooms = 0 3. While lecture to be scheduled: 3.1 Take first lecture yet not scheduled, 3.2 If there a already a class available for lecture's start time Assign lecture to the class. 3.3 If not, then allocate a new classroom number of classroom = number of classroom + 1 4. Return number of classrooms.
Before jumping into the code, let’s discuss some data structures which we can use to implement this algorithm.
Understand that we have to find a compatible classroom for a lecture. There are many classrooms, we need to check if the finish time of lecture in that classroom is less than start time of new lecture. If yes , then classroom is compatible, if there is no such class, allocate a new class. If we store our allocated classrooms in such a way that it always gives classroom with least finish time of last lecture scheduled there, we can safely say that if this classroom is not compatible, none of the others will be.(Why?) Every time we assign a lecture to a classroom, sort the list of classroom, so that first classroom is with least finish time. Sort has complexity of
O(n log n) and if we do it for all n intervals, overall complexity of algorithm will be
O(n2 log n).
We are sorting just to find minimum end time across all classrooms. This can easily be achieved by min heap or priority queue keyed on finish time of last lecture of class. Every time finish time of last lecture changes for a classroom, heap is readjusted and root gives us classroom with min finish time.
- To determine whether lecture j is compatible with some classroom, compare sj to key of min classroom k in priority queue.
- When a lecture is added to a classroom, increase key of classroom k to fj.
Well know we have algorithm and data structure to implement in, so let’s code it.
PrioritityQueue implementation is given below:
import heapq # This is our priority queue implementation class PriorityQueue: def __init__(self): self._queue = [] self._index = 0 def push(self, item, priority): heapq.heappush(self._queue, (priority, self._index, item)) self._index += 1 def pop(self): if(self._index == 0): return None return heapq.heappop(self._queue)[-1];
Classroom class implementation
class Classroom: def __init__(self, number, finish_time): self.class_num = number self.finish_time = finish_time def __repr__(self): return 'Classroom({!r})'.format(self.class_num)
Interval partitioning problem : Implementation
from PriorityQueue import PriorityQueue from Classroom import Classroom jobs = [(1, 930, 1100), (2, 930, 1300), (3, 930, 1100), (5, 1100, 1400), (4, 1130, 1300), (6, 1330, 1500), (7, 1330, 1500), (8,1430,1700), (9, 1530, 1700), (10, 1530, 1700) ] def find_num_classrooms(): num_classrooms = 0; priority_queue = PriorityQueue() for job in jobs: # we have job here, now pop the classroom with least finishing time classroom = priority_queue.pop(); if(classroom == None) : #allocate a new class num_classrooms+= 1; priority_queue.push(Classroom(num_classrooms,job[2]),job[2]); else: #check if finish time of current classroom is #less than start time of this lecture if(classroom.finish_time <= job[1]): classroom.finish_time = job[2] priority_queue.push(classroom,job[2]) else: num_classrooms+= 1; #Since last classroom needs to be compared again, push it back priority_queue.push(classroom,job[2]) #Push the new classroom in list priority_queue.push(Classroom(num_classrooms,job[2]),job[2]) return num_classrooms print "Number of classrooms required: " + find_num_classrooms();
Java Implementation
package com.company; import java.util.*; /** * Created by sangar on 24.4.18. */ public class IntervalPartition { public static int findIntervalPartitions(ArrayList<Interval> intervals){)); Collections.sort(intervals, Comparator.comparing(p -> p.getStartTime())); int minimumClassRooms = findIntervalPartitions(intervals); System.out.println(minimumClassRooms); } }
This algorithm takes overall time of O(n log n) dominated by the sorting of jobs on start time. Total number of priority queue operations is O(n) as we have only n lectures to schedule and for each lecture we have push and pop operation.
Reference :
-
-×2.pdf
There is another method using binary search algorithm which can be used to solve this problem. As per problem statement, we have to find minimum number of classrooms to schedule n lectures. What are the maximum number of classrooms required? It will be number of lectures when all lectures conflict with each other.
Minimum number of classrooms will be 0 when there is no lecture to be scheduled. Now, we know the range of values of classrooms. How can we find minimum?
Basic idea is that if we can schedule all
n lectures in
m rooms, then we can definitely schedule them in m+1 and more rooms. So minimum number of rooms required will be either m or less than it. In this case, we can safely discard all candidate solution from m to n (remember n is the maximum number of classrooms).
Again what if we can not schedule lectures in m rooms, then there is no way we can schedule them in less than m rooms. Hence we can discard all candidate solutions less than m.
How can we select m? We can select is as mid of range which is (0,n). And try to fit all lectures on those m rooms based on condition that none of lecture conflicts. Keep track of end time of last lecture of each classroom. If none of the classroom has end time less than start time of new lecture, allocate new class. If total number of classrooms is less than or equal to m, discard m+1 to n. If it is more than m, then discard 0 to m and search for m+1 to n.
package com.company; import java.util.*; /** * Created by sangar on 24.4.18. */ public class IntervalPartition { public static boolean predicate(ArrayList<Interval> intervals, long candidateClassRooms){ int i = 0;() <= candidateClassRooms; })); long low = 0; long high = intervals.size(); Collections.sort(intervals, Comparator.comparing(p -> p.getStartTime())); while(low < high){ long mid = low + ( (high - low) >> 1); if(predicate(intervals, mid)){ high = mid; }else{ low = mid+1; } } System.out.println(low); } }
Complexity of algorithm is dependent on number of lectures to be scheduled which is
O(n log n ) with additional space complexity of O(c) where c is number of classrooms required.
Please share your views and suggestions in comments and feel free to share and spread the word. If you are interested to share your knowledge to learners across the world, please write to us on [email protected] | https://algorithmsandme.com/category/google-interview-question/page/2/ | CC-MAIN-2020-40 | refinedweb | 1,511 | 64.91 |
05 October 2009 00:00 [Source: ICB]
Despite the automotive industry bearing the brunt of the global financial crisis, chemical producers are motoring ahead with innovative new products?xml:namespace>
THE TREND towards more efficient vehicles, lower emissions and improved fuel consumption has driven weight reduction to the top of the automotive agenda.
Although the ailing sector has been badly affected by the downturn, producers can ill afford to slam the brakes onto investment in new products.
Thanks to the huge bailout schemes that are now supporting some of the world's leading car manufacturers, many are bound by a caveat to make their vehicles even more economical and efficient in the future.
Roughly 50% of fuel is used simply to move the weight of the car, says Roelof Westerbeek, business group director of DSM's Netherlands-headquartered engineering plastics business. By reducing its weight by 10%, it will become 5% more fuel efficient.
To put this into context, fuel consumption a century ago was around 10km/liter (24 miles per US gallon). Now it is 16-17km/liter. "In over a century, fuel efficiency has only gradually improved, so a 5% gain by reducing a car's weight is very significant," he adds.
Currently, around 250-300kg of plastic is used in a car, depending on the size and model. An average mid-sized car weighs around 1,000-1,200kg.
This ratio can certainly be improved upon, Westerbeek says, and the next step could be to replace body panels and eventually integrate engineering plastics into the frame of the car to provide significant weight and emissions savings.
A hybrid structure combining metal and plastic has already been a marked success in the aviation industry.
The prospect of bio-based engineering plastics will be among the most promising developments in the automotive sector over the next five to 10 years, suggests Westerbeek.
After a year and a half of development, July saw the launch of DSM's EcoPaXX, a bio-based, high-performance engineering plastic based on polyamide (PA) 410 (see graphic, point 8). DSM claims EcoPaXX is carbon neutral from cradle to factory gate, as the carbon dioxide (CO2) generated during its production is offset by the CO2 absorbed when growing the castor beans.
Similarly, French specialty firm Rhodia used the 63rd IAA motor show in Frankfurt, Germany, in September as a platform to showcase several new products aimed at reducing CO2 emissions and addressing the EU's Euro 6 regulations to limit pollution.
Jean-Claude Steinmetz, the company's vice president automotive and transportation markets, says that the new technologies could help to reduce CO2 emissions by more than 20g/km.
In addition to its new Technyl Star AFX, a high-flow polyamide plastic, as an alternative to metal components, for example, Rhodia had also developed Zeosil Premium silica for energy-efficient tires, which reduces rolling resistance and helps to cut fuel consumption.
Finally, Rhodia says the introduction of Eolys PowerFlex technology eliminates soot particles for vehicles running on biofuels, without affecting performance.
Perhaps the most significant opportunities, however, lie in the use of tough thermoplastic polycarbonate (PC), says German major Bayer MaterialScience.
For years, PC has been tipped to replace glass in automotive windscreens and sunroofs. It is durable, virtually unbreakable, and lightweight, helping to improve fuel efficiency and performance.
Guenter Hilken, head of the PC business unit, says that a symposium held a few weeks ago in Leverkusen, Germany, saw the shift towards PC use in vehicles win widespread support from all of Europe's major original equipment manufacturers (OEMs). PC was first used to manufacture headlamps some 10-12 years ago and it is now used by around 98% of the car industry, he says. One major OEM has suggested that sunroofs and roof modules would also be almost exclusively PC-based within the next decade.
Windscreens, however, may take a little longer because of issues regarding the rigidity of the car's body and safety legislation.
Clearly, sustainability is going to remain the buzzword in the automotive industry in the years ahead; alternative fuels will become commonplace, as will renewable materials.
It may be a tough road ahead, but there are plenty of opportunities on the horizon for the chemical industry.
"The collapse of the automotive industry is actually going to drive a lot of innovation," says Westerbeek.
"It may sound odd with all the problems we have seen, but it is a very exciting time with a lot of innovation going on. The automotive industry is in a transition phase right now that I have never before seen in my 20 years involved in plastics," he says.
1. TIRES
Genencor - a division of Danish bio-based ingredients producer Danisco - is collaborating with US tire maker Goodyear to create an alternative for petroleum-based isoprene. BioIsoprene could be used to produce synthetic rubber for tires. Its potential market is estimated at over 770,000 tonnes/year.
2. PAINT
Scratched your car? StickerFix from Netherlands-based AkzoNobel is an adhesive strip using patented paint technology that can repair damage. The color of a vehicle's bodywork can be copied and sprayed onto an adhesive vinyl sheet that can be stuck over any damage.
AkzoNobel has also launched its Sikkens Autoclear LV Exclusive high-gloss clearcoat that has self-healing properties when exposed to heat. If the surface gets scratched, the paint "reflows" and heals itself when exposed to heat from the sun.
3. TIRES
Rhodia's Zeosil Premium silica for energy-efficient tires, which reduces rolling resistance and helps to cut fuel consumption. For each tonne of CO2 emitted in its production, around 40 tonnes is prevented from being released into the atmosphere, according to the company.
4. CARPETS/FLOOR MATS
US car manufacturer Ford is working on a biodegradable plastic, polylactic acid (PLA) - made from the sugars in corn, sugarbeets, sugarcane and switch grass. There are many potential applications, from textiles for carpets, floor mats and upholstery to the interior trim. A plastic component made from PLA can biodegrade after its life cycle in 90 to 120 days, versus up to 1,000 years in a landfill for a traditional, petroleum-based plastic.
5. FUEL
There is plenty of discussion about how cars of the future will be powered. Electric vehicles are a huge potential market, as are hydrogen and biofuel-powered cars. In the longer term, fuel cells could play a major role.
6. WINDSCREEN GLAZING
Polycarbonate (PC) has huge potential in automotive applications and in future, could be used for windscreens rather than glass, says Bayer MaterialScience. PC is already commonplace in vehicles, for example for headlamps, bumpers, airbag covers and instrument panels. PC has been used by many manufacturers to create large roof units
7. BODY PANELS - BIORENEWABLE
DSM predicts that it may soon be possible to create vehicle bodywork that is made from as much as 80% biorenewable materials. In July, the firm unveiled the resin bodywork for a Formula Zero kart used by the Netherlands' University of Delft team, which was manufactured from 70% biorenewable material. DSM plans to launch this technology later this year into the commercial car market.
8. ENGINE COMPONENTS
Weight reduction is going to play a critical role in the automotive sector in the coming years. Plastic is increasingly being used to replace metal components to shave kilograms off the vehicle's overall weight. Polymers are durable, strong and cost effective without compromising safety. DSM's bio-based engineering plastic EcoPaXX, for example, has a high melting point of 250˚C (482˚F), making it ideal for use under the hood for engines, turbo parts and air intake manifolds. Some 70% of the material is derived from vegetable oil, although DSM hopes this can be increased to 100% in the coming years.
9. SEATING
Car seats of the future will still need to be safe and comfortable, but produced with minimal cost and weight, says German major BASF. The company produces various coverings and components for automotive seating, from the leather and fabric finish, to the foams, moisture regulators and safety belt parts.
US-based Dow Chemical meanwhile, has developed RENUVA natural oil-based polyols from natural oils such as soybean oil. It is greenhouse gas-neutral and uses up to 60% fewer fossil fuel resources than conventional polyol technology. The RENUVA technology has various applications, including the foams used for seating, armrests and headrests.
For the latest developments in the industry, go to | http://www.icis.com/Articles/2009/10/05/9252445/efficiency-and-innovation-key-component-of-automotive.html | CC-MAIN-2014-42 | refinedweb | 1,410 | 50.26 |
Hi,
In my SDL game I noticed that when running in windowed mode on Windows,
the system menu icon (the thing in te upper left corner) is a default
icon.
While researching how to set that icon I found out that on Windows you
need to call “RegisterClassEx” instead of “RegisterClass”.
I have written a small patch for SDL-1.2.9 to do that. I hope it can be
included in the next version
I according to the documentation of RegisterClassEx it is available
since Windows95, and the only change between RegisterClassEx and
RegisterClass are the addition of the two (cbSize and hIconSm) fields in
the struct. So there shouldn’t be any compability problems.
On a related note: I failed compiling a static version of SDL for
windows (it only works when I disable directx), can anyone help me out?
— src/video/wincommon/SDL_sysevents.c.org 2004-02-16 22:09:24.000000000 +0100
+++ src/video/wincommon/SDL_sysevents.c 2005-11-28 00:42:04.000000000 +0100
@@ -579,7 +579,7 @@
int SDL_RegisterApp(char *name, Uint32 style, void *hInst)
{
static int initialized = 0;
- WNDCLASS class;
- WNDCLASSEX class;
#ifdef WM_MOUSELEAVE
HMODULE handle;
#endif
@@ -612,6 +612,8 @@
strcpy(SDL_Appname, name);
}
#endif /* _WIN32_WCE */
- class.cbSize = sizeof(WNDCLASSEX);
- class.hIconSm = LoadIcon(hInst,SDL_Appname);
class.hIcon = LoadImage(hInst, SDL_Appname, IMAGE_ICON,
0, 0, LR_DEFAULTCOLOR);
class.lpszMenuName = NULL;
@@ -625,7 +627,7 @@
class.lpfnWndProc = WinMessage;
class.cbWndExtra = 0;
class.cbClsExtra = 0;
- if ( ! RegisterClass(&class) ) {
- if ( ! RegisterClassEx(&class) ) {
SDL_SetError(“Couldn’t register application class”);
return(-1);
}
CU,
Sec–
Never test for an error condition you don’t know how to handle.
Steinbach’s Guideline for Systems Programming | https://discourse.libsdl.org/t/system-menu-icon/12787 | CC-MAIN-2022-27 | refinedweb | 272 | 51.65 |
When I develop a python program in the eclipse Pydev project, I need to import a package pymongo into this python program like the below source code.
import pymongo if __name__ == '__main__': pass
But it shows an error message Unresolved import: pymongo in the source code, you can see this error message when you move your mouse over the red line in eclipse python source code.
This is because my eclipse project used a python interpreter that does not contain pymongo library. So I have two options to fix this error.
- Option 1: Install pymongo library in the project used python interpreter.
- Option 2: Use another python virtual environment ( which has installed pymongo library ) as the project’s python interpreter. This article will focus on this option.
1. Add New Python Interpreter In Eclipse Steps.
- Click Project —> Properties menu item at eclipse top menu bar. You can change the settings for this single project through this menu item.
- In the pop-up dialog, click the PyDev – Interpreter/Gramma menu in the left panel. Then you can see the interpreter drop-down list in the right panel. You can click the list to select the python interpreter which has installed the pymongo library.
- If you want to add a new python interpreter, you can click the Click here to configure an interpreter not listed link below the python interpreter drop-down list in the above window, then click the Open interpreter preferences page button to open the Python Interpreters configuration window.
- You can also click Eclipse —> Preferences ( on macOS ) or Window —> Preferences ( on Windows ) menu item to open the eclipse preferences window.
- Then click PyDev —> Interpreters —> Python Interpreter menu item in above popup window’s left side, then you can see the Python Interpreters list on the right side. There list all python interpreters that have been added to the eclipse.
- Click Browse for python/pypy exe button in the top right to open Select interpreter dialog. Input an Interpreter Name, and click the Browse button to select Interpreter Executable file path( your python virtual environment python executable file ( python.exe for Windows or python for macOS, Linux ) ).
- Then click the OK button to close the dialog. Click Apply and Close button to close the Python Interpreters configuration window.
2. Select Python Interpreter For Eclipse Pydev Project.
- After you add python interpreter in eclipse successfully, you can now select your Pydev project used python interpreter as below.
- Click Project —> Properties menu item at eclipse top menu bar.
- Click PyDev – Interpreter/Gramma menu in popup window left panel. Then you can select the newly added python interpreter from the right panel Interpreter drop-down list.
- Now you can import the python pymongo library into your python source code without error.
3. Question & Answer.
3.1 How to effectively manage python interpreters and virtual environments in the eclipse pydev projects.
- I have several eclipse pydev projects, and each project uses its own python interpreter. The reason for this is because the python version or python libraries are not the same for different eclipse pydev projects. But as there is more and more python projects, it is very hard to manage so many python interpreters with different python virtual environment. is there a way to easily manage all those python interpreters or make all the eclipse python project use the same one python interpreter?
- In eclipse pydev, you can only make the python interpreter eclipse-wide. If you switch between multiple eclipse pydev projects, you have to switch to the correct python interpreter also. This can make the eclipse pydev project isolated in multiple pydev projects. It can also reduce the influence between them.
- If all your eclipse pydev projects use the same python version, and the only difference is the installed python libraries. You can create a base python interpreter and make all the eclipse pydev projects use that python interpreter as default.
- Then you can configure the python libraries for each eclipse pydev project in the project properties PYTHONPATH, you can read the article How To Add Library In Python Eclipse Project to learn more.
References
- How To Manage Anaconda Environments.
- How To Start Jupyter Notebook In Anaconda Python Virtual Environment.
- How To Install Python Django In Virtual Environment. | https://www.dev2qa.com/how-to-change-python-interpreter-in-eclipse-pydev-project-to-use-different-python-virtual-environment-library/ | CC-MAIN-2021-43 | refinedweb | 707 | 63.49 |
Introduction: Arduino Product Display
Have you ever seen product rotating on its own ???Yes, today we are creating the same product displayed using simple item i.e a servo motor and Arduino.I forgot to introduce myself, I'm Gokul.I have said you initially what's today project we are about to deal now.Let's now get Components required to make this ibles .
Step 1: Components Required
components required are as above:
- Arduino
- Servo
- Knife
- cardboard
- tape
Step 2: Servo Connection
servo connection is a type of motor that is used to control the angular rotation and speed of the rotating shaft.
Here Servo is connected is as follow:
- The VCC pin or red wire is connected to the 5v pin on the Arduino Uno.
- The GND pin or the maroon wire is connected to the GND pin of the Arduino Uno.
- The signal wire or the orange wire is connected to the digital pin of 3 of Arduino Uno.
Step 3: Cutting and Fixing
cut a cardboard into circle or any of shape as you wish to.Then place your horn on the cardboard,Then stick the horn on the cardboard using a tape or hot glue.
Step 4: Coding
#include "Servo.h"
Servo myservo; // create servo object to control a servo // twelve servo objects can be created on most boards
int pos = 0; // variable to store the servo position
void setup() { myservo.attach(7); // } }
Participated in the
Makerspace Contest 2017
Be the First to Share
Recommendations
82 11K
168 13K | https://www.instructables.com/Arduino-Product-Display/ | CC-MAIN-2022-05 | refinedweb | 254 | 63.59 |
How are Generic Functions Checked?
Give a program like:
var foo[A](arg A) arg bar end foo(Baz new)
How and when do we verify that:
baris a valid message on
arg.
Bazis a valid argument type.
- That the generic parameter
Acan be inferred from
arg?
First Question
For the first question, there are basically two solutions:
C++-style. A generic method cannot be checked on its own. Instead, it gets instantiated at each callsite with specific concrete types, and its only then that it gets checked.
C# with constraints style. A generic method's type parameters are annotated with "where" clauses that limit the type of type arguments. A method can be checked against its constrained types.
C++ is a mess, so let's try to go with C# style. A type parameter like the above
one with no constraint would default to
Object (or
Dynamic?), meaning the
above code won't check because
Object doesn't have a method
bar. To fix it,
you'd have to do:
var foo[A Baz](arg A) arg bar end
Where
Baz is assumed to be some type that has a method
bar. Type-checking a function now means:
- Evaluate the type annotations of the static parameters.
- In the function's type scope, bind those types to the static parameter names.
- Evaluate the type annotations of the dynamic parameters in that scope.
That should conveniently alias
A to
Baz, so when we look up
A later, we'll get the constrained type. Now we can type-check a method independent of its use. Win.
Second Question
Now, the second question. How do we know that
Baz is a valid (dynamic) argument type? Let's skip over inference now and just consider:
foo[Baz](Baz new)
To check this, we just need to:
1. Evaluate the constraint on the static parameter (
Baz).
2. Evaluate the static argument.
3. Test for compatibility.
4. In the function's type scope, bind the static argument value (not the
constraint type) to the static parameter name (
A).
5. Evaluate the type annotations of the dynamic parameters in that scope.
Third Question
The last question, inferring static type arguments. That's going to get tricky. Consider a function like:
var foo[A, B, C](a List[A], b B | C, c (A, (B, C)))
We need to answer two questions: 1. Are all static type parameters present in the dynamic parameter's type signature? (In this case, they are.) 2. If so, given an evaluated type for the dynamic argument and the expression tree for its parameter type, what are the values of the static type parameters?
The first one we can do statically independent of the actual type semantics by just walking the parameter type tree. The second one is hard because it's another core capability every type-like object will need to support. So the question is, given:
var foo[A, B](a Dict[B, A]) foo(Dict[String, Int] new)
Is there a way we can ask
Dict to help us figure out what
A and
B are given
Dict[String, Int]?
Here's one idea. We'll create a special tag type that just represents a placeholder for a type parameter, so that we can treat "A" and "B" as fake types. Given those, we can evaluate:
Dict[B, A] // which desugars to Dict call[B, A]
And get a type object back (an instantiated
Dict) with our special type tags embedded in it. Then we evaluate
Dict[String, Int], the actual argument type. Now we've got two objects we can line up, so we do:
Dict[B, A] inferTypesFrom(typeMap, Dict[String, Int])
That will take some sort of map that maps parameter names like "A" to their inferred type. Every type will be expected to implement this. An implementation would look something like:
def Dict[K, V] inferTypesFrom(typeMap, other IType) let dict = other as(Dict) then let keyType = K as(TypeParam) then typeMap map(keyType name, dict keyType) else K inferTypesFrom(typeMap, dict keyType) end let valueType = V as(TypeParam) then typeMap map(valueType name, dict valueType) else V inferTypesFrom(typeMap, dict keyType) end end end
Note the recursive calls to
inferTypesFrom. Those handle nested types like
Dict[(Int, String), List[String]]. The
typeMap will have to handle collisions where a type parameter appears more than once and is bound to conflicting types like:
var foo[A](a A, b A) foo(1, true)
I think this would work. Handling or types and some other stuff may be a bit tricky. Figuring out how to reuse this code across all generic types will be a bit of work too. | http://magpie.stuffwithstuff.com/design-questions/how-are-generic-functions-checked.html | CC-MAIN-2017-30 | refinedweb | 782 | 72.66 |
Python does support File handing. I mean, Python provides various functions that allow us to create a file, open, read the data, and write data to it. Unlike other languages, this language handles files differently. It treats Files as text or binary based on the data, and each line has to terminate with an End Of Line character like newline, comma, etc.
Python provides an important function called open to working with files. The File open function accepts two parameters, and they are file_name and mode. The syntax of this file open function is
f_open = open(“File_Name”, “mode”)
In the above Python file handling syntax, the mode is an optional argument. There are four different modes for opening a file.
- r – It means Read Mode, which is the open file’s default mode. It opens an existing one for reading if there is no such document, then it throws an error.
- r+ – This is for both reading and writing mode.
- a – Means Append. If it exists, it opens that one and appends the data to the existing content. If there is none, it creates a new one with the specified name.
- w – Means Write mode. It opens a file in write mode to handle and overrides the existing content with new content. It creates a new if it doesn’t exist.
- x – Use this to create a new one. It throws an error if it already exists.
- Apart from these modes, you can also specify the type of data that the file has to handle. I mean binary data or text mode
- t – Text Mode, which is the default mode.
- b – Binary mode.
f_open = open("PythonSample.txt")
The above file handling code is equal to f_open = open(“PythonSample.txt”, “rt”)
Python File Operations Examples
The following list of examples helps to Create a New File, Open, Rename it, and Delete it. Next, write text to it, close, append, write, and read available options. To demonstrate most of these file methods, including reading and writing, we use the below txt text sample.
Python Open File
In this language, you can open a file specifying the name or the full path. The full name opens it in the current working directory. However, you can access documents using the full path at any directory. Before we start, let me use the listdir function to get the list in the current directory.
import os print(os.getcwd()) print(' ') print(os.listdir())
You can open a file in this language specifying the name or the full path.
f_open = open("PythonSample.txt")
Or use the full path
f_open = open("C:/users/Document/PythonSample.txt")
Python read file
The file read function is to read the data inside it. It is a simple Python example, which opens a sample txt in the read mode. Next, we used the read function to read data inside that txt.
f_open = open("PythonSample.txt", "r") print(f_open.read())
HelloWorld Suresh Python Tutorial Welcome to Tutorial Gateway File Examples
While working with string data, you can use the read function with a parameter to restrict the number of characters returned by the read function. For example, read(5) reads the first five characters or read(10) read the first 10 characters.
f_open = open("PythonSample.txt", "r") print(f_open.read(5)) print(f_open.read(2))
Reading required characters output
Hello Wo
The first read statement prints the first five characters. Next, read is printing the next two. If you want to print the first 10, then you have to close and reopen it.
f_open = open("PythonSample.txt", "r") print(f_open.read(5)) f_open = open("PythonSample.txt", "r") print(f_open.read(8))
Hello HelloWor
Python Read file using For Loop
You can also use loops to read all the data in a sequence. For instance, the example below uses For Loop to read each line available in this txt.
f_open = open("PythonSample.txt", "r") for char in f_open: print(char)
In this file example, the for loop over the complete text in the sample text and prints each and every line.
f_open = open("PythonSample.txt", "r") for line in f_open: print(line, end = '')
For Loop to read opened
HelloWorld Suresh Python Tutorial Welcome to Tutorial Gateway File Examples
Python file readline function
The readline function reads the complete line before the end of the line character. This example opens a file and reads the first line from it.
f_sample = open("PythonSample.txt", "r") print(f_sample.readline())
HelloWorld >>>
If you want to read two lines, then call the readline function twice, and so on. Let me print three lines.
f_sample = open("PythonSample.txt", "r") print(f_sample.readline()) print(f_sample.readline()) print(f_sample.readline())
HelloWorld Suresh Python Tutorial
You can use this readline function to read or print the required line. The below readline function code print the second line from a text. If you use the readline function along with an argument, then it behaves the same as the read function with a parameter. I mean, readline(2) = read(2)
f_sample = open("PythonSample.txt", "r") print(f_sample.readline(2)) print() f_sample = open("PythonSample.txt", "r") print(f_sample.read(2))
File readline vs read function output
He He
readlines function
In this language, we have one more function called readlines(). It reads data from the given text and prints all the data in a list format. It separates each line with a proper separator.
f_sample = open("PythonSample.txt", "r") print(f_sample.readlines())
['HelloWorld\n', 'Suresh\n', 'Python Tutorial\n', 'Welcome to Tutorial Gateway\n', 'File Examples\n']
Python close file function
The close file function closes an already opened one. Although it has a Garbage Collector to close them, you should not rely entirely on this. It is always a good practice to close the open one. Here, we opened it, read a single line, and then closed that.
f_sample = open("PythonSample.txt", "r") print(f_sample.readline()) f_sample.close()
HelloWorld
Let me print another line from that closed txt.
f_sample = open("PythonSample.txt", "r") print(f_sample.readline()) f_sample.close() print(f_sample.readline())
You can notice the error thrown by the shell.
The above program shows the use of the close method. However, it is not advisable to use the way that we showed above.
In real-time, you have to use with statement to close the open properly. Or, some people say, we can go with try finally block. We will show you both.
Using Try finally in File Operations
try: f_sample = open("PythonSample.txt", "r") print(f_sample.read()) finally: f_sample.close()
HelloWorld Suresh Python Tutorial Welcome to Tutorial Gateway File Examples
with Statement
The Python with statement makes sure that every file opened by this closed irrespective of the errors.
with open("PythonSample.txt", "r") as f_sample: print(f_sample.read()) f_sample.close()
HelloWorld Suresh Python Tutorial Welcome to Tutorial Gateway File Examples
The with statement is not about closing the file object. You can use this with to open a file in any mode. I mean, you can use this to read data, write methods to data so on.
Most importantly, we use this in the case of manipulations, where we have to execute multiple statements. This statement holds them in a block so that we can write multiple statements such as read and write files in that block.
with open("PythonSample.txt", "w") as f_sample: f_sample.write("First Line") f_sample.close() with open("PythonSample.txt", "r") as f_sample: print(f_sample.read()) f_sample.close()
HelloWorld Suresh Python Tutorial Welcome to Tutorial Gateway File Examples >>>
Python Write File
It provides the write method to write content or data to a file. Before we get into the write function example, I assume you remember what I said earlier. You have to use either a mode for append or w for write mode.
This example opens the Sample text in writing mode and writes a welcome message. Next, we opened that to print the data.
f_demo = open("PythonSample.txt", "w") f_demo. write("Welcome to Tutorial gateway") f_demo.close() # Let me open and check f_demo = open("PythonSample.txt", "r") print(f_demo.read())
Welcome to Tutorial gateway
This time, we write multiple lines of code using the write method.
f_writedemo = open("PythonSample.txt", "w") f_writedemo. write("Python") f_writedemo. write("\nTutorial gateway") f_writedemo. write("\nHappy Coding!") f_writedemo. write("\nHi \nHello \nCheers") f_writedemo.close() # Let me open it and check f_writedemo = open("PythonSample.txt", "r") print(f_writedemo.read())
Python Tutorial gateway Happy Coding! Hi Hello Cheers >>>
We have been working on this txt from the beginning. However, this write function erased everything and returned this Welcome message.
The write method accepts a list as an argument, so you can write a list of items in one go. I mean, without using multiple write functions.
f_writelinesdemo = open("PythonSample.txt", "w") text = ["First Line\n", "Second Line\n", "Third Line\n", "Fourth Line"] f_writelinesdemo. writelines(text) f_writelinesdemo.close() # Let me open and check f_writelinesdemo = open("PythonSample.txt", "r") print(f_writelinesdemo.read())
First Line Second Line Third Line Fourth Line >>>
append File
We are opening a file in append mode and checking what happens after writing a hello message.
f_demo = open("PythonSample.txt", "a") f_demo. write("\nHell World!") f_demo.close() # Let me open the file and check f_demo = open("PythonSample.txt", "r") print(f_demo.read())
First Line Second Line Third Line Fourth Line Hell World! >>>
How to Write to a File using For Loop in Python?
You can also use the for loop to write multiple lines of information. Here, we are sampling writing 10 lines in a Sample10 text.
f_loopdemo = open("Sample10.txt", "w") for i in range(1, 11): f_loopdemo.write("This is the %d Line\n" %(i)) f_loopdemo.close() # Let me open the file and check f_loopdemo = open("Sample10.txt", "r") print(f_loopdemo.read())
Write to it using a for loop output
This is the 1 Line This is the 2 Line This is the 3 Line This is the 4 Line This is the 5 Line This is the 6 Line This is the 7 Line This is the 8 Line This is the 9 Line This is the 10 Line
Create a New File in Python
Until now, we are working with the existing one. However, you can create your own using the read method. For this, you have to use either x to create a new, a mode, or w mode. These three modes create a new one, but the last two are different.
f_create = open("NewFile.txt", "x")
Let me create another file and write something to it. So, you can see the new one along with the text. This program creates a Sample1 text, writes a string, and closes it. Next, we opened that one in the read mode and printed the data from it.
f_create = open("Sample1.txt", "x") f_create.write("Python Program") f_create.close() # Open the Sample1 file f_create = open("Sample1.txt", "r") print(f_create.read())
It uses the w for write mode to create a new and writes something to it.
f_wcreate = open("Sample2.txt", "x") f_wcreate.write("Python Tutorial") f_wcreate.close() # Open the Sample1 f_wcreate = open("Sample2.txt", "r") print(f_wcreate.read())
Python Program >>>
The open function along with a mode – append mode.
f_acreate = open("Sample3.txt", "x") f_acreate.write("Tutorial Gateway") f_acreate.close() # Open the Sample1 f_acreate = open("Sample3.txt", "r") print(f_acreate.read())
Tutorial Gateway >>>
Rename
You must import the os module to rename the one within a directory. Within the os module, we have a function that helps us rename existing ones in a directory.
Let me use this file rename function in Python to rename Sample2.txt to NewSample.txt. Next, we opened that renamed one to see the data inside that.
import os os.rename("Sample2.txt", "NewSample.txt") f_sample = open("NewSample.txt", "r") print(f_sample.read())
Python Tutorial >>>
To delete a document from a directory, you have to import the os module. Within the os module, we have a remove function that helps us to remove files from a directory. Let me use this file remove function in Python to delete the Sample3.txt that we created before.
import os os.remove("Sample3.txt")
It works like a charm; however, if you try to delete a file that doesn’t exist, throw an error. Let me remove the Sample3.txt that we deleted before.
When you run, it throws an error: FileNotFoundError:[Errno 2] No such file or directory: ‘Sample3.txt’. To avoid these kinds of errors, it is advisable to check whether it exists or not.
import os if os.path.exists("Sample3.txt"): os.remove("Sample3.txt") else: print("Hey! No such file exists")
Hey! No such file exists >>>
seek and tell Functions
These are the two functions to find the pointer position.
- tell(): It tells you the current pointer position.
- seek(): To set the pointer position to a specific position.
with open("PythonSample.txt", "r") as f_sample: print("Pointer Initial Position : ", f_sample.tell()) print(f_sample.read()) print("Current Pointer Position : ", f_sample.tell()) f_sample.seek(22) print("\n Current File Position : ", f_sample.tell()) f_sample.close()
Pointer Initial Position : 0 First Line Second Line Third Line Fourth Line Hell World! Current Pointer Position : 57 Current File Position : 22 >>>
Attributes
Python provides the following file attributes to get the information about them that you are working with.
- name: It returns its name of it.
- mode: On what mode do you open? For instance, r mode, w mode, etc.
- closed: returns True if the given one is closed otherwise, False.
f_open = open("PythonSample.txt", "r") print(f_open.read()) print('\nName = ', f_open.name) print('Mode = ', f_open.mode) print('closed? = ', f_open.closed) f_open.close() print('\closed? = ', f_open.closed)
Pointer Initial Position : 0 First Line Second Line Third Line Fourth Line Hell World! Name = PythonSample.txt Mode = r closed? = False closed? = True >>> | https://www.tutorialgateway.org/python-file/ | CC-MAIN-2022-40 | refinedweb | 2,291 | 68.97 |
,.
In this ebook,.
To create XML code using XmlDocument,.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml; namespace cpap1 { class Program { static int Main(string[] args) { XmlDocument docXML = new XmlDocument(); docXML.LoadXml(""); return 0; } } }
In the next sections, we will see how to create an XML file with the Add New Item dialog box. After creating the file and displaying it in the Code Editor, you can start editing it. The Code Editor is equipped to assist you with various options. Whenever you start typing, the editor would display one or more options to you. Here is an example:." ?>:
using System; using System.Xml; namespace VideoCollection1 { class Program { static int Main(string[] args) { XmlDocument xmlDoc = new XmlDocument(); xmlDoc.Load("Videos.xml"); return 0; } } }: provide access to the root of an XML file, the XmlDocument class is equipped with the DocumentElement property.> <; } } } | http://www.functionx.com/csharp/xml/Lesson01.htm | CC-MAIN-2014-42 | refinedweb | 148 | 53.78 |
Documentation enhancements
Do you think that documentation (JavaDoc) is not sufficient? That it lacks some general information? Too restricted? Hard to find the things? We'd like to hear your complaints and suggestions.
The problem of turn-around time is not related to Sun engineers not being able to notice documentation problems. Right after you filed doc problem bug, we are ready to fix it (if it is really a problem). There are procedural comlexities that do not allow us to fix such problems until the next major release of Java appears.
Denis,
That is not a problem. Here is what I expect from Sun:
The latest stable release is JDK 1.5
The current development release is JDK 1.5.1
I file a RFE against documentation found in JDK 1.5
Sun fixes it in the JDK 1.5.1 documentation
This is perfectly acceptable *as long as* you make the JDK 1.5.1 documentation available while it is being developed. One of the things I really enjoyed about the JDK 1.5 development was that while it was in beta I had free access to the documentation and see my suggestions affecting the final product. This was a big moral boost for me.
I hope such a procedure is acceptable to Sun. Allow us to file RFEs, fix the problems immediately, and make them available online under the guise of the unofficial documentation for the upcoming release.
Javadocs are pretty good. I trust javadoc API documentation much more than many other forms of API documentation. However, there haven't been any fundamental changes to the browsing experience since its inception. Our "hare" has been asleep, basking in its lead, but it is probably time to wake up and make some more advances lest the tortoises plod on past unbeknownst.
One of the problems with the current API docs, is that there is no distinction between what is "core" to the API, and what is there as "icing on the cake", to cover the rarer use cases.
By core I guess I mean the 10% of the methods and classes that get used 90% of the time (or 20%/80%).
The "core" classes in a package, and the "core" methods and constructors in a class are the ones that are most likely to appear in a succinct example, they are the essential things needed to use the API in a simple case. They are the things you need to understand first in order to use the API.
I am rambling on about this because the concept is a little subjective, but you hopefully all get the point.
As a potential solution to this, or at least as a starting point, I offer the following...
What I have tried is a small taglet and two tags to try and overcome this problem.
I use a @core tag to indicate the core classes and methods.
In every class's javadoc I put a @corelist tag, where I want a list of the core declarations related to this class.
What the taglet does is generate at the @corelist tag a list of all the core declarations in this package that use this class in some way. This list is broken into three categories
* Obtain
* Modify
* Use
"Obtain" lists the methods and constructors that are marked as @core and return (or construct) an instance of this class (or a subtype of this class/interface). A core constructor, a factory method in another class, or a static factory in this class would all be included here.
"Modify" lists the core methods in this class with a void return type.
"Use" lists other @core methods and constructors in this package that use this class as an argument type.
The lists only contain things in the same package (imagine what the @corelist would look like for String if this were not the case).
I put a @corelist in the package javadoc, and that lists the @core classes in the package.
With this, I am able to see immediately which classes in a package are @core. Further, for any class, whether or not it is a @core class, I can easily answer the questions
* "How would I normally get one of these" ("Obtain")
* "How would I normally configure it to my needs" (Modify)
* "What is the common ways to use this class once I have one" (Use)
If this mechanism were to be implemented in the standard doclet, the @corelist tag would not be required since the doclet would generate a corelist at the appropriate place for every class.
API writers would merely have to identify the @core classes and methods and constructors, and tag them with @core.
In a package that I tested this on, without spending too much time marking things as core, this yielded a very valuable web of links summarising the package. The package had a singleton service provider through which all other objects were obtained, but often indirectly. This was very apparent from the core links, but not at all obvious in the original API unless you browsed the whole package.
1) more examples in the javadocs. Just a pointer to the java tutorial pages would be better than nothing. Many of the MS docs have little code examples and that makes things just a little bit easier.
2) A picture is worth a thousand words, especially for graphics and component APIs. Again, links to the java tutorial might be OK here too.
3) Inexperienced developers often don't realize that some resources need to be disposed(). It would be great if:
a) ALL resources that need to be freed use a consistent API (dispose(), close(), etc...) you pick one
b) This API is explicitly called out in the javadoc
(a) is better than (b) but either or both would be great.
I agree, there needs to be better integration of API docs and samples.
The Java tutorial is a good place to start, but I think we can do better. MSDN has been good about this, for example for VBA documentation: when describing an API, they almost always have a small, usable code sample that shows a working example.
This would be a lot of work for the documentation team, but the current situation is a lot of trouble. You have to identify the API that is relevant, check to see if there is a tutorial entry, check for a sample in the JDK, read, test, etc.
If we could more easily cut-and-paste a sample and test it right away, that would be much preferred.
Patrick
JavaDoc about classes other than collections that use generics should be more extensive (examples would be good), as this can become pretty confusing.
Example: Class
Class extends U> asSubclass(Class clazz)
I don't get it, does this method have two return types? and Class extends U>? Certainly not. According to the description, the return type is probably Class extends U> ... but what is the doing there in front?
The JavaDoc about this method does not even include T and U. That would probably clarify it.
(This is the JavaDoc for the method:
Casts this Class object to represent a subclass of the class represented by the specified class object. Checks that that the cast is valid, and throws a ClassCastException if).
Returns: this Class object, cast to represent a subclass of the specified class object. ).
The second problem is with the size of the JDK documentation download. Because it is in HTML, installing documentation is a hassle, as is figuring out which documentation you have installed if the docs don't clearly state the version.
My suggestion is to keep the HTML version for standard use, but to support a new JavaDoc toolkit based on something like Ashkelon (). Ashkelon indexes all APIs using a doclet into a relational database. It then provides lookup mechanisms, and a set of JSP pages to read from these lookups and provide searches.
You can load one or more APIs and if there are cross references between the packages, these are automatically linked in the database (e.g. any 3rd-party library that references java.lang.String will reference the JDK API). Linking is not just on inheritance, but on return values, parameters, interfaces...and the search mechanism allows for searching on any of these aspects of the code.
If Sun could throw support behind this sort of approach, Ashkelon could be extended with a REST/SOAP download mechanism so that new APIs could be downloaded easily as needed. They can also be culled from the source code.
The output could be in a browser (dynamic using JSP), or HTML could be produced (for travelling) or a viewer could be used in Swing. So--static and dynamic output in a variety of output styles. There might even be new business opportunities for "skinning" the display of the documents.
You can also use the API to provide for some utility functions across IDEs--for example, generate a stub of an interface implementation or extend a class, etc.
Advantages
1) Can host multiple APIs, including new extension packages from Sun, 3rd-party FOSS and commercial libraries, with automatic linking
2) Can pull down new APIs docs as packages are loaded, either from the web or by scanning the source using a doclet
3) Multiple output formats--could generate HTML, XHTML, XML, PDF, or view in a rich client
4) Clear view of dependencies between packages
5) All sorts of searching supported--Ashkelon lets you find any method that returns JTree or other class, for example
6) Storage in any relational database, allowing for team/corporate, centrally-maintained API docs for internal reference
etc.
The Java community is now expanding beyond Sun and commercial vendors to include a large and vibrant FOSS community. Developers will use more and more APIs and need a single way to view, index, and search them without trouble.
Obviously we can use FOSS tools like Ashkelon ourselves, but having a standard API for this, and a standard doclet backed by Sun, would encourage this newer, improved form of API documentation.
Patrick
I think JavaDoc is pretty good and most things are easy to find.
Some things are not specified thouroughly / easy to find, e.g. ArrayList.remove(o) and ArrayList.contains(o) - the JavaDoc does not state whether the comparison is done via .equals or via ==; only when navigating to List or Collection one finds the statement that it is done via .equals (except for null, then via ==). In this case either the JavaDoc for ArrayList should be completely empty so that one automatically clicks on the link under "specified by", or it should be complete enough to at least include this essential information.
Also, there could be more links to tutorials for complicated topics. There are tons of links for Swing, but e.g. no link from any java.lang.reflect classes to the reflection tutorial.
The following has nothing to do with Mustang or J2SE: All API JavaDocs should be available for browsing online, too. The J2ME API JavaDocs are currently only available for download as a zipfile. That can be a pain when working in varying locations.
Denis,
We should have the ability to file RFEs against the Javadoc *directly off the Javadoc page*. That is, say I spot a mistake in the documentation or need it clarified, I should be able to file a RFE with the suggestions text to be inserted and a Sun engineer would simply accept/reject it. It would improve the turnaround time for these kinds of issues.
Gili
BTW: I want to note, Sun wouldn't need to write any special code to make this happen. To my recollection, there are already mature open-source packages out there that do this. Sun would simply need to install it on their servers and give it a whirl.
Current javadocs are absolutely insufficient. I've written Doclets to extract design documentation (not API documentation) from a source tree. I strongly believe that ALOT more could be done with javadocs.
(1) Firstly, alot of javadocs are out of date, or just not sufficient. I have commonly read documentation which tells you about settings and properties without going into specific details. The end result is the need to drill down into source code. (I'm refering to JDK docs here).
(2) The standard doclet should be extended to allow package comments, method comments and in-source comments to generate 'articles'. These articles should allow related topics to be detailed where they are appropriate and are then stiched together during javadocs and could cover configuration parameters, executable parameters, etc to be configured. It makes alot of sense to have the documentation of certain types of information inside the javadocs as close to the code as possible.
(3) Some sort of 'Article' standard is required. While not related to javadocs, definately documentation related. The MSDN provides a certain level of quality documentation and the fact that all documentation for their products can be found on the MSDN. I would recommend trying to use some sort of standard to allow various technologies that meet certain standards of documentation to have their user manuals and integrated into some sort of developer network. (The Java Technology Developer Network).
I would like to see something similar to jdocs.com but for user manuals, articles, HOWTOs, bug reports, etc.
JGuru, JavaPractices.com, javaalmanac.com, JDocs, etc should all be integrated. There are technical, copyright and practical issues that would need to be solved before integrating the various documentation sources for java.
(4) Create a search 'applet' that allows javadocs to be offline searchable.
Good points Tim.
I was thinking about this, and it seemed that it would be nice to be able to share our "articles".
Then it just occurred to me, that what might work really well, is if every entity in the javadocs (package, class/type, method, field) had a link to its own wiki page at SUN.
If you found something improveable in the docs, click thru to the wiki page and make a note there.
If you write a simple example, put it in the wiki.
If you are bamboozled, read the wiki.
If you find a tutorial that is really helpful, link to it from the wiki.
If you file a bug against something, link to the bug in the wiki.
If you work at SUN and want to improve the documentation, read the wiki.
Sun Build a separate set of api docs where the links go to a static copy of the wiki at build time, bundled with the javadocs (for those without "always on" internet connections). Those static copies could have a link to the live wiki (for when they are "connected" and want to see the latest incantation of the page). Maybe monthly updates (possibly incremental) to the static wiki set, could be released, so they could be downloaded and spliced into the existing docs.
Bruce
Yes, I think it is a nice idea, I'll try to move it forward.
One problem I see with it - what happens with user doc comments between releases?
Do you mean, when a new release is made?
If so, then I guess the doc writers need work with both the offical javadocs, and the wiki. If they take on board stuff from the wiki, then the wiki should have that removed (since its now in the javadocs).
Actually theres a gotcha there too isn't there. We don't want the tiger comments in the wiki deleted just becuase they have been fixed/incorporated etc in mustang. I guess the solution is to have a wiki for each major release, with a carefully considered point at which a copy of the wiki is made. Either that or use CVS behind the wiki and do branching stuff, the mustang wiki is a branch off the tiger wiki. manage the wiki as you would the code under CVS.
Sure its not quite trivial at new release time, but certainly not unresolvable.
Couldn't the comments be tagged obsolete or "hidden" by the guys that fix the bug. They can check the wiki (since it's @ Sun) and don't delete it, but mark it "obsolete as of 1.5.1_02" or something to that effect.
Cheers,
Mikael
I _really_ dont like the idea of a Wiki that is pinned against javadocs. I want editable documentation that sits next to javadocs that covers stuff not related to the API. Understand the difference between API documentation and a HOWTO or a FAQ. They both might refer to API documentation but its definately _NOT_ pinned against the javadocs.
Sticking to fixed articles that can be extracted from javadocs (eg: A description on configuration parameters and boundaries, how the thing's going to 'fail') or included with documentation is good. While the doc-files directory partially solves this problem, there is a need to merge the JavaDocs with a bigger website. Articles should come up as articles, javadoc api docs as javadoc apis, FAQs, HOWTO, etc.
In addition, after each release, if articles are focused on a particular topic it would be easier to review each article in order.
A wiki requires localised topical conversation (Admittidly, Hibernate does well with its wiki) and I can easily see things expanding pretty quickly into something unmanaged.
The major problem in javadocs is the lack of integration btw documentation sets from multiple APIs. For example, while reading locally J2EE docs for Javax.servlet.Servlet, getServletInfo() contains a link to java.lang.String which is in the J2SE docset. But the link sends me to the online docs for J2SE 1.4.2 at Sun's website. I really wanted to jump to the local documentation for J2SE 5.0, also installed in my machine. And I want a single index in the left frames, in the Overview, Tree and Index pages, with every API from J2SE, J2EE, J2ME, and also *all* other Sun APIs that are not included in any of these J2*E platforms. Please produce a single monolithic doctset with ALL the APIs and update it whenever ANY single API is updated. Even if that means updating a 200Mb bundle every month. Do the same with all your tutorials, and make them more integrated with the javadocs. Cross-reference stuff so if I'm reading the javadocs for Servlet.getServletInfo(), I can easily jump to any tutorial, demo, or guide that explains or uses this particular API. Of course this is a huge work, not viable to do manually, but I guess you can do some automated scanning of the existing materials.
This brings in the second major problem: HTML browsers are not a good host for huge, complex docsets. Please move to a JavaHelp-based viewer so we also get decent index and search and other features that can be programmed easily with JavaHelp -- like clicking a button to run some demo, or presenting crodss-references (like all tutorials and demos that use some API) as popups, etc.
More pictures...
For example (and I suggested this to the Java2D guys a year ago already) the Graphics object does rotations around an coordinate system that is not intuitive. Simple posting a couple of pictures at the right spots would make many be able to understand this stuff a whole lot faster.
Connection of documentation and RFEs can be good i think.
Possibility to see enhancements of api in context of documentation and other RFEs for parts around can be helpfull.
I think the solution is to have a single Wiki, but allow 'entries' to be tagged with which versions of the Javadocs they apply to. Then allow some kind of filtering mechanism when a page is displayed. Say for example I contributed something for Tiger, and tag it as "1.5+". Along comes Mustang, and my contribution is no longer applicable, so someone simply changes the tag to "1.5", which means it will be filtered out when viewed by Mustang users, but still visible to Tiger users. (Hopefully the page would contain a mechanism to manually change the filter.)
Such a system would also allow Wiki entries for legacy versions of Java (although obviously old Javadocs do not contains links directly to the Wiki). | https://www.java.net/node/643634 | CC-MAIN-2015-22 | refinedweb | 3,388 | 62.48 |
Yesterday an internal customer asked me on how he can create an ordered test programmatically. This was the first time someone had asked me this but I could imagine that it is useful if you have a fixed logic to define the order and have reasonable # of ordered tests to create/maintain.
Since ordered tests is an xml file with a fixed schema, it is very easy to create it programmatically. Here is how an ordered test look like: –
<?xml version="1.0" encoding="UTF-8"?>
< OrderedTest
< TestLinks>
< TestLink
</TestLinks>
< /OrderedTest>
So to create an ordered test, you have to generate an xml similar to this. I am assuming that most of the xml is self explanatory (xsd is in %VS Install dir%\Xml\Schemas\vstst.xsd ) and the only confusing thing is the highlighted guids. But in case you have any question on the xml, please feel free to leave a comment and I will revert back.
Here the first guid (1aeba90d-9c29-4b01-ae4b-fe86bba94a82) is the ID of ordered test and for this you can generate a new guid and put it there.
The second guid is about the ID of the testMethod1 and you can use a function similar to the one mentioned below to convert the test (<Name space name>.<class name>.<test method name>) to a guid.
private static Guid GuidFromString(string data)
{
SHA1CryptoServiceProvider provider = new SHA1CryptoServiceProvider();
byte[] hash = provider.ComputeHash(System.Text.Encoding.Unicode.GetBytes(data));
byte[] toGuid = new byte[16];
Array.Copy(hash, toGuid, 16);
return new Guid(toGuid);
}
Enjoy !!
Thanks for the post, this is useful. I notice that when I change the namespace of tests, that it will break my *.orderedtest files. I suspect this is because VS internally uses a method similar to what you have above, and computes a hash of the "namespace.class.method". This can be quite an inconvenience when changing namespace or class names across a large number of tests. (Think several hundred)
Do you know of a clean and easy way to update test guids when performing such refactors? At the moment, I'm stuck with quite a few broken *.orderedtest files because of a simple namespace change.
(This would have been much easier, and would be just a simple text replace, if the *.orderedtest just used the "namespace.class.method" text directly, instead of using this brittle hash approach…)
Thanks!
Sorry, I meant to say… "do you know of a clean and easy way to do this, other than to begin programmatically generating my *.orderedtest files"
🙂
@Carter : The guy who set this up is a fking moron! How the fk you want to work with that if you can't rename your DLL, namespace and/or class. I'm flabbergasted. I feel like killing someone…
Any one can explain about highlighted things and why there are present in ordered test ?what is the use of those things? | https://blogs.msdn.microsoft.com/aseemb/2013/10/05/how-to-create-an-ordered-test-programmatically/ | CC-MAIN-2017-17 | refinedweb | 484 | 72.66 |
Opened 7 years ago
Closed 7 years ago
#2350 closed Bug (Fixed)
Strange issue when using $SS_ETCHEDVERT and $SS_ETCHEDHORZ
Description
If you run the code below, you will see that $SS_ETCHEDVERT will create a static control 2 pixels wide on the left side of where the label control is supposed to be. If you minimize the window, and then restore the it, the appearance of the label has changed to appear to be using the $SS_ETCHEDFRAME style. Using the AU3Info tool I can see that the control's style doesn't change, just the appearance. If you uncomment any of the other lines that use the GUICtrlCreateLabel function, and comment the *VERT line, you will see it happens with the *HORZ style as well.
#include <GUIConstantsEx.au3> #include <StaticConstants.au3> Example() Func Example() Local $widthCell, $msg GUICreate("My GUI") $widthCell = 70 ;~ GUICtrlCreateLabel("Line 1 Cell 1", 10, 30, $widthCell, 80, $SS_ETCHEDHORZ) GUICtrlCreateLabel("Line 1 Cell 1", 10, 30, $widthCell, 80, $SS_ETCHEDVERT) ;~ GUICtrlCreateLabel("Line 1 Cell 1", 10, 30, $widthCell, 80, $SS_ETCHEDFRAME) GUISetState() Do $msg = GUIGetMsg() Until $msg = $GUI_EVENT_CLOSE EndFunc ;==>Example
This occurs using AutoIt 3.3.8.1 and 3.3.9.4
Attachments (0)
Change History (1)
comment:1 Changed 7 years ago by Jon
- Milestone set to 3.3.9061] in version: 3.3.9.10 | https://www.autoitscript.com/trac/autoit/ticket/2350 | CC-MAIN-2020-16 | refinedweb | 218 | 57.61 |
Dungeon room generation. More...
#include "angband.h"
#include "cave.h"
#include "math.h"
#include "game-event.h"
#include "generate.h"
#include "init.h"
#include "mon-make.h"
#include "mon-spell.h"
#include "obj-tval.h"
#include "parser.h"
#include "trap.h"
#include "z-queue.h"
#include "z-type.h"
Dungeon room generation.
Copyright (c) 1997 Ben Harrison, James E. Wilson, Robert A. Koeneke Copyright (c) 2013 Erik Osheim,.
This file covers everything to do with generation of individual rooms in the dungeon. It consists of room generating helper functions plus the actual room builders (which are referred to in the room profiles in generate.c).
The room builders all take as arguments the chunk they are being generated in, and the co-ordinates of the room centre in that chunk. Each room builder is also able to find space for itself in the chunk using the find_space() function; the chunk generating functions can ask it to do that by passing too large centre co-ordinates.
Build a circular room (interior radius 4-7).
References chunk::depth, draw_rectangle(), FALSE, FEAT_FLOOR, FEAT_GRANITE, FEAT_SECRET, fill_circle(), find_space(), chunk::height, rand_dir(), randint0, randint1, square_set_feat(), TRUE, vault_monsters(), vault_objects(), and chunk::width.
Builds a cross-shaped room.
Room "a" runs north/south, and Room "b" runs east/east So a "central pillar" would run from x1a,y1b to x2a,y2b.
Note that currently, the "center" is always 3x3, but I think that the code below will work for 5x5 (and perhaps even for unsymetric values like 4x3 or 5x3 or 3x4 or 3x5).
References chunk::depth, draw_rectangle(), FALSE, FEAT_FLOOR, FEAT_GRANITE, FEAT_SECRET, fill_rectangle(), find_space(), generate_hole(), generate_open(), generate_plus(), generate_room(), chunk::height, height, MAX, one_in_, place_object(), rand_range(), randint0, randint1, set_marked_granite(), TRUE, vault_monsters(), vault_traps(), chunk::width, and width.
Build a greater vaults.
Since Greater Vaults are so large (4x6 blocks, in a 6x18 dungeon) there is a 63% chance that a randomly chosen quadrant to start a GV on won't work. To balance this, we give Greater Vaults an artificially high probability of being attempted, and then in this function use a depth check to cancel vault creation except at deep depths.
The following code should make a greater vault with frequencies: dlvl freq 100+ 18.0% 90-99 16.0 - 18.0% 80-89 10.0 - 11.0% 70-79 5.7 - 6.5% 60-69 3.3 - 3.8% 50-59 1.8 - 2.1% 0-49 0.0 - 1.0%
References build_vault_type(), dun_data::cent_n, chunk::depth, dun, FALSE, i, cave_profile::name, one_in_, dun_data::profile, randint0, and streq.
A single starburst-shaped room of extreme size, usually dotted or even divided with irregularly-shaped fields of rubble.
No special monsters. Appears deeper than level 40.
These are the largest, most difficult to position, and thus highest- priority rooms in the dungeon. They should be rare, so as not to interfere with greater vaults.
References dun_data::cent_n, dun, FALSE, FEAT_FLOOR, FEAT_RUBBLE, find_space(), generate_starburst_room(), chunk::height, height, i, one_in_, randint0, randint1, ROOM_LOG, TRUE, chunk::width, and width.
Build an interesting room.
References build_vault_type().
Build a large room with an inner room.
Possible sub-types: 1 - An inner room 2 - An inner room with a small inner room 3 - An inner room with a pillar or pillars 4 - An inner room with a checkerboard 5 - An inner room with four compartments
References chunk::depth, draw_rectangle(), FALSE, FEAT_CLOSED, FEAT_FLOOR, FEAT_GRANITE, FEAT_SECRET, fill_rectangle(), find_space(), generate_hole(), generate_plus(), generate_room(), chunk::height, height, i, one_in_, place_object(), place_random_stairs(), place_secret_door(), randint0, randint1, set_marked_granite(), square_iscloseddoor(), square_set_door_lock(), TRUE, vault_monsters(), vault_objects(), vault_traps(), chunk::width, and width.
Build a lesser vault.
References build_vault_type(), dun, cave_profile::name, one_in_, dun_data::profile, and streq.
Build a medium vault.
References build_vault_type(), dun, cave_profile::name, one_in_, dun_data::profile, and streq.
Moria room (from Oangband).
Uses the "starburst room" code.
References chunk::depth, FALSE, FEAT_FLOOR, FEAT_RUBBLE, find_space(), generate_starburst_room(), chunk::height, height, i, one_in_, randint0, randint1, TRUE, void(), chunk::width, and width.
Build a monster nest.
A monster nest consists of a rectangular moat around a room containing monsters of a given type.
The monsters are chosen from a set of 64 randomly selected monster races, to allow the nest creation to fail instead of having "holes".
Note the use of the "get_mon_num_prep()" function to prepare the "monster allocation table" in such a way as to optimize the selection of "appropriate" non-unique monsters for the nest.
The available monster nests are specified in edit/pit.txt.
Note that get_mon_num() function can fail, in which case the nest will be empty, and will not affect the level rating.
Monster nests will never contain unique monsters.
References pit_profile::ave, chunk::depth, draw_rectangle(), dun, FALSE, FEAT_FLOOR, FEAT_GRANITE, FEAT_SECRET, fill_rectangle(), find_space(), generate_hole(), generate_room(), get_mon_num(), get_mon_num_prep(), chunk::height, height, i, mon_pit_hook(), chunk::mon_rating, pit_profile::name, pit_profile::obj_rarity, one_in_, dun_data::pit_type, place_new_monster(), place_object(), randint0, ROOM_LOG, set_pit_type(), TRUE, chunk::width, and width.
Builds an overlapping rectangular room.
References chunk::depth, draw_rectangle(), FALSE, FEAT_FLOOR, FEAT_GRANITE, fill_rectangle(), find_space(), generate_room(), chunk::height, height, MAX, randint1, TRUE, chunk::width, and width.
Build a monster pit.
Monster pits are laid-out similarly to monster nests.
The available monster pits are specified in edit/pit.txt.
The inside room in a monster pit appears as shown below, where the actual monsters in each location depend on the type of the pit
#11000000011# #01234543210# #01236763210# #01234543210# #11000000011#
Note that the monsters in the pit are chosen by using get_mon_num() to request 16 "appropriate" monsters, sorting them by level, and using the "even" entries in this sorted list for the contents of the pit.
Note the use of get_mon_num_prep() to prepare the monster allocation table in such a way as to optimize the selection of appropriate non-unique monsters for the pit.
The get_mon_num() function can fail, in which case the pit will be empty, and will not effect the level rating.
Like monster nests, monster pits will never contain unique monsters.
References pit_profile::ave, chunk::depth, draw_rectangle(), dun, FALSE, FEAT_FLOOR, FEAT_GRANITE, FEAT_SECRET, fill_rectangle(), find_space(), generate_hole(), generate_room(), get_mon_num(), get_mon_num_prep(), chunk::height, height, i, monster_race::level, mon_pit_hook(), chunk::mon_rating, pit_profile::name, pit_profile::obj_rarity, one_in_, dun_data::pit_type, place_new_monster(), place_object(), randint0, ROOM_LOG, set_pit_type(), TRUE, chunk::width, and width.
Rooms of chambers.
Build a room, varying in size between 22x22 and 44x66, consisting of many smaller, irregularly placed, chambers all connected by doors or short tunnels. -LM-
Plop down an area-dependent number of magma-filled chambers, and remove blind doors and tiny rooms.
Hollow out a chamber near the center, connect it to new chambers, and hollow them out in turn. Continue in this fashion until there are no remaining chambers within two squares of any cleared chamber.
Clean up doors. Neaten up the wall types. Turn floor grids into rooms, illuminate if requested.
Fill the room with up to 35 (sometimes up to 50) monsters of a creature race or type that offers a challenge at the character's depth. This is similar to monster pits, except that we choose among a wider range of monsters.
References ABS, ddx_ddd, ddy_ddd, chunk::depth, f_info, FALSE, square::feat, FEAT_BROKEN, FEAT_FLOOR, FEAT_GRANITE, FEAT_MAGMA, FEAT_OPEN, find_space(), feature::flags, get_chamber_monsters(), chunk::height, height, hollow_out_room(), i, square::info, m_bonus(), make_chamber(), chunk::mon_rating, place_random_door(), randint0, randint1, ROOM_LOG, set_marked_granite(), size, sqinfo_on, square_in_bounds(), square_in_bounds_fully(), square_iswall_inner(), square_iswall_outer(), square_iswall_solid(), square_set_feat(), chunk::squares, tf_has, TRUE, chunk::width, and width.
Build a room template from its string representation.
References chunk::depth, FALSE, FEAT_FLOOR, find_space(), chunk::height, square::info, one_in_, place_object(), place_random_stairs(), place_secret_door(), randint0, randint1, set_marked_granite(), sqinfo_on, square_isempty(), square_set_feat(), chunk::squares, TRUE, vault_monsters(), vault_objects(), vault_traps(), and chunk::width.
Referenced by build_room_template_type().
Helper function for building room templates.
References build_room_template(), room_template::dor, FALSE, room_template::hgt, room_template::name, random_room_template(), ROOM_LOG, room_template::text, TRUE, room_template::tval, and room_template::wid.
Referenced by build_template().
Builds a normal rectangular room.
References chunk::depth, draw_rectangle(), FALSE, FEAT_FLOOR, FEAT_GRANITE, fill_rectangle(), find_space(), generate_room(), chunk::height, height, one_in_, randint1, set_marked_granite(), TRUE, chunk::width, and width.
Build a template room.
References build_room_template_type().
Build a vault from its string representation.
References data, chunk::depth, FALSE, FEAT_FLOOR, FEAT_LESS, FEAT_MAGMA_K, FEAT_MORE, FEAT_PERM, FEAT_QUARTZ_K, FEAT_RUBBLE, find_space(), generate_mark(), get_vault_monsters(), chunk::height, vault::hgt, square::info, is_quest(), angband_constants::max_depth, one_in_, pick_and_place_monster(), place_gold(), place_object(), place_secret_door(), place_trap(), races, randint0, randint1, set_marked_granite(), sqinfo_on, square_isempty(), square_set_feat(), chunk::squares, vault::text, TRUE, vault::typ, vault::wid, chunk::width, and z_info.
Referenced by build_vault_type(), and vault_chunk().
Helper function for building vaults.
References build_vault(), chunk::depth, FALSE, chunk::mon_rating, vault::name, random_vault(), vault::rat, ROOM_LOG, and TRUE.
Referenced by build_greater_vault(), build_interesting(), build_lesser_vault(), and build_medium_vault().
Fill the edges of a rectangle with a feature.
References generate_mark(), and square_set_feat().
Referenced by build_circular(), build_crossed(), build_large(), build_nest(), build_overlap(), build_pit(), build_simple(), cavern_gen(), classic_gen(), gauntlet_gen(), hard_centre_gen(), labyrinth_chunk(), lair_gen(), modified_chunk(), modified_gen(), moria_chunk(), moria_gen(), and town_gen_layout().
Fill a circle with the given feature/info.
References fill_xrange(), fill_yrange(), i, and int.
Referenced by build_circular().
Fill a rectangle with a feature.
References generate_mark(), and square_set_feat().
Referenced by build_crossed(), build_large(), build_nest(), build_overlap(), build_pit(), build_simple(), build_store(), classic_gen(), gauntlet_gen(), init_cavern(), labyrinth_chunk(), make_chamber(), modified_chunk(), moria_chunk(), and town_gen_layout().
Fill a horizontal range with the given feature/info.
References square::info, sqinfo_on, square_set_feat(), and chunk::squares.
Referenced by fill_circle().
Fill a vertical range with the given feature/info.
References square::info, sqinfo_on, square_set_feat(), and chunk::squares.
Referenced by fill_circle().
Find a good spot for the next room.
Find and allocate a free space in the dungeon large enough to hold the room calling this function.
We allocate space in blocks.
Be careful to include the edges of the room in height and width!
Return TRUE and values for the center of the room if all went well. Otherwise, return FALSE.
References dun_data::block_hgt, dun_data::block_wid, dun_data::cent, dun_data::cent_n, dun_data::col_blocks, dun, FALSE, i, angband_constants::level_room_max, randint0, dun_data::room_map, dun_data::row_blocks, TRUE, loc::x, loc::y, and z_info.
Referenced by build_circular(), build_crossed(), build_huge(), build_large(), build_moria(), build_nest(), build_overlap(), build_pit(), build_room_of_chambers(), build_room_template(), build_simple(), and build_vault().
Generate helper – open one side of a rectangle with a feature.
References randint0, and square_set_feat().
Referenced by build_crossed(), build_large(), build_nest(), and build_pit().
Mark a rectangle with a sqinfo flag.
References square::info, sqinfo_on, and chunk::squares.
Referenced by build_vault(), draw_rectangle(), fill_rectangle(), gauntlet_gen(), generate_plus(), get_chamber_monsters(), and set_marked_granite().
Generate helper – open all sides of a rectangle with a feature.
References square_set_feat().
Referenced by build_crossed().
Fill the lines of a cross/plus with a feature.
References generate_mark(), and square_set_feat().
Referenced by build_crossed(), and build_large().
Mark squares as being in a room, and optionally light them.
References square::info, sqinfo_on, and chunk::squares.
Referenced by build_crossed(), build_large(), build_nest(), build_overlap(), build_pit(), and build_simple().
Make a starburst room.
-LM-
Starburst rooms are made in three steps: 1: Choose a room size-dependant number of arcs. Large rooms need to look less granular and alter their shape more often, so they need more arcs. 2: For each of the arcs, calculate the portion of the full circle it includes, and its maximum effect range (how far in that direction we can change features in). This depends on room size, shape, and the maximum effect range of the previous arc. 3: Use the table "get_angle_to_grid" to supply angles to each grid in the room. If the distance to that grid is not greater than the maximum effect range that applies at that angle, change the feature if appropriate (this depends on feature type).
Usage notes:
References ABS, ddx_ddd, ddy_ddd, distance(), f_info, FALSE, square::feat, FEAT_GRANITE, feature::flags, get_angle_to_grid, height, i, square::info, randint0, randint1, set_marked_granite(), sqinfo_off, sqinfo_on, square_in_bounds(), square_isvault(), square_monster(), square_object(), square_set_feat(), chunk::squares, tf_has, TRUE, void(), and width.
Referenced by build_huge(), and build_moria().
Expand in every direction from a start point, turning magma into rooms.
Stop only when the magma and the open doors totally run out.
References ddx_ddd, ddy_ddd, square::feat, FEAT_BROKEN, FEAT_FLOOR, FEAT_MAGMA, FEAT_OPEN, square_set_feat(), and chunk::squares.
Referenced by build_room_of_chambers().
Helper function for rooms of chambers.
Fill a room matching the rectangle input with magma, and surround it with inner wall. Create a door in a random inner wall grid along the border of the rectangle.
References ABS, ddx_ddd, ddy_ddd, square::feat, FEAT_MAGMA, FEAT_OPEN, fill_rectangle(), i, make_inner_chamber_wall(), one_in_, randint0, square_in_bounds_fully(), square_iswall_inner(), square_set_feat(), and chunk::squares.
Referenced by build_room_of_chambers().
Helper for rooms of chambers; builds a marked wall grid if appropriate.
References square::feat, FEAT_GRANITE, FEAT_MAGMA, set_marked_granite(), square_iswall_outer(), square_iswall_solid(), and chunk::squares.
Referenced by make_chamber().
Hook for picking monsters appropriate to a nest/pit or region.
References pit_monster_profile::base, monster_race::base, pit_profile::bases, pit_color_profile::color, pit_profile::colors, monster_race::d_attr, dun, FALSE, pit_profile::flags, monster_race::flags, pit_profile::forbidden_flags, pit_profile::forbidden_monsters, pit_profile::forbidden_spell_flags, pit_monster_profile::next, pit_color_profile::next, pit_forbidden_monster::next, dun_data::pit_type, pit_forbidden_monster::race, rf_has, rf_is_inter, rf_is_subset, rsf_is_inter, rsf_is_subset, pit_profile::spell_flags, monster_race::spell_flags, and TRUE.
Referenced by build_nest(), build_pit(), and mon_restrict().
Chooses a room template of a particular kind at random.
References room_template::next, one_in_, room_templates, and room_template::typ.
Referenced by build_room_template_type().
Chooses a vault of a particular kind at random.
References vault::max_lev, vault::min_lev, vault::next, one_in_, vault::typ, and vaults.
Referenced by build_vault_type(), and vault_chunk().
Attempt to build a room of the given type at the given block.
Note that this code assumes that profile height and width are the maximum possible grid sizes, and then allocates a number of blocks that will always contain them.
Note that we restrict the number of pits/nests to reduce the chance of overflowing the monster list during level creation.
References dun_data::block_hgt, dun_data::block_wid, room_profile::builder, dun_data::cent, dun_data::cent_n, dun_data::col_blocks, chunk::depth, dun, FALSE, chunk::height, room_profile::height, room_profile::level, angband_constants::level_pit_max, angband_constants::level_room_max, room_profile::pit, dun_data::pit_num, dun_data::room_map, dun_data::row_blocks, TRUE, chunk::width, room_profile::width, loc::x, loc::y, and z_info.
Referenced by classic_gen(), modified_chunk(), and moria_chunk().
Place a square of granite with a flag.
References FEAT_GRANITE, generate_mark(), and square_set_feat().
Referenced by build_crossed(), build_large(), build_room_of_chambers(), build_room_template(), build_simple(), build_tunnel(), build_vault(), clear_small_regions(), generate_starburst_room(), and make_inner_chamber_wall().
Pick a type of monster for pits (or other purposes), based on the level.
We scan through all pit profiles, and for each one generate a random depth using a normal distribution, with the mean given in pit.txt, and a standard deviation of 10. Then we pick the profile that gave us a depth that is closest to the player's actual depth.
Sets dun->pit_type, which is required for mon_pit_hook.
References ABS, pit_profile::ave, dun, i, pit_profile::name, one_in_, pit_info, angband_constants::pit_max, dun_data::pit_type, Rand_normal(), pit_profile::rarity, pit_profile::room_type, and z_info.
Referenced by build_nest(), build_pit(), gauntlet_gen(), get_chamber_monsters(), and lair_gen(). | http://buildbot.rephial.org/builds/restruct/doc/gen-room_8c.html | CC-MAIN-2017-22 | refinedweb | 2,365 | 50.94 |
The Data Science Lab
Turning his attention to the extremely time-consuming task of machine learning data preparation, Dr. James McCaffrey of Microsoft Research explains how to examine data files and how to identify and deal with missing data. main phases has several steps. This article explains how to examine machine learning data files and how to identify and deal with missing data.
A good way to understand what missing data means and see where this article is headed is to take a look at the screenshot of a demo program in Figure 1. The demo starts with a small text file that illustrates many of the types of issues that you might encounter, including missing data, extraneous data, and incorrect data.
The demo is a Python language program that examines and performs a series of transformations on the original data. In some scenarios where your source data is small (about 500 lines or less) you can clean, normalize and encode, and split the data by using a text editor or dropping the data into an Excel spreadsheet. But in almost all non-demo scenarios, manually preparing ML data is not feasible and so you must programmatically process your data.
The first five lines of the demo source data are:
# people_raw.txt
sex age empid region income politic
M 32 AB123 eastern 59200.00 moderate
F 43 BC234 central 38400.00 moderate
M 35 CD345 central 30800.00 liberal
. . .
Each line represents a person. There are six tab-delimited fields: sex, age, employee ID, region, annual income, and political leaning. The eventual goal of the ML system that will use the data is to create a neural network that predicts political leaning from other fields.
Because the demo data has so few lines, you can easily see most, but not all, of the problems that need to be handled. In neural systems you usually don't want comment lines or a header line, so the first two lines of data are removed by the demo program. You can see that line [6] has a "?" value in the region field, which likely means "unknown." But in a realistic scenario where there are hundreds or thousands of lines of data, you'd have to find such issues programmatically. Similarly, line [8] has "centrel" in the region field, which is likely a misspelling, but this would have to be detected programmatically.
Line [9] is blank, or it may have non-visible control characters. Line [12] has a "3" value in the age field, which is almost certainly a transcription error of some kind. Line [15] has only five fields and is missing the age value.
All of these typical data problems in the demo are quite common in real-world data. But there are many other types of problems too. The point is that you won't find a code library that contains a magic "clean_my_data()" function. Each ML dataset must be dealt with in a custom way. source dataset, in general the data preparation pipeline for most ML systems usually is something similar to the steps shown in Figure 2.
Data preparation for ML is deceptive because the process is conceptually easy. However, there are many steps, and each step is much trickier than you might expect if you're new to ML. This article explains the first four steps in Figure 2:
Future Data Science Lab articles will explain the other steps. They.(), show_short_lines(), delete_lines(), and remove_cols() should be clear from their names.
Listing 1: Missing Data Preparation Demo Program
# file_missing.py
# Python 3.7.6 NumPy 1.18.1
# find and deal with missing data
import numpy as np
def line_count(fn): . . .
def show_file(fn, start, end, indices=False,
strip_nl=False): . . .
def show_short_lines(fn, num_cols, delim): . . .
def delete_lines(src, dest, omit_lines): . . .
def remove_cols(src, dest, omit_cols, delim): . . .
def main():
# 1. examine
fn = ".\\people_raw.txt"
ct = line_count(fn)
print("\nSource file has " + str(ct) + " lines")
print("\nLines 1-17: ")
show_file(fn, 1, 17, indices=True, strip_nl=True)
#()
Program execution begins with:
def main():
# 1. examine
fn = ".\\people_raw.txt"
ct = line_count(fn)
print("\nSource file has " + str(ct) + " lines")
print("\nLines 1-17: ")
show_file(fn, 1, 17, indices=True, strip_nl=True)
. . .
The first step when working with machine learning data files is to do a preliminary investigation. The source data is named people_raw.txt and has only 17 lines to keep the main ideas of dealing with missing data as clear as possible. The number of lines in the file is determined by helper function line_count(). The entire data file is examined by a call to show_file().
The indices=True argument instructs show_file() to display 1-based line numbers. With some data preparation tasks it's more natural to use 1-based indexing, but with other tasks it's more natural to use 0-based indexing. The strip_nl=True argument instructs function show_file() to remove trailing newlines from the data lines before printing them to the shell so that there aren't blank lines between data lines in the display.
The demo continues with:
#])
. . .
There are two common forms of missing data: lines with fields that are completely missing and lines with fields that have special values such as "?" or "unknown." It's best to check for completely missing fields first, and deal with unusual or incorrect values later. Function show_short_lines() requires you to specify how many fields/columns there should be in each line. The function traverses the source file and displays any lines that have fewer than or more than the specified number of columns. This approach will also identify lines that have extra delimiters which aren't easy to see, such as double tab characters, and lines with incorrect delimiters, for example blank space characters instead of tab characters.
After lines with completely missing columns have been identified, there are two common approaches for dealing with them. The first approach, which I recommend in most cases, is to just delete the line(s). The second approach, which I do not recommend, unless it's absolutely necessary, is to add the missing value. For example, for a numeric column you could add the average value of the column, and for a categorical column you could add the most common value in the column. The argument for deleting lines with missing fields instead of adding values is that in most cases, "no data is better than incorrect data."
In most situations, data files intended for use in a machine learning system should not have comment lines, header lines, or blank lines. The demo source data has one each of these in lines 1, 2, 9 so these lines are deleted along with line 15 which has a completely missing age column.
The demo concludes with statements that remove the employee ID column:
. . .
#()
The idea here is that an employee ID value isn't useful for predicting a person's political leaning. You should use caution when deleting columns because sometimes useful information can be hidden. For example, suppose employee ID values were assigned in such a way that people in technical jobs have IDs that begin with A, B, or C, and people in sales roles have IDs that begin with D, E, or F, then the employee ID column could be useful for predicting political leaning.
Exploring the Data
When preparing data for an ML system, the first step is always to perform a preliminary examination. This means determining. For example, the following function definition is equivalent in terms of functionality:
def line_count(fn):
ct = 0
with open(".\\people_raw.txt", "r") as fin:
while fin.readline():
ct += 1
return ct9999 for the 17-line demo data, the display will end after the last line has been printed, which is usually what you want.
When writing custom ML data preparation functions there's a temptation to write several wrapper functions for specialized tasks. For example, you usually want to view the first few lines and the last few lines of a data file. So, you could write functions show_first(), and show_last() like so:
def show_first(fn, n, indices=False, strip_nl=False):
show_file(fn, 1, n, indices, strip_nl)
def show_last(fn, n, indices=False, strip_nl=False):
N = line_count(fn)
start = N - n + 1; end = N
show_file(fn, start, end, indices, strip_nl)
My preference is to resist this temptation for many wrapper functions and just use a minimal number of general-purpose functions. For me, the disadvantage of managing and remembering many specialized functions greatly outweighs the benefit of easier function calls.
Finding and Dealing with Missing Data
The demo program defines a function show_short_lines() as:
def show_short_lines(fn, num_cols, delim):
fin = open(fn, "r")
line_num = 0
for line in fin:
line_num += 1
tokens = line.split(delim)
if len(tokens) != num_cols:
print("[%3d]: " % line_num, end="")
print(line)
fin.close()
The function traverses the file line by line and you can use its definition as a template to write custom functions for specific ML data scenarios. For example, a function show_lines_with() could be useful to find lines with target values such as "?" or "unknown."
Once lines with missing columns/fields have been identified, function delete_lines() can be used to delete those lines: function accepts a list of 1-based line numbers to delete. Notice that the function does not strip trailing newlines so a line can be written to a destination file without adding an additional fout.write("\n") statement.
You have to be a bit careful when using delete_lines() because the statement delete_lines(src, dest, [1,2]) is not the same as the statement delete_lines(src, dest, [1]) followed by delete_lines(src, dest, [2]) since line numbering will change after the first call to delete_lines().
Function delete_lines() uses a functional programming paradigm and accepts a source file and writes the results to a destination file. It's possible to implement delete_lines().
Removing Columns
In many situations you'll want to remove some of the columns in an ML data file. This is a bit trickier than deleting rows/lines. The demo program defines a function remove_cols(src, dest, cols, delim) that deletes specified 1-based columns from a source text file and saves the result as a destination file. There are several ways to remove specified columns. In pseudo-code, the approach taken by the demo is:
loop each line of src file
split line into tokens by column
create new line without target cols
write the new line to dest file
end-loop
Like many file manipulation functions, removing columns is not conceptually difficult but there are several details that can trip you up, such as remembering that columns-to-delete are 1-based not 0-based, and remembering not to add a delimiter character before or after the newline character at the end of a newly created line. The function implementation is presented in Listing 2.
Listing 2: Removing Specified Columns
def remove_cols(src, dest, omit_cols, delim):
# cols is 1-based
fin = open(src, "r")
fout = open(dest, "w")
for line in fin:
s = "" # reconstucted line
line = line.rstrip() # remove newline
tokens = line.split(delim)
for j in range(0, len(tokens)): # j is 0-based
if j+1 in omit_cols: # col to delete
continue
elif j != len(tokens)-1: # interior col
s += tokens[j] + delim
else:
s += tokens[j] # last col
s += "\n"
fout.write(s)
fout.close(); fin.close()
return
Suppose the source data file is tab-delimited and the current line being processed is stored into a string variable named line:
M 32 AB123 eastern 59200.00 moderate
The first step is to strip the trailing newline character using the statement line = line.rstrip() because if you don't, after splitting by the statement tokens = line.split(delim) the tokens list will hold tokens[0] = "M", tokens[1] = "32", tokens[3] = "AB123", tokens[4] = "eastern", tokens[5] = "59200.00", tokens[6] = "moderate\n." Notice the newline character in tokens[6]. But if you strip before splitting, then tokens[6] will hold just "moderate".
If the column to delete is [3] and the delimiter is a tab character, then the goal reconstruction is
tokens[0] + tab + tokens[1] + tab +
tokens[2] + tab + tokens[4] + tab +
tokens[5] + tab + tokens[6] + newline
Therefore, if you reconstruct the new line using a loop you need to remember not to add a tab delimiter after the last token because the reconstructed line would end with tokens[6] + tab + newline.
Wrapping Up
The techniques presented in this article will remove most, but not all, lines with missing data. But all lines will have the same number of columns, which makes the next steps in the data preparation pipeline much easier than dealing with variable-length lines. These next data preparation steps will be explained in future VSM Data Science Lab articles.
When starting out on a machine learning project, there are ten key things to remember: 1.) data preparation takes a long time, 2.) data preparation takes a long time, 3.) data preparation takes a long time, and, well, you get the idea. I work at a very large tech company and one of my roles is to oversee a series of 12-week ML projects. It's not uncommon for project teams to use10 weeks, or even more, of the 12-week schedule for data preparation.
The strategy presented in this article is to write custom Python programs for file manipulation. An alternative strategy is to use a code library such as Pandas. Using an existing code library can have a steep learning curve, but this approach is well-suited for beginners or data scientists who don't have strong coding | https://visualstudiomagazine.com/articles/2020/07/06/ml-data-preparation.aspx | CC-MAIN-2021-31 | refinedweb | 2,269 | 61.87 |
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
;
Q 1. When should I use the abstract class rather
than an interface ?
Ans : A Java interface is an abstract data type like a class
having all its methods abstract i.e. without any implementation. That means
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
; A Java interface is an abstract data type like a class
having all its methods...;
Q 1 : How should I create an immutable class ?
Ans... an immutable class should not contain any modifier
method. But a developer should
We would like to hear about your goals.
We would like to hear about your goals.
... habits. They would want to hire people who are like them. Thus being non specific.... I should also learn about my prospect. I would therefore gather all possible
WEB SITE
WEB SITE can any one plzz give me some suggestions that i can implement in my site..(Some latest technology)
like theme selection in orkut
like forgot password feature..
or any more features
Sorry but its
Ple help me its very urgent
Ple help me its very urgent Hi..
I have one string
1)'2,3,4'
i want do like this
'2','3','4'
ple help me very urgent
corejava
corejava Creating the object using "new" and usins xml configurations whih one is more useful?why... explain it to me by giving a simple example that would be really helpful.
Thanks
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
interface is an abstract data type like a class
having all its methods abstract
JSP Tutorials Resource - Useful Jsp Tutorials Links and Resources
as users that would
like to refresh their searching skills on Ovid... by progressing from very simple
examples to complex examples.
For best progress... working. The early examples might
seem very simple; please have patience
corejava - Java Beginners
arguments to the method call is passed to the method as its parameters. Note very...corejava pass by value semantics Example of pass by value semantics in Core Java. Hi friend,Java passes parameters to methods using pass
Open Source Shopping Cart
is very useful. It increases the productivity of the employees.
... is ready to go live.
Choosing the right Open Source shopping cart is very... performance of
website. So, choosing a right shopping cart is very important
its very urgent please help me
its very urgent please help me how can i retrieve all images from ms access database and display in jsp pages
corejava - Java Beginners
corejava hai this is jagadhish.
I have a doubt on corejava.How many design patterns are there in core java?
which are useful in threads?what r...{
for(int i = 1;i <= 10;i++)
{
System.out.println(i
Know About Outsourcing, More About Outsourcing, Useful Information Outsourcing
be described in many ways, but at its simplest it is allocating work to a third party... it is quite common for companies to outsource services like data storage or disaster... and offshoring. For them, both terms mean the same, but this is not very accurate
Java programming tutorial for very beginners
Java programming tutorial for very beginners Hi,
After completing my 12th Standard in India I would like to learn Java programming language. Is there any Java programming tutorial for very beginners?
Thanks
Hi
HTML FAQ site
site which uses natural language processing to get the answers. Natural... or range tree or something like that. The user input will usually be like... of answers or the actual answer should be generated. As close as possible.
I.
Free Web Hosting - Why Its not good Idea
are looking for
getting high traffic on your web site. Also its very important... future. Its very
important decision in choosing web hosting server for hosting... on your web
site. This is very annoying thing for you web site.
How to Upload Site Online
on your server.
Its very important to learn all these information specifically if you....
Uploading site can be done in many ways, but the most popular is FTP. After hosting... such as CuteFTP or WS_FTP, user can log into their host. Then, using the program like
If you came on board with us, what changes would you make in the system?
things first. Should you take me on, as I hope, I would like to take a good look... more productive at work. Ideally I would like to manage any extra work from... in this organization is stupid.
Your answer should therefore reflect that you would like
hi its the coding of create layout of chess in applet i hope u like it
its the coding of create layout of chess in applet /* <Applet...++)
{
for(int i=0;i<=w;i=i+w/8)
{
g.setColor(Color.BLACK);
g.fillRect(i,y,w/8,h/8);
i=i+w/8
What would you rate as your greatest weaknesses?
like “I am a little too aggressive when it comes to achieving targets. I... must say that I am not a very aggressive person, although I know I can quietly... very difficult question like a question about your
weaknesses.
An unprepared
How to found which class or method contain specific annotation?
like
@One
class A
{
}
@Two
class B
{
}
@One
class C
{
}
How will I find...How to found which class or method contain specific annotation? ... in which methods are annotated and i want to retrieve which annotation is having
Help Very Very Urgent - JSP-Servlet
Help Very Very Urgent Respected Sir/Madam,
I am sorry..Actually the link u have sent was not my actual requirement..
So,I send my requirement... requirements..
Please please Its Very very very very very urgent...
Thanks
You have been working with this current firm for a long time. Don?t you think it would be difficult now to switch over t
openly and say that you would like to keep this private. However you can also...;t informed my current company about my job search, for obvious reasons. I would..., exciting branch of your profession that you would like to add to your skills
Very new to Java
Very new to Java hi I am pretty new to java and am wanting to create a programe for the rhyme 10 green bottles.
10 green bottles standing... actually help me with this that would be great
JSP,JDBC and HTML(Very Urgent) - JSP-Servlet
exact requirement in which if I get an immediate help,I will be very much grateful...JSP,JDBC and HTML(Very Urgent) Respected Sir/Madam,
Thanks... details from database using JDBC and JSP. The home page i.e HTML Page must contain
Offshore Outsourcing Tips,Useful Offshore Outsourcing Tips,Helpful Outsourcing Tips
very little of. Selection of associates can be
very tricky... to use the Internet search engines like
Google to find companies that have... their
web sites
Look for information like how long the company
Why PHP Is So Useful
become very popular in recent years. This is a form of scripting that can work... like online shopping websites or chat sites. In fact, PHP has been used for the creation of a variety of different kinds of online open source programs like
Linux in Internet
;
Linux is very useful for running Internet. It is now
being very popular due to its specific properties.... Both are very useful in
internet according to it's requirement. Generally
corejava - Java Interview Questions
singleton java implementation What is Singleton? And how can it get implemented in Java program? Singleton is used to create only one instance of an object that means there would be only one instance of an object
Request[/searchMenu] does not contain handler parameter named 'function'. This may be caused by whitespace in the label text. - Subversion
Request[/searchMenu] does not contain handler parameter named 'function'. This may be caused by whitespace in the label text. Hi,
i am using struts DispatchAction class .i am facing error like
Request[/searchMenu] does
Please help me... its very urgent
Please help me... its very urgent Please send me a java code to check whether INNODB is installed in mysql...
If it is there, then we need to calculate the number of disks used by mysql
corejava - Java Interview Questions
=1900;
var maxYear=2100;
function isInteger(s){
var i;
for (i = 0; i < s.length; i++){
// Check that current character is number.
var c = s.charAt(i);
if (((c < "0") || (c > "9"))) return code for converting the numer into character(for ex;if we enter 1 it will comes in words like one)? Hi friend,
import java.io.*;
class NumToWords {
private static final String[] maxWords
How would you honestly evaluate the strengths and weaknesses of your previous/current company/boss/team?
How would you honestly evaluate the strengths and weaknesses of your... deal with a situation like this. You might be really tempted to unburden your soul... it doesn’t make sense to appear like a scatter-brain who has never gone
Corejava - Java Interview Questions
);
}
for(int i=0;i
Tomahawk dataScroller tag
component of tomahawk is one
of the very useful component. This component can take the reference of
the UIData component like dataTable, dataList in its "for"
attribute. dataScroller tag renders a component
CoreJava
corejava
Why would we want a Database?
Part-3 : Why would we want a Database?
Most of the beginners are asking... how useful the database is, when you are
making a dynamic website. It provides... and large website in
lesser time.
4.Website : Suppose, you are making a very
Request[/DispatchAction] does not contain handler parameter named 'parameter'. This may be caused by whitespace in the label text. - Struts
Request[/DispatchAction] does not contain handler parameter named 'parameter'. This may be caused by whitespace in the label text. I am trying...[/DispatchAction] does not contain handler parameter named 'parameter'. This may
Useful SEO Articles - 5 Killer Tips for Writing SEO Articles for Home-Based Online Business Ideas
as this will guarantee your home-based online business ideas
and its corresponding SEO..., a prominent strategy would be
building a chief keyword phrase of 5 words... your article at several places like web pages,
blogs, SEO articles, etc.... for immediate result to take effect. We would focus here mainly on those areas that can quickly make the difference as to save your site from Google penguin update
How is LBS useful?
How is LBS useful?
... when one is traveling such that where am I? What is around me? Where... of LBS service is useful for users when they find themselves in an unfamiliar
Outsourcing Guidelines,Successful Guidelines on Outsourcing,Useful Guidelines on Outsourcing
a good job.
It would be a good idea...
the company might have to repeat its advertisements till it
gets... to
spell out its requirements to the vendor and also reach a
clear
How to translate gameobject like chainwise
How to translate gameobject like chainwise i try to make program of just like in casino where number move up and down high speed and then its speed decreasing.i have 6 number which i placed one above the other in y direction
What is EJB 3.0?
the Application Server. Stateless session is easy to develop and its
efficient... the method of a stateless bean, the bean's
instance variables may contain...
The state of an object consists of the values of its instance variables
The second Question is very tuff for me - Java Beginners
The second Question is very tuff for me You are to choose between.... One procedure returns the smallest integer if its array argument is empty... min = num[0];
for (int i=1; i< min) {
min = num[i
Web Site Goals - Goal of Web Designing
, but will take some extra charges. And we would like to say it that for getting...Web Site Goals - Goal of Web Designing
What are the prime features necessary... and the client.
What is Custom Web Design?
Custom web site is little
and offline is evolving very fast. New software
development and testing techniques... India website.
Index |
Ask
Questions | Site
Map
Web Services
Programming help Very Urgent - JSP-Servlet
Please please its very urgent..
Thanks/Regards,
R.Ragavendran.. ...Programming help Very Urgent Respected Sir/Madam,
Actually my code... present in the database.. When I click or select any of the ID,it automatically... a substantial part of description creation effort for the methods like
Global Positioning Systems and its uses
Global Positioning Systems and its uses
Global Positioning Systems.... It has come like a great relief. Carrying a map has become an
outdated... more useful These objects include vehicles, mobile
phones, some other electronic
Global positioning system issues
to be a very useful device for tracking
and locating objects or directions. Whilst, there are heaps of advantages of a
GPS system, but like all good things, it does have its share of disadvantages as
well. Here, we discuss the same.
Price
SEO Tips,Latest SEO Tips,Free SEO Tips & Tricks,Useful Search Engine Optimization Tips
have to work. These information would help you a lot while optimizing
a website...
designed site becomes easy to use and also get back the traffic on the site.
4... and directories. It
will take some time to list your site url.
7. Link Building
The last
How to populate collection like map with annotation in mybatis?
How to populate collection like map with annotation in mybatis? ...;result</collection>
I have... to remap this collection property? Please let us know the answer ASAP.Thank you very
for what value of i the loop executes for infinite times if i prove a condition like ( i !=i+0)
for what value of i the loop executes for infinite times if i prove a condition like ( i !=i+0) for what value of i the loop executes for infinite times if i prove a condition like ( i !=i+0
JSF Interview Questions
with object
properties and event handlers. Would u like to repeat the same...:
JSF contains its basic set of UI components like text...;
What does component mean and what are its
SCJP Module-5 Question-9
Given a sample code:
public class Test {
public static void main(String args[]) {
int i = 9;
while (i++ <= 12) {
i++;
}
System.out.print(i);
}
}
What will be the result of above code ?
(1) 10
(2) 14
(3) 13
(4) 11
Answer
very important - Kindly help
very important - Kindly help I am creating web page for form registration to my department ..I have to Reprint the Application Form (i.e Download the PDf File from the Database ) , when the user gives the Application number
JavaScript open() method
;
This open() method is very useful for the programmers. Here is a case when
this might be very useful. Suppose you are filling some information...
information filled in that form. So here the JavaScript's open() method
may be very | http://roseindia.net/tutorialhelp/comment/7383 | CC-MAIN-2014-42 | refinedweb | 2,470 | 66.54 |
Chocolate Feast problem Hackerrank
Here comes a benefit for chocolate lovers. The problem here is as beneficial to the one who can calculate it correctly and logically. Now, the problem is to find the number of chocolate if there is a condition that for every n wrapper they can get 1 more chocos like, Bobby has 4 dollars that he uses to buy 4 chocolates at 1 dollar a piece, he can trade in the 2 wrappers to buy 1 more chocolates. Now he has 4 more wrappers that he can trade in for 2 more chocolate. Because he only has 2 wrappers left at this point and he can take 1 more, he was only able to eat a total of 7 pieces of chocolate. 4+2+1. Kind-of tricky but interesting and beneficial.
main question source
Question format may be as follows
For a time being let’s say, Little Bobby loves chocolate, and he frequently goes to his favourite store, Penny Auntie, with n dollars to buy chocolates. Each chocolate has a flat cost of ‘c’ dollars, and the store has a promotion where they allow you to trade in ‘m’ chocolate wrappers in exchange for the free piece of chocolate.
For example, if m=2 and Bobby have n=4 dollars that he uses to buy 4 chocolates at a c=1 dollar a piece, he can trade in the 4 wrappers to buy 2 more chocolates. Now he has 2 more wrappers that he can trade in for 1 more chocolate. Because he only has 1 wrapper left at this point and 1<m, he was only able to eat a total of 7 pieces of chocolate.
So the data required is like:
Input
Given n, c, and m for t trips to the store, can you determine how many chocolates Bobby eats during each trip?
#include <math.h> #include <stdio.h> int main(){ int t; scanf("%d",&t); for(int a0 = 0; a0 < t; a0++){ int n; int c; int m; scanf("%d %d %d",&n,&c,&m); int choc=n/c; int mc=n/c; do{ if(choc>=m) { choc-=m; choc+=1; mc+=1; } } while(choc>=m); printf("%d\n",mc); } return 0; }
for _ in range(input()): n, c, m = map(int, raw_input().split()) chocs = n / c wraps = chocs while wraps >= m: chocs += wraps/m wraps = wraps/m + wraps%m print chocs
#include <cmath> #include <cstdio> #include <vector> #include <iostream> #include <algorithm> using namespace std; int main() { int t; long int n,c,m; cin>>t; for(int i=0;i<t;++i) { cin>>n>>c>>m; long int x=n/c; long int count=x; while(x>=m) { count=count+(x/m); x=(x/m)+x%m; } cout<<count<<endl; } return 0; }
For more competitive problems click
Hackerrank problem click
> | https://coderinme.com/chocolate-feast-problem-hackerrank-coderinme/ | CC-MAIN-2019-09 | refinedweb | 471 | 69.15 |
Here's what little I currently do to make my code more readable. Please share any reactions to these:
- In all but the most trivial scripts, I create new toplevels instead of using .
- I try to combine geometry management with widget creation when possible, i.e.
grid [text .mytoplevel.text] -row 0 -column 0However, when I have to specify lots of options for the widget and/or the geometry manager, I end up with long lines which IMHO look bad even if I break them up with backslashes.
- I try to break things up into small procs, where (for example) one proc will create a frame and its children and another proc will create bindings for those widgets.
PWQ 12 Aug 2003 I use a table driven approach to gui creation. As an example
set widgetlist { {button $par.b -command xx} {entry $par.e -width 6}} foreach w $widgetlist { pack [eval $w] -anc nw }The command as listed in the list (which can be spread out over multiple lines) become much easier to read.I have a system that extends this concept but is to complex to outline here, here is an example screen as an example (Stored in a file/database etc is the gui definition ):
Options { *LogoEntry*Entry*font {Times 24} *LogoPanel*Canvas*width 600 *LogoPanel*Canvas*height 400 *LogoMenu*Button*ipadx 0 *LogoMenu*Button*padX 2 *LogoMenu*Button*relief solid } Form Logo { Script { namespace eval Logo {set __create 1.0} } V3S#LogoPanel { H#LogoMenu { V3s { H3r { PackOptions {-anc w -padx 1} H1s { PackOptions {-anc w -padx 1} T { } T {Test:} B {{Clear} {::Logo::test:clear} } B {{Check} {::Logo::test:check} } B {{Load} {::Logo::test:load} } B {{Save} {::Logo::test:save} } } Sb/Lv {{{-orient v -command {!Logo yview} -width 10 }}} {-fill y} Lb/Logo:Logo::data(_cmds) {-font {{{Times 18 bold}}} -width 0 {{ -yscrollc {!Lv set}}} } {-fill y -expand 0} V+b { E/Vars:Logo::data(_vars) {0} {-fill x} H+b { T { } T {Programme: } B {{Save} {turtleSave [!Prog get 0 end] [!Vars get]} } B {{Load} {turtleLoad !Prog}} ........Notes:H means arrange horizontallyV means vertically.The # defines the class.The second item in each list are short cuts for the most common widget option.Ie B {{Save} -justify c} equals button -text Save -justify c.The third list item are the manager options (ie pack)./xx associates a variable with the widget.:xx defines a reference to the widget that can be used in other widget definitionsNames above are abbreviations (is B is button, Sb scrollbar, Lb listbox etc.
Bryan Oakley 12-Aug-2003I keep widget creation and geometry management separate. I also break up my UI into manageable units and build them separately. Within the unit things are managed however it makes sense for the unit, but the overall layout of these units is done in a separate proc. That makes it easy to create a View menu with items like "show toolbar", 'show statusbar", etc.I also store in a global array any widget paths that I use in other parts of the program. This way I can change the widget hierarchy as needed without having to track down hard-coded path names in other parts of the code.My initialization code, then, looks something like this:
proc main {} { ... widgets widgets.layout .... } proc widgets {} { widgets.menubar widgets.toolbar widgets.statusbar widgets.main } proc widgets.toolbar {} { global widgets set widgets(toolbar) .toolbar button $widgets(toolbar).cut ... button $widgets(toolbar).copy ... ... pack $widgets(toolbar).cut $widgets(toolbar).copy \ -side left } ... proc widgets.layout {} { global widgets global options . configure -menu $widgets(menubar) grid $widgets(toolbar) -row 0 -column ... grid $widgets(main) -row 1 -column ... grid $widgets(statusbar) -row 2 -column ... if {!$options(-showtoolbar)} { grid remove $widgets(toolbar) } .... }In reality I pass in the toplevel window to each of the procs so that, in theory, I can reparent the widgets if later I choose to embed them in a larger program. I left that out of this example to make the example a little easier to understand.
Bryan Oakley 12-Aug-2003here's another one: never put more than two statements in a widget callback or binding, and if you use more than one, the second one must always be "break".PWQ 13 Aug Why must the second argument be break, that stops any class binding from firing?(Bryan Oakley: I'm not saying there must be a second command, only that if there is it should be "break" (for the very reason you suggest). A more clear way to say this is, perhaps, "you should never have more than one command; an exception can be made to add ';break' to inhibit other bindings")Put another way, always have your callbacks and bindings call procs. It makes quoting easier, makes it easier to modify what bindings do, and makes it easier to share code between bindings and callbacks (ie: the "cut" toolbar can call the same proc as the "cut" menubar item and the "cut" accelerator).
PWQ 13 Aug 2003I also use a table driven approach to event bindings (when not using the forms system I have devised). This can be seen in my CAD package [3].A Structure holds all the event bindings and one procedure is called for all events. This then matches the event with current program state and dispatches the appropriate proc to handle the event.What this means is that the event is isolated from the widget, it does not matter which widget (or what it's path name is) generates the event.As an example, the <MOTION> event normally does nothing, but if say an object on a canvas has been grabbed, it now is dragged around with the mouse.An example for managing canvas items. From the table (a list of lists in a variable):{ - - <<SELECT>> {procGrab Move} }{Move * <<MOTION>> {procMove} }Explaination:
1 If no object (-) has been grabed in any mode (-) the <<SELECT>> event (B1) calles procGrab and changes mode to Move. 2 if in ''Move'' mode and any object has been grabbed (*) the <<MOTION>> (Motion) event calls procMove.While it seems overly complicated, it is far simpler to do this than to bind individual events to each widget. Also it groups all bindings in one place and allows them to be loaded from a file without having to change code.Lastly, This technique adds state information to the bindings so allows multiple event chaining to be implemented without having to change the event bindings on the widgets. Ie emacs two key type bindings can easily be implemented in this structure.
MAK (13 Aug 2003) - Like others, I (at least nearly) always separate widget creation from geometry management. The only usual exception is for simple frames that I don't care much about. But, for the most part, I keep them separate not only so that layout can be changed moderately easy, but so that I can maintain variables with the paths for various widgets, both for specifying the path prefix for other contained widgets and so that any code elsewhere that uses thise widgets can do so without hard-coding of the paths.Rather than globals, however, I tend to use Namespace MegaWidgets with variables within the appropriate namespace for keeping track of any child widgets and other data. These are usually arrays keyed off either the toplevel (for megawidgets that are intended to appear only once per toplevel) or off the megawidget's path (for child widgets of a particular instance of the megawidget). Each of these megawidgets has a function to create an instance of it that takes a widget path as an argument (as what to create the megawidget as, which also becomes the return value for the constructor), and sometimes options for configuration, so that they behave pretty close to the way normal Tk widgets work even if they're a primary area the GUI.The nice thing about this is that it's much more lightweight than a full OOP system but suits its purpose pretty well, and I can keep calling the MegaWidget proc on the same widget, but from different namespaces, to add functionality that is more specific for a particular use. I use a slightly extended version from the one in the Wiki that sets up some other things automatically, like automatically calling a destructor function in each namespace where the MegaWidget proc was called, for semi-automatic garbage collection. The drawback, though, is all the eval calls involved in calling the widget subcommands, but hopefully this will change when some argument expansion solution is adopted (hence my keen interest in [lexpand] or other possible TIP #144/103 solutions).Of course, naming conventions and formatting styles are always important for readability, but they tend to be somewhat subjective. I usually name my procs for various events (bindings etc.) with a consistent "onSomethingEvent" convention (for when Something happens), except when it's the result of clicking on a button or menu entry, in which case it usually starts with "cmd" (for command). I use to a two- to three-letter prefix (plus descriptor) for nearly all of my widget names such that, given a path, I know what each of the elements are (e.g. fr for frame, bu for button, etc.) at a glance. For configuration arguments specified at widget creation time, I usually split it apart into many lines -- one for each option, with option keywords and values vertically aligned for (subjective, of course) readability.Hmm.. What else.. Well, I try to focus on modularity, of course, if that wasn't clear from what I said about Namespace MegaWidgets. These would be more readable still if [package require] let you import the package into other than the global namespace (see [4]), but oh well. (Which brings up another point: try to avoid using hard-coded namespaces. Use [namespace current] for widget textvariables etc., and [namespace code] for bindings.)
Please contribute any principles that help you write Tk code in a more readable, manageable, and/or reusable way. | http://wiki.tcl.tk/9592 | CC-MAIN-2017-22 | refinedweb | 1,665 | 60.45 |
In this article, we will learn about how to write a python program to read character as input. We will read only single character entered by user and print it to console.
We can use built-in methods to read single character in python. They are –
Using input() method to Read Single Character in Python
input() method is used to read entered value from console. It returns a string value. To read single character, we can get first value using array index 0 as shown below –
input()[0]
input() returns string value. Then, we are getting first character of string value using [0].
Finally, input()[0] return first character and ignores other characters.
Example 1: Read and print first character using input()
We can print entered first character as below –
a=input()[0] print("Entered character: ", a)
When we run above program, we will get output as below –
njh Entered character: n
Using sys.stdin.read() method to Read Single Character in Python
Using method inside sys module, we can read entered value in console. It is slightly different from input() method. This method reads escape character as well. We can provide a parameter that tells how many character should it read from console. So, if we need to read to only one character, we can pass 1 as argument to this method. For example,
sys.stdin.read(1)
Above line will read one character from console. This is what we are going to use in our program to achieve our target.
Example 1: Read and print Single Character Using sys module
Now, we are going to see how to read and print single character using sys module –
import sys c = sys.stdin.read(1) print("Entered character: ", c)
Here,
- import sys imports sys module to python program.
- c = sys.stdin.read(1) reads first character from console and store it in variable c.
- Finally, using print method, we are printing entered value.
When you run above program, we will get output as below –
45 Entered character: 4
Here, 45 is entered value. But, are printing only first character – 4.
That’s how we can write python program to read character as input. Visit link to know more about python. | https://tutorialwing.com/python-program-to-read-character-as-input-with-example/ | CC-MAIN-2022-21 | refinedweb | 370 | 65.73 |
I am new to Typescript which I started because of Angular 2. I would like to use the Javascript library datejs.
To use it in my Angular 2 project I installed datejs through npm and confirmed it is listed in my package.json. In addition I installed the typings that are available through DefinitelyTyped (dt). When I do "typings list" it shows that the datejs typings are included globally.
Then in my component I have :
import 'datejs'
//declare var Date: any;
error TS2339: Property 'next' does not exist on type 'Date'.
Use IDateJSStatic, example in my homecomponent:
export class HomeComponent implements OnInit { pageTitle: string = 'Home'; DateJs: IDateJSStatic = <any>Date; constructor() { } ngOnInit(): void { console.log(this.DateJs.today()); }; }
Hope that helps | https://codedump.io/share/2tBAjjcrKSrr/1/installed-typings-for-datejs-not-working-for-me | CC-MAIN-2017-43 | refinedweb | 120 | 56.96 |
FULL PRODUCT VERSION : A DESCRIPTION OF THE PROBLEM : As noted at JDK-8049846,. In production, we hit this about once a day. For testing and reproduction purposes, we can use fault injection to get spurious poll() results on demand. [This report is probably more appropriate as a comment on JDK-8049846, but commenting requires an account, and obtaining an account does not appear to be an easy task] STEPS TO FOLLOW TO REPRODUCE THE PROBLEM : 1. Create file called poll.c with contents: ----- #define _GNU_SOURCE #include <poll.h> #include <dlfcn.h> #include <stdio.h> int poll(struct pollfd *fds, nfds_t nfds, int timeout) { static int n; if ((++n & 0x3) == 0) { // Fault injection: perhaps we should report a spurious readiness notification int i; for (i = 0; i < nfds; ++i) { if (fds[i].events & POLLIN) { fds[i].revents |= POLLIN; return 1; } } } return ((int(*)(struct pollfd*,nfds_t,int))dlsym(RTLD_NEXT, "poll"))(fds, nfds, timeout); } ----- 2. Create file called OneReaderThread.java with contents: ----- import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.net.ServerSocket; import java.net.Socket; public class OneReaderThread { public static void main(String[] args) throws IOException { @SuppressWarnings("resource") ServerSocket serverSocket = new ServerSocket(17291); new ReadingThreadA().start(); Socket client = serverSocket.accept(); OutputStream os = client.getOutputStream(); byte[] writeData = new byte[2]; for (;;) { waitingForA = true; long start = System.currentTimeMillis(); while (waitingForA) { long now = System.currentTimeMillis(); if (now > start + 500) { // 500ms have passed, which is 10x the read timeout System.out.println("Should never happen: A is unresponsive"); os.write(writeData); break; } } } } private static volatile boolean waitingForA; private static final class ReadingThreadA extends Thread { @Override public void run() { try { @SuppressWarnings("resource") Socket s = new Socket("localhost", 17291); s.setSoTimeout(50); // SO_TIMEOUT is set, meaning that reads should not block for more than 50ms final InputStream is = s.getInputStream(); byte[] readDataA = new byte[2]; for (;;) { int n = 0; try { n = is.read(readDataA); } catch (IOException e) { // Ignore } System.out.println("A tick (" + n + ")"); waitingForA = false; // This assignment should happen at least once every 50ms } } catch (Exception e) { e.printStackTrace(); } } } } ----- 3. Compile the C code: gcc -o poll.so -shared poll.c -ldl -fPIC 4. Compile the Java code: javac OneReaderThread.java 5. Run the Java code with the C library preloaded: LD_PRELOAD=./poll.so java -cp . OneReaderThread EXPECTED VERSUS ACTUAL BEHAVIOR : EXPECTED - Expect an output stream consisting solely of: A tick (0) A tick (0) A tick (0) A tick (0) A tick (0) A tick (0) A tick (0) A tick (0) ACTUAL - Actual output stream is repetitions of: Should never happen: A is unresponsive A tick (2) A tick (0) A tick (0) A tick (0) Should never happen: A is unresponsive A tick (2) A tick (0) A tick (0) A tick (0) REPRODUCIBILITY : This bug can be reproduced occasionally. | https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8075484 | CC-MAIN-2020-16 | refinedweb | 462 | 51.65 |
The 50in Samsung PS50C7000 3D plasma TV has excellent picture quality and is well designed and constructed, and it offers a swathe of interactive internet features.
Best of all, the Samsung PS50C7000 is cheaper than the Samsung Series 7 LED television; it's relatively affordable considering its feature set.
Samsung PS50C7000: Design, set up and connectivity
The Samsung PS50C7000 is at the forefront of a new wave of plasma television design, joining the LG 50PK750 and Panasonic TH-P54Z1A in offering a chassis that's nearly as thin as an LED television. Sure, it's not as svelte as the Samsung Series 9 LED or the Sony XEL-1 OLED, but it's also a hell of a lot cheaper.
The bezel is finished in a very modern combination of translucent plastic and brushed dark aluminium, while the stand uses a cylindrical plastic column and a rectangular brushed metal base. Connecting the two is a relatively simple process if you've got an assistant — two sets of screws and a hook-in system hold the set securely, but the set up process can be a bit dicey if you don't have someone to position the panel against the stand while you're inserting screws. The stand also swivels with a 40 degree range of motion, which is a welcome inclusion.
More than enough audio/video connectors are built in to the Samsung PS50C7000's rear and side panels. Four HDMI 1.4 connectors support 3D and audio return (for relaying audio to a receiver through an 'input' HDMI cable, instead of requiring an additional digital audio output), while component and composite connectors are supported via bundled break-out cables. You also get Ethernet, DVI, VGA and a few extra audio inputs and outputs.
Controlling the whole show is the same excellent brushed aluminium remote control that came with the Samsung Series 7 LED television. The same iPhone-style internet application interface seen on the Series 7 LED is used on the PS50C7000 — you can download a wide range of programs like Facebook, Twitter and Google Maps and run them on-screen. There's not much in the way of video-on-demand apart from YouTube, but the apps are a novelty and might be useful for a bored viewer.
Samsung PS50C7000: 2D and 3D picture quality
It's no secret that we prefer plasma over LED when it comes to outright picture quality, and the Samsung Series 7 (PS50C7000) just reinforces that. One area where the Samsung Series 7 (PS50C7000) plasma excels is when viewing content at extreme off-axis angles — it simply doesn't wash out at all when viewed on a horizontal angle. Vertical viewing, at least in the range of viewing you'd expect with the panel mounted on a wall or on a low entertainment unit, is similarly excellent.
Black levels are almost the only complaint we have about the Samsung Series 7 (PS50C7000)'s picture quality. They're good, but not great — in the opening sequence of The Dark Knight Blu-ray we noticed a few gradient details missing in darker areas of the screen. Even in the default Cinema mode the sharpness and vibrancy of the plasma panel are excellent, with on-screen video having a smooth and well-saturated tone.
We opted to turn off motion and judder compensation; although these features generally lend a smoother feel to video, we preferred the Samsung Series 7 (PS50C7000)'s output of The Dark Knight and Watchmen with them disabled. As you'd expect, the Samsung Series 7 (PS50C7000) supports 24p Film Mode playback for a proper cinematic frame rate.
2D playback is great, and 3D video is slightly better than the implementation we saw on the Samsung Series 7 LED television. There's less edge-blurring cross-talk (although a small amount is still visible), and the created sense of depth is impressive. For our thoughts on Samsung's current 3D implementation, check out the Series 7 LED review — all our comments hold true for the plasma as well. Just like on the LED model, Samsung also includes a 2D-to-3D mode that stretches the usefulness of the feature — you can watch the news and Two and a Half Men in 3D.
NEXT: our expert verdict >> | http://www.pcadvisor.co.uk/reviews/pc-peripheral/3239699/samsung-series-7-ps50c7000-review/ | CC-MAIN-2014-15 | refinedweb | 713 | 54.46 |
Here's a proposed patch that I believe will fix this problem, but testing concurrent autoloads makes my head hurt. All our tests pass, at any rate.
After creating a test case for this, I realized that it can't work without really invasive modifications to constant lookup or without behavioral changes to autoload itself. I will attach a patch for current set of changes + test case shortly.
The problem is that autoload first deletes the existing constant and then does its require. So there's a period of time when the constant will be neither populated (which is presumably what the required file will do) nor tagged as an autoload. Any thread performing a lookup of this constant during that period, which could span the entire period when the require is executing, would see no constant value. In most cases, this would mean the constant search continues and eventually fails.
A few possible changes could make this work, but they're not going to happen before 1.1.6 (hence why I'm booting this to 1.1.7):
- We could synchronize all constant lookups against a per-constant mutex. This is obviously not an option; the locking cost would kill us.
- If we added a new marker for autoloads, one that says "autoload is in progress", then while the autoload is running all concurrent lookups would block waiting for it to complete. There's no risk of deadlock since there would be a single global autoload mutex. However, this represents a behavioral change because the constant would necessarily have to still point at the "in progress" marker. So defined? and friends would have to be made aware of this new marker...in fact, all cases where we look up constants would have to be made aware of it. It's a very invasive change.
- If autoload's behavior were modified to not delete the constant first, but only delete it after if it were still pointing at the autoload marker, we'd also be ok. Concurrent lookups would see the marker and try to acquire the autoload lock; upon acquiring it they would again check to see if the autoload marker was still present (this may be optional) and proceed to do the require. By the time the master autoload thread has completed, the constant would be nulled out or set by the required file. By the time all other threads acquire the lock, the require will have completed and the constant set, so they can bail out early. This requires similar changes as above, since currently defined? Foo inside the autoloaded script attached to Foo would show it's not defined.
Either way, I think we need to enlist ruby-core in this. Without threading guarantees autoload borders on useless, and the current behavior is not possible to make thread-safe.
Added a test case to the new patch.
I filed a bug here some time ago but it has not yet been resolved.
I'm going to move this bug to 1.x+ since I think autoload is inherently unsafe in a threaded environment, and there's no easy way to fix it.
My app uses autoload and used to sporadically encounters method not defined errors, which presumably has something to do with concurrent autoload during startup. I have noticed that the frequency has gone down significantly in the JRuby 1.2 RC, but I have seen a few cases.
Does this JRuby bug need to wait until the MRI bug is resolved before proceeding, or can JRuby implements something in the meantime in the form of thread-safe concurrent autoload?
The problem is that nobody's come up with a good way to resolve this. I've proposed a couple theoretical fixes, but never mocked any up, and the synchronization/threading is very tricky to get right. If we could come up with a way to fix it for certain, I think we could go ahead and implement it, but as it stands right now autoload is inherently broken under both MRI and JRuby, and there's no obvious fix.
Interestingly, I saw the release notes for the upcoming Rails 2.3 release, and it looks like the Rails team has switched to using autoload in order to reduce loading time.
This is now working in MRI 1.9.2 - or at least, it works for the test case here: which fails in jruby 1.6.0.RC1
+1 on hitting "uninitalized constant" threading problems with this
This limitation is still causing problems for multi-threaded apps. Any updates?
master branch might solve this problem. require is not threadsafe as CRuby (see) but autoload itself is threadsafe now at master branch.
Though we still have some issue about this, I want to ensure that we fix this problem for the next release (1.6.2 or 1.7). Michael, do you have a reproducible script/application to check this?
I found a test script at
And I found the current master still has this problem. I'm still misunderstanding something...
This is an ad-hoc fix (CAUTION: it changes public method signature). Charles: Is this the right way?
diff --git a/src/org/jruby/RubyKernel.java b/src/org/jruby/RubyKernel.java index e72f135..4b31fba 100644 --- a/src/org/jruby/RubyKernel.java +++ b/src/org/jruby/RubyKernel.java @@ -203,13 +203,8 @@ public class RubyKernel { /** * @see org.jruby.runtime.load.IAutoloadMethod#load(Ruby, String) */ - public IRubyObject load(Ruby runtime, String name) { - boolean required = loadService.require(file()); - - // File to be loaded by autoload has already been or is being loaded. - if (!required) return null; - - return module.fastGetConstant(baseName); + public boolean load(Ruby runtime, String name) { + return loadService.require(file()); } }); return runtime.getNil(); diff --git a/src/org/jruby/RubyModule.java b/src/org/jruby/RubyModule.java index 567862b..851db37 100644 --- a/src/org/jruby/RubyModule.java +++ b/src/org/jruby/RubyModule.java @@ -2868,7 +2868,9 @@ public class RubyModule extends RubyObject { public IRubyObject resolveUndefConstant(Ruby runtime, String name) { if (!runtime.is1_9()) deleteConstant(name); - return runtime.getLoadService().autoload(getName() + "::" + name); + runtime.getLoadService().autoload(getName() + "::" + name); + + return fetchConstant(name); } /** diff --git a/src/org/jruby/runtime/load/IAutoloadMethod.java b/src/org/jruby/runtime/load/IAutoloadMethod.java index b041c5e..aae6f39 100644 --- a/src/org/jruby/runtime/load/IAutoloadMethod.java +++ b/src/org/jruby/runtime/load/IAutoloadMethod.java @@ -36,5 +36,5 @@ import org.jruby.runtime.builtin.IRubyObject; */ public interface IAutoloadMethod { public String file(); - public IRubyObject load(Ruby runtime, String name); + public boolean load(Ruby runtime, String name); } diff --git a/src/org/jruby/runtime/load/LoadService.java b/src/org/jruby/runtime/load/LoadService.java index c3b1895..62cbaf4 100644 --- a/src/org/jruby/runtime/load/LoadService.java +++ b/src/org/jruby/runtime/load/LoadService.java @@ -468,12 +468,11 @@ public class LoadService { autoloadMap.remove(name); } - public IRubyObject autoload(String name) { + public void autoload(String name) { IAutoloadMethod loadMethod = autoloadMap.remove(name); if (loadMethod != null) { - return loadMethod.load(runtime, name); + loadMethod.load(runtime, name); } - return null; } public void addAutoload(String name, IAutoloadMethod loadMethod) {
Note for me: LoadService#autoloadMap should be thread-safe for real fix.
I fixed this problem on require_protection branch.
I'm going to merge the branch to master after a while.
Fixed by be86b9ace6e2b218f6c25e2763519ee1723af98a.
JRUBY-3194: Make autoload thread-safe. To make autoload thread-safe, remove loadMethod from loadService.autoloadMap after loading file, not before. Removing loadMethod before loading file would cause the second thread raising NameError since the target constant could not be defined yet. And this commit introduces 3 states as a resulting value of internal require method: LoadService#requireCommon() * LOADED ........... specified file is loaded. * ALREADY_LOADED ... specified file is already loaded. * CIRCULAR ......... circular require. For Kernel.require, ALREADY_LOADED and CIRCULAR are treated as 'false' as a resulting value. For Kenrel.autoload, when it gets CIRCULAR from requireCommon(), it returns nil instead of defined constant. CIRCULAR helps the following autoload situation. autoload :Java, 'java' require 'java' #=> 'module Java ...' => invoke autoload for :Java => circular require of 'java' => NameError since Java is not yet defined. NB: JRuby 1.6.1 does not have this circular autoload problem because 1.6.1 was using $LOADED_FEATURES for protecting runtime from threaded require. Recursive autoload just returns nil since the feature is marked as 'already loaded'.
Oops. I should not close this ticket.
My fix was for 1.9. autoload is still not thread-safe in 1.8 mode. autoload of CRuby 1.9 is thread safe, and not for CRuby 1.8.
I talked to @shyouhei who is the branch maintainer of CRuby 1.8, and we agreed that it's a bug of CRuby 1.8 which should be fixed. (CRuby dev hat) I don't know if we can fix it in the next patch release but (JRuby dev hat) we can fix this on ahead. I'll fix it for 1.8 mode, too.
Here's the "FIX" (simple!), but it causes 2 RubySpec failures.
diff --git a/src/org/jruby/RubyModule.java b/src/org/jruby/RubyModule.java index 567862b..bca4aca 100644 --- a/src/org/jruby/RubyModule.java +++ b/src/org/jruby/RubyModule.java @@ -2866,8 +2866,6 @@ public class RubyModule extends RubyObject { } public IRubyObject resolveUndefConstant(Ruby runtime, String name) { - if (!runtime.is1_9()) deleteConstant(name); - return runtime.getLoadService().autoload(getName() + "::" + name); }
Failures.
1) Module#autoload removes the constant from the constant table if load fails FAILED Expected ModuleSpecs::Autoload NOT to have constant 'Fail' but it does /home/nahi/git/jruby/spec/ruby/core/module/autoload_spec.rb:163' 2) Module#autoload removes the constant from the constant table if the loaded files does not define it FAILED Expected ModuleSpecs::Autoload NOT to have constant 'O' but it does /home/nahi/git/jruby/spec/ruby/core/module/autoload_spec.rb:171'
CRuby/JRuby 1.8 removes the constant first so the above 2 expectations are OK. CRuby/JRuby 1.9 does not remove the constant so both does not satisfy it.
I think that RubySpec should be corrected but I'm going to talk CRuby 1.8 guys how we treat this.
Re getting 1.8 changed and fixing RubySpec: yes, I agree. It's a visible change, but a minor one, and it would allow making 1.8 autoloads thread-safe.
My 'fix' for autoload thread-safety above was incomplete.
% cat autoload.rb autoload :Foo, 'constant.rb' Thread.abort_on_exception = true t1 = Thread.new { puts "thread #{Thread.current} accessing X" p Foo::X } t2 = Thread.new { puts "thread #{Thread.current} accessing X" p Foo::X } t1.join t2.join % cat constant.rb puts "#{Thread.current} in constant.rb" module Foo # simulate a slow file load or a deep chain of requires # !Foo is already defined! sleep 1 # define X X = 1 end % jruby163 autoload.rb thread #<Thread:0x63d982d> accessing X thread #<Thread:0x71051c99> accessing X NameError: uninitialized constant Foo const_missing at org/jruby/RubyModule.java:2569 __file__ at autoload.rb:11 call at org/jruby/RubyProc.java:268 call at org/jruby/RubyProc.java:232 #<Thread:0x63d982d> in constant.rb zsh: exit 1 /home/nahi/java/jruby-1.6.3/bin/jruby autoload.rb
'sleep 1' before defining Foo in constant.rb is OK, but 'sleep 1' after defining Foo does not work.
And now I implemented (hopefully) actual thread-safe autoload in autoload-threadsafe branch. Above script works as expected.
- (fast* lookup deprecation)
- (actual autoload fix)
I added autoloadingMap to RubyModule for keeping autoloading states. It consists of a RubyThread which invokes autoloading and an Object assigned to the autoload constant while autoloading. Keeping RubyThread in a cache while autoloading can be a problem?
While autoloading, constantMap keeps UNDEF for the constant and autoloadingMap keeps the assigned object instead. Constant lookup checks if the value in constantMap is UNDEF, and iif it's UNDEF, it checks autloadingMap as well.
Unlike predicted beforehand, there's no perf drop. Introducing (lazy init) autoloadingMap for each module might be a memory consumer but it's negligible for general autoload usage I think.
As a side effect, in 1.8 mode, Module#autoload does not remove the constant from constant table if
the loaded files does not define it. This causes 3 RubySpec failures. This behavior is as same as
CRuby 1.9, and needed for autoload thread-safety. CRuby 1.8 removes the constant, and it's hard to be thread-safe as far as it keeps the current behavior.
I believe this JRuby/CRuby behavior incompatibility on 1.8 mode is OK to introduce since the behavior is hard to imagine before trying it so no one would depend on the behavior intentionally.
How do you think?
The changes seems logical, and the behavioral difference for 1.8 mode seems like it's unavoidable. I've also spent a lot of time thinking through this, and without the constant holding some kind of marker to indicate there's an autoload in progress (requiring it to remain definde) there's no way to make autoload threadsafe.
I think your code may need some locking though. For example, line 2251 in RubyModule retrieves any current autoload and the following line checks if it is null. Since this is not an atomic operation, another thread could come in and install an autoload immediately after that retrieval. Odd, and buggy code on the user's part, but it's a case to consider.
You also should be synchronizing around the construction of the autoload map, so another thread doesn't swoop in, create a map, populate it with an autoload, and then it gets wiped out. There may be other places where some locking is needed. I can help, if you like.
Does your fix depend on the fast* deprecation? We don't deprecate for minor releases (1.6.4) but it would still be good to get the autoload thread-safety fix in if possible. Maybe it's too invasive for a minor release too?
Thanks for comment, Charles. Replies.
- For line 2251 in RubyModule, are you pointing LoadService.java#483? RubyModule.java#2251 seems to be a comment line in autoload-threadsafe branch. I think LoadService.java#843 is OK since it checks if it's null or not against a variable got from autoloadMap. I could be seeing wrong line.
- For autoloadMap construction, I added synchronize to getAutoloadingMapForWrite() in RubyModule.java#200. I just copied this line from getConstantMapForWrite() so it's safe, too.
- My fix does not depend on fast* deprecation. I wanted to narrow down the occurrences where look up constantMap and autoloadingMap. I could be able to create a commit without deprecating fast* methods now, but I think it's hard to merge this fix into 1.6. 1.6 still has a bug around cyclic require and concurrent autoload (be86b9ace6e2b218f6c25e2763519ee1723af98a) and the new fix depends on it. Should we merge all fixes in this issue to 1.6?
I fixed a concurrency issue at eef82e77 [1]. The root cause of the concurrency issue is that the autoloadingMap I added to RubyModule need to sync with autoloadMap at VM global LoadService. I merged these maps into autoloadMap in RubyModule and simplified synchronizing strategy.
In Ruby < 1.8, autoload is only allowed at top-level. VM global LoadService was OK for the spec. CRuby 1.8 added to support per-Module autoload and current JRuby uses 'Module#name + :: + constname' as a key for autoloadMap.
I removed autoloadMap in LoadService and moved it to RubyModule at eef82e77. There's no need to use 'Module#name + ::' prefix in the new implementation.
[3] is the patch of whole fix. This patch can be applied after the patch [2] (fast* deprecation). Ant test passes. Ant spec marks 3 fails but I think it's OK as I stated in the previous comment.
Do you think it's OK to merge?
For jruby-1_6 branch, I think we should not merge it as far as we don't receive user's request.
[1]
[2] (fast* lookup deprecation)
[3] (autoload threadsafe fix)
Nahi, feel free to merge on master. From your discussions with me last week and the fact that we want as much bake time as possible please land it now
[We can always back it out if we discover a serious problem]
It would be nice to make some torture tests over the next couple of months to verify the completeness of the fixes you have made.
Thomas, thanks for your comment. It's a big relief that you got back from Japan safely.
Merged to master at [6d182d07]. CRuby is the next to be fixed, and RubySpec is the last.
Closing.
I fixed CRuby trunk based on the same logic, and it will be released as 1.9.4 or 2.0. RubySpec won't be changed since CRuby 1.8 won't be fixed.
Will it be released on 1.6.x branch?
Gabriel: not planned. It would be released as 1.7.0, in months I heard.
Did the problem bite you on 1.6.4? How does it go on 1.7.0.dev?
You can get 1.7.0.dev snapshot package from
Yes, it's a major block for me on 1.6.x. I'm using jruby-head (from rvm) so, now everything is working fine.
I think not many people are using threads with ruby/rails application so they are cool with it. I'm using threads to migrate massive amount of data with some rake tasks and active record, and by using jruby (1.7) with threads I got like 5x performance improvement
I'm hitting this consistently on 1.6.5 with a couple of jetty/rack (fishwife) + Sinatra applications. A workaround is to preload the autoload early, before thread pool is loaded. However, every time I take a Sinatra or Rack upgrade the set of necessary pre-loads changes and I break again.
1.9 mode is not working for me yet.
Any chance the fix for this could be back-ported to a 1.6.6?
Let me know if I can try and help.
David, can you try 1.7.0.dev to confirm the issue is actually what we've fixed at master branch? You can obtain the snapshot build from.
And I'm now discussing with @headius about how we can fix this issue at coming 1.6.6. Please stay tuned.
Sure, I can try that. But do I need to get my app working in 1.9 mode to even bring this new fix into play?
No, you don't need to run it in 1.9. The fix is both for 1.8 and 1.9 in JRuby.
To close the circle on some of this discussion, we are considering including in 1.6.6 an optional flag you can enable to force all requires to synchronize against the same global lock. This is an indirect, brute-force way to solve autoload thread safety, since autoload uses require at its core. There will be libraries that block within a require, like current Rails does when you run "rails server", but this may be an acceptable middle ground for folks using 1.6.x that need both concurrency and thread-safe autoloads.
The experience gained from having the global require lock flag in the wild will also doubtless lead to library fixes (such as fixes Yehuda already plans to make for "rails server"), and give both us and MRI real-world information on the implications of a global require lock.
Preliminary findings, using my app and comparing the following two jruby versions:
jruby 1.6.5 (ruby-1.8.7-p330) (2011-10-25 9dcd388) (Java HotSpot(TM) Server VM 1.7.0_02) [linux-i386-java] jruby 1.7.0.dev (ruby-1.8.7-p357) (2012-01-18 29168ec) (Java HotSpot(TM) Server VM 1.7.0_02) [linux-i386-java]
Where the 1.7.0.dev I built from 29168ec jruby master branch today. The 1.8 mode is shown, but I also tested with 1.9.
I can reproduce the problem on both builds, and both 1.8 and 1.9 modes. However, where the problem is reproduced on 1.6.5 nearly 100% of time; I see it maybe 50% of the time on 1.7.0.dev. Here are examples of stack dumps:
(NameError) uninitialized constant Rack::Builder at org.jruby.RubyModule.const_missing(org/jruby/RubyModule.java:2610) at RUBY.new(/home/david/.gem/jruby/1.8/gems/sinatra-1.3.2/lib/sinatra/base.rb:1321) at RUBY.prototype(/home/david/.gem/jruby/1.8/gems/sinatra-1.3.2/lib/sinatra/base.rb:1311) at RUBY.call(/home/david/.gem/jruby/1.8/gems/sinatra-1.3.2/lib/sinatra/base.rb:1334) at RUBY.synchronize(/home/david/.gem/jruby/1.8/gems/sinatra-1.3.2/lib/sinatra/base.rb:1416) at RUBY.call(/home/david/.gem/jruby/1.8/gems/sinatra-1.3.2/lib/sinatra/base.rb:1334) at RUBY.service(/home/david/.gem/jruby/1.8/gems/fishwife-1.1.1-java/lib/fishwife/rack_servlet.rb:83) at org.jruby.RubyKernel.catch(org/jruby/RubyKernel.java:1115) at RUBY.service(/home/david/.gem/jruby/1.8/gems/fishwife-1.1.1-java/lib/fishwife/rack_servlet.rb:82) (NameError) uninitialized constant Rack::Protection::IPSpoofing at org.jruby.RubyModule.const_missing(org/jruby/RubyModule.java:2590) at #<Class:0x186f2c>.new(/home/david/.gem/jruby/1.8/gems/rack-protection-1.2.0/lib/rack/protection.rb:24) at org.jruby.RubyKernel.instance_eval(org/jruby/RubyKernel.java:2062) at Rack::Builder.initialize(/home/david/.gem/jruby/1.8/gems/rack-1.3.6/lib/rack/builder.rb:51) at #<Class:0x11badec>.new(/home/david/.gem/jruby/1.8/gems/rack-protection-1.2.0/lib/rack/protection.rb #<Class:0xed0ded>.prototype(/home/david/.gem/jruby/1.8/gems/sinatra-1.3.2/lib/sinatra/base.rb:1311) at #<Class:0xaa0c76>.call(/home/david/.gem/jruby/1.8/gems/sinatra-1.3.2/lib/sinatra/base.rb:1334) at #<Class:0xed0ded>.synchronize(/home/david/.gem/jruby/1.8/gems/sinatra-1.3.2/lib/sinatra/base.rb:1416) at #<Class:0xed0ded>.call(/home/david/.gem/jruby/1.8/gems/sinatra-1.3.2/lib/sinatra/base.rb:1334) at Fishwife::RackServlet.service(/home/david/.gem/jruby/1.8/gems/fishwife-1.1.1-java/lib/fishwife/rack_servlet.rb:83) at org.jruby.RubyKernel.catch(org/jruby/RubyKernel.java:1192) at Fishwife::RackServlet.service(/home/david/.gem/jruby/1.8/gems/fishwife-1.1.1-java/lib/fishwife/rack_servlet.rb:82) (NameError) uninitialized constant Rack::NullLogger at org.jruby.RubyModule.const_missing(org/jruby/RubyModule.java:2590) at #<Class:0x1fa3ae8>.setup_null_logger(/home/david/.gem/jruby/1.9/gems/sinatra-1.3.2/lib/sinatra/base.rb:1361) at #<Class:0x1fa3ae8>.setup_logging(/home/david/.gem/jruby/1.9/gems/sinatra-1.3.2/lib/sinatra/base.rb:1356) at #<Class:0x1fa3ae8>.setup_default_middleware(/home/david/.gem/jruby/1.9/gems/sinatra-1.3.2/lib/sinatra/base.rb:1342) at #<Class:0x1fa3ae8>.build(/home/david/.gem/jruby/1.9/gems/sinatra-1.3.2/lib/sinatra/base.rb:1327) at #<Class:0x1fa3ae8>.new(/home/david/.gem/jruby/1.9/gems/sinatra-1.3.2/lib/sinatra/base.rb:1321) at #<Class:0x1fa3ae8>.prototype(/home/david/.gem/jruby/1.9/gems/sinatra-1.3.2/lib/sinatra/base.rb:1311) at #<Class:0x1cc488d>.call(/home/david/.gem/jruby/1.9/gems/sinatra-1.3.2/lib/sinatra/base.rb:1334) at #<Class:0x1fa3ae8>.synchronize(/home/david/.gem/jruby/1.9/gems/sinatra-1.3.2/lib/sinatra/base.rb:1416) at #<Class:0x1fa3ae8>.call(/home/david/.gem/jruby/1.9/gems/sinatra-1.3.2/lib/sinatra/base.rb:1334) at Fishwife::RackServlet.service(/home/david/.gem/jruby/1.9/gems/fishwife-1.2.0-java/lib/fishwife/rack_servlet.rb:83) at org.jruby.RubyKernel.catch(org/jruby/RubyKernel.java:1206) at Fishwife::RackServlet.service(/home/david/.gem/jruby/1.9/gems/fishwife-1.2.0-java/lib/fishwife/rack_servlet.rb:82)
The trick with this is that it happens only on the first parallel requests as startup; and at least with these builds self-corrects on subsequent requests.
Would it help if we created a small test bed for this? Minimal self contained app and test?
Thank you. Hmm, I'll investigate it today.
> Would it help if we created a small test bed for this? Minimal self contained app and test?
It would help much. Small and minimal is good of course, but not a must. Please send me directly if you think it's not fit for public location.
I'm now trying to backport thread-safe autoload to jruby-1_6. I'll check the test bed against the backport.
Ok, here is a self contained (well, after "bundle install") app and basic test driver:
Please let me know if you have any trouble or questions with it. Hope it helps!. | http://jira.codehaus.org/browse/JRUBY-3194?focusedCommentId=267225&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2013-20 | refinedweb | 4,159 | 61.33 |
Story that rewrites itself
Hitch stories can be partially rewritten when the code is changed when a step involves verifying a block of text.
It is a time saver when you only want to make modifications to messages output by a program and ensure that those modifications are verified.
Instead of manually constructing the exact output you are expecting you can simply visually inspect the output to verify that it is the desired output.
This example shows a story being run in “rewrite” mode - where text strings are rewritten. This mode can be used when doing development when you expect textual changes.
If the story passes then the file will be rewritten with the updated contents. If the story fails for any reason then the file will not be touched.
If rewrite=False is fed through to the story engine instead, the story will always fail when seeing different text. This mode can be used when, for example, running all the stories on jenkins or when you are refactoring and not expecting textual output changes.
example.story:
Do things: steps: - Do thing: x - Do thing: y - Do thing: z - Do other thing: variable 1: a variable 2: b variations: Do more things: steps: - Do thing: c
engine.py:
from hitchstory import BaseEngine class Engine(BaseEngine): def __init__(self, rewrite=True): self._rewrite = rewrite def do_thing(self, variable): if self._rewrite: self.current_step.update( variable="xxx:\nyyy" ) def do_other_thing(self, variable_1=None, variable_2=None): if self._rewrite: self.current_step.update( variable_2="complicated:\nmultiline\nstring" )
from hitchstory import StoryCollection from pathquery import pathquery from engine import Engine
Rewritten:
StoryCollection(pathquery(".").ext("story"), Engine(rewrite=True)).ordered_by_name().play()
Will output:
RUNNING Do things in /path/to/example.story ... SUCCESS in 0.1 seconds. RUNNING Do things/Do more things in /path/to/example.story ... SUCCESS in 0.1 seconds.
No changes:
StoryCollection(pathquery(".").ext("story"), Engine(rewrite=False)).ordered_by_name().play()
Will output:
RUNNING Do things in /path/to/example.story ... SUCCESS in 0.1 seconds. RUNNING Do things/Do more things in /path/to/example.story ... SUCCESS in 0.1 seconds.
Then the example story will be unchanged.
Executable specification
Page automatically generated from rewrite-story.story. | https://hitchdev.com/hitchstory/using/alpha/rewrite-story/ | CC-MAIN-2019-09 | refinedweb | 364 | 50.53 |
NAME
Astro::SIMBAD::Client - Fetch astronomical data from SIMBAD 4.
SYNOPSIS
use Astro::SIMBAD::Client; my $simbad = Astro::SIMBAD::Client->new (); print $simbad->query (id => 'Arcturus');
NOTICE
As of release 0.027_01 the SOAP interface is deprecated. The University of Strasbourg has announced that this interface will not be supported after April 1 2014.
Because the SOAP interface is still sort of functional (except for VO-format queries) as of June 4 2014, I have revised the transition plan announced with the release of 0.027_01 on October 28 2014.
What I have done as of version 0.031_01 is to add attribute
emulate_soap_queries. This is false by default. If this attribute is true, the
query() method and friends, instead of issuing a SOAP request to the SIMBAD server, will instead construct an equivalent script query, and issue that. The deprecation warning will not be issued if
emulate_soap_queries is true, since the SOAP interface is not being used.
I intend to make the default value of
emulate_soap_queries true in the first release on or after October 1 2014, assuming SOAP queries work for that long.
When the SOAP servers go out of service (and I notice) SOAP queries will become fatal, and the default value of
emulate_soap_queries will become true if it is not already.
Eventually the SOAP code will be removed. In the meantime all tests are marked TODO, and support of SOAP by this module will be on a best-effort basis; that is, if I can make it work without a huge amount of work I will -- otherwise SOAP will become unsupported.
DESCRIPTION
This package implements several query interfaces to version 4 of the SIMBAD on-line astronomical catalog, as documented at. This package will not work with SIMBAD version 3. Its primary purpose is to obtain SIMBAD data, though some rudimentary parsing functionality also exists.
There are three ways to access this data.
- URL queries are essentially page scrapers, but their use is documented, and output is available as HTML, text, or VOTable. URL queries are implemented by the url_query() method.
- Scripts may be submitted using the script() or script_file() methods. The former takes as its argument the text of the script, the latter takes a file name.
- Queries may be made using the web services (SOAP) interface. The query() method implements this, and queryObjectByBib, queryObjectByCoord, and queryObjectById have been provided as convenience methods. As of version 0.027_01, SOAP queries are deprecated. See the NOTICE section above for the deprecation schedule.
Astro::SIMBAD::Client is object-oriented, with the object supplying not only the SIMBAD server name, but the default format and output type for URL and web service queries.
A simple command line client application is also provided, as are various examples in the eg directory.
Methods
The following methods should be considered public:
- $simbad = Astro::SIMBAD::Client->new ();
This method instantiates an Astro::SIMBAD::Client object. Any arguments will be passed to the set() method once the object is instantiated.
- $string = $simbad->agent ();
This method retrieves the user agent string used to identify this package in queries to SIMBAD. This string will be the default string for LWP::UserAgent, with this package name and version number appended in parentheses. This method is exposed for the curious.
- @attribs = $simbad->attributes ();
This method retrieves the names of all public attributes, in alphabetical order. It can be called as a static method, or even as a subroutine.
- $value = $simbad->get ($attrib);
This method retrieves the current value of the named attribute. It can be called as a static method to retrieve the default value.
- $result = Parse_TXT_Simple ($data);
This subroutine (not method) parses the given text data under the assumption that it was generated using FORMAT_TXT_SIMPLE_BASIC or something similar. The data is expected to be formatted as follows:
A line consisting of exactly '---' separates objects.
Data appear on lines that look like
name: data
and are parsed into a hash keyed by the given name. If the line ends with a comma, it is assumed to contain multiple items, and the data portion of the line is split on the commas; the resultant hash value is a list reference.
The user would normally not call this directly, but specify it as the parser for 'txt'-type queries:
$simbad->set (parser => {txt => 'Parse_TXT_Simple'});
- $result = Parse_VO_Table ($data);
This subroutine (not method) parses the given VOTable data, returning a list of anonymous hashes describing the data. The $data value is split on '<?xml' before parsing, so that you get multiple VOTables back (rather than a parse error) if that is what the input contains.
This is not a full-grown VOTable parser capable of handling the full spec (see). It is oriented toward returning <TABLEDATA> contents, and the metadata that can reasonably be associated with those contents.
NOTE that as of version 0.026_01, the requisite modules to support VO format are not required. If you need VO format you will need to install XML::Parser or XML::Parser::Lite
The return is a list of anonymous hashes, one per <TABLE>. Each hash contains two keys:
{data} is the data contained in the table, and {meta} is the metadata for the table.
The {meta} element for the table is a reference to a list of data gathered from the <TABLE> tag. Element zero is the tag name ('TABLE'), and element 1 is a reference to a hash containing the attributes of the <TABLE> tag. Subsequent elements if any represent metadata tags in the order encountered in the parse.
The {data} contains an anonymous list, each element of which is a row of data from the <TABLEDATA> element of the <TABLE>, in the order encountered by the parse. Each row is a reference to a list of anonymous hashes, which represent the individual data of the row, in the order encountered by the parse. The data hashes contain two keys:
{value} is the value of the datum with undef for '~', and {meta} is a reference to the metadata for the datum.
The {meta} element for a datum is a reference to the metadata tag that describes that datum. This will be an anonymous list, of which element 0 is the tag ('FIELD'), element 1 is a reference to a hash containing that tag's attributes, and subsequent elements will be the contents of the tag (typically including a reference to the list representing the <DESCRIPTION> tag for that FIELD).
All values are returned as provided by the XML parser; no further decoding is done. Specifically, the datatype and arraysize attributes are ignored.
This parser is based on XML::Parser.
The user would normally not call this directly, but specify it as the parser for 'vo'-type queries:
$simbad->set (parser => {vo => 'Parse_VO_Table'});
- $result = $simbad->query ($query => @args);
This method is deprecated, and will cease to work in April 2014. Please choose a method that does not use SOAP. See the NOTICE above for details.
This method issues a web services (SOAP) query to the SIMBAD database. The $query specifies a SIMBAD query method, and the @args are the arguments for that method. Valid $query values and the corresponding SIMBAD methods and arguments are:
bib => queryObjectByBib ($bibcode, $format, $type) coo => queryObjectByCoord ($coord, $radius, $format, $type) id => queryObjectById ($id, $format, $type)
where:
$bibcode is a SIMBAD bibliographic code $coord is a set of coordinates $radius is an angular radius around the coordinates $type is the type of data to be returned $format is a format appropriate to the data type.
The $type defaults to the value of the type attribute, and the $format defaults to the value of the format attribute for the given $type.
The return value depends on a number of factors:
If the query found nothing, you get undef in scalar context, and an empty list in list context.
If a parser is defined for the given type, the returned data will be fed to the parser, and the output of the parser will be returned. This is assumed to be a list, so a reference to the list will be used in scalar context. Parser exceptions are not trapped, so the caller will need to be prepared to deal with malformed data.
Otherwise, the result of the query is returned as-is.
NOTE that this functionality makes use of the SOAP::Lite module. As of version 0.026_01 of
Astro::SIMBAD::Client, SOAP::Lite is not a prerequisite of this module. If you wish to use the
query()method, you will have to install SOAP::Lite separately. This can be done after
Astro::SIMBAD::Clientis installed.
- $value = $simbad->queryObjectByBib ($bibcode, $format, $type);
This method is deprecated, and will cease to work in April 2014. Please choose a method that does not use SOAP. See the NOTICE above for details.
This method is just a convenience wrapper for
$value = $simbad->query (bib => $bibcode, $format, $type);
See the query() documentation for more information.
- $value = $simbad->queryObjectByCoord ($coord, $radius, $format, $type);
This method is deprecated, and will cease to work in April 2014. Please choose a method that does not use SOAP. See the NOTICE above for details.
This method is just a convenience wrapper for
$value = $simbad->query (coo => $coord, $radius, $format, $type);
See the query() documentation for more information.
- $value = $simbad->queryObjectById ($id, $format, $type);
This method is deprecated, and will cease to work in April 2014. Please choose a method that does not use SOAP. See the NOTICE above for details.
This method is just a convenience wrapper for
$value = $simbad->query (id => $id, $format, $type);
See the query() documentation for more information.
- $release = $simbad->release ();
This method returns the current SIMBAD4 release, as scraped from the top-level web page. This will look something like 'SIMBAD4 1.045 - 27-Jul-2007'
If called in list context, it returns ($major, $minor, $point, $patch, $date). The returned information corresponding to the scalar example above is:
$major => 4 $minor => 1 $point => 45 $patch => '' $date => '27-Jul-2007'
The $patch will usually be empty, but occasionally you get something like release '1.019a', in which case $patch would be 'a'.
Please note that this method is not based on a published interface, but is simply a web page scraper, and subject to all the problems such software is heir to. What the algorithm attempts to do is to find (and parse, if called in list context) the contents of the next <td> after 'Release:' (case-insensitive).
- $value = $simbad->script ($script);
This method submits the given script to SIMBAD4. The $script variable contains the text if the script; if you want to submit a script file by name, use the script_file() method.
If the verbatim attribute is false, the front matter of the result (up to and including the '::data:::::' line) is stripped. If there is no '::data:::::' line, the entire script output is raised as an exception.
If a 'script' parser was specified, the output of the script (after stripping front matter if that was specified) is passed to it. The parser is presumed to return a list, so if script() was called in scalar context you get a reference to that list back.
If no 'script' parser is specified, the output of the script (after stripping front matter if that was specified) is simply returned to the caller.
- $value = $simbad->script_file ($filename);
This method submits the given script file to SIMBAD, returning the result of the script. Unlike script(), the argument is the name of the file containing the script, not the text of the script. However, if a parser for 'script' has been specified, it will be applied to the output.
- $simbad->set ($name => $value ...);
This method sets the value of the given attributes. More than one name/value pair may be specified. If called as a static method, it sets the default value of the attribute.
- $value = $simbad->url_query ($type => ...)
This method performs a query by URL, returning the results. The type is one of:
id = query by identifier, coo = query by coordinates, ref = query by references, sam = query by criteria.
The arguments depend on on the type, and are documented at. They are specified as name => value. For example:
$simbad->url_query (id => Ident => 'Arcturus', NbIdent => 1 );
Note that in an id query you must specify 'Ident' explicitly. This is true in general, because it is not always possible to derive the first argument name from the query type, and consistency was chosen over brevity.
The output.format argument can be defaulted based on the object's type setting as follows:
txt becomes 'ASCII', vo becomes 'VOTable'.
Any other value is passed verbatim.
If the query succeeds, the results will be passed to the appropriate parser if any. The reverse of the above translation is done to determine the appropriate parser, so the 'vo' parser (if any) is called if output.format is 'VOTable', and the 'txt' parser (if any) is called if output.format is 'ASCII'. If output.format is 'HTML', you will need to explicitly set up a parser for that.
The type of HTTP interaction depends on the setting of the post attribute: if true a POST is done; otherwise all arguments are tacked onto the end of the URL and a GET is done.
Attributes
The Astro::SIMBAD::Client attributes are documented below. The type of the attribute is given after the attribute name, in parentheses. The types are:
boolean - a true/false value (in the Perl sense); hash - a reference to one or more key/value pairs; integer - an integer; string - any characters.
Hash values may be specified either as hash references or as strings. When a hash value is set, the given value updates the hash rather than replacing it. For example, specifying
$simbad->set (format => {txt => '%MAIN_ID\n'});
does not affect the value of the vo format. If a key is set to the null value, it deletes the key. All keys in the hash can be deleted by setting key 'clear' to any true value.
When specifying a string for a hash-valued attribute, it must be of the form 'key=value'. For example,
$simbad->set (format => 'txt=%MAIN_ID\n');
does the same thing as the previous example. Specifying the key name without an = sign deletes the key (e.g. set (format => 'txt')).
The Astro::SIMBAD::Client class has the following attributes:
- autoload (boolean)
This attribute determines whether setting the parser should attempt to autoload its package.
The default is 1 (i.e. true).
- debug (integer)
This attribute turns on debug output. It is unsupported in the sense that the author makes no claim what will happen if it is non-zero.
The default value is 0.
- delay (integer)
This attribute sets the minimum delay in seconds between requests, so as not to overload the SIMBAD server. If Time::HiRes can be loaded, you can set delays in fractions of a second; otherwise the delays will be rounded to the nearest second.
Delays are from the time of the last request to the server, no matter which object issued the request. The delay can be set to 0, but not to a negative number.
The default is 3.
- emulate_soap_queries (boolean)
If this attribute is true, the methods that would normally use the SOAP interface (that is,
query()and friends) use the script interface instead.
The purpose of this attribute is to give the user a way to manage the deprecation and ultimate removal of the SOAP interface from the SIMBAD servers. It may go away once that interface disappears, but it will be put through a deprecation cycle.
The default is false, but will become true once the University of Strasbourg shuts down its SOAP server.
- format (hash)
This attribute holds the default format for a given query() output type. See for how to specify formats for each output type. Output type 'script' is used to specify a format for the script() method.
The format can be specified either literally, or as a subroutine name or code reference. A string is assumed to be a subroutine name if it looks like one (i.e. matches (\w+::)*\w+), and if the given subroutine is actually defined. If no namespace is specified, all namespaces in the call tree are checked. If a code reference or subroutine name is specified, that code is executed, and the result becomes the format.
The following formats are defined in this module:
FORMAT_TXT_SIMPLE_BASIC - a simple-to-parse text format providing basic information; FORMAT_TXT_YAML_BASIC - pseudo-YAML (parsable by YAML::Load) providing basic info; FORMAT_VO_BASIC - VOTable field names providing basic information.
The FORMAT_TXT_YAML_BASIC format attempts to provide data structured similarly to the output of Astro::SIMBAD, though Astro::SIMBAD::Client does not bless the output into any class.
A simple way to examine these formats is (e.g.)
use Astro::SIMBAD::Client; print Astro::SIMBAD::Client->FORMAT_TXT_YAML_BASIC;
Before a format is actually used it is preprocessed in a manner depending on its intended output type. For 'vo' formats, leading and trailing whitespace are stripped. For 'txt' and 'script' formats, line breaks are stripped.
The default specifies formats for output types 'txt' and 'vo'. The 'txt' default is FORMAT_TXT_YAML_BASIC; the 'vo' default is FORMAT_VO_BASIC.
There is no way to specify a default format for the 'script_file' method.
- parser (hash)
This attribute specifies the parser for a given output type.
Parsers may be specified by either a code reference, or by the text name of a subroutine. If specified as text and the name is not qualified by a package name, the calling package is assumed. The parser must be defined, and must take as its lone argument the text to be parsed.
If the parser for a given output type is defined, query results of that type will be passed to the parser, and the result returned. Otherwise the query results will be returned verbatim.
The output types are anything legal for the query() method (i.e. 'txt' and 'vo' at the moment), plus 'script' for a script parser. All default to '', meaning no parser is used.
- post (boolean)
This attribute specifies that url_query() data should be acquired using a POST request. If false, a GET request is used.
The default is 1 (i.e. true).
- server (string)
This attribute specifies the server to be used. As of January 26 2007, only 'simbad.u-strasbg.fr' is valid, since as of that date Harvard University has not yet converted their mirror to SIMBAD 4.
The default is 'simbad.u-strasbg.fr'.
- type (string)
This attribute specifies the default output type. Note that although SIMBAD only defined types 'txt' and 'vo', we do not validate this, since the SIMBAD web site hints at more types to come. SIMBAD appears to treat an unrecognized type as 'txt'.
The default is 'txt'.
- url_args (hash)
This attribute specifies default arguments for url_query method. These will be applied only if not specified in the method call. Any argument given in the SIMBAD documentation may be specified. For example:
$simbad->set (url_args => {coodisp1 => d});
causes the query to return coordinates in degrees and decimals rather than in sexagesimal (degrees, minutes, and seconds or hours, minutes, and seconds, as the case may be.) Note, however, that VOTable output does not seem to be affected by this.
The initial default for this attribute is an empty hash; that is, no arguments are defaulted by this mechanism.
- verbatim (boolean)
This attribute specifies whether script() and script_file() are to strip the front matter from the script output. If false, everything up to and including the '::data:::::' line is removed before passing the output to the parser or returning it to the user. If true, the script output is passed to the parser or returned to the user unmodified.
The default is 0 (i.e. false). | https://metacpan.org/pod/Astro::SIMBAD::Client | CC-MAIN-2015-35 | refinedweb | 3,284 | 63.9 |
Here’s a surprising result: The least common multiple of the first n positive integers is approximately exp(n).
More precisely, let φ(n) equal the log of the least common multiple of the numbers 1, 2, …, n. There are theorems that give upper and lower bounds on how far φ(n) can be from n. We won’t prove or even state these bounds here. See [1] for that. Instead, we’ll show empirically that φ(n) is approximately n.
Here’s some Python code to plot φ(n) over n. The ratio jumps up sharply after the first few values of n. In the plot below, we chop off the first 20 values of n.
from scipy import arange, empty from sympy.core.numbers import ilcm from sympy import log import matplotlib.pyplot as plt N = 5000 x = arange(N) phi = empty(N) M = 1 for n in range(1, N): M = ilcm(n, M) phi[n] = log(M) a = 20 plt.plot(x[a:], phi[a:]/x[a:]) plt.xlabel("$n$") plt.ylabel("$\phi(n) / n$") plt.show()
Here’s the graph this produces.
[1] J. Barkley Rosser and Lowell Schoenfeld. Approximate formulas for some functions of prime numbers. Illinois Journal of Mathematics, Volume 6, Issue 1 (1962), 64-94. (On Project Euclid)
6 thoughts on “Least common multiple of the first n positive integers”
Interesting to compare this with Stirling’s formula, as a measure of the asymptotic badness of $n!$ as a first guess or upper bound on $\text{lcm}(1:n)$.
Isn’t \phi(n) a bad choice of notation here, given Euler’s totient function?
Amazing tidbit of knowledge! (sorry for the content-less remark)
Very nice, indeed. I did not know that. But I am far from an expert in this kind of number theory.
For n=1000000000 we have phi(n)/n = 0.9999824279662022 (obviously by a method different from the method in the post).
@David Malone: Yes, it’s unfortunate notation, but it matches the paper I referenced.
In the paper you reference, phi(n) denotes the number of positive integers <= n and relatively prime to n.
In the paper, psi(n) is the logarithm of the least common multiple of all positive integers <= n.
It seems you are using the inequality
(1-epsilon)x< psi(x) e^5000.
Do you know if it tends to a limit? | https://www.johndcook.com/blog/2017/06/28/least-common-multiple-of-the-first-n-positive-integers/ | CC-MAIN-2021-49 | refinedweb | 398 | 68.97 |
By: John Kaster
Abstract: This article provides an overview of the architecture, design, and intent of the Visual Component Library for .NET included with the Delphi for .NET preview
By John Kaster, Eddie Churchill, and Danny Thorpe
Borland has included experimental versions of the VCL (Visual Component Library) for .NET in an update to the Delphi for .NET preview, which is available for download by registered users. Even with access to this source code, many Delphi users are still confused about exactly what the VCL for .NET is and how they can use it. Hopefully we can clear up some of this confusion.
If you haven't already read the BDN article on the Delphi for .NET compiler, I recommend doing that before reading this article.
In this article, I will be providing graphics and information extracted from a presentation originally given by Eddie Churchill at BorCon 2002 in Anaheim, California, USA. These slides say "BorCon Tokyo" because I presented them there after Eddie presented them in California. BorCon 2003 is in the planning stages now, so be sure to mark your calendar so you don't miss it.
What is probably the chief source of confusion regarding the VCL for .NET is based on imprecise usage of the term "VCL" by both Borland and the users of the framework that comes with the native code development products Delphi, C++Builder, and Kylix. "Visual Component Library" was a misnomer for the entire component framework, since the majority of the framework was non-visual. When Borland introduced Kylix, we also finally came up with an official name for the Delphi component framework: "CLX." CLX stands for "Component Library for Cross-Platform" development, and is pronounced the same as "clicks."
CLX is now the official term describing the entire component framework that is used in Delphi, C++Builder, and Kylix. The VCL is the set of components that provide a visual interface for applications running on top of the Win32 API. You can see the VCL listed in the upper right tier of the architectural diagram below.
Defining "CLX," the Component Library for cross-platform development
With Kylix, Borland introduced VisualCLX. VisualCLX is the set of components used in cross-platform GUI applications. Qt, a class library from TrollTech, is used as the graphical API for VisualCLX. VisualCLX is available in all versions of Kylix, and in Delphi and C++Builder 6 and onwards, in Professional or higher SKUs. The interface to the VisualCLX controls is nearly identical to the interface to the VCL controls. They perform the same functions, and are mutually exclusive in an application. CLX supports writing identical source code that can talk to the properties, methods, and events of either the VCL or VisualCLX. You specify which set of controls to use by referring to the appropriate source code unit names, as the following code snippet illustrates.
{$ifdef VisualCLX}
uses
Classes, QControls, QComCtrls;
{$endif}
{$ifdef VCL}
uses
Classes, Controls, ComCtrls;
{$endif}
...
{ the same source code for talking to either
set of controls goes here, such as }
Form1.Caption := 'Hello World!';
Button1.Enabled := True;
....
This source code sample is similar to what is required in your source code when default namespace searching is not available. More on this later.
VisualCLX and VCL side by side
The difference between VCL and VisualCLX lies in their display API.
VisualCLX on Linux
VisualCLX calls the Qt class library on Linux, which in turn calls XWindow for graphical display.
VisualCLX on Win32
VisualCLX calls the Qt class library on Win32, which in turn calls Win32 for graphical display.
VCL on Win32
The VCL calls Win32 APIs for its graphical display.
The following guidelines may be helpful when making a decision between VCL and VisualCLX for your GUI application.
Whether you use VCL or VisualCLX, your application will use CLX, the Delphi framework.
The packages that contain the visual components for VCL and VisualCLX are only nine (9) of the over forty (40) packages that comprise CLX. The vast majority of the classes and components in CLX are non-visual in nature, and have identical programming interfaces for any platform.
.NET includes controls for building visual applications on the platform. These controls are included in an assembly called System.Windows.Forms. Many of these controls are very similar to existing visual controls from CLX. This similarity is primarily caused by the environment in which both sets of controls run, and the fact that they are a component-based implementation.
There are many similarities between the current Delphi application environment and the .NET application environment:
The following slide illustrates the similarity in the architecture of Winforms and VCL. You can compare this to the VCL slide.
.NET WinForms on Win32
Let's look more closely at the implementation of WinForms for the .NET framework that runs on the Windows operating system. A managed layer (System.Windows.Forms) makes calls to what is termed "Native Methods" in .NET. This is actually how everything in the .NET framework (and the Java framework, for that matter) is implemented. At some point, calls to native methods/machine code are required.
.NET native method calls
This is an important point to note, because it is a very important similarity between the VCL and .NET framework, as I'll explain later.
The language neutrality as defined in the CIL (Common Intermediate Language), the CLS (Common Language Specification), and the CLR (Common Language Runtime) of the .NET platform also drives some differences between the frameworks.
TGraphicControl
As mentioned previously, an important implementation detail the VCL for .NET has in common with System.Windows.Forms is a managed layer that makes calls to the native APIs for Win32. Furthermore, VCL for .NET can be used in conjunction with System.Windows.Forms controls, unlike the VCL for Win32 and VisualCLX, which are mutually exclusive. This provides tremendous flexibility for migrating your existing Delphi applications to .NET. You can quickly move the entire application by using VCL for .NET for cross-platform language compatibility, then gradually replace the VCL code with standard CLR code should you choose to do so.
The following slide illustrates what will probably be a typical usage pattern for Delphi applications on .NET.
Borland.VCL and .NET WinForms coexisting in a Delphi application
There is another slide with the arrow going the other way, but I think you get the idea.
These slides make the level of similarity between .NET WinForms and the VCL misleading. We originally investigated using class helpers and wrappers to produce the VCL for .NET as simple name aliases to the visual components provided by .NET WinForms. We eventually decided that while WinForms shares many design patterns with VCL, the implementation semantics are too different to simply map VCL class names onto Winform classes. In particular, the initialization and finalization semantics of WinForms controls are radically different from VCL.
If you are willing to accept changes in fundamental semantics, you could do a simple name mapping between VCL and WinForms. However, that is not an acceptable solution to achieve a high degree of compatibility with existing VCL source code that relies heavily on the rich semantics of the VCL framework.
All Borland-produced redistributable assemblies will be strong named and codesigned by Borland. That should allow the assemblies to be handled by the default security policies set up by the .NET framework and the OS.
The whole point of VCL for .NET is to replicate the VCL architecture in .NET, all the way down to the message methods and other Win32 API bits. VCL for .NET is the fastest way to move existing desktop application code from VCL (or CLX) to desktop .NET. VCL for .NET is not intended to be used on non-Win32 platforms.
Namespace support is a good topic to bring up when considering the migration of existing application code. With the addition of namespace support to the compiler and IDE, compatibility among the various platforms Delphi supports will be increased.
The unit names (which are what make the namespace value) are being expanded in Delphi for .NET, and in future releases of Delphi for Win32 and Delphi for Linux to take advantage of the compiler and IDE's planned namespace support.
Examples of values in the run-time namespace
Examples of values in the VCL namespace
Of particular interest to people who want to use Delphi for cross-platform code are the VCL namespace entries. In Kylix 3 and Delphi 7, the VisualCLX source files that correspond to the VCL source files have a "Q" prepended to them. For example, the VCL source file Controls.pas becomes the CLX source file QControls.pas. With the support for default namespaces, the file name will be quite different when fully qualified, but the same when left to the default lookup rules.
Controls.pas
QControls.pas
The following diagram illustrates how the namespace resolution will probably work:
This namespace resolution logic will allow you to write Delphi code that simply says:
uses SysUtils;
and the SysUtils unit will end up referencing the appropriate code for the type of project you are developing.
SysUtils
We plan to support packages in Delphi for .NET (it's not done yet) so that you will have the same build and deploy options you have today in Delphi 7: link all code into a single large executable, or link to code that resides in packages to produce a much smaller executable.
These packages will be similar in size to the existing packages. If you choose to use the run-time package option for your applications, the packages will have to be deployed/installed on the machine running the application.
There are always trade-offs when choosing a technical direction to go. The following list is not intended to be all-inclusive or even definitive, but should give you some broad implementation issues to consider when determining which way you will implement solutions for .NET.
The component set solutions discussed here are heavily reliant on .NET's support for native methods to call the underlying Win32 API. Microsoft is committed to creating managed APIs for all their existing technology. All new technology Microsoft produces will ship with managed APIs. Because Delphi for .NET has complete access to the .NET run-time by being a first-class language for .NET, you have complete access to any new Microsoft technology immediately.
For example, DirectX 9 SDK includes .NET support and .NET language examples. Looks like fun! No more header translations required for Delphi, and you have complete access to the SDK instantly!
This commitment also works in your favor. Your managed classes and components will be first-class citizens to the rest of .NET.
Like all "portable" frameworks, different implementations of .NET may not include all of .NET. For example, ROTOR, includes a vast majority of what the Desktop version of .NET delivers, but does not include (among other things) the System.Windows.Forms assembly. This means that if you create an application that uses System.Windows.Forms it wont run on ROTOR.
Likewise, the .NET compact framework includes only a subset of what is found in the Desktop version of .NET. While it delivers System.Windows.Forms, it does not include all of it. There are a number of methods, properties and classes missing. Due to the late JITing and linking you might not know that something your code calls is unavailable until it is too late.
Even if you dont want to use .NET, it still will make your life easier. More of the SDKs will be available to you faster, without resorting to "C"-like code. From a native Delphi application, the whole of .NET is available via COM interop.
While it may look like we are changing everything again, it is not quite a bad as it might seem at first glance. Delphi is alive and vibrant, and of course will evolve as technology changes. The Delphi language is getting lots of exciting new features and still remaining compatible with existing code. Many of the extensions to it will also be available in future releases of the native compilers, perpetuating a high degree of compatibility among the platforms.
The run-time library (RTL) is still there and still very familiar. It is simply renamed to Borland.Delphi to take advantage of the new namespace support in the compiler.
The VCL is still there, just renamed Borland.VCL. We have no plans to get rid of VCL or deprecate it. Your current Delphi skill set will still apply, but you are also free to dig into .NET's Systems.Windows.Forms as well.
We plan to make the vast majority of the existing Delphi framework available on .NET, including dbExpress, DataSnap, WebSnap, and so on.
With Delphi's support for the .NET platform, Delphi developers have a strategic advantage for writing feature rich, high performance applications that run natively not only on .NET, but also Win32 and Linux. If you haven't already started working with Delphi for .NET, you can start by purchasing any version of Delphi 7. When you come to .NET, you'll see many familiar faces there, and find that Delphi still gives you the power you need to get the job done.
Enjoy!
Server Response from: SC1 | http://edn.embarcadero.com/article/29460 | crawl-002 | refinedweb | 2,211 | 57.37 |
You can subscribe to this list here.
Showing
10
results of 10
Hi, folks! I'm again encountering the problem - imshow generating a
MemoryError exception trying to image a very large array - discussed in this
thread I started almost a year and a half ago.
Question 1) has anything changed in MPL in that time interval which would
provide an "easy" solution?
Question 2) is there some way I can add pieces of the array incrementally to
the image into their proper place, i.e., modify the following code:
ax.imshow(image[0:ny/2+1, 0:nx/2+1]) # upper left corner of image
ax.hold(True)
ax.imshow(argW[ny/2+1:-1, 0:nx/2+1]) # lower left corner of image
ax.imshow(argW[0:ny/2+1, nx/2+1:-1]) # upper right corner of image
ax.imshow(argW[ny/2+1:-1, nx/2+1:-1]) # lower right corner of image
so that each subsequent imshow doesn't cover up the previous imshow and is
placed in the right place relative to each of the other pieces?
Question 3) Would such incremental addition work to get around the memory
limit, or does the fact (if the following statement is in fact correct) that
eventually the entire too-large image needs to be handled doom this strategy
to failure?
Question 4) would I have this problem if I was running 64 bit (i.e., OS, as
well as 64 bit builds of Python, numpy, MPL, etc.), i.e., is it most likely
a memory addressing problem?
Question 5) can anyone suggest any other work-around(s)?
Thanks!
DG
On Sat, Sep 6, 2008 at 4:00 PM, David Goldsmith <d_l_goldsmith@...>wrote:
> Ah, Ich verstehe now. I'll try RGBA-ing it; in the meantime, let me know
> if the colormapping conversion gets changed to 32 bit. Thanks again!
>
> DG
>
> --- On Sat, 9/6/08, Eric Firing <efiring@...> wrote:
>
> > From: Eric Firing <efiring@...>
> > Subject: Re: [Matplotlib-users] imshow size limitations?
> > To: d_l_goldsmith@...
> > Cc: matplotlib-users@...
> > Date: Saturday, September 6, 2008, 3:13 PM
> > David Goldsmith wrote:
> > > Thanks, Eric!
> > >
> > > --- On Sat, 9/6/08, Eric Firing
> > <efiring@...> wrote:
> > >
> > > -- snip OP --
> > >
> > >> It looks to me like you simply ran out of
> > memory--this is
> > >> not an imshow
> > >> problem as such. Your array is about 1e8
> > elements, and as
> > >> floats that
> > >> would be close to a GB--just for that array alone.
> > Do you
> > >
> > > Well, I anticipated that, so I do initialize the
> > storage for the numpy array as numpy.uint8 and have
> > confirmed that the data in the array returned by the
> > function which creates it remains numpy.uint8, so it should
> > "only" be ~100MB (indeed, the .na file into which
> > I tofile it is 85,430 KB, just as it should be for a 10800 x
> > 8100 array of uint8 elements). And the ax.imshow statement
> > doesn't (directly) cause the crash (but I don't know
> > that it isn't making either a float copy or an in-place
> > conversion of the array). So, AFAIK, right up until the
> > statement:
> > >
> > > canvas.print_figure('HiResHex')
> > >
> > > the data being imaged are all numpy.uint8 type.
> >
> > Yes, but it looks to me like they are still getting
> > color-mapped, and
> > this requires conversion to numpy.float. This may be a bad
> > aspect of
> > the mpl design, but it is quite deeply embedded. I suspect
> > the best we
> > could do would be to use float32 instead of float64;
> > certainly for color
> > mapping one does not need 64 bits.
> >
> > Using numpy.uint8 helps only if you are specifying RGBA
> > directly,
> > bypassing the colormapping.
> >
> > >
> > >> really need
> > >> all that resolution?
> > >
> > > Well, there's the rub: I fancy myself a fractal
> > "artist" and I have
> > > access to an HP DesignJet 500ps plotter with a maximum
> > resolution of
> > > 1200 x 600 dpi. For the size images I'm trying to
> > make (I'm hoping to go
> > > even bigger than 36" x 27", but I figured
> > that as a good starting point)
> > > even I regard _that_ resolution as too much - I was
> > thinking of 300 x
> > > 300 dpi (which is its "normal" resolution)
> > as certainly worthy of giving
> > > a try. :-)
> >
> > >> If you do, you will probably have to
> > >> get a much
> > >> more capable machine.
> > >
> > > Possible, but I was hoping to generate at least one
> > "proof" first to determine how hard I'd need
> > to try.
> > >
> > >> Otherwise, you need to knock down
> > >> the size of
> > >> that array before trying to plot or otherwise
> > manipulate
> > >> it.
> > >
> > > Forgive me, but I'd like a more detailed
> > explanation as to why: I
> > > have
> > > ample (~35 GB, just on my built-in disc, much more
> > than that on external
> > > discs) harddisc space - isn't there some way to
> > leverage that?
> >
> > I don't know enough about virtual memory
> > implementations--especially on
> > Win or Mac--to say. In practice, I suspect you would find
> > that as soon
> > as you are doing major swapping during a calculation, you
> > will thrash
> > the disk until you run out of patience.
> >
> >
> > >> With respect to imshow, probably you can get it to
> > handle
> > >> larger images
> > >
> > > Again, imshow doesn't appear to be the culprit
> > (contrary to my
> > > original subject line), rather it would appear to be
> > > canvas.print_figure. (While I'm on the subject of
> > canvas.print_figure,
> > > isn't there some way for MPL to "splash"
> > the image directly to the
> > > screen, without first having to write to a file? I
> > didn't ask this
> > > before because I did eventually want to write the
> > image to a file, but I
> > > would prefer to do so only after I've had a look
> > at it.)
> >
> > It is imshow in the sense that most of the action in mpl
> > doesn't happen
> > when you call imshow or plot or whatever--they just set
> > things up. The
> > real work is done in the backend when you display with
> > show() or write
> > to a file.
> >
> >
> > >> if you feed them in as NxMx4 numpy.uint8 RGBA
> > arrays--but I
> > >> doubt this
> > >> is going to be enough, or the right approach, for
> > your
> > >> present situation.
> > >
> > > Right: I don't see how that would be better than
> > having a single 8
> > > bit
> > > datum at each point w/ color being determined from a
> > color map (which is
> > > how I'd prefer to do it anyway).
> >
> > The way it is better is that it avoids a major operation,
> > including the
> > generation of the double-precision array. The rgba array
> > can go
> > straight to agg.
> >
> > Eric
> >
> >
> > > Thanks again,
> > >
> > > DG
> > >> Eric
> > >>
> > >>> Platform Details: MPL 0.91.2 (sorry, I
> > didn't
> > >> realize I was running such an old version, maybe I
> > just need
> > >> to upgrade?), Python 2.5.2, Windows XP 2002 SP3,
> > 504MB
> > >> physical RAM, 1294MB VM Page size (1000MB init.,
> > 5000MB max)
> > >>> Thanks!
> > >>>
> > >>> DG
>
>
>
>
Hi."
(It's unfortunately in german, but the graphics are self-explaining)
A school mate working together with me on the project has worked that out.
H = number of corners of the front triangle lying inside of the back triangle
V = number of corners of the back triangle lying inside of the front triangle
S = number of the collinear edges of the two triangles
Z = number of intersection points of the two tringles' edges, minus
the number of those occuring because of collinear edges.
Red: front triangle
Black: back triangle
Green: subdivision lines in the back triangle.
I will check my implementation in C++ today. I will maybe need some
advice in making a Python module out of it.
Friedrich
Hi,
I am a fan of the STIX sans serif typeface for mathtext, but I was
wondering if it was possible to use this typeface for non-mathtext
text as well. My main use for this is ticklabels, which seem to render
in the non-mathtext font no matter what I do. I have tried using
ax.xaxis.set_major_formatter(P.ScalarFormatter(useMathText=True))
to force the (scalar) major ticks to use mathtext and thus stixsans,
but this does not work. Am I doing something wrong here?
A better solution (for me) would be to use STIX sans serif faces as
the regular font, ie something like
rcParams["font.sans-serif"] = 'stixsans'
I know this is possible for serif fonts, and I know that a post to the
list back in 2008 noted that it was not (at that time) possible for
sans-serifs. I am curious to know if there has been any progress on
allowing STIX sans-serifs in non-mathtext text.
thanks,
Jeff Oishi
All,
On:
In the Axes Container section, you can see in what follows that I got some very different responses than what is shown on the page:
In [1]: fig=figure()
In [3]: ax=fig.add_subplot(111)
In [4]: rect=matplotlib.patches.Rectangle((1,1),width=5,height=12)
In [5]: print rect.get_axes()
------> print(rect.get_axes())
None
In [6]: print rect.get_transform()
------> print(rect.get_transform())
BboxTransformTo(Bbox(array([[ 1., 1.],
[ 6., 13.]])))
In [7]: ax.add_patch(rect)
Out[7]: <matplotlib.patches.Rectangle object at 0x2144db0>
In [8]: print rect.get_axes()
------> print(rect.get_axes())
Axes(0.125,0.1;0.775x0.8)
In [9]: print rect.get_transform()
------> print(rect.get_transform())
CompositeGenericTransform(BboxTransformTo(Bbox(array([[ 1., 1.],
[ 6., 13.]]))), [10]: print ax.transData
-------> print(ax.transData) [11]: print ax.get_xlim()
-------> print(ax.get_xlim())
(0.0, 1.0)
In [12]: print ax.dataLim.get_bounds()
-------> print(ax.dataLim.get_bounds())
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/6.0.0/Examples/matplotlib-0.99.1.1/event_handling/<ipython console> in <module>()
AttributeError: 'Bbox' object has no attribute 'get_bounds'
In [13]: ax.autoscale_view()
In [14]: print ax.get_xlim()
-------> print(ax.get_xlim())
(1.0, 6.0)
In [15]: ax.figure.canvas.draw()
David.
On Fri, Feb 26, 2010 at 6:35 PM, mikey <abc.mikey@...> wrote:
> Sorry a rather stupid question as there are '.'s available. Although I
> wouldn't mind knowing if it's possible to tinker with the sizes of
> 'o's and '.'s.
See the "markersize" parameter
JDH
On Fri, Feb 26, 2010 at 6:29 PM,
>
>
Hey,
Try:
I[1]: plt.plot(range(100), "o", markersize=100)
I[2]: plt.plot(range(100), "o", markersize=1)
I[3]: plt.figure(); plt.plot(range(100), "o", ms=1)
>
> ------------------------------------------------------------------------------
> Download Intel® Parallel Studio Eval
> Try the new software tools for yourself. Speed compiling, find bugs
> proactively, and fine-tune applications for parallel performance.
> See why Intel Parallel Studio got high marks during beta.
>
> _______________________________________________
> Matplotlib-users mailing list
> Matplotlib-users@...
>
>
--
Gökhan
> -----Original Message-----
> From: mikey [mailto:abc.mikey@...]
> Sent: Friday, February 26, 2010 4:29 PM
> To: matplotlib-users@...
> Subject: [Matplotlib-users] Change the size of the plotted 'o's ?
>
>?
Mike,
There are a couple of ways. See below:
# untested, might have typos ~~~~
import numpy as np
import matplotlib.pyplot as pl
x = np.random.randn(20)
fig = pl.figure()
ax = pl.add_subplot(1,1,1)
# you can specify the marker size two ways directly:
ax.plot(x, 'ko', markersize=4) # size in points
ax.plot(x, 'bs', ms=4) % ms is just an alias for markersize
# or you can specify it after the plotting:
X = ax.plot(x, 'ko') # X is a *list* of line2d objects...
X[0].set_markersize(4) # set_ms works too
# or...
pl.setp(Y, markersize=4) # again, ms works.
# ~~~~~~~~
For a list of all the properties you can tweak, type:
pl.getp(<object>)
HTH,
-paul
Sorry a rather stupid question as there are '.'s available. Although I
wouldn't mind knowing if it's possible to tinker with the sizes of
'o's and '.'s.
Thanks,
Mikey
On 27 February 2010 00:29,
> | http://sourceforge.net/p/matplotlib/mailman/matplotlib-users/?viewmonth=201002&viewday=27 | CC-MAIN-2014-42 | refinedweb | 1,929 | 65.62 |
14 December 2006 12:15 [Source: ICIS news]
Corrected: In the ICIS news story headlined "Wilmar to buy PPB Palm Oil for $2.7m" dated 14 December, 2006, please read in the headline ... "Wilmar to buy PPB Palm Oil for $2.7bn" ... instead of ... "Wilmar to buy PPB Palm Oil for $2.7m"... and please read in the first paragraph ... to buy its affiliate PPB Palm Oil for $2.7bn ... instead of ... to buy its affiliate PPB Palm Oil for $2.7m ... A corrected story follows.
?xml:namespace>
SINGAPORE (ICIS news)--Wilmar International is to buy its affiliate PPB Palm Oil for $2.7bn, creating one of the largest palm oil producers and refiners in Asia, a senior company official said late on Thursday.
The combined palm oil refining capacity of the two companies will be 9.6m tonnes/year, Kuok Khoon Hong, Wilmar’s chairman and ceo, said at a press conference.
Wilmar will also become a significant biodiesel producer with a total capacity of 1.15m tonnes/year by end-2007, he added. The companies are due to start up three plants in ?xml:namespace>
Kuok was confident that the company would survive in the biodiesel business in the long term as energy prices continued to look favourable.
Indonesian palm oil prices were lower than
The combined group would also be a dominant processor and merchandiser of agricultural products such as edible oil, oleochemicals and specialty fat in key markets | http://www.icis.com/Articles/2006/12/14/1114504/corrected-wilmar-to-buy-ppb-palm-oil-for-2.7bn.html | CC-MAIN-2014-15 | refinedweb | 242 | 66.44 |
Using Sphinx for PHP Project Documentation.
RTD is widely used in the industry. It hosts powerful docs such as Guzzle’s, PhpSpec’s and many more. It supports reST alongside MD (or, to be more accurate, MD alongside reST), which is a huge plus as RST files are more suitable for highly technical documents. It can be run locally and generate offline-friendly HTML files, but it can also compile from documentation source available online and and be automatically hosted as a subdomain of
readthedocs.org.
That said, setting it up for a PHP project has some caveats, so we’ll go through a basic guide in this tutorial.
TL;DR
If you’re just looking for the list of commands to get up and running quickly:
sudo pip install sphinx sphinx-autobuild sphinx_rtd_theme sphinxcontrib-phpdomain mkdir docs cd docs sphinx-quickstart wget
After the quickstart setup, to activate the theme and PHP syntax highlights run:
sed -i '/extensions = \[\]/ c\extensions = \["sphinxcontrib.phpdomain"\]' source/conf.py echo ' import sphinx_rtd_theme html_theme = "sphinx_rtd_theme" html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] # Set up PHP syntax highlights from sphinx.highlighting import lexers from pygments.lexers.web import PhpLexer lexers["php"] = PhpLexer(startinline=True, linenos=1) lexers["php-annotations"] = PhpLexer(startinline=True, linenos=1) primary_domain = "php" ' >> source/conf.py
To build HTML:
make html
or
sphinx-build -b html source build
The latter command supports several options you can add into the mix.
Sphinx Installation
ReadTheDocs uses Sphinx behind the scenes, and as such is a through-and-through Python implementation. To make use of it, we need to install several prerequisites. We’ll use our trusty Homestead Improved to set up a brand new environment for us to play in.
After the VM is set up, SSH into it with
vagrant ssh and execute:
sudo pip install sphinx sphinx-autobuild
If you don’t have the command
pip, follow the official instructions to get it installed, or just execute the following if on Ubuntu:
sudo apt-get install python-sphinx python-setuptools sudo easy_install pip
These tools have now made the command
sphinx-quickstart available.
Recommended Folder Layout
Generally, you’ll either have:
- the documentation in the same folder as the project you’re documenting, or…
- the documentation in its own repository.
Unless the documentation spans several projects or contexts, it is recommended it be in the same folder as the project. If you’re worried about bloating the size of your project when Composer is downloading it for use, the docs can be easily kept out of the distribution by being placed into the
.gitattributes file (see here).
When we run the command
sphinx-quickstart, it’ll ask us for the root folder of our docs. This is the folder into which all other subfolders regarding the docs will go. Note that this is not the project’s root folder. If your project is at
my-php-project, the root of the docs will likely be in something like
my-php-project/docs.
Next, Sphinx offers to either make a separate
_build folder for the compiled version of the docs (e.g. HTML), while the sources will be in the root (defined in the previous step), or to make two folders under root:
source and
build, keeping the root clean. Feel free to choose whichever option you prefer (we went with the latter, for cleanliness and structure).
Follow the rest of the instructions to set some meta data, select
.rst as the file extension, and finally answer “no” to all questions about additional plugins – those refer to Python projects and are outside our jurisdiction. Likewise, when asked to create
make files, accept.
Custom Theme
Building the documents with the command
make html produces the HTML documents in the
build folder. Opening the documents in the browser reveals a screen not unlike the following:
That’s not very appealing. This theme is much more stylish and modern. Let’s install it.
pip install sphinx_rtd_theme
Then, in the
source folder of the docs root, find the file
conf.py and add the following line to the top, among the other
import statements:
import sphinx_rtd_theme
In the same file, change the name of the HTML theme:
html_theme = "sphinx_rtd_theme"
Finally, tell Sphinx where the theme is by asking the imported module:
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
Building the docs with
make html should now produce some significantly prettier HTML:
Most themes follow the same installation procedure. Here is a short list. Hopefully, it’ll expand in the future.
Table of Contents
During the
quickstart, a user is asked for the name of the master file (typically
index). The master file usually contains little to no content – rather, it only holds directives.
A reST directive is like a function – a powerful construct of the reST syntax which accepts arguments, options, and a body. The
toctree directive is one of them. It requires an option of
maxdepth, indicating the maximum number of levels in a single menu item (e.g. depth of 2 is
Mainmenu -> Submenu1 but not
-> Submenu2).
After the option goes a blank line, and then a one-per-line list of include files, without extensions. Folders are supported (
subfolder/filetoinclude).
.. Test documentation master file, created by sphinx-quickstart on Sat Aug 8 20:15:44 2015. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Welcome to Test's documentation! ================================ Contents: .. toctree:: :maxdepth: 2 overview quickstart
In the example above, Sphinx prints out
Contents:, followed by an expanded version of the table of contents, i.e. a bulleted list of all headings found in the included documents. Additionally, many authors include extra information at the top of the master file to give a birds-eye overview of the library right then and there. See Guzzle’s.
The
toctree definition in the master file will be automatically mirrored in the left ToC navigation bar.
Let’s grab the
overview and
quickstart files from Guzzle so that we don’t have to write our own. Put them into the root of the docs, and rebuild with
make html.
The documentation should now appear, and the left ToC should be expandable with the little plus icons:
For more information about the
toctree directive and everything it can do to give you truly customized output, see here.
PHP Syntax
If we look at the
quickstart document, we’ll notice that the PHP samples aren’t syntax highlighted. Not surprising, considering Sphinx defaults to Python. Let’s fix this.
In the
source/conf.py file, add the following:
from sphinx.highlighting import lexers from pygments.lexers.web import PhpLexer lexers['php'] = PhpLexer(startinline=True, linenos=1) lexers['php-annotations'] = PhpLexer(startinline=True, linenos=1) primary_domain = 'php'
This imports the PHP lexer and defines certain code block language-hints as specifically parseable by the module. It also sets the default mode of the documentation to PHP, so that if you omit a language declaration on a code block, Sphinx will just assume it’s PHP. E.g., instead of:
.. code-block:: php use myNamespace/MyClass; ...
one can type:
.. code-block:: use myNamespace/MyClass; ...
After adding the above into the configuration file, a rebuild is necessary.
make html
This should produce syntax highlighted PHP sections:
PHP Domain
Additionally, we can install the PHP domain.
Domains are sets of reST roles and directives specific to a programming language, making Sphinx that much more adept at recognizing common concepts and correctly highlighting and interlinking them. The PHP domain, originally developed by Mark Story, can be installed via:
sudo pip install sphinxcontrib-phpdomain
The extension needs to be activated by changing the
extensions line to:
extensions = ["sphinxcontrib.phpdomain"]
Let’s try and make another reST file now, with a described PHP class so we can see how well the PHP domain does its magic. We’ll create
source/class.rst, and add
class to the
index.rst file under all the others. Then, we’ll put the following into
class.rst:
DateTime Class ============== ..
If we rebuild, we get something like this:
Note that without the PHP Domain installed, this screen would be empty.
This doesn’t look too bad, but it could be better. We’ll leave the styling for another day, though.
View Source
It is common for docs to include a link to their source files at the top, so that users can suggest changes, raise issues and send pull requests for improvements if they spot something being out of place.
In the configuration options, there is a flag to show/hide these source links. By default, they’ll lead to
_sources/file where
file is the currently viewed file, and
_sources is a directory inside the
build directory – i.e., all source files are copied to
build/_sources during the build procedure.
We don’t want this, though. We’ll be hosting the docs on Github, and we want sources to lead there, no matter where they are hosted. We can do this by adding HTML context variables into the
conf.py file:
html_context = { "display_github": True, "github_user": "user", "github_repo": project, "github_version": "master", "conf_py_path": "/doc/", "source_suffix": source_suffix, }
Be careful to add this block after the
project definition, else you’ll get an error about the
project variable not being defined. Putting this at the bottom of
conf.py is generally a safe bet.
ReST vs MD
For a quick and dirty intro to reST, see this, but also look into the custom markup added by the Sphinx team – these additional features help you get the best out of your documentation.
ReST has many more features than MD does, so for parity’s sake and an easier transition, here’s a great conversion guide and a one-for-one comparison of features neatly laid out in tabular format.
Adding MD
Sometimes, you may not be willing or able to convert existing MD documentation into reST. That’s okay, Sphinx can dual-wield MD/reST support.
First, we need to install the MD processing module:
sudo pip install recommonmark
We also need to import the parser into
source/conf.py:
from recommonmark.parser import CommonMarkParser
Finally, we need to tell Sphinx to send
.md files to the parser by replacing the previously defined
source_suffix line with:
source_suffix = ['.rst', '.md'] source_parsers = { '.md': CommonMarkParser, }
If we try it out by adding a file
testmd.md into
source with the contents:
# TestMD! We are testing md! ## Second heading! Testing MD files. --- echo "Here is some PHP code!"
Rebuilding should now show the MD content as fully rendered – with one caveat. The syntax won’t be highlighted (not even if we put the code into a PHP code fence). If someone has an idea about why this happens and how to avoid it, please let us know in the comments.
Hosting on ReadTheDocs
Now that our documentation is ready, we can host it online. For the purpose of this demo, we create a repo of sample content at.
To host the docs online, we first add a new project…
The next screen asks for a connection with Github. After importing repositories, we click Create on the one we want to create an RTD project from and confirm some additional values which can all be left at default.
After a build successfully finishes, our docs should be live:
Note: this check used to be required, but RTD seems to have fixed the issue and you can now use the same theme declaration both in the local version, and the live one.
Extensions on RTD
Earlier in this tutorial, we installed the PHP Domain for easier directive-powered PHP class descriptions. This extension is not available on ReadTheDocs, though.
Luckily, ReadTheDocs utilizes virtualenv and can install almost any module you desire. To install custom modules, we need the following:
- a
requirements.txtfile in the repo somewhere
- the path to this file in the
Advanced Settingssection of our project on ReadTheDocs
To get a requirements file, we can just save the output of the command
pip freeze into a file:
pip freeze > requirements.txt
The
freeze command will generate a long list of modules, and some of them might not be installable on ReadTheDocs (Ubuntu specific ones, for example). You’ll have to follow the errors in the build phases and remove them from the file, one by one, until a working
requirements file is reached, or until RTD improve their build report to flag errors more accurately.
For all intents and purposes, a file such as this one should be perfectly fine:
Babel==2.0 CommonMark==0.5.4 Jinja2==2.8 MarkupSafe==0.23 PyYAML==3.11 Pygments==2.0.2 Sphinx==1.3.1 sphinxcontrib-phpdomain==0.1.4 alabaster==0.7.6 argh==0.26.1 argparse==1.2.1 docutils==0.12 html5lib==0.999 meld3==0.6.10 pathtools==0.1.2 pytz==2015.4 recommonmark==0.2.0 six==1.5.2 snowballstemmer==1.2.0 sphinx-autobuild==0.5.2 sphinx-rtd-theme==0.1.8 wheel==0.24.0
After re-running the online build (happens automatically) the documented PHP class should be available, just as it was when we tested locally.
Troubleshooting
ValueError: unknon locale: UTF-8
It’s possible you’ll get the error
ValueError: unknown locale: UTF-8 on OS X after running either
sphinx-quickstart or
make html. If this happens, open the file
~/.bashrc (create it if it doesn’t exist), put in the content:
export LC_ALL=en_US.UTF-8 export LANG=en_US.UTF-8
… save and close it, and then load it with the command
source ~/.bashrc. The error should now no longer show up.
Table of Contents does not display / outdated
Sometimes when adding new files to include into
index.rst, the ToC in the left sidebar will be out of date. For example, clicking on a file that was built before the new file was added will show a ToC with the latest file’s heading missing. The cause of this is unknown but it’s easily fixed by forcing a full rebuild:
rm -rf build/ make html
The first command removes the build folder’s contents completely, forcing Sphinx to regenerate everything on
make html.
Build Failed
If your builds fail and you can’t discern the reason, explicitly defining the location of
conf.py under
Advanced settings in
Admin sometimes helps.
Conclusion
In this tutorial, we learned how we can quickly set up a Sphinx documentation workflow for PHP projects in an isolated VM, so that the installations don’t interfere with our host operating system. We installed the PHP highlighter, configured the table of contents, installed a custom theme, went through some basic ReST syntax and hosted our docs on ReadTheDocs. In a followup post, we’ll focus on styling, documentation versions and localization.
Do you use another documentation writing workflow? If so, which and why? Any other Sphinx/RTD tips? Let us know! | https://www.sitepoint.com/using-sphinx-for-php-project-documentation/ | CC-MAIN-2018-26 | refinedweb | 2,485 | 63.49 |
Opened 4 years ago
Closed 3 years ago
#2061 closed defect (wontfix)
OSSIM packages missing from Bionic
Description
The following packages are missing from Bionic and need to be packaged in OSGeoLive ppa: ossim-plugins ossim-planet-qt ossim-planet ossim-gui
I am going to disable those packages from the ossim install script until they become available.
Cheers, Angelos
Change history (3)
comment:1 Changed 4 years ago by
comment:2 Changed 3 years ago by
Only ossim-core kept in 12.0
comment:3 Changed 3 years ago by
Note: See TracTickets for help on using tickets.
There are still git repositories in the Debian GIS namespace on Salsa for these projects, but these have not been updated for OSSIM >= 2.0.0.
I have given up on getting the rest of the ossim packages into Debian and will no longer do any work on those repositories. Once OTB no longer requires OSSIM I may even remove the ossim package from Debian and stop maintaining that as well. For the sake of OSGeoLive I may keep the ossim package in Debian, but will reject any packages that depend on ossim, especially when they are not maintained by ossimlabs. The experience with OSSIM and OTB has been very painful, and shows that OSSIM is not a good project to build upon. | https://trac.osgeo.org/osgeolive/ticket/2061 | CC-MAIN-2022-05 | refinedweb | 222 | 53.24 |
37 Reasons to Love Haskell (playing off the Ruby article)
Here’s an article on reasons to like Ruby. It’s actually very well written and persuasive. So I took it as a challenge. How well would Haskell fare when compared against this specific set of criteria, changing the points as little as possible? The meanings will inevitably shift a bit, and readers will know that Haskell has its own set of advantages, of course.
The point of this exercise, though, is to see how Haskell does in the benefits mentioned for Ruby in that particular article. It’s not a Haskell vs. Ruby; just a shameless theft of a set of criteria. Should be fun.
- It’s object oriented. Although it wasn’t designed for it, this paper points out that Haskell scores pretty well in support for object oriented programming features. It’s not entirely clear that this is a good way to write code; but if it isn’t, it’s not because Haskell doesn’t provide the features or support you need to make it work.
- It’s pure. Haskell chooses to be pure in the functional sense, rather than the object-oriented sense. The same ideas remain, though; there are no messy bits that don’t fit into the prevailing mode of thought.
- It’s a dynamic language. Yep, you heard right. Projects like Yi and hs-plugins and lambdabot prove that it’s quite possible to write programs with runtime code loading and manipulation and other dynamic features in Haskell. Indeed, the type system gives you a level of assurance that you haven’t made big mistakes along the way; something that’s quite likely when assembling code at runtime from many small pieces.
- It’s an interpreted language. Honestly, I’m having trouble understanding this point in the original; the Ruby article doesn’t ever say why this is a good thing. It helps, of course, in that it lets you work with the program interactively, easily trying out bits at a time rather than having to write a new main method to do any testing. Unlike Ruby, Haskell can also be compiled to handle any performance concerns; combining the advantages of both worlds.
- It understands regular expressions. The
Text.Regexmodule in the Haskell standard library contains functions to do regular expression stuff. It even uses Haskell’s operator mechanism to define operators like
=~to look more like other languages sometimes.
- It’s multi-platform. GHC, the major Haskell compiler, exists for Linux (many processors and distributions), Other UNIX-like systems (FreeBSD, NetBSD, OpenBSD, Solaris), many Windows variants (95 through Vista), and MacOS (Intel and PowerPC). Nobody cares about MS-DOS :).
- It’s derivative. Haskell as a language borrows many of the best features from other languages, especially ML (its type system), and Miranda (its evaluation order). It borrows libraries from lots of places. There are Haskell bindings for wxWindows and GTK+, for example. And yes, printf, too.
- It’s innovative. If there’s any single widely used language today that has a claim to being truly innovative, it’s Haskell. Haskell is practically a research playground, while still managing to be a practical language. New language features have been the bread and butter of its progress: type classes, monadic I/O and computational environments; more recently: GADTs, associated types. All of these are rare in other languages.
- It’s a Very High-Level Language (VHLL). I actually doubt this term has any kind of meaning at all; but if it does, then Haskell has quite a good claim to fitting its meaning. To the extent that the high/low level language distinction is meaningful, it’s about the ability of software to transform the program from something that’s useful to programmers into something that’s efficient and executable on machines. More work is going on here in Haskell (e.g., see the Data Parallel Haskell project) than in any other language I’m aware of.
- It has a smart garbage collector. I’m not sure this should even be worth a mention on the list. Languages without automatic memory management are simply not candidates for writing serious application level code in the modern world. For what it’s worth, though, yes Haskell does it.
- It’s a scripting language. Despite having virtually none of the properties that conventional wisdom associates with scripting languages, Haskell is quite usable for scripting. Function composition and the monadic
>>=operator provide the ability to combine pieces every bit as tersely as pipes; type inference eliminates the cost of static types. This page talks about using Haskell for scripting, and this one describes the
-eoption to ghc, which lets you run Haskell code directly from the command line, and gives an example of using it.
- It’s versatile. As a general purpose programming language, you can do as much with Haskell as practically anything. Scripting is simple, as described above. Haskell also has a very advanced application server letting you write complex web applications; has been used to write a very nice window manager, is widely used in financial and other industries, and has been used to implement other languages — itself and the very first implementation of Perl 6.
- It’s thread-capable. And more! There are basically two languages worth looking at for modern concurrent programming: Erlang and Haskell. Haskell implements not just multithreading, but three different higher-level abstractions on top of multithreading: Software Transactional Memory, Data Parallel Haskell, and basic Parallel Haskell. All three provide tools to make it easier to build correct threaded programs. (Of course, Haskell offers less interesting traditional concurrency abstractions as well.)
- It’s open-source. All Haskell implementations have the source code fully available. The major people involved hang out and regularly respond by both mailing lists (newsgroup interface available, too) and even IRC channels. Haskell has one of the most famously open and friendly communities around, so you’ll fit right in!
- It’s intuitive. Haskell doesn’t have a shallow learning curve, but that’s because you’re really learning things; not just learning a new syntax for the same programming you’ve done for years. Haskell stays out of the way and lets your mind be expanded by the concepts you’re seeing instead of by the arbitrary choices of the language implementors. That’s about as intuitive as you can really ask for.
- It handles exceptions well. Unlike practically any other language, Haskell does the right thing for computations that have exceptional cases. When working in a purely functional way, it gives you simple types like
Eitherand
Maybethat help to express those exceptional conditions in a functional manner. When you’re working in an imperative way (e.g., in the
IOmonad, or any other
MonadPlusenvironment) it provides exception handling, since that’s the right choice for the imperative style.
- It has an advanced Array type. You don’t have to declare types at compile-time, or allocate memory in advance, or keep up with their length, or worry about indexing out of bounds (unless you explicitly choose to use unsafe operations). Unlike many other languages, though, arrays aren’t syntactically preferred. It’s just as easy to use a map, list, sequence, or many other things depending on what best suits your application.
- It’s extensible. You can write external libraries in Haskell or in C. You can declare new instances (in other words add new behaviors to existing type signatures; gives you the core benefit of modifying existing classes) on the fly. You can also add new operators (and I do mean new operators, not rehashing a very limited set of operators in error-prone ways like you do in C++) and use monads to define whole new computational environments if you like.
- It encourages literate programming. Even if you aren’t really doing literate programming, you can embed comments in your code which the Haddock tool will extract and manipulate. You can look at type information even if it was never documented at all, simply by asking GHCi or Hugs for the type. If you’re really into literate programming, though, major Haskell compilers understand literate source files that default to comments, and only include code delimited in specific ways (birdtracks, or latex begin/end commands). This lets you, quite literally, compile the same document into either executable code or a details PDF user manual, without even having to do anything unusual to your latex source!
- It uses punctuation and capitalization creatively. Haskell enforces a punctuation scheme that makes the meaning of code clearer. Types, classes, modules, and data constructors begin with upper-case letters. Variables (including type variables) start with lower-case letters. Fortunately, though, a lot of very important information is stored in the type — including, for example, whether a function result is Boolean or not, and even whether it destructively updates something! This information is available at your fingertips either by reading documentation or just by typing
:tat the GHCi command prompt, so you don’t have to repeat it over and over again in your code.
- (Some) reserved words aren’t. Haskell has very few reserved words compared to most other languages, because the language itself is conceptually simpler (not to be confused with easier to learn, as mentioned earlier). A few keywords aren’t reserved, but I can’t pretend that’s a good thing.
- It allows iterators. More generally, higher-order functions are useful in a large number of situations. When combined with lazy evaluation, there are a plethora of powerful techniques for handling collections of data in Haskell. Iterators loosely correspond to maps and folds, which are well supported and widely used.
- It has safety and security features. Haskell provides as advanced a set of fool-proof safety and security features as is found in basically any language. Its type system allows you to express security constraints (even rather generalized ones) in embedded domain-specific constructs that can be enforced by the compiler, so you never even attempt to run unsafe code! If a new kind of security issue arises, it’s generally possible to use the powerful type system to solve the problem without waiting for language support for something like tainted data.
- It (really) has no pointers. Unlike the trivial sense in which languages like Java are claimed to have no pointers by restricting the most dangerous operations on them, Haskell really has no pointers. (It’s worth noting that the Java language specification wasn’t fooled into the “no pointers” myth, as a peek as the second sentence of 4.3.1 in the 3rd edition spec will make clear.) In a pure functional language, there’s no difference between using pointers and actual values, so the compiler can make the decision based on performance concerns, rather than exposing the distinction between pointers and data to the programmer. This is part of Haskell’s being a higher level language.
- It pays attention to detail. This is one of those Orwellian newspeak things where the Ruby article I’m working from claims one thing (attention to detail) and then describes the opposite (extreme sloppiness). Haskell follows the real attention to detail picture here, though you can define your own synonyms (for values or types) if you like.
- It has a flexible syntax. Haskell has a truly flexible syntax in ways that Ruby can’t dream of. It allows programmers to embed domain-specific languages that are significantly and fundamentally different from the imperative model of Ruby. Parsers can be written by embedding context-free grammars right into the source code. Logic processing can be added by embedding Prolog-style inference rules into the language. An infinite supply of operators are available to facilitate these languages. Higher-order functions and monadic environments are available to make them work well.
- It has a rich set of libraries. Available libraries is one place where Haskell is way above practically all languages with similar community sizes, and in the ballpark of a lot of mainstream languages. There are libraries for practically anything, including several for GUI programming, web programming, transactional persistence, and plenty else besides. I wouldn’t want to try to list them all, so here.
- It has a debugger. Okay, so this is the biggest stretch yet. The development version of GHC (to become GHC 6.8) actually does have a debugger; but the debugging tools for released versions of Haskell are sketchy at best. This is improving.
- It can be used interactively. This is true in the sense that GHCi exists. It’s less true in the sense that someone can use it as their login shell. Yes, it’s possible; no, it’s probably not a good idea. This is interesting, though.
- It is concise. Haskell code is about comparable to many scripting languages. Sometimes it’s a little longer. Sometimes (generally when one can make good use of powerful abstraction techniques like monads and higher-order functions ina program of large enough size that it makes a difference) it’s a lot shorter. The xmonad window manager is about 500 lines of code.
- It is expression-oriented. As a purely functional language, of course it’s expression-oriented.
- It is laced with syntactic sugar. Haskell’s got all sorts of nice syntax in ways that really matter quite a lot: custom operators and fixity declarations, do blocks for monadic computation, etc.
- It has operator overloading. In fact, operator overloading in Haskell is far nicer than in most other languages, because you can make up your own operators. No more confusing bit shifting with I/O. Within a given context, an operator means a specific thing; but at the same time, it can apply to your custom types (ah, the magic of type classes) and you can make up your own different operators to do different things concisely. Reading a journal paper that uses dotted relational operators to mean something? Great! Use them in Haskell, too.
- It has infinite-precision integer arithmetic. It also has infinite-precision rational arithmetic, and fixed-precision types in case you want those, too. And you can build your own types somewhat concisely.
- It has an exponentiation operator. Actually, it has three! This is because there’s a difference in what the three of them mean in some cases, so you get your choice.
- It has powerful string handling. It does, but more importantly it also has powerful list handling, and these list handling routines are all usable on strings.
- It has few exceptions to its rules. Haskell’s semantics are basically the lambda calculus. The whole language is remarkably consistent and behaves in consistent ways. The semantics are even simpler and easier to understand than any imperative language, which has to do with the distinction between values and variables; or eager languages, which have to deal with the question of evaluation order. This makes Haskell programs very easy to understand and manipulate safely.
So there you have it.
28 CommentsLeave a Comment
Trackbacks
- Top Posts « WordPress.com
- Why Haskell is beyond ready for Prime Time « Integer Overflow
- My Functional Programming Intro « More than a Tweet
- Func. Prog. Lang. Ref. « More than a Tweet
- 10 programming languages worth checking out | IdeasCart
- Functional Programming Concepts for the Lay Programmer – Part 1 | Stephan Sokolow's Blog
- My Thoughts On Haskell « Coding code
Pedantic note: remember that “it’s” expands to “it is”, and you use “it’s” very often where you meant the possessive “its”.
Fixed, thanks
Now go make 37 arguments in form of 37 code snippets :)
Sounds like a challenge :)
i was inclined to make a Lisp 37 list, but then i realizes it’s stupid, since Lisp has anything you would put in any other language’s list.
besides, lists are stupid, anyway.
Main problem is the amount of syntax noise a language has.
Take perl. It has a lot of syntax noise.
With haskell and ruby, it too much depends on the coder in question.
Python has an advantage here. Although it is worse than ruby ;) and probably haskell as well,
its uniform militaristic attitude give it a very good advantage there.
3. All implementations of Haskell are recognisably interpreters. (Some of them, notably HBC and GHC, use dynamic specialization techniques to achieve very good performance, but they’re still technically interpreters.)
6. Hugs supports DOS. Nhc98 supports palm-pilots. Yhc (with Golubovsky’s new backend) supports internet phones.
10. Forget smart, it’s pretty much the most advanced in the world! Memory allocation is as fast as sequential writes; OOM checks are coalesced automatically; generations and parallelism are supported, with incrementality in an unmerged branch.
24. But if you really, truly need them, you can get a safe (type-safe and unable to damage things outside of the current computation) version from Control.Monad.ST.
28. Haskell supports Forth-style interactive experimenting, with the added benefit that purity makes it unnecessary to entire commands in the right order, reset states, or anything like that. It isn’t great, but it’s much better than most non-C languages…
” … besides, lists are stupid, anyway.”
Said the Lisp fan.
:)
(BTW, you’re right about the Lisp part)
Every time I read an article like this I get the urge to try haskell again. But everytime I try haskell it’s quite confusing.
The whole, ‘we don’t have state except that we do but we call it something else’
GUIs have state, there is no getting around that.
I would like to note that Haskell has a very powerful and fast generalized exponentiation operator for integral exponents. If you define a new Num-instance (e.g. for modular arithmetic or more exotic things like elliptic curve groups), the exponentiation operator will be specialized to it automagically, and works just the way you would expect.
Jessta: Haskell’s state management is just broken down to a little finer level than traditional imperative variable storage. You can program in that same style by using bind operators, but you also have the option of abstracting the state you keep track of to things like nondeterminism, while being assured that different states will compose sanely.
About point 28: Linux doesn’t have an anti-virus program. Haskell doesn’t have a debugger.
#27 is false.
It’s really not fair to say that “Libraries” are a point in haskell’s favor.
Most of the packages on Hackage will not build successfully on a recent GHC system, and there is no QA system for packages outside of GHC boot/base package set (core data structures).
Also, despite several attempts and a work-in-progress (LambdaVM) there is not yet a functioning, easy-to-install bridge to Java/CLI or other langauges (just a DIY FFI to C), so the huge multiverse of Java/Python/Perl libraries are not effectively available to Haskell.
Haskell can’t really be said to have real-world library support.
Amazing you manage to say that object oriented is the first reason to love Haskell.
Sounds like everything is object oriented these days.
What about trying to figure out what it means first ?
I cannot understand why one uses Haskell, instead of Clean. I would appreciate to hear arguments for using Haskell, when one can use Clean. Since these two languages are very similar, it is quite easy to convert programs from one to another. I have seen huge number crunching programs in Clean (for instance, programs to model lightning, by Lucian Lima and Lucian Martins) performing better and much faster in Clean than in MatLab with a C toolbox. Since Haskell has not efficient implementation of arrays, it is impossible to port such a program to Haskell. Thus, here is my first complain about Haskell: It does not provide efficient array processing.
Clean compiler (thanks to unique types) catch most input/output errors; Haskell will catch the errors only at runtime, and I do not like programs from my company crashing at the client’s machine (who does?); for instance, Haskell compiles the program below without a single warning; Clean issues an error message, and fails to compile.
module Main
where
import IO
main = do fromHandle <- getAndOpenFile “Copy from: ” ReadMode
toHandle <- getAndOpenFile “Copy to: ” WriteMode
contents IOMode -> IO Handle
getAndOpenFile prompt mode =
do putStr prompt
name do putStrLn (“Cannot open “++ name ++ “\n”)
getAndOpenFile prompt mode)
> 24. It (really) has no pointers.
what about Foreign.Ptr then there’s Foreign.StablePtr and System.Memory.Weak
>> 24. It (really) has no pointers.
> what about Foreign.Ptr then there’s Foreign.StablePtr and System.Memory.Weak
What part of “Foreign” you don’t understand?
Python is great, but hard to take advantage of GPU and go
parallel.
Any bridge between Python and Haskell?
You pretty much listed all the superficial criteria that a programmer tries to find in a language. So, surely everybody should love Haskell :-) I like Haskell quite a lot, but as a general rule, don’t fall in love anything because love makes you blind and don’t see areas that need improvements. Of course, women are an exception :-)
“Languages without automatic memory management are simply not candidates for writing serious application level code in the modern world.”
except for games. i just felt that needed saying.
36.) This is the only one where I really disagree. Haskell’s strings are just clunky when it comes to normal language patterns. For example assembling an error message from a template a various bits of data inevitably becomes a tangle of “\”"++var ++”\”\n”++ etc… I desperately miss heredocs and interpreted strings in Haskell. | https://cdsmith.wordpress.com/2007/07/29/37-reasons-to-love-haskell-playing-off-the-ruby-article/?like=1&source=post_flair&_wpnonce=27cac980df | CC-MAIN-2014-10 | refinedweb | 3,603 | 55.64 |
It’s relatively easy to work with AWS Lambda, but preparing and deploying your code is not as straightforward as it could be. Here’s how to use Ruby_Lambda, a new tool that cuts repetition and makes life easier.
AWS Lambda is a service that lets you build complex microservice-like web applications without building or maintaining any infrastructure.The service used to only support languages such as Python, Java and Javascript, but in November 2018 Ruby support was added too.
Recently, while building Twitter bots and other apps, I’ve found myself writing the same command line configurations over and over. It’s boring and frustrating and potentially a barrier to entry for people just starting out.
I set out to create a tool that would remove the repetition and make things easier to figure out. The result is Ruby-Lambda, a tool I created that helps develop, locally test, and deploy serverless Ruby apps to AWS Lambda.
This tool cuts out a lot of the manual work – I find I’m able to focus on solving the problem rather than writing the same code repeatedly.
Looking for expert guidance or developers to work on your project? We love working on existing codebases – let’s chat.
Here’s how to make Ruby_Lambda work for you.
How to install the Ruby_Lambda tool
To install the gem, simply run this in the terminal:
$ gem install ruby_lambda
If you’d like to check out the code, here’s the Github Repo.
How to use the tool
Scaffolding your app structure
$ ruby-lambda init
This command initialises the
.gitignore,
config.yml,
env,
event.json,
lambda_function.rb,
Gemfile,
.ruby-version files.
event.jsonis where you keep mock data that will be passed to your function when the
executecommand runs.
config.ymlcontains some default configuration for your function.
envwill be renamed to
.envafter the init command runs, and will contain
AWS_ACCESS_KEYand
AWS_SECRET_ACCESS_KEY. You will need these to be able to deploy your function.
Please have a read of the
config.yml and update any of the default configurations to better suit your function.
Running a function locally
$ ruby-lambda execute
This command is used to invoke/run the function locally.
Options: -c, --config=CONFIG # Default: config.yml -H, --handler=HANDLER
Example use cases:
$ ruby-lambda execute -c=config.yml
$ ruby-lambda execute -H=lambda_function.handler
The handler function is the function AWS Lambda will invoke/run in response to an event. AWS Lambda uses the event argument to pass in event data to the handler. If the
handler flag is passed with execute, this will take precedence over the handler function defined within the
config.yml.
def handler(event:, context:) { statusCode: 200, body: JSON.generate('Hello from Ruby Lambda') } end
The
execute command retrieves the values stored in the
event.json file and passes them to your handler function.
Creating a deployable zip
$ ruby-lambda build
This command creates a zipped file ready to be published on Lambda. A new folder called ‘builds’ will be created and all zips will be stored in there.
Options: -n, --native-extensions flag to pass build gems with native extensions -q, --quiet
A note on native extensions
This article from Pat Shaughnessy covers what native extensions are and how they work.
Basically, building a native extension means compiling C code into the platform and environment specific machine language code. So, if you run bundle install — deployment on your local machine running MacOS, the C code is compiled for MacOS and stored in vendor/bundle. As AWS lambda is a Ubuntu machine, not MacOs, it won’t work.
To build gems with native extensions use
-n flag when you run this command. Doing so will run a dockerized bundle with deployment flag within a Lambda image – this will download the gems to the local directory instead of to the local systems Ruby directory, using the same OS environment as Lambda so that it installs the correct native extensions. This ensures that all our dependencies are included in the function deployment package and the correct native extensions will be called.
Deploying and publishing your function
$ ruby-lambda deploy
The deploy command will either bundle install your project and package it in a zip, or accept a zip file passed to it, then upload it to AWS Lambda.
Options: -n, --native-extensions flag to pass build gems with native extensions -c, --config=CONFIG path to the config file, defalt is config.yml -p, --publish if the function should be published, default is true -u, --update default to true, update the function -z, --zip-file=ZIP_FILE path to zip file to create or update your function -q, --quiet
By default the
deploy command will attempt to create the function with your config – if the function already exists an error will be thrown. To update an existing function simply pass the
-u flag.. AWS recommends that you publish a version at the same time that you create your Lambda function, or update your Lambda function code. So by default, all deploy will be versioned – if you do not want this, use the
-p=false flag.
When you run the deploy command, it will prepare the latest state of your function and zip it up, basically running the build command. If you have already built your zip, use the
-z flag to set the path to it.
Want to contribute?
I hope this tool proves useful for you – it’s definitely made my work a lot easier.
Keep an eye out for future posts on how to use the tool with specific technologies, including Twitter bots, basic machine learning and custom APIs.
New ideas, features, bug reports and pull requests are welcome on GitHub. Check the read me for the development guide. | https://www.cookieshq.co.uk/posts/a-useful-tool-for-building-serverless-ruby-apps-with-aws-lambda | CC-MAIN-2021-10 | refinedweb | 957 | 62.88 |
python-jtl 0.1.0
Python module for parsing JMeter test results
python-jtl provides a module called jtl which can be useful for parsing JMeter results (so called JTL files). JTL files can be either of XML or CSV (with or without the fieldnames) file format. jtl module supports both XML and CSV (with and without the fieldnames) file formats.
The typical usage in general looks like this:
from jtl import create_parser parser = create_parser('test_results.xml') for sample in parser.itersamples(): ...
Features
- Parses JMeter results (JTL) into the iterator of the results samples;
- Supports both XML and CSV (with and without fieldnames) file formats;
- Supports custom delimiter character (CSV only);
- Stores results samples in named tuples;
- Uses iterative XML parsing for better performance;
- Automatically detects the file format (XML or CSV).
- Downloads (All Versions):
- 3 downloads in the last day
- 22 downloads in the last week
- 78 downloads in the last month
- Author: Victor Klepikovskiy
- License: LICENSE.txt
- Package Index Owner: vklepikovskiy
- DOAP record: python-jtl-0.1.0.xml | https://pypi.python.org/pypi/python-jtl/0.1.0 | CC-MAIN-2015-14 | refinedweb | 171 | 51.38 |
Single thread model in Struts - Struts
be thread-safe because only one instance of a class handles all the requests for that Action. The singleton strategy restricts to Struts 1 Actions and requires extra care to make the action resources thread safe or synchronized while developing
Is ActionServlet thread safe ?
Is ActionServlet thread safe ? Is ActionServlet thread safe
Struts Action Class
Struts Action Class What happens if we do not write execute() in Action class
Struts - Struts
Struts Is Action class is thread safe in struts? if yes, how it is thread safe? if no, how to make it thread safe? Please give me with good... safe. You can make it thread safe by using only local variables, not instance
JSP Thread Safe
JSP Thread Safe
JSP Thread Safe is used to send only one...;
<h2> Jsp Thread Safe </h2>
By using the tag <<b>
Thread
methods of thread class.
Java Create Thread
There are two main ways of creating a thread. The first is to extend the Thread class and the second... the Thread class and the second is to implement the Runnable interface.
Please
Struts Tutorial
:
Struts provides the POJO based actions.
Thread safe.
Struts has support... the
information to them.
Struts Controller Component : In Controller, Action class...In this section we will discuss about Struts.
This tutorial will contain
Thread
.
Java Thread Example
class ThreadExample{
static int...Thread Write a Java program to create three theads. Each thread should produce the sum of 1 to 10, 11 to 20 and 21to 30 respectively. Main thread
how to forward select query result to jsp page using struts action class
how to forward select query result to jsp page using struts action class how to forward select query result to jsp page using struts action
Struts
is not only thread-safe but thread-dependent.
Struts2 tag libraries provide...Struts Why struts rather than other frame works?
Struts... by the application.
There are several advantages of Struts that makes it popular
thread class - Java Beginners
the following code:
class Incrementor extends Thread{
int cnt1 = 0;
boolean...;
}
}
class Decrementor extends Thread{
int cnt2 = 100;
int cnt1...thread class Create 2 Thread classes.One Thread is Incrementor
User Registration Action Class and DAO code
User Registration Action Class and DAO code... to write code for action class and code for performing database operations (saving data into database).
Developing Action Class
The Action Class
action Servlet - Struts
action Servlet What is the difference between ActionServlet ans normal servlet?
And why actionServlet is required
attribute in action tag - Java Beginners
attribute in action tag I'm just a beginner to struts.
The name tag(name="bookListForm") is used to define the form used with the action class. But i`m not clear about the attribute tag(attribute
Single thread model - Struts
Single thread model Hi Friedns , thank u in advance
1)I need sample code to find and remove duplicates in
arraylist and hashmap.
2) In struts, ow to implement singlthread model and threadsafe">
Java Thread class
Java Thread Class is a piece of the program execution
Java has...
It is created by extending the Thread class or implementing
Runnable
interface
Java Thread Class Example
public class thread1 extends Thread {
@Override>
Create Thread by Extending Thread
by extending Thread class in java.
Extending Thread :
You can create thread by extending Thread class and then by creating instance
of that class you can... Thread class to
create thread.
class ThreadClass extends Thread
Thread in java
Thread in java which method will defined in thread class... the following Action class by implementing Action
interface.
TestAction.java...;roseindia" extends="struts-default">
<action name="
Action and ActionSupport
Action and ActionSupport Difference between Action and ActionSupport.... The developer implements this interface of accessing string field in action methods. The com.opensymphony.xwork2.ActionSupport is class . It is used
Struts - Struts
Struts - Struts
*;
public class UserRegisterForm extends ActionForm{
private String action="add...();
return errors;
}
public String getAction() {
return action;
}
public void setAction(String action) {
this.action = action;
}
public
Action Listeners
Action Listeners Please, could someone help me with how to use action listeners
I am creating a gui with four buttons. I will like to know how to apply the action listener to these four buttons.
Hello Friend,
Try
Struts - Struts
javax.servlet.http.HttpServletRequest;
import org.apache.struts.action.*;
public class UserRegisterForm extends ActionForm{
private String action="add";
private...() {
return action;
}
public void setAction(String action
Developing Login Action Class
Developing Login Action Class
... for login action class and database code for validating the user against database.
Developing Login Action Class
In any.0 - Struts
struts 2.0 I have written print statement in action class. It is printing data 2 times. Why it is happening
javascript call action class method instruts
javascript call action class method instruts in struts2 onchange event call a method in Actionclass with selected value as parameter how can i do
Thread concept
in advance friends. Happy new year!!!!!
class Newthread3 implements Runnable{
Thread t;
String name;
Newthread3(String threadname){
name=threadname;
t=new Thread...Thread concept Everytime when i run a multithread program it gives :Thread Methods
Java :Thread Methods
This section explains methods of Thread class.
Thread Methods :
Thread class provides many method to handle different thread...().
Example :
class RunnableThread implements Runnable {
Thread thread | http://roseindia.net/tutorialhelp/comment/22917 | CC-MAIN-2015-48 | refinedweb | 882 | 66.13 |
Removing stop words with NLTK in Python
The process of converting data to something a computer can understand is referred to as pre-processing. One of the major forms of pre-processing is to filter out useless data. In natural language processing, useless words (data), are referred to as stop words.
What are Stop words?. NLTK(Natural Language Toolkit) in python has a list of stopwords stored in 16 different languages. You can find them in the nltk_data directory. home/pratima/nltk_data/corpora/stopwords is the directory address.(Do not forget to change your home directory name)
To check the list of stopwords you can type the following commands in the python shell.
import nltk from nltk.corpus import stopwords print(stopwords.words('english'))
{‘ourselves’, ‘hers’, ‘between’, ‘yourself’, ‘but’, ‘again’, ‘there’, ‘about’, ‘once’, ‘during’, ‘out’, ‘very’, ‘having’, ‘with’, ‘they’, ‘own’, ‘an’, ‘be’, ‘some’, ‘for’, ‘do’, ‘its’, ‘yours’, ‘such’, ‘into’, ‘of’, ‘most’, ‘itself’, ‘other’, ‘off’, ‘is’, ‘s’, ‘am’, ‘or’, ‘who’, ‘as’, ‘from’, ‘him’, ‘each’, ‘the’, ‘themselves’, ‘until’, ‘below’, ‘are’, ‘we’, ‘these’, ‘your’, ‘his’, ‘through’, ‘don’, ‘nor’, ‘me’, ‘were’, ‘her’, ‘more’, ‘himself’, ‘this’, ‘down’, ‘should’, ‘our’, ‘their’, ‘while’, ‘above’, ‘both’, ‘up’, ‘to’, ‘ours’, ‘had’, ‘she’, ‘all’, ‘no’, ‘when’, ‘at’, ‘any’, ‘before’, ‘them’, ‘same’, ‘and’, ‘been’, ‘have’, ‘in’, ‘will’, ‘on’, ‘does’, ‘yourselves’, ‘then’, ‘that’, ‘because’, ‘what’, ‘over’, ‘why’, ‘so’, ‘can’, ‘did’, ‘not’, ‘now’, ‘under’, ‘he’, ‘you’, ‘herself’, ‘has’, ‘just’, ‘where’, ‘too’, ‘only’, ‘myself’, ‘which’, ‘those’, ‘i’, ‘after’, ‘few’, ‘whom’, ‘t’, ‘being’, ‘if’, ‘theirs’, ‘my’, ‘against’, ‘a’, ‘by’, ‘doing’, ‘it’, ‘how’, ‘further’, ‘was’, ‘here’, ‘than’}
Note: You can even modify the list by adding words of your choice in the english .txt. file in the stopwords directory.
Removing stop words with NLTK
The following program removes stop words from a piece of text:
Python3
Output:
['This', 'is', 'a', 'sample', 'sentence', ',', 'showing', 'off', 'the', 'stop', 'words', 'filtration', '.'] ['This', 'sample', 'sentence', ',', 'showing', 'stop', 'words', 'filtration', '.']
Performing the Stopwords operations in a file
In the code below, text.txt is the original input file in which stopwords are to be removed. filteredtext.txt is the output file. It can be done using following code:
Python3
This is how we are making our processed content more efficient by removing words that do not contribute to any future operations.
This article is contributed by Pratima Upadhy. | https://www.geeksforgeeks.org/removing-stop-words-nltk-python/ | CC-MAIN-2022-27 | refinedweb | 377 | 69.01 |
python - when ie is controlled with selenium, the code after the instruction that opens the page with driverget ("url"
I am trying to control the web with Selenium in Python.
The program is as follows.
from selenium import webdriver if __name__ == '__main__': print ("0") # Start IE with ChromeDriver path as argument driver = webdriver.Ie (r "C: \ Selenium \ IEDriverServer.exe") print ("1") # Transition to specified URL driver.get ("http: //xxxx.htm") print ("2") print ("3")
It will run until you start IE and see http: //xxxx.htm.
Only 0 and 1 are printed when the print statement is executed. Nothing before print ("2") is executed.
This program worked fine on windows7.
On Windows10, programs below driver.get ("http: //xxxx.htm") are not executed.
What should I do? If i could teach me, it'll helps a lot.
- Answer # 1
Related articles
- python 3x - error after changing to csv file
- python - the bet365 site cannot be scraped with selenium if you are familiar with it, could you please tell me?
- python delete files after uploading
- python 3x - [python] [selenium] i get an error when i put a variable in find_element_by_xpath
- [python] selenium i want to operate the window that appears by clicking
- python - i want to improve the waveform after fft
- python - i get an error with cv2_version_ after installing opencv
- python - ec2 selenium garbled characters
- python - [selenium] about element specification in find_element_by_css_selector
- data scraped by selenium using python cannot be stored in the list just by the for statement
- in python selenium, i want to get the element of the button of the site from class name, but it fails with nosuchelementexceptio
- python - i want to extract the characters before and after the delimiter position
- python - i want to display data scraped by selenium in a table using a for statement in a django template
- python - unable to get element with selenium
- python - unable to import yaml after installing pyyaml
- python - "view page source" in selenium
- i want to pass a value to js after opening a link with c # or python
- i can't successfully store the data scraped by selenium using python in the list
- python selenium get dynamic class name during scraping
- python - googlemaps works only the first time after atom starts modulenotfounderror: no module named'googlemaps'
Related questions
-
Is this not possible? | https://www.tutorialfor.com/questions-150700.htm | CC-MAIN-2020-50 | refinedweb | 385 | 57.5 |
At the MAX conference in Chicago, we caught up with Scott Fegette, technical product manager for Dreamweaver to discuss the ins and outs of the upcoming Spry release.
In order to watch video content you need to enable javascript and install Flash player version 8 or above.
Builder AU: What's new in Spry 1.6?
Fegette: A lot of changes and a few new features too. The focus this time for 1.6 was really about shoring up our story and raising our game in regards to standards and accessibility and best practises in general.
There was some early criticism from some of the standards organisations about the decision early on with Spry to go on with custom attributes for a lot of the model.
Although we're technically following the letter of the standards, we had namespace for our custom attributes and we're kind of doing things correctly, the movement of unobtrusive JavaScript is gaining a lot of steam in best practise circles right now and obviously putting custom attributes directly in your mark-up starts to litter your content model along with your behaviour layer and we wanted to start addressing things like that specifically.
So in 1.6 one of the new features is the element selector, similar to some other frameworks, prototype for example, you can have double dollar sign notation to actually select individual element selectors within your mark-up and attach Spry elements or anything else unobtrusively at runtime so you can move all your behaviour layer off into JavaScript.
A lot of other things, we now provide compressed and packed versions of the files. A good example is the SpryData.js went from 128k to around 41k, huge savings. There was a little bit of concern about the size of the files so that will help shore that up a bit.
Of course if you're using compressed files you do have to deal with the decompression hit on the browser side but most people would prefer that trade off. The real story here is that you have your options, you can use the uncompressed files or you can use either a packed or a packed and compressed file also and we included all of these in the pack.
Let's see, some other things in Spry 1.6, a lot of it has been documentation too, we realised that people downloaded Spry and were using it at face value but weren't really diving deeply into what you could do with the framework so we spent a lot of time ... getting a lot of documentation around best practises, ways that you could create really gracefully degrading pages using Spry, ways that you could use Spry unobtrusively [to] really separate content from behaviour.
So the documentation, I think, is kind of the unsung feature of 1.6, there's just so much information there that helps you take Spry to the next level, regardless of how you got into it. If you downloaded it as a framework, if you're using it in Dreamweaver, it's just a lot of great information.
Dreamweaver was another point, we're always had this interesting delta situation between Dreamweaver and Spry in that Spry was really a lambs project -- it was launched a good year before Dreamweaver CS3 launched.
So when CS3 came out we actually had kind of a pre-release version of 1.6, 1.5 that we wanted to get out but we were torn, we weren't really able to update Dreamweaver with it yet, we wanted to get more feedback so as soon as we pushed it out we got a lot of criticism on why couldn't users update Dreamweaver directly.
That kind of exasperated the difference between our shipping product and our labs product of Spry. So with 1.6 we also have an updater that you can either download separately or as part of the framework package that'll basically go through and update all your framework files and give you the option to update your local sites directly.
And we did spend a lot of time testing backwards compatibility too, so at least we're hoping, knock wood, that you won't have any compatibility problems just by updating to 1.6. Your existing sites won't break but you'll get all these additional features too which will really help you move to the next level.
What about users that are using a previous version of Dreamweaver, is there any option for them?
No, the Spry integration was really part of CS3 so that's kind of the starting point for Spry support, on the flip side though I personally like to advocate diving in and looking under the hood.
Dreamweaver gives you a lot of good visual features, but it's really easy to treat it as a black box, push a button -- get a feature. I mean Spry is a lot deeper as a framework that what you'll even see in Dreamweaver CS3.
So personally I'd like to say that everyone should have access to Spry regardless of whether you own Dreamweaver 4, MX or CS3. You'll get some of the nice push button features and easy integration with the UI in Dreamweaver CS3 but it certainly shouldn't preclude you from using it.
Does this version of Spry address the issue of standards compliance?
Yeah, custom attributes. And that honestly is a bit of a philosophical debate in the standards community. You mentioned that there's the Spry namespace, so Spry:attribute is the syntax and you add these custom attributes to your tags so like a TD tag you can add a little bit of Spry data attributes to that to help integrate it easily into the mark-up.
The reason why we decided to do it originally was because we realised that a lot of people wanted to get into rich interfaces but weren't necessarily hardcore JavaScript coders, we wanted to make it something that was easy to understand as the mark-up they were used to.
So we went in this direction and there actually is some precedent even amongst the standards right now for using things this way. The YRIA initiative for accessibility implements rolls and states for accessibility using a custom namespace in exactly the same way. Although I totally agree and we've done a lot to shore it up I would have to disagree a little but that we kind of broke the mould because there are some existing examples that went this way too. The problem is when you start talking about validation a lot of the online validators can't extend to custom namespaces. But realistically that's what the X in XHTML stands for, extensible.
So you know we're just trying to use the mechanism and that being said it's not like I disagree with the criticism, I think that it's really important to be able to use Spry both ways and that's why we did 1.6. We wanted to make sure that you had the option that you weren't tied into using custom attributes and namespaces if that's not what you wanted.
So now it's really easy to create an unobtrusive Spry page that validates perfectly and you can bring in these namespace attributes at runtime and get the benefit of the Spry framework later.
There seems to be more documentation, specifically examples of good use policy ...
Absolutely, I mean best practice [best practice], there's one thing to talk about standards and the letter of the law, so to speak, but you know with any law people find creative ways around it and ultimately it's the interpretation of these rules. The standards and how they can be applied to real world projects.
We've been paying a lot of close attention to best practises, there's one I brought up earlier, unobtrusive JavaScript in general. Complete separation of your display of CSS, your data, XHTML and your behaviour in JavaScript is becoming a goal that a lot of people are striving towards in their projects and this is just one more way that we're helping Spry fit within that model instead of the one that we carved out for it.
Is Spry compatible with the AIR framework?
Yeah, as a matter of fact it's funny that you mentioned that, one of the AIR Developer Derby finalists was Spaz AIR which was created with a nice mix of I believe a little bit of JQuery, a little bit of Spry -- it's all HTML and JavaScript. It's a really cool Twitter client. It's ended up my Twitter client of choice lately, it's a hot little app and it's all in AIR.
Can the framework play well with others, such as Aptana?
A pretty hard core JavaScript developer pulled me aside yesterday and was like "you know I gotta tell you Spry is a little bit of your sleeper feature. I hadn't really noticed it until now and I took a look at it and I'm really liking it." We made Spry to be really open and accessible so it plays nicely with other development tools.
We just want to stay as agnostic with Spry as possible, honestly it's a framework -- it does have hooks into Dreamweaver but we're looking at it as more Dreamweaver running with Spry as opposed to Spry being part of Dreamweaver.
Spry's got its own development team, they're working on their own schedules and we always sync up with the Dreamweaver team, they're part of the larger team, it's kind of like a start-up within a big company, it helps keep us nimble.
But there's obviously a benefit in having it in Dreamweaver ...
To me it's all about rapid development in Dreamweaver. Keep in mind I may not even be the best use case for a typical Dreamweaver developer because I'm a pretty serious gear head. I love tweaking code but honestly when I start with Spry, and I think the same goes with a lot of Dreamweaver features too, I use Dreamweaver to get my code 90 percent of the way there, but then there's always little tweaks that I want to do on the backend with Spry.
Let's say I've definitely drunk the standards Kool-Aid myself over the last year. A few months of withering criticism will do that to many people, but it's actually good advice that I've taken to heart.
I like to start with Spry, mock out my original page, get everything together. I'm not really thinking about how Dreamweaver's working specifically with the attributes, whether they're in the page, whether they're external or not. At the time that I want to start refining, that's when I usually do my moving. I'll take CSS inline styles and move them externally, I'll put event handlers out of my mark-up, move them into external JavaScript also and all that's really easy to do with the code management tools in Dreamweaver.
Honestly that's where I'm seeing a lot of people are using Spry. These people who are hard core developers using Dreamweaver with Spry, mostly. It's just to get that quick mark-up. On the other hand it's also pretty comprehensive, you can do a lot with the tools there. They don't cover all the features in the framework, but they cover enough of them to really let designers be effective building rich interfaces. That's kind of the goal of the framework anyway.
I think that we're almost serving two masters here with Spry, there's the folk who really want a robust framework, people who might even use a mix of frameworks. Spry, along with JQuery or people with use YUI widgets on the front-end, that's totally cool, but then there's designers who really just want to get a job done and done quickly so these are the benefits of the front-end feature of Dreamweaver for me.
1
saintwastus - 16/12/07
hi
please i want a hot cds for my business
» Report offensive content | http://www.builderau.com.au/program/javascript/soa/Spry-Standards-Dreamweaver-the-future/0,339028434,339283125,00.htm | crawl-002 | refinedweb | 2,069 | 64.34 |
Prev
C++ VC ATL STL Operator Code Index
Headers
Your browser does not support iframes.
Re: Reverse comma operator?
From:
Alan Woodland <ajw05@aber.ac.uk>
Newsgroups:
comp.lang.c++
Date:
Tue, 11 Aug 2009 20:17:35 +0100
Message-ID:
<hjs8l6xmoj.ln2@news.aber.ac.uk>
Kaz Kylheku wrote:
On 2009-08-10, Paul N <gw7rib@aol.com> wrote:
I had an idea the other day for a new operator for C and C++, which
acts like the comma operator but which returns the first value instead
of the second.
You mean like PROG1 in Common Lisp? Quite useful indeed.
In C we have kind of a special case of this, namely post-increment. I.e.
y = x++;
is similar to a use of your $ operator:
y = x $ x++;
Implicit to a saved copy of some prior value of a computation is sometimes a
handy way to express yourself.
For example, and using $ for my operator as it doesn't
seem to be already used,
return setupstuff() , calculatevalue() $ resetstuff();
Lisp:
(progn (set-up-stuff)
(prog1 (calculate-value)
(reset-stuff)))
There is prog2 also, (but no prog3, just 1, 2 and n).
I'm pretty sure you can't emulate this operator in any way in portable C.
In the GNU C dialect, we can use the ``typeof'' operator to figure out the
return type of the expression, so that we can define a temporary variable
of a compatible type. And GNU C has block statements which return a value
(the value of the last statement in the block), similar to Lisp's PROG.
(GNU C was originally written by Lisp hackers). So in GNU C, we can easily make:
#define PROG1(A, B) ...
which evaluates A, then B, with a sequence point, and yields the value of A.
I can't think of a way to do this in ISO C. Even if we accept this ugly
interface:
#define PROG1(TYPEOF_A, A, B)
You inspired me to have a go (and I've not really succeeded 100%) at
doing this using variadic templates (a learning exercise for me if
nothing else!)
#include <iostream>
// Based on simple_tuple from
template <typename ... Types>
class ParamSet;
template <>
class ParamSet<> {};
template <typename First, typename ... Rest>
class ParamSet<First,Rest...> : private ParamSet<Rest...>
{
First member;
public:
ParamSet(First const& f, Rest const& ... rest):
ParamSet<Rest...>(rest...), member(f) { }
operator First() const { return member; }
};
template <typename Ret, typename... Args>
Ret dispatch(Ret (*f)(Args...), const ParamSet<Args...>& args) {
return f(args);
}
template <typename Ret>
Ret dispatch(Ret (*f)(), const ParamSet<>&) {
return f();
}
template <typename Ret, typename... Args1, typename... Args2>
Ret first(Ret (*f1)(Args1...), Ret (*f2)(Args2...), const
ParamSet<Args1...>& args1, const ParamSet<Args2...>& args2) {
const Ret& val = dispatch(f1,args1);
dispatch(f2,args2);
return val;
}
template <typename ... Types>
class ParamSet<Types...> make_param(const Types&... types) { return
ParamSet<Types...>(types...); }
// no parenthesis on a1, a2 is important to avoid operator comma with
// multiple parameters here.
#define prog1(f1,f2,a1,a2) first(f1,f2,make_param a1,make_param a2)
bool test1(void*) {
std::cout << "in test1()" << std::endl;
return true;
}
bool test2() {
std::cout << "in test2()" << std::endl;
return false;
}
int main() {
std::cout << prog1(test1, test2, ((void*)NULL),()) << std::endl;
return 0;
}
Can anyone improve it? The problem I have is dispatching with more than
1 argument. I also can't quite think of a tidy way to 'steal' the
arguments in a macro and make the macro just take two parameters instead
of 4.
Alan
Generated by PreciseInfo ™
established,) | http://preciseinfo.org/Convert/Articles_CPP/Operator_Code/C++-VC-ATL-STL-Operator-Code-090811221735.html | CC-MAIN-2021-49 | refinedweb | 584 | 65.93 |
Microsoft Excel's ubiquity in the corporate workplace in large part derives from the ability to extend its functionality through VBA macros and user-defined functions, for automating complex procedures and for use in cell formulas. VBA however, is an old-fashioned language with a lot of subtle idiosyncracies and today is rarely used outside the restricted domain of Office automation. Python, on the other hand, is a widely used modern scripting language, whose greatest strength is the vast available range of mature, open-source libraries for performing all kinds of computing tasks.
ExcelPython is an Excel add-in which makes it incredibly easy to write user-defined functions and macros in Python, without having to touch VBA at all!
A year ago, I wrote an article showing how to call Python code from Excel using an early version of ExcelPython. Since then, the library has come a long way! Back then, it was necessary to write VBA wrapper code to call Python. With the latest versions of ExcelPython, this is all done for you automatically by the add-in, making it quicker and easier than ever to write your UDFs and macros directly in Python.
In this tip, I present some of the new features of ExcelPython which make it incredibly quick and easy to write functions and macros in Python and use them from Excel.
The tip will skim over the various features quickly to show what is possible. For more details, you can follow the in-depth tutorials.
To use ExcelPython, you must have Excel and Python installed. ExcelPython is compatible with Python versions 2.6 to 3.4. Also, it is necessary to have installed the PyWin32 library for whichever version of Python you have.
To get set up, get the latest release of ExcelPython and follow these instructions on how to install it.
Once you have installed the ExcelPython add-in, save a blank workbook in a new folder as Book1.xlsm (i.e., as a macro-enabled workbook) and click Setup ExcelPython on the ExcelPython toolbar tab. Then open up the file Book1.py which will have been created in the same folder as the workbook and enter the following code:
# Book1.py
from xlpython import *
@xlfunc
def DoubleSum(x, y):
'''Returns twice the sum of the two arguments'''
return 2 * (x + y)
Switch back to Excel and click Import Python UDFs to pick up the new code. Then in a blank cell, you can type the following formula:
=DoubleSum(1, 2)
and you should get the correct result displayed in the cell!
Other features, which you can learn more about from the tutorials, include automatic conversion to and from common Python data types such as dictionaries, NumPy arrays as well as dimensionality manipulation and automatic function and parameter documentation.
In addition to writing user-defined functions for use in cell formulas, VBA is typically used for defining macros to automate Excel. ExcelPython makes it easy to code these in Python as well.
For example, the following code shows how to write a simple macro which sets the Excel status bar (at the bottom of the window) to the current date and time.
@xlsub
@xlarg("app", vba="Application")
def my_macro(app):
from datetime import datetime
app.StatusBar = str(datetime.now())
Once you add this code to Book1.py, all you need to do is click Import Python UDFs again and the macro will be ready to use from Excel, for example you can associate it with a button control.
Workbooks, sheets and ranges can also be manipulated from Python just as you can from VBA. The following sets the value of cell A1 in the active worksheet:
@xlsub
@xlarg("sheet", vba="ActiveSheet")
def my_macro_two(sheet):
sheet.Range("A1").Value ="Hello World!"
Finally, to facilitate writing macros, ExcelPython integrates seamlessly with xlwings, an excellent Python library which wraps up the Excel object model and makes it even easier to read and write cell values by automatically converting widely-used data types like NumPy arrays and Pandas DataFrames.
from xlwings import *
@xlsub(xlwings=True):
def my_macro_three():
Range("A1").value = [ 1, 2, 3 ]
In this tip, I exposed some of the main features of ExcelPython. There are however many more! For example:
If you like ExcelPython, please get in touch and let me know what you are using it for!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | https://www.codeproject.com/Tips/810425/Writing-Excel-UDFs-and-macros-in-Python-with-Excel | CC-MAIN-2017-22 | refinedweb | 742 | 61.06 |
Have you ever worked on an application with one huge CSS file and found that when you changed a style rule in one place, something unexpected happened somewhere else? I had this problem a lot in my early days of front end development. It was frustrating! So what can you do to stop this from happening?
You need to scope your style rules!
To scope your CSS means to encapsulate style rules in a systematic way so that they apply to one particular chunk of HTML only. CSS-in-JS solutions such as Styled Components or CSS modules that ship with front end frameworks have largely solved this problem by scoping styles to your component templates as standard. This means you don't need to worry about classes in one component affecting the styling of another component — even if you use the same class name. Nice!
But what if you're just starting out, and you want to focus on building out pure CSS in a systematic way without getting bogged down with CSS-in-JS?
Working in pure CSS
In order to scope your styles in pure CSS, the aim is to declare your CSS classes specifically and solely for individual HTML components. Style rules should be purposefully verbose and self-documenting, without relying on inheritance or default browser behaviour. This type of system discourages the use of utility classes reused across multiple components because this is where you can run into the problems described at the beginning of the post. If you change the style properties of a utility class used in multiple components, it could affect the layout of your whole application — sometimes with very undesirable results!
Let's take a look at how we can harness the power of a system like BEM.
What does BEM stand for?
BEM stands for block, element, modifier, and it's a super-handy system to help you scope CSS style properties to blocks of HTML. What's more, it encourages you to make your HTML and CSS descriptive and self-documenting — helping identify the purpose and intended function of the CSS classes in the code itself.
Other class naming conventions exist alongside BEM to help you scope styles when writing HTML and CSS — such as OOCSS and SMACSS. You can even roll your own system! But the most important thing to remember is to use a system, stick to that system, and make it work for you.
So, how do we work with BEM?
Block, element, modifier
Let's take a look at the building blocks of BEM.
Block: a chunk of HTML to which you want to scope styles
.block { }
Element: any element inside that block, namespaced with your block name
.block__elementOne { } .block__elementTwo { }
Modifier: a flag to add styles to an element, without creating a separate CSS class
.block__elementOne--modifier { }
BEM syntax conventions
- Use one or two underscores to separate the block name from the element name
- Use one or two dashes to separate the element name and its modifier
- Use descriptive class names in camelCase
BEM in context
In context, your HTML using the above class names might look like this:
<section class="block"> <p class="block__elementOne">This is an element inside a block.</p> <p class="block__elementOne block__elementOne--modifier">This is an element inside a block, with a modifier.</p> </section>
In a real-life example, with more realistic class names, this might look like:
<section class="container"> <p class="container__paragraph">This is a paragraph inside a container.</p> <p class="container__paragraph container__paragraph--bold"> This is a paragraph inside a container, with a modifier that adds bold styling. </p> </section>
Using the fully-declarative approach, where you don't rely on inheritance or default browser styles, your CSS classes might look like this:
.container { display: block; margin: 1rem auto; padding: 1rem; box-sizing: border-box; } .container__paragraph { color: #000000; font-family: Arial, Helvetica, sans-serif; font-size: 1rem; font-weight: normal; line-height: 1.2; margin: 0 0 1rem 0; } .container__paragraph--bold { font-weight: bold; }
Notice how any default browser behaviour we might take for granted has been declared in the above classes — such as
display: block on the
.container <div> element. This is an extremely useful way to ensure that if you need to switch up the HTML elements in your components — for example swapping the
<div> (default
display: block) for a
<span> (default
display: inline) in the above example — the resulting styles of your components are not affected.
Wrapping up
Using BEM is not going to solve all your CSS problems (good luck with centring those
<div> elements in your layouts!), but it can help you take a step in the right direction to make your CSS readable, descriptive, and safe from any unexpected results. Again, the most important thing to remember is to use a system, stick to that system, and make it work for you.
Check out my latest YouTube video that supports this post. Subscribe for more regular front end web development tips!
And remember — build stuff, learn things, love what you do!
Discussion (15)
Funny that just yesterday I was reading a (very controversial) post here on dev.to that mentioned BEM and I was wondering, "argh, what's that again?" as I couldn't remember, but I didn't have time/forgot to look it up. And here we go, first thing this morning, this post. :) Thanks. ;)
I already have this in my project, good to see you explained it very well here, thank you!
Bem is the next best thing since css itself. I think originally was developed by smart folks at yandex
Nice post!
Ever wondered why your CSS files are 200mb? Ah yes, BEM :P the place where you infinitely add more and more css incase you ever break something as you hate the world of DRY :P [/troll]
wow
love this!
Hi, I'm wondering, how do you manage the class name for all children elements ?
is this logic is correct ? :
.block_elementOne > .elementOne_head
I would usually do this:
.block_elementOne > .block_elementOneHead
However, as usual with web dev, it depends.
If .elementOne could exist as its own ‘block’, then what you have suggested is fine. However, if it’s an intrinsic element of ‘.block’ and shouldn’t exist outside of ‘.block’, then with this system, it should always be prefixed with ‘.block__’.
As people have suggested, class names can get incredibly long with BEM. However there are always trade offs in any system. As the article states – pick a system and make it work for you. ☺️
You are right, i think the fact that an element can or not exist by itself is a good point to define the class name.
Alright, thanks Salma for the clarification !
Happy to help! ☺️
CSS syntax is mostly case-insensitive, I think we should use kebab-case instead camelCase for descriptive class names
It’s all personal preference. As the article states, choose a system and make it work for you.
I mean
Abc
aBc
abCis same
Hmm. Another solution for large css structures like bigger than 1mb. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/whitep4nth3r/what-is-bem-in-css-43c7 | CC-MAIN-2022-21 | refinedweb | 1,176 | 63.39 |
Today, We want to share with you Queue Job in Laravel 5.8 Tutorial.In this post we will show you Learn How to Implement Queues in Laravel 5.8, hear for Laravel Queues Tutorial With Example From Scratch we will give you demo and example for implement.In this post, we will learn about How to create Queue and Run Jobs using worker in Lumen/Laravel with an example.
Queue Job in Laravel 5.8 Tutorial
There are the Following The simple About Queue Job in Laravel 5.8 Tutorial Full Information With Example and source code.
As I will cover this Post with live Working example to develop How to create Queue with Mail in Laravel 5.7, so the laravel queue job testing in localhost for this example is following below.
We can also use the Mysql database queue driver locally with minimal step by step setup.
We will need to run php artisan queue:table and then simple run the php artisan migrate first though. We changes .env file would be updated to:
QUEUE_DRIVER=database
Jobs will be inserted into a jobs table in MySQL database. They can then be run simple with the php artisan queue:work command to process a single job or the php artisan queue:listen command which will same time process multiple jobs as they are included to the jobs table
php artisan queue:listen php artisan queue:work
Create a Laravel queue
php artisan make:mail TestMail
web.php
use App\Mail\TestMail; Route::get('/send-mail-queue', function () { Mail::to('[email protected]')->queue(new TestMail()); dump('done'); });
app/Mail/TestMail.php
<?php namespace App\Mail; use Illuminate\Bus\Queueable; use Illuminate\Mail\Mailable; use Illuminate\Queue\SerializesModels; use Illuminate\Contracts\Queue\ShouldQueue; class TestMail extends Mailable { use Queueable, SerializesModels; /** * Create a new message instance. * * @return void */ public function __construct() { // } /** * Build the message. * * @return $this */ public function build() { $title = "Simple Send mail for jaydeep Testing 0007"; return $this->view('email.test',['title' => $title]); } }
resources/views/email/test.blade.php
<h3>Welcome To pakainfo.com</h3> Simple Test Mail<br/> Product name : <b>{{ $title }}</b> <br/> <br/> Thanks, <br/> {!! config('app.name') !!}
Web Programming Tutorials Example with Demo
Read :
Summary
You can also read about AngularJS, ASP.NET, VueJs, PHP.
I hope you get an idea about Queue Job in Laravel 5.8 Tutorial.
I would like to have feedback on my infinityknow.com blog.
Your valuable feedback, question, or comments about this article are always welcome.
If you enjoyed and liked this post, don’t forget to share. | https://www.pakainfo.com/queue-job-in-laravel-5-8-tutorial/ | CC-MAIN-2021-39 | refinedweb | 428 | 57.06 |
Unfortunately I can't really accept that solution as the necessity for it is the exact problem I wished to address. Furthermore, you may notice that #ifndef LUA_API #ifdef __cplusplus #define LUA_API extern "C" #else #define LUA_API extern #endif #endif is perfectly valid C code, which completely eliminates the need for any C++ specifics, and would keep the issue in question from ever arising again. There really isn't any benefit in not writing a header like this, so why not? 27.9.2004 kello 19:02, David Burgess kirjoitti: This topic has been discussed before and this is the designated > solution. > > db > > On Mon, 27 Sep 2004 16:57:23 +0100, simon_brown@scee.net > <simon_brown@scee.net> wrote: >> As you mention, Lua is a C library, so it's not unreasonable that >> C++-specific code is left out of the Lua headers. >> >> I've been cleanly including Lua in C++ headers by wrapping the include >> directives as follows: >> >> extern "C" { >> #include "lua.h" >> #include "lualib.h" >> #include "lauxlib.h" >> } // extern "C" >> >> I guess if you wanted to be even cleaner you could put these contents >> into >> a "lua.hpp" file and include that in your project. >> >> Simon Brown >> Sony Computer Entertainment Europe >> >> >> lua-bounces@bazar2.conectiva.com.br wrote on 27/09/2004 16:41:04: >> >> >> >>> I love the language, but whenever I install Lua on a >>> machine I need to make an alteration to the header file. >>> >>> The following section allows C programs to use Lua >>> functions if the Lua libraries are compiled in C. >>> >>> /* mark for all API functions */ >>> #ifndef LUA_API >>> #define LUA_API extern >>> #endif >>> >>> It works for C++ only if the libraries are compiled in C++. >>> To get both C and C++ programs working with the C library, >>> I change that section of the lua.h header to the following: >>> >>> /* mark for all API functions */ >>> #ifndef LUA_API >>> #ifdef __cplusplus >>> #define LUA_API extern "C" >>> #else >>> #define LUA_API extern >>> #endif >>> #endif >>> >>> Now it works beautifully and transparently, but isn't quite >>> standard. I wonder... could this possibly be made the >>> standard for lua.h in future releases of lua? >>> _____________________________________________________________________ >>> For super low premiums ,click here _____________________________________________________________________ For super low premiums ,click here | https://lua-users.org/lists/lua-l/2004-09/msg00583.html | CC-MAIN-2021-49 | refinedweb | 365 | 62.88 |
> What may be useful (or at least interesting) is for people to enable > profiling and run real-world Lua programs. Then, take the topmost > functions (say, those that account for 80% of the program's > execution time) > and post them here in this mailing list. Such a list would likely show a > handful of functions that dominate most of a Lua's program's > execution, and > these would be the first candidates for optimization. I remember seeing luaM_growaux (If I recall correctly) being far first place in a profiler, for a program that was using very extensively lua_getglobal, lua_gettable, etc. This might or might not be a representative program however :) As you say, it's something about memory management, and not VM. In my project, I've seen the VM as one of the parts of Lua that score high in the profiler though, but nowhere near the real bottlenecks, since I haven't put many Lua agents in my project yet. But maybe some speedup would be gained just by reordering functions in source files to allow the compiler to inline them, and making some of the debug checks inside #ifdef DEBUG/#endif tags. Inlining can make a huge difference for some code. -- Lyrian | http://lua-users.org/lists/lua-l/2000-08/msg00044.html | CC-MAIN-2019-43 | refinedweb | 205 | 54.97 |
A collection of court seals that can be used in any project.
Project description
This is a collection of court seals that can be cloned and used in any project. Original files can be found in the orig directory and converted versions can be found in the numerical directories.
The goal of this project is to collect and maintain an updated repository of all the seals that courts have created and to create seals for those rare courts that lack them altogether.
Contributing
This project is blissfully easy to contribute to and we need lots of help gathering or making files. The process for this is pretty simple.
- Find or make the image and ensure it follows our quality guidelines (below).
- Add the image file to the orig directory.
- Edit seals.json to include the relevant fields for your new file.
That’s it!
index.html is a tool for quickly being able to see the progress on obtaining seals and quickly check the quality of the seals that have been obtained. You can refresh this file by opening it and pasting in the contents of seals.json where indicated (sloppy but effective).
If you wish to get involved as a developer, you’ll want to install this repository from git. Do the following:
Install imagemagick:
sudo apt-get install imagemagick
Download and link up the code:
sudo git clone /usr/local/seal_rookery
Install from your local source:
python setup.py install
Update the local copies of the images:
update-seals -f
Installation for Non-Developers
Basic usage doesn’t require any installation, but if you wish to import the seals.json file into a Python program, you may want to install the Seal Rookery as a Python module in your system. To do so:
Install imagemagick
sudo apt-get install imagemagick
Install the seal rookery
pip install seal_rookery
Update the seals
update-seals -f
You can then import the seals.json information into your project using:
from seal_rookery import seals_data
And you will have various sizes of all the seals ready to go on your system.
In the future, when you get the latest version of the rookery, run update-seals again to generate copies of any new images at various sizes. To see more information about this command run update-seals --help.
Quality Guidelines
Fact is, images are hard to work with and courts don’t always do the best job. Follow these guidelines so we can have nice things:
- Convert your original file to png or svg, as appropriate. If you have the ps file, include that as well.
- If you use transparency or the file has it, make sure the file looks OK on a background other than white. If it looks bizarre on an orange or blue background, fix it by adding a white layer on the bottom.
- Trim any extraneous margins and if the seal is circular, make the corners transparent.
- If the item was previously a jpeg or gif, it’s good to clean up the splotchiness created by the jpeg compression. You’ll see it if you zoom in.
Usage
We know of no instances where courts have requested a take down of their seal, however usage of government seals has caused a few stirs in the past. Don’t attempt to represent the government or its agents.
Deployment
Update the version info in setup.py.
Install the requirements in requirements-dev.txt
Set up a config file at ~/.pypirc containing the upload locations and authentication credentials.
Generate a distribution:
python setup.py bdist_wheel
Upload the distribution:
twine upload dist/* -r pypi
Two things. First, if you are creating original work, please consider signing and emailing a contributor license to us so that we may protect the work later, if needed. We do this because we have a lot of experience with IP litigation, and this is a good way to protect a project.
Second, if you’re just curious about the copyright of this repository, see the License file for details. The short version of this is you can pretty much use it however you desire.
Credit Where Due
This project inspired by the initial visualization work of @nowherenearithaca.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/seal_rookery/ | CC-MAIN-2019-18 | refinedweb | 725 | 63.49 |
Suppose we have a positive number n, we will add all of its digits to get a new number. Now repeat this operation until it is less than 10.
So, if the input is like 9625, then the output will be 4.
To solve this, we will follow these steps −
Let us see the following implementation to get better understanding −
import math class Solution: def solve(self, n): if n < 10: return n s = 0 l = math.floor(math.log(n, 10) + 1) while l > 0: s += n % 10 n //= 10 l -= 1 return self.solve(s) ob = Solution() print(ob.solve(9625))
9625
4 | https://www.tutorialspoint.com/program-to-find-sum-of-digits-until-it-is-one-digit-number-in-python | CC-MAIN-2021-31 | refinedweb | 105 | 74.59 |
This site works best with JavaScript enabled. Please enable JavaScript to get the best experience from this site.
package tutorial.common;
public class CommonProxy {
}
public void registerRenders(){}
package tutorial.common;
public class CommonProxy {
public void registerRenders(){}
}
public class ClientProxy extends CommonProxy {
@Override
package tutorial.client;
import tutorial.common.CommonProxy;
public Class ClientProxy extends CommonProxy {
@Override
public void registerRenders(){}
}
YouTube is back! Check out my channel here!
Welcome back to my modding series. As I have previously said, these tutorials should survive through the majority of Minecraft 1.5.x releases, but keep an eye out for any changes!
In this third tutorial, I am going to be showing you how to get started with your mod by coding our proxy files.
Note: Throughout my tutorials, I will be working under the assumption that you have all the programs installed that I specify. If you do not have, or use, Eclipse, and prefer coding in a traditional text editor (Notepad, Notepad++, etc.), you will need to do some extra searching to find the location of files you will need to import, since Eclipse does this for us.
Now that all of the information giving is out of the way, let’s go ahead and get started with The Basics #3 – Getting Started with Proxy Files.
Things that you are going to need
- Eclipse
Proxy files
At this point in time, you may be wondering “What are proxy files and why are they so important?” Well, this is a very good question.
Proxy files are very important because they allow us to register textures that we are going to use in our mod (in a future tutorial). The proxy files are also used for other methods that I will explain when we get around to them.
Packages
To be able to make our proxy files, we are going to need to have a package to put all of our mod class files in to. I create two packages, to keep the files organised and neat; tutorial.client and tutorial.common. The two packages will all contain different files; the tutorial.client package will hold all of the code relating to client-side operations, whilst the tutorial.common package will hold all of the code relating to the server-side operations. The majority of our code will be in the tutorial.common package (it will soon become clear why).
CommonProxy
The first class file we are going to be making is our CommonProxy. Select the tutorial.common package and make a new class with this name. Once Eclipse has made the class file for you, it will look like the following:
All we have to do with this class file is create one method in the CommonProxy class (between the two array – {} – indicators). This method is the registerRenders() method and it is added like so:
This is all we need to add to our CommonProxy, so the completed file should look like the following:
ClientProxy
Our ClientProxy class is set up in a similar way to the CommonProxy. Select the tutorial.client package and create the new class file. This time, however, the ClientProxy is going to extend our previously coded CommonProxy:
You will get an error code underneath CommonProxy, but simply hold Ctrl, Shift and O to import the necessary files. Because the ClientProxy now extends our CommonProxy, anything that we put in the ClientProxy will work for both the client and the server, meaning our mod can be used in both.
Next, we need to add the same method as we had in the CommonProxy; registerRenders():
You will now be met by another error code. This is because both the ClientProxy and CommonProxy have the same method. This normally wouldn’t be a problem, but because our ClientProxy extends the CommonProxy, there is a clash. To solve this problem, we need to add an annotation, signified by the @ symbol. The annotation we are going to add is the Override annotation:
This annotation is written above the registerRenders() method and does exactly as it says; anything that we put in the registerRenders() method in our ClientProxy will overwrite anything that we put in the registerRenders() method in our CommonProxy. Fortunately, we are not going to be putting any code there, so we can use this annotation.
Our completed ClientProxy should now look like the following:
This is all we need to add to the ClientProxy for now. As we add blocks and items, they will require textures and icons, which will need to be preloaded in the ClientProxy, to prevent any later problems between our mod and the Minecraft client.
Next time
This is the end of The Basics #3 – Getting Started with Proxy Files. In the next tutorial, I will be detailing the Core class file, which will be the heart of our mod. I hope you have found this tutorial useful; go ahead and drop a comment in the Reply section below or hit that Reputation Up button.
Feel free to check out my other tutorials using the links below. Until next time…
Java Reflection API
The Basics #1 – Establishing Your Workspace
The Basics #2 – Exploring the Workspace
YouTube is back! Check out my channel here! | https://www.minecraftforum.net/forums/archive/tutorials/931952-1-5-x-minecraft-modding-tutorials-the-basics-3 | CC-MAIN-2018-09 | refinedweb | 866 | 62.68 |
I missed your mail last week and only now stumbled over it after Linushas pulled my tree.On Friday 19 June 2009, Mike Frysinger wrote:> >> > sounds like a good idea.> > how about the attachedMostly good, but needs some improvements. At least it helped metrack down the last problem.> > lib/checksum.c: Fix another endianess bug> > hrm, still not quite :/> > the attached test code shows failures in every caseWhen I tried running it on x86-64, it only showed failures fornumbers 1, 2 and 4. I fixed them with this patch:---lib/checksum: fix one more thinkoWhen do_csum gets unaligned data, we really need to treatthe first byte as an even byte, not an odd byte, becausewe swap the two halves later.Found by Mike's checksum-selftest module.Reported-by: Mike Frysinger <vapier.adi@gmail.com>Signed-off-by: Arnd Bergmann <arnd@arndb.de>diff --git a/lib/checksum.c b/lib/checksum.cindex b08c2d0..0975087 100644--- a/lib/checksum.c+++ b/lib/checksum.c@@ -57,9 +57,9 @@ static unsigned int do_csum(const unsigned char *buff, int len) odd = 1 & (unsigned long) buff; if (odd) { #ifdef __LITTLE_ENDIAN- result = *buff;-#else result += (*buff << 8);+#else+ result = *buff; #endif len--; buff++;--->extern unsigned short do_csum(const unsigned char *buff, int len);>do_csum is really an internal function. IMHO we should better checkcsum_partial(), ip_fast_csum(), csum_fold(), csum_tcpudp_magic()and ip_compute_csum(), or at least a subset of them.>static unsigned char __initdata do_csum_data2[] = {> 0x0d, 0x0a,>};>static unsigned char __initdata do_csum_data3[] = {> 0xff, 0xfb, 0x01,>};> ...>static struct do_csum_data __initdata do_csum_data[] = {> DO_CSUM_DATA(1, 0x0020),> DO_CSUM_DATA(2, 0xfc00),> DO_CSUM_DATA(3, 0x0a0d),> DO_CSUM_DATA(5, 0x7fc4),> DO_CSUM_DATA(7, 0x7597),> DO_CSUM_DATA(255, 0x4f96),>};You mixed up do_csum_data2 and do_csum_data3, so they will alwaysshow up as incorrect. Also, the expected checksum is endian-dependent.The test module should either be modified to expect 0xffff to bereturned in every case, or should use le16_to_cpu(0x0020) etc. Arnd <>< | http://lkml.org/lkml/2009/6/23/597 | CC-MAIN-2014-49 | refinedweb | 313 | 56.05 |
Calculating a Shock using Newton method and Van der Waals
In this post we are going to calculate a vertical (compression) shock. Therefore, the equation system is solved using the Newton-Raphson method amongst other methods. Furthermore, we use the Van der Waals equation of state to calculate fluid properties.
Introduction to Shocks
A shock is a discontinuous change of the flow condition. This phenomena can just occur in a compressible fluid. During a shock, the static pressure rises significantly, while the velocity drops. Therefore, a vertical shock transforms a supersonic flow into a subsonic flow. Hence, this is the considered case here.
Basic equationsNotation...
– Specific gas constant in J/kg/K
– Pressure in Pa
– Temperature in K
– Specific volume in m³/kg
– Density in kg/m³
– Critical temperature in K
– Critical pressure in Pa
– Velocity in m/s
– Direction in m
First of all, let’s assume the following: 1-dimensional-, stationary flow and a pipe with a constant diameter. Therefore, the governing equations can be described as follows:
(1)
(2)
(3)
(4)
Furthermore, you can look up the coherence between static and stagnation state here. Let us continue with the governing equations:
(5)
Finally, let us neglect the volume forces
e.g. gravitational acceleration and the friction forces
. The result is:
(6)
Van der Waals Equation of State
Equations of state are used to calculate fluid properties. For example, the specific volume can be calculated from pressure and temperature
. The Van der Waals equation is used to take the real gas behaviour into account. Hence, it is an improvement compared to the the ideal gas law. Let us have a look at the basic form of the Van der Waals equation:
(7)
Here, we can calculate the variable
from:
(8)
and
from:
(9)
Afterwards, we rearrange it with respect to
.
(10)
Now we can see that it is a cubic equation. Hence, we need a numerical or analytical method to solve it. Additionally, we can solve the basic equation directly for the pressure p = p(T,v):
(11)
In addition, caloric state variables like enthalpy
, entropy
and inner energy
can be calculated using departure functions. The derivation is wonderfully described in this this YouTube video. Hence, the the enthalpy
for a Van der Waals gas can be calculated from:
(12)
Here,
is the enthalpy of an ideal gas. Hence, we can calculate it using the specific heat capacity
and the temperature
:
(13)
Therefore, we need to define a reference state
to solve the integral. Remember: since the reference states can differ, a single enthalpy is not worth much, always look at enthalpy differences. Furthermore, we need temperature dependend specific heat capacity of an ideal gas
. For instance, you can use a polynomial to approximate the specific heat capacity.
Do do that, we can use the Python function integrate from the scipy package. As an example, let us calculate the enthalpy difference of two thermodynamic states for water (steam). Therefore, we use
- The critical point of water
= 647.1 K and
= 220.6e5 Pa
- Specific gas constant
= 461.5 J/kg/K
- Reference temperature
= 300 K
- Polynomial for the specific heat capacity of water
(mind the temperature range!)
- State 1: 500 K, 2 m³/kg
- State 2: 600 K, 2.5 m³/kg
Hence, in Python this could look like the following.
#License: MIT License ()
import scipy.integrate as integrate
#Water
Tc = 647.1 #K
pc = 220.6e5 #Pa
R = 461.5 #J/kg/K
T_ref = 300 #K
def a():
return (27./64.)*R**2*Tc**2./pc
def b():
return 1./8.*R*Tc/pc/8.
def get_cp_t(T):
#Used to approximate the ideal specific heat capacity of Water (Steam)
#Applicable from 300 - 1000 K
return -0.0000001989*T**3 + 0.0005871194*T**2 - 0.1303510876*T + 1770.6334862186
def idealEnthalpy(T):
return integrate.quad(get_cp_t, T_ref, T)
def vanDerWaals_h_vt(v, T):
hIdeal = idealEnthalpy(T)[0]
return (-2.*a()/v+(b()*R*T)/(v-b())) + hIdeal
T1 = 500 #K
T2 = 600 #K
v1 = 2 #m^3/kg
v2 = 2.5 #m^3/kg
h1 = vanDerWaals_h_vt(v1, T1)
h2 = vanDerWaals_h_vt(v2, T2)
dh = h1-h2
print dh
As a result, we get the enthalpy difference
.
Solving a cubic Equation using the Newton-Raphson Method
First of all, we can use the Newton-Raphson method to solve nonlinear equations or equation systems numerically. This is achieved by finding the roots of a function. Hence, with the Newton-Raphson method the roots of a function can be successively approximated. We can describe it as follows:
(14)
Solving a cubic Equation in Python
Let us solve the cubic equation for the specific volume using the Newton-Raphson method in Python.
Therefore, we need to know, that a cubic equation has three roots. In the two-phase area we usually have 3 real roots. Thereby, the smallest root represents the volume of the liquid, while the largest root represents the volume of the vapor. The root in between is physically not relevant.
Furthermore, we can have one real root for gas or liquid phase and two imaginary roots. Hence the real root is of relevance.
Let us apply this using Python:
#License: MIT License ()
from scipy.misc import derivative
from scipy import optimize
#Water
Tc = 647.1 #K
pc = 220.6e5 #Pa
R = 461.5 #J/kg/K
def a():
return (27./64.)*R**2*Tc**2./pc
def b():
return 1./8.*R*Tc/pc/8.
def vanDerWaals_v_pt(v, p, T):
return p*v**3.-(R*T+b()*p)*v**2.+a()*v-a()*b()
def getSpecVolume(vGuess, p, T, Nmax=100, eps=1e-6):
it = 0
fv = vanDerWaals_v_pt(vGuess, p, T)
dfv = derivative(vanDerWaals_v_pt, vGuess, dx = 1e-6, args = (p, T,))
vNew = vGuess - fv/dfv
while it < Nmax:
v = vNew
fv = vanDerWaals_v_pt(v, p, T)
dfv = derivative(vanDerWaals_v_pt, v, dx = 1e-10, args = (p, T,))
vNew = v - fv/dfv
if abs(v-vNew)/v < eps:
break
it += 1
return vNew
p = 1e5 # Pa
T = 800 # K
vguess = 3 #m³/kg
v = getSpecVolume(vguess, p, T)
vTest = optimize.newton(vanDerWaals_v_pt, vguess, args=(p, T, ))
As a result, we obtain a specific volume of 3.68758 m³/kg. Furthermore, the result from the own function getSpecVolume is confirmed by the newton function from the scipy optimize
package.
Vertical (compression) shock
Furthermore, let us continue with the equations to describe shock phenomena. Therefore, let us derive the revelant equations to model the vertical compression shock. By inserting Eq. 2 into Eq. 4 we obtain:
(15)
Furthermore, by inserting Eq. 2 into Eq. 6 we obtain:
(16)
Therefore, we can solve this implicit coupled equation system for
and
.
Solve the equation system using Python
The equation system eqSystem which contains the shock equations can be solved using the fsolve function from the scipy optimizepackage. In addition, this packages uses methods such as the Newton-Raphson.
#License: MIT License ()
import numpy as np
from scipy import optimize
def area(D):
return math.pi*pow(D,2)*0.25
def eqSystem(x, args):
TDown = x[0]
vDown = x[1]
pUp = args[0]
hUp = args[1]
uUp = args[2]
vUp = args[3]
f = np.zeros(2)
f[0] = vanDerWaals_h_vt(vDown, TDown) - hUp+0.5*uUp**2.*((vDown/vUp)**2.-1.)
f[1] = vanDerWaals_p_vt(vDown, TDown) - pUp+uUp**2./vUp*(vDown/vUp-1.)
return f
p1 = 1e5 # Pa
T1 = 800 # K
v1guess = 3 #m³/kg
v1 = optimize.newton(vanDerWaals_v_pt, v1guess, args=(p1, T1, ))
mDot = 2 #kg/s
d = 0.1 #m
G = mDot/area(d)
h1 = vanDerWaals_h_vt(v1, T1)
u1 = G*v1
x0 = [1500, 10] #Guess values of T2, v2
args = [p1, h1, u1, v1] #Arguments (constant)
sol = optimize.fsolve(eqSystem, x0, args=args, xtol=1e-6, maxfev=200)
v2 = sol[1]
T2 = sol[0]
p2 = vanDerWaals_p_vt(v2, T2)
u2 = u1/v1*v2
Results
In order to produce some output data, we need to specify some input data:
- Fluid: water
- Upstream pressure
= 1e5 Pa
- Upstream temperature
= 800 K
- Diameter
= 0.1 m
- Variation of the velocity
700 – 1200 m/s.
Using the input data, we can run the script and plot the following figures. In the first figure we see the upstream Mach number over the downstream Mach number and the pressure ratio.
In the second figure we see the upstream Mach number over the temperature and specific volume ratio.
We can see, that we have a supersonic upstream state (Ma > 1). Furthermore, by looking at the figures, we can state the following:
- Subsonic state after the shock —
< 1
- Increase in pressure after the shock —
- Increase in temperature after the shock —
- Decreasing specific volume after the shock —
- Increase in entropy —
.
Conclusion
A shock can occur in a compressible fluid. During a shock, the static pressure rises significantly, while the velocity drops. This means, a supersonic flow is transformed into a subsonic flow. The equation system to model a shock can be obtained from a simple energy and momentum balance and the continuity equation. In order to solve this non-linear equation system we use the Newton-Raphson method which included in Python packages.
Furthermore, the Van der Waals equation can be used to receive fluid properties. Therefore, we need to solve a cubic equation using the Newton-Raphson method. Furthermore, we need departure functions to transform thermal state variables into caloric state variables. Therefore, we need to integrate fluid properties from polynomials.
Finally, in a vertical shock the pressure and Temperature increases non linear, while the Mach number and specific volume decreases.
What did you think?
I’d like to hear what you think about this post.
Let me know by leaving a comment below.
2 Responses
Thank you so much for the great article, it was fluent and to the point. Cheers.
I am working on something similar using the SKR equations for the cubic EoS. I noticed that your first graph for Pr vs M1, the Pr doesnt go to 1 when M1. I had the same issue initially. I eventually figured out the built in solver for the system of equations was at fault. There were two points where the equations crossed, one corresponding to what the value should be and another I am still trying to figure out the significance of mathematically. | https://numex-blog.com/shock-calculation-using-newton-method-van-der-waals/ | CC-MAIN-2019-51 | refinedweb | 1,697 | 57.16 |
Asked by:
Retrieve e-mail attachments from Exchange.
Question
- User-662994921 posted
I am trying to save attachments from a specific e-mail on Exchange. I am trying to find a code how to use WebDav?<?xml:namespace prefix = o<o:p> </o:p>
I also tried the following but it keeps logging me to my e-mail. I did make sure that Outlook was not running.oNs = oOutlook.GetNamespace("MAPI");
oNs.Logon("usrename", "Password", false, true);
Please HelpFriday, March 27, 2009 10:28 AM
All replies
- User-1181669224 posted
There's a company - that makes a really inexpensive product called InboxRules.
I'm not sure what you're trying to do business wise, but you should at least take a look at the product because it's almost certainly cheaper than anything you can develop yourself.
I'm not affiliated with the above company - just trying to help.Friday, March 27, 2009 10:52 AM
- User-662994921 posted
I am not sure why the following code is not logging in to the right account.oNs = oOutlook.GetNamespace("MAPI");
oNs.Logon("usrename", "Password", false, true);
What about WebDav any code example to save attachments?Friday, March 27, 2009 11:51 AM
- User-662994921 posted
I used WebDav see this link, March 27, 2009 1:16 PM
- User-576103305 posted
Alternatively, if you want things to be simplified, by doing just method calls and read attachments in a loop, take a look at Aspose.Network library. It provides classes for accessing Exchange server mailbox using WEBDAV and EWS (Exchange Web Services), check emails and download emails to disk with attachments as eml and msg format. More details at, September 19, 2009 9:52 PM
- User1437918436 posted
Here are examples how to save messages as .eml and .msg file and also how to save attachments from Exchange server using WebDAV protocol:, December 16, 2010 9:57 AM | https://social.msdn.microsoft.com/Forums/en-US/2f17bfd7-1c05-4629-ad22-202856e1ee07/retrieve-email-attachments-from-exchange?forum=aspenterprise | CC-MAIN-2022-40 | refinedweb | 315 | 59.74 |
In this article you will learn how to deal with timezones in SSRS Reports.
IntroductionThe Coordinated Universal Time (UTC) is the primary time standard commonly used across the world. The data warehouses store all date properties in UTC format. These UTC dates are localized in the date and time based on the user's time zone. The primary advantage of storing dates and times in UTC format is that it makes data transportable. In other words, date and time stored in UTC format can be easily converted to local date and time by adding an offset.Problem statement UTC dates are very important for the organization that span timezones. Normally the DBA chooses UTC date to store dates because it can be easily converted to a local date and time by adding an offset. This is very easily done with C# code. Using the following code we can get the local date in C#.
OutputThe following describeds how to convert a UTC date to a local date based on the time zone in a SSRS report.Solution: Microsoft provides a way to use an assembly with a report. By default the "System" namespace is included as the report assembly reference so we can use the preceding C# code in the “Code” section of the report property and create a custom function that returns a local date. The following is the procedure to do that.Step 1
Click on Report >> Report Properties from the menu.Step 2
Add the following function in the Code tab.
Step 3
To convert a date from UTC right-click the TextBox and select "Expression". Use the function that is written in the Code tab of the report properties and pass the UTC date value and timezone name to get the localized date value.
View All
View All | https://www.c-sharpcorner.com/UploadFile/ff2f08/dealing-with-time-zones-in-ssrs-reports/ | CC-MAIN-2022-05 | refinedweb | 302 | 72.56 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
how to have checkbox depends on other fields?
Hello
I would like to have a checkbox on my app employee, and I would like this check box is check or uncheck on depends on 2 other fields!
explication:
on my form i have two fields :
'user_id' (the user relative to the employee)
'x_user_id' (a personal field and it's the id of the connected user)
So if the connected user is the same of the employee in the view i want my checkbox is uncheck
else if the connected user is different of the employee in the view i want my checkbox is check.
I try to create à boolean field with a compute arg !!! but i don't know how to write this in the compute cells!
I try to create a simple module like this but it doesn't work !!!
from openerp import SUPERUSER_ID
from openerp.osv import fields, osv
class hremployee(models.Model):
_name = "hr.employee"
_description = "Employee"
_inherit = "hr.employee"
def _get_employee_user(self, cr, uid, ids, field_name, args, context=None):
if context is None:
context = {}
if isinstance(ids, (int, long)):
ids = [ids]
res = {}
for emp in self.browse(cr, uid, ids, context=context):
res[emp.id] = True
if emp.user_id == uid:
res[emp.id] = False
return res
invisible = fields.Boolean(compute='_get_employee_user')
Can you help me?
Try this way in the old api (you had defined to import osv instead of models)
from openerp.osv import fields, osv
class hremployee(osv.osv):
_inherit = "hr.employee"
def _get_employee_user(self, cr, uid, ids, field_name, args, context=None):
if context is None:
context = {}
if isinstance(ids, (int, long)):
ids = [ids]
res = {}
for emp in self.browse(cr, uid, ids, context=context):
if emp.user_id == uid:
res[emp.id] = False
else:
res[emp.id] = True
return res
_columns = {
'invisible': fields.function(_get_employee_user, type='boolean', string='Invisible')
}
Or this way in the new api:
from openerp import models, fields, api
class hremployee(models.Model):
_inherit = "hr.employee"
@api.multi
def _get_employee_user(self):
for emp in self:
if emp.user_id == uid:
emp.invisible = False
else:
emp.invisible = True
invisible = fields.Boolean(compute='_get_employee_user')
hello axel and thanks for you're help. i try with the new api, the install is ok and the fields invisible was created in model employee, but after adding this fields in the form i had an error when i try to view my data: File "/usr/lib/python2.7/dist-packages/openerp/addons/hr_inv/hr_inv.py", line 10, in _get_employee_user if emp.user_id == uid: NameError: global name 'uid' is not defined
I change if emp.user_id == uid: by if emp.user_id == emp.x_user_uid: and i have no errors but my check box is always check!!! the first if is never verified!!!! i don't understand
Yes, I only rearrange your code in the way that should work, I don't check your business rules
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-to-have-checkbox-depends-on-other-fields-92506 | CC-MAIN-2018-26 | refinedweb | 532 | 60.21 |
Building the Right Environment to Support AI, Machine Learning and Deep Learning
Watch→
Answer: The domain name server is a server application that provides a cross-reference or mapping service between symbolic names and their respective IP addresses. DNS is an application layer (layer 7) protocol under OSI's seven-layer model.
Let us consider the following scenario: User "John" wants to reach the host "maniac.synapse.com". For discussion purposes, let's assume that the IP address of "maniac" is 130.200.200.23.
John only knows the host as "maniac.syanpse.com"; he does not know the IP address. How can he successfully connect to "maniac"? Here is where DNS comes into the picture.
There are two ways to provide a mapping service between symbolic host names and actual IP addresses. The first is to have each host resolve symbolic names locally. Therefore, each host must maintain a table that provides the mapping between names and addresses. This technique can be termed "flat namespace" and will work well in small network situations.
For large networks and the Internet itself, the flat namespace method simply breaks down. Imagine every host on the Internet having a table that contains the address resolution information for every other host! What's worse is when a new host is added, all the tables have to be changed!
Enter the Domain Name System. Rather than having a flat namespace, we now are introduced to a "distributed namespace." Distributed namespace means is that hosts are grouped into domains or zones. In each one of these domains, name resolution is handled by a server(s). This server maintains a name to address mapping tables only for the zone or domain for which it is responsible. The domains can now be logically grouped and interconnected in a hierarchical tree fashion.
Hierarchical Tree Structure of Domains
In the above diagram, the branch to "maniac", when traced upward, reveals the symbolic host name, i.e. "maniac.synapse.com".
Typically, a DNS server under the "Synapse" branch provides the name resolution for all local hosts. Let's suppose that a user under the "Synapse" domain wants to connect to the host "Maniac". The DNS query will be initiated by the user and answered within the domain by the DNS server.
Now let's imagine that a user under the "Edu" domain is looking for "maniac.synapse.com". In this scenario, the DNS request is forwarded by the name-resolver under the user's local domain upward via the branches of the tree until the "Com" branch is reached. Then the request descends via the "Synapse" and eventually to the DNS server, say "dns1.synapse.com". The server "dns1" contains the IP address for "maniac" and will reply to the user's query.
The name resolution process itself can be described as follows:
The DNS server will then query the name server(s) for the mapping information until an authoritative answer is found. If not, a "name not found" message is delivered to the. | http://www.devx.com/tips/Tip/23700 | CC-MAIN-2018-51 | refinedweb | 503 | 55.84 |
I want to store objects in an array, where objects are weak, and conforms to a protocol. But when I try to loop it, I get a compiler error:
public class Weak<T: AnyObject> {
public weak var value : T?
public init (value: T) {
self.value = value
}
}
public protocol ClassWithReloadFRC: class {
func reloadFRC()
}
public var objectWithReloadFRC = [Weak<ClassWithReloadFRC>]()
for owrfrc in objectWithReloadFRC {
//If I comment this line here, it will able to compile.
//if not I get error see below
owrfrc.value!.reloadFRC()
}
Bitcast requires types of same width %.asSubstituted = bitcast i64
%35 to i128, !dbg !5442 LLVM ERROR: Broken function found, compilation
aborted!
Generics don't do protocol inheritance of their resolving type in the way that you seem to imagine. Your
Weak<ClassWithReloadFRC> type is going to be generally useless. For example, you can't make one, let alone load up an array of them.
class Thing : ClassWithReloadFRC { func reloadFRC(){} } let weaky = Weak(value:Thing()) // so far so good; it's a Weak<Thing> let weaky2 = weaky as Weak<ClassWithReloadFRC> // compile error
I think the thing to ask yourself is what you are really trying to do. For example, if you are after an array of weakly referenced objects, there are built-in Cocoa ways to do that. | https://codedump.io/share/WXXhd2bYltpr/1/iterate-array-of-weak-references-where-objects-conform-to-a-protocol-in-swift | CC-MAIN-2018-22 | refinedweb | 209 | 56.25 |
You can subscribe to this list here.
Showing
13
results of 13
I'm sorry for taking so long to follow up on this. My computer broke
and stayed that way for a very long time, and I've just recently begun
to dig this up again.
Anyway, if you're still willing to help me, here are my findings.
Clemens Ladisch <clemens@...> writes:
> Fredrik Tolf wrote:
> > The most annoying thing is that sounds seem to be serialized, ie. if
> > one process plays something through the OSS DSP device, any other
> > process that also tries to play through the OSS DSP device blocks
> > until the first process is done. An strace revealed that it is the
> > write() calls that are blocking. ps also reveals that the kernel is
> > sleeping in down(). To make things stranger, it seems that only
> > "short" streams are blocking. When I have, for example, an MP3 playing
> > through /dev/dsp, other processes can play sounds simultaneously (but
> > those that play short streams still block each other). What the
> > difference between a "short" and a "long" stream is, I can't really
> > determine, though.
>
> Are you using the same program for play "short" and "long" streams?
Normally, yes. mpg321 with -o oss (-o alsa doesn't work since I don't
have libalsa for libao; I'll try to get my hands on it) for almost all
"long" streams, and sox for short streams. However, I have tried
piping mpg321's output into sox, and it makes no difference
whatsoever.
> Is there a difference in the buffer size given to write() (as seen by
> strace) between "short" and "long" streams?
Well, mpg321 does have a strange buffer size (4608 bytes), but since I
tried piping it to sox instead, it doesn't seem to matter.
> > I have only tried with the OSS DSP, since I don't know how to play
> > PCM sound natively with ALSA. I tried some of the pcm* devices in
> > /dev/snd, but none yielded any sound output.
>
> Use "aplay something.wav", or "aplay -Dplughw:X,Y something.wav",
> where X is the card number and Y the device number (see "aplay -l" for
> a list).
I did that, and to my surprise it yielded the same results,
ie. multiple aplay processes playing "short" streams blocked each
other while "long" streams wouldn't affect anything at all. I would
have thought that it was something in the OSS layer, but it seems to
be a driver issue.
Btw., is there any "direct" way of natively playing streams in ALSA,
ie. like /dev/dsp in OSS?
Fredrik Tolf
I'm hoping someone will be able to shed some light on this problem,
after not finding anything on an archive search.
When I run aplay with .wav files with sampling rates higher than 8kHz,
aplay hangs. Several files with an 8kHz sampling rate play just fine,
but for 11025 and 22050 kHz rates with 'aplay -v', the process hangs
just after displaying the first line with the wave file format.
The particulars:
233MHz Pentium-2 processor
CreativeLabs CT4810 = Ensoniq AudioPCI sound card
Mandrake 9.1 minimum installation with text mode only
2.4.21 kernel
ALSA 0.9.0-0.14rc7 installed from rpms on the Mandrake distribution CDs
I added to /etc/modules.conf:
alias char-major-116 snd
alias char-major-14 soundcore
although something in the init process at bootup keeps changing the
'alias sound-slot-0 snd-card-0' line to 'alias sound-slot-0 snd-0'.
Does anybody see what I've done wrong? (I have ALSA is running fine on
another machine, a RedHat 9.0 full installation with version 0.9.6
compiled from source driving a via82xx motherboard sound module.)
Thanks-
John
On Sunday 12 October 2003 12:40, Jaroslav Kysela wrote:
>
> Bellow patch will solve your problem for 0.9.7b.
>
Thanks much, but I now get a similar error:
In file included from
/usr/local/src/alsa-driver-0.9.7b/include/sound/driver.h:42,
from hwdep.c:22:
/usr/local/src/alsa-driver-0.9.7b/include/adriver.h:134: redefinition of `PDE'
/lib/modules/2.4.23-pre7/build/include/linux/proc_fs.h:213: `PDE' previously
defined here
make[1]: *** [hwdep.o] Error 1
make[1]: Leaving directory `/usr/local/src/alsa-driver-0.9.7b/acore'
make: *** [compile] Error 1
Chris
On Sun, 12 Oct 2003, Chris Smith wrote:
>?
Bellow patch will solve your problem for 0.9.7b.
Jaroslav
Index: configure.in
===================================================================
RCS file: /cvsroot/alsa/alsa-driver/configure.in,v
retrieving revision 1.194
diff -u -r1.194 configure.in
--- configure.in 10 Oct 2003 14:47:38 -0000 1.194
+++ configure.in 12 Oct 2003 16:39:23 -0000
@@ -1027,6 +1027,7 @@
#define __KERNEL__
#include "$CONFIG_SND_KERNELDIR/include/linux/config.h"
#include "$CONFIG_SND_KERNELDIR/include/linux/fs.h"
+#include "$CONFIG_SND_KERNELDIR/include/linux/proc_fs.h"
],[
PDE(NULL);
],
Index: include/adriver.h
===================================================================
RCS file: /cvsroot/alsa/alsa-driver/include/adriver.h,v
retrieving revision 1.59
diff -u -r1.59 adriver.h
--- include/adriver.h 6 Oct 2003 10:09:33 -0000 1.59
+++ include/adriver.h 12 Oct 2003 16:39:23 -0000
@@ -129,6 +129,7 @@
#endif
#ifndef CONFIG_HAVE_PDE
#include <linux/fs.h>
+#include <linux/proc_fs.h>
static inline struct proc_dir_entry *PDE(const struct inode *inode)
{
return (struct proc_dir_entry *) inode->u.generic_ip;
-----
Jaroslav Kysela <perex@...>
Linux Kernel Sound Maintainer
ALSA Project, SuSE Labs?
Thanks.
Chris
I've been trying unsuccessfully to get my sound up
and running for the last 2 days now. I've tried
every possible combination of oss and alsa possible
googling and reading docs for hours and even received
help from a few people in #alsa on irc.freenode.net.
I've tried 0.9.6, 0.9.7b and have settled on 0.9.7
alsa. I've removed usb, apm, and everything I could
spare from my kernel. I followed the Debian package
docs and when that didn't work nuked the packages and
got the sources from the alsa site. The error I keep
getting is the following (I had to retype it):
root@...:/# modprobe snd-cs46xx thinkpad=1
PCI: Found IRQ 11 for device 00:06.0
PCI: Sharing IRQ 11 with 00:02.0
PCI: Sharing IRQ 11 with 01:00.0
ALSA ../../alsa-kernel/pci/cs46xx/cs46xx_lib.c:3944:
Activating CLKRUN hack for Thinkpad.
ALSA ../../alsa-kernel/pci/cs46xx/cs46xx_lib.c:268:
AC'97 write problem, codec_index = 0, reg = 0x2, val =
0x8000
ALSA ../../alsa-kernel/pci/cs46xx/cs46xx_lib.c:148:
AC'97 read problem (ACCTL_DCV), reg = 0x2
this section repeats many times
...
...
...
Sound Fusion CS46xx soundcard not found or device busy
/lib/modules/2.4.22/kernel/sound/pci/cs46xx/snd-cs46xx.o:
init_module: No such device
Hint: insmod errors can be cause by incorrect module
parameters, including invalid IO or IRQ parameters.
You may find more information in your syslog or the
output from dmesg
/lib/modules/2.4.22/kernel/sound/pci/cs46xx/snd-cs46xx.o:
insmod
/lib/modules/2.4.22/kernel/sound/pci/cs46xx/snd-cs46xx.o:
failed
/lib/modules/2.4.22/kernel/sound/pci/cs46xx/snd-cs46xx.o:
insmod snd-cs46xx failed
lspci finds this info for the card:
00:06.0 Multimedia audio controller: Cirrus Logic CS
4610/11 [CrystalClear SoundFusion Audio Accelerator]
(rev 01)
Anyone know how to fix this? I'm not sure what if
anything I can do next to solve these sound issues.
The general info on my machine is that its an ibm
thinkpad 770x 9549-7AU, its had its bios flashed to
the linux friendly newest version on the ibm site.
I'm running debian sid kernel 2.4.22 custom kernel.
I have soundcore support in the kernel. Any help would
be greatly appreciated. I'd also like to be able to
reenable my apm at some point once its been ruled out
as a possible problem.
Thanks in advance
Steve
l8tr3000@...
__________________________________
Do you Yahoo!?
The New Yahoo! Shopping - with improved product search
Hi
You need to have enabled oss-emulation for the alsa drivers, which
should appear as snd-pcm-oss, and snd-mixer-oss in lsmod's output.
If you've installed from source, the configure option is --with-oss=yes
-
Myk
On Sun, 12 Oct 2003 11:43:19 -0400
stoic@... wrote:
>
>
>
> -------------------------------------------------------
> This SF.net email is sponsored by: SF.net Giveback Program.
> SourceForge.net hosts over 70,000 Open Source Projects.
> See the people who have HELPED US provide better services:
> _______________________________________________
> Alsa-user mailing list
> Alsa-user@...
>
>
Hi,
Can anyone explain whether the .conf files are used during the initialisation
of the sound device or whether they are used during compiling or whether they
are used at all.
The reason why I am asking is that I still can't get the rear speakers working
with my intel 8x0 soundcard. So I was looking around a bit to find some
config file that mat cause this. I then bumped into the .conf files in
"/usr/share/alsa/cards/".
I noticed that there was no "rear" in the ICH.conf file. In most other .conf
files (e.g., ENS1370.conf and Audigy.conf) it there was a "rear" section. As
I guess ICH.conf is used for i8x0 based cards, I was wondering whether this
may cause why my rear speakers are not working?
Stoic
vey well,
I am using it for everything, both audio and midi.
Not sure about surround but I think it works also.
Aaron
On Fri, 2003-10-10 at 21:46, Dominic Iadicicco wrote:
> Hello all,
>
> Could anyone give me an idea of how well the SBlive
> card is supported in Alsa?
>
>
> Thanks all
>
> __________________________________
> Do you Yahoo!?
> The New Yahoo! Shopping - with improved product search
>
>
>
> -------------------------------------------------------
> This SF.net email is sponsored by: SF.net Giveback Program.
> SourceForge.net hosts over 70,000 Open Source Projects.
> See the people who have HELPED US provide better services:
> _______________________________________________
> Alsa-user mailing list
> Alsa-user@...
>
Hello.
Let me discribe my situation as follows, and maybe someone has an idea how
to fix this.
I have an on-board soundcard (C-Media CMI8738/3DX) on my Asus Mainboard.
It is working somehow acceptable, except the problem with the microphone.
It goes as follows:
The mic in itself gives a relative clear signal, allthough it is very low
volume.
So i thought, let's turn on the mic-boost button found in the alsa-mixer,
and ok, it worked, but unforunately it had only effects on the speaker
volume and the record volume was still the same. This is a very big problem,
because in order to be heard i had go plug in the chain an effects processor
from my old music session days, but that's not a real solution, since i
think the mic boost button should not be there in vain.
So i hope anyone can tell me how to increase the mic sensitivity not only
for the speakers but also for the record level as well, because that's been the
effect when pressing the mic-boost under windows.
Hint: My mic capture and mic volume are booth at the highest level possible,
and my Mic as center/Lfe button is turned off.
So if anyone has an idea how to fix this bug, i would very much apreciate
it.
Regards, Attila.
Hallo,
Falko Rütten hat gesagt: // Falko Rütten wrote:
> does anybody have a working configuration with this soundcard? I followed
> the hints and notes posted here and elsewhere, arranged my modules.conf &
> my .asoundrc accordingly -- still there remain some problems:
>
> 1. I don't seem to be able to use my mixer-device properly, i.e. aumix,
> amixer, alsamixer fail (no conrols show up)
You don't have a mixer device, because the Quattro doesn't have one. A
lot of USB audio cards don't have a mixer.
> 2. only two ports show in qjackctl: alsa_pcm 1/2 capture and playback.
What is the command line you use to start jack? And how does your
asoundrc look like?
ciao
--
Frank Barknecht _ ______footils.org__ | http://sourceforge.net/p/alsa/mailman/alsa-user/?viewmonth=200310&viewday=12 | CC-MAIN-2016-07 | refinedweb | 2,022 | 67.96 |
1 /*2 3 Derby - Class org.apache.derby.iapi.services.locks.ShExQual.locks;23 24 /**25 * This class is intended to be used a the qualifier class for ShExLockable.26 */27 public class ShExQual28 {29 private int lockState;30 31 private ShExQual(int lockState)32 {33 this.lockState = lockState;34 }35 36 /*37 ** These are intentionally package protected. They are intended to38 ** be used in this class and by ShExLockable, and by no one else.39 */40 public static final int SHARED = 0;41 public static final int EXCLUSIVE = 1;42 43 /* Shared Lock */44 public static final ShExQual SH = new ShExQual(SHARED);45 46 /* Exclusive Lock */47 public static final ShExQual EX = new ShExQual(EXCLUSIVE);48 49 public int getLockState()50 {51 return lockState;52 }53 54 public String toString()55 {56 if (lockState == SHARED)57 return "S";58 else59 return "X";60 }61 }62
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/apache/derby/iapi/services/locks/ShExQual.java.htm | CC-MAIN-2017-04 | refinedweb | 161 | 60.55 |
As a developer in the 21st century, you are often faced with addressing communications between various modules of your project. Internal, or “intra-process” communications, are often handled with loosely coupled messages, but forging beyond the process boundary can often be challenging. Adding to this requirements for scalability, testing, and security can often leave you scratching your head in search of a better way.
Over the years Microsoft has often revealed various technologies to handle this niche. Remote Procedure Calls (RPC), DCOM, Named Pipes, and Windows Communication Foundation (WCF) are examples of technologies that have fit the bill in the past. A new technology is now making the scene, this time with the assistance of Google.
For over 10 years now, Google has implemented an infrastructure to interface the vast number of microservices it oversees. In 2015, they set out to create the next version of this technology and shared what they had learned with others in the community. This technology came to be called gRPC.
What is gRPC?
In short, gRPC is Google’s version of RPC, or Remote Procedure Calls. It is a language-agnostic framework designed to provide high performance communication between services. gRPC supports four different communication types:
Unary – which is the simplest form of RPC where a request is made and a reply is provided
Server Streaming – where a client sends a request and the server responds with a stream of data
Client Streaming – in which the client sends a stream of data and the server responds with a single response
Bidirectional Streaming – where the client and the server both stream data back and forth
The streaming capabilities built into the framework make gRPC stand out and add great flexibility.
In layman's terms, gRPC lets you call methods on a remote machine or service. Pretty cool huh? In this tutorial you’ll put together a gPRC solution in Visual Studio and see how it works.
Understanding the case study project
The tutorial in this post will guide you through creating a solution with 3 projects: a server project, a client project, and a shared common code project. You will create a Weather service that will simulate temperature data and weather conditions for a fictitious area. You’ll also build a client app that will request weather data. You will learn the basics of the gRPC technology and create a foundation for experimenting further.
Prerequisites
You’ll need the following tools to build and run this project:
- .NET Core SDK 3.1 (includes the APIs, runtime, and CLI)
- Visual Studio 2019 (the Community edition is free) with the ASP.NET and web development workload
- GitHub extension for Visual Studio (optional, for cloning the companion repository)
You should have a general knowledge of Visual Studio and the C# language syntax. You will be adding and moving files amongst projects, changing component properties, and debugging code.
There is a companion repository for this post available on GitHub. It contains the complete source code for the tutorial project.
Verifying your .NET CLI version
Before we get started, test your environment to make sure you have everything set up. Open a Windows Command Prompt or Powershell console window and execute the following command-line instruction:
dotnet --version
If all goes well, it should come back with the version of the .NET SDK that is installed. At the time of this writing, the latest version is 3.1.300.
Understanding Protocol Buffer files
The crux of gRPC is Protocol Buffer files. These files implement a contract-first approach for defining the communications across the wire. It doesn’t matter if you are talking to a service on your local machine or one across the globe. The implementation is the same. These are text files with a .proto file extension which are compiled by the Grpc.Tools library using the protocol buffer compiler. The output is generated code (.cs files), similar to how WCF produces proxy code or the code-behind generated portions of WinForms. The idea is the same and these files should never be altered manually.
Here’s an example:
syntax = "proto3"; option csharp_namespace = "Server"; package greet;
The file starts with a declaration specifying the syntax in use. In this case, it is
proto3.
Following this an option specifying the namespace where the generated code classes will be placed. If the namespace option is absent the package name will be used as the namespace.
// The greeting service definition. service Greeter { // Sends a greeting rpc SayHello (HelloRequest) returns (HelloReply); }
Next, the code defines a RPC service interface. The compiler will use this to generate code for the interface and stubs. In this case, a class named
Greeter will be generated with a method called
SayHello. The parameters for this method will be a
HelloRequest object, defined in the next code block, and returns a
HelloReply object, also defined in the following code:
// The request message containing the user's name. message HelloRequest { string name = 1; } // The response message containing the greetings. message HelloReply { string message = 1; }
Lastly, the message objects are defined. These are basically data transfer objects that the compiler will use to construct POCOs (“plain-ol’ class objects). In the example,
HelloRequest and
HelloReply each have a single field of type string. Calling code passes in a name to the first object and the second replies with a message.
Protocol messages can only understand certain data types. Some of the common ones are:
The layout of each field is the type, name, and field number, which is a unique number within the message. This field number is used to define the sequence of the field within the message and is usually sequential (1 for the first field, 2 for the second, …).
Message fields can either be singular entities, meaning the message can have 0 or 1 instances of the field, or it can be repeated, where there are 0 or more instances of that field. A common use of repeated is when a list or collection is returned.
There are many features and aspects of protocol buffers beyond the scope of this post. You can find more information in the Additional Resources section at the end of this post.
Creating the solution structure
You’ll be creating one Visual Studio solution with three projects. The first will be the gRPC Service or server component. The others will be a client and a shared library. The shared library will provide code that is used in both the server and the client. Each project requires access to the proto files and keeping them in sync can become a nightmare.The solution is to have it one one place that both projects can access.
Creating the Server project
Begin creating the solution structure by creating a new project using the gRPC Service template. You will only see this template if the prerequisites are in place. If you can’t find it type
grpc in the search box and it will narrow your search.
In the Configure your new project window, name the project Server. Specify gRpc-Getting-Started for the Solution name and pick a folder where you’d like to keep your solution. Because you’ll be adding additional projects to the solution, it’s a good idea not to put the solution and the project in the same folder.
In the Create a new gRPC Service window, don’t make any changes to the defaults. You won’t need authentication or Docker support. Go ahead and create the project.
This is a good point to add your solution to source code control.
Open the greet.proto file in the Protos folder. Change the option for the namespace to:
option csharp_namespace = "Weather";
Creating the Shared project
With the Server project in place, you can reorganize things a bit. Both the Server and Client projects are going to need access to the files generated by the protobuf compiler. Many examples of gRPC project design suggest copying the .proto file(s) from one project to the other(s). That approach is prone to error, so it’s better to have the protos and their generated files in one place and share them.
Add a new C# Class Library (.NET Core) project named Shared and add it to the solution.
You need to add a few NuGet packages to this project so it can process the protos. Add the following packages. If you’re using the Package Manager Console, take care that the Shared project is selected in the Default project list.
Look in the Solution Explorer and find the Protos folder in the Server project. Cut the folder and paste it into the Shared project. Yup, you are moving the greet.proto file to another project.
You have to modify the properties of the greet.proto file so it knows it needs to be compiled. Select the file, right-click, click Properties on the context menu, and change the Build Action to Protobuf compiler. Also ensure the value for gRPC Stb Classes is set to Client and Server. This specifies how the Protobuf compiler will generate code.
The Properties panel should look like the following screenshot:
If you don’t find these options, you may have forgotten to add the NuGet packages.
Back in the Server project, add a reference to the Shared project by right-clicking on the Dependencies node, clicking Add Project Reference, selecting the Shared project, and clicking OK.
Creating the Client project
Finish the solution architecture by adding the gRPC client project. The template for this one is a C# Console App (.NET Core). Specify the Project name as Client and add it to the solution.
Modify the language version for this project so you can implement some of the new C# 8.0 syntax. Double-click the Client project in the Solution Explorer. This will open an XML format .csproj project file. In the
PropertyGroup node, add an element so the file looks like the following:
<PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>netcoreapp3.1</TargetFramework> <LangVersion>8.0</LangVersion> </PropertyGroup>
You can now use the latest bells and whistles in C# 8.0!
Add the Grpc.Net.Client NuGet package to the Client project.
Also add a project reference dependency to both the Server and Client projects.
Open the Program.cs file in the Client project.
Add the following
using directives to the existing code:
using System.Threading.Tasks; using Grpc.Net.Client; using Weather;
Replace the
Main method with the following C# code:
static async Task Main(string[] args) { // The port number(5001) must match the port of the gRPC server. using var channel = GrpcChannel.ForAddress(""); var client = new Greeter.GreeterClient(channel); var reply = await client.SayHelloAsync( new HelloRequest { Name = "GreeterClient" }); Console.WriteLine("Greeting: " + reply.Message); Console.WriteLine("Press any key to exit..."); Console.ReadKey(); }
You’ll see that
GreeterClient lints.
The new code changes the signature of the method so that it can be async. gRPC calls can be either synchronous or asynchronous. The async option is a good choice when there is potential for blocking.
A channel will provide communication and enable you to instantiate a
GreeterClient on the channel. Since you are using the default project template for the Server project, the sample .proto file is being used.
Remember the greeting service definition,
service Greeter, in the greet.proto file in the Shared project? The
var client = new Greeter.GreeterClient(channel); statement will access the code that the Protocol compiler generates from it. These files will be generated when you build the Server project, so ensure you build the project before you try to access these. Otherwise, these classes would not have been generated yet and you would be scratching your head, wondering why. With the client we now have access to the
SayHelloAsync method. Pass it a new
HelloRequest, and just like a function local to the Client project, you’ll get a reply which the first
Console.WriteLine statement sends to the console.
Right-click on the Server project and click Build. The project should build without errors and you should see a new folder, Services, in the project. In the Services folder you should see a new C# class file, GreeterService.cs.
Open the class file. You’ll see the
GreeterService class, and you’ll see that a number of objects are linted. Add the following directive to the existing list of
using directives:
using Weather;
This should resolve any outstanding errors.
Testing the application
When you have two executable projects in the same solution it is necessary to start each project separately. There are several ways of doing this. Here’s one: Start the Server project in debug mode and once it is running detach it by going to the Debug menu and selecting Windows > Processes. In the Processes window, right-click the Server.exe process and click Detach Process. The program will continue to run while freeing Visual Studio so you can debug the Client project.
Give it a try. Build the solution and start the Server project without debugging. You should see a console window which will display output similar to the following after a brief pause:
info: Microsoft.Hosting.Lifetime[0] Now listening on: info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down. info: Microsoft.Hosting.Lifetime[0] Hosting environment: Development info: Microsoft.Hosting.Lifetime[0] Content root path: C:\Projects\jeffrosenthal\gRpc-Getting-Started\Server
Next, start the Client project. You should see a second console window for the Client console project containing:
Greeting: Hello GreeterClient Press any key to exit...
The Server console should show something like this:
info: Microsoft.Hosting.Lifetime[0] Now listening on: info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down. info: Microsoft.Hosting.Lifetime[0] Hosting environment: Development info: Microsoft.Hosting.Lifetime[0] Content root path: C:\Users\Jeff\Dropbox\Projects\Twilio\Grpc\GettingStarted\Server info: Microsoft.AspNetCore.Hosting.Diagnostics[1] Request starting HTTP/2 POST application/grpc info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0] Executing endpoint 'gRPC - /greet.Greeter/SayHello' info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1] Executed endpoint 'gRPC - /greet.Greeter/SayHello' info: Microsoft.AspNetCore.Hosting.Diagnostics[2] Request finished in 157.2498ms 200 application/grpc
Adding a protocol buffer
Up to this point you have been doing a lot of foundational work: setting up the environment, adding projects, adding packages, and ensuring dependencies and references are set up. Now you can start adding more functionality to the application.
A protocol buffer is a good place to start. Many applications need to take structured data and send it through a communication channel. That’s what a protocol buffer does. In this project, you’ll create a mockup of a simple weather service and pass it through your gRPC channel.
If you recall, when you created the Server project, you specified a gRPC service as the template. That template also added an item template of type Protocol Buffer.
Go to the Protos folder in the Shared project and add a text file named weather.proto.
The file is empty, so add the following code:
syntax = "proto3"; option csharp_namespace = "MyWeather"; package weather; service WeatherService { // Sends the current weather information rpc RequestCurrentWeatherData (WeatherRequest) returns (WeatherDataReply); // Sends the a stream of weather data rpc RequestHistoricData(WeatherRequest) returns ( WeatherHistoricReply); } // The request message containing the location message WeatherRequest { string location = 1; } // The response message containing weather data message WeatherDataReply { int32 temperature = 1; int32 windspeed = 2; int32 winddirection = 3; string location = 4; } // The response message containing historic weather data message WeatherHistoricReply { repeated WeatherDataReply data = 1; }
The code starts with the familiar
syntax,
namespace, and
package statements. Next you have a block that will be generated as a class
WeatherService. This class will have two methods:
RequestCurrentDatawhich takes a
WeatherRequestparameter and returns
WeatherDataReply
RequestHistoricDatawhich takes a
WeatherRequestparameter and returns
WeatherHistoricReply.
It is really that simple.
Look at the data types or messages: The
WeatherRequest message is just a wrapper for a string where we specify a location.
As when you moved the greet.proto file from the Server project to the Shared project, change the file properties of the weather.proto file so Build Action is set to Protobuf compiler and gRPC Stub Classes is set to Client and the Server. See the image above for reference.
In the Server project, create a new C# class file in the Services folder named WeatherService.cs and replace the existing contents with the following C# code:
using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using Grpc.Core; using Microsoft.Extensions.Logging; using MyWeather; namespace Server { public class WeatherService : MyWeather.WeatherService.WeatherServiceBase { private readonly ILogger<WeatherService> _logger; public WeatherService(ILogger<WeatherService> logger) { _logger = logger; } public override Task<WeatherDataReply> RequestCurrentWeatherData(WeatherRequest request, ServerCallContext context) { return Task.FromResult(GetWeatherData(request)); } private WeatherDataReply GetWeatherData(WeatherRequest request) { var rnd = new Random((int) DateTime.Now.Ticks); return new WeatherDataReply { Temperature = rnd.Next(10) + 65, Location = request.Location, Windspeed = rnd.Next(10), Winddirection = rnd.Next(360) }; } public override Task<WeatherHistoricReply> RequestHistoricData(WeatherRequest request, ServerCallContext context) { var list = new List<WeatherDataReply>(); Enumerable.Range(0, 10).ToList().ForEach(arg => list.Add(GetWeatherData(request))); var reply = new WeatherHistoricReply { Data = {list} }; return Task.FromResult(reply); } } }
This is the class and methods that you defined in the weather.proto file. Notice how the class is based on the MyWeather.WeatherService.WeatherServiceBase generated class. Overriding the two methods specified in the weather.proto file will add the functionality that needs to be provided. A common GetWeatherData() function is used in both the current weather and historical weather calls. This will streamline things a bit. The RequestHistoricData method returns a list of weather data, while the RequestCurrentWeatherData method returns a single data point.
Just one more thing: You have to inform the server routing of the added service. Open the Startup.cs file in the Server project. Add the following directive to the existing list:
using MyWeather;
Note that this
using directive correlates to the namespace option in the weather.proto file we added. The code uses a separate namespace to clearly show the relationship between the proto and the generated classes.
In the
Configure method, find where the
GreeterService mapping is done with the
MapGrpcService method. You need to add a similar line for the
WeatherService. This will register the class with gRpc.
endpoints.MapGrpcService<WeatherService>();
Now you are good to go with a gRPC process that returns information.
Testing the completed application
Testing the application is simply a matter of starting the Server and Client projects as you did before. The console window outputs are descriptive, so they’ll give you a feel for what is happening. When all is well, the Client application console window will show the current weather data (fictious of course) along with the made-up historic weather data we generated.
You should see two console windows looking something like the following screenshot:
Potential enhancements
This post only just touched the surface of the capabilities of gRPC. Starting from this foundation, you could add a method to report weather alerts. You could also implement bidirectional communication from the Client application console window, perhaps to request additional weather information. The historic weather data could also be provided in a stream rather than a list. gRPC could also be added to a more realistic application with security features, authentication, and metadata to monitor for lost or timed-out connections and more.
Summary
gRPC is a great addition to interprocess communications. Its tight data size makes transmissions fast while minimizing bloat. It has the flexibility to fit into many different languages while maintaining endpoint-to-endpoint security.
This post introduced you to gRPC and the basic elements of using gRPC in a C# .NET project. It also showed you how to consolidate your gRPC protocol buffer definitions in a shared .NET assembly. In testing the solution, you gained some experience running multiple projects from Visual Studio by detaching one of the programs.
Consider adding gRPC to your toolbox.
Additional resources
Protocol Buffers Developer’s Guide – The canonical Google documentation is a great place to continue your training on working with gRPC.
Why we recommend gRPC for WCF developers – Microsoft recommends gRPC for Windows Communication Foundation developers who are migrating to .NET Core. This article explains why.
gRPC – Microsoft’s introduction to gRPC is a good companion piece for this post. Read it after you’ve built this project.
Cloud Native Computing Foundation – gRPC started at Google, but it’s open source and part of the CNCF.
TwilioQuest – Looking to improve your programming skills while having some fun? Join the TwilioQuest team and help defeat the forces of legacy systems!
gRPC-Getting-Started – This post’s companion repository on GitHub is available for your use under an MIT open source license.
Jeffrey Rosenthal is a C/C++/C# developer and enjoys the architectural aspects of coding and software development. Jeff is a MCSD and has operated his own company, The Coding Pit since 2008. When not coding, Jeff enjoys his home projects, rescuing dogs, and flying his drone. Jeff is available for consulting on various technologies and can be reached via email, Twitter, or LinkedIn. | https://www.twilio.com/blog/getting-started-with-grpc-in-dot-net-core | CC-MAIN-2020-50 | refinedweb | 3,516 | 58.18 |
AWS CodeDeploy Automatic Rollback using AWS Lambda
AWS Lambda is a compute service where we can upload our code to AWS Lambda and the service can run the code on our behalf using AWS infrastructure.
AWS CodeDeploy is a service that automates code deployments to Amazon EC2 instances. AWS CodeDeploy makes it easier to rapidly release new features, helps to avoid downtime during deployment, and handles the complexity of updating the applications. We can use AWS CodeDeploy to automate deployments, eliminating the need for error-prone manual operations.
However, one major drawback with AWS CodeDeploy is that it does not support the concept of automatic rollback in case of deployment failure.
This blog illustrates how we can use AWS Lambda to perform an automatic rollback of AWS CodeDeploy (using Git) in the case of deployment failure.
The basic logic to implement the above-mentioned scenario is to first set up an AWS CodeDeploy application with an appropriate Deployment group. In that application, configure a trigger which will invoke an SNS topic whenever a deployment fails. The SNS topic then further triggers a Lambda function, which in turn calls a python script. This python script simply finds the repository name and commit id of the last successful deployment and triggers the AWS CodeDeploy accordingly.
AWS Code Deploy-Lambda integration
Setting up AWS CodeDeploy Application & AWS Lambda
Follow the following steps in order to set up the AWS CodeDeploy application:
1. Sign in to the AWS Console. Go to the services and click on “CodeDeploy”.
2. Click on “Create new application“. Enter a suitable Application Name and Application Group Name:
3. Add existing EC2 instances using key and value:
4. Choose a deployment configuration:
5. Now, create a trigger. Click on “Create Trigger“. Enter an appropriate Trigger Name. In “Events” field, select “Deployment fails“. This will ensure that the trigger would be invoked only in case of “Deployment fail” event.
6. Select an Amazon SNS topic from the available list of configured SNS topics. Click on “Create Trigger“:
7. Select an IAM role in “Service Role ARN” field, with appropriate policies attached which are needed to run AWS CodeDeploy:
8. Click on “Create application“. This will successfully create your AWS CodeDeploy Application.
9. Now configure AWS Lambda. In AWS Console , go to services and click on “lambda”.
10. Click on “Create Lambda Function“.
11. Select SNS-message blueprint.
12. Now configure event sources. Select Event Source Type as “SNS” and an appropriate SNS topic(SNS topic should be same as the one configured in AWS CodeDeploy application). Click Next.
13. Now Configure the function. Give any Name and Description. In Runtime Field, select “Python 2.7“.
14. Write the following python script in the code section:
import boto3 def lambda_handler(event,context): c=boto3.client('codedeploy') dep_ids=c.list_deployments(applicationName="lambda_demo", deploymentGroupName="demo", includeOnlyStatuses=["Succeeded"]) did=dep_ids['deployments'] final_id=did[0] b=c.get_deployment(deploymentId=final_id) commit=b['deploymentInfo']['revision']['gitHubLocation']['commitId'] print commit repo=b['deploymentInfo']['revision']['gitHubLocation']['repository'] print repo c.create_deployment(applicationName="lambda_demo",deploymentGroupName="demo",revision={'revisionType':'GitHub', 'gitHubLocation': {'repository': repo,'commitId': commit}})
15. In Lambda function handler and role, select the default handler as “lambda_function.lambda_handler“. In Role field, select “Basic Execution Role“. A new window will pop up which specifies the IAM role and policy name along with policy document. Click on edit policy, and write the following policy in order to allow your Lambda function to access other AWS Services:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*", "Resource": "*" } ] }
16. Click allow.
17. Click Next.
18. In the review window, select “Enable event source“:
19. Click on Create function. This will successfully create your Lambda function which will be invoked immediately (If you do not want your Lambda function to be invoked immediately after its creation, do not select “Enable event source” as mentioned in the previous step).
20. Now go back to AWS CodeDeploy dashboard. Select Deployments.
21. Click on “Create New Deployment“.
22. Enter the previously configured Application and Deployment group name. Select the Revision Type as “My application is stored in github”. Enter the appropriate git repository name and commit id along with the Deployment Config.
23. Click on Deploy Now.
This will successfully trigger your AWS CodeDeploy and if in any case code deploy fails, Lambda function will be triggered, thus leading to automatic rollback of CodeDeploy.
Hi there,
Thank you for nice tutorial. I found above script is for code deploy deployment with git but i am deploying from s3 bucket. Do you have s3 version of this script please? Thank you so much.
Sincerely,
Zak | http://www.tothenew.com/blog/aws-codedeploy-automatic-rollback-using-aws-lambda/ | CC-MAIN-2017-17 | refinedweb | 765 | 50.43 |
This is a class to generate combinations of numbers. The generated sets of numbers can be utilized as indexes in your applications. The combinations are with repetition (the same index number can be found in more than one combination), but the combinations are unique.
The code in this article is based on a permutations code originally created by Phillip Fuchs.
The reason I wrote this is that I couldn't find a class that could generate combinations of numbers. That was a year ago. So, I wrote a class to do it. And, here I want to share it.
The code to perform the permutations is not easy to follow, but the code creating the combinations is very easy: it receives the permutation numbers, and if they are found in the current set of combinations, it discards them; otherwise, it adds the combinations to the combinations vector.
Simple and silly, but it works.
Just create an application and pass these as parameters: size, len, and combinations, where:
size
len
combinations
The resulting sets of combinations can be used as indexes in your application.
Example:
#include <span class="code-string">"combinatorial.h"</span>
#include <span class="code-keyword"><iterator> // for the copy on ostream</span>
int main(int argc, char* argv[])
{
ComboVector cv;
if (argc != 4)
return -1;
int size = atoi(argv[1]);
int len = atoi(argv[2]);
int combin = atoi(argv[3]);
if (len >= size)
{
cerr << " len cannot be >= size, exiting";
return -2;
}
Comb::Combinations(size,len,combin,cv);
copy(cv.begin(), cv.end(), ostream_iterator<string>(cout, "\n" ) );
return 0;
}
An interesting point (beyond the combinatorial) is the printing of the values of the vector using an algorithm from iterators (copy).
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | http://www.codeproject.com/script/Articles/View.aspx?aid=30010 | CC-MAIN-2014-52 | refinedweb | 300 | 54.12 |
curl_easy_pause - pause and unpause a connection
NAME
curl_easy_pause - pause and unpause a connection
SYNOPSIS
#include <curl/curl.h>
CURLcode curl_easy_pause(CURL *handle , int bitmask );
DESCRIPTION
Using.
While it may feel tempting, take care and notice that you cannot call this function from another thread. To unpause, you may for example call it from the progress callback (CURLOPT_PROGRESSFUNCTION), which gets called at least once per second, even if the connection is paused.:
Pause receiving data. There will be no data received on this connection until this function is called again without this bit set. Thus, the write callback (CURLOPT_WRITEFUNCTION) won't be called.
Pause sending data. There will be no data sent on this connection until this function is called again without this bit set. Thus, the read callback (CURLOPT_READFUNCTION) won't be called.
Convenience define that pauses both directions.
Convenience define that unpauses both directions.
RETURN VALUE
CURLE_OK (zero) means that the option was set properly, and a non-zero return code means something wrong occurred after the new state was set. See the libcurl-errors man page for the full list with descriptions.
LIMITATIONS
The pausing of transfers does not work with protocols that work without network connectivity, like FILE://. Trying to pause such a transfer, in any direction, will cause problems in the worst case or an error in the best case.
AVAILABILITY
This function was added in libcurl 7.18.0. Before this version, there was no explicit support for pausing transfers.
USAGE WITH THE MULTI-SOCKET INTERFACE
Before libcurl 7.32.0, when a specific handle was unpaused with this function, there was no particular forced rechecking or similar of the socket's state, which made the continuation of the transfer get delayed until next multi-socket call invoke or even longer. Alternatively, the user could forcibly call for example curl_multi_socket_all - with a rather hefty performance penalty.
Starting in libcurl 7.32.0, unpausing a transfer will schedule a timeout trigger for that handle 1 millisecond into the future, so that a curl_multi_socket_action( ... CURL_SOCKET_TIMEOUT) can be used immediately afterwards to get the transfer going again as desired.
MEMORY USE
When pausing a read by returning the magic return code from a write callback, the read data is already in libcurl's internal buffers so it'll allocated to save the data during the pause. This said, you should probably consider not using paused reading if you allow libcurl to uncompress data automatically.
SEE ALSO
curl_easy_cleanup, curl_easy_reset
This HTML page was made with roffit. | https://curl.haxx.se/libcurl/c/curl_easy_pause.html | CC-MAIN-2016-44 | refinedweb | 417 | 56.45 |
I'm just curious to know that there is the (Name) property, which represents the name of the Form class. This property is used within the namespace to uniquely identify the class that the Form is an instance of and, in the case of Visual Basic, is used to access the default instance of the form.
Now where this Default Instance come from, why can't C# have a equivalent method to this.
Also for example to show a form in C# we do something like this:
// Only method
Form1 frm = new Form1();
frm.Show();
' First common method
Form1.Show()
' Second method
Dim frm As New Form1()
frm.Show()
Form1
Form1
Form1
Form1
Designer
Form1
This was added back to the language in the version of VB.NET that came with VS2005. By popular demand, VB6 programmers had a hard time with seeing the difference between a type and a reference to an object of that type. Form1 vs frm in your snippet. There's history for that, VB didn't get classes until VB4 while forms go all the way back to VB1. This is otherwise quite crippling to the programmer's mind, understanding that difference is very important to get a shot at writing effective object oriented code. A big part of the reason that C# doesn't have this.
You can get this back in C# as well, albeit that it won't be quite so clean because C# doesn't allow adding properties and methods to the global namespace like VB.NET does. You can add a bit of glue to your form code, like this:
public partial class Form2 : Form { [ThreadStatic] private static Form2 instance; public Form2() { InitializeComponent(); instance = this; } public static Form2 Instance { get { if (instance == null) { instance = new Form2(); instance.FormClosed += delegate { instance = null; }; } return instance; } } }
You can now use Form2.Instance in your code, just like you could use Form2 in VB.NET. The code in the if statement of the property getter should be moved into its own private method to make it efficient, I left it this way for clarity.
Incidentally, the [ThreadStatic] attribute in that snippet is what has made many VB.NET programmers give up threading in utter despair. A problem when the abstraction is leaky. You are really better off not doing this at all. | https://codedump.io/share/on1f4VIVbbU0/1/why-is-there-a-default-instance-of-every-form-in-vbnet-but-not-in-c | CC-MAIN-2017-09 | refinedweb | 388 | 71.24 |
span8
span4
span8
span4
Hello everyone!
Long time user of FME and a big lurker here!
I am currently designing a FME Server Workbench where I expect a user to upload spreadsheets that does not have any sort of schema requirements aside from Lat/Long and a prefix of 'h_' for any hyperlinks in their spreadsheet.
I currently have two branches in my workbench
BRANCH 1
With Dynamic Reader and Writers the process is relatively simple, but my issue is with the hyperlinks. My hyperlink URLs need to be wrapped with <a href> </a> tags in order to work on web app builder mapping services. This branch creates the points out of the provided Lat/Long and accepts schemas for an increase in character width for larger hyperlinks
BRANCH 2
In order to test attribute names that contain 'h_', I used an attributeexploder and was able to determine which fields have a h_ thus needing the <a href> tags. Once I have the <a href> tag in place I want to transpose it again to return it to the original state with the attribute names being the fields with 'h_' and to reconnect it to branch 1
My issue:
In branch 2 my transposition does the job of identifying my 'h_' fields, but afterwards my attempts to transpose it again is failing me! Does anyone know a solution to rebuilding attributeexploded/aggregated lists?
Hopefully I have conveyed my issue clearly otherwise I will be checking over this post for more questions.
Thanks,
Patrick
Hi @pcheng, if I understand the requirement correctly, the workflow in the attached workspace might help you: format-urls-example.fmw (FME 2016.1.3.1)
Python Edition:
def formatURLs(feature): for attr in feature.getAllAttributeNames(): if attr.startswith('h_'): v = feature.getAttribute(attr) feature.setAttribute(attr, '<a href="%s">%s</a>' % (v, v))
Tcl Edition:
proc formatURLs {} { foreach attr [FME_AttributeNames] { if {[regexp ^h_ $attr]} { set v [FME_GetAttribute $attr] FME_SetAttribute $attr "<a href=\"$v\">$v</a>" } } }
The easiest way to do this would be to use a PythonCaller to loop through all the attributes and replace the 'h_' with the tags:
import fme import fmeobjects import re def formatURLs(feature): # get names of feature attributes attNames = feature.getAllAttributeNames() # loop through attributes for att in attNames: # don't process generic attributes if not att.startswith('fme'): # get attribute value value = feature.getAttribute(att) # does attribute start with h_? if value.startswith('h_'): # replace h_ with tags newval = re.sub('^h_','<a href>',value) + '</a>' # write updated attribute back to feature feature.setAttribute(att,newval)
Answers Answers and Comments
8 People are following this question.
Attribute names to values in a single new attribute 2 Answers
how to extract attributes definition and export it to excel or word 2 Answers
Dynamic ESRI.shp reader 1 Answer
Translate xml TransXchange data to another format 3 Answers
S-57 (ENC) Hydrographic Data Reader missing ENC edition 3.1 attributes 1 Answer | https://knowledge.safe.com/questions/37599/double-transposing-attributes-to-test-for-attribut.html | CC-MAIN-2020-16 | refinedweb | 490 | 52.49 |
Provides interaction with Windows event logs.
Public Class EventLog _
Inherits Component _
Implements ISupportInitialize
Dim instance As EventLog
public class EventLog : Component, ISupportInitialize
public ref class EventLog : public Component,
ISupportInitialize
public class EventLog extends Component implements ISupportInitialize.
If the
Source for the event log associated with the EventLog.
Creating an
EventLog object, writing an entry, then passing the EventLog object to partially trusted code can create a security issue. Never pass any event log object, including EventLogEntry and EventLogEntryCollection objects, to less trusted code.
In versions 1.0 and 1.1 of the .NET Framework, this class requires immediate callers to be fully trusted. In version 2.0 this class requires
EventLogPermission for specific actions. It is strongly recommended that EventLogPermission not be granted to partially trusted code. The ability to read and write the event log allows code to perform actions such as issuing event log messages in the name of another application.
Creating or deleting an event source requires synchronization of the underlying code by using a named mutex. If a highly privileged application locks the named mutex, attempts to create or delete an event source..
You are not required to specify the
MachineName if you are connecting to a log by specifying a Log / MachineName pair. If you do not specify the MachineName, the local computer, ".", is assumed..
There is nothing to protect an application from writing as any registered source. If an application is granted
Write permission, it can write events for any valid source registered on the computer.
Applications and services should write to the Application log or a custom log. Device drivers should write to the System log. If you do not explicitly set the Log property, the event log defaults to the Application log.
The Security log is read-only...");
}
}
#using <System.dll>
using namespace System;
using namespace System::Diagnostics;
using namespace System::Threading;
int 0;
}
// Create an EventLog instance and assign its source.
EventLog^ myLog = gcnew EventLog;
myLog->Source = "MySource";
// Write an informational entry to the event log.
myLog->WriteEntry( "Writing to event log." );
}
Windows 7, Windows Vista, Windows XP SP2, Windows XP Media Center Edition, Windows XP Professional x64 Edition, Windows XP Starter Edition, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003, Windows Server 2000 SP4, Windows Millennium Edition, Windows 98
# get-eventlog.ps1# Sample of event log using powershell# thomas lee - tfl@psp.co.uk## NB: On Windows Vista or later, you must run this script as an admin (or with UAC not active).
# Setup a new log
if (![system.diagnostics.eventlog]::SourceExists("MySource")) {
# An event log source should never be created and immediately used.# There is a latency time to enable the source, it should be created# prior to executing the application that uses the source. Execute this
# script a second time to use the new source.
# Create the new event log:[system.diagnostics.EventLog]::CreateEventSource("MySource", "MyNewLog")"CreatedEventSource""Exiting, execute the application a second time to use the source."
# The source is created. Exit the application to allow it to be registered. return}
else {"MySource Eventlog exists"}
# With log created, create an EventLog instance and assign its source.$mylog = new-object System.diagnostics.Eventlog$myLog.Source = "MySource";
# Write an informational entry to the event log. $myLog.WriteEntry("Writing to event log.")
# Display log eventsget-eventlog mynewlog
This script produced the following output:
PSH [C:\foo]: .\get-eventlog.ps1CreatedEventSourceExiting, execute
PSH [C:\foo]: .\get-eventlog.ps1MySource Eventlog exists the application a second time to use the source.Index Time Type Source EventID Message----- ---- ---- ------ ------- ------- 1 Apr 06 16:05 Info MySource 0 Writing to event log
in vista the eventlog class is broken unless you run as admin or disable uac.
The eventlog class is not 'broken' - it is only accessible to administrative users. This is by design - and is unlikely to change.I'll update the sample above with appropriate comments. | http://msdn.microsoft.com/en-us/library/system.diagnostics.eventlog.aspx | crawl-002 | refinedweb | 645 | 50.33 |
The QGVector class is an internal class for implementing Qt collection classes. More...
#include <qgvector.h>
Inherits QCollection.
List of all member functions.
The QGVector class is an internal class for implementing Qt collection classes.
QGVector is a strictly internal class that acts as a base class for the QVector collection class.
QGVector has some virtual functions that may be reimplemented in subclasses to customize behavior.
Returns:
This function returns int rather than bool so that reimplementations can return one of three values and use it to sort by:
The QVector::sort() and QVector::bsearch() functions require that compareItems() is implemented as described here.
This function should not modify the vector because some const functions call compareItems().
The default implementation sets item to 0.
See also write().
The default implementation does nothing.
See also read().
This file is part of the Qtopia platform, copyright © 1995-2005 Trolltech, all rights reserved. | http://doc.trolltech.com/qtopia2.2/html/qgvector.html | crawl-001 | refinedweb | 150 | 51.65 |
SQLAlchemy 0.9 Documentation
Querying
Project Versions
Querying¶
This section provides API documentation for the Query object and related constructs.
For an in-depth introduction to querying with the SQLAlchemy ORM, please see the Object Relational Tutorial.
The Query Object¶
Query is produced in terms of a given Session, using the query() method:
q = session.query(SomeMappedClass)
Following is the full interface for the Query object.
- class sqlalchemy.orm.query.Query(entities, session=None)¶
ORM-level SQL construction object.
Query is the source of all SELECT statements generated by the ORM, both those formulated by end-user query operations as well as by high level internal operations such as related collection loading. It features a generative interface whereby successive calls return a new Query object, a copy of the former with additional criteria and options associated with it.
Query objects are normally initially generated using the query() method of Session. For a full walkthrough of Query usage, see the Object Relational Tutorial.
- add_column(column)¶
Add a column expression to the list of result columns to be returned.
Pending deprecation: add_column() will be superseded by add_columns().
- add_columns(*column)¶
Add one or more column expressions to the list of result columns to be returned.
- all()¶
Return the results represented by this Query as a list.
This results in an execution of the underlying query.
- as_scalar()¶
Return the full SELECT statement represented by this Query, converted to a scalar subquery.
Analogous to sqlalchemy.sql.expression.SelectBase.as_scalar().
New in version 0.6.5.
- autoflush(setting)¶
Return a Query with a specific ‘autoflush’ setting.
Note that a Session with autoflush=False will not autoflush, even if this flag is set to True at the Query level. Therefore this flag is usually used only to disable autoflush for a specific, }, { 'name':'id', 'type':Integer(), 'aliased':False, 'expr':User.id, }, { 'name':'user2', 'type':User, 'aliased':True, 'expr':user_alias } ]
- correlate(*args)¶
Return a Query construct which will correlate the given FROM clauses to that of an enclosing Query or select().
The method here accepts mapped classes, aliased() constructs, and mapper() constructs.
- count()¶
Return a count of rows this Query would return.
This generates the SQL for this Query as follows:
SELECT count(1) AS count_1 FROM ( SELECT <rest of query follows...> ) AS anon_1
Changed in version 0.7: The above scheme is newly refined as of 0.7b3.
For fine grained control over specific columns to count, to skip the usage of a subquery or otherwise control of the FROM clause, or to use other aggregate functions, use func expressions in conjunction with)))
- cte(name=None, recursive=False)¶
Return the full SELECT statement represented by this Query represented as a common table expression (CTE).
New in version 0.7.6.
Parameters and usage are the same as those of the SelectBase.cte() method; see that method for further details.
Here is the Postgresql WITH RECURSIVE example. Note that, in this example, the included_parts cte and the incl_alias alias of it are Core selectables, which means the columns are accessed via the .c. attribute. The parts_alias object is an orm.aliased() instance of the Part entity,
- delete(synchronize_session='evaluate')¶
Perform a bulk delete query.
Deletes rows matched by this query from the database.
- enable_assertions(value)¶.
- enable_eagerloads(value)¶.
- except_(*q)¶
Produce an EXCEPT of this Query against one or more queries.
Works the same way as union(). See that method for usage examples.
- except_all(*q)¶
Produce an EXCEPT ALL of this Query against one or more queries.
Works the same way as union(). See that method for usage examples.
- execution_options(**kwargs)¶
Set non-SQL options which take effect during execution.
The options are the same as those accepted by Connection.execution_options().
Note that the stream_results execution option is enabled automatically if the yield_per() method is used.
- exists()¶()
New in version 0.8.1.
- filter(*criterion)¶
apply the given filtering criterion to a copy of this Query, using SQL expressions.
e.g.:
session.query(MyClass).filter(MyClass.name == 'some name')
Multiple criteria are joined together by AND:
session.query(MyClass).\ filter(MyClass.name == 'some name', MyClass.id > 5)
The criterion is any SQL expression object applicable to the WHERE clause of a select. String expressions are coerced into SQL expression constructs via the text() construct.
Changed in version 0.7.5: Multiple criteria joined by AND.
See also
Query.filter_by() - filter on keyword expressions.
- filter_by(**kwargs)¶
apply the given filtering criterion to a copy of this Query, using keyword expressions.
e.g.:
session.query(MyClass).filter_by(name = 'some name')
Multiple criteria are joined together by AND:
session.query(MyClass).\ filter_by(name = 'some name', id = 5)
The keyword expressions are extracted from the primary entity of the query, or the last entity that was the target of a call to Query.join().
See also
Query.filter() - filter on SQL expressions.
- first()¶
Return the first result of this Query or None if the result doesn’t contain any row.
first() applies a limit of one within the generated SQL, so that only one primary entity row is generated on the server side (note this may consist of multiple result rows if join-loaded collections are present).
Calling first() results in an execution of the underlying query.
- from_self(*entities)¶
return a Query that selects from this Query’s SELECT statement.
*entities - optional list of entities which will replace those being selected.
- from_statement(statement)¶
Execute the given SELECT statement and return results.
This method bypasses all internal statement compilation, and the statement is executed without modification.
The statement is typically either a text() or select() construct, and should return the set of columns appropriate to the entity class represented by this Query.
See also
Using Literal SQL - usage examples in the ORM tutorial
- get(ident)¶
Return an instance based on the given primary key identifier, or None if not found.
E.g.:
my_user = session.query(User).get(5) some_object = session.query(VersionedFoo).get((5, 10)).
get() also will perform a check if the object is present in the identity map and marked as expired - a SELECT is emitted to refresh the object as well as to ensure that the row is still present. If not, ObjectDeletedError is raised.
get() is only used to return a single mapped instance, not multiple instances or individual column constructs, and strictly on a single primary key value. The originating Query must be constructed in this way, i.e. against a single mapped entity, with no additional filtering criterion. Loading options via options() may be applied however, and will be used if the object is not yet locally present.
A lazy-loading, many-to-one attribute configured by relationship(), using a simple foreign-key-to-primary-key criterion, will also use an operation equivalent to get() in order to retrieve the target value from the local identity map before querying the database. See Relationship Loading Techniques for further details on relationship loading.
- group_by(*criterion)¶
apply one or more GROUP BY criterion to the query and return the newly resulting Query
- having(criterion)¶
apply a HAVING criterion to the query and return the newly resulting Query.
having() is used in conjunction with group_by().
HAVING criterion makes it possible to use filters on aggregate functions like COUNT, SUM, AVG, MAX, and MIN, eg.:
q = session.query(User.id).\ join(User.addresses).\ group_by(User.id).\ having(func.count(Address.id) > 2)
- instances(cursor, _Query__context=None)¶
Given a ResultProxy cursor as returned by connection.execute(), return an ORM result as an iterator.
e.g.:
result = engine.execute("select * from users") for u in session.query(User).instances(result): print u
- intersect(*q)¶
Produce an INTERSECT of this Query against one or more queries.
Works the same way as union(). See that method for usage examples.
- intersect_all(*q)¶
Produce an INTERSECT ALL of this Query against one or more queries.
Works the same way as union(). See that method for usage examples.
- join(*props, **kwargs)¶
Create a SQL JOIN against this Query object’s criterion and apply generatively, returning the newly resulting Query.
Simple Relationship Joins
Consider a mapping between two classes User and Address, with a relationship User.addresses representing a collection of Address objects associated with each User. The most common usage of join() is to create a JOIN along this relationship, using the User.addresses attribute as an indicator for how this should occur:
q = session.query(User).join(User.addresses)
Where above, the call to join() along User.addresses will result in SQL equivalent to:
SELECT user.* FROM user JOIN address ON user.id = address.user_id
In the above example we refer to User.addresses as passed to join() as the on clause, that is, it indicates how the “ON” portion of the JOIN should be constructed. For a single-entity query such as the one above (i.e. we start by selecting only from User and nothing else), the relationship can also be specified by its string name:
q = session.query(User).join("addresses")
join() can also accommodate multiple “on clause” arguments to produce a chain of joins, such as below where a join across four related entities is constructed:
q = session.query(User).join("orders", "items", "keywords")
The above would be shorthand for three separate calls to join(), each using an explicit attribute to indicate the source entity:
q = session.query(User).\ join(User.orders).\ join(Order.items).\ join(Item.keywords)
Joins to a Target Entity or Selectable
A second form of join() allows any mapped entity or core selectable construct as a target. In this usage, join() will attempt to create a JOIN along the natural foreign key relationship between two entities:
q = session.query(User).join(Address)
The above calling form of join() will raise an error if either there are no foreign keys between the two entities, or if there are multiple foreign key linkages between them. In the above calling form, join() is called upon to create the “on clause” automatically for us. The target can be any mapped entity or selectable, such as a Table:
q = session.query(User).join(addresses_table)
Joins to a Target with an ON Clause
The third calling form allows both the target entity as well as the ON clause to be passed explicitly. Suppose for example we wanted to join to Address twice, using an alias the second time. We use aliased() to create a distinct alias of Address, and join to it using the target, onclause form, so that the alias can be specified explicitly as the target along with the relationship to instruct how the ON clause should proceed:
a_alias = aliased(Address) q = session.query(User).\ join(User.addresses).\ join(a_alias, User.addresses).\ filter(Address.email_address=='ed@foo.com').\ filter(a_alias.email_address=='ed@bar.com')
Where above, the generated SQL would be similar to:
SELECT user.* FROM user JOIN address ON user.id = address.user_id JOIN address AS address_1 ON user.id=address_1.user_id WHERE address.email_address = :email_address_1 AND address_1.email_address = :email_address_2
The two-argument calling form of join() also allows us to construct arbitrary joins with SQL-oriented “on clause” expressions, not relying upon configured relationships at all. Any SQL expression can be passed as the ON clause when using the two-argument form, which should refer to the target entity in some way as well as an applicable source entity:
q = session.query(User).join(Address, User.id==Address.user_id)
Changed in version 0.7: In SQLAlchemy 0.6 and earlier, the two argument form of join() requires the usage of a tuple: query(User).join((Address, User.id==Address.user_id)). This calling form is accepted in 0.7 and further, though is not necessary unless multiple join conditions are passed to a single join() call, which itself is also not generally necessary as it is now equivalent to multiple calls (this wasn’t always the case).
Advanced Join Targeting and Adaption
There is a lot of flexibility in what the “target” can be when using join(). As noted previously, it also accepts Table constructs and other selectables such as alias() and select() constructs, with either the one or two-argument forms:
addresses_q = select([Address.user_id]).\ where(Address.email_address.endswith("@bar.com")).\ alias() q = session.query(User).\ join(addresses_q, addresses_q.c.user_id==User.id)
join() also features the ability to adapt a relationship() -driven ON clause to the target selectable. Below we construct a JOIN from User to a subquery against Address, allowing the relationship denoted by User.addresses to adapt itself to the altered target:
address_subq = session.query(Address).\ filter(Address.email_address == 'ed@foo.com').\ subquery() q = session.query(User).join(address_subq, User.addresses)
Producing SQL similar to:
SELECT user.* FROM user JOIN ( SELECT address.id AS id, address.user_id AS user_id, address.email_address AS email_address FROM address WHERE address.email_address = :email_address_1 ) AS anon_1 ON user.id = anon_1.user_id
The above form allows one to fall back onto an explicit ON clause at any time:
q = session.query(User).\ join(address_subq, User.id==address_subq.c.user_id)
Controlling what to Join From
While join() exclusively deals with the “right” side of the JOIN, we can also control the “left” side, in those cases where it’s needed, using select_from(). Below we construct a query against Address but can still make usage of User.addresses as our ON clause by instructing the Query to select first from the User entity:
q = session.query(Address).select_from(User).\ join(User.addresses).\ filter(User.name == 'ed')
Which will produce SQL similar to:
SELECT address.* FROM user JOIN address ON user.id=address.user_id WHERE user.name = :name_1
Constructing Aliases Anonymously
join() can construct anonymous aliases using the aliased=True flag. This feature is useful when a query is being joined algorithmically, such as when querying self-referentially to an arbitrary depth:
q = session.query(Node).\ join("children", "children", aliased=True)
When aliased=True is used, the actual “alias” construct is not explicitly available. To work with it, methods such as Query.filter() will adapt the incoming entity to the last join point:
q = session.query(Node).\ join("children", "children", aliased=True).\ filter(Node.name == 'grandchild 1')
When using automatic aliasing, the from_joinpoint=True argument can allow a multi-node join to be broken into multiple calls to join(), so that each path along the way can be further filtered:
q = session.query(Node).\ join("children", aliased=True).\ filter(Node.name='child 1').\ join("children", aliased=True, from_joinpoint=True).\ filter(Node.name == 'grandchild 1')
The filtering aliases above can then be reset back to the original Node entity using reset_joinpoint():
q = session.query(Node).\ join("children", "children", aliased=True).\ filter(Node.name == 'grandchild 1').\ reset_joinpoint().\ filter(Node.name == 'parent 1)
For an example of aliased=True, see the distribution example XML Persistence which illustrates an XPath-like query system using algorithmic joins.
See also
Querying with Joins in the ORM tutorial.
Mapping Class Inheritance Hierarchies for details on how join() is used for inheritance relationships.
orm.join() - a standalone ORM-level join function, used internally by Query.join(), which in previous SQLAlchemy versions was the primary ORM-level joining interface.
- label(name)¶
Return the full SELECT statement represented by this Query, converted to a scalar subquery with a label of the given name.
Analogous to sqlalchemy.sql.expression.SelectBase.label().
New in version 0.6.5.
- merge_result(iterator, load=True)¶
Merge a result into this Query object’s Session.
Given an iterator returned by a Query of merge_result() is used, see the source code for the example Dogpile Caching, where merge_result() is used to efficiently restore state from a cache back into a target Session.
- one()¶
Return exactly one result or raise an exception.
Raises sqlalchemy.orm.exc.NoResultFound if the query selects no rows. Raises sqlalchemy.orm.exc.MultipleResultsFound if multiple object identities are returned, or if multiple rows are returned for a query that does not return object identities.
Note that an entity query, that is, one which selects one or more mapped classes as opposed to individual column attributes, may ultimately represent many rows but only one row of unique entity or entities - this is a successful result for one().
Calling one() results in an execution of the underlying query.
Changed in version 0.6: one() fully fetches all results instead of applying any kind of limit, so that the “unique”-ing of entities does not conceal multiple object identities.
- options(*args)¶
Return a new Query object, applying the given list of mapper options.
Most supplied options regard changing how column- and relationship-mapped attributes are loaded. See the sections Deferred Column Loading and Relationship Loading Techniques for reference documentation.
- order_by(*criterion)¶
apply one or more ORDER BY criterion to the query and return the newly resulting Query
All existing ORDER BY settings can be suppressed by passing None - this will suppress any ORDER BY configured on mappers as well.
Alternatively, an existing ORDER BY setting on the Query object can be entirely cancelled by passing False as the value - use this before calling methods where an ORDER BY is invalid.
- outerjoin(*props, **kwargs)¶
Create a left outer join against this Query object’s criterion and apply generatively, returning the newly resulting Query.
Usage is the same as the join() method.
- params(*args, **kwargs)¶.
- populate_existing()¶
Return a Query that will expire and refresh all instances as they are loaded, or reused from the current Session.
populate_existing() does not improve behavior when the ORM is used normally - the Session object’s usual behavior of maintaining a transaction and expiring all attributes after rollback or commit handles object state automatically. This method is not intended for general use.
- prefix_with(*prefixes)¶
Apply the prefixes to the query and return the newly resulting Query.
e.g.:
query = sess.query(User.name).\ prefix_with('HIGH_PRIORITY').\ prefix_with('SQL_SMALL_RESULT', 'ALL')
Would render:
SELECT HIGH_PRIORITY SQL_SMALL_RESULT ALL users.name AS users_name FROM users
New in version 0.7.7.
- reset_joinpoint()¶
Return a new Query, where the “join point” has been reset back to the base FROM entities of the query.
This method is usually used in conjunction with the aliased=True feature of the join() method. See the example in join() for how this is used.
- scalar()¶.
- select_entity_from(from_obj)¶
Set the FROM clause of this Query to a core selectable, applying it as a replacement FROM clause for corresponding mapped entities.
This method is similar to the Query.select_from() method, in that it sets the FROM clause of the query. However, where Query.select_from() only affects what is placed in the FROM, this method also applies
Changed in version 0.9: This method no longer applies the given FROM object to be the selectable from which matching entities select from; the select_entity_from() method now accomplishes this. See that method for a description of this behavior.
See also
Query.select_entity_from()
- selectable¶
Return the Select object emitted by this Query.
Used for inspect() compatibility, this is equivalent to:
query.enable_eagerloads(False).with_labels().statement
- slice(start, stop)¶
apply LIMIT/OFFSET to the Query based on a ” “range and return the newly resulting Query.
- statement¶
The full SELECT statement represented by this Query.
The statement by default will not have disambiguating labels applied to the construct unless with_labels(True) is called first.
- subquery(name=None, with_labels=False, reduce_columns=False)¶
return the full SELECT statement represented by this Query, embedded within an Alias.
Eager JOIN generation within the query is disabled.
- union(*q)¶ object will not render ORDER BY within its SELECT statement.
- union_all(*q)¶
Produce a UNION ALL of this Query against one or more queries.
Works the same way as union(). See that method for usage examples.
- update(values, synchronize_session='evaluate')¶
Perform a bulk update query.
Updates rows matched by this query in the database.
- values(*columns)¶
Return an iterator yielding result tuples corresponding to the given list of columns
- whereclause¶
A readonly attribute which returns the current WHERE criterion for this Query.
This returned value is a SQL expression construct, or None if no criterion has been established.
- with_entities(*entities)¶
Return a new Query replacing)
New in version 0.6.5.
- with_for_update(read=False, nowait=False, of=None)¶
return a new Query with the specified options for the FOR UPDATE clause.
The behavior of this method is identical to that of SelectBase.with_for_update(). When called with no arguments, the resulting SELECT statement will have a FOR UPDATE clause appended. When additional arguments are specified, backend-specific options such as FOR UPDATE NOWAIT or LOCK IN SHARE MODE can take effect.
E.g.:
q = sess.query(User).with_for_update(nowait=True, of=User)
The above query on a Postgresql backend will render like:
SELECT users.id AS users_id FROM users FOR UPDATE OF users NOWAIT
New in version 0.9.0: Query.with_for_update() supersedes the Query.with_lockmode() method.
See also
GenerativeSelect.with_for_update() - Core level method with full argument and behavioral description.
- with_hint(selectable, text, dialect_name='*')¶
Add an indexing hint for the given entity or selectable to this Query.
Functionality is passed straight through to with_hint(), with the addition that selectable can be a Table, Alias, or ORM entity / mapped class /etc.
- with_labels()¶.
- with_lockmode(mode)¶
Return a new Query object with the specified “locking mode”, which essentially refers to the FOR UPDATE clause.
Deprecated since version 0.9.0: superseded by Query.with_for_update().
See also
Query.with_for_update() - improved API for specifying the FOR UPDATE clause.
- with_parent(instance, property=None)¶
Add filtering criterion that relates the given instance to a child object or collection, using its attribute state as well as an established relationship() configuration.
The method uses the with_parent() function to generate the clause, the result of which is passed to Query.filter().
Parameters are the same as with_parent(), with the exception that the given property can be None, in which case a search is performed against this Query object’s target mapper.
- with_polymorphic(cls_or_mappers, selectable=None,.
- with_transformation(fn)¶
Return a new Query object transformed by the given function.
E.g.:
def filter_something(criterion): def transform(q): return q.filter(criterion) return transform q = q.with_transformation(filter_something(x==5))
This allows ad-hoc recipes to be created for Query objects. See the example at Building Transformers.
New in version 0.7.4.
- yield_per(count)¶
Yield only count rows).
The yield_per() method is not compatible with most eager loading schemes, including joinedload and subqueryload. See the warning below.
Warning
Use this method with caution; if the same instance is present in more than one batch of rows, end-user changes to attributes will be overwritten.
In particular, it’s usually impossible to use this setting with eagerly loaded collections (i.e. any lazy=’joined’ or ‘subquery’) since those collections will be cleared for a new load when encountered in a subsequent result batch. In the case of ‘subquery’ loading, the full result for all rows is fetched which generally defeats the purpose of yield_per().
Also note that while yield_per() will set the stream_results execution option to True, currently this is only understood by psycopg2 dialect which will stream results using server side cursors instead of pre-buffer all rows for this query. Other DBAPIs pre-buffer all rows before making them available. The memory use of raw database rows is much less than that of an ORM-mapped object, but should still be taken into consideration when benchmarking.
ORM-Specific Query Constructs¶
- sqlalchemy.orm.aliased(element, alias=None, name=None, flat=False, adapt_on_names=False)¶
Produce an alias of the given element, usually an AliasedClass instance.
E.g.:
my_alias = aliased(MyClass) session.query(MyClass, my_alias).filter(MyClass.id > my_alias.id), flat=False, adapt_on_names=False, with_polymorphic_mappers=(), with_polymorphic_discriminator=None, base_alias=None, use_mapper_path=False)¶
Represents an “aliased” form of a mapped class for usage with Query.
The ORM equivalent of a sqlalchemy.sql.expression.alias() construct, this object mimics the mapped class using a __getattr__ scheme and maintains a reference to a real Alias object.
Usage is via the, adapt_on_names)¶
Bases: sqlalchemy.orm.base..orm.query.Bundle(name, *exprs, **kw)¶
A grouping of SQL expressions that are returned by a Query under one namespace.
The Bundle essentially allows nesting of the tuple-based results returned by a column-oriented Query object.
- __init__(name, *exprs, **kw)¶
e.g.:
bn = Bundle("mybundle", MyClass.x, MyClass.y) for row in session.query(bn).filter( bn.c.x == 5).filter(bn.c.y == 4): print(row.mybundle.x, row.mybundle.y)
- c = None¶
An alias for Bundle.columns.
- columns = None¶)
- create_row_processor(query, procs, labels)¶
Produce the “row processing” function for this Bundle.
May be overridden by subclasses.
See also
Column Bundles - includes an example of subclassing.
-
- class sqlalchemy.orm.strategy_options.Load(entity)¶
Bases: sqlalchemy.sql.expression.Generative, sqlalchemy.orm.interfaces.MapperOption
Represents loader options which modify the state of a Query in order to affect how various mapped attributes are loaded.
New in version 0.9.0: The Load() system is a new foundation for the existing system of loader options, including options such as orm.joinedload(), orm.defer(), and others. In particular, it introduces a new method-chained system that replaces the need for dot-separated paths as well as “_all()” options such as orm.joinedload_all().
A Load object can be used directly or indirectly. To use one directly, instantiate given the parent class. This style of usage is useful when dealing with a Query that has multiple entities, or when producing a loader option that can be applied generically to any style of query:
myopt = Load(MyClass).joinedload("widgets")
The above myopt can now be used with Query.options():
session.query(MyClass).options(myopt)
The Load construct is invoked indirectly whenever one makes use of the various loader options that are present in sqlalchemy.orm, including options such as orm.joinedload(), orm.defer(), orm.subqueryload(), and all the rest. These constructs produce an “anonymous” form of the Load object which tracks attributes and options, but is not linked to a parent class until it is associated with a parent Query:
# produce "unbound" Load object myopt = joinedload("widgets") # when applied using options(), the option is "bound" to the # class observed in the given query, e.g. MyClass session.query(MyClass).options(myopt)
Whether the direct or indirect style is used, the Load object returned now represents a specific “path” along the entities of a Query. This path can be traversed using a standard method-chaining approach. Supposing a class hierarchy such as User, User.addresses -> Address, User.orders -> Order and Order.items -> Item, we can specify a variety of loader options along each element in the “path”:
session.query(User).options( joinedload("addresses"), subqueryload("orders").joinedload("items") )
Where above, the addresses collection will be joined-loaded, the orders collection will be subquery-loaded, and within that subquery load the items collection will be joined-loaded.
- contains_eager(loadopt, attr, alias=None)¶
Produce a new Load object with the orm.contains_eager() option applied.
See orm.contains_eager() for usage examples.
- defaultload(loadopt, attr)¶
Produce a new Load object with the orm.defaultload() option applied.
See orm.defaultload() for usage examples.
- defer(loadopt, key)¶
Produce a new Load object with the orm.defer() option applied.
See orm.defer() for usage examples.
- immediateload(loadopt, attr)¶
Produce a new Load object with the orm.immediateload() option applied.
See orm.immediateload() for usage examples.
- joinedload(loadopt, attr, innerjoin=None)¶
Produce a new Load object with the orm.joinedload() option applied.
See orm.joinedload() for usage examples.
- lazyload(loadopt, attr)¶
Produce a new Load object with the orm.lazyload() option applied.
See orm.lazyload() for usage examples.
- load_only(loadopt, *attrs)¶
Produce a new Load object with the orm.load_only() option applied.
See orm.load_only() for usage examples.
- noload(loadopt, attr)¶
Produce a new Load object with the orm.noload() option applied.
See orm.noload() for usage examples.
- subqueryload(loadopt, attr)¶
Produce a new Load object with the orm.subqueryload() option applied.
See orm.subqueryload() for usage examples.
- undefer(loadopt, key)¶
Produce a new Load object with the orm.undefer() option applied.
See orm.undefer() for usage examples.
- undefer_group(loadopt, name)¶
Produce a new Load object with the orm.undefer_group() option applied.
See orm.undefer_group() for usage examples.
- sqlalchemy.orm.join(left, right, onclause=None, isouter=False, join_to_left=None)¶
Produce an inner join between left and right clauses.
orm.join() is an extension to the core join interface provided by sql.expression.join(), where the left and right selectables may be not only core selectable objects such as Table, but also mapped classes or AliasedClass instances. The “on” clause can be a SQL expression, or an attribute or string name referencing a configured relationship().
orm.join() is not commonly needed in modern usage, as its functionality is encapsulated within that of the Query.join() method, which features a significant amount of automation beyond orm.join() by itself. Explicit usage of orm.join() with Query involves usage of the Query.select_from() method, as in:
from sqlalchemy.orm import join session.query(User).\ select_from(join(User, Address, User.addresses)).\ filter(Address.email_address=='foo@bar.com')
In modern SQLAlchemy the above join can be written more succinctly as:
session.query(User).\ join(User.addresses).\ filter(Address.email_address=='foo@bar.com')
See Query.join() for information on modern usage of ORM level joins.
Changed in version 0.8.1: - the join_to_left parameter is no longer used, and is deprecated.
- sqlalchemy.orm.outerjoin(left, right, onclause=None, join_to_left=None)¶
Produce a left outer join between left and right clauses.
This is the “outer join” version of the orm.join() function, featuring the same behavior except that an OUTER JOIN is generated. See that function’s documentation for other usage details.
- sqlalchemy.orm.with_parent(instance, prop)¶
Create filtering criterion that relates this query’s primary entity to the given related instance, using established relationship() configuration..
Changed in version 0.6.4: This method accepts parent instances in all persistence states, including transient, persistent, and detached. Only the requisite primary key/foreign key attributes need to be populated. Previous versions didn’t work with transient instances. | http://docs.sqlalchemy.org/en/rel_0_9/orm/query.html?highlight=query | CC-MAIN-2014-42 | refinedweb | 4,962 | 50.12 |
When.
By carrying out port routing, we can also connect from the Internet to our connected objects. However, beware of security vulnerabilities. It is better to connect your objects to a home automation server which is the job after all!
How to assign a fixed IP address to an ESP8266 or ESP-01 project?
The ESP8266WiFi library allows you to precisely assign the connection parameters:
- address you want to assign
- server, it is he who assigns the IP address. Here, we inform him that we want to reserve a fixed IP address. By default, we use the DNS server of the internet box or the router, so we indicate the same address.
- This is the IP address of the internet box or router
- subnet. Generally, it is 255, 255, 255, 0. It will be necessary to check directly at the level of the internet box or the router
Here, we will assign the IP address 192.168.1.40 to the ESP01 or ESP8266 module.
IPAddress ip(192, 168, 1, 40); IPAddress dns(192, 168, 1, 1); IPAddress gateway(192, 168, 1, 1); IPAddress subnet(255, 255, 255, 0);
The WiFi.config() method is used to configure the IP address and connection parameters to the local WiFi network.
WiFi.config(ip, dns, gateway, subnet);
Then we can connect as usual
WiFi.begin(ssid, password);
Read this article to learn more about the ESP8266WiFi library
How to assign a fixed IP address to an ESP project
The WiFi.h library for ESP32 is the equivalent of the ESP8266WiFi library.
The configuration and the call of the methods are perfectly identical!
You just have to include conditional code at the start of the project to load the library that corresponds to the platform. Detection is automatic.
#ifdef ESP32 #include <WiFi.h> #else #include <ESP8266WiFi.h> #endif
Upload the Arduino code to test the fixed IP address
Here is a complete code example that you can upload from the Arduino IDE or PlatformIO.
The code is ESP32 and ESP8266 compatible (including ESP01). On the Arduino IDE, you can remove the first line #include <Arduino.h>.
#include <Arduino.h> #ifdef ESP32 #include <WiFi.h> #else #include <ESP8266WiFi.h> #endif const char* ssid = "enter_your_ssid"; const char* password = "enter_your_password"; WiFiServer server(80); void setup() { Serial.begin(115200); delay(10); IPAddress ip(192, 168, 1, 40); IPAddress dns(192, 168, 1, 1); IPAddress gateway(192, 168, 1, 1); IPAddress subnet(255, 255, 255, 0); WiFi.config(ip, dns, gateway, subnet); WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print("..."); } Serial.println("WiFi connected"); server.begin(); Serial.println("Web server running."); delay(500); Serial.println(WiFi.localIP()); } void loop() { // put your main code here, to run repeatedly: }
PlatformIO configurations for ESP32, ESP8266 or ESP-01
Here are some PlatformIO configurations for a LoLin d1 Mini (ESP8266), an ESP-01 (512KB) / ESP-01S (1MB) WiFi module or a LoLin D32 Pro (ESP32).
Check the assigned IP address
Upload the project and open the serial monitor to verify that the IP address has been correctly assigned
From the Arduino IDE
From PlatformIO
--- More details at --- Miniterm on /dev/cu.usbserial-1410 115200,8,N,1 --- --- Quit: Ctrl+C | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H --- .....................WiFi connected Web server running. 192.168.1.40
Updates
20/10/2020 | https://diyprojects.io/how-to-assign-fixed-ip-esp32-esp8266-esp01/ | CC-MAIN-2021-49 | refinedweb | 554 | 58.38 |
I have always been slightly confused about the difference between classes and structures in C#. For many years, structures seemed identical to classes, but were simply not as extensible. Recently I had a relook at them and came up with two key identifying features that help me differentiate the two..
Where they are stored Value and Reference values So, the first main difference for me is that structure instances are stored on the stack and class instances are stored on the heap.
The second main difference is that structures are value types while classes are reference types. With this in mind it is interesting to see the differences between structures and classes in the following code snippet and output.
using System;
namespace DifferenceBetweenClassesAndStructures {
struct StrucutreExample { public int x; } class ClassExample { public int x; } class Program { static void Main(string[] args) { StrucutreExample st1 = new StrucutreExample(); // Not necessary, could have done StructureExample1 st1; StrucutreExample st2; ClassExample cl1 = new ClassExample(); ClassExample cl2; cl1.x = 100; st1.x = 100; cl2 = cl1; st2 = st1; cl2.x = 50; st2.x = 50; Console.WriteLine("st1 - {0}, st2 - {1}", st1.x, st2.x); Console.WriteLine("cl1 - {0}, cl2 - {1}", cl1.x, cl2.x); Console.ReadLine(); } } }
As you can see form the above code, we have instantiated 2 instances of a structure and 2 instances of a classe identically, and then pointed the second instance of the structure and the class to the first instance of each. The output is different, while the second instance of the class is pointing via reference to the first instance of the class, with the structure the second instance makes an independent copy of the first structure and so when you change the 2nd structures value, it does not effect the first structures value.
So with that in mind, the output would be as follows…
2011-07-19 01-07-19 PM
And there you go, two major differences for me between structures and classes in C# | http://blog.markpearl.co.za/2-main-differences-between-Structures-and-Classes-in-C | CC-MAIN-2019-04 | refinedweb | 324 | 60.55 |
This is a discussion on Re: Connecting VPNned namespaces - DNS ; There are several ways to solve the problem, each with slightly different mechanism but the same effect for users. Use stub zones, slave zones, or forward zones. For example, using stub zones on the Campbell's server: options { // no ...
There are several ways to solve the problem, each with slightly
different mechanism but the same effect for users. Use stub zones,
slave zones, or forward zones.
For example, using stub zones on the Campbell's server:
options {
// no forwarders statement
}
zone "tate.local" {
type stub;
masters { 192.168.77.1; };
};
(If you do have a forwarders statement in options, add an empty
forwarders statement into the stub zone.)
The result of this is, if there is a recursive query ending in
"tate.local" sent to the Campbell server, that server will send an
iterative query to the tate.local server.
If you change the zone type from "stub" to "forward" and change
"masters" to "forwarders", the difference is that the query from one
server to the other is recursive. In this case, that's probably a
meaningless difference.
If instead you use a slave zone (replace "stub" with "slave" in the
example above, and leave the "masters" line unchanged), then each
server will get a copy of the other server's zone and answer
authoritatively for that zone. This can introduce change latency (up
to several hours, depending on the refresh timer length) into the
process unless you also add an NS record for the other server to each
zone. On the other hand, responses to queries will be slightly
faster, since each server will have both zones hosted locally.
Chris Buxton
Professional Services
Men & Mice
Address: Noatun 17, IS-105, Reykjavik, Iceland
Phone: +354 412 1500.
On Oct 11, 2007, at 7:25 AM, Bertram Scharpf wrote:
> Hi,
>
>
> I'm not an experienced network maintainer but I successfully
> set up two local networks with two name servers. Now I
> connected them over a VPN. Say there are:
>
> 192.168.77.1 jessica.tate.local
> 192.168.77.2 chester.tate.local
> 192.168.77.3 billy.tate.local
>
> 192.168.88.1 mary.campbell.local
> 192.168.88.2 burt.campbell.local
> 192.168.88.3 chuck.campbell.local
>
> The Tate's "resolv.conf"s point to 192.168.77.1 and the Campbell's
> ones point to 192.168.88.1 .
>
> Now I want a request for e. g. billy.tate.local on the
> Campbell side to be redirected to 192.168.77.1 and vice
> versa. Could anyone give me a hint how this is designed
> best?
>
> Thanks in advance,
>
> Bertram
>
>
> --
> Bertram Scharpf
> Stuttgart, Deutschland/Germany
>
>
> | http://fixunix.com/dns/236937-re-connecting-vpnned-namespaces.html | CC-MAIN-2014-35 | refinedweb | 449 | 65.32 |
Hello Gary,OK, I have now pushed another fix, which I am pretty confident will fix your build problem on Solaris 10 without breaking any other architecture. Please pull the latest version of Bacula from the git repo. It is version 7.9.6 19 June 2017. As previously mentioned, you will need to do a proper ./configure prior to the make.
Advertising
I would appreciate it if you would let me know if it works "out of the box" now.I would appreciate it if you would let me know if it works "out of the box" now.
Thanks for reporting this and for giving me rapid feedback. Best regards, Kern On 06/19/2017 05:09 AM, Gary R. Schmidt wrote:
Hi Kern, On 2017-06-19 02:55, Kern Sibbald wrote:Hello Gary, Could you send me a diff -u of your patch or simply the patched lz4.c file. I think I did the inverse of what is needed -- that is I removed the pack() on Solaris 10. If I did get it backwards, the code will obviously not work.diff -u: =================================================== bacula-7.9.4/src/lib $ diff -u lz4.c.bad lz4.c --- lz4.c.bad Fri Jun 16 17:27:26 2017 +++ lz4.c Fri Jun 16 17:27:43 2017 @@ -171,7 +171,7 @@ #endif #if !defined(LZ4_FORCE_UNALIGNED_ACCESS) && !defined(__GNUC__) -# pragma pack(push, 1) +# pragma pack(1) #endif typedef struct _U16_S { U16 v; } _PACKED U16_S; @@ -179,7 +179,7 @@ typedef struct _U64_S { U64 v; } _PACKED U64_S; #if !defined(LZ4_FORCE_UNALIGNED_ACCESS) && !defined(__GNUC__) -# pragma pack(pop) +# pragma pack() #endif #define A64(x) (((U64_S *)(x))->v) =================================================== And lz4.c file attached as well. Cheers, Gary B-)
------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! _______________________________________________ Bacula-devel mailing list Bacula-devel@lists.sourceforge.net | https://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg08729.html | CC-MAIN-2017-47 | refinedweb | 309 | 76.42 |
Please Say “Why”
As .NET demands new books and articles and the economy has given a lot of smart folk free time, the world is becoming inundated in .NET books, articles, talks and courses, many of which I am tapped to review. Some are wonderful. Some are awful. Most, however are *almost* good, the path to goodness well within the author’s reach but for the answer to one question: “why?”
Most of my feedback is riddled with questions that start with why: “Why was it built this way?” “Why are there three choices and how do I choose?” “Why should I care?” Please, when you write, remember this question and answer it thoroughly and well. The why is *so* much more important than the how. The online documentation for .NET is fabulous for describing the how, once you understand the motivation for this class, that method or the other namespace.
Prose that provides the how is transient, but prose that provides the why becomes classic because the why itself is surprisingly applicable between technologies. At the very least, if you answer the why, it will save me work if I’m to review your prose. | https://sellsbrothers.com/12590 | CC-MAIN-2021-21 | refinedweb | 196 | 75.3 |
Suppose you want to write to a Google Spreadsheet from a Python script. Here’s an example spreadsheet that you might want to update from a script:
I did some searching and found this page, which quickly led me to the Python Developer’s Guide for the Google Spreadsheet API.
There’s a simple “Getting started with Gdata and Python” page. The upshot is 1) make sure you have a recent version of Python (e.g. 2.5 or higher), then 2) install the Google Data Library. The commands I used were pretty much
mkdir ~/gdata
(download the latest Google data Python library into the ~/gdata directory)
unzip gdata.py-1.2.4.zip (or whatever version you downloaded)
sudo ./setup.py install
That’s it. You can test that everything installed fine by running “./tests/run_data_tests.py” to verify that the tests all pass. The program “./samples/docs/docs_example.py” lets you list all of your Google Spreadsheets, for example. An extremely useful program that lets you insert rows right into a spreadsheet is “./samples/spreadsheets/spreadsheetExample.py” and someone has also got a really nice example of uploading a machine’s dynamic IP address to a spreadsheet.
The most painful thing is that InsertRow() must be called with a spreadsheet key and a worksheet key. If you find out those values, you could hardcode them into the script and probably cut the size of the script in half. Or you could just look in the url to see the key value. That’s what I did. So here’s an miniature example script to write to a Google Spreadsheet from a Python script:
#!/usr/bin/python
import time
import gdata.spreadsheet.service
password = 'yourpassword'
weight = '180'
# Find this value in the url with 'key=XXX' and copy XXX below
spreadsheet_key = 'pRoiw3us3wh1FyEip46wYtW'
# All spreadsheets have worksheets. I think worksheet #1 by default always
# has a value of 'od6'
worksheet_id = 'od6'
spr_client = gdata.spreadsheet.service.SpreadsheetsService()
spr_client.email = email
spr_client.password = password
spr_client.source = 'Example Spreadsheet Writing Application'
spr_client.ProgrammaticLogin()
# Prepare the dictionary to write
dict = {}
dict['date'] = time.strftime('%m/%d/%Y')
dict['time'] = time.strftime('%H:%M:%S')
dict['weight'] = weight
print dict
entry = spr_client.InsertRow(dict, spreadsheet_key, worksheet_id)
if isinstance(entry, gdata.spreadsheet.SpreadsheetsList):
print "Insert row succeeded."
else:
print "Insert row failed."
That’s it. Run the script to append a new row to the current spreadsheet. By the way, if you make a chart from the spreadsheet data, you can right-click on the chart, select “Publish chart…” from the menu, and get a snippet of HTML to copy/paste that will embed the chart on a web page. It will look like this:
That’s a live image served up by Google, and when the spreadsheet gets new data, the image should update too.
31 Responses to Write to a Google Spreadsheet from a Python script (Leave a comment)
Why would we like to do that?
Sound anything else than a User friendly interface. Let’s say apart of
prof. programmers.
Looks easy – I gave up on doing this to a MS Excel document earlier on, so good to keep this in mind for future reference..
Well reading this has certainly boosted my motivation this morning. Looks like my efficiency barrier has just been pushed up that little bit more.
Thanks for the tip Matt!
I am an SEO no idea about Python script
but i have already refered to my known Python geeks. Hope to get feedback from them soon
Chris, I have a couple follow-up blog posts that will show a neat application (to me at least).
Great! But why Python script? Is it also available for PHP?
Is there a php version too please?
Thanks
Anthony
Great post! Reading your post has certainly peaked my interested in learning more about how to use python in publishing graphics on websites. As Henri said in a post, I’m interested in knowing whether Google spreadsheet supports PHP.
Matt,
It’s good to see a conversation on the news which i’ve read in December on Ubuntu Forums on following Post,.
On that time it made me curious to know that either Google is working for every programing language that works for different operating systems ? Not sure if it will continue or not.
kinda neat
I could see some uses I know some on who publishes his weather station stats online who could use this – so when are google spreadsheets they going to to suport asynconous queus
@Henri, GData for PHP is supported by the Zend Framework: Also, Matt, WordPress 2.7.1 is out now, time to jump ship from 2.6.5 and get comment replies and automattic upgrades 😉
Matt, python script is not so popular. So, only a few will find this article useful. We, all want you to write some articles which have more universal appeal and reach. Hope, as a writer you will consider our request.
RE: “We, all”
Err, I think “all” can speak for themselves, thanks.
Hmm. I like the Python script idea. You know what is amazing, is that thanks to Matt Cutts I actually know what all this stuff is now, and I dont know of any lawyer who knows as much about social media, SEO, etc., as me because of Matt’s videos and blog.
I used to think Google were a bunch of closed society creeps, but their one employee, Matt Cutts, changed that. I am sold.
Thanks Matt. BTW, I redesigned my ehlinelaw dot com site and based it in large part on what I learned from you. I would appreciate a comment or two from you and any of the old timers in here.
oops, I had one last question. Can I use this feature on my Blackberry Bold?
Matt, what’s Google’s stance on the use of Google Docs as the spreadsheet / graphing / pdf producing engine for external sites? Would it be ok to embed gdocs spreadsheet functionality within another app?
how to find the number for the other worksheets?
I mean, worksheet_id = ‘od6’ is for the first, but the other?
Anyway thanks for the nice post
Easier to use a form (which can still be POSTed to using Python, if you need to)?
Tried to run the script using Cronjob, didn’t work for me
Hi Matt,
Saw your blog post this morning thanks to Peter Norvig’s writeup. Thank you so much for posting this, it’s always nice to learn about an API through some concrete examples. Python is a pleasure to work with. Just wanted to encourage you that many of us appreciate your writeups, so please don’t listen to the negative & ungrateful voices above-
Matt, Thanks for this great video! It’s clear, concise and a real pleasure to watch! And the info you gave us is for me: PRICELESS!
Thanks again! 😀
I seem to have to go to my spreadsheet, and click “publish” on the chart manually every time the spreadsheet changes. Any way to fix this?
The latest (2.01) gdata version didn’t work with the latest Python. It only supports 2.4-2.6.
Thank you for posting this up! I found the example extremely useful, as it is just about exactly what I want to do. In my case, I wanted to received the data in the URL and append it to a spreadsheet, so this meant all I had to do was figure out extracting the variables from the URL. Turns out the gdata module has the required functions for that, so I’m good to go.
Thanks for this I found your example very useful, It’s pretty much what I wanted to do, In my case I wanted to grab some of the query string from URL and put it into a spreadsheet.
Got it all working and have leanrt alot of new features about the google spreadsheets.
This broke in late 2009? Anyone know if this method still works?
pretty damn useful. the fact that i now know about being able to do this is going to help incredibly!
Hello,
I got a error when going to insert data in the spreadsheet
gdata.service.RequestError: {‘status’: 400, ‘body’: ‘We're sorry, a server error occurred. Please wait a bit and try reloading your spreadsheet.’, ‘reason’: ‘Bad Request’}
Would please help me?
Thanks
Hello Matt,
Thank you for the sample code and the links. I do have a question though. I need to access individual cells, and the Developer API is more like selected interfaces with some sample code. Is there anywhere that list *all* the interface functions along with their descriptions, like a man page?
Thanks again,
Mike
Hey Matt and Community,
I just found this at work. I love me some Python and I repeated your example today using Python 2.7.1. I saw, via youtube, at IO a Python script populate and format a PowerPoint document. I’d love to to see your take on mixing up presentations, spreadsheet graphical data and Python. Maybe a follow up is due?
Thanks for the information about the api!
James
Dude, thanks. The Google documentation was rather hard to understand, and this helped me GREATLY. Thanks | https://www.mattcutts.com/blog/write-google-spreadsheet-from-python/ | CC-MAIN-2015-40 | refinedweb | 1,543 | 74.79 |
Buiding a example simple with ASP.NET MVC5. we can see, many developer program ASP.NET MVC 5
Today, I will shared buiding way create project ASP.NET MVC 5, with program Visual Studio 2019
Everyone,can download here:
Installing, note you tab open Individual Components, after then, you choose Class Designer and LINQ to SQL Tools
Setup project ASP.NET MVC 5
You create project
You choose ASP.NET Web Application, let's go, you can need choose Version Framework
After create success project, you can add news HomeControler.cs file in Controllers directory
Ok, Begin you create layout project
- Create Shared folder in directory Views
- Create _LayoutPage1.cshtml file in Views/Shared directory
you create layout razor success!, you open HomeController.cs, click-right function index()-> choose add view
You have new Index.cshtml file in Views/Home/Index.cshtml directory
Continue! you need to Controllers/HomeController.cs open it and configuration function index(), the folllowing below code
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; namespace MVC5_HelloWorld.Controllers { public class HomeController : Controller { // GET: Home public ActionResult Index() { ViewBag.title = "MVC5 - Hello World"; return View(); } } }
We set ViewBage.title to insert a string of data or an array of data that we want it to be displayed outside of View
You can using LinQ, you can see the following below
//HomeController.cs var data = (from s in _db.users select s).ToList(); ViewBag = data; return View() //Views/Home/Index.cshtml @foreach(var result in ViewBag.data){ <span>@result.name</span> <span>@result.created_at</span> }
The above is an example if you want to retrieve data in SQL SERVER, to display outside the View
Continue to you Views / Home / Index.cshtml and open up and edit the following:
@{ ViewBag.Title = "Index"; Layout = "~/Views/Shared/_LayoutPage1.cshtml"; } <h2>@ViewBag.title</h2>
That's it, you can Run the project and test it out!
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/skipperhoa/asp-net-mvc-5-helllo-world-1o55 | CC-MAIN-2020-45 | refinedweb | 322 | 53.47 |
Arduino Ethernet Shield
Arduino Ethernet shield is an external board which can be mounted on arduino female headers and this Ethernet shield can now bring arduino over network. Arduino with Ethernet shield can talk/communicate with the devices/peripherals present not only on a home or office network but can go further over internet.
Arduino communicates with the Ethernet shield on SPI protocol. Arduino SPI pins 10, 11, 12 and 13 make connection and talks with the arduino Ethernet shield. Many arduino Ethernet libraries are present on internet which can be utilized to work with arduino and Ethernet shield. Arduino Ethernet Libraries are a great path to start with. Libraries contain arduino Ethernet board initialization and startup functions. We only need to write our main code rest of initialization and controlling part is done by the library(Thanks to arduino open source community).
Arduino Ethernet shield Web controlled Fan
Every device on a network is identified by an IP and mac address. So if we want our arduino to connect with our home network we must first identify what IP our router is assigning to clients. The best and easiest way is to open from your computer and laptop Command Prompt. You can find cmd by pressing windows button on keyboard and then in search type cmd. Open cmd and enter ipconfig /all command. This command will display the IP that router assigned to your system. See the below picture.
Arduino Web control switch using arduino Ethernet shield – Circuit diagram
Arduino Ethernet shield web controlled fan – Project code
#include <SPI.h>
#include <Ethernet.h>
After libraries i defined the arduino pin to which i am going to connect the fan. Pin#4 of arduino is used to control the fan.
int fan = 4;
So now our fan controlling arduino gpio is also defined. Now we need to assign MAC and IP to our arduino board or arduino Ethernet shield. Mac is an address which is assigned to every physical device by the manufacturer and user also has control to change the mac address. To arduino Ethernet shield we also need to assign a mac address. Mac address is 48-bit wide. On every private network each device must have a unique and different IP address. I also assigned a mac address to my arduino Ethernet shield. Since i am in my home network to which only 3 to 4 devices are connected normally so i can put mac address which i hope is not conflicting. You can also use the mac which i use when implementing the same project by your self. After mac its time for the real address which is IP address. In the previous picture we identified the IP series our router is assigning to clients which are connected to it. Arduino also needs to request the router for an IP assignment. But we have a small advantage here we can request the router to assign the arduino IP which we need. Like we now our router is working on 192.168.56.x. We can request router to assign us the IP 192.168.56.100. Almost all the time router accepts this request and assign the IP which is requested if this IP is available. In the arduino code i included the below instructions for mac and IP assignment.
byte mac[] = { 0xDE, 0xAD, 0xBE, 0x3F, 0xFE, 0xED };
byte ip[] = { 192.168.56.100 };
I selected .100 because its too far from 1 which is the first client so 100 clients in a home network is not normal.
We want our arduino Ethernet shield to control the fan connected to its pin and we want to control it using a web browser. In web browser we want to load a page which contains the fan control button. Pressing the button generates an event at arduino and arduino switches the fan On or Off. This web page is served by the arduino Ethernet shield and we have to write its code. But before writing code we need to tell the router that our Ethernet shield will serve web pages or technically we need to tell the router that our Ethernet shield will work as a server and it will serve pages to clients. This is done by starting a server with the simple instruction/code statement below.
EthernetServer server(80);
In the setup() function i initialized the serial monitor at 9600 bps. Serial monitor is initialized only for debugging purposes you can comment or delete it if you want. Arduino fan controlling GPIO is made an output pin. Ethernet shield is passed the IP and mac address also the server is started in setup() function.
Ethernet.begin(mac, ip, gateway, subnet);
server.begin();
To check the IP address assigned by the router the instruction Serial.println(Ethernet.localIP()); is included in the code. Its sure that the above 192.168.56.100 will be assigned but its better to check if any error occurred during assignment.
In the loop function arduino server is checking if any client requested web page. The statement EthernetClient client = server.available(); is checking if any request arrived. If so it serves the web page.
Important: Both the client(your device through which you are accessing arduino fan server) and arduino server must be connected to same network in order for successful shake hand and communication. If any one of them is not present on the same network communication between them is not possible. Web page will not load in your browser.
What happens when you press the toggle link or button in web page? Well the statements in arduino code which defines the link button are
client.println(“<a href=\”/?button1on\”\”>Turn On FAN</a>”);
client.println(“<a href=\”/?button1off\”\”>Turn Off FAN</a><br />”);
Now when you press the link/button web page tries to load the link present in the above HTML statements. In simple words client makes an another request to load the page. At server side when server receives request it decodes it and save the data present in request and instead of offering new page to client it loads the same web page again.
Data present in the link which server decoded is ?button1on and ?button1off.
Next statements which toggles the state of fan are if statements.
if (readString.indexOf(“?button1on”) >0){
digitalWrite(fan, HIGH);
}
if (readString.indexOf(“?button1off”) >0){
digitalWrite(fan, LOW);
} | https://www.engineersgarage.com/arduino/arduino-ethernet-shield-controlled-switch/ | CC-MAIN-2021-04 | refinedweb | 1,066 | 65.32 |
Hi,
> Now imagine you decided to put framework code forcefully into default
> namespace - you prevented everyone from using that namespace even though
> they might not be using the framework.
Not quite correct. They could use the names space and name it something else. I think it could
be smart enough to only add the default names spaces if they were not already defined or their
names used.
> You cannot really avoid doing it because of the tools that work with MXML are built
on top of tools
> that work with XML, and if MXML breaks some basic XML rules, the tools will
> stop working.
Yep it would involve changing the compiler (both MXML and perhaps ASDocs) what other tools
work with MXML other then various IDEs I guess? "old" style MXML would still be a valid option
which any new feature it may take a while for the IDE's to catch up.
> The only reason for using MXML, really, is that there are tools for working
> with XML, so you have prerequisites. But, really, if anyone was about to
> break the compatibility with XML
The root names space could be added by any tool (or by hand if required) so I don;t see the
need to move away from XMML all together. Still an interesting idea about making another templating
system that's not XML based.
Thanks,
Justin | http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201203.mbox/%3CE0568C76-F898-4C7E-A122-170A5DA07758@classsoftware.com%3E | CC-MAIN-2016-36 | refinedweb | 230 | 73.81 |
Arnaud-
I hope we can both agree to move this discussion to
jdom-interest@jdom.org from now on - I am sure lots of people are tired
of hearing us already ;-)
>
> So, if the DOM is "ridiculously complex", JDOM appears to be
> ridiculously simple... Sorry, I couldn't resist. ;-)
This is sort of the crux of your comments - I address other comments
individually later on. But this is for the readers who just want to
read a paragraph and click 'Next' ;-)
I understand that DOM is, in a sense, your "baby," as you are both
co-editor of the DOM spec and co-chair of the XML Working Group, and
that you are very involved at the W3C. Could it be possible that you
are being a bit biased? I suggest this because you seem to imply that
there is an either-or relationship between DOM and JDOM. In fact, we go
to great pains to make sure that you can go from DOM or SAX to JDOM, and
from JDOM to DOM or SAX. This, if anything, indicates we are /very/
committed to standards. We just feel that if you have standard input
and output, why whould all the stuff "in the middle" give you such a
headache when, in many cases, it doesn't have to? We are offering Java
developers an alternative. Certainly as you know, the users will
dictate what is used much more than you or I, right? ;-) I hope you
will give JDOM the chance that people have given DOM, and where it
works, admit that. Certainly I am willing to admit where your API does
things that ours doesn't, and point users to you... thanks... my other
> Hi.
It sounds like there may have been a misunderstanding. As JDOM is so
new, this is something that tends to happen with exciting products,
especially in the Java and XML arena, where things move so fast. JDOM
is not intended to be a 100% accurate representation of XML. Instead,
it is an API specifically for Java developers, and even further, aimed
at Java developers who are perhaps not XML gurus, per say. While
certainly there are things that you, or I, or other XML-ites may look at
and say "Well, that's not technically correct," these same items are
often the reason that so many developers are so concerned about using
XML, and have such a hard time.
We are very honest about the fact that we seek to solve 80% of the
problems of Java and XML, not 100%. Additionally, we are very clear
when we deviate. In fact, in addition to the numerous documentation and
slides where we lay this out, we are adding a FAQ section; this means
there are three clearly marked places. Honestly, if you read the docs
at all, you can't miss it ;-) You might want to hop onto
jdom-interest@jdom.org now, as we are discussing some of these features,
and if the goal of "100% accuracy" is worth the price paid for it (re:
DOM, which we both know, for reasons that are legitimate, is a heavy
API). In some cases, we believe it is not, such as the PI placement
within a document.
However, I want to be clear to you and others that we have not and do
not seek to conceal this; I was a bit put off by your implication that
we were being misleading, as all of our press and docs are very clear
that our goal is usability, and intuitiveness, not to be a replacement
for DOM and SAX in every situation, always.
>
> The way namespaces are handled show a clear misunderstanding of the
> basics of XML namespaces and, unless a serious redesign is undertaken,
> it will only work for simple cases.
This is also a bit strong of a statement, I think. I just finished
authoring the O'Reilly book, "Java and XML", and feel I am pretty strong
in the XML world ;-) There is a difference in simplifying something
(intentionally) to help out the common Java developer, as opposed to not
understanding something at all. Additionally, we actually removed the
support for scoped namespaces, and it only took about 30 minutes ;-) So
it's not that big a change at all. I hope you'll join us on
jdom-interest to find out more about our direction!
-Brett | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200004.mbox/%3C3908BAB0.E59EF24C@gte.net%3E | CC-MAIN-2017-13 | refinedweb | 733 | 65.76 |
Async Process¶
This is considered an advanced topic.
Synchronous versus Asynchronous¶
Most program code operates synchronously. This means that each statement in your code gets processed and finishes before the next can begin. This makes for easy-to-understand code. It is also a requirement in many cases - a subsequent piece of code often depend on something calculated or defined in a previous statement.
Consider this piece of code:
print "before call ..." long_running_function() print "after call ..."
When run, this will print
"before call ...", after which the
long_running_function gets to work for however long time. Only once
that is done, the system prints
"after call ...". Easy and logical
to follow. Most of Evennia work in this way and often it’s important
that commands get executed in the same strict order they were coded.
Evennia, via Twisted, is a single-process multi-user server. In simple
terms this means that it swiftly switches between dealing with player
input so quickly that each player feels like they do things at the same
time. This is a clever illusion however: If one user, say, runs a
command containing that
long_running_function, all other players
are effectively forced to wait until it finishes.
Now, it should be said that on a modern computer system this is rarely an issue. Very few commands run so long that other users notice it. And as mentioned, most of the time you want to enforce all commands to occur in strict sequence.
When delays do become noticeable and you don’t care in which order the
command actually completes, you can run it asynchronously. This makes
use of the
run_async() function in
src/utils/utils.py:
run_async(function, *args, **kwargs)
Where
function will be called asynchronously with
*args and
**kwargs. Example:
from evevnnia import utils print "before call ..." utils.run_async(long_running_function) print "after call ..."
Now, when running this you will find that the program will not wait
around for
long_running_function to finish. In fact you will see
"before call ..." and
"after call ..." printed out right away.
The long-running function will run in the background and you (and other
users) can go on as normal.
Customizing asynchronous operation¶
A complication with using asynchronous calls is what to do with the
result from that call. What if
long_running_function returns a value
that you need? It makes no real sense to put any lines of code after the
call to try to deal with the result from
long_running_function above
- as we saw the
"after call ..." got printed long before
long_running_function was finished, making that line quite pointless
for processing any data from the function. Instead one has to use
callbacks.
utils.run_async takes reserved kwargs that won’t be passed into the
long-running function:
at_return(r)(the callback) is called when the asynchronous function (
long_running_functionabove) finishes successfully. The argument
rwill then be the return value of that function (or
None).
def at_return(r): print r
at_return_kwargs- an optional dictionary that will be fed as keyword arguments to the
at_returncallback.
at_err(e)(the errback) is called if the asynchronous function fails and raises an exception. This exception is passed to the errback wrapped in a Failure object
e. If you do not supply an errback of your own, Evennia will automatically add one that silently writes errors to the evennia log. An example of an errback is found below:
def at_err(e): print "There was an error:", str(e)
at_err_kwargs- an optional dictionary that will be fed as keyword arguments to the
at_errerrback.
An example of making an asynchronous call from inside a Command definition:
from ev import utils from game.gamesrc.commands.basecommand import Command class CmdAsync(Command): key = "asynccommand" def func(self): def long_running_function(): #[... lots of time-consuming code return final_value def at_return(r): self.caller.msg("The final value is %s" % r) def at_err(e): self.caller.msg("There was an error: %s" % e) # do the async call, setting all callbacks utils.run_async(long_running_function, at_return, at_err)
That’s it - from here on we can forget about
long_running_function
and go on with what else need to be done. Whenever it finishes, the
at_return function will be called and the final value will pop up
for us to see. If not we will see an error message.
Process Pool¶
Note: The Process pool is currently not available nor supported, so the following section should be ignored. An old and incompatible version of the procpool can be found in the evennia/procpool repository if you are interested.
The
ProcPool is an Evennia subsystem that launches a pool of
processes based on the ampoule package (included with Evennia). When
active,
run_async will use this pool to offload its commands.
ProcPool is deactivated by default, it can be turned on with
settings.PROCPOOL_ENABLED. It should be noted that the default
SQLite3 database is not suitable for for multiprocess operation. So if
you use ``ProcPool`` you should consider switching to another database
such as MySQL or PostgreSQL.
The Process Pool makes several additional options available to
run_async. The following keyword arguments make sense when
ProcPool is active:
use_thread- this force-reverts back to thread operation (as above). It effectively deactivates all additional features
ProcPooloffers.
proc_timeout- this enforces a timeout for the running process in seconds; after this time the process will be killed.
at_return,
at_err- these work the same as above.
In addition to feeding a single callable to
run_async, the first
argument may also be a source string. This is a piece of python source
code that will be executed in a subprocess via
ProcPool. Any extra
keyword arguments to
run_async that are not one of the reserved ones
will be used to specify what will be available in the execution
environment.
There is one special variable used in the remote execution:
_return.
This is a function, and all data fed to
_return will be returned
from the execution environment and appear as input to your
at_return
callback (if it is defined). You can call
_return multiple times in
your code - the return value will then be a list.
Example:
from src.utils.utils import run_async source = """ from time import sleep sleep(5) # sleep five secs val = testvar + 5 _return(val) _return(val + 5) """ # we assume myobj is a character retrieved earlier # these callbacks will just print results/errors def callback(ret): myobj.msg(ret) def errback(err): myobj.msg(err) testvar = 3 # run async run_async(source, at_return=callback, at_err=errback, testvar=testvar) # this will return '[8, 13]'
You can also test the async mechanism from in-game using the
@py
command:
@py from src.utils.utils import run_async;run_async("_return(1+2)",at_return=self.msg)
Note: The code execution runs without any security checks, so it should
not be available to unprivileged users. Try
contrib.evlang.evlang.limited_exec for running a more restricted
version of Python for untrusted users. This will use
run_async under
the hood.
delay¶
The
delay function is a much simpler sibling to
run_async. It is
in fact just a way to delay the execution of a command until a future
time. This is equivalent to something like
time.sleep() except delay
is asynchronous while
sleep would lock the entire server for the
duration of the sleep.
def callback(obj): obj.msg("Returning!") delay(10, caller, callback=callback)
This will delay the execution of the callback for 10 seconds. This function is explored much more in the Command Duration Tutorial.
Assorted notes¶
Note that the
run_async will try to launch a separate thread behind
the scenes. Some databases, notably our default database SQLite3, does
not allow concurrent read/writes. So if you do a lot of database
access (like saving to an Attribute) in your function, your code might
actually run slower using this functionality if you are not careful.
Extensive real-world testing is your friend here.
Overall, be careful with choosing when to use asynchronous calls. It is mainly useful for large administration operations that have no direct influence on the game world (imports and backup operations come to mind). Since there is no telling exactly when an asynchronous call actually ends, using them for in-game commands is to potentially invite confusion and inconsistencies (and very hard-to-reproduce bugs).
The very first synchronous example above is not really correct in the
case of Twisted, which is inherently an asynchronous server. Notably you
might find that you will not see the first
before call ... text
being printed out right away. Instead all texts could end up being
delayed until after the long-running process finishes. So all commands
will retain their relative order as expected, but they may appear with
delays or in groups.
Further reading¶
Technically,
run_async is just a very thin and simplified wrapper
around a Twisted Deferred object; the wrapper sets up a separate
thread and assigns a default errback also if none is supplied. If you
know what you are doing there is nothing stopping you from bypassing the
utility function, building a more sophisticated callback chain after
your own liking. | http://evennia.readthedocs.io/en/latest/Async-Process.html | CC-MAIN-2018-13 | refinedweb | 1,499 | 64.81 |
There are a number of data grids available for WPF but the latest and the one most likely to become the standard WPF data grid is the one introduced with Visual Studio 2010. For an indepth look at how the DataGrid works see: Using the WPF DataGrid
This also introduces new ways of generating controls bound to data sources. Unfortunately in the case of the Windows Search data connector the automatic code generation doesn't work 100% because of the use of field names that contain dots - e.g. System.Filename. This might well be fixed in the future but for the moment we need to add some hand-generated code to make it work. This might prove useful in other situations.
If you have been following the project using a WPF application then so far everything has worked the same as in the case of a Windows Forms application. It is assumed that you have the data source called SearchResults already set up complete with a table called Table consisting of a single column called System.FileName.
There is one important change to be made to the data source before making use of it in data binding. Change the column's property Null Value from "Throw Exception" to Null. This allows empty rows in the database to exist without causing the application to crash.
To place a data bound control onto the form you have to move to the form designer and have the Data Sources window open next to it. From the drop-down list that appears next to the table you can select the control you want to bind to it. In this case make sure it's the DataGrid.
Select the DataGrid as the control to be bound to the Table.
To place a bound DataGrid on the form in principle all you have to do is drag it onto the design surface. This generates the XAML needed and some code to create the table adaptor and the view. The DataGrid is created with a column complete with suitable header and the XAML contains the static resources needed to create the database and the view.
Unfortunately the auto-generated code isn't particularly helpful. To make it all work we need to create a database and a view instance manually. The code in the button's click event handler is the same as for the Windows version plus some extra code to setup the data table and view bound to the grid.
First we setup the connection string, the connection object and the table adaptor as before:
string connectionString = "Provider=Search.CollatorDSO; Extended Properties= 'Application=Windows'"; OleDbConnection SearchConnect = new OleDbConnection(); SearchConnect.ConnectionString = connectionString; OleDbDataAdapter SearchAdpt = new OleDbDataAdapter( "SELECT Top 5 System.FileName FROM SYSTEMINDEX", SearchConnect);
We can also setup the CollectionViewSource which is used to navigate through the data table and the database object using the static resources generated by the designer:
ColleCollectionViewSource ViewSource = (CollectionViewSource) (this.FindResource( "tableViewSource"))); SearchResults SearchResults = ((SearchResults)(this.FindResource( "SearchResults")));
Finally we can fill the table with data:
SearchAdpt.Fill(SearchResults);
At this point the DataGrid should fill with data but what actually happens is that fills with five blank lines. Whenever this happens it is a sign that the DataGrid is correctly bound to the database but the columns of the DataGrid are not bound to the fields in the table.
The reason in this case seems to be that the designer has generated an ADO data object with the column name _System_FileName but the view object has been generated with a column name System.Filename and the dot is interpreted by the binding engine as a hierarchical property i.e. the Filename property of the System object. There seems to be no easy way to correct this problem.
One fairly easy to implement but advanced solution to the problem is to convert the data table in the database into an IEnumerable collection. We can even convert it into an IEnumerable collection of row objects and the good news is that each item in the collection has a property with the correct name, i.e. _System_FileName.
To convert to an IEnumerable collection we simply use the table's AsEnumberable method:
EnumerableRowCollection <SearchResults.TableRow> TableData = SearchResults.Table.AsEnumerable <SearchResults.TableRow>();
Now TableData is a collection with the correct column names that we can bind to the DataGrid. This could be done by editing the XAML but it's just as easy to modify the values set in the generated XAML using code:
tableDataGrid.DataContext = TableData; tableDataGrid.ItemsSource = TableData; ((DataGridTextColumn)tableDataGrid. Columns[0]).Binding = new Binding("_System_FileName");
You also need to add:
using System.Data;
Following these changes you should be able to see the first five rows of data listed in the DataGrid.
The WPF DataGrid in action
If you add aditional columns to the query you can repeat the drag-and-drop of the table to the designer and the DataGrid code will be regenerated to include the additional columns. You will, however, have to re-bind the new columns to the correct column names following the pattern of the first column.
<ASIN:0735621640 >
<ASIN:0470477229>
<ASIN:0596527357>
<ASIN:1430272058> | http://www.i-programmer.info/projects/38-windows/609-windows-search-wds-4.html?start=4 | CC-MAIN-2014-42 | refinedweb | 859 | 53.1 |
Re: variable question
- From: Joshua Johnson <bithead999@xxxxxxxxx>
- Date: Sat, 1 Dec 2007 02:17:37 -0500
Hello,
When using FTP I have found that if a transfer fails part way into the
transfer then you end up with an incomplete file on the remote system. To
avoid incomplete files being processed. I have used a temp file name to
perform the initial PUT command, then when the PUT is complete I call REN to
give the file it's intended name. If the transfer fails the REN will never
happen and the receiver will know the transfer failed. If the remote file
exists or may exist you can call DEL first then PUT/REN. You must call
EXITONERROR before your DEL command so it won't exit if the file doesn't
exist, then EXITONERROR again after the delete command to set the flag back.
Another solution is to use a second confirmation file that could included
the EOF and/or byte count of the data file. Another problem I had was not
all errors caused FTP to exit when exitonerror was set (can't remember which
ones, too long ago). To determine the exact error I would colon to MPE and
set a variable before the command, then colon to MPE after the command and
set appropriate variables if it failed (snip example below). The variable
FTPLASTREPLY is very helpful along with FTPLASTERR. To make your $STDLIST
cleaner you can redirect the FTP output to a file and only display it if an
error occurred (run;stdin=*!_ft_tmpin;stdlist=*!_ft_list).
echo EXITONERROR > *!_ft_tmpin
echo :setvar _ft_lastcmd 'OPEN' >> *!_ft_tmpin
echo OPEN !_ft_rmnode >> *!_ft_tmpin
echo :setvar _ft_lastcmd 'USER' >> *!_ft_tmpin
echo USER !_ft_rmuname >> *!_ft_tmpin
echo !_ft_rmpass >> *!_ft_tmpin
echo :if lft(ftplastreply,2)='53' >> *!_ft_tmpin
echo : setvar _ft_fail true >> *!_ft_tmpin
echo : setvar _ft_failuser true >> *!_ft_tmpin
echo :endif >> *!_ft_tmpin
Several years ago I wrote an application that was a front end for regular
FTP. It stored the user/password/remote directory, etc information in a
database accessed with a Cobol program. The rest was all CI script. It had
different configs for production/test/development environments so you could
move a job/program through environments without any code changes for remote
hosts, it would know where it was and use the configs for that environment.
It had a method to determine what FTP command failed and then if appropriate
would stage the transfer and keep trying until the server became
available/issue resolved (background job handled retries). It did the rename
thing of course and if the rename failed it would re-connect and delete the
temp file so it didn't clutter up remote servers. It could transfer a file
to multiple hosts with one call to the app. Then only stage hosts that
failed. Logging everything too of course. It would do GET and DIR on remote
hosts also. I have all the code if anyone is interested in FTP on steroids.
It has a Quick screen to put the configs into the database, but you could
use Query or Suprtool.
Joshua Johnson
-----Original Message-----
From: HP-3000 Systems Discussion [mailto:HP3000-L@xxxxxxxxxxxxx] On Behalf
Of John Pitman
Sent: Friday, November 30, 2007 10:03 PM
To: HP3000-L@xxxxxxxxxxxxx
Subject: Re: [HP3000-L] variable question
I have found setting passive to help make the transfer less likely to time
out on you. Uses a different no of ports or something.
jp
________________________________________
From: HP-3000 Systems Discussion [HP3000-L@xxxxxxxxxxxxx] On Behalf Of Olav
Kappert [okappert@xxxxxxxxxx]
Sent: Saturday, 1 December 2007 12:55 PM
To: HP3000-L@xxxxxxxxxxxxx
Subject: Re: [HP3000-L] variable question
Opps, I did not see the FTP. So Dave is right. You need to pipe in
your input if you are going to use FTP. The only way to do that is to
echo everything needed for FTP into a file and use that as redirected
input..
Olav.
Dave Powell, MMfab wrote:
First off, FTP can't expand variables, but :echo can, so you need to builda
file (any old temp file is ok), and redirect FTP's input to it, more orless
like:single-digits
!FILE FTPI = FTPI; REC=-72,,F,ASCII
!ECHO exitonerror >> *FTPI
!ECHO verbose >> *FTPI
!ECHO open whatever >> *FTPI
!ECHO user whoever>> *FTPI
!ECHO put !remotefile >> *FTPI
!ECHO quit >> *FTPI
!
!RUN < FTPI
!IF FTPLASTERR <> 0
! some error routine, etc
This will also give you better error checking if anything goes wrong.
Also, you dont need the 1st 4 setvars that just dummy-out the variables you
are going to set later.
Also, it looks to me like your code would eat the leading zeros
months. Try the "rht" function like Keven saidI
----- Original Message -----
From: "mag" <mgallotti@xxxxxxxxxxxxxxx>
To: <HP3000-L@xxxxxxxxxxxxx>
Sent: Friday, November 30, 2007 13:58
Subject: [HP3000-L] variable question
hello,
My file name,remotefile, is not coming out like I thought. I am trying to
name
the file "cntr" and concatenate the month and year to the file name. What
am getting is this : "!file_name". Can you see what I am doing wrong?
Thanks!
!setvar file_name ' '
!setvar remotefile ' '
!setvar mon ' '
!setvar yr ' '
!setvar mon '!hpmonth'
!setvar yr '!hpyear'
!setvar file_name 'CNTR!mon!yr'
!setvar remotefile "!file_name"
!COMMENT
!FTP
OPEN
USER
!put !remotefile
* To join/leave the list, search archives, change list settings, *
* etc., please visit *
.
- References:
- Re: variable question
- From: Dave Powell, MMfab
- Re: variable question
- From: Olav Kappert
- Re: variable question
- From: John Pitman
- Prev by Date: Re: variable question
- Next by Date: Crash Trivia
- Previous by thread: Re: variable question
- Next by thread: Re: variable question
- Index(es): | http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.hp.mpe/2007-12/msg00005.html | crawl-002 | refinedweb | 935 | 72.46 |
Section 4.5 Translation Units
A C program usually consists of multiple translation units. A translation unit is a file with the file ending
.c. The C compiler translates the functions of a translation unit into object code, which it stores in a binary file with the file ending
.o. In a Unix system, the command
$ cc -o x.o -c x.c
translates the translation unit
x.c into the binary file
x.o. The C compiler can than link multiple such translated translation units to an executable file:
$ cc -o prg x.o y.o z.o
Here,
prg is the name of the to be produced executable program and can be chosen freely. Afterwards, the current directory contains a file
prg that can be started with
$ ./prg
Remark 4.5.1.
Separating the translation into these steps is helpful for larger projects for which the translation can take a long time: If only one translation unit is changed after a previous build, the compiler only needs to re-compile the changed translation unit and perform the final linking step. Additionally, separate translation units can be translated to object code independently, which makes this step easy to parallelize. This can speed up large builds considerably on modern systems.
The C compiler can find errors in the program code and issue warnings in both stages of the translation process. While the warnings do not abort the translation process, they should be considered carefully as they can hint towards subtle programming errors. A good C program should be translated by the compiler without warnings.
Subsection 4.5.1 main
To successfully build an executable, exactly one contributing translation unit needs to contain a function with the name
main. Program execution starts with this function. Unix programs (and, consequently, C programs) can be started with arguments. These are given to the
main function in the form of two parameters,
argc and
argv.
argc contains the number of the provided program arguments, including the program name as mandatory first argument. The actual character strings of the arguments are available in the
argv array. A character string, usually just called string in C is a null-terminated sequence of characters. Strings are referred to by the address of their first character.
argv is therefore an array of pointers to the first character of each argument (see Figure 4.5.2).
argvwhen starting the program as
./factorial 5.
For an example, consider a main function that calls the
factorial function from the previous section with an argument obtained from the command line.
#include <stdio.h> #include <stdlib.h> /* here is the declaration of the factorial function */ int main(int argc, char *argv[]) { if (argc <= 1) { fprintf(stderr, "syntax: %s <value>\n", argv[0]); return 1; } int n = atoi(argv[1]); int r = factorial(n); printf("%u\n", r); return 0; }
mainfunction for the factorial function.
We build this program with the following commands:
$ cc -o factorial.o -c factorial.c $ cc -o factorial factorial.o
Then, the following program execution will fail:
$ ./factorial syntax: ./factorial <value>
The provided message tells us to provide a number whose factorial the program should compute. The following invocation will produce the desired result:
$ ./factorial 5 120
The
main function first checks whether an argument was provided. This is the case if the value of
argc is greater than 1 (since the program name is always the first parameter.) If no argument was given, the program prints an explanatory message to the user and terminates with the value 1. Otherwise, the first argument (a string of characters) is converted to an integer number and its factorial is computed. The program displays the result via
printf and then terminates successfully with the value 0.
Remark 4.5.4.
In Unix, every program can provide an “exit code” upon termination. In a C program, this is the return value of the
main function. By convention, an exit code of 0 signifies a successful execution, whereas other numbers can encode different errors.
Subsection 4.5.2 Calling Functions from Other Translation Units
Let us assume that we want to separate the
main function and the
factorial function into different translation units. It is often good practice to bundle functions that are thematically connected, for example because they operate on similar data, into their own translation unit. We usually separate the
main function from other functions since it contains mostly argument handling and gives the relevant arguments to the other functions. The
factorial function could be reused in a different project where factorials need to be computed; our
main function less so. Therefore, it is reasonable to separate the functions into two translation units, which are compiled separately.
#include "factorial.h" int factorial(int n) { int res = 1; while (n > 1) { res = res * n; n = n - 1; } return res; }
factorial.c
#ifndef FACTORIAL_H #define FACTORIAL_H int factorial(int n); #endif /* FACTORIAL_H */
factorial.h
#include <stdio.h> #include <stdlib.h> #include "factorial.h" int main(int argc, char *argv[]) { if (argc <= 1) { fprintf(stderr, "syntax: %s <value>\n", argv[0]); return 1; } int n = atoi(argv[1]); int r = factorial(n); printf("%u\n", r); return 0; }
main.c
mainfunction in two separate translation units. The header file
factorial.hcontains the prototype of the function
factorial. The preprocessing directives (
#ifdef, etc.) ensure that the file content is only included once per translation unit. While not necessary here, this convention becomes important when header files include other header files, to break infinite recursive include sequences.
When we delete the
factorial function from the translation unit in Listing 4.5.3, the compiler rejects the translation unit since it does not know
factorial's type. For a successful translation, the compiler needs to know the type of every called function. 8 The type of
factorial can be established in the
main.c translation unit by providing the prototype of
factorial. The prototype of a function consists of its name, its return type, and the types of its parameters:
int factorial(int);
This is commonly called a function declaration, in contrast to a function definition where additionally, the function's code is provided in a body. In practice, we do not manually duplicate the prototype of every function into every translation unit in which it is used. On the one hand, this would require writing a lot of redundant code. On the other hand, it would be prone to errors since all translation units would need to be adjusted if we change the function, e.g., by adding or changing a parameter. For this reason, we create header files that are included by the C preprocessor.
The C preprocessor is a separate program that is invoked by the C compiler before it performs the actual translation. It transforms a text into a new text by expanding preprocessing directives. All preprocessing directives start with a hash sign (
#). The directive
#include "x.h" for example interrupts the preprocessing of the current file, preprocesses the file
x.h, and then resumes preprocessing the original file. As a result, in our example in Figure 4.5.5, the preprocessing inserts the content of
factorial.h into both translation units,
factorial.c and
main.c.
Subsection 4.5.3 Makefiles
In practice, projects can easily consist of hundreds to thousands of translation units. To avoid building them all by hand, there is the Unix tool
make. It operates based on a file with the name
Makefile, in which we specify how to build the project. This description contains the dependencies between the involved files and a description how to produce files from their prerequisites.
factorial: factorial.o main.o cc -o $@ $^ main.o: main.c factorial.h factorial.o: factorial.c factorial.h %.o: %.c cc -o $@ -c $< clean: rm -f factorial *.o
Makefilefor the factorial program.
The first two lines of the
Makefile specify that we need the files
factorial.o and
main.o to build the file
factorial, and that the latter file is built from the former two files with the command
cc -o factorial factorial.o main.o. The last two lines determine that any file ending with
.o is built from a similarly named file with ending with
.c. The command
cc -o x.o -c x.c performs this translation.
The placeholders in the build rules have the following meaning:
$@: the “target” of the rule, i.e., the text on the left of the colon
$<: the first “prerequisite” of the target, i.e., the first word on right of the colon
$^: all prerequisites of the target, i.e. all words on right of the colon
Usage of
make is not restricted to C. It is a general tool to describe build processes and dependencies. It is however most commonly used for C projects. | https://prog2.de/book/sec-c-tu.html | CC-MAIN-2022-33 | refinedweb | 1,467 | 57.87 |
Doing Your Homework, With Style
Note
The original poster claims this is not a school homework. Accepting that, it still doesn't mean how to answer to homework requests is not worthy of discussion.
As usual in all programming lists, every once in a while someone will post a question in the Python Argentina list which is obviously his homework. To handle that there are two schools of thought.
- Telling the student how to do it is helping them cheat.
- Telling the student how to do it is teaching him.
I tend more towards 1) but I think I have discovered a middle road:
1.5) Tell the student a solution that's more complicated than the problem.
That way, if he figures out the solution, he has done the work, and if he doesn't figure it out, it's going to be so obviously beyond his skill the teacher will never accept it as an answer.
As an example, here's the problem for which help was requested:
Given an unsorted list of two-letter elements (une lowercase, one uppercase), for example:['eD', 'fC', 'hC', 'iC', 'jD', 'bD', 'fH', 'mS', 'aS', 'mD']
Sort it by these criteria:
-
Create subsets according to the uppercase letter, and sort them by the number of members in ascending order, like this:['fH', 'mS', 'aS', 'fC', 'hC', 'iC', 'jD', 'bD', 'eD', 'mD']
-
Then sort each subset in ascending order of the lowercase letter, like this:['fH', 'aS', 'mS', 'fC', 'hC', 'iC', 'bD', 'eD', 'jD', 'mD']
Ignoring that the problem is not correctly written (there are at least two ways to read it, probably more), I proposed this solution, which requires python 3:
from collections import defaultdict d1 = defaultdict(list) [d1[i[1]].append(i) for i in ['eD', 'fC', 'hC', 'iC', 'jD', 'bD', 'fH', 'mS', 'aS', 'mD']] {i: d1[i].sort() for i in d1} d2 = {len(d1[i]): d1[i] for i in d1} print([item for sublist in [d2[i] for i in sorted(d2.keys())] for item in sublist])
This produces the desired result: ['fH', 'aS', 'mS', 'fC', 'hC', 'iC', 'bD', 'eD', 'jD', 'mD'] but it's done in such a way that to understand it, the student will need to understand roughly three or four things he has probably not been taught yet.
Syndicated 2013-03-10 20:23:32 from Lateral Opinion | http://www.advogato.org/person/ralsina/diary/698.html | CC-MAIN-2015-35 | refinedweb | 394 | 61.9 |
- Advertisement
Bogdan TatarovMember
Content Count5
Joined
Last visited
Community Reputation108 Neutral
About Bogdan Tatarov
- RankNewbie
Mahjong based tile perspective
Bogdan Tatarov replied to Bogdan Tatarov's topic in General and Gameplay ProgrammingI was finally able to solve it with ordering without any hacks. Here's how (if anyone needs it): 1. Get the tiles for each layer (z axis) 2. Get the ones that are free (the ones that do not intersect half way through with other tiles) 3. Sort the free ones right to left, top to bottom and insert them in an array with ordered ones 4. Remove the free ones from the array 5. Repeat step 2 until there are no tiles left. 6. Reverse the array with the ordered tiles. Works like a charm.
Mahjong based tile perspective
Bogdan Tatarov replied to Bogdan Tatarov's topic in General and Gameplay ProgrammingThere are two problems with that solution: 1. There is an animation at the beginning that "draws" the level where tiles fly from all the corners of the screen. 2. I need to implement the same solution for the level editor. It may be a bit slow to re-render the entire scene every time new tile is moved/added but it gets the job done.
Mahjong based tile perspective
Bogdan Tatarov replied to Bogdan Tatarov's topic in General and Gameplay ProgrammingI may be missing something really simple but given example 1 it will render A before B as its Y is lower. If I render top to bottom -> right to left everything looks okay except for situations like example 2.
Mahjong based tile perspective
Bogdan Tatarov replied to Bogdan Tatarov's topic in General and Gameplay ProgrammingYes, that was what I thought at first (rendering right to left, top to bottom) but in this particular case the left tile is rendered before the right because its Y is smaller (assuming I'm working with top-left orientation). Just to say that if the tile is at X = 5, Y = 6, then it's rendered at X = 5, Y = 6 and X = 5, Y = 7.
Mahjong based tile perspective
Bogdan Tatarov posted a topic in General and Gameplay ProgrammingI'm building a simple Mahjong based game in Lua. I'm adding a simple perspective to my tiles on the bottom-left edges. I can successfully generate the level in proper tile order if tiles don't touch half-way using the following code: table.sort(self.tiles, function(tile1, tile2) if tile1.level_layer == tile2.level_layer then if tile1.level_y == tile2.level_y then return (tile1.level_x - tile2.level_x) > 0 else return (tile1.level_y - tile2.level_y) < 0 end else return (tile1.level_layer - tile2.level_layer) < 0 end end ) It works like a charm. However when I start implementing the half-way touching, everything fails. For example I cannot implement a solution for the following problem: In example 1 is the way it should be rendered, but it gets rendered as example 2. Is there an easy way to implement such perspective?
- Advertisement | https://www.gamedev.net/profile/209164-btatarov/ | CC-MAIN-2019-22 | refinedweb | 504 | 63.39 |
Open.
Open.
First we're going to need python version 3.6, if you're not on this version you can download it at:
We're also going to need a few libraries, first being the OpenCV library, to install this enter the following:
pip install opencv-python
You can additionally install the contributor kit if you wish (Not required)
pip install opencv-contrib-python
In OpenCV projects you may find that you'll be using Number systems a lot, I recommend using the library Numpy. In this example it will not be required but you can install numpy by entering the following into your terminal
pip install Numpy
Now that we have our libraries lets get to the fun stuff. In this example we will be taking a picture of multiple people (or yourself) and applying Split HSV, Saturation and hue filters, as well as showing a bitwise filter. The outcome should look something like this
The code
import cv2
img = cv2.imread("mult.jpg", 1) # image reading
converting it into Hue, saturation, value (HSV)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
the : in an array in python means that we're going to slice that part of the array
h = hsv[:, :, 0] s = hsv[:, :, 1] v = hsv[:, :, 2]
hsv_split = np.concatenate((h, s, v), axis=1) cv2.imshow("Split hsv", hsv_split)
some of the values require multiple variables, hence why ret is shown multiple times
ret, min_sat = cv2.threshold(s, 40, 255, cv2.THRESH_BINARY)
showing an image is very simple, first argument is the name, second is the image we wish to show
cv2.imshow("Sat filter", min_sat)
ret, max_hue = cv2.threshold(h, 15, 255, cv2.THRESH_BINARY_INV) # will do the inverse of the normal threshold
cv2.imshow("Hue filter", max_hue)
the final image is the min saturation and the max hue put together
final = cv2.bitwise_and(min_sat, max_hue) cv2.imshow("Final", final)
cv2.imshow("Original image", img)
the windows will display until a key is pressed, this is using key characters, in this case we're using escape, which is 27 but 0 also works
cv2.waitKey(0)
destroy all windows will prevent you from having to mass spam the kill keys
cv2.destoryAllWindows()
And we're done. To test this simply run
python test.py
In some operating systems you may need to run
python3 test.py
Very simple introduction to OpenCV, the library has much potential.
Some useful links:
Numpy/Spicy documentation
Link to image used in example
Learn Free how to create a virtual pen and eraser with python and OpenCV with source code and complete guide. This entire application is built fundamentally on contour detection. It can be thought of as something like closed color curves on compromises that have the same color or intensity, it's like a blob. In this project we use color masking to get the binary mask of our target color pen, then we use the counter detection to find the location of this pen and the contour to find it.)., career. | https://morioh.com/p/dTSArXM1qKGC | CC-MAIN-2020-40 | refinedweb | 502 | 62.78 |
Project Server programmability
Learn about the major programmability features in Project Server 2013. This article includes information about porting applications that were built for previous versions of Project Server.
Last modified: July 01, 2013
Applies to: Project Professional 2013 | Project Server 2013
Project Server 2013 is designed to support most applications that were developed for Project Server 2010 and new solutions for multiple platforms, where apps can access both online and on-premises Project Server installations. Applications and extensions that were developed for Project Server 2003 or earlier must be redesigned to use the client-side object model (CSOM) or the Project Server Interface (PSI). Applications that were developed for Office Project Server 2007 or Project Server 2010 may require some changes and recompiling to use the PSI; to use the CSOM, those applications require a redesign.
The Project Server platform enables high levels of programmer productivity by building on SharePoint Server 2013, .NET Framework 4, and the OData protocol with the CSOM. Developers can extend Project Web App with apps, app parts, and Web Parts, define workflows by using SharePoint Designer 2013, and enforce business rules by using remote event receivers for Project Server events.
Project Web App is built upon SharePoint Server 2013, and uses master pages and Web Parts to make it easier to build custom apps and Project Web App solutions. Project Server 2013 integrates deeply with SharePoint Server 2013 as the platform for project collaboration, reporting, site administration, security, and workflow management.
The project sites include more information and collaboration options for team members, where you can add default apps that include a project summary, specialized SharePoint lists for tasks with a timeline, tracking issues, risks, project deliverables, and the team calendar, along with the document library and team discussions. Custom apps for Project Server 2013 provide extensions and flexibility for team collaboration. You can also add app parts to customize an app, by using the same mechanism to add and edit Web Parts when you edit a page. You can locate project sites anywhere within the SharePoint farm where Project Server is installed. To use other core services of SharePoint Server 2013, such as Excel Services and Enterprise Search, an administrator can enable and configure the services.
When you install Project Server 2013, you provision the Project Service Application in the SharePoint Web Services site. The Project Service Application includes the local Windows Communication Foundation (WCF) services and ASMX web services for the PSI. Other examples of service applications include SharePoint Search and SharePoint Document Management. For more information, see the SharePoint Server 2013 developer documentation.
The Project Service Application is a logical service provider that can manage multiple instances of Project Web App. Project Server provisioning creates a specific Project Web App site within a SharePoint web application. The Home page of Project Web App contains links to the Project Center page, Resource Center page, and the Business Intelligence Center page for reporting, plus a page that contains a list of additional standard apps. Figure 1 shows the Edit Page command in the Setttings drop-down list on the Home page of Project Web App, which allows you to add or edit Web Parts.
To access the Site Settings page in Project Web App, choose the Settings icon in the top-right corner of the page. The Site Settings page () enables changing the look and feel and the site theme, adding custom Web Parts, and modifying or creating master pages for project sites.
Customization of the code in ASPX pages, or customization of Project Web App master pages with SharePoint Designer 2013, is not supported. Customization of the code in Project Web App pages can cause problems with Project Server updates and service packs.
Customization of Project Web App with SharePoint packages
Because Project Web App is a SharePoint application, and project sites are SharePoint sites, you can add custom apps, Web Parts, event handlers, custom fields, and other features by using SharePoint packages (.wsp files) or SharePoint apps (.spapp files). A SharePoint package or an app package can include multiple Project Server entities, where entity definitions are specified in an elements.xml file within the package.
For Project Online, you can add buttons to the Project Web App ribbon, but you can't remove or rename existing product buttons, and you can't create new ribbon tabs. For more information, see How to: Create custom actions to deploy with apps for SharePoint.
The PSEntityProvision.xsd schema file is available in the Project 2013 SDK download, in the Documentation\Schemas\AppProvisioning subdirectory. Figure 2 shows the XML Schema Explorer view in Visual Studio of the PSEntityProvision schema, where the LookupTable sequence is expanded.
SharePoint packages that install features for Project Server can contain one or more elements.xml files that follow the PSEntityProvision schema. The Project Server entities in a single XML file must appear in the following order:
Workflow phases
Lookup tables
Custom fields
Workflow stages
Enterprise project types
Event handlers
When you create a SharePoint package that contains Project Server entities, it is possible to put the entity definitions in multiple elements.xml files. Each XML file could pass the schema validation, but the entities in the whole package might not be in the correct order. For example, a custom field entity in the first XML file could refer to a lookup table in the second XML file. During installation, the custom field cannot be created because the lookup table has not yet been created.
If a package installation fails, objects that have been created remain in Project Web App, but the package does not install completely. Reinstalling the package can work, but that is not a good experience for customers. When the entity definitions span multiple elements.xml files, organize the Project Server entities in the entire SharePoint package to ensure that installation follows the correct order. With the PSEntityProvision.xsd schema in the Project 2013 SDK download, it is possible to develop a tool that checks for the prescribed order of entities in the XML files.
When you upgrade an application that was developed for a previous version of Project Server, you can choose to use either the CSOM or the PSI for a programmatic interface that includes methods to create, read, update, and delete project entities (the CRUD operations). Although the CSOM internally calls the PSI, it does not fully replace all PSI methods. For scenarios and limitations of the PSI and of the CSOM, see What the PSI does and does not do and What the CSOM does and does not do.
If your application primarily reads data from Project Server, you can use the reporting tables and views in the Project Server database for an on-premises scenario. If you intend to use the application with Project Online, you can use the OData protocol for the ProjectData service, which provides both on-premises and online access to the reporting data. For more information, see ProjectData - Project 2013 OData service reference
Using the PSI
The PSI enables full-trust client applications, including Project Professional 2013, Project Web App, and LOB applications, to access Project Server data within a SharePoint farm. The PSI is built and used with .NET Framework 4 and provides advantages such as a well-known development environment with built-in security, error handling, and garbage collection.
The PSI is accessed through WCF services or ASMX web services. The ASMX interface is based on WCF. Each PSI service typically contains a base class with CRUD methods for items within that class. Items are specified by related DataSet classes. For example, the CustomFields service contains the CustomFields class with methods such as CreateCustomFields2. Data for one or more enterprise custom fields are specified in the CustomFieldDataSet.
There are 22 public, documented PSI services, which are duplicated in the WCF interface and the ASMX interface. The PSI also includes eight private, undocumented services. Project Web App and Project Professional use the public PSI services and the private PSI services. The PSI is generally factored to match the business objects. That is, each PSI method is associated with a business object such as Calendar or Resource. The PSI is the primary interface to the business objects. Because the business layer provides reusable business logic components, different applications that interact with Project Server data use the same business logic.
PSI methods that asynchronously interact with Project Server have names that begin with Queue. Each PSI method is implemented with a separate interface that uses. For an introduction to the detailed reference for PSI namespaces, classes, methods, properties, events, and related assemblies, see Project 2013 PSI reference overview.
Project Server 2013 uses the exception handling of the .NET Framework. All errors are logged in the server, at the top of the PSI stack. Some errors send a simple report to the client, such as a SoapException object for the ASMX interface or a FaultException object for the WCF interface. Exceptions can be recorded in the application event log, and some errors also record a detailed report on the server in the Unified Logging Service (ULS) trace logs.
For local full-trust applications, the PSI is also extensible. You can add a .NET assembly with a service that provides new functionality, uses the same Project Server security infrastructure, and calls other PSI methods or inherits from PSI classes. A PSI extension can also provide the business logic and database access required for new functionality.
Using the CSOM
With the CSOM, you can develop apps that access Project Online or an on-premises Project Server 2013 installation. Apps can be distributed in a public Office Store or a private app catalog. The CSOM is designed to be an easy-to-use API that directly consumes or provides data by name with LINQ queries, rather than by passing datasets and constructing changeXml parameters or XML filter parameters. The CSOM implements the main functionality of the Project Server Interface (PSI) for the primary entities such as Project, Task, EnterpriseResource, and Assignment. The CSOM includes many additional entities such as CustomField, LookupTable, WorkflowActivities, EventHandler, and QueueJob, which support other common Project Server functionality.
The CSOM can be used by copying the following resources to your local development computer:
For .NET Framework 4 development, copy the %ProgramFiles%\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.ProjectServer.Client.dll assembly.
For documentation of the CSOM classes and members, see the Microsoft.ProjectServer.Client namespace. For an example application, see Getting started with the CSOM and .NET.
For Microsoft Silverlight development, copy the %ProgramFiles%\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\LAYOUTS\ClientBin\Microsoft.ProjectServer.Client.Silverlight.dll assembly.
To develop apps for Windows Phone 8, copy the %ProgramFiles%\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\LAYOUTS\ClientBin\Microsoft.ProjectServer.Client.Phone.dll assembly.
To use JavaScript for developing web apps and apps for other devices, copy the %ProgramFiles%\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\LAYOUTS\PS.js file and the PS.debug.js file. For an example web app, see Getting started with the Project Server 2013 JavaScript object model.
The CSOM internally calls the PSI; therefore, if the PSI cannot do a job, neither can the CSOM. For limitations of the CSOM, see What the CSOM does and does not do and What the PSI does and does not do. For more information about developing with the CSOM, see What's new and what's out for developers in Project 2013 and Client-side object model (CSOM) for Project Server 2013. and the CSOM provide comprehensive interfaces to business objects in Project Server.
Applications developed for the PDS are not compatible with later versions of Project Server. The CSOM and the PSI provide functional parity for the PDS, but do not match PDS methods or parameters.
For more information about PDS compatibility and guidelines for porting PDS extensions to the PSI, see PDS Parity in PSI Web Services.
Porting applications built for Project Server 2007 and Project Server 2010
The PSI in Project Server 2013 is a superset of the PSI object model in Office Project Server 2007 and Project Server 2010. Many applications built for the two previous versions of Project Server continue to work in local full-trust, on-premises installations of Project Server 2013. However, the following kinds of applications require updates or redesign:
Use the CSOM for applications that are adapted for use with Project Online.
Use the CSOM for applications that are adapted for use on mobile devices and tablet computers.
Use the CSOM for applications that are available as apps in the Office Store or a private app catalog.
For applications that modify project scheduling, use the CSOM, or change the application to use the QueueUpdateProject2 PSI method.
Local or web applications that log on users to different instances of Project Web App should use programmatic settings for WCF endpoints of the CSOM or the PSI. The methods are deprecated. Apps should use OAuth authentication in place of Forms authentication and for use with Project Online. For more information, see Authorization and authentication for apps in SharePoint 2013.
Applications that rely on or modify specific Project Server security settings.
For many custom Project Server workflows, you can use SharePoint Designer 2013 to create declarative workflows. For custom workflows that require additional programming, you should not directly use classes or members in the Microsoft.Office.Project.Server.Workflow namespace. Instead, use the Microsoft.ProjectServer.Client.WorkflowActivities class in the CSOM.
In general, applications that use impersonation should be rewritten to use the WCF interface of the PSI. Applications that do simple status updates for other users do not require impersonation. They can use the StatusAssignment.SubmitStatusUpdates method in the CSOM or the Statusing.SubmitStatusForResource method in the PSI.
Middleware components that run on the Project Server computer can be installed only for on-premises use, and must use the WCF interface of the PSI. For example, a middleware component that uses the ASMX interface to exchange data between Project Web App on-premises and an external timesheet application would have to be rewritten to use the WCF interface of the PSI. To work with Project Online, the component would have to be redesigned as an app and use the CSOM.
Migration and compatibility of custom solutions
Classes and members in the public ASMX and WCF interfaces of the PSI are identical. But, the number of columns and size of datatables used or returned by PSI methods can be different between Project Server 2013 and the two previous Project Server versions. There are also differences in the reporting tables and views, compared with the Reporting database in previous versions.
When you migrate a solution to Project Server 2013, or if a solution does not work as expected, you should at a minimum do the following:
Update the solution by opening it in Visual Studio 2012. Some solutions can also use Visual Studio 2010.
Change the target to .NET Framework 4.
Change assembly references to use the Project Server 2013 assemblies, such as Microsoft.Office.Project.Server.Library.dll and Microsoft.Office.Project.Server.Events.Receivers.dll.
Make a list of the ASMX web references or the WCF service references and namespace names, and then delete the Project Server references.
Add the ProjectServerServices.dll proxy assembly that you can build from the WCF proxy source files in the Project 2013 SDK download, or add the proxy source files for the required WCF services. For ASMX services, add the front-end ASMX web service references again, by using the same namespace names; or add the ProjectServerServices.dll proxy assembly that you can build from the WSDL sources in the Project 2013 SDK download.
If you change from using the ASMX interface of the PSI to the WCF interface, you can initialize the client classes either programmatically or by using WCF endpoints in app.config. Use programmatic initialization when you have to quickly switch to different instances of Project Web App, or when you are developing a Web Part that uses the PSI.
There are several new methods and datasets in the PSI services in Project Server 2013 and some DataRow classes contain new properties. For example, the QueueUpdateProject2 method in the PSI uses the Project Server scheduling engine to reschedule an updated project without you having to open the project in Project Professional 2013, and also allows adding or deleting project entities in the same call.
Compile and test the solution.
Project Server 2013 has two scheduling engines. The newer scheduling engine is the same as the scheduling engine in Project Professional 2013. When you make scheduling changes and publish the changes by using the Scheduling Web Part (Project Details page) in Project Web App or a project site, or by using the CSOM, the calculation of dates, costs, duration, remaining work, baselines, and other changes related to scheduling are the same as if you made the changes and published the project by using Project Professional 2013. However, except for the QueueUpdateProject2 method, PSI methods use the older scheduling engine that was migrated from Project Server 2010. The reason is to ensure that legacy applications behave the same in Project Server 2013 as they previously did.
Both the older and the newer scheduling engines have the following limitations:
Single project scheduling only Scheduling affects only the current project, when changes are made through task status updates with the PSI or the CSOM, or with Project Web App. If the current project has links to other projects, subprojects, or master projects, the linked projects are not changed. on the summary task. The difference in behavior can be confusing for a user.
Project Server deletes actuals on a summary task assignment if the subtask duration shortens or the finish date is changed.
Following are issues and limitations of PSI programming with the older Project Server scheduling engine:
Changing the active status of a task The older the older scheduling engine, see the blog articles Introducing inactive tasks in Project 2010 and Project Server 2010: Scheduling on the web, the PSI and Project Professional. For a comparison of scheduling in Project Professional 2010 and Project Web App in Project Server 2010, see Web-based schedule management comparison.
Earned value not calculated The older scheduling engine by using the QueueUpdateProject method, the field values do not change. To avoid the problem, use the QueueUpdateProject2 method.
You can handle the PSI scheduling limitations in the following ways:
If the CSOM has the methods the application requires, use the CSOM instead of the PSI.
Open projects in Project Professional and save them back to Project Server.
In reports, do not include fields that the PSI does not update.
Add a note in reports about data that may be stale.
There are flags in the reporting tables and the cubes that help you detect when some project data is not updated. The reporting data in the MSP_EpmProject table and in MSP_EpmProject_UserView includes the following fields:
ProjectWbsIsStale Indicates whether the work breakdown structure (task outline hierarchy) is stale.. This happens when the child projects are published explicitly, not as part of the master project publishing.
ProjectCalculationsAreStale Project Professional saved a project without calculating the schedule (that is, the calculation mode is set to Manual on the Schedule tab in the Project.
If you have permissions in Microsoft SQL Server to access the Project Server database, you can read the reporting tables and views. If you have the necessary Project Server permissions, you can also read data from the reporting tables by using OData queries. Developers are strongly discouraged from directly accessing the draft, published, or archive tables through SQL Server queries in the Project Server database. Making direct changes in any of the tables in the Project Server database can damage referential integrity and interfere with database access through the Project Server Queuing Service.
Applications that directly access the draft, published, or archive tables and views are also dependent on the database schemas, which can change in service packs or later versions of Project Server 2013. Applications that directly access the databases also lose the built-in Project Server security, common business logic, tracking, audits, error checking, reporting, workflow, and other features. You would likely have to rewrite such an application after Project Server 2013 updates.
For all of these reasons, Project Professional and Project Web App do not make direct calls to the draft, published, or archive tables; neither should any other application that integrates with Project Server.
The schemas for the draft, published, and archive tables are not documented. You can use the reporting tables to help generate reports, and the schema for the reporting tables and views is documented in the Project 2013 SDK download. For the OData schema of the reporting data, see ProjectData - Project 2013 OData service reference.
What's new and what's out for developers in Project 2013
Project Server 2013 architecture
What the PSI does and does not do
What the CSOM does and does not do
Client-side object model (CSOM) for Project Server 2013
Getting started developing Project Server 2013 workflows
Project 2013 programming references
Project 2013 PSI reference overview
How to: Create custom actions to deploy with apps for SharePoint
Introducing Inactive Tasks in Project 2010Link
Project Server 2010: Scheduling on the Web, the PSI and Project Professional
Web-Based Schedule Management Comparison | https://msdn.microsoft.com/en-us/library/ms504195(v=office.15).aspx | CC-MAIN-2015-18 | refinedweb | 3,561 | 51.58 |
SQLObject actually has, in its history, a great deal of similarity to other Python ORMs. Not just the whole wraps-a-database-thing (which it obviously should have in common), but little implementation details. For instance, like PyDO and Django, it used to have a list of columns (instead of using attribute assignment). All of those projects have changed since then... and probably in a similar way you can lingering artifacts of that past implementation detail.
One of the ways is how the class is actually constructed. This history often reflects a past when a class was a mostly-dumb holder of a data definition. Then some outside code (the ORM itself) looks at the class definition for special attributes, and constructs a bunch of stuff. So, for instance, though you do name = StringCol() in SQLObject, StringCol is just a description of the column. It doesn't actually do anything, and if you later fetch MyClass.name you won't get back anything related to StringCol. Because what is actually happening is that those descriptions are collected, then the class is built.
This is something I'm trying to move away from in SQLObject, and I think 0.8 will have some significant progress here. One of the goals of that progress is to make a distinction between SQLObject and ActiveRecord, (and, less direction, from Django's ORM). Because -- admitting that this is a judgemental and subjective term -- I want SQLObject to be the most Pythonic of these options. Where Pythonic doesn't just mean fits-the-language (can't expect a Ruby library to want to fit into Python) but is a more generic term for Everything That Is Good In Programming.
In this case, there's a specific feature of Python I want to maintain: backtracking. Python's namespaces and tendency towards functions and imperative code means that it's fairly easy, given a local bit of code, to figure out what that code is doing in terms of the larger system. You can read code inside out, instead of having to figure everything out up front. Metaprogramming on the whole tends to break that, because you don't even understand the dialect of code you are reading, not to mention where methods are implemented and what side effects they might have. So SQLObject already breaks backtracking; my goal is to mitigate that.
One instance is joins, which have some annoying surprises in SQLObject exactly because of the legacy of treatment of classes as declaration. I have a refactoring of joins (not yet checked in) that will hopefully clear things up and generally simplify things. Over on the TurboGears list people wanted syntax similar to Django's for adding instances, and it would have meant:
class Person(SQLObject): addresses = MultipleJoin('Address') p = Person.get(1) p.addAddress(street='123 W 12th', ...)
I.e., the presence of that join would cause the addAddress method to be created. Seeing addAdress in code, how would you figure out what that did? Well, you'd just have to be familiar with how the code works, because addAddress simply won't exist in any other fashion. But that's what I want to get away from; what will actually go into SQLObject will be:
class Person(SQLObject): addresses = OneToMany('Address') p = Person.get(1) p.addresses.create(street='123 W 12th', ...)
If you want to know what's going on, you look up OneToMany.create(). Well... sadly it won't be that easy, as OneToMany is a descriptor, and it actually returns an object that delegates to SelectResults. There's still a lot to keep track of, and the challenge I'll have is to figure out a way to present a wider set of core concepts than is currently in the documentation (for instance, SelectResults is a really important class, but it's not documented and it's always instantiated for you). I'm thinking SQLObject docs should move towards a casual overview, with deep links into generated documentation that is more complete.
Why not treat the one-to-many join as what it naturally returns? A list. Let the user make natural Python list operations on the join and perform the necessary magic in the background. Rails ActiveRecord has a similar approach, and use Ruby list operators to deal with it to an extent.
It seems very natural and Pythonic to use Python list operators to deal with results that for all appearences are a list (though un-ordered).
I'll agree in general that using familiar list constructs is good.
Ian's example above is actually something that there is no Python list equivalent for: "create" would create a new instance (with the provided parameters, plus a parameter for the join column) and add it to the list. This is not the same as a simple "append".
ManyToMany joins will use the set-like methods .add() and .remove(). The fact that Set uses .add() is the specific reason I named the method I showed as create and not add. It's certainly my intention to make these recognizable to the degree possible.
Also, they are still lazily evaluated. So you can apply methods like .filter(extra_clauses) or .orderBy(column) and these are evaluated in the database. I think -- but still need to give it some thought -- that it will be good for "natural" joins as well. E.g., if there's a ManyToMany join between Book and Author, you might do Author.select(Author.books & Book.title.startswith('L')) to select all authors that have books with titles that start with L -- Author.books would evaluate (in that context) to the SQL join. So that would kind of address a separate request at the same time.
Just tripped over this. Ian, I don't think you want to move way from what Active Record. It sounds like you want to embrace it:class Person < ActiveRecord::Base has_many :addresses end p = Person.find(1) p.addresses.create(:street => "123 W 12th")
It's not too late, Ian. You're still much welcome in the Ruby camp. Ryan Tomayko is already finding himself more than comfortably accustomed :)
Ohhhhhh. This is just what I need right now. Thanks. :) | http://www.ianbicking.org/magic-and-backtracing-code.html | CC-MAIN-2015-48 | refinedweb | 1,035 | 63.9 |
“O. The thing is, using Flash Player as part of a web application was a really good way of supporting the Publish and Subscribe paradigm as it was able to cater for those scenarios that require live updates, such as live stock prices, news updates and changes to betting odds, whereas straight forward HTTP, with its request/reply paradigm, is a good way of supporting static pages. A good number of companies put a lot of effort in to developing applications that used Flash in order to provide their users with realtime data. When Apple announced that iOS would not support Adobe Flash they were left high and dry without an iPhone solution and to get back into the mobile market I imagine that a good number of them went for long polling.
So, what is a longpoll? Well, it isn’t a tall guy from Warsaw, the idea is to mimic the Publish and Subscribe pattern. The scenario goes like this:
- The browser requests some data from the server.
- The server doesn’t have that data available and allows the request to hang.
- Sometime later the response data is available and the server completes the request.
- As soon as the browser receives the data, it displays it and then promptly requests an update.
- The flow now loops back to point 2.
I suspect that the Guys at Spring aren’t too keen on the term ‘long poll’ either as they refer to this technique more formally as asynchronous request processing
In looking at my Long Poll or Asynchronous Request Processing flow above you can probably guess what will happen to your server. Each time you force the server to wait for data to become available, you tie up some of its valuable resources. If your web site is popular and comes under heavy load, then the number of resources consumed waiting for updates rockets and consequentially your server may run out and crash.
In February and March I wrote a series of blogs on the Producer Consumer pattern and this seemed like the ideal situation where long polling would come in useful. If you’ve not read my Producer Consumer pattern blogs take a look here
In that original scenario I said that a “TV company sends a reporter to every game to feed live updates into a system and send them back to the studio. On arriving at the studio the updates will be placed in a queue before being displayed on the screen by a Teletype.”
As times have changed the TV company wants to replace the old Teletype with a web application that displays the match updates in something like realtime.
In this new scenario the president of the TV company hires our friends at Agile Cowboys inc to sort out the update. To make things easier he gives them the source code for the
Match, and
MatchReporter classes, which are reused in the new project. Agile Cowboys’ CEO hires of a couple of new developers for the job: a specialist in JavaScript, JQuery, CSS and HTML to do the front end and a Java guy for the Spring MVC Webapp stuff.
The front end specialist comes up with the following JavaScript polling code:
var allow = true; var startUrl; var pollUrl; function Poll() { this.start = function start(start, poll) { startUrl = start; pollUrl = poll; if (request) { request.abort(); // abort any pending request } // fire off the request to MatchUpdateController var request = $.ajax({ url : startUrl, type : "get", }); // This is jQuery 1.8+ // callback handler that will be called on success request.done(function(reply) { console.log("Game on..." + reply); setInterval(function() { if (allow === true) { allow = false; getUpdate(); } }, 500); }); // callback handler that will be called on failure request.fail(function(jqXHR, textStatus, errorThrown) { // log the error to the console console.log("Start - the following error occured: " + textStatus, errorThrown); }); }; function getUpdate() { console.log("Okay let's go..."); if (request) { request.abort(); // abort any pending request } // fire off the request to MatchUpdateController var request = $.ajax({ url : pollUrl, type : "get", }); // This is jQuery 1.8+ // callback handler that will be called on success request.done(function(message) { console.log("Received a message"); var update = getUpdate(message); $(update).insertAfter('#first_row'); }); function getUpdate(message) { var update = "<div class='span-4 prepend-2'>" + "<p class='update'>Time:</p>" + "</div>" + "<div class='span-3 border'>" + "<p id='time' class='update'>" + message.matchTime + "</p>" + "</div>" + "<div class='span-13 append-2 last' id='update-div'>" + "<p id='message' class='update'>" + message.messageText + "</p>" + "</div>"; return update; }; // callback handler that will be called on failure request.fail(function(jqXHR, textStatus, errorThrown) { // log the error to the console console.log("Polling - the following error occured: " + textStatus, errorThrown); }); // callback handler that will be called regardless if the request failed or succeeded request.always(function() { allow = true; }); }; };
The class, called
Poll, has one method,
start(), which takes two arguments. The first one is used by the browser to subscribe to the match update data feed, whilst the second is the URL that’s used to poll the server for updates. This code is called from the JQuery
ready(…) function.
$(document).ready(function() { var startUrl = "matchupdate/subscribe"; var pollUrl = "matchupdate/simple"; var poll = new Poll(); poll.start(startUrl,pollUrl); });
When the
start() method is called it makes an Ajax request to the server to subscribe to the match updates. When the server replies with a simple “OK” the
request.done(…) handler starts a 1/2 second timer by calling
setInterval(…) with an anonymous function as an argument. This function uses a simple flag ‘
allow‘ that if true allows the
getUpdate() method to get called. The
allow flag is then set to false to ensure that there are no reentrancy problems.
The
getUpdate(…) function makes another Ajax call to the server using the second URL argument described above. This time the
request.done(…) handler grabs the match update and converts it into HTML and inserts it after the ‘
first_row‘ div to display it on the screen.
Getting back to the scenario and the CEO of Agile Cowboys Inc wants to impress his new girlfriend, so he buys her a Porsche 911. Now he can’t pay for it using his own cash as his wife will find out what’s going on, so he pays for it with a chunk of the cash from the TV company deal. This means that he can only afford a graduate trainee to write the server side code. This graduate may be inexperienced, but he does reuse the
Match, and
MatchReporter classes in order to provide match updates. Remember that a
Queue and a
Match are injected into the
MatchReporter. When the
MatchReporter.start() method is called, it loads the match and reads the update messages where it checks their timestamps and adds them to the queue at the appropriate moment. If you want to see the code for the
MatchReporter,
Match etc, take a look at the original blog.
The graduate trainee then creates a simple Spring match update controller
@Controller() public class SimpleMatchUpdateController { private static final Logger logger = LoggerFactory.getLogger(SimpleMatchUpdateController.class); @Autowired private SimpleMatchUpdateService updateService; @RequestMapping(value = "/matchupdate/subscribe" + "", method = RequestMethod.GET) @ResponseBody public String start() { updateService.subscribe(); return "OK"; } /** * Get hold of the latest match report - when it arrives But in the process * hold on to server resources */ @RequestMapping(value = "/matchupdate/simple", method = RequestMethod.GET) @ResponseBody public Message getUpdate() { Message message = updateService.getUpdate(); logger.info("Got the next update in a really bad way: {}", message.getMessageText()); return message; } }
The
SimpleMatchUpdateController contains two very simple methods. The first one,
start(), simply calls the
SimpleMatchUpdateService to subscribe to the match updates, whilst the second,
getUpdate(), asks the
SimpleMatchUpdateService for the next match update. Looking at this you can probably guess that all the work is done by the
SimpleMatchUpdateService.
@Service("SimpleService") public class SimpleMatchUpdateService { @Autowired @Qualifier("theQueue") private LinkedBlockingQueue<Message> queue; @Autowired @Qualifier("BillSkyes") private MatchReporter matchReporter; /** * Subscribe to a match */ public void subscribe() { matchReporter.start(); } /** * * Get hold of the latest match update */ public Message getUpdate() { try { Message message = queue.take(); return message; } catch (InterruptedException e) { throw new UpdateException("Cannot get latest update. " + e.getMessage(), e); } } }
The
SimpleMatchUpdateService also contains two methods. The first,
subscribe(), tells the
MatchReporter to start putting updates into the queue. The second,
getUpdate(), removes the next update from the
Queue and returns it to the browser as JSON for display.
So far so good; however, in this case the queue is implemented by an instance of
LinkedBlockingQueue. This means that if there’s no update available when the browser makes its request then the request thread will block in the
queue.take() method, tying up a valuable server resources. When an update is available
queue.take() returns and sends the
Message to the browser. To the inexperienced graduate trainee all seems well and the code goes live. The following Saturday it’s the start of the Football Premiership (soccer if you’re in the US), one of the busiest weekends of the sporting calendar and a very large number of users want the latest info on the big game. Of course the server runs out of resources, is unable handle the load and constantly crashes. The president of the TV company is not too happy about this and summons the CEO of Agile Cowboys to his office. He makes it crystal clear that blood will flow if this problem is not fixed. The CEO of Agile Cowboys realises his mistake and, after an argument with his girlfriend, takes back the Porsche. He then emails a Java/Spring consultant and offers him the Porsche if he’ll come and fix the code. The Spring consultant can’t turn down such an offer and accepts. This is mainly because he knows that the Servlet 3 specification addresses this issue by allowing a
ServletRequest to be put into asynchronous mode. This frees the server resources, but keeps the
ServletResponse open, allowing some other third party thread to complete the processing. He also knows that the Guys at Spring have come up with a new technique in Spring 3.2 called the “Deferred Result” that’s designed for these situations. In the meantime the Agile Cowboys CEO’s ex-girlfriend, still upset about losing her Porsche, emails his wife telling her all about her husband’s affair…
As this blog is turning into an episode of Dallas I think its time to end. So, will the code get fixed in time? Will the Java/Spring Consultant spend too much time driving his new Porsche? Will the CEO forgive his girlfriend? Will his wife divorce him? For the answers to these questions and more information on Spring’s
DeferredResult technique tune in next time…
You may have noticed that there’s another HUGE hole in the sample code, namely that fact that there can only be one subscriber. As this is only sample code and I’m talking about long polling and not implementing Publish and Subscribe, the problem is rather ‘off topic’. I may (or may not) fix it later.
The code that accompanies this blog is available on Github at: | https://www.javacodegeeks.com/2013/08/long-polling-tomcat-with-spring.html | CC-MAIN-2018-26 | refinedweb | 1,846 | 62.38 |
Thanks for the explanation! But at this point I feel I'm a bit confused about how it all _supposed_ to work in the current design :)
- Queries
- All Stories
- Search
- Advanced Search
- Transactions
- Transaction Logs
Advanced Search
Sat, Jul 10
Add more test cases
Fri, Jul 9
Add a test for a value created via SBTarget::CreateValueFromData.
Add a test for a value created via SBTarget::CreateValueFromData.
Thanks for the prompt review!
Thu, Jul 8
Jim, can you take a look, please?
Wed, Jul 7
Tue, Jul 6
Use local variables in test
Assert for SetValueFromCString
Simplify the test case
Given that the access modifiers are not used by LLDB and none of the tests fail because of this cleanup, looks good to me :)
Jun 17 2021
Is this specific to GCC? To be on the safe side, would it make sense to check like type_name == "__unknown__" && compiler == GCC?
Jun 2 2021
Thanks for fixing this!
Is this code used for auto-generated docs? Could be have this documentation in C++ definitions (lldb/API/SBType.h) as well? I usually just read the C++ source code, but I can imagine having the same docs in two places might be not the best idea...
May 19 2021
Accidentally included only two out of three lines in the commit :)
remove the @skip
May 18 2021
Thanks for a quick review!
Keep @skipIfWindows for now
Add more checks in tests
May 17 2021
I see some unit tests have failed in the pre-merge checks. Are those flaky? I don't think fixing spelling in the comment should affect the tests :)
Apr 22 2021
In D98370#2709828, @jingham wrote:
Be careful here. There are really two kinds of persistent variables: "expression results" and variables created in the expression parser. The former are by design constant. The idea is that you can use them to checkpoint the state of a value and refer to it later. You can use their values in expressions, but they aren't supposed to be modifiable. Those are the ones called $NUM.
The ones you make in an expression, like:
(lldb) expr int $myVar = 20
on the other hand are modifiable.
In D98370#2705741, @jingham wrote:
Sure. But but when I was poking around at it a little bit, it seems like the other use cases already work, and the only one that was failing was the case where you call persist on a persistent variable. If that is really true, then maybe we should fix the failing case directly.
Apr 21 2021
In D98370#2686515, @jingham wrote:
If persisting already persistent variables is indeed the only failure case, then I wonder if it wouldn't be more straightforward to just see if the ValueObject is already a persistent variable and have Persist just return the incoming variable.
Apr 13 2021
@jingham @teemperor ping :)
Mar 13 2021
Address review comments:
- Don't create expensive ConstString objects
- Merge ParseStaticConstMemberDIE into ParseVariableDIE
- Add more test cases
Mar 10 2021
Hi @teemperor , here's an attempt to fix SBValue::Persist method. I've highlighted a few moments in the patch I'm not so sure about, please let me know what you think. Thanks!
Mar 8 2021
I was looking into the issue with SBValue::Persist () and the logs and errors eventually led me here to IRInterpreter. And the incorrect error message I noticed just by accident :)
Mar 3 2021
Mar 2 2021
Use clang_host instead of clang
The original commit was reverted, abandon in favor of
Mar 1 2021
The test was also executed on lldb-aarch64-ubuntu --
But we have REQUIRES: x86, shouldn't it exclude ARM?
Feb 22 2021
Added the test cases provided by jankratochvil@
Feb 17 2021
To be honest, I'm not sure how to reproduce this kind of debug info. I've tried a few examples (e.g. force inline the function from another CU) , but it didn't work.
Feb 16 2021
Hi @jankratochvil, can you take a look at this, please?
Feb 9 2021
Change DW_AT_decl_file handling as per @jankratochvil comment.
Feb 6 2021
Thanks for reviewing!
In D92223#2546736, @jankratochvil wrote:
Feb 2 2021
In D92164#2534626, @tatyana-krasnukha wrote:
Feb 1 2021
Thanks for the review! Can you submit it for me, since I don't have commit access?
Jan 31 2021
Jan 10 2021
Thanks for the explanation, this makes sense. I've checked the mailing list archives and it seems there was already a discussion about the enumerators in the .debug_names index back in 2018 --. You were the one to bring it up and the consensus was that the enumerators should go into the index too.
Jan 8 2021
Generate fully qualified names for enum constants.
In D94077#2481942, @labath wrote:
This suffers from the same problem as the other patch, where the other index classes (apple_names and debug_names) will essentially never be able to implement this feature. (Unless they re-index the dwarf themselves, that is, but this would defeat the purpose of having the index in the first place.)
In D94077#2479984, @shafik wrote:
We can have unscoped enums in namespace and class scope and the enumerators won't leak out from those scopes. Thus we can have shadowing going on e.g.:
...
How does this change deal with those cases?
Handle enum constants similar to global variables, support scoped lookup in the expression evaluator.
Add more test cases.
Jan 5 2021
@labath @jankratochvil ping :)
Fix formatting
Dec 22 2020
Thanks for a quick review! Can you please land these changes for me since I don't have commit access.
Remove the test case for unscoped enum, because it's implementation defined and can break any moment.
Register the new method using lldb-instr.
Register the new method using lldb-instr.
Dec 10 2020
Thanks for your review! If there are no objections to this patch, can you accept and land it for me? :)
Dec 8 2020
@labath @teemperor gentle ping :) What do you think about this approach?
Dec 4 2020
In D92643#2434050, @jingham wrote:
I couldn't tell what you meant by this... I would expect that a Type would tell you that static members exist, and their type, etc. But I wouldn't expect a Type to be able to get a value for a static member. After all, the type comes from some Module, which could be shared amongst a bunch of processes in lldb. The actual member value could be different in each of these processes, but since the Type has no process, it has no way to get the correct value.
In D92643#2433428, @labath wrote:
Are the static members included in the (SB)Type object that gets created when parsing the enclosing type? If yes, we might be able to retrieve them that way. Whether that would be cleaner -- I'm not sure...
(I would expect they are included, as otherwise they would be unavailable to the expression evaluator.)
Hi, please, take a look at this patch!
Nov 30 2020
Pavel, thanks for review!
Nov 28 2020
Nov 27 2020
Remove unused field
Simplified the test according to teemperor's comments.
Hi, please, take a look at this patch.
Nov 3 2020
Ping :)
@teemperor @JDevlieghere | https://reviews.llvm.org/feed/?userPHIDs=PHID-USER-beao54t35ewqeek2agsd | CC-MAIN-2021-31 | refinedweb | 1,205 | 72.05 |
12.3. Exploring and Cleaning PurpleAir Sensor Data¶
To recap our analysis, our plan is to:
Find a list of possibly collocated AQS and PurpleAir sensors.
Contact AQS sites to find truly collocated sensor pairs.
Explore and clean AQS measurements for one sensor.
Explore and clean PurpleAir measurements for the other sensor in the pair
Join the AQS and PurpleAir measurements together for all the sensor pairs.
Fit a model to make PurpleAir measurements match AQS measurements.
In this section, we’ll perform step 4 of this plan. We’ll explore and clean data from a PurpleAir sensor. We’ll use Barkjohn’s data to skip step 5.
In the previous section (Section 12.2),
we analyzed data from the AQS site
06-067-0010.
The matching PurpleAir sensor is named
AMTS_TESTINGA, and we’ve used
the PurpleAir website to download the
data for this sensor into the
data/purpleair_AMTS folder.
!ls -alh data/purpleair_AMTS/*
-rw-r--r-- 1 sam staff 50M Nov 3 15:52 data/purpleair_AMTS/AMTS_TESTING (outside) (38.568404 -121.493163) Primary Real Time 05_20_2018 12_29_2019.csv -rw-r--r-- 1 sam staff 50M Nov 3 15:52 data/purpleair_AMTS/AMTS_TESTING (outside) (38.568404 -121.493163) Secondary Real Time 05_20_2018 12_29_2019.csv -rw-r--r-- 1 sam staff 48M Nov 3 15:52 data/purpleair_AMTS/AMTS_TESTING B (undefined) (38.568404 -121.493163) Primary Real Time 05_20_2018 12_29_2019.csv -rw-r--r-- 1 sam staff 50M Nov 3 15:52 data/purpleair_AMTS/AMTS_TESTING B (undefined) (38.568404 -121.493163) Secondary Real Time 05_20_2018 12_29_2019.csv
There are four CSV files. What does each one contain? The data dictionary for the PurpleAir data 1 says that each sensor has two separate instruments, A and B, that each record data. In this case, the first two CSV files correspond to the instrument A, and then last two CSV files correspond to B. Having two instruments is useful for data cleaning; if A and B disagree about a measurement, we might question the integrity of the measurement and decide to remove it.
The data dictionary also mentions that each instrument records Primary data and Secondary data. The Primary data contains the fields we’re interested in: PM2.5, temperature, and humidity. The Secondary data contains data for other particle sizes, like PM1.0 and PM10. Since we’re interested in PM2.5, we’ll work only with the Primary data for instruments A and B.
Our goals for this section:
Load and subset columns from instrument A.
Check and adjust the granularity of the PM2.5 readings.
Repeat steps 1 and 2 for instrument B.
Average together A and B measurements, dropping values where the two instruments disagree.
Join PurpleAir and AQS data.
12.3.1. Loading and Subsetting Columns from Instrument A¶
When CSV files have long names, we can load the CSV names into a Python variable
to more easily load them into
pandas.
from pathlib import Path data_folder = Path('data/purpleair_AMTS') pa_csvs = sorted(data_folder.glob('*.csv')) pa_csvs
[PosixPath('data/purpleair_AMTS/AMTS_TESTING (outside) (38.568404 -121.493163) Primary Real Time 05_20_2018 12_29_2019.csv'), PosixPath('data/purpleair_AMTS/AMTS_TESTING (outside) (38.568404 -121.493163) Secondary Real Time 05_20_2018 12_29_2019.csv'), PosixPath('data/purpleair_AMTS/AMTS_TESTING B (undefined) (38.568404 -121.493163) Primary Real Time 05_20_2018 12_29_2019.csv'), PosixPath('data/purpleair_AMTS/AMTS_TESTING B (undefined) (38.568404 -121.493163) Secondary Real Time 05_20_2018 12_29_2019.csv')]
# The Primary data for instrument A is the first CSV in `pa_csvs` pa = pd.read_csv(pa_csvs[0]) pa
672755 rows × 11 columns
Let’s look at the columns to see which ones we want to subset.
display_df(pa.iloc[0].to_frame().reset_index(), rows=11)
Although we’re interested in PM2.5, it appears there are two columns that
contain PM2.5 data:
PM2.5_CF1_ug/m3 and
PM2.5_ATM_ug/m3.
What is the difference between these two columns?
For brevity, we’ll share what Barkjohn et al. found: PurpleAir sensors use a
calculation to convert a raw laser recording into a PM2.5 number.
There are two ways to convert the raw laser data into PM2.5 which correspond to
the CF1 and ATM columns.
The CF1 conversions result in higher PM2.5 numbers when more PM2.5 particles are
present.
Barkjohn found that using CF1 produced better results than ATM, so we’ll
keep
PM2.5_CF1_ug/m3.
So, we’ll subset and rename the columns of
pa.
def subset_and_rename_A(df): df = df[['created_at', 'PM2.5_CF1_ug/m3', 'Temperature_F', 'Humidity_%']] df.columns = ['timestamp', 'PM25cf1', 'TempF', 'RH'] # RH stands for Relative Humidity return df pa = (pd.read_csv(pa_csvs[0]) .pipe(subset_and_rename_A)) pa
672755 rows × 4 columns
12.3.2. What’s the Granularity?¶
In order for the granularity of these measurements to match the AQS data, we want one average PM2.5 for each date (a 24-hour period). PurpleAir states that sensors take measurements every 2 minutes. Let’s double check the granularity of the raw measurements, before we aggregate measurements to 24-hour periods.
12.3.2.1. Parsing the Timestamps¶
Like the AQS data, we’ll convert the
timestamp column from strings
to
pd.TimeStamp objects using
pd.to_datetime.
pa.head(2)
date_format = '%Y-%m-%d %X %Z' times = pd.to_datetime(pa['timestamp'], format=date_format) times
0 2018-05-20 00:00:35+00:00 1 2018-05-20 00:01:55+00:00 2 2018-05-20 00:03:15+00:00 ... 672752 2019-12-29 23:55:30+00:00 672753 2019-12-29 23:57:30+00:00 672754 2019-12-29 23:59:30+00:00 Name: timestamp, Length: 672755, dtype: datetime64[ns, UTC]
Note
The
pd.to_datetime() method tries to automatically infer the timestamp
format if we don’t pass in the
format= argument. In many cases (including this one),
pandas will parse the timestamps properly. However, sometimes the parsing doesn’t
output the correct timestamps, so it’s helpful to explicitly specify the format.
Next, we’ll apply this conversion to the
pa dataframe.
As we’ll soon see,
pandas has special support for dataframes with an index
of timestamps, so we’ll also set the dataframe index to the timestamps.
def parse_timestamps(df): date_format = '%Y-%m-%d %X %Z' times = pd.to_datetime(df['timestamp'], format=date_format) return (df.assign(timestamp=times) .set_index('timestamp')) pa = (pd.read_csv(pa_csvs[0]) .pipe(subset_and_rename_A) .pipe(parse_timestamps)) pa.head(2)
Timestamps are tricky—notice that the original timestamps were given in the UTC
timezone. However, the AQS data were averaged according to the local time in
California, which is either 7 or 8 hours behind UTC time, depending on whether
daylight saving time is in effect.
Thus, we also need to change the timezone of the PurpleAir timestamps to match
the local timezone using the
df.tz_convert() method.
This method only operates on the index of the dataframe, which is one reason
why we set the index of
pa to the timestamps.
# # The US/Pacific timezone corresponds to the timezone in California, and will # automatically adjust for Daylight Saving Time. pa.tz_convert('US/Pacific')
672755 rows × 3 columns
def convert_tz(pa): return pa.tz_convert('US/Pacific') pa = (pd.read_csv(pa_csvs[0]) .pipe(subset_and_rename_A) .pipe(parse_timestamps) .pipe(convert_tz)) pa.head(2)
12.3.2.2. Visualizing Timestamps¶
One way to visualize the timestamps is to count how many appear in each 24-hour
period, then plot those counts over time.
To group time series data in
pandas, we can use the
df.resample() method.
This method works on dataframes that have a index of timestamps.
It behaves like
df.groupby(), except that we can specify how we want the
timestamps to be grouped—we can group into dates, weeks, months, and many
more options.
# The 'D' argument tells resample to aggregate timestamps into individual dates (pa.resample('D') .size() # We can call aggregation methods just like we would with `groupby()` )
timestamp 2018-05-19 00:00:00-07:00 315 2018-05-20 00:00:00-07:00 1079 2018-05-21 00:00:00-07:00 1074 ... 2019-12-27 00:00:00-08:00 1440 2019-12-28 00:00:00-08:00 1200 2019-12-29 00:00:00-08:00 480 Freq: D, Length: 590, dtype: int64
# We'll reuse this plotting code later, so we'll put it in a function def plot_measurements_per_day(df, extra_lines=False): # Make the plot wider plt.figure(figsize=(10, 6)) pa.resample('D').size().plot(label='Sensor', linewidth=1.5) plt.xlabel('') plt.ylabel('# measurements / day') if not extra_lines: return plt.axvline('2019-05-30', c='red', label='May 30, 2019') plt.axhline(1080, xmax=0.63, c='red', linestyle='--', linewidth=3, label='1080 points / day') plt.axhline(720, xmin=0.64, c='red', linestyle='--', linewidth=3, label='720 points / day') plt.legend(loc='upper right');
plot_measurements_per_day(pa)
This is a fascinating plot. First, we see clear gaps in the data where there were no measurements. It appears that significant portions of data in July 2018, September 2019, and October 2019 are missing. Even when the sensor appears to be working, the number of measurements per day is slightly different. For instance, the plot is “bumpy” between August and September 2018—each date has a different number of measurements. This means we need to decide: what should we do with missing data? But perhaps more urgently…
There are strange “steps” in the plot—some dates have around 1000 readings, some around 2000, some around 700, and some around 1400. If a sensor takes measurements every 2 minutes, there should be a maximum of 720 measurements per day. For a perfect sensor, the plot would display a flat line at 720 measurements. Why is this not the case?
12.3.2.3. Why is the Sampling Rate Inconsistent?¶
Deeper digging reveals that although PurpleAir sensors currently record data every 120 seconds, this was not always the case. Before May 30, 2019, sensors recorded data every 80 seconds. That is, before May 30, 2019, a perfect sensor would record 1080 points a day.
Let’s mark these on the plot of measurement counts per day.
plot_measurements_per_day(pa, extra_lines=True)
We see that the change in sampling rate does explain the drop at May 30, 2019. But what about the time periods where there were many more points than expected? This could mean that some measurements were duplicated in the data. We can check this by looking at the measurements for, say, Jan 1, 2019.
# Passing a string into .loc will filter timestamps pa.loc['2019-01-01']
2154 rows × 3 columns
There are 2154 readings, which is almost double the 1080 expected readings. Is this because the readings are duplicated?
pa.loc['2019-01-01'].index.value_counts()
2019-01-01 00:00:17-08:00 2 2019-01-01 16:08:39-08:00 2 2019-01-01 15:49:59-08:00 2 .. 2019-01-01 08:12:08-08:00 2 2019-01-01 08:13:28-08:00 2 2019-01-01 23:59:29-08:00 2 Name: timestamp, Length: 1077, dtype: int64
We see that each timestamp appears exactly twice. And, if we look at the data for one timestamp, we see that the data are repeated.
pa.loc['2019-01-01 00:00']
Next, we can verify that all duplicated dates contain the same PM2.5 reading.
Using
.groupby('timestamp') is possible, but runs very slowly.
Instead, we’ll resample into minute-long intervals.
We know that the PM2.5 readings within an interval are all equal if the maximum
reading minus minimum reading is zero.
# Some 1-min intervals have no readings, so we need to handle that case def ptp(s): return 0 if len(s) == 0 else np.ptp(s) (pa .resample('1min') ['PM25cf1'] .agg(ptp) )
timestamp 2018-05-19 17:00:00-07:00 0 2018-05-19 17:01:00-07:00 0 2018-05-19 17:02:00-07:00 0 .. 2019-12-29 15:57:00-08:00 0 2019-12-29 15:58:00-08:00 0 2019-12-29 15:59:00-08:00 0 Freq: T, Name: PM25cf1, Length: 848160, dtype: int64
(pa .resample('1min') ['PM25cf1'] .agg(ptp) .describe() ) # A standard deviation of 0 means that all values are equal.
count 848160.0 mean 0.0 std 0.0 ... 50% 0.0 75% 0.0 max 0.0 Name: PM25cf1, Length: 8, dtype: float64
So, every duplicated timestamp has identical PM2.5 readings. Since this is also
true for both temperature and humidity, we’ll drop duplicate rows from
pa.
def drop_duplicate_rows(df): return df[~df.index.duplicated()] pa = (pd.read_csv(pa_csvs[0]) .pipe(subset_and_rename_A) .pipe(parse_timestamps) .pipe(convert_tz) .pipe(drop_duplicate_rows)) pa.head(2)
plot_measurements_per_day(pa, extra_lines=True)
We see that after dropping duplicate dates, the plot of measurements per day looks much more consistent with the the sampling rate we expect 2.
12.3.3. What do we do About Missing Data?¶
Next, we must decide how to handle missing values. We’ll follow Barkjohn’s original analysis: we only keep a 24-hour average if there are at least 90% of the possible points for that day. We’ll need to remember that before May 30, 2019 there are 1080 possible points in a day—after May 30, there are 720 points possible.
per_day = (pa .resample('D') .size() .rename('per_day') .to_frame() ) per_day
590 rows × 1 columns
needed_measurements_80s = 0.9 * 1080 needed_measurements_120s = 0.9 * 720 cutoff_date = pd.Timestamp('2019-05-30', tz='US/Pacific') def has_enough_readings(one_day): [n] = one_day date = one_day.name return (n >= needed_measurements_80s if date <= cutoff_date else n >= needed_measurements_120s)
should_keep = per_day.apply(has_enough_readings, axis='columns') should_keep
timestamp 2018-05-19 00:00:00-07:00 False 2018-05-20 00:00:00-07:00 True 2018-05-21 00:00:00-07:00 True ... 2019-12-27 00:00:00-08:00 True 2019-12-28 00:00:00-08:00 True 2019-12-29 00:00:00-08:00 False Freq: D, Length: 590, dtype: bool
Now, we can average together the readings for each day, then remove the days with incomplete data.
(pa.resample('D') .mean() .loc[should_keep] )
515 rows × 3 columns
Finally, we can put this step into the pipeline:
def compute_daily_avgs(pa): should_keep = (pa.resample('D') ['PM25cf1'] .size() .to_frame() .apply(has_enough_readings, axis='columns')) return (pa.resample('D') .mean() .loc[should_keep]) pa = (pd.read_csv(pa_csvs[0]) .pipe(subset_and_rename_A) .pipe(parse_timestamps) .pipe(convert_tz) .pipe(drop_duplicate_rows) .pipe(compute_daily_avgs)) pa.head(2)
Now, we have the average daily PM2.5 readings for instrument A.
12.3.4. What if A and B channels disagree?¶
Recall that there are two instruments per sensor named A and B. If the sensors disagree on a particular day, we’ll drop the day from the data. First, we’ll need to repeat the data wrangling we just performed on instrument A on instrument B.
Thankfully, the CSV for instrument B is very similar to the CSV for instrument A except that the CSVs contain slightly different sets of columns. We’ll define a subset procedure for instrument B, then reuse the same pipeline.
pa_B = pd.read_csv(pa_csvs[2]) pa_B.head(2)
2 rows × 11 columns
def subset_and_rename_B(df): df = df[['created_at', 'PM2.5_CF1_ug/m3']] df.columns = ['timestamp', 'PM25cf1'] return df
pa_B = (pd.read_csv(pa_csvs[2]) .pipe(subset_and_rename_B) .pipe(parse_timestamps) .pipe(convert_tz) .pipe(drop_duplicate_rows) .pipe(compute_daily_avgs)) pa_B.head(2)
We can see that the values in B differ slightly from the values in A.
(pa .rename(columns={'PM25cf1': 'A'}) .assign(B=pa_B['PM25cf1']) [['A', 'B']] )
515 rows × 2 columns
We’ll apply Barkjohn’s method: drop rows if the PM2.5 values for A and B differ by more than 61%, or by more than 5 µg m⁻³.
A = pa['PM25cf1'] B = pa_B['PM25cf1'] abs_diff = (A - B).abs() perc_diff = (A - B) * 2 / (A + B) should_drop = (perc_diff >= 0.61) | (abs_diff >= 5) # We'll end up dropping 12 rows np.count_nonzero(should_drop)
12
Finally, we’ll add this step into the pipeline. After dropping rows, our final PM2.5 values will be the average of instruments A and B.
def process_instrument_B(pa, pa_B): A = pa['PM25cf1'] B = pa_B['PM25cf1'] avg = (A + B) / 2 abs_diff = (A - B).abs() perc_diff = (A - B) * 2 / (A + B) should_drop = (perc_diff.isna()) | (perc_diff >= 0.61) | (abs_diff >= 5) return ( pa.assign(PM25cf1=avg) .loc[~should_drop] )
pa = (pd.read_csv(pa_csvs[0]) .pipe(subset_and_rename_A) .pipe(parse_timestamps) .pipe(convert_tz) .pipe(drop_duplicate_rows) .pipe(compute_daily_avgs) .pipe(process_instrument_B, pa_B)) pa
502 rows × 3 columns
At last, we have the final PM2.5 readings for a PurpleAir sensor.
We’ve done a lot of work: we’ve decided how to handle
missing data, we aggregated the readings for instrument A, averaged the readings
together with instrument B, and removed rows where A and B disagreed.
This work has given us a set of PM2.5 readings that we are more confident in.
We know that each PM2.5 value in the final
pa dataframe is the daily average from
two separate instruments that generated consistent and complete readings.
To fully replicate Barkjohn’s analysis, we would need to repeat this process over all the PurpleAir sensors. Then, we would repeat the AQS cleaning procedure on all the AQS sensor. Finally, we would merge the PurpleAir and AQS data together. This procedure produces daily average readings for each collocated sensor pair.
For brevity, we’ll omit the code to repeat the data processing for the rest of the raw sensor data. Instead, we’ll reuse Barkjohn’s cleaned and merged final dataset:
# just display a few columns cols = [1, 2, 6, 8, 20, 21] final = (pd.read_csv('data/cleaned_purpleair_aqs/Full24hrdataset.csv') .iloc[:, cols]) final
12430 rows × 6 columns
In the next section, we’ll proceed to the final step of the analysis—constructing models that can produce a correction for PurpleAir sensor measurements.
- 1
pa_data_dict.pdf, downloaded from in November 2021.
- 2
Careful readers will see two spikes above the maximum measurements around November of each year when Daylight Saving Time is no longer in effect. When clocks are rolled back one hour, that day has 25 hours instead of the usual 24 hours. Timestamps are tricky! | http://www.textbook.ds100.org/ch/12/pa_cleaning_purpleair.html | CC-MAIN-2021-49 | refinedweb | 3,015 | 59.4 |
while using cpp Qthread, make a simple emit to QML
Hello people, I am new to Qt but researching for a week,
before saying something I am gonna share my code's snippets first
*** mythread.h***
#include <QThread>
class MyThread : public QThread
{
public:
MyThread();
void run();
signals:
void ledSignal(int value);
public slots:
};
** mythread.h finish ***
*** mythread.cpp ***
MyThread::MyThread()
{
}
void MyThread::run()
{
emit ledSignal("led is high");
}
*** mythread.cpp finish***
and also in qml side I have simple connection listener which is very simple and well working.
The problem is, how can I emit a function from thread, because above the code is not running because of
"emit ledSignal("led is high");"
I am working on raspberry and need to show the state of led on the GUI
Many thanks for your helps.
Hi and welcome to devnet ?
Are you calling
starton your thread at any time ?
Also be aware that with your implementation the signal will be emitted only once and then the thread will stop.
The only simple thing that I want to do
think very simple,
imagine there is a loop for(int i = 0; i < 10, i++) emit sendNumberToFrontEnd(i)....
in qt side; imagine there is a very very simple GUI is showing changed data.
+-------------+
1, 2, 3... +
+-------------+
in reality I am going to listen UART serial port data, and gonna show it, but this is not the case at this time, Because I am getting crazy and I can not even doing such that simple duty.
Don't use such a tight loop your test, your signal is going to be emitted in one row and at best you are only going to see the last one.
Since you are going to use an UART, I guess you are going to use QSerialPort. If that's the case, you are rather going to use the worker object approach. Therefore I'd suggest adopting that from the start and for a dummy implementation, just use a QTimer and connect it to a slot or lambda that will emit your custom signal.
All right there is already implementations for UART.
The other issue that I should solve is, I am going to turn on a led which is connected to a bread board and a led context. And that led should be in a while loop
such as -> while(1) ... turn on led(led pin number, HIGH) and then sleep(5 second) then turn off the led then sleep(5 second again)
in that case I need a while loop, and I want to show off led status which is "Turned On or Off" on a simple GUI
That case, if I write the loop in Qt's cpp main thread, it locks every thing until the loop finish. But loop is not going to be finished because I want an infinite loop.
So I need to do it in a seperate thread and it works, because seperate thread never locks my application process. But now problem comes again, how to show off the led's status from a thread which is running out of current thread.
No need for an infinite loop for that, just use a QTimer that you set to trigger every 5 seconds. Then you can change the led status as well emit the signal without any risk of blocking the event loop. You could even do it in the main thread if switching the led is fast enough.
Okey,
Those informations well but I could not get proper answer because it is my bad, I did not mention real thing.
think about there is a very simple GUI again only shows a light speed such as
if light is turning on and turning off in 5 second, I want to show in gui "led speed 5 second" or etc.
but there is a touch button also.
so, if I press the touch button, light's speed will decrease to 2.5 second, and I also need to show off "led speed 2.5 second"
and if I press again it will increase 5 second again.
Now. I should write my own sleep function because if I use current sleep function eg: sleep(5)
program will never understand whether if I press the button or not, because it will sleep for 5 second. So that I need to create a loop which should loop for 5 second and any button event occurs I will understand and also break the loop, if there is no event in 5 second automatically it will break the loop, so on and so forth...
Now, even the custom sleep function is a big job by it self also I need to know if there is touch buttun debouncing as well.. :/
So I think QTimer will not work for me because also my led's speed is not static, it will change after a button is pressed.
What is your brillant suggestion.
By the way I am very happy that there is a person and helping us. Thanks dude.
@xmastree
You don't want to try to call
sleep(5)or similar. You want to use
QTimerjust like @SGaist said. You can change timeout on subsequent calls, or stop timers and restart them with different timeout, if necessary.
Okey,
But what is the chances to access and send an emit function from a thread to GUI ?
If we can, please support me a simple example with detailed, because I am getting very confused also I should solve that issue
Thanks guys. | https://forum.qt.io/topic/90566/while-using-cpp-qthread-make-a-simple-emit-to-qml | CC-MAIN-2019-18 | refinedweb | 923 | 76.25 |
menpo3d makes manipulating 3D mesh data a simple task. In particular, this package provides the ability to import, visualize and rasterize 3D meshes. Although 3D meshes can be created within the main
menpo package, this package adds the real functionality for working with 3D data.
Visualizing 3D objects
menpo3d adds support for viewing 3D objects through
Mayavi, which is based on VTK.
One of the main reasons menpo3d is a seperate project to the menpo core
library is to isolate the more complex dependencies that this brings to the
project. 3D visualization is not yet supported in the browser, so we rely on
platform-specific viewing mechanisms like
QT or
WX. It also limits
menpo3d to
be Python 2 only, as
VTK does not yet support Python 3.
In order to view 3D items you will need to first use the
%matplotlib qt
IPython magic command to set up
QT for rendering (you do this instead of
%matplotlib inline which is what is needed for using the usual Menpo
Widgets). As a complete example, to view a mesh in
IPython you
would run something like:
import menpo3d mesh = menpo3d.io.import_builtin_asset('james.obj')
%matplotlib qt mesh.view()
If you are on Linux and get an error like:
ValueError: API 'QString' has already been set to version 1
Try adding the following to your
.bashrc file:
export QT_API=pyqt export ETS_TOOLKIT=qt4
Open a new terminal and re-run IPython notebook in here, this should fix the issue.
If you are running Windows and recieve this error, try:
set QT_API=pyqt set ETS_TOOLKIT=qt4
Alternatively, try installing wxPython:
conda install wxpython
and using
%matplotlib wx. | https://www.menpo.org/menpo3d/ | CC-MAIN-2019-13 | refinedweb | 274 | 59.43 |
Here is how to use our Amazon Scraping API deployed in Mashape to gather product details and pricing, with just ASIN as input.
Here is a sample output for AmazonBasics 9 Volt Everyday Alkaline Batteries –
{ "small_description": "Pack of eight 9 Volt Alkaline Batteries - 3-year shelf life so you can store for emergencies or use immediately - Works with a variety of devices including digital cameras, game controllers, toys, and clocks; do not attempt to recharge - Ships in Certified Frustration-Free Packaging", "average_rating": 4.1, "url": "", "product_information": { "Product Dimensions": "10 x 5 x 3 inches", "Amazon Best Sellers Rank": " #30 in Health & Household #1 in Health & Household > House Supplies > Household Batteries > 9V #18 in Health & Household > Sales & Deals", "International Shipping": "This item can be shipped to select countries outside of the U.S.", "ASIN": "B00MH4QM1S", "Item model number": "6LR16-8PK", "Shipping Weight": "12.8 ounces ", "Domestic Shipping": "Currently, item can be shipped only within the U.S. and to APO/FPO addresses. For APO/FPO shipments, please check with the manufacturer regarding warranty and support issues.", "UPC": "841710157253 841710106381" }, "availability_quantity": null, "availability_status": "In Stock. Ships from and sold by Amazon.com in easy-to-open packaging. Gift-wrap available. In Stock. ", "brand": "AmazonBasics", "images": [ "", "", "", "", "" ], "price": "$9.80", "model": "6LR16-8PK", "name": "AmazonBasics 9 Volt Everyday Alkaline Batteries (8-Pack)", "total_reviews": 4579, "full_description": "AmazonBasics brings you everyday items at a great value. An Amazon Brand.", "productCategory": "Health & Household > Household Supplies > Household Batteries > 9V" }
Step 1: Create an Account in Mashape
- Go to the Mashape Marketplace page –
- If you are new to Mashape you can create an account by clicking on the ‘Sign Up’ button at the top of the page. If you have an account sign in to Mashape with your credentials.
- Then open up ScrapeHero’s API page –
Step 2: Subscribe to a Plan and Get API
- To subscribe to ScrapeHero’s Amazon API, click on the ‘Pricing’ tab on the API page. The browser will redirect you to the API pricing page.
- Choose the basic plan to get 50 API calls for free. If you want more you can subscribe to a paid plan or contact us if your volume is high.
After you have subscribed to ScrapeHero’s Amazon API, Mashape will automatically generate your personal API key that you can use in every call. It is your personal identifier and should not be shown to the public.
Step 3: Test the API
Now come back to the API documentation page
- Find the product ASIN you need to search for. Every Amazon product will have an ASIN. (An ASIN from any other domain will not work.) For example, if the product URL is –. The ASIN will be B010TQY7A8.
- By default, the ASIN input box is filled with a sample product ASIN. If you want to try another product, paste the ASIN in the “asin” input box under URL parameters and click the button ‘TEST ENDPOINT’.
Step 4: Python Script to consume the API
The python script below will let you scrape details and pricing for multiple products using our API.
import requests asins = [ "B00MH4QM1S", "B00IQBT1AK", "B0056GDG90", "B00GMAWI66", "B00HS3I8VA", "B004IUN7JO", "B00SG595J8", "B0188NZSP2", "B0188BA9KS" ] data = [] for asin in asins: response = requests.get(""%asin, headers={ "X-Mashape-Key": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "Accept": "application/json"} ) data.append(response.json()) print(data)
The data gathered would look like this –
You can save this scraped data to a JSON file or Database for further use.
We can help with your data or automation needs
Turn the Internet into meaningful, structured and usable data | https://www.scrapehero.com/amazon-price-api/ | CC-MAIN-2018-51 | refinedweb | 586 | 62.78 |
20 April 2012 08:08 [Source: ICIS news]
SINGAPORE (ICIS)--China Resources Packaging has started its new 300,000 tonne/year polyethylene terephtalate (PET) bottle chip unit at ?xml:namespace>
The producer’s remaining two PET units with the same capacity at
The specific date for the start-up at the two units has not been finalised yet, the source added.
The country’s total PET bottle chip capacity will increase by nearly 20% after the three units come on line, according to Chemease, an ICIS service in
The spot prices of PET bottle chip in domestic market have been dropped because of the over-supply situation in April, a market player said.
Another PET producer Zhejiang Wankai, started up its 400,000 tonne/year polyethylene terephthalate (PET) continuous polymer unit on 11 March.
The spot prices of PET bottle chip were at yuan (CNY) 10,900-11,000/tonne ($1,727-1,743/ton | http://www.icis.com/Articles/2012/04/20/9552103/china-resources-starts-up-new-pet-unit-at-changzhou-on-19.html | CC-MAIN-2015-22 | refinedweb | 155 | 52.33 |
- Adobe
- michelleyaiser.com
Table of contents
Created
24 October 2011
The syntax of a programming language is the set of rules you must follow when writing code. These rules dictate the symbols and words that can be used and how to structure your code. Syntax errors are some of the most common errors made by developers, especially when they are first learning a language. Typically, the compiler cannot compile code that contains syntax errors.
One thing to keep in mind—syntax does not provide information on the meaning behind the symbols or structure in your code. The semantics of a programming language provide the meaning behind the symbols, keywords, and structure. A program that is syntactically correct may not be semantically correct.
In this article, you will learn the syntax of ActionScript 3.
ActionScript 3 is a case-sensitive language. Identifiers that differ only in case are considered different identifiers. For example, the following code creates two different variables:
var sampleVariable:int; var SampleVariable:int;
The semicolon character (
; ) is used to terminate a statement. If you omit the semicolon, the compiler assumes that each line of code represents a single statement. Terminating each statement with a semicolon is good practice and makes your code easier to read.
You can use parentheses (
()) in three ways in ActionScript. This technique is shown in the following example:
var a:int = 2; var b:int = 3; trace((a++, b++, a+b)); // 7
Third, you can use parentheses to pass one or more parameters to functions or methods. In the following example, a String value is passed to the
trace() function:
trace("hello"); // hello
One or more lines of code enclosed in curly brackets (
{ } ) is called a block. Code is grouped together and organized into blocks in ActionScript 3. The bodies of most programming constructs like classes, functions, and loops are contained inside blocks.
function sampleFunction():void{ var sampleVariable:String = "Hello, world."; trace(sampleVariable); } for(var i:uint=10; i>0; i--){ trace(i); }
Any spacing in code—spaces, tabs, line breaks, and so on—is referred to as whitespace. The compiler ignores extra whitespace that is used to make code easier to read. For example, the following two examples are equivalent:
for(var i:uint=0; i<10; i++){ trace(i); } for(var i:uint=0; i<10; i++){trace(i);}
As you write ActionScript, you can leave notes to yourself or others. For example, use comments to explain how certain lines of code work or why you made a particular choice. Code comments are a tool you can use to write text that the computer ignores in your code. ActionScript 3 code supports two types of comments: single-line comments and multiline comments. The compiler ignores text that is marked as a comment.
Single-line comments begin with two forward slash characters (
// ) and continue until the end of the line. For example, the following code contains a single-line comment:
// a single line comment
Multiline or block comments begin with a forward slash and asterisk (
/* ) and end with an asterisk and forward slash (
*/ ).
/* This is multiline comment that can span more than one line of code. */
Another common use of comments is to temporarily "turn off" one or more lines of code. For example, use comments to figure out why certain ActionScript code isn't working the way you expect by placing the code within comment syntax so that the compiler ignores it. You can also use comments to test a different way of doing something.
A literal is any fixed value that appears directly in your code. The following examples are literals:
17 "hello" -3 9.4 null undefined true false
Literals can also be grouped to form compound literals. The following example shows a compound literal being passed as a parameter to the Array class constructor.
var myStrings:Array = new Array(["alpha", "beta", "gamma"]); var myNumbers:Array = new Array([1,2,3,5,8]);
Reserved words are words that you cannot use as identifiers in your code because the words are reserved for use by ActionScript. Reserved words include lexical keywords, which are removed from the program namespace by the compiler. The compiler reports an error if you use a lexical keyword as an identifier. The following table lists ActionScript 3 lexical keywords.
Table 1. ActionScript 3 Lexical Keywords
There is a small set of keywords, called syntactic keywords, that can be used as identifiers, but that have special meaning in certain contexts. The following table lists ActionScript 3 syntactic keywords.
Table 2. ActionScript 3 Syntactic Keywords
There are also several identifiers that are sometimes referred to as future reserved words. These identifiers are not currently reserved in ActionScript 3. However, Adobe recommends avoiding these words because a subsequent version of the language may include them as keywords.
Table 3. ActionScript 3 Future Reserved Words
Slash syntax is not supported in ActionScript 3. Slash syntax was used in earlier versions of ActionScript to indicate the path of a movieclip or variable.
Now that you are familiar with the syntax, learn other fundamentals of programming in ActionScript 3. Begin with ActionScript 3 fundamentals: Variables, ActionScript 3 fundamentals: Data types, ActionScript 3 fundamentals: Operators, and ActionScript 3 fundamentals: functions.
The content in this article is based on material originally published in the Learning ActionScript 3 user guide created by Adobe Community Help and Learning. | https://www.adobe.com/devnet/actionscript/learning2/as3-fundamentals/syntax.html | CC-MAIN-2018-39 | refinedweb | 891 | 54.32 |
On Tue, 2006-01-17 at 17:18 -0800, Hal Hildebrand wrote:
> I would like to use the digester to process XML defined by schemas. The only problem
> is that the XML in question has elements that are defined in another namespace,
> which is known only at the time of processing.
Do you mean that the structure of *most* of the input xml is known, and
that just a few places it can contain any arbitrary xml? If so, you
could use NodeCreateRule to store those unknown bits as DOM nodes.
Or do you mean that the whole structure is really unknown until runtime?
If so, what are you planning to do with this completely unknown data?
Regards,
Simon
---------------------------------------------------------------------
To unsubscribe, e-mail: commons-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: commons-user-help@jakarta.apache.org | http://mail-archives.apache.org/mod_mbox/commons-user/200601.mbox/%3C1137550936.4767.12.camel@localhost.localdomain%3E | CC-MAIN-2014-49 | refinedweb | 140 | 64.1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.