Guys and gals, as you can see by my username I am a PhpGuy trying to get to grips with Python.
I've had a play around with Python/Pygame/Tkinter and I'm very impressed with what I've seen so far.
The scripts I've written I've placed into a library to keep for future use. Now I am trying to tackle classes/imports/modules etc.
So, the reason I'm here...
I have created a forms class (which forms a base - no pun intended).
I have this 'myapp.py' script:
from Tkinter import Tk, mainloop
import sys
sys.path.append('mod')

# Import the forms
from forms.personalDetailsClass import personalDetails

def buton_pressed(FORM_VALUES):
    print FORM_VALUES

# The bones of the app..
def main():
    root = Tk()
    # Setup the class as a standalone window.
    personal_details = personalDetails(root)
    # Add the title to the main label.
    personal_details.titletext.set('Add New Student')
    # Run the mainloop.
    root.mainloop()

# Run the software.
if __name__ == "__main__":
    main()
Here is the line from the personal details form class file:
self.submitbutton = Button(self.window, text='Add', command=buton_pressed)
Now here is where I'm stumped...
Inside the class, I am trying to send the data (when the button is pressed) to a function outside the class, but it keeps throwing an error telling me the buton_pressed function is not defined. Why?
In PHP, if I imported (include/require) a file, anything below the import line could use the imported file, and the imported file could use anything defined above that line.
Why is my script throwing errors at me? How do imports work in Python?
Any help would be greatly appreciated,
Cheers,
Pete.
Edit:
If I place the function inside the same file as the class, all works fine.
If I place the class in my main file, it works.
I just can't seem to get it to work when they are separated.
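For what it's worth, the usual fix is to hand the callback to the class instead of relying on a name that only exists in the other module. A minimal sketch (the class body here is hypothetical; only the wiring matters):

```python
# Hypothetical stand-in for the personalDetails class, living in its own module.
class PersonalDetails(object):
    def __init__(self, on_submit):
        # Store the callback supplied by the caller; a real Tkinter version
        # would pass it on via Button(..., command=...).
        self.on_submit = on_submit

    def submit(self, values):
        # Invoke whatever function the importing script handed us.
        self.on_submit(values)

def button_pressed(form_values):
    print(form_values)

form = PersonalDetails(button_pressed)
form.submit({'name': 'Pete'})  # prints {'name': 'Pete'}
```

Each module only sees the names it defines or imports itself, so an imported class cannot see a function defined in myapp.py; passing the function in makes the dependency explicit.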
Source: http://www.python-forum.org/viewtopic.php?f=6&t=7384&p=9465
tpinit - routine for joining an application
#include <atmi.h>

int tpinit(TPINIT *tpinfo)
tpinit() allows a client to join a System/T application. Before a client can use any of the System/T communication or transaction routines, it must first join a System/T application. Because calling tpinit() is optional, a client may also join an application by calling many ATMI routines (for example, tpcall(3c)) which transparently call tpinit() with tpinfo set to NULL. A client may want to call tpinit() directly so that it can set the parameters described below. In addition, tpinit() must be used when application authentication is required (see the description of the SECURITY keyword in ubbconfig(5)), or when the application wishes to supply its own buffer type switch (see typesw(5)). After tpinit() successfully returns, the client can initiate service requests and define transactions.
If tpinit() is called more than once (that is, after the client has already joined the application), no action is taken and success is returned.
tpinit()'s argument, tpinfo, is a pointer to a typed buffer of type TPINIT and a NULL sub-type. TPINIT is a buffer type that is typedefed in the atmi.h header file. The buffer must be allocated via tpalloc() prior to calling tpinit(3c). The buffer should be freed using tpfree(3c) after calling tpinit(). The TPINIT typed buffer structure includes the following members:
char usrname[MAXTIDENT+2];
char cltname[MAXTIDENT+2];
char passwd[MAXTIDENT+2];
char grpname[MAXTIDENT+2];
long flags;
long datalen;
long data;
usrname, cltname, grpname and passwd are all NULL-terminated strings. usrname is a name representing the caller. cltname is a client name whose semantics are application defined. The value sysclient is reserved by the system for the cltname field. The usrname and cltname fields are associated with the client at tpinit() time and are used for both broadcast notification and administrative statistics retrieval. They should not have more characters than MAXTIDENT, which is defined as 30. passwd is an application password in unencrypted format that is used for validation against the application password. Due to UNIX restrictions on one-way encryption, the passwd is significant only to 8 characters. grpname is used to associate the client with a resource manager group name. If grpname is set to a 0-length string, then the client is not associated with a resource manager and is in the default client group. The value of grpname must be the null string (0-length string) for /WS clients. Note that grpname is not related to ACL GROUPS.
The setting of flags is used to indicate both the client-specific notification mechanism and the mode of system access. These settings may override the application default; however, in the event that they cannot, tpinit() will print a warning in a log file, ignore the setting and return the application default setting in the flags element upon return from tpinit(). For client notification, the possible values for flags are as follows:
Only one of the above flags can be used at a time. If the client does not select a notification method via the flags field, then the application default method will be set in the flags field upon return from tpinit().
For setting the mode of system access, the possible values for flags are as follows:
Only one of the above flags can be used at a time. If the client does not select a notification method or a system access mode via the flags field, then the application default method(s) will be set in the flags field upon return from tpinit(). See ubbconfig(5) for details on both client notification methods and system access modes.
datalen is the length of the application specific data that follows. The buffer type switch entry for the TPINIT typed buffer sets this field based on the total size passed in for the typed buffer (the application data size is the total size less the size of the TPINIT structure itself plus the size of the data placeholder as defined in the structure). data is a place holder for variable length data that is forwarded to an application defined authentication service. It is always the last element of this structure.
A macro, TPINITNEED, is available to determine the size TPINIT buffer necessary to accommodate a particular desired application specific data length. For example, if 8 bytes of application specific data are desired, TPINITNEED(8) will return the required TPINIT buffer size.
A NULL value for tpinfo is allowed for applications not making use of the authentication feature of the System/T. Clients using a NULL argument will get default values of 0-length strings for usrname, cltname and passwd; no flags set and no application data.
tpinit() returns -1 on error and sets tperrno to indicate the error condition.
Under the following conditions, tpinit() fails and sets tperrno to:
tpchkauth(3c) and a non-NULL value for the TPINIT typed buffer argument of tpinit() are available only on sites running Release 4.2 or later.
The interfaces described in tpinit(3c) are supported on UNIX System and MS-DOS operating systems. However, signal-based notification is not supported on MS-DOS. If it is selected at tpinit() time, then a userlog(3c) message is generated and the method is automatically set to dip-in.
TCP/IP addresses may be specified in the following forms:
//host.name:port_number
//#.#.#.#:port_number
In the first format, the domain finds an address for hostname using the local name resolution facilities (usually DNS). hostname must be the local machine, and the local name resolution facilities must unambiguously resolve hostname to the address of the local machine.
In the second example, the string #.#.#.# is in dotted decimal format. In dotted decimal format, each # should be a number from 0 to 255. This dotted decimal number represents the IP address of the local machine.
In both of the above formats, port_number is the TCP port number at which the domain process will listen.
More than one address can be specified if desired by specifying a comma-separated list of pathnames for WSNADDR. Addresses are tried in order until a connection is established. Any member of an address list can be specified as a parenthesized grouping of pipe-separated network addresses. For example:
WSNADDR=(//m1.acme.com:3050|//m2.acme.com:3050),//m3.acme.com:3050
For users running under Windows, the address string would look like this:
set WSNADDR=(//m1.acme.com:3050^|//m2.acme.com:3050),//m3.acme.com:3050
The caret (^) is needed to escape the pipe (|).
TUXEDO randomly selects one of the parenthesized addresses. This strategy distributes the load randomly across a set of listener processes. Addresses are tried in order until a connection is established. Use the value specified in the application configuration file for the workstation listener to be called. If the value begins with the characters ``0x'', it is interpreted as a string of hex-digits, otherwise it is interpreted as ASCII characters.
Signal restrictions may prevent the system using signal-based notification even though it has been selected by a client. When this happens, the system generates a log message that it is switching notification for the selected client to dip-in and the client is notified then and thereafter via dip-in notification. (See ubbconfig(5) description of the *RESOURCES NOTIFY parameter for a detailed discussion of notification methods.) Note that signaling of clients is always done by the system so that the behavior of notification is consistent regardless of where the originating notification call is made. Because of this, only clients running as the application administrator can use signal-based notification. The ID for the application administrator is identified as part of the configuration for the application.
If signal-based notification is selected for a client, then certain ATMI calls may fail, returning TPGOTSIG due to receipt of an unsolicited message if TPSIGRSTRT is not specified.
tpterm(3c)

Source: http://edocs.bea.com/tuxedo/tux64/sect3c/tpinit.htm
I am trying to transform from a trapezoid (in the first image) to a rectangle (in the second image), but getting a strange result (in the third image).
My plan was to use a perspective transform, defined by the four corner points of the trapezoid and the four corner points of the rectangle.
In this example, for the trapezoid they are:
ptsTrap = [[  50.  100.        ]
           [  50.  200.        ]
           [ 250.   64.73460388]
           [ 250.  235.26539612]]
and for the rectangle:
ptsRect = [[  50.  100.]
           [  50.  200.]
           [ 250.  100.]
           [ 250.  200.]]
I am getting a perspective transform from these points:
T = cv2.getPerspectiveTransform(ptsTrap, ptsRect)
And then building the image from that:
arrTrapToRect = cv2.warpPerspective(arrTrap, T, arrTrap.shape[:2])
However, as you can see from the image, this isn’t giving the expected transformation.
I can’t seem to work out why even the points that defined the transform are not being projected according to it. Any ideas?
Best answer
Your methodology is correct. The problem arises when you specify the coordinates of your corner points. I don’t know how you calculated them, but you have swapped your X and Y axes. This is reflected in the transformation applied to your final image. I find the corner points to be:
ptsTrap = [[[  99.   51.]]
           [[  64.  251.]]
           [[ 234.  251.]]
           [[ 199.   51.]]]

ptsRect = [[[ 102.   49.]]
           [[ 100.  249.]]
           [[ 200.  250.]]
           [[ 200.   50.]]]
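The swap itself is one line once spotted. A minimal, OpenCV-free sketch (values are illustrative): indexing a NumPy image gives (row, col), i.e. (y, x), while OpenCV point arguments are (x, y), so each pair must be reversed:

```python
# (row, col) pairs as produced by indexing into an image array
pts_rc = [(100.0, 50.0), (64.0, 250.0), (251.0, 234.0)]

# Reverse each pair to get the (x, y) order that OpenCV expects
pts_xy = [(c, r) for (r, c) in pts_rc]
print(pts_xy)  # [(50.0, 100.0), (250.0, 64.0), (234.0, 251.0)]
```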
Finding the perspective transform from these points gives the correct result:
For reference, this is the code I used:
import cv2
import numpy as np

def find_corners(image):
    im = cv2.Canny(image, 100, 200)
    cnt = cv2.findContours(im, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
    cnt = cv2.approxPolyDP(cnt[0], 5, True)
    return cnt.astype(np.float32)

def main(argv):
    trap = cv2.imread('trap.png', cv2.IMREAD_GRAYSCALE)
    rect = cv2.imread('rect.png', cv2.IMREAD_GRAYSCALE)
    ptsTrap = find_corners(trap)
    ptsRect = find_corners(rect)
    T = cv2.getPerspectiveTransform(ptsTrap, ptsRect)
    warp = cv2.warpPerspective(trap, T, rect.shape[:2])
    cv2.imshow('', warp)
    cv2.imwrite('warp.png', warp)
    cv2.waitKey()
    cv2.destroyAllWindows()

Source: https://pythonquestion.com/post/opencv-perspective-transform-giving-unexpected-result/
Red Hat Bugzilla – Bug 603635
mailman breaks CC field incorrectly
Last modified: 2015-08-31 23:37:54 EDT
+++ This bug was initially created as a clone of Bug #461707 +++
* Description of problem:
When the customer sends an email in a certain way, the CC field in the
received email shows weird output.
For example, If email is sent in the following strings,
To: test-list@example.com
Cc: DS◯◯半導体 ◯◯課長1 <test1@node1.example.com>
Cc: DS◯◯半導体 ◯◯課長2 <test1@node1.example.com>
Output of Cc field in received email becomes as follows.
To: test-list@example.com
Cc: 長2, DS◯◯半導体 ◯◯課長,DS◯◯半導体 ◯◯課@node1.example.com
This problem occurs only when a mailing list created in mailman is specified in the To/Cc field when sending an email. So this seems to be a mailman-specific problem.
* How reproducible:
Always.
*Steps to Reproduce:
1. On one machine(eg, node1.example.com), setup postfix so
that it can send mail to each user within the local
machine and setup mailman and create one
list(eg,test-list).
2. Create at least 3 users for mail accounts(eg,user1,user2,
user3) and include user1 into the list(test-list).
3. Send a mail from user2 or user3 account in the following
way.
To: test-list@node1.example.com
Cc: DS◯◯半導体 ◯◯課長1 <user2@node1.example.com>
Cc: DS◯◯半導体 ◯◯課長2 <user3@node1.example.com>
4. Check the mail in user1 account(user1 should have received
the mail through the test-list) and see Cc field on the
received mail. It should look like the following.
From: user2(or user3)
Sender: test-list-bounces@node1.example.com
Date: XX:XX XM
To: test-list@node1.example.com
Cc: 長2,◯◯半導体 ◯◯課長1,DS◯◯半導体 ◯◯課@node1.example.com
Note: This problem is reproducible even if you use only
English characters. I couldn't work out what pattern of
string triggers it, but it looks like it occurs when you
put long strings in the Cc field. Here is an example of
step 3 in English characters.
3. Send a mail from user2 or user3 account in the following
way.
To: test-list@node1.example.com
Cc: DSDS◯◯ handotai ◯◯kachokacho2 <user2.node1.example.com>
Cc: DSDS◯◯ handotai ◯◯kachokacho3 <user3.node1.example.com>
4. Check the mail in user1 account(user1 should have
received the mail through the test-list) and see Cc field
on the received mail. It should look like the following.
achokacho3,DSDS◯◯ handotai ◯◯kachokacho2,DSDS◯◯ handotai ◯◯k@node1.example.com
*The display in the Cc field might look different depending on whether you sent the mail in UTF-8 or another character set. It does look weird anyway, though.
*Actual results:
The output in the Cc field of the received mail looks weird.
*Expected results:
The display name should be printed in the Cc field correctly.
*Additional info:
mailman-2.1.9-2/ RHEL5.1
----------------------------------
The problem is that the python script in mailman puts COMMASPACE(", ") between strings in the Cc field where it shouldn't. Putting COMMASPACE between the strings makes email clients split the strings in the Cc field on display.
*CC field in Recieved mail header
Cc: =?ISO-2022-JP?B?GyRCIXkheSF5IXkheSF5IXkheSF5IXkheSF5IXkheSF5IXkheRsoQg==?=@dhcp-1-164.bne.redhat.com,
=?ISO-2022-JP?B?GyRCIXkheSF5GyhC?= <user5@dhcp-1-164.bne.redhat.com>
Note: COMMASPACE(", ") after @dhcp-1-164.bne.redhat.com is causing the problem.
The code adding the COMMPASPACE(", ") in the Cc field is the following.
From /usr/lib/mailman/pythonlib/email/Utils.py
115 def getaddresses(fieldvalues):
116 """Return a list of (REALNAME, EMAIL) for each fieldvalue."""
117 all = COMMASPACE.join(fieldvalues)
118 a = _AddressList(all)
119 return a.addresslist
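For reference, the behaviour is easy to see with the stdlib directly (the modern email.utils module exposes the same function). Each element of fieldvalues is meant to be one complete header value; the function joins them with ", " before parsing:

```python
from email.utils import getaddresses

# Two complete header values -> two (realname, email) pairs.
pairs = getaddresses(['Alice Example <alice@example.com>',
                      'Bob Example <bob@example.com>'])
print(pairs)
# [('Alice Example', 'alice@example.com'), ('Bob Example', 'bob@example.com')]
```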
I've changed line 117 as follows for testing. This way, it doesn't put COMMASPACE in between the strings.
117 all = fieldvalues
After I made the above change, I could no longer observe the problem, even when I put multiple recipients in the Cc field. But I doubt this is the right fix.
getaddresses is used in /usr/lib/mailman/Mailman/Handlers/AvoidDuplicates.py
def process(mlist, msg, msgdata):
...
# RFC 2822 specifies zero or one CC header
del msg['cc']
if ccaddrs:
msg['Cc'] = COMMASPACE.join([formataddr(i) for i in ccaddrs.values()])
--- Additional comment from tao@redhat.com on 2008-10-29 20:39:58 EDT ---
Hi,
It's been few months since I've escalated this ticket. My customer is
continuously contacting me for the fix.
Could you give the current status of this issue? The customer especially
wonder when it will be fixed so that they can plan the deployment of the
system.
Please let me know which update you're planning to fix the issue.
Regards,
Masahiro
This event sent from IssueTracker by mokubo
issue 184316
--- Additional comment from dnovotny@redhat.com on 2008-11-06 04:44:53 EST ---
hello,
I am a new maintainer of mailman and I've started to look at this bug. From the analysis you showed above I think there is a different problem:
def getaddresses(fieldvalues):
116 """Return a list of (REALNAME, EMAIL) for each fieldvalue."""
117 all = COMMASPACE.join(fieldvalues)
the problem is that "fieldvalues" should contain a list where each item is *one mail address* - that way, when you put COMMASPACE between them, you get addresses separated by commas.
in the example of the erroneous behavior above it is clear that the *splitting* is done wrong: it splits in the middle, so one address is torn apart and mailman thinks it is two (and puts COMMASPACE between "them")
the question is where this splitting occurs: if you send wrong "fieldvalues" - split in the middle - to getaddresses(), you will get a wrong cc field, obviously
so the analysis has to look at why the caller of the function "getaddresses()" puts two separate items in the list where there should be one...
--- Additional comment from pm-rhel@redhat.com on 2009-03-26 13:27:27 EDT ---
"?"
--- Additional comment from dnovotny@redhat.com on 2009-06-12 08:53:53 EDT ---
Created an attachment (id=347555)
this is the sample of the mail which causes the problem
--- Additional comment from dnovotny@redhat.com on 2009-06-12 08:58:15 EDT ---
looking at the sample mail (comment #6)
I can see, that the name is broken apart by a newline character, because it is long... this seems to be parsed by _parseaddr.py . The investigation led me to a similar Debian bug:
they solved the problem by patching the _parseaddr.py, backporting from a newer version:
--- /usr/lib/mailman/pythonlib/email/_parseaddr.py.distrib 2006-06-13 05:43:49.000000000 +0200
+++ /usr/lib/mailman/pythonlib/email/_parseaddr.py 2009-02-20 13:19:35.000000000 +0100
@@ -170,6 +170,7 @@
self.pos = 0
self.LWS = ' \t'
self.CR = '\r\n'
+ self.FWS = self.LWS + self.CR
self.atomends = self.specials + self.LWS + self.CR
# Note that RFC 2822 now specifies `.' as obs-phrase, meaning that it
# is obsolete syntax. RFC 2822 requires that we recognize obsolete
@@ -416,7 +417,7 @@
plist = []
while self.pos < len(self.field):
- if self.field[self.pos] in self.LWS:
+ if self.field[self.pos] in self.FWS:
self.pos += 1
elif self.field[self.pos] == '"':
plist.append(self.getquote())
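With that FWS handling in place (it is present in current Python versions), a display name folded across a CRLF-plus-space continuation parses as one address instead of being torn apart. A quick check:

```python
from email.utils import getaddresses

# A long display name folded onto a second line, as mail software does
# when wrapping long headers.
folded = 'DSDS handotai\r\n kachokacho2 <user2@node1.example.com>'
pairs = getaddresses([folded])
print(pairs)
assert len(pairs) == 1  # one address, not two
```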
--- Additional comment from dnovotny@redhat.com on 2009-06-12 08:59:37 EDT ---
can you try this, if it helps our problem?
--- Additional comment from dnovotny@redhat.com on 2009-06-12 09:06:48 EDT ---
Created an attachment (id=347558)
the proposed patch
--- Additional comment from tao@redhat.com on 2009-06-22 20:13:51 EDT ---
Event posted on 06-22-2009 08:13pm EDT by mokubo
Hi,
Any update on this? Our customer want the fix as early as possible and
wants to know when the fix
will be included.
Please let us know when the patch will be included in the package.
Regards,
Masa
This event sent from IssueTracker by mokubo
issue 184316
--- Additional comment from dnovotny@redhat.com on 2009-06-24 06:29:20 EDT ---
hello Masa,
with my patch, the cc field seems to be all in one row and not broken, but I'm not sure if I reproduced the problem correctly, because I don't know Japanese characters. I've built RPM packages of mailman with my patch, so the customer (or you) can use them and see if this helps
D.
Committed to CVS, fixed in version mailman-2.1.12-12.el6.
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.

Source: https://bugzilla.redhat.com/show_bug.cgi?id=603635
#include <StelObserver.hpp>
Inherited by SpaceShipObserver.
Create a new StelObserver instance which is at a fixed Location.
Update StelObserver info if needed. Default implementation does nothing.
Reimplemented in SpaceShipObserver.
Get the position of the home planet center in the heliocentric VSOP87 frame in AU.
Get the distance between observer and home planet center in AU.
Get the information on the current location.
Return whether the life of this observer is over, and therefore whether it should be replaced by the next observer provided by the getNextObserver() method.
Reimplemented in SpaceShipObserver.
Get the next observer to use once the life of this one is over.
Reimplemented in SpaceShipObserver.

Source: http://stellarium.org/doc/0.10.4/classStelObserver.html
OK, I made a post earlier, but my code is a lot different and this is a different question, so I didn't know where to put it.
My question is this... it's about my calcScore function. This is my requirement:
Design and implement a function double calcScore() that calculates and returns the average of the three scores that remain after dropping the highest and lowest scores a performer received. This function should be called once for each contestant by your function main(), and it should be passed the Stat object that contains the 5 scores from the judges.
I am having trouble passing the values of scores 1-5 into the calcScore function. If I didn't have to use the function it wouldn't be a problem. But I do, and I don't know how to do it.
#include "Stat.h"
#include <iostream>
using namespace std;

void getJudgeData(int &);
double calcScore();

int main()
{
    Stat s;
    int score1 = 0, score2 = 0, score3 = 0, score4 = 0, score5 = 0;
    cout << "This program assignment number 4 was created by Jeremy Rice" << endl;
    cout << "The Next American Idol!!!" << endl;
    cout << "Please enter your scores below!" << endl;
    cout << "Enter contestant #1's scores!" << endl;
    cout << "Judge #1 what is your score" << endl;
    getJudgeData(score1);
    s.setNum(score1);
    cout << "Judge #2 what is your score" << endl;
    getJudgeData(score2);
    s.setNum(score2);
    cout << "Judge #3 what is your score" << endl;
    getJudgeData(score3);
    s.setNum(score3);
    cout << "Judge #4 what is your score" << endl;
    getJudgeData(score4);
    s.setNum(score4);
    cout << "Judge #5 what is your score" << endl;
    getJudgeData(score5);
    s.setNum(score5);
    cout << "Talent score for contestant #1 is: " << calcScore() << endl;
    system("pause");
    return 0;
}

void getJudgeData(int &getNum)
{
    Stat s;
    cin >> getNum;
    return;
}

double calcScore()
{
    Stat s;
    cout << (s.getSum() - s.getMin() - s.getMax()) / 3 << endl;
}

Source: https://www.daniweb.com/programming/software-development/threads/94896/class-use-in-a-function
The overhead for column families was greatly reduced in 0.8 and 1.0.
It should now be possible to have hundreds or thousands of column
families. The setting 'memtable_total_space_in_mb' was introduced that
allows for a global memtable threshold, and cassandra will handle
flushing on its own.
See
Another thing you should consider is the lack of built in access
controls. There is an authentication/authorization interface you can
plug in to and examples in the examples/ directory of the source
download.
On Wed, Dec 21, 2011 at 10:36 AM, Ryan Lowe <ryanjlowe@gmail.com> wrote:
> What we have done to avoid creating multiple column families is to sort of
> namespace the row key. So if we have a column family of Users and accounts:
> "AccountA" and "AccountB", we do the following:
>
> Column Family User:
> "AccountA/ryan" : { first: Ryan, last: Lowe }
> "AccountB/ryan" : { first: Ryan, last: Smith}
>
> etc.
>
> For our needs, this did the same thing as having 2 "User" column families
> for "AccountA" and "AccountB"
>
> Ryan
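The row-key namespacing in the quoted reply above is plain string composition; a tiny helper pair (names are illustrative, not part of any Cassandra API) makes the convention explicit:

```python
def make_key(account, user):
    # "AccountA" + "ryan" -> "AccountA/ryan"
    return "%s/%s" % (account, user)

def split_key(key):
    # Split on the first "/" only, in case the user part contains "/".
    account, _, user = key.partition("/")
    return account, user

print(make_key("AccountA", "ryan"))  # AccountA/ryan
print(split_key("AccountB/ryan"))    # ('AccountB', 'ryan')
```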
>
>
> On Wed, Dec 21, 2011 at 10:34 AM, Flavio Baronti <f.baronti@list-group.com>
> wrote:
>>
>> Hi,
>>
>> based on my experience with Cassandra 0.7.4, I strongly discourage you from
>> doing that: we tried dynamic creation of column families, and it was a
>> nightmare.
>> First of all, the operation can not be done concurrently, therefore you
>> must find a way to avoid parallel creation (over all the cluster, not in a
>> single node).
>> The main problem however is with timestamps. The structure of your
>> keyspace is versioned with a time-dependent id, which is assigned by the
>> host where you perform the schema update based on the local machine time. If
>> you do two updates in close succession on two different nodes, and their
>> clocks are not perfectly synchronized (and they will never be), Cassandra
>> might be confused by their relative ordering, and stop working altogether.
>>
>> Bottom line: don't.
>>
>> Flavio
>>
>> On 12/21/2011 2:45 PM, Rafael Almeida wrote:
>>
>>> Hello,
>>>
>>
Source: http://mail-archives.apache.org/mod_mbox/incubator-cassandra-user/201112.mbox/%3CCALHkrw9o2i0cmc9iGa=k9nzjOjH267UvQB2HRG3HzEXr-yJCCg@mail.gmail.com%3E
#include <DofMapper.hpp>
Perform mappings between degrees-of-freedom and equation-indices.
A degree-of-freedom is specified by four things: (entity-type,entity-id,field,offset-into-field)
An equation-index is a member of a globally contiguous, zero-based index space.
A DOF-mapping allows the caller to provide a degree-of-freedom and obtain an equation-index. A reverse DOF-mapping allows the caller to provide an equation-index and obtain a degree-of-freedom.
By default this DofMapper class provides DOF-mappings and reverse-DOF-mappings. Providing reverse DOF-mappings consumes extra memory since it requires constructing an additional FEI object to do the reverse lookups. If this is not desired, reverse-mappings can be disabled using DofMapper's second constructor argument.
The FEI library is utilized for accumulating and storing the mappings. (fei::VectorSpace provides DOF-mappings, and fei::ReverseMapper provides reverse-DOF-mappings.)
Since the FEI works entirely with primitive data types (e.g., int) and has no knowledge of stk::mesh types, this DofMapper class essentially acts as a translation bridge between stk::mesh and the FEI library.
Definition at line 52 of file DofMapper.hpp.
Constructor that internally creates an fei::VectorSpace object.
Definition at line 24 of file DofMapper.cpp.
Constructor that accepts an existing fei::VectorSpace object.
Destructor.
Definition at line 32 of file DofMapper.cpp.
Given a mesh, an entity-type and a field, store the resulting DOF mappings. This method iterates the buckets for the specified entity-type, and for each bucket that has the given field and is selected by the specified selector. DOF-mappings are stored for each entity-id in the bucket.
This method may be called repeatedly, to add dof mappings for different parts, different entity-types, different fields, etc.
Definition at line 50 of file DofMapper.cpp.
This method internally calls fei::VectorSpace::initComplete(), which finalizes and synchronizes the DOF-mappings (ensures that indices for shared-entities are consistent, etc.). Also, if reverse-mappings are not disabled, this method creates the reverse-mappings object. (The get_dof() method is not available until after this has happened.)
This is a collective method, must be called on all processors.
Definition at line 124 of file DofMapper.cpp.
Query whether reverse-DOF-mappings are enabled. (See second constructor argument above.)
Definition at line 89 of file DofMapper.hpp.
Return the integer id that the specified field is mapped to. The integer id is the FEI's representation of the field.
Definition at line 136 of file DofMapper.cpp.
Return a global equation index for the specified entity type/id pair and field.
Note: this method should be const, but it calls an fei query that is not const. When the fei method is corrected, this method will be made const.
Note2: this method may not work correctly until after 'finalize()' has been called.
Definition at line 142 of file DofMapper.cpp.
Given a global_index, return the specification for the DOF that it corresponds to. Throw an exception if the global_index is not found, or if DofMapper::finalize() has not been called. Note: this method will be const after the corresponding fei query is corrected for constness.
Definition at line 169 of file DofMapper.cpp.
Return the underlying fei::VectorSpace object.
Definition at line 121 of file DofMapper.hpp.
Return the underlying fei::VectorSpace object.
Definition at line 125 of file DofMapper.hpp.

Source: http://trilinos.sandia.gov/packages/docs/r11.2/packages/stk/doc/html/classstk_1_1linsys_1_1DofMapper.html
Created on 2014-07-18 12:49 by eddygeek, last changed 2018-12-09 20:12 by serhiy.storchaka. This issue is now closed.
pickle.loads raises a TypeError when calling the datetime constructor, (then a UnicodeEncodeError in the load_reduce function).
A short test program & the log, including dis output of both PY2 and PY3 pickles, are available in this gist, and an extract on Stack Overflow:
I am using pickle.dumps(reply, protocol=2) in PY2
then pickle._loads(pickled, fix_imports=True, encoding='latin1') in PY3
(tried None and utf-8 without success)
Native cPickle loads decoding fails too, I am only using pure python's _loads for debugging.
Sorry if this is misguided (first time here)
Regards,
Edward
I have no idea what was done to pickle for Python3, but this line works for me to unpickle a Python2 protocol 2 datetime pickle under Python3, where P2 is the Python2 pickle string:
pickle.loads(bytes(P2, encoding='latin1'), encoding='bytes')
For example,
>>> P2
'\x80\x02cdatetime\ndatetime\nq\x00U\n\x07\xde\x07\x12\r%%\x06\xe1\xb8q\x01\x85q\x02Rq\x03.'
>>> pickle.loads(bytes(P2, encoding='latin1'), encoding='bytes')
datetime.datetime(2014, 7, 18, 13, 37, 37, 451000)
I don't understand the Python3 loads() docs with respect to the "encoding" and "errors" arguments, and can't guess whether this is the intended way. It seems at best highly obscure. But hard to argue with something that works ;-)
The code works when using encoding='bytes'. Thanks Tim for the suggestion.
So this is not a bug, but is there any sense in having encoding='ASCII' by default in pickle ?
@eddygeek, I'd still call something so unintuitive "a bug" - it's hard to believe this is the _intended_ way to get it to work. So I'd keep this open until someone with better knowledge of intent chimes in.
> The code works when using encoding='bytes'. Thanks Tim for the suggestion.
> So this is not a bug, but is there any sense in having encoding='ASCII' by default in pickle ?
It is most definitely a bug. And it adds another road block to moving python applications from 2.7 to 3.x!
encoding='bytes' has serious side effects and isn't useful in the general case. For instance, it will result in dict-keys being unpickled as bytes instead of as str after which hilarity ensues.
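That side effect is easy to demonstrate with a tiny Python 2 pickle of {'key': 1} (protocol 2, written out by hand here):

```python
import pickle

# What Python 2's pickle.dumps({'key': 1}, 2) produces.
p2 = b'\x80\x02}q\x00U\x03keyq\x01K\x01s.'

print(pickle.loads(p2))                    # {'key': 1}
print(pickle.loads(p2, encoding='bytes'))  # {b'key': 1} -- str keys come back as bytes
```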
I got the exception
UnicodeDecodeError: 'ascii' codec can't decode byte 0xdf in position 1: ordinal not in range(128)
when testing an application for compatibility in Python 3.5 on a pickle created by Python 2.7. The pickled data is a nested data structure and it took me quite a while to determine that the single datetime instance was the culprit.
Here is a small test case that reproduces the problem::
# -*- coding: utf-8 -*-
# pickle_dump.py
import datetime, pickle, uuid
dti = datetime.datetime(2015, 10, 12, 13, 17, 42, 123456)
data = { "ascii" : "abc", "text" : u"äbc", "int" : 42, "date-time" : dti }
with open("/tmp/pickle.test", "wb") as file :
pickle.dump(data, file, protocol=2)
# pickle_load.py
# -*- coding: utf-8 -*-
import pickle
with open("/tmp/pickle.test", "rb") as file :
data = pickle.load(file)
print(data)
$ python2.7 pickle_dump.py
$ python2.7 pickle_load.py
{'ascii': 'abc', 'text': u'\xe4bc', 'int': 42, 'date-time': datetime.datetime(2015, 10, 12, 13, 17, 42, 123456)}
$ python3.5 pickle_load.py
Traceback (most recent call last):
File "pickle_load.py", line 6, in <module>
data = pickle.load(file)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xdf in position 1: ordinal not in range(128)
That error message is spectacularly useless.
There are two issues here.
1. datetime.datetime accepts only bytes, not str.
2. Unpickling non-ASCII str pickled in Python 2 raises an error by default.
The second issue usually hides the first one. The demonstration of the first issue:
>>> pickle.loads(b'cdatetime\ndatetime\n(U\n\x07l\x01\x01\x00\x00\x00\x00\x00\x00tR.')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: an integer is required (got type str)
The first issue can be solved by accepting str argument and encoding it to bytes. The second issue can be solved by changing an encoding or an error handler. Following patch uses the "surrogateescape" error handler.
>>> pickle.loads(b'cdatetime\ndatetime\n(U\n\x07l\x01\x01\x00\x00\x00\x00\xc3\xa4tR.')
datetime.datetime(1900, 1, 1, 0, 0, 0, 50084)
Unfortunately setting the "surrogateescape" error handler by default has a side effect. It can hide string decoding errors. In addition, unpickling datetime will not always work with different encodings.
>>> pickle.loads(b'cdatetime\ndatetime\n(U\n\x07l\x01\x01\x00\x00\x00\x00\xc3\xa4tR.', encoding='latin1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 8-9: ordinal not in range(128)
>>> pickle.loads(b'cdatetime\ndatetime\n(U\n\x07l\x01\x01\x00\x00\x00\x00\xc3\xa4tR.', encoding='utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: an integer is required (got type str)
> The?
The problem is that you can't unpickle a data that contains both datetime
classes (datetime, date, time) instances and strings (including attribute
names, so actually this affects instances of any Python classes). Yes, it only
affects pickles transferred between 2.x and 3.x Pythons.
Yet one possible solution is to change datetime classes in 2.x to produce more
portable pickles. But new pickles will be larger, and pickling and unpickling
will be slower, and this doesn't solve a problem with existing pickled data.
We still are receiving bug reports for 2.7.3 and like.
I wonder if this can be fixed using a fix_imports hook. I agree, it would be nice to fix this issue by modifying 3.x versions only.
> .. pickling and unpickling will be slower
If we are concerned about performance, we should definitely avoid the decode-encode roundtrip.
Here is a patch against 2.7 that makes datetime pickling portable.
It doesn't solve problem with existing pickled data, but at least it allows to convert existing pickled data with 2.7 to portable format.
Here is alternative patch for 2.7. It makes datetime pickling produce the same data as in 3.x.
The side effect of this approach: it makes datetime pickling incompatible with Unicode disabled builds of 2.x.
IMNSHO, the problem lies in the Python 3 pickle.py and it is **not** restricted to datetime instances
(for a detailed rambling see) .
In Python 2, 8-bit strings are used for text and for binary data. Well designed applications will use unicode for all text, but Python 2 itself forces some text to be 8-bit string, e.g., names of attributes, classes, and functions. In other words, **any 8-bit strings explicitly created by such an application will contain binary data.**
In Python 2, pickle.dump uses BINSTRING (and SHORT_BINSTRING) for 8-bit strings; Python 3 uses BINBYTES (and SHORT_BINBYTES) instead.
In Python 3, pickle.load should handle BINSTRING (and SHORT_BINSTRING) like this:
* convert ASCII values to `str`
* convert non-ASCII values to `bytes`
`bytes` is Python 3's equivalent to Python 2's 8-bit string!
It is only because of the use of 8-bit strings for Python 2 names that the mapping to `str` is necessary but all such names are guaranteed to be ASCII!
I would propose to change `load_binstring` and `load_short_binstring` to call a function like::
def _decode_binstring(self, value):
# Used to allow strings from Python 2 to be decoded either as
# bytes or Unicode strings. This should be used only with the
# BINSTRING and SHORT_BINSTRING opcodes.
if self.encoding != "bytes":
try :
return value.decode("ASCII")
except UnicodeDecodeError:
pass
return value
instead of the currently called `_decode_string`.
Christian,
I don't think your solution will work for date/time/datetime pickles. There are many values for which pickle payload consists of bytes within 0-127 range. IIUC, you propose to decode those to Python 3 strings using ASCII encoding. This will in turn require accepting str type in date/time/datetime constructors.
Alexander Belopolsky wrote at Thu, 15 Oct 2015 17:56:42 +0000:
> I don't think your solution will work for date/time/datetime pickles.
> There are many values for which pickle payload consists of bytes
> within 0-127 range.
Hmmmm.
> IIUC, you propose to decode those to Python 3
> strings using ASCII encoding.
Yes. There are too many BINSTRING instances that need to be Python 3
strings.
> This will in turn require accepting str
> type in date/time/datetime constructors.
These datetime... constructors are strange beasts already.
The documentation says that three integer arguments are required for
datetime.datetime but it accepts a single bytes argument anyway. I
agree that it would be much nicer if there was a
datetime.datetime.load method instead. Unfortunately, that would
require Guido's time machine to go back all the way to 2003 (at least).
So yes, the only practical solution is to accept a single str typed
argument (as long as it is ASCII only). An alternative would be to add
a dispatch table for loading functions to Python 3's pickle that would
be used by load_global. That would add indirection for the datetime
constructors but would allow support for other types requiring
arguments of type bytes.
The change I proposed in
to fix the handling of binary 8-bit strings is still necessary.
To summarize:
IMHO the solution needs to be implemented in Python 3 — otherwise
pickles with binary strings created by Python 2.x cannot be loaded in
Python 3. Changing the pickle implementation of Python 2 doesn't fix
existing pickles and couldn't fix the general problem of binary
strings, anyway.
This issue is getting old. Is there any way to solve this for Python 3.6?
TL;DR - Just one more example of why nobody should *ever* use pickle under any circumstances. It is useless for data that is not transient for consumption by the same exact versions of all software that created it.
Patches against 2.7 are not useful here. Either we write a unpickle deserializer for python 2 datetime pickles that works for all existing previous datatime pickled data formats from Python 3. Or we close this as rejected because the data formats are rightly incompatible as the in-process object states are incompatible between the two versions.
If you want to serialize something, use a language agnostic data format - ideally one with a defined schema. Never pickle.
Advice for those who have stored such data in Python 2 pickles: Write a Python 2 program to read your data and rewrite it in a portable data format that has nothing to do with pickle. Anything else is a gross hack.
NumPy starves from the same issue. In NumPy this problem was solved by requiring encoding='latin1' passed to unpickler. It makes sense to use the same approach for datetime classes.
New changeset 8452ca15f41061c8a6297d7956df22ab476d4df4 by Serhiy Storchaka in branch 'master':
bpo-22005: Fixed unpickling instances of datetime classes pickled by Python 2. (GH-11017)
New changeset 0d5730e6437b157f4aeaf5d2e67abca23448c29a by Serhiy Storchaka in branch '3.7':
[3.7] bpo-22005: Fixed unpickling instances of datetime classes pickled by Python 2. (GH-11017) (GH-11022)
New changeset 19f6e83bf03b3ce22300638906bd90dd2dd5c463 by Serhiy Storchaka (Miss Islington (bot)) in branch '3.6':
bpo-22005: Fixed unpickling instances of datetime classes pickled by Python 2. (GH-11017) (GH-11022) (GH-11024)
New changeset 1133a8c0efabf6b33a169039cf6e2e03bfe6cfe3 by Serhiy Storchaka in branch 'master':
bpo-22005: Fix condition for unpickling a date object. (GH-11025)
I?
This is the same hack as in NumPy, so we are at least consistent here. I think we have to keep it some time after 2020, maybe to 2025.
@Serhiy Any chance we can roll these back before the release so that they can have some time for discussion? I have serious concerns about having to support some Python 2/3 compatibility hack in datetime for the next 6 years. If this is worth doing at all, I think it can safely wait until the next release.
This issue is already open for a long time. There is a problem which can not be worked around from the user side. I want to get it solved in 3.6, and today is the last chance for this. This is important for migrating from Python 2 to Python 3. You can open a discussion on Python-Dev, and if there will be significant opposition, this change can be reverted before releasing the final version of 3.6.8.
I do not care enough about this to fight about it.
The issue has been open long enough that I do not think it justified the urgency of rushing through a patch just before the release and merging without review, but now that it is in the release of multiple versions, I think we may be stuck with*.
Paul Ganssle wrote at Fri, 07 Dec 2018 17:22:36 +0000:
> > Gregory P. Smith (gregory.p.smith) 2017-03-02 18:57
> > TL;DR - Just one more example of why nobody should *ever* use pickle
> > under any circumstances. It is useless for data that is not transient
> > for consumption by the same exact versions of all software that
> > created*.
This is completely and utterly wrong, to put it mildly.
The official documentation of the pickle module states (I checked 2.7
and 3.7):
The pickle serialization format is guaranteed to be backwards
compatible across Python releases.
Considering that this issue is 4.5 years old, one would assume that the
pickle documentation would have been changed in the meantime if
Gregory's and Paul's view matched reality.
But my or your personal views about the usability of pickle don't
matter anyway. There are too many libraries and applications that have
been using pickle for many years.
I personally know about this kind of usage in applications since 1998.
In that particular case, the pickled information resides on machines
owned by the customers of the applications and **must** be readable by
any new version of the application no matter how old the file
containing the pickle is. Rewriting history by some Python developers
is not going to impress the companies involved!
Have a nice day!
New changeset e328753d91379274b699b93decff45de07854617 by Gregory P. Smith in branch 'master':
bpo-22005: Document the reality of pickle compatibility. (GH-11054)
New changeset 331bfa4f2c3026a35e111303df0f198d06b4e0c8 by Miss Islington (bot) in branch '3.7':
bpo-22005: Document the reality of pickle compatibility. (GH-11054)
It is fundamentally impossible for pickled data to magically cross the 2 and 3 language boundary unscathed.
The basic str/bytes/unicode types in the language changed meaning. Code must be written manually by the data owners to fix that up based on what the types and encodings should actually be in various places given the language version the data is being read into.
The code in the PRs for this bug appears to do that in the requisite required hacky manner for stored datetime instances.
This fact isn't new. It happened 10 years ago with the release of Python 3.0. The documentation is not a contract. I'm fixing it to mention this.
Serhiy: should this one be marked fixed?
With Gregory's addition I think this issue can be considered fixed. Thank you Gregory. | https://bugs.python.org/issue22005 | CC-MAIN-2021-04 | refinedweb | 2,576 | 66.33 |
I'm doing the Ruby Koans exercises and am a bit confused about why the answers come out the way they do in the test_default_value_is_the_same_object method. Below is the code:
def test_default_value_is_the_same_object
hash = Hash.new([])
hash[:one] << "uno"
hash[:two] << "dos"
assert_equal ["uno", "dos"], hash[:one]
assert_equal ["uno", "dos"], hash[:two]
assert_equal ["uno", "dos"], hash[:three]
end
I'm not sure why, no matter what the key is, the value is always "uno" and "dos"? I thought when the key is :one, the returned value should be "uno"; when the key is :two, the returned value should be "dos". Why, no matter what the keys are, is the value always the same array?
Thank you and I'm looking forward to your answer!
hash = Hash.new([]) will instantiate a new array with [] (let's call it Harvey), then make a hash with Harvey as its default value.

hash[:one] doesn't exist, so you get Harvey. Harvey gets "uno" added to him, using the Array#<< operator (equivalent to harvey.push("uno")).

hash[:two] also doesn't exist, so you get Harvey again (who, remember, already contains "uno"). He now also gets "dos".

hash[:three] returns Harvey, still with his "uno" and "dos".
If you wanted the code to behave like you think it should, with a different array in each key, you need to return a new array every time you want a default, not Harvey every single time:
hash = Hash.new { |h, k| h[k] = [] }
And if you just want the hash to have nothing to do with arrays, ignore Harvey and use Hash#[]= instead of Array#<<:

hash = Hash.new
hash[:one] = "uno"
hash[:two] = "dos"
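To see both behaviors side by side, here is a small runnable sketch (the variable names are mine, purely illustrative) contrasting the shared default object with the block form:

```ruby
# Shared default: every missed lookup returns the *same* array object.
shared = Hash.new([])
shared[:one] << "uno"
shared[:two] << "dos"
p shared[:three]  # => ["uno", "dos"] -- one array, mutated twice
p shared.keys     # => [] -- note: << never actually stored any keys!

# Block default: a new array is created (and stored) per missing key.
per_key = Hash.new { |h, k| h[k] = [] }
per_key[:one] << "uno"
per_key[:two] << "dos"
p per_key[:one]   # => ["uno"]
p per_key[:two]   # => ["dos"]
p per_key[:three] # => []
```

Note the surprising detail in the shared case: the hash still has no keys at all, because hash[:one] << "uno" only reads the default and mutates it, without ever assigning anything back into the hash.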
Before we can explore the available APIs for processing XML documents with Java, we’re going to need a few good examples. For most of this book, my examples are going to focus on XML protocols. These are XML applications used for machine-to-machine exchange of information across the Internet over HTTP. In this chapter I’ll show you how such documents move from one machine to another, and how you can use Java to interpose yourself in the process. However, since this is not a book about network programming, I’m going to be careful to keep all the details of network transport separate from the generation and processing of XML documents. When you work with an XML document, you don’t care whether it came from a file, a network socket, a string, or something else.
Three such XML protocol applications are of particular interest. The first is a very straightforward application called RSS. RSS is used to exchange headlines and abstracts between different Web news sites. It is available in two versions, RSS 0.9.1, which is based on an early working draft of the Resource Description Framework (RDF), and RSS 1.0 which is based on the final W3C recommendation of RDF. Both variants are used on the Web today.
The second XML application I’ll investigate in some detail is XML-RPC. XML-RPC supports remote procedure calls across the Internet by passing method names and arguments embedded in an XML document over HTTP. The third example application is a more complex implementation of this idea called SOAP. Whereas XML-RPC uses only elements, SOAP adds attributes and namespaces as well. SOAP even lets the body of the message be an XML element from some other vocabulary, so it opens up a host of other interesting examples.
One of the major uses of XML is for exchanging data between heterogenous systems. Given almost any collection of data, it’s straightforward to design some XML markup that fits it. Since XML is natively supported on essentially any platform of interest, you can send data encoded in such an XML application from point A to point B without worrying about whether point A and point B agree on how many bytes there are in a float, whether ints are big endian or little endian, whether strings are null delimited or use an initial length byte, or any of the myriad of other issues that arise when moving data between systems. As long as both ends of the connection agree on the XML application used, they can exchange information without worrying about what software produced the data. One side can use Perl and the other Java. One can use Windows and the other Unix. One can run on a mainframe and the other on a Mac. The document can be passed over HTTP, e-mail, NFS, BEEP, Jabber, or sneakernet. Everything except the XML document itself can be ignored.
The details of the XML markup used depend heavily on the information you’re exchanging. If you’re exchanging financial data, you might use the Open Financial Exchange (OFX). If you’re exchanging genetic codes, you might use the Gene Expression Markup Language (GEML). If you’re exchanging news articles in a syndication service, you might use NewsML. And if no standard XML application exists that fits your needs, you’ll probably invent your own; but whatever XML application you choose, there are certain features that will crop up again and again and that can benefit from standardization. These include the envelope used to pass the data and the representations of basic data types like integer and date.
When only two systems are involved, they only talk to each other, and they always send the same type of message, an envelope may not be needed. It’s enough for one system to send the other the message in the agreed upon XML format. However, when it’s actually many dozens, hundreds, or even thousands of different systems exchanging many different kinds of messages in many different ways, it’s useful to have some standards that are independent of the content of the message. This offers up some hope that when a message in an unrecognized format is received, it can still be processed in a reasonable fashion. For example, a system might receive a message ordering one thousand “Frodo Lives” buttons but not know how to handle that order. However, it may be able to read enough information from the envelope to route the request to the program that does know how to process the order.
In XML-RPC, essentially all the markup is the envelope and all the text content is the data inside the envelope. SOAP and RSS are a little more complex. For SOAP, the envelope is an XML document, and the data is too. In some ways RSS, especially RSS 1.0, is the most complex of all because it’s based on the relatively complex RDF syntax. RDF mixes the envelope and the data together so that you can’t point to any one element in the whole document and say “That’s the envelope,” or “That element is the data.” Instead, pieces of both the envelope and the data are intermingled throughout the complete document. In all three cases, however, it’s straightforward to extract the data from the envelope for further processing.
Another area that’s ripe for standardization is the proper representation of low-level data such as dates and numbers. Nobody really cares how many bytes there are in an int as long as there are enough to hold all the values they want to hold. Nobody really cares whether dates are written Day-Month-Year or Month-Day-Year as long as it’s easy to tell which is which. It doesn’t really matter how this information is passed, as long as there’s one standard way of doing it that everyone can agree on and process without excessive hassle.
In XML all data of any type must be passed as text. The proper textual representation of simple data types such as integer and date is trickier than most developers initially assume. For example, integers can be straightforwardly represented in the form 42, -76, +34562, 0, and so forth. The normal base-10 representation with optional plus or minus signs is fully adequate for most needs. However, consider the number 28562476535, the dollar value of Bill Gates’s Microsoft stock holdings alone as of July 24, 2002. This is a perfectly good integer, albeit a large one. However, it’s so large that trying to use it in many programs will lead to a crash or some other form of error.
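To make the overflow concrete (my own illustration, not an example from this chapter): 28562476535 fits comfortably in a 64-bit long but not in a signed 32-bit int, so a receiver that naively parses the text into an int will either fail or silently corrupt the value.

```java
public class IntOverflow {
    public static void main(String[] args) {
        long value = 28562476535L;  // fine as a 64-bit long

        // The value exceeds the 32-bit signed range entirely.
        System.out.println(value > Integer.MAX_VALUE);  // true

        // Narrowing to int keeps only the low 32 bits -- garbage comes out.
        System.out.println((int) value);

        // And Integer.parseInt("28562476535") would throw NumberFormatException.
    }
}
```

This is exactly why agreeing on a textual representation is not enough; both sides also have to agree on the range a value may take.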
Floating point numbers are even worse. Two different computers can look at an unambiguous string such as 65431987467.324345192 and interpret it as two different numbers. Dates cause problems even for humans. Is 07/04/01 the fourth of July, 2001? the fourth of July 1901? the seventh of April 2001? Some other date? These are all very real issues that cause real problems in systems today.
XML itself doesn’t standardize the text representation of data, but the W3C XML Schema Languages does. In particular, schemas define the 44 simple data types shown in Table 2.1. By assigning these types to particular elements, you can clearly state what a particular string means in a syntax everyone can understand. And if these types aren’t enough, the W3C XML Schema Language also lets you define new types that are combinations or restrictions of these basic types.
Even without using schema validation or the full schema apparatus, you can use these types in your own documents. Simply attach an xsi:type attribute to any element identifying the type of that element's content. The xsi prefix is mapped to the http://www.w3.org/2001/XMLSchema-instance namespace URI. Example 2.1 shows an XML document that uses these types to label different parts of an order document. Notice that some things that might naively be assumed to be numeric types are in fact strings.
Example 2.1. An XML document that labels elements with schema simple types
<?xml version="1.0" encoding="ISO-8859-1"?>
<Order xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Customer id="c32" xsi:type="xsd:string">Chez Fred</Customer>
  <Product>
    <Name xsi:type="xsd:string">Birdsong Clock</Name>
    <SKU xsi:type="xsd:string">244</SKU>
    <Quantity xsi:type="xsd:int">12</Quantity>
    <Price currency="USD" xsi:type="xsd:decimal">21.95</Price>
    <ShipTo>
      <Street xsi:type="xsd:string">135 Airline Highway</Street>
      <City xsi:type="xsd:string">Narragansett</City>
      <State xsi:type="xsd:string">RI</State>
      <Zip xsi:type="xsd:string">02882</Zip>
    </ShipTo>
  </Product>
  <Product>
    <Name xsi:type="xsd:string">Brass Ship's Bell</Name>
    <SKU xsi:type="xsd:string">258</SKU>
    <Quantity xsi:type="xsd:int">1</Quantity>
    <Price currency="USD" xsi:type="xsd:decimal">144.95</Price>
    <Discount xsi:type="xsd:decimal">.10</Discount>
    <ShipTo>
      <GiftRecipient xsi:type="xsd:string">
        Samuel Johnson
      </GiftRecipient>
      <Street xsi:type="xsd:string">271 Old Homestead Way</Street>
      <City xsi:type="xsd:string">Woonsocket</City>
      <State xsi:type="xsd:string">RI</State>
      <Zip xsi:type="xsd:string">02895</Zip>
    </ShipTo>
    <GiftMessage xsi:type="xsd:string">
      Happy Father's Day to a great Dad! Love, Sam and Beatrice
    </GiftMessage>
  </Product>
  <Subtotal currency="USD" xsi:type="xsd:decimal">
    393.85
  </Subtotal>
  <Tax rate="7.0" currency="USD" xsi:type="xsd:decimal">28.20</Tax>
  <Shipping method="USPS" currency="USD" xsi:type="xsd:decimal">8.95</Shipping>
  <Total currency="USD" xsi:type="xsd:decimal">431.00</Total>
</Order>
As well as explicit labeling, a document can use a schema to indicate the type. However, right now the APIs for such things aren’t finished so it’s best to explicitly label elements when the types are important.
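Reading such labels back is straightforward with the JDK's namespace-aware DOM parser. The following sketch is my own illustration, not code from this book; it parses a one-element fragment shaped like Example 2.1 and prints the declared type next to the text content:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class TypeReader {
    static final String XSI = "http://www.w3.org/2001/XMLSchema-instance";

    public static void main(String[] args) throws Exception {
        String xml =
            "<Order xmlns:xsi='" + XSI + "'>"
          + "<Quantity xsi:type='xsd:int'>12</Quantity>"
          + "</Order>";

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);  // required to resolve the xsi prefix
        Document doc = factory.newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

        Element quantity = (Element) doc.getElementsByTagName("Quantity").item(0);
        String declaredType = quantity.getAttributeNS(XSI, "type");
        System.out.println(declaredType + " -> " + quantity.getTextContent().trim());
        // prints: xsd:int -> 12
    }
}
```

The one easy mistake here is forgetting setNamespaceAware(true): without it the parser treats xsi:type as an ordinary attribute name and getAttributeNS returns nothing.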
XML-RPC only uses the int, boolean, double, dateTime, and base64 types as well as a string type that's restricted to ASCII. It also does not allow the NaN, Inf, and -Inf values for double. It does not use xsi:type attributes, relying instead on predefined semantics for particular elements. SOAP allows all 44 types and does use xsi:type attributes to label elements.
I just wanted to know if, in Windows Forms, I can create a red line around the border of a combo box when it's changed: just a flash of red that's gone again a moment later, to show that it was changed and catch the user's eye. I will provide screens to represent what I would like.
If it is possible, please tell me where I can look it up to gain some information on it.
No border
Border flash on change
Border gone again after a second or two
Anytime the combobox changes, I want to flash a border to indicate it has changed.
The main idea is to use a timer and draw a border for some time. You can draw the border in different ways: for example, you can (1) draw the border on the ComboBox itself, or (2) draw the border on the Parent of the ComboBox.
In the answer which I posted, I created a MyComboBox and added a FlashHotBorder method which can be called to flash the border. I also added a HotBorderColor property which can be used to set the border color.
Flashing Border of ComboBox
To draw a border for the ComboBox you can handle the WM_PAINT message of the ComboBox and draw a border around the control. Then, to flash the border, you need to use a timer and turn the border on and off a few times:
MyComboBox Code
I've created a FlashHotBorder method which you can call in the SelectedIndexChanged event handler. Alternatively, if you always want to flash the border when the selected index changes, you can call it in OnSelectedIndexChanged; I prefer to call it in the event handler. Here is the implementation:
using System.Drawing;
using System.Windows.Forms;

public class MyComboBox : ComboBox
{
    private const int WM_PAINT = 0xF;

    private int flash = 0;
    private int buttonWidth = SystemInformation.HorizontalScrollBarArrowWidth;
    private Timer timer;

    public Color HotBorderColor { get; set; }
    private bool DrawBorder { get; set; }

    public MyComboBox()
    {
        this.HotBorderColor = Color.Red;
        timer = new Timer() { Interval = 100 };
        timer.Tick += new System.EventHandler(timer_Tick);
    }

    protected override void WndProc(ref Message m)
    {
        base.WndProc(ref m);
        if (m.Msg == WM_PAINT && this.DrawBorder)
            using (var g = Graphics.FromHwnd(this.Handle))
            using (var p = new Pen(this.HotBorderColor))
                g.DrawRectangle(p, 0, 0, this.Width - 1, this.Height - 1);
    }

    // Starts the flashing; the border toggles every 100 ms for 10 ticks.
    public void FlashHotBorder()
    {
        flash = 0;
        timer.Start();
    }

    private void timer_Tick(object sender, System.EventArgs e)
    {
        if (flash < 10)
        {
            flash++;
            this.DrawBorder = !this.DrawBorder;
            this.Invalidate();
        }
        else
        {
            timer.Stop();
            flash = 0;
            DrawBorder = false;
        }
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            timer.Dispose();
        }
        base.Dispose(disposing);
    }
}
Then it's enough to use this event handler for the SelectedIndexChanged event of each combo which you want to flash:

private void myComboBox1_SelectedIndexChanged(object sender, EventArgs e)
{
    var combo = sender as MyComboBox;
    if (combo != null)
        combo.FlashHotBorder();
}
A common thing you will want to learn in React is how to pass a value as a parameter through the onClick event handler. Read on to learn how!
import React from 'react'; const ExampleComponent = () => { function sayHello(name) { alert(`hello, ${name}`); } return ( <button onClick={() => sayHello('James')}>Greet</button> ); } export default ExampleComponent;
For those who want the TL;DR and want to go straight to the code, take a look at the example above ☝️.
Typically, to call a function when we click a button in React, we would simply pass in the name of the function to the onClick handler, like so:
... return ( <button onClick={sayHello}>Greet</button> ); ...
Notice how in the ExampleComponent code above, we pass in more than just the function name.
In order to pass a value as a parameter through the onClick handler we pass in an arrow function which returns a call to the sayHello function.
In our example, that argument is a string: ‘James’:
... return ( <button onClick={() => sayHello('James')}>Greet</button> ); ...
It’s this trick of writing an inline arrow function inside of the onClick handler which allows us to pass in values as a parameter in React.
Let’s explore some more common examples!
Pass a Button’s Value as a Parameter Through the onClick Event Handler
You might want to pass in the value or name of the button or input element through the event handler.
... return ( <button value="blue" onClick={e => changeColor(e.target.value)}>Color Change</button> ); ...
The button now has an additional attribute, named value. We can get access to the button’s attributes in its event handlers by accessing the event through the event handler.
The example above shows the variable e that is provided from the onClick event handler. This stands for event. Once we have the event, we can access values such as the value or name attribute.
To learn more about the onClick event handler read my tutorial on the onClick Event Handler (With Examples).
Anton Staykov's Blog

[…] with Azure Stream Analytics

Just a little over a month ago Microsoft announced the public preview of a new service – Stream Analytics. […] getting started tutorial here.

Now, in order to make things more interesting, I made the following adjustments:

- Scaled my event hub to 10 scale units, thus targeting potentially 10,000 events per second.
- Changed the Event Hub sample code a bit to pump out more messages.
- Created a small PowerShell script to help me start N simultaneous instances of my command line app.
- Did everything on a VM in the same Azure DC (West Europe) where my Event Hub and Stream Analytics are running.

Code changes to the original Service Bus Event Hub demo code: I stripped out all unnecessary code (i.e. creating the event hub – I have already created it and I know it is there; parsing command line arguments; etc.). My final Program.cs looks like this:

    static void Main(string[] args)
    {
        System.Net.ServicePointManager.DefaultConnectionLimit = 1024;
        // ...
    }

And the PowerShell script that starts 20 simultaneous instances:

    for($i=1; $i -le 20; $i++)
    {
        start .\BasicEventHubSample.exe
    }

[…] new service tiers of Azure SQL Database.

Bottom line: with my last try, I pumped 1,000,000 events into the Event Hub in just about 75 seconds! That makes a little above 13,000 events per second, with just a couple of lines of code. How cool is it to look at a graphic like this:

[chart omitted]

How cool is it to look at graphics like the Azure Event Hubs one:

[chart omitted: Azure Event Hubs, millions of messages]

How long would it take us if we had to create a local test lab to process that amount of data?

We must not forget some of the known issues and limitations of Stream Analytics, as listed here, the most important of them being:

- Geographic availability (Central US and West Europe)
- Streaming Unit quota (12 streaming units per region per subscription!)
- UTF-8 as the only supported encoding for CSV and JSON input sources
- Really neat performance metrics, such as latency, are not currently provided

With this baseline, I am convinced that Azure Event Hubs can really deliver millions of events per second of throughput, and that Stream Analytics can really process that amount of data.
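As a quick sanity check on the throughput arithmetic quoted above (1,000,000 events in about 75 seconds):

```python
events = 1_000_000
seconds = 75

rate = events / seconds
print(f"{rate:.0f} events/second")  # 13333 events/second, "a little above 13,000"
```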
If you wish, you can learn a bit more about how to manage your <a href="">Azure Active Directory here</a>.</p> <p>So, dear user, you have <a href="">created your Azure Web Site</a> and now you have to protect it with your Azure Active Directory tenant. Here are the three easy steps you have to follow:</p> <p>1. Navigate to the Configure tab of your Web site. Scroll down to the Authentication / Authorization section and click Configure</p><img src="" width="629" height="436"> <p>3. Select your Azure Active Directory (if you have not changed anything, the name of your Active Directory will most probably be “Default Directory”) and choose “Create new application”:</p> <p><img src=""></p> <p>Done:</p> <p><img src="" width="623" height="432"></p> <p>Now your site is protected by Azure Active Directory and has automatic Claims Authentication; you don’t have to worry about salting and hashing users’ passwords, and don’t need to worry about how users would reset their passwords and so on. Protecting your site has never been easier!</p> <p>What are the catches? <img class="wlEmoticon wlEmoticon-smile" style="border-top-style: none; border-bottom-style: none; border-right-style: none; border-left-style: none" alt="Smile" src=""> There is always a catch! First of all, this service is still in preview and has some limitations:</p> <ul> <li>You can only protect your site with your Azure Active Directory; you can add Microsoft Accounts (i.e. <a href="mailto:someone@hotmail.com">someone@hotmail.com</a>) to your Azure Active Directory, but not any external users (i.e. FaceBook, Google, Yahoo)</li> <li>With the current release all users in the configured directory will have access to the application. </li> <li>With the current release the whole site is placed behind the login requirement (you cannot define “public” pages, but it is relatively easy to do this in a web.config file). 
</li> <li>Headless authentication/authorization for API scenarios or service-to-service scenarios is not currently supported. </li> <li>With the current release there is no distributed log-out, so logging the user out will only do so for this application and not all global sessions (which means that if the user comes back, he/she will automatically be logged-in again). </li></ul> <p>Quick, easy and works across the whole stack of supported platforms on Azure Web Sites (.NET, PHP, Java, Node.JS, Python).</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Give me your e-mail to tell you if you are being hacked!<h3>History</h3> <p>A lot of accounts from public services have recently been hacked, exploited, publicly listed, etc. With every single account breach there are at least 5 services that tell you “check if your account has been hacked” and ask you for your e-mail or account username. Almost never asking for your password. Here I will try to explain why you, dear user, shall avoid using any of these services, even if the operator behind the service seems to be reputable, like the “<a href="">Bundesamt für Sicherheit in der Informationstechnik</a>” (or the German Agency for Information Security), which also offers the service <a href="">“Check if your account exists in the hackers networks that we monitor”</a>.</p> <h3>Problem</h3> <p>This year started with a lot of account breaches in different public services (mainly e-mail services). One such piece of news <a href="">was announced by the very same German Agency for Information Security</a>, where they so kindly offer you the free service of checking whether your account is subject to any identity theft. Then it was the <a href="">eBay accounts breach</a>. Then the <a href="">iCloud celebrity accounts breach</a>. Then the <a href="">Google account breach</a>. Probably much more in between. 
With every massively and hysterically announced account breach come a dozen sites to tell you</p> <blockquote> <p><em><font size="3">You should immediately change your password!</font></em></p></blockquote> <p>and </p> <blockquote> <p><em><font size="3">Hey, gimmie your e-mail, I will tell you if it is hacked!</font></em></p></blockquote> <p>while pretending </p> <blockquote> <p><font size="3">I will not save your e-mail address anywhere, you can trust me!</font></p></blockquote> <p>While the first warning makes some sense, none of the others does! </p> <blockquote> <p><font size="4"><em>For Your own good and safe Internet browsing, do not ever use any services that pretend to tell you if your account is being hacked or not!</em></font></p></blockquote> <p>Why? Here is the story of “Why?”</p> <h3>How the attacks work</h3> <p>Without pretending to be a thorough analysis, let me tell you how these attacks (for hacking user accounts) work.</p> <p>Online user identities are usually composed of three main components:</p> <ul> <li>A service (Facebook, Google, Microsoft, eBay, Apple, etc.)</li> <li>A Username / login</li> <li>A Password</li></ul> <p>In order to “hack” your account, the attacker first has to focus on a Service. This is the easiest part. Just follow the security reports from one or more monitoring agencies (like <a href="">Symantec</a>, <a href="">SANS Institute</a>, or any other) for a couple of months and watch which service comes up most often. Or just pick one. </p> <p>Ok, the attacker has identified the service to attack. Say this is Facebook. What next? Now he/she has to hack tens of millions of accounts. Using techniques like a <a href="">brute-force</a> attack to identify both login name + password will simply not work. Period. Nobody does this today! The attacker will look for other techniques to obtain, be careful here, <strong>your login name</strong>! Exactly! Your e-mail address. 
This very same e-mail address that other “friendly” services ask you to give them to check if your account is being hacked / hijacked! </p> <p>By giving your login name / E-mail address to a “let me check this for you” service, you simply fill the attacker’s database with <strong>real accounts </strong>that can later be used for password hacking!</p> <p>Now, because you, dear user, have left your e-mail address in such a service, you are already a potential target for a hacker attack! <strong>Please, never give your e-mail address or login name to any services of this kind!</strong> Not even to the German Agency for Information Security. Even if the service seems to be trustworthy, using such a service does not do any good for you at all! It only serves its owners for different purposes.</p> <p>We slowly came to the last component of an Online identity that an attacker has to crack to solve the puzzle – <strong>the password</strong>. Your precious “123456”. Again, passwords are (almost) never hacked using brute-force attacks. Attackers usually use dictionaries of the most widely used passwords. A so-called <a href="">dictionary attack</a>. Simple words, no (or few) special characters, no (or few) capital letters. An <a href="">analysis report</a> shows that even the recent iCloud security breach was committed using dictionaries. </p> <h4>Next steps</h4> <p>OK, now what? </p> <p>First and foremost, never give your account (e-mail address / login name) to 3rd parties! The worst that could happen – you will become a primary target for attacks, if you were safe until now! The least that could happen – you will be entered into a list for further monitoring – SPAM, hack attacks, etc.! Lists with valid e-mail addresses are being traded (sold for real money!) over the internet every day!</p> <p>To make sure you are secure online, never use a dictionary word in your password! Your password shall not consist of a single word! 
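To illustrate why a single dictionary word is weak, here is a toy Python sketch of a dictionary attack (purely illustrative, not from the post; the wordlist and the `victim` check are made up):

```python
# A toy dictionary attack: try every password from a small wordlist.
# Real attackers use wordlists with millions of entries.
common_passwords = ["123456", "password", "qwerty", "letmein", "monkey"]

def dictionary_attack(check_password):
    """check_password is the service's login check; return the guess that worked."""
    for guess in common_passwords:
        if check_password(guess):
            return guess
    return None  # the password was not a common dictionary entry

# A victim using the post's example password "123456" falls immediately:
victim = lambda attempt: attempt == "123456"
print(dictionary_attack(victim))  # 123456
```

A password that is not in the wordlist simply never gets guessed this way, which is exactly why the "no single dictionary word" advice works.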
Most of the online services already have mechanisms to prevent you from using weak passwords. Trust these “password strength” indicators and never let your password be in the “weak zone”. </p> <p>Well, be careful and always think about your own Internet safety! And never ever give your account from one Service (say Google) to another service (say the German Agency for Information Security). For your Google account, trust only Google. For your Facebook account, trust only Facebook, etc.</p> <p>If you see a report of an account hack or security breach, never rush to other services than the very one you use, which is responsible for your account. Most of the big players on the market already have forensic tools in place – make sure you know them and you know how to use them!</p> <h4>Google Account </h4> <p>If you use Google, then navigate to the security section in Your Account. When you are logged-in with your Google account on any of Google’s services, click on the little arrow next to your e-mail and select “Account”:</p> <p><img src=""></p> <p>Then navigate to Security:</p> <p><img src="" width="615" height="215"></p> <p>This page has the “Recent activity” section, which shows really good and interesting information.</p> <h4>Microsoft Account (former Windows Live ID / Hotmail) </h4> <p>If you use Microsoft services, the “Recent activity” information is in a similar place. Log in with your Microsoft account on any of Microsoft’s services (Hotmail/Outlook, OneDrive) and click on your name:</p> <p><img src="" width="283" height="193"></p> <p>Under “Account settings” you will find “Recent Activity”:</p> <p><img src="" width="357" height="244"></p> <h4>Final notes</h4> <p>Again, never leave (enter, give away) your personal account information to anyone on the Internet!</p> <p>Use strong passwords. It is not that important to change the password often! It is important to use a strong password and regularly check the account activity section. 
Change your password only if you see suspicious activity in the recent activity! Or if you receive a legitimate message from your service provider that you have to change your password. Like the e-mail all eBay users received in May 2014:</p> <p><img src="" width="593" height="486"></p> <p>When you receive such an e-mail, first check its authenticity – check the sender and reply-to addresses in the message properties. Check for official information on the sender’s (in that case eBay) public internet site. Never click on any link directly from the e-mail. Just navigate to the service as usual and change your password.</p> <p>When you enter your account information (login and password) <strong><font size="4">always</font></strong> check if you do it on the provider’s sign-in page by verifying the web page’s SSL Certificate! All the Big players pay for an Extended Validation Certificate, which makes the address bar / Certificate path green and displays their name (EV stands for Extended Validation):</p> <p><img src=""></p> <p><img src=""></p> <p>While others just save a couple of hundred dollars and do not pay for Extended Validation, still providing a trusted and encrypted connection with the site: </p> <p><img src=""></p> <p><img src=""></p> <p><img src=""></p> <p><strong>NEVER ENTER YOUR CREDENTIALS</strong> if the SSL Connection is not verified or not trusted:</p> <p><img src=""></p> <img src="" height="1" width="1" alt=""/>Anton Staykov PowerShell IaaS bulk add Endpoints<p>There are scenarios when your VMs on the Azure cloud will need a lot of EndPoints. Of course, you always have to be aware of the <a href="">limits that come with each Azure service</a>. But you also don’t want to add 20 endpoints (or 50) via the management portal. It will be too painful. 
</p> <p>Luckily, you can very easily add as many endpoints as you want using the following simple PowerShell script:</p><pre><br />Add-AzureAccount<br />Select-AzureSubscription -SubscriptionName "Your_Subscription_Name"<br />$vm = Get-AzureVM -ServiceName "CloudServiceName" -Name "VM_Name"<br />for ($i=6100; $i -le 6120; $i++)<br />{<br /> # the original loop body did not survive; this is the usual pattern – the endpoint name is a placeholder<br /> $vm = Add-AzureEndpoint -Name "Endpoint$i" -Protocol tcp -LocalPort $i -PublicPort $i -VM $vm<br />}<br />$vm | Update-AzureVM<br /></pre><p>The script is also available as a <a href="">Gist</a>. </p><br /><p>Of course, you can use this script in combination with <a href="">Non-Interactive OrgID Login Azure PowerShell</a> to fully automate your process.</p> <img src="" height="1" width="1" alt=""/>Anton Staykov PowerShell non-interactive login<p>An interesting topic, and very important for automation scenarios, is how to authenticate a PowerShell script by providing credentials non-interactively. </p> <p>Luckily, since a recent version of Azure PowerShell (0.8.6), you can provide an additional <strong><em>–credential</em></strong> parameter to the <a href="">Add-AzureAccount</a> command (hopefully documentation will be updated soon to reflect this additional parameter). This is very helpful and the key point to enable non-interactive PowerShell automations with organizational accounts (non-interactive management with PowerShell has always been possible with a Management Certificate).</p> <p>In order to provide proper credentials to Add-AzureAccount we need to properly protect our password and store it in a file that can later be used. For this we can use the following simple PowerShell command:</p><pre>read-host -assecurestring | convertfrom-securestring | out-file d:\tmp\securestring.txt<br /><br /></pre><br /><p>Next we have to use the previously saved password to construct the credentials needed for Add-AzureAccount:</p><pre># use the saved password <br />$password = cat d:\tmp\securestring.txt | convertto-securestring <br /># currently (August, the 13th, 2014) only organizational accounts are supported (also with custom domain). 
<br /># Microsoft Accounts (Live ID) are not supported <br /># the account name below is a placeholder<br />$cred = New-Object System.Management.Automation.PSCredential ("account@yourorganization.onmicrosoft.com", $password)<br />Add-AzureAccount -Credential $cred<br /></pre><br /><p>The full script is also available as a <a href="">Gist</a>.</p><br /><p>Credits go to <a href="">Jamie Thomson</a> and fellow MVP <a href="">Mike Wood</a> for their contribution on <a href="">StackOverflow</a>.</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Azure – secrets of a Web Site<p><a href="">Windows Azure Web Sites</a> are, I would say, the highest form of Platform-as-a-Service. As per the documentation, “<em>The fastest way to build for the cloud</em>”.</p> <h2>Project KUDU</h2> <p>What very few know or realize is that Windows Azure Web Sites runs <a href="">Project KUDU</a>, which is publicly available on <a href="">GitHub</a>. Yes, that’s right, Microsoft has released Project KUDU as an open source project so we can all peek inside, learn, even submit patches if we find something is wrong.</p> <h2>Deployment Credentials</h2> <p>You can read more about deployment credentials on the <a href="">WIKI page</a>.</p> <h2>KUDU console</h2> <p>I’m sure very few of you knew about the live streaming logs feature and the development console in Windows Azure Web Sites. And yet it is there. For every site we create, we get a domain name like</p> <p><a href=""></a></p> <p>And behind each site, there is automatically created one additional mapping:</p> <p><a href=""></a></p> <p>Which currently looks like this:</p> <p><img src=""></p> <p><img src=""></p> <h2>Log Stream</h2> <p>The last item in the menu of your KUDU magic is Streaming Logs:</p> <p><img src=""></p> <p>Here you can watch, in real time, all the logging of your web site. OK, not all. But everything you’ve sent to System.Diagnostics.Trace.WriteLine(string message) will come here. Not the IIS logs – your application’s logs.</p> <p><strong>Web Site Extensions</strong></p> <p>This big thing, which I described in my previous post, is all developed using KUDU Site Extensions – it is an Extension! 
And, if you played around with it, you might already have noticed that it actually runs under</p> <p><a href=""></a></p> <p>You can learn more on the <a href="">Site Extensions WIKI page on the KUDU project</a>. This is also an interesting part of KUDU where I suggest you go, investigate, play around!</p> <p>Happy holidays!</p> <img src="" height="1" width="1" alt=""/>Anton Staykov the trial-deploy-test time with Windows Azure Web Sites and Visual Studio Online<h2>Visual Studio Online</h2> <p>Not long ago <a href="">Visual Studio Online</a> went GA. What is not so widely mentioned is the hidden gem – a preview version of the actual Visual Studio IDE! Yes, this thing that we use to develop code has now gone online as a preview (check the <a href="">Preview Features page on the Windows Azure Portal</a>). </p> <p>- What can we do now? <br>- Live, real-time changes to a Windows Azure Web Site!<br>- Really !? How?</p> <p>First you need to create a new VSO account, if you don’t already have one (please waste no time but <a href="">get yours here</a>!). Then you need to link it to your Azure subscription! Unfortunately (or should I use “<strong>ironically</strong>”?) account linking (and creating from within the Azure management portal) is not available for an MSDN benefit account, as per the <a href="">FAQ here</a>. </p> <h2>Link an existing VSO account</h2> <p>Once you get (or if you already have) a VSO account, you can link it to your Azure subscription. Just sign in to the Azure Management portal with the same Microsoft Account (Live ID) used to create the VSO account. There you shall be able to see Visual Studio Online in the left hand navigation bar. Click on it. A page will appear asking you to create a new or link an existing VSO account. Pick the name of your VSO account and link it!</p><img style="margin: 0px" src=""> <h2>Enable VSO for an Azure Web Site</h2> <p>You have to enable VSO for each Azure Web Site you want to edit. 
This can be achieved by navigating to the target Azure Web Site inside the Azure Management Portal. Then go to <strong><em>Configure</em></strong>. Scroll down and find “Edit Site in Visual Studio Online” and switch this setting to ON. Wait for the operation to complete!</p> <p><img src=""></p> <h2>Edit the Web Site in VSO</h2> <p>Once Edit in VSO is enabled for your web site, navigate to the dashboard for this Web Site in the Windows Azure Management Portal. A new link will appear in the right hand set of links: “Edit this Web Site”:</p> <p><img src=""></p> <p>The VSO IDE is protected with your deployment credentials (if you don’t know what your deployment credentials are, please take a few minutes to read through <a href="">this article</a>). </p> <p>And there you go – your Web Site, your IDE, your Browser! What? You said that I forgot to deploy my site first? Well. Visual Studio Online<strong> is</strong> Visual Studio Online. So you can do “File –> New” and it works! Oh, yes it works: </p> <p><img src=""></p> <p>Every change you make here is immediately (in real-time) reflected to the site! This is the ultimate, fastest way to troubleshoot issues with your JavaScript / CSS / HTML (Views). And, if you were doing PHP/Node.js – just edit your files on the fly and see changes in real-time! No need to re-deploy, re-package. No need to even have an IDE installed on your machine – just a modern Browser! You can edit your site even from your tablet!</p> <h2>Where is the catch?</h2> <p>Oh, catch? What do you mean by “Where is the catch”? The source control? There is integrated GIT support! You can either link your web-site to a Git repository (GitHub / VSO project with GIT-based Source Control), or just work with a local GIT repository. The choice is really yours! 
And now you have fully integrated source control over your changes!</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Azure Migration cheat-sheet<h3>Disclaimer</h3> <h3>Database</h3> <p>If you work with Microsoft SQL Server it shall be relatively easy to go. Just download, install and run against your local database the <a href="">SQL Azure Migration Wizard</a>. It is <strong>The</strong> tool that will migrate your database or will point you to features you are using that are not compatible with SQL Azure. The tool is regularly updated (the latest version is from a week before I write this blog entry!). </p> <p>Migrating schema and data is one side of things. The other side of Database migration is in your code – how you use the Database. For instance, SQL Azure does not accept “<strong>USE [DATABASE_NAME]</strong>”; you have to reference objects with a three-part name:</p> <p>[schema_name].[table_name].[column_name], </p> <p>instead of the four-part </p> <p>[database_name].[schema_name].[table_name].[column_name].</p> <p>Another issue you might face is the lack of support for SQLCLR. I once worked with a customer who had developed a .NET Assembly and installed it in their SQL Server to have some useful helper functions. Well, this will not work on SQL Azure.</p> <p>Last, but not least is that you (1) shall never expect SQL Azure to perform better than, or even equal to, your local Database installation and (2) you have to be prepared for so-called <strong><em>transient</em></strong> errors in SQL Azure and handle them properly. 
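Handling transient errors usually means retrying with a backoff before giving up. A minimal illustration in Python (not the post's code – real .NET applications would use a proper retry policy library; the names here are made up):

```python
import time

class TransientError(Exception):
    """Stand-in for a transient error (e.g. temporary connection throttling)."""

def with_retries(operation, attempts=4, base_delay=0.01):
    """Run operation(); on a transient error, wait and retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except TransientError:
            if attempt == attempts - 1:
                raise  # out of retries – surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# Simulate a query that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError()
    return "rows"

print(with_retries(flaky_query))  # rows
```

The key design point is distinguishing transient errors (retry) from permanent ones (fail fast): only the former are wrapped in the retry loop.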
You better get to know the <a href="">Performance Guidelines and Limitations for Windows Azure SQL Database</a>.</p> <h3>Codebase</h3> <h4>Logging</h4> <p>Get familiar with <a href="">Windows Azure Diagnostics</a>, and don’t forget – you can still write your own logs, but why not throw some useful log information to System.Diagnostics.Trace.</p> <h4>Local file system</h4> <p>You can read more <a href="">here</a>.</p> <p>Now you will probably say “Well, yeah, but when I put everything into a blob storage isn’t it <strong>vendor-lock-in</strong>?”</p> <h3>Authentication / Authorization</h3> <p>(<a href="">Introduction to Claims</a>, <a href="">Securing ASMX web services with SWT and claims</a>, <a href="">Identity Federation and Sign-out</a>, <a href="">Federated authentication – mobile login page for Microsoft Account (live ID)</a>, <a href="">Online Identity Management via Azure ACS</a>, <a href="">Creating Custom Login page for federated authentication with Azure ACS</a>, <a href="">Unified identity for web apps – the easy way</a>). And a couple of blogs I would recommend you follow in this direction:</p> <ul> <li>Dominic Baier: <a title="" href=""></a></li> <li>Vittorio Bertocci: <a title="" href=""></a></li></ul> <h3>Other considerations</h3> <p>And here is the question: </p> <blockquote> <p>What happens when the server side code wants to keep a single object graph of all files uploaded by the end user?</p></blockquote> <p>The solution: I leave it to your brains!</p> <h3>Conclusion</h3> <p>If you have questions – you are more than welcome to comment!</p> <img src="" height="1" width="1" alt=""/>Anton Staykov SessionAffinity plugin update<p>Important update for the <a href="" target="_blank">SessionAffinity4</a> plugin if you use an Azure SDK newer than 2.0 (that is, 2.1 and later). 
First thing to note is that you need to install this plugin (as any other in the <a href="" target="_blank">AzurePluginLibrary</a> project) for each version of the Azure SDK you have.</p> <p>If you were using the plugin with Azure SDK 2.0, the location of the plugin is the following:</p> <blockquote> <p><font size="2" face="Consolas"><strong>C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\v2.0\bin\plugins</strong></font></p></blockquote> <p>For v. 2.1 of the Azure SDK, the new location is:</p> <blockquote> <p><font size="2" face="Consolas"><strong>C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\v2.1\bin\plugins</strong></font></p></blockquote> <p>However, the plugin has a dependency on the Microsoft.WindowsAzure.ServiceRuntime assembly. And as the 2.1 SDK ships a new version of it, the plugin will fail to start. The solution is extremely simple. Just browse to the plugin folder and locate the configuration file:</p> <blockquote> <p><font size="2" face="Consolas"><strong>SessionAffinityAgent4.exe.config</strong></font></p></blockquote> <p>It will look like this:</p><pre class="brush: xml;"><?xml version="1.0" encoding="utf-8" ?><br /><configuration><br /> <startup useLegacyV2RuntimeActivationPolicy="true"> <br /> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /><br /> </startup><br /></configuration><br /></pre><br /><p>Add the following additional configuration – a standard assembly binding redirect for the ServiceRuntime assembly:</p><pre class="brush: xml;"> <runtime><br /> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"><br /> <dependentAssembly><br /> <assemblyIdentity name="Microsoft.WindowsAzure.ServiceRuntime" publicKeyToken="31bf3856ad364e35" culture="neutral" /><br /> <bindingRedirect oldVersion="2.0.0.0" newVersion="2.1.0.0" /><br /> </dependentAssembly><br /> </assemblyBinding><br /> </runtime><br /></pre><br /><p>So the final configuration file will look like that:</p><pre class="brush: xml;"><?xml version="1.0" encoding="utf-8" ?><br /><configuration><br /> <startup useLegacyV2RuntimeActivationPolicy="true"><br /> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /><br /> </startup><br /> <runtime><br /> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"><br /> <dependentAssembly><br /> <assemblyIdentity name="Microsoft.WindowsAzure.ServiceRuntime" publicKeyToken="31bf3856ad364e35" culture="neutral" /><br /> <bindingRedirect oldVersion="2.0.0.0" newVersion="2.1.0.0" /><br /> </dependentAssembly><br /> </assemblyBinding><br /> </runtime><br /></configuration><br /></pre><br /><p>Now repackage your cloud service and deploy. 
</p><br /><p>Please remember – only update the configuration file located in the v<strong>2.1</strong> folder of the Azure SDK!</p><br /><p>Happy Azure coding!</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Java Jetty server on Azure with AzureRunMe<p>The <a href="" target="_blank">AzureRunMe</a> project has existed for a while. There are a lot of commercial projects (Java, Python, and others) running on Azure using it. The most common scenario for running Java on Azure uses the Apache Tomcat server. Let's see how we can use Jetty to run our Java application in a Cloud Service.</p> <p>First we will need Visual Studio. Yep … there are still options for our deployment (such as the size of the Virtual Machine, to name one) which require recompilation of the whole package and are not just configuration options. But you can use the <a href="" target="_blank">free Express version</a> (I think you will need both the Web and the Windows Desktop versions). And yes, it is absolutely free and you can use it to build your AzureRunMe package for Azure deployment. Along with Visual Studio, you also have to install the latest version (or the latest supported by the AzureRunMe project) of the <a href="" target="_blank">Windows Azure SDK for .NET</a>.</p> <p>Then get the latest version of <a href="" target="_blank">AzureRunMe from GitHub</a>. Please go through the <a href="" target="_blank">Readme</a> to get to know the AzureRunMe project overall.</p> <p>Next is to get the JRE for Windows ZIP package. If you don't have it already on your computer, you have to <a href="" target="_blank">download it from Oracle's site</a> (no direct link, because Oracle wants you to accept the license agreement first). I got the Server JRE version. Have the ZIP handy.</p> <p>Now let's get <a href="" target="_blank">Jetty</a>. The version I got is <a href="" target="_blank">9.0.5</a>. 
</p> <p>Now let's get our hands dirty.</p> <p>Create a folder structure similar to the following one:</p> <p><img src=""></p> <p>As per AzureRunMe requirements – my application is prepared to run from a single folder. I have java-1.7, jetty-9.0.5 and runme.bat in that folder. To prepare my application for AzureRunMe I create two zip files:</p> <ul> <li><strong>java-1.7.zip</strong> – the Java folder as is</li> <li><strong>jetty-9.0.5.zip</strong> – contains both runme.bat + the jetty-9.0.5 folder</li></ul> <p>I have also put a WAR file of my application into jetty's webapps folder. It will later be automatically deployed by the Jetty engine itself. I then upload these two separate ZIP files into a blob container of my choice (for the example I named it <strong>deploy</strong>). The content of the runme.bat file is as simple as this:</p> <blockquote> <p><font face="Consolas">@echo off<br>REM Starting Jetty with deployed app<br>cd jetty-9.0.5<br>..\java-1.7\jre\bin\java -jar start.jar jetty.port=8080</font></p></blockquote> <p>It just starts the jetty server. </p> <p>Now let's jump to Visual Studio to create the package. Once you've installed Visual Studio and downloaded the latest version of <a href="" target="_blank">AzureRunMe</a>, you have to open the AzureRunme.sln file (Visual Studio Solution file). Usually just double-click on that file and it will automatically open with Visual Studio. There are very few configuration settings you need to set before you create your package. Right click on the <strong>WorkerRole</strong> item which is under <strong>AzureRunMe</strong>:</p> <p><img src=""></p> <p>This will open the Properties pages:</p> <p><img src=""></p> <p>On the first page we configure the number of Virtual Machines we want running for us, and their size. One more option to configure – Diagnostics Connection String. 
Here just replace <strong>YOURACCOUNTNAME</strong> and <strong>YOURACCOUNTKEY</strong> with the respective values of your Azure Storage Account credentials.</p> <p>Now move to the <strong>Settings</strong> tab:</p> <p><img src=""></p> <p>Here we have to set a few more things:</p> <ul> <li><strong>Packages</strong>: the most important one. This is a semicolon (<strong>;</strong>) separated list of packages to deploy. Packages are downloaded and unzipped in the order of appearance in the list. I have set two packages (zip files that I have created earlier): deploy/java-1.7.zip;deploy/jetty-9.0.5.zip</li> <li><strong>Commands</strong>: this is again a semicolon (<strong>;</strong>) separated list of batch files or single commands to execute when everything is ready. In my case this is the <strong>runme.bat</strong> file, which was in the jetty-9.0.5.zip package.</li> <li>Update the storage credentials in the 3 different places.</li></ul> <p>For more information and a description of each setting, please refer to the <a href="" target="_blank">AzureRunMe project's documentation</a>. </p> <p>Final step. Right click on the AzureRunMe item with the cloud icon and select "Create Package":</p> <p><img src=""></p> <p>If everything is fine you shall get a nice set of files which you shall use to deploy your jetty server in Azure:</p> <p><img src=""></p> <p>You can refer to the <a href="" target="_blank">online documentation here</a>, if you have doubts on how to deploy your cloud service package.</p> <img src="" height="1" width="1" alt=""/>Anton Staykov SessionAffinity plugin for Windows Azure<p>In a <a href="" target="_blank">previous post</a> I reviewed what Session Affinity is and why it is so important for your Windows Azure (Cloud Service) deployments. I also introduced the <a href="" target="_blank">SessionAffinity</a> and <a href="" target="_blank">SessionAffinity4</a> plugins, part of the <a href="" target="_blank">Azure Plugin Library project</a>. 
Here I will describe what this plugin is and how it works.</p> <p>The SessionAffinity plugin is based around Microsoft's <a href="" target="_blank">Application Request Routing</a> module, which can be installed as an add-on in Microsoft's web server – <a href="" target="_blank">IIS (Internet Information Services).</a> This module has a dependency on the following other (useful) modules:</p> <ul> <li><a href="" target="_blank">URL Rewrite</a> – similar to <a href="" target="_blank">Apache's mod_rewrite</a>. You can even translate most of Apache's mod_rewrite rules to IIS URL Rewrite rules;</li> <li><a href="" target="_blank">Web Farm Framework</a> - simplifies the provisioning, scaling, and management of multiple servers;</li> <li><a href="" target="_blank">ARR</a> - enables Web server administrators, hosting providers, and Content Delivery Networks (CDNs) to increase Web application scalability and reliability through rule-based routing, client and host name affinity, load balancing of HTTP server requests, and distributed disk caching;</li> <li>External Cache</li></ul> <p>The two most important features of ARR that will help us achieve Session Affinity are URL Rewrite and load balancing. Of course, they only make sense when there is a Web Farm of servers to manage.</p> <p>Here is a basic diagram which illustrates what happens to your [non-.NET-deployment] when you use the SessionAffinity plugin:</p> <p><img src=""></p> <p>The plugin itself consists of two main modules: </p> <p>Installer bootstrapper – takes care of installing the ARR module and all its dependencies</p> <p>Session Affinity agent – subscribes to the <a href="" target="_blank">RoleEnvironment.Changed</a> event. This event occurs when any change to the role environment happens – instances are added or removed, configuration settings are changed and so on. You can read more about handling Role Environment changes in <a href="" target="_blank">this excellent blog post</a>. 
When you add more role instances (or remove any), all the ARR modules on all the instances must be re-configured to include all the VMs in the Web Farm. This is what the Session Affinity agent does by constantly monitoring the environment.</p> <p>With this setup there is now an ARR module installed on each of the instances. Each ARR module knows how many total servers there are. There is also a software load balancer (part of the Web Farm Framework), which also knows which are all the servers (role instances).</p> <p>These are the components in a single instance:</p> <p><img src=""></p> <p>Here is a simple flow diagram for a web request that goes to public port 80 of the cloud service deployed with the Session Affinity plugin:</p> <p><img src=""></p> <p>The SessionAffinity4 plugin (the one that works with Windows Server 2012 / OS Family 3) has one configurable option: </p> <blockquote> <p><strong>Two10.WindowsAzure.Plugins.SessionAffinity4.ArrTimeOutSeconds</strong></p></blockquote> <p>Set <strong>ArrTimeOutSeconds</strong> to a value which is greater than the expected processing time of your longest-running page.</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Session Affinity and Windows Azure<p>Everybody speaks about the <a href="" target="_blank">recently announced partnership between Microsoft and Oracle on the Enterprise Cloud</a>. Java has been a <a href="" target="_blank">first-class citizen for Windows Azure</a> for a while and was available via tools like <a href="" target="_blank">AzureRunMe</a> even before that. Most of the customers I've worked with are using <a href="" target="_blank">Apache Tomcat</a> as a container for Java Web Applications. The biggest problem they face is that Apache Tomcat relies on Session Affinity. </p> <p>What is Session Affinity and why is it so important in Windows Azure? Let's rewind a little back to <a href="" target="_blank">this post I've written</a>. 
Take a look at the abstracted network diagram:</p> <p><img src=""></p> <p>So we have 2 (or more) servers that are responsible for handling Web Requests (Web Roles) and a Load Balancer (LB) in front of them. Developers have no control over the LB. And it uses one and only one load balancing algorithm – Round Robin. This means that requests are evenly distributed across all the servers behind the LB. Let's go through the following scenario:</p> <ul> <li>I am web user X who opens the web application deployed in Azure. </li> <li>The Load Balancer (LB) redirects my web request to Web Role Instance 0. </li> <li>I submit a login form with user name and password. This is a second request. It goes to Web Role Instance 1. This server now creates a session for me and knows who I am. </li> <li>Next I click the "my profile" link. The request goes back to Web Role Instance 0. This server knows nothing about me and redirects me to the login page again! Or even worse – shows some error page.</li></ul> <p>This is what will happen if there is no Session Affinity. Session Affinity means that if I hit Web Role Instance 0 the first time, I will hit it every time after that. There is no Session Affinity provided by Azure! And in my personal opinion, Session Affinity does not fit well (does not fit at all) in the Cloud World. But sometimes we need it. And most of the time (if not in all cases), it is when we run non-.NET code on Azure. For .NET there are things like <a href="" target="_blank">Session State Providers</a>, which make developers' lives easier! So the issue remains mainly for non-.NET stacks (Apache, Apache Tomcat, etc.). </p> <p>So what to do when we want Session Affinity with non-.NET web servers? Use the <a href="" target="_blank">SessionAffinity</a> or <a href="" target="_blank">SessionAffinity4</a> plugin. These are basically the same "product", but the first one is for use with Windows Server 2008 R2 (OS Family = 2) while the second one is for Windows Server 2012 (OS Family = 3). 
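The session-loss scenario above can be simulated in a few lines of Python (a sketch for illustration only; the instance names and the per-server session store are made up and have nothing to do with the plugin's actual implementation):

```python
from itertools import cycle

def simulate(requests, servers):
    """Round-robin each request across the servers; a server only
    recognizes a user whose session was created on that very server."""
    sessions = {s: set() for s in servers}  # in-memory, per-server session store
    rr = cycle(servers)
    results = []
    for user, action in requests:
        server = next(rr)
        if action == "login":
            sessions[server].add(user)  # the session lives only on this instance
            results.append((server, "session created"))
        elif action == "open":
            results.append((server, "anonymous page"))
        elif user in sessions[server]:
            results.append((server, "ok"))
        else:
            results.append((server, "redirect to login"))
    return results

# User X opens the app, logs in, then clicks "my profile" (three requests):
out = simulate([("X", "open"), ("X", "login"), ("X", "profile")],
               ["Instance_0", "Instance_1"])
```

The third request lands back on Instance_0, which has never seen user X's session, producing exactly the redirect-to-login behavior the scenario describes.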
</p> <p>In a next post I will explain the architecture of these plugins and how exactly they work.</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Directory in Azure – Step by Step<p>Ever wondered how to set up Active Directory in Windows Azure? First make sure you know how to <a href="" target="_blank">Create and manage Azure Virtual Networks</a>, <a href="" target="_blank">Create and manage Azure Virtual Machines</a> and <a href="" target="_blank">add them to Virtual Network</a>.</p> <blockquote> <p><em>Disclaimer: Use this solution at your own risk. What I describe here is purely my practical observation and is based on repeatable reproduction. Things might change in the future.</em></p></blockquote> <p>The foundation pillar for my setup is the following (totally mine!) statement: <font color="#ff0000">The first Virtual Machine you create in an empty Virtual Network in Windows Azure will get the <strong>4th</strong> IP Address in the sub-net range. That means that if your sub-net address space is <strong>192.168.0.0/28</strong>, the very first VM to boot into that network will get IP Address <strong>192.168.0.4</strong>. The given VM will always get this IP Address across intentional reboots, accidental restarts, system healing (hardware failure and VM re-instantiating) etc., as long as there is no other VM booting while that first one is down.</font></p> <p>Next comes one of the most important parts – assigning a DNS server for my Virtual Network. I will set the IP Address of my DNS server to 192.168.0.4! This is because I know (assume) the following:</p> <ul> <li>The very first machine in a sub-network will always get the 4th IP address from the allocated pool;</li> <li>I will place only my AD/DC/DNS server in my AD-designated network;</li></ul> <p>Now divide the network into address spaces as described and define the subnets. 
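The address arithmetic behind that assumption is easy to check with Python's `ipaddress` module (a sketch; the offset of 4 encodes the observation above, not any documented Azure contract):

```python
import ipaddress

def first_vm_address(subnet_cidr, reserved=4):
    """Return the address the first booting VM is observed to get:
    the subnet's network address plus `reserved` positions."""
    net = ipaddress.ip_network(subnet_cidr)
    return net.network_address + reserved

print(first_vm_address("192.168.0.0/28"))  # → 192.168.0.4
print(first_vm_address("192.168.0.0/29"))  # → 192.168.0.4
```

Both the /28 and the /29 sub-net yield the same 4th address, which is why a DNS entry of 192.168.0.4 works for the AD-designated sub-net regardless of which of those prefix lengths is used.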
I use the following network configuration, which you can import directly (however, please note that you must have already created the <strong>AffinityGroup</strong> referred to in the network configuration! Otherwise network creation will fail):</p><pre class="brush: xml;"><NetworkConfiguration <br /> xmlns:<br /> <VirtualNetworkConfiguration><br /> <Dns><br /> <DnsServers><br /> <DnsServer name="NS" IPAddress="192.168.0.4" /><br /> </DnsServers><br /> </Dns><br /> <VirtualNetworkSites><br /> <VirtualNetworkSite name="My-AD-VNet" AffinityGroup="[Use Existing Affinity Group Name]"><br /> <AddressSpace><br /> <AddressPrefix>192.168.0.0/29</AddressPrefix><br /> <AddressPrefix>172.16.0.0/22</AddressPrefix><br /> </AddressSpace><br /> <Subnets><br /> <Subnet name="ADDC"><br /> <AddressPrefix>192.168.0.0/29</AddressPrefix><br /> </Subnet><br /> <Subnet name="Clients"><br /> <AddressPrefix>172.16.0.0/22</AddressPrefix><br /> </Subnet><br /> </Subnets><br /> </VirtualNetworkSite><br /> </VirtualNetworkSites><br /> </VirtualNetworkConfiguration><br /></NetworkConfiguration><br /></pre><br /><p>Now create a new VM from the gallery – picking your favorite OS Image. Assign it to sub-net <strong>ADDC</strong>. Wait for it to be provisioned. RDP to it. Add the AD Directory Services server role. Configure AD. Add the DNS server role (this will be required by the AD role). Ignore the warning that the DNS server requires a fixed IP Address. Do <strong>not</strong> change the network card settings! Configure everything, restart when asked. Promote the computer to Domain Controller. Voilà! Now I have a fully operational AD DS + DC.</p><br /><p>Let's add some clients to it. Create a new VM from the gallery. When prompted, add it to the <strong>Clients</strong> sub-net. When everything is ready and provisioned, log in to the VM (RDP). Change the system settings – Join a domain. Enter your configured domain name. Enter a domain administrator account when prompted. Restart when prompted. Voilà! 
Now my new VM is joined to my domain.</p><br /><p>Why does it work? Because I have:</p><br /><ul><br /><li>Defined the DNS address for my Virtual Network to be 192.168.0.4</li><br /><li>Created a dedicated Address Space for my AD/DC, which is 192.168.0.0/29</li><br /><li>Placed my AD/DC-designated VM in its dedicated address space</li><br /><li>Created a dedicated Address Space for client VMs, which does not overlap with the AD/DC-designated Address Space</li><br /><li>Put client VMs only in their designated Address Space (sub-net) and never in the sub-net of the AD/DC</li></ul><br /><p>Of course, you will get the same result with a single Address Space and two sub-nets, as long as you are careful how you configure the DNS for the Virtual Network and which sub-net you put your AD and your Client VMs in.</p><br /><p>This scenario has been validated, replayed and reproduced tens of times, and is being used in production environments in Windows Azure. However – use it at your own risk.</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Azure Basics–Compute Emulator<p>Following the first two posts of the series “Windows Azure Basics” (<a href="" target="_blank">general terms</a>, <a href="" target="_blank">networking</a>) here comes another one. Interestingly enough, I find that a lot of people are confused about what exactly the compute emulator is and what these strange IP Addresses and port numbers are that we see in the browser when launching a local deployment. 
</p> <p>If you haven’t read <a href="" target="_blank">Windows Azure Basics – part 2 Networking</a>, I strongly advise you to do so, as the rest of this post assumes you are familiar with the networking components of a real Azure deployment.</p> <p>A real-world Windows Azure deployment has the following important components:</p> <ul> <li>Public facing IP Address (VIP) <li>Load Balancer (LB) with Round Robin routing algorithm <li>Number of Virtual Machines (VM) representing each instance of each role, each with its own internal IP address (DIP – Direct IP Address) <li>Open ports on the VIP <li>Open ports on each VM</li></ul> <p>In order to provide developers with an environment as close to the real world as possible, the compute emulator needs to simulate all of these components. So let's take a look at what happens when we launch a Cloud Service (a.k.a. Hosted Service) locally.</p> <h2>VIP Address</h2> <p>The VIP address for our cloud service will be 127.0.0.1. That is the public IP Address (VIP) of the service, via which all requests to the service shall be routed.</p> <h2>Load Balancer</h2> <p>The next thing to simulate is the Azure Load Balancer. There is a small software-emulated Load Balancer that is part of the Compute Emulator. You will not see it, you are not able to configure it, but you must be aware of its presence. It binds to the VIP (127.0.0.1). Now the trickiest thing is to find the appropriate ports to bind. You can configure different Endpoints for each of your roles. Only the <strong>Input Endpoints</strong> are exposed to the world, so only these will be bound to the local VIP (127.0.0.1). If you have a web role, the default web port is 80. However, very often this socket (127.0.0.1:80) is already occupied on a typical web development machine. So the compute emulator tries to bind to the next available port, which is 81. In most cases port 81 will be free, so the "public" address for viewing/debugging will be <a href=""></a>. 
If port 81 is also occupied, the compute emulator will try the next one – 82, and so on, until it successfully binds to the socket (127.0.0.1:<strong>XX</strong>). So when we launch a cloud service project with a web role we will very often see the browser opening this weird address (<a href=""></a>). The process is the same for all Input Endpoints of the cloud service. Remember, the <strong>Input Endpoints</strong> are unique per service, so an <strong>Input Endpoint</strong> cannot be shared by more than one Role within the same cloud service.</p> <p>Now that we have the load balancer launched and bound to the correct sockets, let's see how the Compute Emulator emulates multiple instances of a Role.</p> <h2>Web Role</h2> <p>Web Roles are web applications that run within IIS. For web roles, the compute emulator uses IIS Express (and can be configured to use full IIS if it is installed on the developer machine). The Compute Emulator will create a dedicated virtual IP Address on the local machine for each instance of a role. These are the DIPs of the web role. A local DIP looks something like 127.255.0.0. Each local "instance" then gets the next IP address (i.e. 127.255.0.0, 127.255.0.1, 127.255.0.2 and so on). It is interesting that the IP Addresses begin at 0 (127.255.0.0). It will then create a separate web site in IIS Express (local IIS), binding it to the created Virtual IP Address and port 82. The emulated load balancer will then use round robin to route all requests coming to 127.0.0.1:81 to these virtual IP Addresses. </p> <blockquote> <p><em>Note: You will not see the DIP virtual address when you run the <strong>ipconfig</strong> command</em>.</p></blockquote> <p>Here is how my IIS Express looks when I have my cloud service launched locally:</p> <p><img src=""></p> <h2>Worker role</h2> <p>This one is easier. The DIP addressing is the same; however, the compute emulator needs neither IIS nor IIS Express. 
It just launches the worker role code in separate processes, one for each instance of the worker role.</p> <h2>The emulator UI</h2> <p>When you launch a local deployment, the Compute Emulator and Storage Emulator are launched. You can bring up the Compute Emulator UI by right-clicking on the small azure-colored windows icon in the tray area:</p> <p><img src=""></p> <p>For the purpose of this post I've created a sample Cloud Service with a Web Role (2 instances) and a Worker Role (3 instances). Here is the Compute Emulator UI for my service. And if I click on "Service Details" I will see the "public" addresses for my service:</p> <p><img src=""></p> <h2>Known issues</h2> <p>One very common issue is the so-called <strong><em>port walking</em></strong>. As I already described, the compute emulator tries to bind to the requested port. If that port isn't available, it tries the next one and so on. This behavior is known as "port walking". Under certain conditions we may see port walking even between consecutive runs of the same service – i.e. on the first run the compute emulator binds to 127.0.0.1:81, on the next run it binds to 127.0.0.1:82. The reasons vary, but the obvious one is "port is busy by another process". Sometimes the Windows OS does not free up the port fast enough, so port 81 seems busy to the compute emulator. It then goes for the next port. So don't be surprised if you see different ports when debugging your cloud service. It is normal.</p> <p>Another issue is that sometimes the browser launches the DIP Address (<a href=""></a>) instead of the VIP one (<a href=""></a>). I haven't been able to find a pattern for that behavior, but if you see a DIP when you debug your web roles, switch manually to the VIP. It is important to always use the service via the VIP address, because this way we also test our application's cloud readiness (distributing calls amongst all instances, instead of just one). If the problem persists, try restarting Visual Studio, the Compute Emulator or the computer itself. 
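The port-walking behaviour itself can be sketched as a search for the first free port (illustrative only — the real logic is internal to the emulator, and `busy_ports` here stands in for sockets actually held by the OS):

```python
def walk_port(preferred, busy_ports):
    """Return the first port >= preferred that is not busy,
    mimicking how the emulator walks from 81 to 82, 83, ..."""
    port = preferred
    while port in busy_ports:
        port += 1
    return port

print(walk_port(81, set()))   # → 81 (normal run)
print(walk_port(81, {81}))    # → 82 (port 81 not freed fast enough)
```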
If the issue still persists, open a question at <a href="" target="_blank">StackOverflow</a> or the <a href="" target="_blank">MSDN Forum</a> describing the exact configuration you have, ideally providing a Visual Studio solution that consistently reproduces the problem. I will also be interested to see a consistently repeatable issue. </p> <blockquote> <p><em><strong>Tip for the post</strong>: If you want to change the development VIP address ranges (so that it does not use 127.0.0.1) you can check out the following file:</em></p> <p><em>%ProgramFiles%\Microsoft SDKs\Windows Azure\Emulator\devfabric\DevFC.exe.config</em></p> <p><em>DevFC stands for "Development Fabric Controller". But please be careful with what you do with this file. Always make a backup of the original configuration before you change any setting!</em></p></blockquote> <p>Happy Azure coding!</p> <img src="" height="1" width="1" alt=""/>Anton Staykov the Windows Azure Media Services–H.264 Baseline profile>Exploring the boundaries of <a href="" target="_blank">Windows Azure Media Services</a> (WAMS), and following questions on <a href="" target="_blank">StackOverflow</a> and the respective <a href="" target="_blank">MSDN Forums</a>, it appears that WAMS previously supported H.264 Baseline Profile and had a task preset for it. But now it only has Main Profile and High Profile <a href="" target="_blank">task presets</a>. And because the official documentation says that <a href="" target="_blank">Baseline Profile is a supported output format</a>, I don’t see anything wrong in exploring how to achieve that.</p> <p>So what can we do to encode a video into H.264 Baseline Profile if we really want to? 
Well, use the following Task Preset at your own will (and risk <img class="wlEmoticon wlEmoticon-smile" style="border-top-style: none; border-left-style: none; border-bottom-style: none; border-right-style: none" alt="Smile" src=""> ):</p><pre class="brush: xml;"><?xml version="1.0" encoding="utf-16"?><br /><!--Created with Expression Encoder version 4.0.4276.0--><br /><Preset<br /><br /> <Job /><br /> <MediaFile<br /><br /> <OutputFormat><br /> <MP4OutputFormat<br /><br /> <VideoProfile><br /> <BaselineH264VideoProfile<br /><br /> <Streams<br /><br /> <StreamInfo><br /> <Bitrate><br /> <ConstantBitrate<br /><br /> </Bitrate><br /> </StreamInfo><br /> </Streams><br /> </BaselineH264VideoProfile><br /> </VideoProfile><br /> <AudioProfile><br /> <AacAudioProfile<br /><br /> <Bitrate><br /> <ConstantBitrate<br /><br /> </Bitrate><br /> </AacAudioProfile><br /> </AudioProfile><br /> </MP4OutputFormat><br /> </OutputFormat><br /> </MediaFile><br /></Preset><br /></pre><br /><p>You can quickly check whether it works for you by using the <a href="" target="_blank">RunTask</a> command line, part of the <a href="" target="_blank">MediaServicesCommandLineTools</a> project. The <a href="" target="_blank">H264_BaselineProfile.xml</a> is provided for reference in the etc folder of the project. You can tweak the Audio and Video bitrates at will by editing the XML.</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Authentication–Mobile Login Page for Microsoft Live Id<p>Say you have implemented a custom login page for federated authentication with Windows Azure ACS. </p> <p>Now you noticed that Microsoft Account does not recognize mobile users 100% of the time and you have better logic for determining mobile user agents. You also want to forcibly redirect your mobile user to the mobile login page for Microsoft Account. 
But how?</p> <p>Well, since you already implemented a custom login page, you already know what this URL is:</p> <p><a href="https://[namespace].accesscontrol.windows.net/v2/metadata/IdentityProviders.js?protocol=wsfederation&realm=[realm]&reply_to=[reply_to]&context=&request_id=&version=1.0&callback">https://[namespace].accesscontrol.windows.net/v2/metadata/IdentityProviders.js?protocol=wsfederation&realm=[realm]&reply_to=[reply_to]&context=&request_id=&version=1.0&callback</a>= <p>This is the URL where you get the JSON feed of registered Identity Providers for your relying party application. When you retrieve it, you have a LoginUrl for Live ID looking similar to this one: <p><a href="[namespace].accesscontrol.windows.net%2fv2%2fwsfederation&wp=MBI_FED_SSL&wctx=[encrypted">[namespace].accesscontrol.windows.net%2fv2%2fwsfederation&wp=MBI_FED_SSL&wctx=[encrypted</a>] <p>Now, you can add one more parameter to the query string to force a very lightweight (mobile) login page for Microsoft Account. This parameter is <strong><em><font color="#ff0000">pcexp</font></em></strong> and the value should be <strong><em><font color="#ff0000">false</font></em></strong>. So now your LoginUrl for Microsoft Account (Live ID) will look similar to this one: <p><a href="[namespace].accesscontrol.windows.net%2fv2%2fwsfederation&wp=MBI_FED_SSL&wctx=[encrypted]&pcexp=false">[namespace].accesscontrol.windows.net%2fv2%2fwsfederation&wp=MBI_FED_SSL&wctx=[encrypted]<b><font color="#ff0000">&pcexp=false</font></b></a> <p>That’s perfect! It works! But we can go one step further. <p>Replace <strong><font color="#ff0000">login.live.com/login.srf?</font></strong> with <font color="#ff0000">mid.live.com/si/login.aspx?</font>. The result is the dedicated mobile login page. Done. Happy coding! <p>Please respect your users and their existing online identities! Do not ask them to create new usernames/passwords if they don’t explicitly want to! 
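Both tweaks are plain string surgery on the LoginUrl. A sketch (the helper name and the sample URL are made up; only the `pcexp` parameter and the `mid.live.com` endpoint come from the post above):

```python
def to_mobile_login(login_url):
    """Force the lightweight page via pcexp=false and switch
    to the mobile login endpoint."""
    url = login_url + "&pcexp=false"
    return url.replace("login.live.com/login.srf?",
                       "mid.live.com/si/login.aspx?")

url = "https://login.live.com/login.srf?wa=wsignin1.0&wctx=abc"
print(to_mobile_login(url))
# → https://mid.live.com/si/login.aspx?wa=wsignin1.0&wctx=abc&pcexp=false
```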
<img src="" height="1" width="1" alt=""/>Anton Staykov the Azure Media Services – clip or trim your media files>So, we have <a href="" target="_blank">Windows Azure Media Services</a>, which can transcode (convert from one video/audio format to another), package and deliver content. How about more advanced operations, such as clipping or trimming? I want, let’s say, to cut off the first 10 seconds of my video. And the last 5 seconds. Can I do it with Windows Azure Media Services? Yes I can, today (5 April 2013).</p> <p>The easiest way to start with Media Services is by using the <a href="" target="_blank">MediaServicesCommandLineTools</a> project from GitHub. It has a very neat program – <a href="" target="_blank">RunTask</a>. It expects two parameters: a partial (last N characters) Asset Id and a path to a task preset. It will then display a list of available Media Processors to execute the task with. You choose the Media Processor and you are done! </p> <p>So which task preset is for Clipping or Trimming? You will not find that type of task on the list of <a href="" target="_blank">Task Presets for Azure Media Services</a>. But you will find a couple of interesting task presets in the <a href="" target="_blank">MediaServicesCommandLineTools</a> project under the <a href="" target="_blank">etc</a> folder. Let's take a look at <a href="" target="_blank">Clips.xml</a>:</p><pre class="brush: xml;"><?xml version="1.0" encoding="utf-16"?><br /><!--Created with Expression Encoder version 4.0.4276.0--><br /><Preset<br /><br /> <Job /><br /> <MediaFile><br /> <Sources><br /> <Source<br /><br /> <Clips><br /> <Clip<br /><br /> </Clips><br /> </Source><br /> </Sources><br /> </MediaFile><br /></Preset><br /></pre><br /><p>It is a very simple XML file with two attribute values that are interesting for us, namely <strong>StartTime</strong> and <strong>EndTime</strong>. These attributes define the points in time where to start clipping and where to end it. 
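Before submitting such a preset, it is handy to sanity-check the clip window it defines; a quick sketch using Python's standard `datetime` module:

```python
from datetime import datetime

def clip_seconds(start, end, fmt="%H:%M:%S"):
    """Duration in seconds of a Clip with the given StartTime/EndTime."""
    t0 = datetime.strptime(start, fmt)
    t1 = datetime.strptime(end, fmt)
    assert t1 > t0, "EndTime must come after StartTime"
    return (t1 - t0).total_seconds()

print(clip_seconds("00:00:04", "00:00:10"))  # → 6.0
```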
With the given settings (StartTime: 00:00:04, EndTime: 00:00:10) the resulting media asset will be a video clip 6 seconds long, which starts at the 4th second of the original clip and ends at the 10th second.</p><br /><p>As you can also see, I haven’t removed an important comment in the XML – "Created with Expression Encoder version 4.0.4276.0". Yes, I used Expression Encoder 4 Pro to create a custom job preset. You can try that too!</p><br /><p>Tune in for more “media services bending tips”.</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Federation and Sign-Out<p>We all have existing online identities – Facebook, Google, Microsoft Account, <a href="" target="_blank">NemID</a> in Denmark, and so on, and so on.</p> <p>I do believe that every single internet user has a profile with at least one of these Identity Providers. And if you, dear reader, do not have any existing online profile, please do leave a comment, but be honest!</p> <p>As developers, architects and decision makers, we shall by all means respect this fact!</p> <p>There is a great service that helps us do so – the <a href="" target="_blank">Windows Azure Access Control Service</a>, which is now part of <a href="" target="_blank">Windows Azure Active Directory</a>. I’ve written a number of articles on that subject (<a href="" target="_blank">Introduction to Claims</a>, <a href="" target="_blank">Securing ASMX web services with Claims and SWT tokens</a>, <a href="" target="_blank">Online Identity Management via Windows Azure ACS</a>, <a href="" target="_blank">Unified Identity for Web Apps – the easy way</a>, <a href="" target="_blank">Creating custom login page for Federated Authentication with Windows Azure ACS</a>) and yet I see people who are unaware of this service and want to implement their own ASP.NET Membership Provider.</p> <p>I also see people willing to embrace the service. 
They go their way through the <a href="" target="_blank">Identity and Access Tool for Visual Studio</a>:</p> <p><img src=""></p> <p>While this option is great, it misses one very core feature – the <strong>log off</strong> feature! So you happily created your federated sign-in, configured Identity Providers, etc. Now you log in to test. Next you click the default <strong>[log off]</strong> link in your web app. And … you are still logged in! "What the heck?", you will ask.</p> <p>Well, when using a Federated Log-in, we also have to use a Federated Log-Off (or Sign-Out). For this, we have to edit our default log-off method and add one single line. Imagine the default Log Off method looks like:</p><pre class="brush: csharp;">[HttpPost] <br />[ValidateAntiForgeryToken] <br />public ActionResult LogOff() <br />{ <br /> WebSecurity.Logout(); <br /> return RedirectToAction("Index", "Home"); <br />}<br /></pre><br /><p>We only have to add:</p><pre class="brush: csharp;"> FederatedAuthentication.WSFederationAuthenticationModule.SignOut();<br /></pre><br /><p>So the final Log Off will look like this:</p><pre class="brush: csharp;">[HttpPost]<br />[ValidateAntiForgeryToken]<br />public ActionResult LogOff()<br />{<br /> WebSecurity.Logout();<br /> FederatedAuthentication.WSFederationAuthenticationModule.SignOut();<br /> return RedirectToAction("Index", "Home");<br />}<br /></pre><br /><p>And voilà! We are done. Now we can also successfully log off the web application. Note that the <a href="" target="_blank">FederatedAuthentication</a> type is part of the <a href="" target="_blank">System.IdentityModel.Services</a> assembly and you must add a reference to it.</p><br /><p>A couple of things to pay attention to and remember:</p><br /><ul><br /><li>The Identity and Access menu item (the result of the Identity and Access tool installation) will <strong>only</strong> be visible for web projects targeting the <strong>4.5 Framework</strong>! 
<br /><li>You have to reference the <a href="" target="_blank">System.IdentityModel.XX (4.0.0.0)</a> assemblies and not the <a href="" target="_blank">Microsoft.IdentityModel.XX (3.5.0.0)</a> assemblies in your project. If you fail to do so, you may see unexpected behavior and even errors and failures. Very often, if you upgrade your project from a .NET Framework version prior to 4.5 to .NET Framework 4.5, there are references left to Microsoft.IdentityModel.XX – remove them explicitly! <br /><li>Do respect your users’ existing online identities! The users will respect you, too!</li></ul> <img src="" height="1" width="1" alt=""/>Anton Staykov journey with Windows Azure Media Services–Smooth Streaming, HLS<p>Back in January Scott Gu <a href="" target="_blank">announced</a> the official release of <a href="" target="_blank">Windows Azure Media Services</a>. It is an amazing platform that was out in the wild (as a CTP, or Community Technology Preview) for less than a year. Before it was RTW, I created a small project to demo its functionality. The source code is public on <a href="" target="_blank">GitHub</a> and the live site is public on <a href="" target="_blank">Azure Web Sites</a>. I actually linked my GitHub repo with the Website on Azure so that every time I push to the Master branch, I get a new deployment on the WebSite. Pretty neat!</p> <p>In its current state Windows Azure Media Services supports the VOD (or Video On Demand) scenario only. Meaning that you can upload your content (also known as <strong><em>ingest</em></strong>), convert it into various formats, and deliver it to your audience on demand. What you cannot currently do is publish Live Streaming – i.e. from your Web Cam, or from your Studio.</p> <p>This blog post will provide no direct code samples. Rather than code samples, my aim is to outline the valid workflows for achieving different goals. 
For code samples you can take a look at the <a href="" target="_blank">official getting started guide</a>, <a href="" target="_blank">my demo web project code</a>, or the <a href="" target="_blank">MediaServicesCommandLineTools project on GitHub</a>, which I also contribute to.</p> <p>With the current proposition from Azure Media Services you can encode your media assets into ISO-MP4 / H.264 (AVC) video with AAC-LC Audio, into <a href="" target="_blank">Smooth Streaming</a> format to deliver the greatest experience to your users, or even into <a href="" target="_blank">Apple HTTP Live Streaming format</a>.</p> <p>You can sometimes achieve the same task (goal) in different ways. Windows Azure Media Services currently works with 4 Media Processors:</p> <ul> <li>Windows Azure Media Encryptor </li> <li>Windows Azure Media Encoder</li> <li>Windows Azure Media Packager</li> <li>Storage Decryption</li></ul> <p>When you want to complete a task you always provide a <strong><em>task preset</em></strong> and a <strong><em>media processor</em></strong> which will complete the given task. It is really important to pay attention to this detail, because giving a task preset to the wrong processor will end up in an error and task failure.</p> <h3>So, how do we get (create/encode to) Smooth Streaming content?</h3> <p>Say we have an MP4 video source – H.264 (AVC) Video Codec + AAC-LC Audio Codec. It is best if we have multiple MP4 files representing the same content at different bitrates. Now we can use the <strong><em>Windows Azure Media Packager</em></strong> and the <a href="" target="_blank">MP4 To Smooth Streams task preset</a>.</p> <p>If we don’t have an MP4 source, but we have any other <a href="" target="_blank">supported import format</a> (unfortunately MOV is not a supported format), we can use the <strong><em>Windows Azure Media Encoder</em></strong> to transcode our media into either a single MP4 (H.264) file, or directly into a Smooth Streaming source. 
<a href="" target="_blank">Here is a full list of the short-named task presets</a> that can be used with Windows Azure Media Encoder. To directly create a Smooth Streaming asset, we can use any of the <strong><em>VC1 Smooth Streaming XXX</em></strong> task presets, or any of the <strong><em>H264 Smooth Streaming XXX</em></strong> task presets. That will generate a Smooth Streaming asset encoded with either the VC-1 Video profile or the H.264 (AVC) Video Codec.</p> <h3>OK, how about Apple HTTP Live Streaming (or HLS)?</h3> <p>Well, Apple HLS is similar to Smooth Streaming. However, there is a small detail: it only supports the H.264 video codec! The most standard way of creating an Apple HLS asset is by using the <strong>Windows Azure Media Packager</strong> and the XML task preset for “<a href="" target="_blank">Convert Smooth Streams to Apple HTTP Live Streams</a>”. Please take note of the media processor – it is the Windows Azure Media Packager. It will only accept as input a valid Smooth Streaming asset encoded with the H.264 (AVC) video codec! Do not forget that you could have created Smooth Streams with the <strong>VC-1 Video Profile</strong> codec, which are totally valid, working Smooth Streams, but they will fail to convert to Apple HTTP Live Streams.</p> <h3>Hm, can’t we get all-in-one?</h3> <p>I mean, can’t I have a single media asset and deliver either Apple HTTP Live Streams or Smooth Streams, depending on my client? Sure we can. However, this is a CPU-intensive process. It is called “<strong>dynamic packaging</strong>”. The source must be a multi-bitrate MP4 asset, which consists of multiple MP4 files of the same content at different bitrates. And it requires on-demand streaming reserved units from Media Services. 
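The processor/preset pairings discussed above can be condensed into a small lookup sketch (the source/target labels are my own shorthand; the processor names come from the post, and the logic is illustrative, not an actual API):

```python
# (source kind, target format) -> (media processor, task preset family)
PAIRINGS = {
    ("mp4", "smooth"): ("Windows Azure Media Packager",
                        "MP4 To Smooth Streams"),
    ("smooth-h264", "hls"): ("Windows Azure Media Packager",
                             "Convert Smooth Streams to Apple HTTP Live Streams"),
    ("other-import", "smooth"): ("Windows Azure Media Encoder",
                                 "H264 Smooth Streaming XXX"),
}

def pick(source, target):
    """Pick the processor/preset pair, failing for invalid workflows
    such as converting VC-1-encoded Smooth Streams to HLS."""
    try:
        return PAIRINGS[(source, target)]
    except KeyError:
        raise ValueError(f"no valid workflow from {source!r} to {target!r}")

processor, preset = pick("smooth-h264", "hls")
```

Handing the right preset to the right processor is exactly the detail the post warns about: the lookup either returns a valid pairing or fails loudly, instead of producing a failed task later.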
You can read more about dynamic packaging <a href="" target="_blank">here</a>.</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Azure and Entity Framework<p>Recently I was asked by a friend: “How do I use the Transient Fault Handling Framework against SQL Azure while using Entity Framework?”. How, really?</p> <p>Here are a bunch of resources that describe in detail what the Transient Faults are, how to deal with them, and in particular how to use the TFHF (Transient Fault Handling Framework) along with Entity Framework:</p> <p><a href=""></a> <p><a href=""></a> <p><a href=""></a> <p>A concrete sample from the Windows Azure CAT (CAT stands for Customer Advisory Team) team site:</p><pre class="brush: csharp;">// Define the order ID for the order we want.<br />int orderId = 43680;<br /><br />// Create an EntityConnection.<br />EntityConnection conn = new EntityConnection("name=AdventureWorksEntities");<br /><br />// Create a long-running context with the connection.<br />AdventureWorksEntities context = new AdventureWorksEntities(conn);<br /><br />try<br />{<br />    // Explicitly open the connection.<br />    if (conn.State != ConnectionState.Open)<br />    {<br />        conn.Open();<br />    }<br /><br />    // Execute a query to return an order. Use a retry-aware scope for reliability.<br />    // (retryPolicy is a previously configured TFHF RetryPolicy instance.)<br />    SalesOrderHeader order = retryPolicy.ExecuteAction(() =><br />        context.SalesOrderHeaders.Where("it.SalesOrderID = @orderId",<br />            new ObjectParameter("orderId", orderId)).Execute(MergeOption.AppendOnly).First());<br /><br />    // Change the status of the order.<br />    order.Status = 1;<br /><br />    // Delete the first item in the order.<br />    context.DeleteObject(order.SalesOrderDetails.First());<br /><br />    // Add a new item to the order.<br />    SalesOrderDetail detail = new SalesOrderDetail<br />    {<br />        SalesOrderID = 1,<br />        SalesOrderDetailID = 0,<br />        OrderQty = 2,<br />        ProductID = 750,<br />        SpecialOfferID = 1,<br />        UnitPrice = (decimal)2171.2942,<br />        UnitPriceDiscount = 0,<br />        LineTotal = 0,<br />        rowguid = Guid.NewGuid(),<br />        ModifiedDate = DateTime.Now<br />    };<br />    order.SalesOrderDetails.Add(detail);<br /><br />    // Save the changes, again within a retry-aware scope.<br />    retryPolicy.ExecuteAction(() => context.SaveChanges());<br />}<br />finally<br />{<br />    // Explicitly dispose of the context and the connection.<br />    context.Dispose();<br />    conn.Dispose();<br />}<br /></pre><br /><p>Well, this is the raw source provided. To be honest, I would extract/encapsulate it in some more generalized way (for instance, create some extension methods to call for all CRUD operations; or even better – create my own DataService on top of EF, so my code will never work with the bare-boned EF context, but with some contract instead).</p> <img src="" height="1" width="1" alt=""/>Anton Staykov Azure Federations Talk at SQL Saturday 152 / Bulgaria<p>Last Saturday we had the first edition of SQL Saturday for Bulgaria – <a href="">SQL Saturday 152</a>. I submitted my talk in the early stages of event preparation. It is “An intro to SQL Azure Federations”. I rated it as “beginners”, as it is intended to lay the grounds for scaling out with SQL Azure. However, it turned out that the content is at least a level 300 technical talk, and the audience needs a foundation in SQL Azure to attend it. Anyway, I think it went smoothly and was fun. You can find the <a href="">slides here</a>. 
And I hope to pack a GitHub project soon for the extensions on EF Code First I used to get data out from Federation Members and perform fan-out queries.</p> <p>Already looking forward to the next edition of SQL Saturday in Bulgaria.</p>Anton Staykov

Azure v.Next – Azure Websites, Linux on Azure, Persistent VM and much more …

<div dir="ltr" style="text-align: left;" trbidi="on">Building Cloud applications has never been easier! Ever! The recent news announced at the <a href="" target="_blank">MEET Windows Azure</a> event just proved it! The most exciting, most anticipated, most wanted release of Windows Azure is now here! <a href="" target="_blank">Check out the samples</a>, <a href="" target="_blank">get the tools</a> and dive into the clouds!<br /><h2>Azure Websites</h2>Did you want to run your Drupal site in <a href="" target="_blank">Windows Azure</a>? Or maybe your Joomla project, or the new Umbraco 5 – and don’t forget your small WordPress site. Now you can either build it from scratch, or just deploy it. How to deploy? Do you like Git, or FTP? Whatever you like, whatever you are comfortable with – Windows Azure Websites is the platform to run your site, be it a small site or a large-scale enterprise site! Here is just a screenshot showing you the sample gallery, where you can choose how to start your site, if you haven’t yet:<br /><img src="" /><br />You say that Joomla runs on PHP and MySQL! You are correct – Windows Azure has supported PHP for quite some time, actually (almost) since the beginning, but it is easier now. What about MySQL? Well, have you heard of <a href="" target="_blank">ClearDB</a>? A company that has been providing database-as-a-service for MySQL-based applications – a globally distributed, fault-tolerant database as a service. They have been partnering with Microsoft to provide <a href="" target="_blank">MySQL-as-a-service within the Windows Azure data centers</a>.
Well, ironically enough, their site is down at the time I am writing this blog post. But trust me – once MySQL is running in Windows Azure, it will not be down <img alt="Smile" class="wlEmoticon wlEmoticon-smile" src="" style="border-bottom-style: none; border-left-style: none; border-right-style: none; border-top-style: none;" />.<br />Oh, you have noticed – the Windows Azure Portal, reimagined! The whole portal now runs on HTML5 with a METRO-style interface. I have to admit that I like it much better than the old Silverlight-based portal!<br /><h2>Persistent VM</h2>It is not a replacement for the Windows Azure VM Role, which is still stateless. It is a whole new feature, named Persistent VM. Having said that – it means that all changes you make to your VM after you deploy it to Windows Azure will be reliably persisted across VM reboots, healings and recycling. How cool is that? Not only that – with the Persistent VM feature, you now get an SLA for just one instance! What could you use that Persistent VM for? Just imagine – SQL Server, SharePoint, Linux …<br /><h2>Linux</h2>What else could you do with Windows Azure now? You can, for example, run your Linux-based VM! Yes, Linux on Azure! How cool is that, ah? Currently there are 4 distros you can choose from:<br /><ul><li>OpenSUSE 12.1 </li><li>CentOS-6.2 </li><li>Ubuntu 12.04 </li><li>SUSE Linux Enterprise Server 11 SP2</li></ul>But I am sure more will come soon! <br /><h2>Virtual Network</h2>Connecting your own infrastructure to the cloud has never been easier. Windows Azure Virtual Network lets you configure network topology, including configuration of IP addresses, routing tables and security policies. It uses the IPSEC protocol to provide a secure connection between your corporate VPN gateway and Windows Azure. <br />If I were you, I would go through the new Windows Azure <a href="" target="_blank">Fact Sheet</a>, go for the free trial to check out the Websites, and maybe even try the Linux VMs!
<br />As a side note – something that has really been on my mind for quite a few years: finally we, in Bulgaria, will officially have Windows Azure! </div>Anton Staykov

your ASMX WebServices with SWT and Claims

<p>I was recently involved in an interesting project that was using the plain old ASMX web services. We wanted to migrate it to the <a href="" target="_blank">Windows Azure</a> Access Control Service and make use of Claims.</p> <p>The result is on … <a href="" target="_blank">GitHub</a>. I initially wanted it to be on CodePlex, because I have other projects there and am more used to the TFS style of working. But CodePlex’s TFS has been down for quite some time, which was a good excuse to use <a href="" target="_blank">GitHub</a>. There are some explanations in the Readme.txt file, as well as comments in the code. So feel free to get the code, play around with it, ping me if it is not working for some reason, and so on!</p> <p>The project makes extensive use of the <a href="" target="_blank">SWT implementation done by the Two10Degrees team</a>. But I added a compiled assembly reference for convenience.</p>Anton Staykov

Windows Azure on June the 7th

<p>I have been following <a href="" target="_blank">Windows Azure</a> since its first public CTP at PDC’2008. It was amazing then, it is even more amazing now, and there is more exciting to come (I’m really, really excited!) …</p> <p>Get ready to <a href="" target="_blank">MEET Windows Azure</a> live on June the 7th. Register to watch live (June the 7th, 1PM PDT) <a href="" target="_blank">here</a>.
Be informed by following the conversation <a href="" target="_blank">@WindowsAzure</a>, <a href="" target="_blank">#MEETAzure</a>, <a href="" target="_blank">#WindowsAzure</a></p> <p>And, if you want to be more social, register for the <a href="" target="_blank">Social meet up on Twitter</a> event, <a href="" target="_blank">organized by fellow Azure MVP Magnus Martensson</a>.</p> <p>What I can tell you for sure, without breaking my NDA, is that you don’t want to miss that event!</p> <p>See you there!</p> <p><strong>MEET Windows Azure Blog Relay:</strong> <ul> <li>Roger Jennings (<a href="">@rogerjenn</a>): <a href="">Social meet up on Twitter for Meet Windows Azure on June 7th</a></li> <li>Anton Staykov (<a href="">@astaykov</a>): <a href="">MEET Windows Azure on June the 7th</a></li> <li>Patriek van Dorp (<a href="">@pvandorp</a>): <a href="">Social Meet Up for ‘MEET Windows Azure’ on June 7th</a></li> <li>Marcel Meijer (<a href="">@MarcelMeijer</a>): <a href="">MEET Windows Azure on June the 7th</a></li> <li>Nuno Godinho (<a href="">@NunoGodinho</a>): <a href="">Social Meet Up for ‘MEET Windows Azure’ on June 7th</a></li> <li>Shaun Xu (<a href="">@shaunxu</a>) <a href="">Let's MEET Windows Azure</a></li> <li>Maarten Balliauw (<a href="">@maartenballiauw</a>): <a href="">Social meet up on Twitter for MEET Windows Azure on June 7th</a></li> <li>Brent Stineman (<a href="">@brentcodemonkey</a>): <a href="">Meet Windows Azure (aka Learn Windows Azure v2)</a></li> <li>Herve Roggero (<a href="">@hroggero</a>): <a href="">Social Meet up on Twitter for Meet Windows Azure on June 7th</a></li> <li>Paras Doshi (<a href="">@paras_doshi</a>): <a href="">Get started on Windows Azure: Attend “Meet Windows Azure” event Online</a></li> <li>Simran Jindal (<a href="">@SimranJindal</a>): <a href="">Meet Windows Azure – an online and in person event, social meetup #MeetAzure (+ Beer for Beer lovers) on June 7th 2012</a></li> <li>Michael Wood (<a href="">@mikewo</a>): <a href="">Learn 
about Windows Azure and Chat with Experts, June 7th</a></li> <li>Shiju Varghese (<a href="">@shijucv</a>): <a href="">Social meet up on Twitter for MEET Windows Azure on June the 7th</a></li> <li>Jeremie Devillard (<a href="">@jeremiedev</a>): <a href="">Meet the Cloud–Windows Azure Event 7th June</a></li> <li>Kris van der Mast (<a href="">@KvdM</a>): <a href="">Get ready to meet Windows Azure</a></li> <li>Mike Martin (<a href="">@TechMike2KX</a>): <a href="">Don’t miss the online Windows Azure event of the year : MEET Windows Azure on June 7th </a></li> <li>Bill Wilder (<a href="">@codingoutloud</a>): <a href="">Get ready to “Meet #WindowsAzure” in a live streamed event June 7 at 4:00 PM Boston time</a></li> <li>Eric Boyd (<a href="">@EricDBoyd</a>): <a href="">Meet Windows Azure – Unveiling the Latest Platform</a></li> <li>Magnus Mårtensson (<a href="">@noopman</a>): <a href="">Social meet up on Twitter for MEET Windows Azure on June 7th</a></li></ul> <img src="" height="1" width="1" alt=""/>Anton Staykov to Claims<p>It is 21<sup>st</sup> <b>actively use</b> ! <p>As being a developer, I also know that the easiest way to go with a site, which offers some kind personalization, is to use my own authentication and authorization mechanism! But this thinking I have left behind me. I decided to step into the <b>present</b> ). <p>If you want to join me, let me first list out the terms which you will begin working with on a daily basis: <p. <p><b>Claim</b> – this is an assertion about an object issued by an Identity Provider. In the given sentence, the Claim is “Name” and it value is “Anton Staykov” <p><b>Identity Provider</b> – an authority, which issues security tokes, that contain claims. Bulgarian or any Government is an Identity Provider, which issues Passports. And the passports are <p><b>Security Tokens</b> – this is a digitally signed object, which contains claims. A Token may contain one or more claims. 
<p>And last, but not least, you, dear reader, are the <b>Relying Party</b> to which I present my token that contains claims.</p> <p>As some final words, I want to share with you details from some studies conducted amongst online users about their perceptions of the online shopping experience:</p> <p>· 3 out of 4 online shoppers avoid creating new user accounts</p> <p>· 76% of online shoppers admit to having given incomplete or wrong information when required to create a new user account</p> <p>· 24% of online shoppers abandon the site when it requires a registration</p>Anton Staykov
Published Monday, September 08, 2008 7:00 PM
by
Brad Rutkowski
I get the following error:
A parameter cannot be found that matches parameter name 'Authentication'
Which version of powershell are you using?
Chris Haines
Ah forget it, I just found out they added that parameter in CTP 2.0.
what if i can't upgrade to powershell 2?
how can i solve this with ps 1?
ale
The -Authentication was put into v3 to help with this issue. You cannot use get-wmiobject in POSH v1 to connect to namespaces that require packet privacy. You'd need to use [wmisearcher] or the .NET framework and then set the authentication in there. Sorry. It CAN be done though, just a bit more work.
Brad Rutkowski
Here's an example of [wmisearcher]:
[wmisearcher]$wmisearcher = "SELECT * FROM IISApplicationPoolSetting"
$wmisearcher.scope = "\\Server1\root\MicrosoftIISv2"
$wmisearcher.scope.options.EnablePrivileges = $true
$wmisearcher.scope.options.Impersonation = "Impersonate"
$wmisearcher.scope.options.Authentication = "PacketPrivacy"
$wmisearcher.scope.options
$wmisearcher.Get()
dgl.function
In DGL, message passing is expressed by two APIs:
send(edges, message_func)for computing the messages along the given edges.
recv(nodes, reduce_func)for collecting the incoming messages, perform aggregation and so on.
Although the two-stage abstraction can cover all the models that are defined in the message passing paradigm, it is inefficient because it requires storing explicit messages. See the DGL blog post for more details and performance results.
Our solution, also explained in the blog post, is to fuse the two stages into one kernel so no explicit messages are generated and stored. To achieve this, we recommend using our built-in message and reduce functions so that DGL can analyze and map them to fused dedicated kernels. Here are some examples (in PyTorch syntax).
import dgl import dgl.function as fn import torch as th g = ... # create a DGLGraph g.ndata['h'] = th.randn((g.number_of_nodes(), 10)) # each node has feature size 10 g.edata['w'] = th.randn((g.number_of_edges(), 1)) # each edge has feature size 1 # collect features from source nodes and aggregate them in destination nodes g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_sum')) # multiply source node features with edge weights and aggregate them in destination nodes g.update_all(fn.u_mul_e('h', 'w', 'm'), fn.max('m', 'h_max')) # compute edge embedding by multiplying source and destination node embeddings g.apply_edges(fn.u_mul_v('h', 'h', 'w_new'))
fn.copy_u,
fn.u_mul_e,
fn.u_mul_v are built-in message functions, while
fn.sum
and
fn.max are built-in reduce functions. We use
u,
v and
e to represent
source nodes, destination nodes, and edges among them, respectively. Hence,
copy_u copies the source
node data as the messages,
u_mul_e multiplies source node features with edge features, for example.
To define a unary message function (e.g.
copy_u) specify one input feature name and one output
message name. To define a binary message function (e.g.
u_mul_e) specify
two input feature names and one output message name. During the computation,
the message function will read the data under the given names, perform computation, and return
the output using the output name. For example, the above
fn.u_mul_e('h', 'w', 'm') is
the same as the following user-defined function:
def udf_u_mul_e(edges): return {'m' : edges.src['h'] * edges.data['w']}
To define a reduce function, one input message name and one output node feature name
need to be specified. For example, the above
fn.max('m', 'h_max') is the same as the
following user-defined function:
def udf_max(nodes): return {'h_max' : th.max(nodes.mailbox['m'], 1)[0]}
Broadcasting is supported for binary message function, which means the tensor arguments
can be automatically expanded to be of equal sizes. The supported broadcasting semantic
is standard and matches NumPy
and PyTorch. If you are not familiar
with broadcasting, see the linked topics to learn more. In the
above example,
fn.u_mul_e will perform broadcasted multiplication automatically because
the node feature
'h' and the edge feature
'w' are of different shapes, but they can be broadcast.
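Because the broadcasting semantics match NumPy, the shape arithmetic behind `fn.u_mul_e('h', 'w', 'm')` can be sketched without DGL at all. The shapes below are illustrative (4 edges, feature size 10), not taken from a real graph:

```python
import numpy as np

# Per-edge source-node features 'h' of size 10, and edge weights 'w' of size 1.
h_src = np.arange(40, dtype=np.float64).reshape(4, 10)  # shape (E, 10)
w = np.array([[2.0], [3.0], [0.5], [1.0]])              # shape (E, 1)

# NumPy broadcasting expands the trailing dimension of `w` from 1 to 10,
# which is exactly the expansion fn.u_mul_e performs for these shapes.
m = h_src * w  # messages, shape (E, 10)

print(m.shape)   # (4, 10)
print(m[1, :3])  # row 1 ([10, 11, 12, ...]) scaled by 3.0 -> [30. 33. 36.]
```

The same rule explains why `'h'` of shape `(N, 10)` and `'w'` of shape `(E, 1)` combine without any manual tiling.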
All DGL’s built-in functions support both CPU and GPU and backward computation so they
can be used in any autograd system. Also, built-in functions can be used not only in
update_all
or
apply_edges as shown in the example, but wherever message and reduce functions are
required (e.g.
pull,
push,
send_and_recv).
Here is a cheatsheet of all the DGL built-in functions. | https://docs.dgl.ai/api/python/function.html | CC-MAIN-2020-29 | refinedweb | 574 | 58.89 |
Section (3) aio_fsync
Name
aio_fsync — asynchronous file synchronization
Synopsis
#include <aio.h>

int aio_fsync(int op, const struct aiocb *aiocbp);
DESCRIPTION
The
aio_fsync() function
does a sync on all outstanding asynchronous I/O operations
associated with
aiocbp−>aio_fildes.
(See aio(7) for a description of
the aiocb structure.); it, −1 is returned, and
errno is set appropriately.
ERRORS
- EAGAIN
Out of resources.
- EBADF
aio_fildesis not a valid file descriptor open for writing.
- EINVAL
Synchronized I/O is not supported for this file, or
opis not
O_SYNCor
O_DSYNC.
- ENOSYS
aio_fsync() is not implemented.
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).
SEE ALSO
aio_cancel(3), aio_error(3), aio_read(3), aio_return(3), aio_suspend(3), aio_write(3), lio_listio(3), aio(7), sigevent(7) | https://manpages.net/detail.php?name=aio_fsync | CC-MAIN-2022-21 | refinedweb | 118 | 52.56 |
Issues with using one large Entity Model:
I. Performance
II. Cluttered Designer Surface.
III. Intellisense experience is not great
When you generate an Edm model from a database with say 1000 tables, you will end up with 1000 different entity sets. Imagine how your intellisense experience would be when you type “context.” in the VS code window.
IV. Cluttered CLR Namespaces
Since a model schema will have a single EDM namespace, the generated code will place the classes in a single namespace. Some users have complained that they don’t like the idea of having so many classes in a single namespace.
Possible Solutions.
I. Compile time view generation.
II. Choosing the right set of tables
julie
You state above that "the prescriptive guidance from EF team is to pre-generate views for all EF applications."
If this is the case, then why do you not provide a better integration scenario in Visual Studio?
The steps that you suggest are not onerous, but they are also not obvious either.
I would expect Visual Studio to implement the best practice by default, but allow me to easily change it. In the next release of EF, could you please do the best solution by default?
Julie,
Out of the 3 folders in the zip, only one(SubsettingUsingForeignKeys) corresponds to the post today. The other two are for the second part of the post where I will go over type reuse with "Using". Since designer does not support "Using", the Edmx files would not be very useful for these.
I will try to share the Edmx file for the SubsettingUsingForeignKeys sample, but in the meantime you can put it together pretty easily from the CSDL, SSDL and MSL files, following the steps from Sanjay in this post.
Thanks
Srikanth
I was recently asked to propose solutions to resolve performance problems
I work with a model with more than 70 tables, and it will grow.
I think it would be great to be able to work with an EDM model the way we work with a database model in SQL Server. In SQL Server we are able to generate different diagrams, each describing some aspects of the relations. This could also be implemented in some way for the EDM diagram.
It may also be helpful to create boundaries inside the model, so we could work with either the whole model or only a part of it, while that part would still keep its relations with the other parts (tables in other parts).
Example slices:
OrderSlice, which consists of the tables: Order, OrderDetails, OrderStatus, OrderType, OrderHistory
ProductSlice, which consists of the tables: Product, productCategory, ProductFamily, ProductImages, ProductJme, ProductDescription, etc.
Does it make sense to implement all this in a future version of EF?
A bit more helpful than Elisa Flasko’s comment "Well, big entities are big entities…!" when someone asked this question at TechEd Europe recently.
Last week, a customer asked me how to solve a big EDM performance problem. In his case, his model was
More general information about Entity Framework runtime performance can be found at.
Weekly digest of interesting stuff
I worked with 250 tables in the Entity Model and cannot split it into 2 or more Entity Models. I always used pregenerated views, but the compile time is much too high. The runtime performance is good.
Is Microsoft planning a performance patch in the next months?
Entity Framework development lead Srikanth Mandadi calls this two-part article
Hey, we’re working with quite a large database and using edmgen2.exe to generate our edmx and .cs files. I found this link very helpful, as I didn’t know that pre-generating the views would actually speed everything up. It created an 80 MB .cs file which VS actually struggles to build. Once it’s built, though, development is much faster than it used to be. Every time we made a change and started up the web site, we used to have to wait ages before LINQ would respond.
I’d recommend to anyone to do this view-generation stuff before they work with LINQ to Entities on a day-to-day basis.
I hope in the next version a lot of the speed issues are addressed and this hidden stuff is made available as options or properties. Also that LINQ to Entities catches up with LINQ to SQL.
I’d say a good page to start from is this MSDN document: Performance Considerations
Does anyone know if there have been improvements for big database structures? Does VS 2010/.NET 4 handle them better?
What has changed with the new upcoming versions?
Thanks
Entity Framework is a piece of junk..
52 entities, 55 associations (Foreign keys)
The Validate step worked slower and slower.
Now it crashes both in VS and at run time …
EF is big and clumsy. I even wonder if it can be fixed. Many unnecessary features are built in
that should have been orthogonal to the framework.
I actually wanted to use LINQ to SQL – that’s a lean piece of software. But MS drops it and picks EF as the "winner".
I apologize for being this harsh but it’s ridiculous to consider a 50-100 tables system as being big. What’s a 500 tables system then?
I concur with Juan above …
Could you rewrite the PetShop demo with Entity Framework, so we can get a best-practice sample?
What’s wrong with the same type defined in multiple models? For example, with AdventureWorks, tables in the Person schema are related both to tables in the Sales and Human Resources schemas. Why not simply create two models, one for Sales and another for Human Resource, but with Person tables in both models? What are the problems with this approach?
Good information and a well-written blog post. Good luck, blogger man.
Upstream MEtadata GAthered with YAml (UMEGAYA)
Help! There is a bug that I cannot solve by myself. -- Charles
This proposal is for all packages, not just science packages. Please ignore the fields that do not apply to your package.
Contents
- Upstream MEtadata GAthered with YAml (UMEGAYA)
- Other Upstream metadata
Introduction
This is an effort to collect meta-information about upstream projects in a file called debian/upstream/metadata in the source packages maintained in a publicly accessible version control system (VCS), currently Subversion or Git. Since this information is directly accessed from the VCS, it can be updated without uploading the source packages to the Debian archive.
Umegaya is also the name of a draft collector system that is implemented on. Its source is available on git.debian.org and Branchable. It is used in to feed the data in the UltimateDebianDatabase.
Proof of principle
To make the DebianMed web sentinels use the UDD, fed from the debian/upstream/metadata via upstream-metadata.debian.net, to display bibliographic information about which academic article to cite when using our packages. This is currently done by collecting the information in the central file used to create the med-* metapackages. This work was announced in October 2012 in the Bits from Debian Pure Blends.
The Umegaya instance running at is collecting and organising debian/upstream/metadata and debian/copyright files as pools. Currently they are pushed daily in the QA team's Subversion repository's directory packages-metadata. A UDD importer consisting of a gatherer and a UDD module is in development.
The data about bibliographic information is loaded into the bibref table of the UltimateDebianDatabase. The following UDD query outputs all source packages featuring bibliographic information. (The join is needed to exclude references from packages that are not yet uploaded to the Debian package pool but are used in so-called blends prospective packages.)
SELECT distinct s.source from bibref b join sources s on s.source = b.source;
Syntax
This syntax is being formalised as DEP 12.
The debian/upstream/metadata file is in YAML format. In its simplest form, it looks much like the paragraph format used in Debian control files. Nevertheless, there may sometimes be unexpected behaviours; for instance, field contents that contain a colon have to be quoted in some cases. If in doubt, there are validators available, either online (Online YAML Parser) or on the command line (yamllint).
Fields
In alphabetic order. Let's try to use the same vocabulary as in DOAP as much as possible. Fields that are the same as in DOAP are followed by an asterisk.
- Archive
- When the upstream work is part of a large archive, like CPAN.
- ASCL-Id
Id number in the Astrophysics Source Code Library
- Bug-Database
- A URL to the list of known bugs for the project.
- Bug-Submit
- A URL that is the place where new bug reports should be sent.
- Cite-As
The way the authors want their software to be cited in publications. The value is a string which might contain a link in valid HTML syntax. (see discussion on Debian Science list)
- Changelog
- URL to the upstream changelog.
- Contact
- Which person, mailing list, forum,… to send messages in the first place.
- CPE
One or more space separated Common Platform Enumerator values useful to look up relevant CVEs in the National Vulnerability database and other CVE sources. See CPEtagPackagesDep for information on how this information can be used. Example: "cpe:/a:ethereal_group:ethereal"
- Donation
- A URL to a donation form (or instructions).
- FAQ
- A URL to the online FAQ.
- Funding
- One or more sources of funding which have supported this project (e.g. NSF OCI-12345).
- Gallery
- A URL to a gallery of pictures made with the program (not screenshots).
- Name *
- Upstream name of the packaged work.
- Other-References
- A URL to a upstream page containing more references.
- Reference
- One or more bibliographic references, represented as a mapping or sequence of mappings containing the one or more of the following keys. The values for the keys are always scalars, and the keys that correspond to standard BibTeX entries must provide the same content.
- Author
Author list in BibTeX-friendly syntax (separating multiple authors by the keyword "and" and using as few abbreviations in the names as possible, as proposed in).
- Booktitle
- Title of the book the article is published in
- DOI
- This is the digital object identifier of the academic publication describing the packaged work.
- Editor
- Editor of the book the article is published in
- Eprint
- Hyperlink to the PDF file of the article.
- ISBN
- International Standard Book Number of the book if the article is part of the book or the reference is a book
- ISSN
- International Standard Serial Number of the periodical publication if the article is part of a series
- Journal
- Abbreviated journal name [To be discussed: which standard to recommend ?].
- Number
- Issue number.
- Pages
- Article page number(s). [To be discussed] Page number separator must be a single ASCII hyphen. What do we do with condensed notations like 401-10 ?
- PMID
ID number in the PubMed database.
- Title
- Article title.
- Type
A BibTeX entry type indicating what is cited. Typical values are article, book, or inproceedings. [To be discussed]. In case this field is not present, article is assumed.
- URL
- Hyperlink to the abstract of the article. This should not point to the full version because this is specified by Eprint. Please also do not drop links to pubmed here because this would be redundant to PMID.
- Volume
- Journal volume.
- Year
- Year of publication
- Debian-package
- Optional: citation information can be restricted to some specific binary package of a multi-binary package if the reference is only concerning this package; Note: This is just a proposal and might change in the future
- Registration
- A URL to a registration form (or instructions).
- Repository
- URL to a repository containing the upstream sources.
- Repository-Browse
- A URL to browse the repository containing the upstream sources.
- Screenshots
One or more URLs to upstream pages containing screenshots (not screenshots.debian.net), represented by a scalar or a sequence of scalars.
- Security-Contact
- Which person, mailing list, forum,… to send security-related messages in the first place.
- Webservice
- URL to an web page where the packaged program can also be used.
Fields that are not recommended
Some fields are present in debian/upstream/metadata files, but have been introduced there only for exploratory purposes, so their use is not recommended in general, especially when their contents duplicate existing information from other packaging files.
- Homepage
- The packaged work's homepage.
- Watch
Currently it contains the main line of debian/watch. It is therefore assumed to be in format version 3. For surveying multiple locations, it could contain a YAML sequence.
Reserved fields
The following fields are used internally and must not be present in debian/upstream/metadata.
- ping
- Field to trigger the gatherer, with no output returned.
- YAML-ALL
- Used to dump the loaded record.
- YAML-URL
- Used to override the repository's URL provided by debcheckout.
- YAML-REFRESH-DATE
Used to deduce how long umegaya will ignore calls to refresh (to avoid hammering Alioth).
TODO: ignore them safely.
Discussion
Let's discuss here, on a mailing list (debian-med or debian-qa), or a discussion page, if available.
The data is not really Debian-specific; let's put it outside Debian and use mechanisms for Mapping package names across distributions.
To do: formalise the above using Config::Model::Backend::Yaml, and generate docs as explained in .
* In addition to ?DOAP, other Semantic Web ontologies/namespaces/schemas should be reused in order not to reinvent the wheel, and to enable such metadata to participate in the ?Semantic Web (see also Open Linked Data matters). As such, SPDX would be an interesting standard to link to, as well as ADMS.F/OSS, for package descriptions, IMHO. Syntactically, any form of RDF would be interesting to explicitly convey the prefixes in the field names... and I'm not sure it can be done in ?YAML -- OlivierBerger
Problems
BibTeX: Currently there is no way to let the braces ({}) that might be used to force capitalisation in BibTeX entries pass through from debian/upstream/metadata into BibTeX.
It seems that the Python library which is used to parse debian/upstream/metadata files for inclusion into UDD has a bug when values are of the form <d:d> (decimal_number colon decimal_number). You should enclose strings like this in single quotes (see the discussion on the Debian Med mailing list).
Template
Here is a template for a debian/upstream/metadata file which can be used to specify citations:
Reference:
  Author: <please use full names and separate multiple authors by the keyword "and">
  Title:
  Journal:
  Year:
  Volume:
  Number:
  Pages:
  DOI:
  PMID:
  URL:
  eprint:
Examples
You can find lots of examples using codesearch.debian.net.
Errors
The most common error is using the string ": " inside a YAML value, which is not allowed since it separates key-value pairs. So please quote such values or use a separate line.
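This failure mode can be sketched with Ruby's Psych parser (any YAML parser behaves the same way here; the example title is made up):

```ruby
require 'yaml'

# An unquoted value containing ": " looks like the start of a nested
# mapping, which is a syntax error in this position.
begin
  YAML.load("Title: BLAST: Basic Local Alignment Search Tool")
rescue Psych::SyntaxError => e
  puts "rejected: #{e.message}"
end

# Quoting the value makes it a plain scalar again.
doc = YAML.load("Title: 'BLAST: Basic Local Alignment Search Tool'")
puts doc['Title']
```

The same fix applies to Title, Journal and other free-text fields that may contain a colon followed by a space.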
Lintian check
Simon Kainz has written some preliminary lintian check to verify the syntax of debian/upstream/metadata files (see also 731340). Any testing is welcome. A simple lintian check for YAML syntax was implemented by Petter Reinholdtsen (see 813904).
Deprecated features
Hyphen shortcut for mappings
Only a subset of YAML is used: sequences are only expected to contain scalars, and mappings are only expected to contain a scalar or a mapping, with only one level of nesting.
In addition, two conventions that are not part of the YAML format are used:
- Field names are case-insensitive.
- Nested mappings are shortcuts for longer field names composed of both mapping field names separated by a dash. The following two examples are equivalent:
Foo:
  Bar: baz
Foo-Bar: baz
UDD loading through a YAML intermediate
The bibliographic data was refreshed daily at (URL not valid anymore) via a local cron job. As specified in config-org.yaml, it was retrieved by the script fetch_bibref.sh and loaded into the UDD as triples (package, key, value) by the bibref_gatherer.
Old names for the file debian/upstream/metadata
debian/upstream-metadata.yaml was first used and then shortened to debian/upstream. Migration from the old file name to the new file name can be handled by cme fix dpkg once a proper model for DEP-12 is created.
debian/upstream was then used until February 2014, when it was replaced by debian/upstream/metadata, so that debian/upstream/ became a directory usable by other programs, in particular uscan. See the archive of the debian-devel mailing list for details.
Other Upstream metadata
Edam files
The EDAM ontology provides some means to classify software used in bioinformatics. The Debian Med team intends to link all bioinformatics tools with the EDAM ontology. To approach this, the YAML file debian/upstream/edam can provide extra information.
Fields
- ontology
- EDAM (1.13) (currently version 1.13 is the latest EDAM version)
- topic
- EDAM topic
- scopes
- EDAM scopes
Good morning Lianne, I am glad that the answer helped you. The thought will upset you for a while; however, when this thought comes, I would like for you to change those thoughts to all the good things in the relationship, and eventually the thoughts will fade and the pain will go away. If the two of you stay together, there will be times for a while when the thoughts will return (during an argument, etc.), but those times will become less and less as the two of you work on building a strong, healthy relationship. Explain to your boyfriend that he will need to be patient with you during those times, and slowly but surely the memory will fade. One last thing is to put a plan in place to prevent this kind of thing happening again. Love one another and safeguard the relationship. If you need to talk with me again you can go to my profile and ask your question, or just address the question to Ja`Ree. It might also help to find someone you truly trust to talk with who can help you work through it if the thoughts and feelings become too
Problem is, I'm a lazy, lazy person, and have not been able to muster the energy to actually get writing, which leads me to this blog post - since I've not been updating the blog as I should either, I'll kill two projects with one meeting and make the actual development process open as well, as a series of blog posts and a repository at BitBucket.
For someone else to be able to follow the work, I obviously have to nail down what the goal of this exercise is:
* Create a tool that can expose a Python API in a RESTish fashion
* The API itself must not have to know about the tool
* It must run on at least CherryPy and two other webapp frameworks TBD (no, not Django)
* It must handle HTTP errors
* It must be able to encode data into JSON before returning it
* It must run on Python 3.2+
* It must not care what the proper definition of RESTful is
In addition, some good-to-haves:
* It may make linking between resources easier (if feasible)
* It may be able to use other data formats than JSON
* It may run on Python 2.7
Because I enjoy working with CherryPy since it's very good at staying out of my way, I'll start out writing for CherryPy and then generalize from there. Just to get started, I have created a minimal CherryPy app to work from, even though I'll split the tool from the framework (or the REST framework from the web framework?) later. The entire code looks like this:
import cherrypy

def requesthandler(*pathargs, **kwargs):
    cherrypy.response.status = "500 Server Error"
    return "Not implemented"

class PyRest(object):
    def index(self, *args, **kwargs):
        return requesthandler(*args, **kwargs)
    index.exposed = True

CONF = {
    'global': {
        'server.socket_host': '0.0.0.0',
        'server.socket_port': 8888,
    }
}

if __name__ == '__main__':
    ROOT = PyRest()
    cherrypy.quickstart(ROOT, '/', CONF)

def application(environ, start_response):
    cherrypy.tree.mount(PyRest(), '/', None)
    return cherrypy.tree(environ, start_response)
About me
Full Name: Álax de Carvalho Alves
GitHub: alaxalves
Gitlab: alaxalves
LinkedIn: Álax Alves -
Affiliation: Universidade de Brasília - UnB
Location: Brasília, Distrito Federal - Brazil
Organization: Digital Impact Alliance
Sub-Organization: Public Lab
Timezone: Brasilia Standard Time (GMT-3)
Telephone: +55 61 998 053 551
Project: DIAL: PublicLab's Spectral Workbench - Rails and DevOps Upgrades, and part of PublicLab's Mapknitter Image Export and Spectral Workbench upgrade
Introduction
I'm currently an undergraduate in Software Engineering at the University of Brasília (UnB), Brasília, Brazil, and I will get my degree in the middle of this year. The time I have spent in college has made me very passionate about Ruby on Rails, Open Source and especially DevOps. Contributing to a bigger context, especially a project with social impact, is one of the professional practices I'd like to keep up; among various other reasons I'll discuss, that's why I think this project fits right in for me. Along with PublicLab's wide community of contributors, I have identified some issues and needs for the Spectral Workbench project repository.
I have been an intern at LAPPIS (which stands for Advanced Laboratory of Production, Research and Software Innovation) for almost 5 years. This college laboratory is focused on applied research on DevOps and contributes heavily to Open Source projects in many areas, such as chatbots, IoT, data analysis, machine learning and social networks, all of them using Agile development methods. LAPPIS professors encourage a culture of collaborative effort between teachers and students, contributing actively to Free Software communities and using Agile practices. The laboratory has collaborated on numerous open source projects, such as Rocket.Chat, the Linux kernel, Debian, Noosfero and Rasa. Several students that worked in the laboratory have participated in GSoC, as both mentees and mentors, in past editions. I have played different roles throughout my experience at LAPPIS, such as Developer, Product Owner, Scrum Master and DevOps Developer.
Also I have been an active contributor throughout the PublicLab ecosystem of repositories - with significant contributions to PublicLab's Mapknitter, Plots2 and Spectral Workbench itself. Not only contributing with code and pull requests but being an active member by reviewing PRs, helping other fellow contributors, engaging in discussions and such.
Throughout these experiences, both in Open Source communities and in a physical laboratory, I had the privilege to contribute to social causes, collaborate in several open source projects, learn from maintainers and community leaders, improve my technical and interpersonal skills, teach newcomers and much more. Such experiences not only enriched my coding skills, but also my soft skills, such as the ability to work as a team, either remotely or pairing.
Project description
This kind of upgrading project requires a set of "pre-steps" in order to be done smoothly. The first step is obtaining stable test coverage, so my first task will be to increase and stabilize test coverage. Along with that I plan to improve the DevOps pipeline, by having a trusted Continuous Integration tool and a test environment that reflects our development environment. I have started some of this effort in.
I also have some ideas on improving the CI integration, such as breaking it into multiple runners to speed up the build -- which I have already done here ->. By achieving this we'll have not only a more trustworthy environment to work on, but also a more standardized project, since we have this type of CI settings in other PL projects, such as the Mapknitter and Plots2 repositories.
For this initial part of the project I have planned ~1 month to get everything properly set up.
After getting good test coverage, a proper CI and a solid development environment, we can start with the upgrade itself. It's widely embraced as a good practice to open smaller Pull Requests containing smaller but significant parts of the code, as in a granular/incremental upgrade. Not only will we get more stable code versions, but it will also feel like progress is being made. For that I'll count on quick reviews from fellow contributors to keep the project flowing constantly.
I have already started this effort by opening several PRs in the Spectral Workbench repository. For the Mapknitter Export upgrade project, the first step is to identify which parts of the export module need to be upgraded. Also, we need to define how the upgrades should happen and which new features or flows should be changed/included in the upgrade.
I could identify a few upgrades that can be done in the export module. One of them concerns the method def self.generate_perspectival_dist defined in the lib/exporter.rb file. This method is clearly doing a lot of things it "shouldn't" when it comes to one of Rails' core principles - keeping code readable, small and simple.
By going through the exporter module, the first step is breaking those huge methods into smaller ones, in order to get code that is easier to understand and even easier to maintain. This being done, we could reduce the number of params that are needed to execute the method in app/models/warpable.rb#L154, which can significantly improve the performance of the generate_perspectival_distort method defined in the Warpable model, since it runs on every image to be exported.
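To illustrate the kind of extraction I have in mind, here is a minimal sketch in plain Ruby (the class, struct and method names are hypothetical, not Mapknitter's actual API): a long export method is broken into short, individually testable steps.

```ruby
# Hypothetical sketch: one long export method split into named steps.
class ExportSketch
  Warpable = Struct.new(:width, :height)

  def initialize(warpables, resolution)
    @warpables = warpables
    @resolution = resolution
  end

  # Before: one method computed scale and canvas size inline.
  # After: each concern gets its own short, testable private method.
  def run
    { scale: average_scale, size: canvas_size }
  end

  private

  # Average image width per unit of resolution (made-up formula for the sketch).
  def average_scale
    @warpables.sum(&:width) / (@warpables.size * @resolution.to_f)
  end

  # Output canvas must fit the largest image in each dimension.
  def canvas_size
    [@warpables.map(&:width).max, @warpables.map(&:height).max]
  end
end

sketch = ExportSketch.new([ExportSketch::Warpable.new(100, 80),
                           ExportSketch::Warpable.new(200, 160)], 10)
p sketch.run
```

Each extracted method can then get its own unit test, which is exactly what makes the later Rails upgrades safer.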
The same idea applied to the def self.generate_perspectival_dist method can be used for the
def self.run_export(user,resolution,export,id,slug,root,average_scale,placed_warpables,key),
def self.generate_composite_tiff(coords, origin, placed_warpables, slug, ordered) and
def self.distort_warpables(scale, warpables, export, slug) methods.
Along with this code refactoring, some features can be worked on, such as collecting a set of image URLs and their corner coordinates, determining the image dimensions in pixels, and converting corner coordinates to pixel positions. In other words, given a collection of warped images, calculate the pixel positions of the images relative to each other. This would involve some refactoring at
app/models/map.rb file. Some other performance related features, such as producing an SVG artifact containing images at relative positions for less memory usage, might be discussed with the mentors as the project flows - the idea list is extensive.
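To make the pixel-position idea concrete, here is a minimal sketch in plain Ruby (the input shape and the degrees-per-pixel scale are made up for illustration; Mapknitter's real data model differs): given each image's north-west corner in degrees and a shared scale, compute each image's pixel offset relative to the collection origin.

```ruby
# Hypothetical sketch of "corner coordinates to pixel positions":
# place all images relative to the west-most / north-most corner.
def pixel_positions(images, degrees_per_pixel)
  origin_x = images.map { |i| i[:west] }.min
  origin_y = images.map { |i| i[:north] }.max # image y grows downward

  images.map do |i|
    { x: ((i[:west] - origin_x) / degrees_per_pixel).round,
      y: ((origin_y - i[:north]) / degrees_per_pixel).round }
  end
end

images = [
  { west: -71.10, north: 42.37 },
  { west: -71.09, north: 42.36 },
]
p pixel_positions(images, 0.001)
```

A step like this, kept free of ImageMagick calls, is cheap to unit-test and would feed the compositing stage with ready-made offsets.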
I think it's possible to accomplish both projects: I am already familiar with the Mapknitter code, since I've been a heavy contributor, and Spectral Workbench has a small codebase in comparison to Mapknitter's and Plots2's, so the Rails upgrade is going to be less painful.
Abstract/summary (<20 words):
Improve the project's DevOps workflow and proceed gradually with the Rails version upgrades for Spectral Workbench, in order to get more stable versions. Along with that, the Mapknitter export idea is to run the exporting process as a scalable web service (which has been recently achieved), but also to work on the performance of the Export module, by doing some refactoring and including the previously defined features with it.
Problem
Being part of the great PublicLab community has gotten me acquainted with its coding and organization practices and patterns. Contributing to Spectral Workbench and the Mapknitter projects made me realize what powerful tools they are, but also made me take notice of some issues I'd like to attack in this project besides the upgrade itself. When it comes to the Spectral Workbench project, the main goals are:
1 - Better/more stable control of its Ruby and JavaScript dependencies, by switching its dependency manager from Bower to Yarn and locking gem versions in the Gemfile.
2 - Containerize the development environment using Docker's modern tooling. Effort started in.
3 - Include and configure a tool to monitor Test Coverage, such as SimpleCov.
4 - Improve the continuous integration tool project relation by scaling its running pipelines and adding test scripts. Effort has started in
5 - Update the entire code base to newer Ruby language syntax and Rails framework syntax; this will remove all, or in the worst case most, of the deprecation warnings.
6 - Standardize Spectral Workbench code and practices by setting a ruby coding stylesheet as in Mapknitter and Plots2 projects.
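To give an idea of the coverage tooling mentioned in item 3, a minimal SimpleCov configuration is just a few lines at the top of the test helper. This is a sketch: the filters and the minimum_coverage threshold shown here are hypothetical and would be agreed with mentors.

```ruby
# test/test_helper.rb -- must run before any application code is loaded
require 'simplecov'

SimpleCov.start 'rails' do
  add_filter '/vendor/'   # don't count third-party code toward coverage
  minimum_coverage 80     # hypothetical threshold; fail the build below it
end
```

With this in place, every CI run produces a coverage report, and the build fails if coverage drops below the agreed threshold.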
Summarizing, the major milestones are increasing the test coverage, then creating a more stable workflow pipeline by updating the CI configuration and the app's environments, and then moving forward with the gradual Rails upgrades.
In order to get a more fluid development schedule, a plan that I developed during my contributions to the Mapknitter project can be replicated for the Spectral Workbench upgrading project. Since merging a PR into Mapknitter's main branch was taking a little longer than expected, we adopted a new strategy in order to work faster. Along with the mentors we came up with the following plan: open a pull request from a development branch to the main branch, to make our work more transparent to the mentors and to ourselves, thus making it easier to review, since every change will be in a single pull request. This strategy has proven to be very effective, since it made it possible to stay ahead of schedule in the Mapknitter project.
When it comes to the Mapknitter image export project, the main goals are:
1 - Identify with mentors which parts of the Exporting module should be upgraded and how; also, which features or flow changes can be done along with the upgrade.
2 - Refactor the existing Export-related modules and methods.
3 - Change the Export-related method callbacks spread around the Mapknitter code.
4 - Work on significant performance upgrades, by replacing outdated code and setting up a new image exporting flow - new callbacks and processors.
5 - Work on new features related to the image exporting process, previously defined with mentors.
By the end of the GSoC program we should have a very stable Spectral Workbench environment, certified by the increased test coverage and improved CI configuration, with stable working environments and properly updated to the latest Rails; and an improved Mapknitter image export flow with significant performance upgrades and newly designed features.
Timeline/milestones
Detailed Timeline
Proposal Review Period (March 31, 2020 - May 3, 2020)
In this phase I intend to study Spectral Workbench's code base more deeply, so I can get used to its goals, functionalities, patterns, code styling, test workflow (fixtures, factories), etc. It is also important to get more familiar with Spectral's test suite to better understand their similarities and differences, and to plan a strategy with the community for increasing the test coverage, which is my first goal. Meanwhile, I will keep working on obtaining a stable development environment.
Regarding the Mapknitter export project I plan on identifying along with mentors the parts of the Image Exporting module that should be upgraded and how we will accomplish that. Also, which features or flow changes can be done along with the upgrade. By doing this “investigation” and planning job on this part of the calendar we’ll certainly be able to complete the task on schedule or even ahead of it.
Community Bonding (May 4, 2020 - June 1, 2020)
In this period I'd like to get to know the PublicLab community better and make myself known in it, since there are newcomers every day, and also get familiar with the interested parties (stakeholders) of the Spectral Workbench and Mapknitter projects.
This is important because PublicLab maintains a massive project structure and a vast community, and a small change in any project could affect many people, so I want to be aligned with the organization's and project's intents. It's a good plan since I could replicate existing work in PublicLab's ecosystem, thus having a more standardized collection of projects - which makes it easier for newcomers to contribute. It'll also be easier to get acquainted with the organization's goals.
Along with that, I will still be studying the tools that could be used and the codebase. I may even solve some small issues in the repository. This way I will make myself comfortable with the code, will see a clearer path to achieving my GSoC proposals, and will also build the PublicLab community's trust in my work. This schedule applies both to the Spectral Workbench and the Mapknitter Image Exporting project.
At this part of the project I plan on already having everything defined for the Mapknitter Image Exporting project: which parts should be upgraded and how, and which features or exporting flow changes will be done along with the upgrade.
Coding Period (June 1, 2020 - August 31, 2020)
Week 1 (June 1, 2020 - June 7, 2020)
In this period, the first step on the Spectral project would be obtaining a stable development environment (using Docker), then increasing Spectral Workbench's test coverage. Along with my mentor we could set a target test coverage percentage in advance, and then the new tests will be submitted as Pull Requests for community evaluation, so we can be confident about the next steps we'll take.
Formalizing a plan through a GitHub issue is also a goal for this part of the project: writing down what will be accomplished in the Exporting project and how, so that the community can discuss and give their input. Also, I plan to start refactoring the first pieces of the Export module.
Weeks 2-3 (June 8 - June 14, 2020)
The next step on Spectral would be upgrading/improving the existing development/test/production environments along with the CI/CD pipelines. We could achieve that by creating stable Docker environments and later a stable CI/CD pipeline covering the entire workflow. The plan is also to start the Rails 3.2 -> Rails 4.2 upgrade in parallel.
In the Mapknitter Exporting project, I intend to keep refactoring the existing Export-related modules and methods and, in parallel, change the Export-related method callbacks spread around the Mapknitter code.
Week 4-5 (June 15 - June 29, 2020)
The refactoring of the existing Export-related modules, methods and callbacks spread around the Mapknitter code continues. A WIP pull request will be opened so that reviewers can give their input as fast as they can.
At the Spectral project, after having efficient test coverage, stable environments and a stable DevOps pipeline, we have a trustworthy environment. Upgrading to Rails version 4.2.8 is the most logical next step, because we'll be able to obtain a stable project version faster and more easily (some of the benefits of gradual upgrades). Along with this upgrade we can start locking the gem versions to the proper ones. A WIP pull request will also be opened on Spectral so that reviewers can give their input as fast as they can.
Evaluation 1 (June 29 - July 5, 2020)
At this first milestone, the plan is having Spectral Workbench project with increased test coverage, stable dockerized environment, stable and improved CI/CD pipelines and running on Rails version 4.2.8.
By this point in the schedule, the existing Export-related modules and methods, along with the callbacks spread around the Mapknitter code, should be entirely improved.
Weeks 6-7 (July 6 - July 19, 2020)
In the second phase, after the feedback from the community is in, code refactoring or even minor improvements could be suggested and added. After everything is properly reviewed we could start working on the upcoming upgrade from Rails 4.2.8 to Rails 5.0.1 and open a WIP-status pull request. Along with that we can start switching our JavaScript dependency manager, removing Bower and setting up Yarn.
For the Mapknitter Export project, after I receive the community feedback I can start working on the suggestions, and then begin the work on the performance upgrades, by replacing outdated code and setting up a new image exporting flow - all new callbacks and processors. A third-party library for such work could also be considered for integration. Since the new features will already be planned and established, I intend to start working on them as well.
Week 8 (July 20 - 26, 2020)
By this time in the schedule, we'll have achieved significant performance upgrades in the Export module, with new methods and callbacks, and work on new features for it will be underway.
In Spectral, after the code is reviewed, we need to get feedback from users and/or maintainers of the PublicLab community. Any possible bug should be fixed here, and this period can also be used as contingency time to work on unfinished issues. If nothing critical appears we could start working on the Rails 5.0.1 to Rails 5.2.2 upgrade. The goal is having Spectral running on Rails 5.2.2 by the end of this period.
Evaluation 2 (July 27 - August 2, 2020)
At this milestone Spectral Workbench will have a stable, functional version running on Rails 5.2.2. In the Mapknitter project we should have the exporting modules completely improved, and a couple of new features and the new export flow should already be set up. Pull requests will already be open, allowing the community to review and suggest changes for both projects.
Any eventual bug or improvement will be either discussed or taken care of. Also, for Spectral I intend to start switching our asset precompiler from Sprockets to Webpacker, in order to follow a more up-to-date pattern.
Weeks 9-10 (August 3 - August 16, 2020)
In the third phase we will start upgrading from Rails 5.2.2 to Rails 6.0.0; worth mentioning that a WIP-status PR will be opened so that the PublicLab community stays in touch with the work being done. Any eventual new features for Mapknitter's exporting project will be developed in these later parts; a WIP pull request will be opened for these in order to get constant reviews and feedback from the community.
Any eventual bugs should be fixed, and some missing tests should be written, for example controller, model and/or system tests. At this point the source code should be stable enough for production, already with the new JavaScript dependency manager, Yarn - this regarding the Spectral upgrade project - and the work on replacing Sprockets should still be going on in parallel as well.
Week 11-12 (August 17 - 30, 2020)
In the last phase, Rails 6.0.0 with Webpacker will be completely set up for Spectral; worth mentioning that a WIP-status pull request will be opened so that the PublicLab community stays in touch with the work being done. Hopefully we'll have the Rails 6 Debian package already done in order to ship Spectral to production with the latest Rails.
The plans for the Mapknitter Image Export project include having the export modules completely refactored, with readable methods that perform significantly better than the older ones. This refactoring will also bring new features to the exporting module, such as generating an SVG artifact containing images at relative positions, among others.
This final part of the project - last two weeks - could also be used as a contingency time. If anything goes wrong or takes more than planned, this period of time should be used to work on these problems. Any final feedback should be given at this point, and a final Pull Request should be opened.
Students Submit Code and Final Evaluations (August 31 - September 7, 2020)
The final release for Mapknitter’s Export module should contain:
- Export module refactored into more readable code
- Export module with better-performing methods
- Pixel positions calculation of image collections feature
- Producing SVG artifact containing images at relative positions feature
- Models and Controllers with exporting-related callbacks completely refactored
- New, better-performing image exporting flow
The final release for Spectral Workbench should contain:
- Test coverage tool set up and increased test coverage.
- SW dependencies and core code updated to the latest Rails version.
- Switch of deprecated side-tools, such as Bower and Sprockets.
- Deprecated dependencies and pieces of code removed.
- CI/CD integration and pipeline improved.
- Docker and Docker Compose set for all environments.
Summarized Timeline for Spectral Workbench project
Other Commitments
I currently have no ongoing college activities due to the coronavirus crisis. The forecast for a decrease in the number of infections in my country is for September, so I think that at least until June/July I will be working only on this project; if college activities resume, I would still have around 30 hours of weekly availability.
Needs
I'd like to have constant validation from the PublicLab team; this means having a solid communication channel and a great code review policy. I intend to break this single update into several smaller ones, and reviewing the consequent pull requests is essential to get this done smoothly.
Contributions to PublicLab
Contribution to PublicLab's Spectral Workbench
Several contributions to PublicLab's Mapknitter
Some contributions to PublicLab's Plots2. I have been programming for quite a long time now, specifically since my 3rd semester in college (2016), which dates back 3-4 years now. Most of the projects I have been involved in use Ruby as the main language and Rails as the main framework, either as an API or a web application.
It is worth mentioning the projects I have recently worked on.
- Mapknitter: Mapknitter is a free and open-source software created and run by Public Lab. It lets people upload their own aerial images, position them in a web interface over some existing map data, share it, and export for print. I have worked in Mapknitter during GSoC 2019, with its Rails upgrades, besides that a lot has also been accomplished, such as improved CI/CD pipelines, modern side-tools, old code removal, and etc. Link:
-, developers:
- SMI-Slave: The Sistema de Monitoramento de Insumos of the Universidade de Brasília (SMI-UnB) is a web application developed to assist in the monitoring and management of Universidade de Brasília's power consumption and distribution. The idea is to monitor, collect and display data on each campus power supply, allowing a much better comprehension of the usage patterns and the energy quality received from the distribution station. SMI is divided into three layers. The slave layer is responsible for communication with energy transductors and data collection. Link:
- SMI-Master: The master layer, which is responsible for all the data management, data processing, and database redundancy. Link:
- SMI-Front: The presentation layer, which holds the front-end of the application, including the dashboard for researchers. Link:
- Analizo: I'm pretty passionate about shell and bash scripting, which I like to use mostly for automation. Link:
- Owla: For Owla I had the opportunity to participate in software development using the Rational Unified Process (RUP) and later Scrum/XP. Owla is an open source online tool to aid both teachers and students in improving their experiences in classes. Some of its features include real-time Q&A, real-time slide sharing - both using Rails' WebSockets - comments and forums. Link:
- to the Klara API.
Link:
- Klara API: Klara API is also part of the IoT context. I participated in this project as a developer only. Link:
- A Pygame 2D game engine built in Python, in which I participated as a developer only. Link:
- Simian website:
- Pan-Pan!: I participated in this project as a developer only. Pan Pan! is a web application that aims to aid music band management, so that bands can improve their performance. Link:
- Graskell: I have participated in this project as a developer only. A small Haskell library that includes some graphs related functions. Link:
Significant Contributions
Here are some contributions for some diverse Open Source projects I consider most relevant:
Mapknitter's Rails upgrade:
During Google Summer of Code 2019 I had the opportunity to get to know the wide PublicLab ecosystem of projects; I fell in love with the community, and they welcomed me very well. For Mapknitter's upgrading project I got the chance to work not only with Ruby on Rails itself, but also with JavaScript and bash scripting. Along with a fellow developer I got the chance to learn a lot, and we were free to plan our schedule and execute it. We successfully upgraded Mapknitter from Rails 3.2 to Rails 5.1.6. Unfortunately we could not merge the Rails 6 upgrade because we could not ship it to production, since our production environment is restricted to Debian packaging. See:
A small resume of the trajectory can be found at:
Noosfero's Rails upgrade:
Along with my colleagues, I had the opportunity to upgrade this huge piece of software to the latest Rails at the time (Rails 5.1.6); along the way I had to fix around 8000 (EIGHT THOUSAND) tests.
SMI-Front's setting of Server-Side-Rendering:
Since there was a need to replace Client-Side rendering with Server-Side rendering, I took on the mission to accomplish the change. The results are in the link below.
SMI-Slave's script to automatically setup everything that was needed in a Raspberry Pi:
We always had to take several identical steps in order to properly install everything that was required on the various Raspberries we had. What I did here was gather all those steps in a script, and now with a single command you can install everything we need for SMI-Slave to run properly on a Raspberry Pi.
Dontfile's upgrade to Rails 5.2.2:
Dontfile is a small piece of software, so upgrading it to Rails 5.2.2 has given me some experience - certainly not a painful one, but still a chance to learn more.
Noosfero's Code Quality tool and Rubocop stylesheet setup:
Here I have set GitLab's default quality tool to check the code for quality downgrades, and I have also improved the Rubocop checks. I also worked on keeping the API and jobs served in parallel, setting Redis to cache JSON data, and contributed to Mapknitter's dockerized production environment; through that I have also improved the Travis CI settings and consequently mapped some asset precompilation issues, dependency versioning issues, and so on. Despite not being merged into the master branch, the work I have done here has been used in subsequent merge requests.
Analizo's Doxyparse setup of auto GitHub release creation process:
In Analizo's parser module Doxyparse I have implemented an entire automated workflow to generate a new Doxyparse version, using mostly shell scripts. [...] and the PublicLab community. Both work heavily with Open Source projects, in many tech areas such as IoT, chatbots, Machine Learning, Web Design, Data Analysis, etc.
More specifically regarding PublicLab, my entire Mapknitter project has been collaborative, I worked asynchronously with a fellow student named Kaustubh Nair (@kaustubh-nair on GitHub) throughout the whole project, and it turned out to be a very successful project.
I have also previously experienced working in a startup company, it has also been a great experience, working in a multidisciplinary team.
Passion
I have been working with the PublicLab community for almost a year now, and what I have loved the most is how close the core developers get to the new contributors and how much they trust us developers. My mentors have given me so much freedom to work that it kept me motivated to always do more, and with more quality. But one of the main things that made me propose to PublicLab is the idea of using my tech knowledge to help keep science free and accessible to all.
Audience
As a big fan of Open Source technologies, I intend to use only free software and technologies that are accessible to the community.
Commitment
I truly understand this kind of commitment. That's why I intend to give my full dedication and commitment to this project. I know it will be a wonderful contribution not only to the PublicLab community but also to my future professional career.
11 Comments
@cess @bansal_sidharth2996 @warren @gauravano @sashadev-sky Could you guys please give your input here?
This is a really well crafted proposal. With your track record on MapKnitter and PublicLab.org, we have confidence in your ability to do it!
My worry is -- is there any way to bring your DevOps experience closer to our high priority projects? SWB is relatively dormant and stable right now, to be honest. Are there optimizations to the MapKnitter exporter system that could be made, esp. with tuning the containerized export processes?
I'm mainly thinking about this because we are likely to have significantly fewer slots this year, and will likely face some tough choices when we get to project selection. No reflection on the quality of the proposal, which we think is great on its own merits!
#include <ucred.h>

ucred_t *ucred_get(pid_t pid);
void ucred_free(ucred_t *uc);
uid_t ucred_geteuid(const ucred_t *uc);
uid_t ucred_getruid(const ucred_t *uc);
uid_t ucred_getsuid(const ucred_t *uc);
gid_t ucred_getegid(const ucred_t *uc);
gid_t ucred_getrgid(const ucred_t *uc);
gid_t ucred_getsgid(const ucred_t *uc);
int ucred_getgroups(const ucred_t *uc, const gid_t **groups);
const priv_set_t *ucred_getprivset(const ucred_t *uc, priv_ptype_t set);
pid_t ucred_getpid(const ucred_t *uc);
projid_t ucred_getprojid(const ucred_t *uc);
zoneid_t ucred_getzoneid(const ucred_t *uc);
uint_t ucred_getpflags(const ucred_t *uc, uint_t flags);
m_label_t *ucred_getlabel(const ucred_t *uc);
size_t ucred_size(void);
These functions return or act on a user credential, ucred_t. User credentials are returned by various functions and describe the credentials of a process. Information about the process can then be obtained by calling the access functions. Access functions can fail if the underlying mechanism did not return sufficient information.
The ucred_get() function returns the user credential of the specified pid or NULL if none can be obtained. A pid value of P_MYID returns information about the calling process. The return value is dynamically allocated and must be freed using ucred_free().
The ucred_geteuid(), ucred_getruid(), ucred_getsuid(), ucred_getegid(), ucred_getrgid(), and ucred_getsgid() functions return the effective UID, real UID, saved UID, effective GID, real GID, saved GID, respectively, or -1 if the user credential does not contain sufficient information.
The ucred_getgroups() function stores a pointer to the group list in the gid_t * pointed to by the second argument and returns the number of groups in the list. It returns -1 if the information is not available. The returned group list is valid until ucred_free() is called on the user credential given as argument.
The ucred_getpid() function returns the process ID of the process or -1 if the process ID is not available. The process ID returned in a user credential is only guaranteed to be correct in a very limited number of cases when returned by door_ucred(3C) and ucred_get(). In all other cases, the process in question might have handed off the file descriptor, the process might have exited or executed another program, or the process ID might have been reused by a completely unrelated process after the original program exited.
The ucred_getprojid() function returns the project ID of the process or -1 if the project ID is not available.
The ucred_getzoneid() function returns the zone ID of the process or -1 if the zone ID is not available.
The ucred_getprivset() function returns the specified privilege set specified as second argument, or NULL if either the requested information is not available or the privilege set name is invalid. The returned privilege set is valid until ucred_free() is called on the specified user credential.
The ucred_getpflags() function returns the value of the specified privilege flags from the ucred structure, or (uint_t)-1 if none was present.
The ucred_getlabel() function returns the value of the label, or NULL if the label is not available. The returned label is valid until ucred_free() is called on the specified user credential. This function is available only if the system is configured with Trusted Extensions.
The ucred_free() function frees the memory allocated for the specified user credential.
The ucred_size() function returns sizeof(ucred_t). This value is constant only until the next boot, at which time it could change. The ucred_size() function can be used to determine the size of the buffer needed to receive a credential option with SO_RECVUCRED. See socket.h(3HEAD).
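As an illustration of the typical call sequence (allocate with ucred_get(), read attributes, release with ucred_free()), here is a minimal, hypothetical sketch. It uses only functions from the SYNOPSIS above, but it is Solaris/illumos-specific and not compiled here:

```c
#include <ucred.h>
#include <stdio.h>

int main(void)
{
    /* Credential of the calling process; must be freed with ucred_free(). */
    ucred_t *uc = ucred_get(P_MYID);
    if (uc == NULL) {
        perror("ucred_get");
        return 1;
    }

    /* Each accessor can fail independently; -1 means "not available". */
    printf("pid:  %ld\n", (long)ucred_getpid(uc));
    printf("euid: %ld\n", (long)ucred_geteuid(uc));
    printf("zone: %ld\n", (long)ucred_getzoneid(uc));

    const gid_t *groups;
    int ngroups = ucred_getgroups(uc, &groups);
    if (ngroups >= 0)
        printf("ngroups: %d\n", ngroups);

    /* Frees the credential; this also invalidates the groups pointer above. */
    ucred_free(uc);
    return 0;
}
```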
See DESCRIPTION.
The ucred_get() function will fail if:
EAGAIN
There is not enough memory available to allocate sufficient memory to hold a user credential. The application can try again later.
EPERM
The caller does not have sufficient privileges to examine the target process.
EMFILE
The calling process cannot open any more files.
ENOMEM
The physical limits of the system are exceeded by the memory allocation needed to hold a user credential.
ESRCH
The target process does not exist.
The ucred_getprivset() function will fail if:
EINVAL
The privilege set argument is invalid.
The ucred_getlabel() function will fail if:
The label is not present.
The ucred_geteuid(), ucred_getruid(), ucred_getsuid(), ucred_getegid(), ucred_getrgid(), ucred_getsgid(), ucred_getgroups(), ucred_getpflags(), ucred_getprivset(), ucred_getprojid(), ucred_getpid(), and ucred_getlabel() functions will fail if:
EINVAL
The requested user credential attribute is not available in the specified user credential.
See attributes(5) for descriptions of the following attributes:
getpflags(2), getppriv(2), door_ucred(3C), getpeerucred(3C), priv_set(3C), socket.h(3HEAD) , attributes(5), labels(5), privileges(5) | https://docs.oracle.com/cd/E36784_01/html/E36874/ucred-getgroups-3c.html | CC-MAIN-2021-17 | refinedweb | 717 | 52.7 |
TCP is a reliable protocol that works over a point-to-point connection. A virtual channel is created for reliable communication, which ensures that no data packet is lost. This article introduces socket programming in Java with the help of the java.net package and its ServerSocket and Socket classes.
TCP is a protocol well known for its reliability of data transmission. Its operation starts with a handshake (the well-known three-way handshake) in which the server and client both agree on the start of communication; a virtual point-to-point channel is then created between them to start the transmission. TCP stands for Transmission Control Protocol.
It is reliable because it ensures that data is delivered at the other end; at the lowest level it uses acknowledgements. However, while working with Java we don't need to worry about that level of detail. The Java programming language provides the Socket and ServerSocket classes, which take care of all the low-level matters; you only need to worry about the application configuration.
Following are the steps need to be taken in order to establish a transmission channel and transmit data between the server and client using Socket and ServerSocket.
1. Create a ServerSocket, which declares itself on a specified port number.
2. The client creates a Socket object, which tries to contact the server on the port on which the server declared its service.
3. The server needs to accept the communication request from the client; for this purpose we can use the accept() method of ServerSocket, which returns a reference to a Socket used for further communication.
4. The Socket class provides access to input and output streams, which in turn may be used by either server or client to send and receive data from each other.
5. getInputStream() and getOutputStream() are the two methods provided by the Socket class to obtain these streams for the channel.
If you understand the theory, now is the time for a practical example. For this purpose we will use ServerClass and ClientClass. These are, as the names suggest, the server and the client respectively, which will transmit and receive messages along the channel. We have placed some output statements to show how the complete process goes.
package networking;

import java.io.*;
import java.net.*;

public class ServerClass {
    public static void main(String[] args) {
        try {
            System.out.println("creating server socket..");
            ServerSocket ss = new ServerSocket(5000);
            System.out.println("created server socket");
            System.out.println("waiting for connection");
            Socket cs = ss.accept();
            System.out.println("connection accepted");
            PrintWriter bw = new PrintWriter(cs.getOutputStream(), true);
            BufferedReader br = new BufferedReader(new InputStreamReader(cs.getInputStream()));
            bw.println("Hi Client!");
            bw.flush();
            System.out.println(br.readLine());
        } catch (Exception i) {
            System.out.println(i);
        }
    }
}
Note: Exception handling is necessary, as exceptions may occur here. These are checked exceptions, so the compiler will tell you why you need to handle them.
So as we see in the program, there is a ServerClass which declares itself on port number 5000. Any client that wants to may connect to this server on this port. After declaring its existence, the server program will wait for incoming connections. When we run the client program, the server will accept the connection using the accept() method and proceed.
As you can see in the program, the Socket object we get in return from the accept() method is stored in the cs reference, and on this we call getOutputStream() and getInputStream(), which return the respective streams. We wrap them in a PrintWriter and a BufferedReader to take advantage of the underlying streams.
We have used println() and flush() to transmit data on the network channel and readLine() to read from it. There are other methods too, which you can use according to your needs.
The client-side program is quite similar, except that there is no ServerSocket object, and the server is identified by its host name (here "localhost") and port.
package networking;

import java.net.*;
import java.io.*;

public class ClientClass {
    public static void main(String[] args) {
        try {
            Socket s = new Socket("localhost", 5000);
            PrintWriter bw = new PrintWriter(s.getOutputStream(), true);
            BufferedReader br = new BufferedReader(new InputStreamReader(s.getInputStream()));
            bw.println("Hi Server!");
            bw.flush();
            System.out.println(br.readLine());
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
Everything is nearly the same. So now let us run these programs, which produces the following output.
Step 1: Run server program
creating server socket..
created server socket
waiting for connection
Step 2: run the client program
Hi Client!
Step 3: check back the server side for what change we see in output
creating server socket..
created server socket
waiting for connection
connection accepted
Hi Server!
You see the result: after we run the client program, the server, which was waiting for a connection, accepts the incoming connection from the client and proceeds further with the program.
This is quite a lot to know about socket programming in Java. If you want to know how to manage running these two programs simultaneously without terminating one of them, please read on..
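One common answer to that question is to run the server in a background thread inside the same JVM. The sketch below is illustrative rather than part of the original article (the class name EchoDemo and the use of port 0, which asks the OS for a free port, are our own choices); it exchanges the same two greetings as the programs above:

```java
import java.io.*;
import java.net.*;

public class EchoDemo {
    // Starts a one-shot server thread, connects a client to it, and
    // returns the line the server received from the client.
    public static String run() throws Exception {
        ServerSocket ss = new ServerSocket(0); // port 0: OS assigns a free port
        final String[] received = new String[1];

        Thread server = new Thread(() -> {
            try (Socket cs = ss.accept();
                 BufferedReader br = new BufferedReader(
                         new InputStreamReader(cs.getInputStream()));
                 PrintWriter pw = new PrintWriter(cs.getOutputStream(), true)) {
                received[0] = br.readLine();   // "Hi Server!"
                pw.println("Hi Client!");      // reply so the client can finish
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        server.start();

        try (Socket s = new Socket("localhost", ss.getLocalPort());
             PrintWriter pw = new PrintWriter(s.getOutputStream(), true);
             BufferedReader br = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            pw.println("Hi Server!");
            br.readLine();                     // wait for "Hi Client!"
        }

        server.join();                         // also makes received[0] visible
        ss.close();
        return received[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println("server saw: " + run());
    }
}
```

Compared with the two-terminal approach, this keeps the example self-contained and avoids hard-coding port 5000.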
This tutorial was made by Stephen Rodriguez — you can follow him on Twitter!
Typescript is an effective and powerful programming language created by Microsoft. As a superset of the Javascript language, Typescript provides developers the ability to create strongly typed Javascript code. Although most object-oriented languages like Java have this feature supported as part of their core, other languages like Python and Javascript do not.
Typescript is the missing piece in a very long battle to provide better clarity and quality of the Javascript language itself. Typescript, at its very core, is Javascript but extends the language to provide us support for defining variable types, generics, classes, interfaces, namespaces, and so much more.
Code editors like VS Code make it easy to begin using Typescript today and provide a broad range of documentation and tools to ease use. I strongly recommend taking a look at the Typescript homepage to learn more about what is Typescript and how you can begin using it today.
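As a small, self-contained taste of those features (the names below are illustrative, not from any particular project), here is a snippet using a type annotation, an interface, and a generic. A call that violates the declared types is rejected when the code is transpiled, before it ever runs:

```typescript
interface User {
  name: string;
  age: number;
}

// Parameter and return types are checked at transpile time.
function greet(user: User): string {
  return `Hello, ${user.name} (${user.age})`;
}

// Generics preserve type information across calls:
// first([1, 2, 3]) is known to return a number.
function first<T>(items: T[]): T {
  return items[0];
}

const alice: User = { name: "Alice", age: 30 };
console.log(greet(alice));         // Hello, Alice (30)
console.log(first([1, 2, 3]) + 1); // 2

// greet({ name: "Bob" });         // compile-time error: 'age' is missing
```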
Why TS and not another strongly-typed language like Flow or Better.js?
Of course, there are other options out there but would coding be fun if there were no competitors? Thus, we need to focus on what problem all of these languages are trying to solve. At Thinkster, we believe that Typescript, amongst its competitors, has proven itself to be the right tool for this problem. The problem can be described as the following:
We need a framework to enhance the Javascript language to give developers the ability to create highly-defined, and strongly-typed codebases with as little complexity as possible
That problem description is quite lengthy, but in short, we need a language that does more than just enhance the Javascript language: it also needs to help us write better code. And that is why our friends at Google chose Typescript as their language of choice for Angular 2.
What is a Transpiler?
In short, a Transpiler is a tool that converts code from one source language to another. Transpilers, also known as "source-to-source" compilers, are not the same as compilers. Compilers have been around for ages; their purpose is to convert a higher-level language like Java into a lower-level language like Assembly. A Transpiler, on the other hand, is not like this at all: it does not lower the level of the code, but rather translates it between languages at a similar level of abstraction.
Build Assembly Error
Hello.
The game is being developed.
Unity 2017.2.0.0p2 and Visual Studio 2017 ver 15.4.4 and .Net 4.7.02556
The Unity editor plays well.
But the build fails with errors.
r.cs(762,9): error CS0246: The type or namespace name 'RijndaelManaged' could not be found (are you missing a using directive or an assembly reference?)
Assets\SteamVR\Plugins\openvr_api.cs(1640,49): error CS0234: The type or namespace name 'PlatformID' does not exist in the namespace 'System' (are you missing an assembly reference?)
UnityEditor.BuildPlayerWindow+BuildMethodException: 90)
Is there any solution?
For your information, my computer has a different version of Unity and Visual Studio installed as well, in addition to Visual Studio 2015.
Answers
Are you using a dll? And is that dll set to be compiled with Windows Store and x86? And does that dll support UWP?
Taqtile
@mark_grossnickle
Set UWP
You are referencing a library that isn't supported in 4.5:
I assume that is because the SteamVR plugin may not support UWP?
Taqtile | https://forums.hololens.com/discussion/9263/unity-bulid-assembly-error | CC-MAIN-2021-49 | refinedweb | 173 | 60.41 |
The QGridView class provides an abstract base for fixed-size grids.
#include <qgridview.h>
Inherits QScrollView.
List of all member functions.
See also Abstract Widget Classes.
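Because QGridView is abstract, you use it by subclassing and implementing at least paintCell(). The following minimal sketch (class name and cell contents are illustrative; Qt 3 era API, not compiled here) draws a simple checkerboard:

```cpp
#include <qgridview.h>
#include <qpainter.h>

// Illustrative subclass: an 8x8 checkerboard of 32x32 pixel cells.
class CheckerView : public QGridView
{
public:
    CheckerView( QWidget *parent = 0, const char *name = 0 )
        : QGridView( parent, name )
    {
        setNumRows( 8 );
        setNumCols( 8 );
        setCellWidth( 32 );
        setCellHeight( 32 );
    }

protected:
    // Called for every cell that needs repainting; the painter is already
    // translated so that (0, 0) is the cell's top-left corner.
    virtual void paintCell( QPainter *p, int row, int col )
    {
        QColor fill = ( ( row + col ) % 2 ) ? Qt::gray : Qt::white;
        p->fillRect( cellRect(), fill );
    }
};
```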
The parent, name and widget flag, f, arguments are passed to the QScrollView constructor.
See also cellRect().
Returns the height of a grid row. See the "cellHeight" property for details.
Returns the geometry of a cell in a cell's coordinate system. This is a convenience function useful in paintCell(). It is equivalent to QRect( 0, 0, cellWidth(), cellHeight() ).
See also cellGeometry().
Returns the width of a grid column. See the "cellWidth" property for details.
Returns the size of the grid in pixels.
Returns the number of columns in the grid. See the "numCols" property for details.
Returns the number of rows in the grid. See the "numRows" property for details.
paintEmptyArea() is invoked by drawContents() to erase or fill unused areas.
Sets the height of a grid row. See the "cellHeight" property for details.
Sets the width of a grid column. See the "cellWidth" property for details.
Sets the number of columns in the grid. See the "numCols" property for details.
Sets the number of rows in the grid. See the "numRows" property for details.
See also QWidget::update().
This property holds the height of a grid row.
All rows in a grid view have the same height.
See also cellWidth.
Set this property's value with setCellHeight() and get this property's value with cellHeight().
This property holds the width of a grid column.
All columns in a grid view have the same width.
See also cellHeight.
Set this property's value with setCellWidth() and get this property's value with cellWidth().
This property holds the number of columns in the grid.
Set this property's value with setNumCols() and get this property's value with numCols().
See also numRows.
This property holds the number of rows in the grid.
Set this property's value with setNumRows() and get this property's value with numRows().
See also numCols.
This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved. | http://doc.trolltech.com/3.3/qgridview.html | crawl-002 | refinedweb | 352 | 72.42 |
Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:
Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1.
The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$:
1.1 Intuition
The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.
We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.
When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis.
Then we simply use $QFT^\dagger$ to convert this into the computational basis.
1.2 Mathematical Foundation
As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:
i. Setup: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$:$$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$
ii. Superposition: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register:$$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$
iii. Controlled Unitary Operations: We need to introduce the controlled unitary $C-U$ that applies the unitary operator $U$ on the target register only if its corresponding control bit is $|1\rangle$. Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means:$$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$
Applying all the $n$ controlled operations $C − U^{2^j}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:\begin{aligned} \psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\ & = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle \end{aligned}
where $k$ denotes the integer representation of n-bit binary numbers.
iv. Inverse Fourier Transform: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on Quantum Fourier Transform and its Qiskit Implementation. Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as$$ QFT\vert x \rangle = \frac{1}{2^{\frac{n}{2}}}\sum_{y=0}^{2^n - 1} e^{2\pi i \frac{xy}{2^n}} \vert y \rangle $$
Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step 2 above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the ancilla register. Doing so, we find$$ \vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle $$
v. Measurement: The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring in the computational basis gives the phase in the ancilla register with high probability:$$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$
For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1].
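The derivation above can be checked numerically with plain NumPy (no quantum SDK required). We build the counting-register state from step iii for the illustrative choice $n = 3$, $\theta = 1/8$, apply the inverse-QFT matrix, and confirm that all of the probability lands on $x = 2^n\theta = 1$:

```python
import numpy as np

n, theta = 3, 1 / 8   # illustrative values: 3 counting qubits, T-gate phase

# Counting register after the controlled-U stage (step iii):
# (1 / 2^(n/2)) * sum_k exp(2*pi*i*theta*k) |k>
k = np.arange(2**n)
psi2 = np.exp(2j * np.pi * theta * k) / np.sqrt(2**n)

# Inverse QFT as a matrix: entries exp(-2*pi*i*x*k / 2^n) / 2^(n/2)
iqft = np.exp(-2j * np.pi * np.outer(k, k) / 2**n) / np.sqrt(2**n)

probs = np.abs(iqft @ psi2) ** 2
print(np.argmax(probs))        # 1, i.e. theta = 1 / 2**3 = 0.125
print(probs.max().round(6))    # 1.0: exact because 2**n * theta is an integer
```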
Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:$$ T|1\rangle = \begin{bmatrix} 1 & 0\\ 0 & e^\frac{i\pi}{4}\\ \end{bmatrix} \begin{bmatrix} 0\\ 1\\ \end{bmatrix} = e^\frac{i\pi}{4}|1\rangle $$
Since QPE will give us $\theta$ where:$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$
We expect to find:$$\theta = \frac{1}{8}$$
In this example we will use three qubits and obtain an exact result (not an estimation!)
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math

# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute

# import basic plot tools
from qiskit.visualization import plot_histogram
Now, set up the quantum circuit. We will use four qubits -- qubits 0 to 2 as counting qubits, and qubit 3 as the eigenstate of the unitary operator ($T$).
We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
Next, we apply Hadamard gates to the counting qubits:
for qubit in range(3):
    qpe.h(qubit)
qpe.draw()
Next we perform the controlled unitary operations. Remember: Qiskit orders its qubits the opposite way round to the image above.
repetitions = 1
for counting_qubit in range(3):
    for i in range(repetitions):
        qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
    repetitions *= 2
qpe.draw()
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
def qft_dagger(circ, n):
    """n-qubit QFTdagger the first n qubits in circ"""
    # Don't forget the Swaps!
    for qubit in range(n//2):
        circ.swap(qubit, n-qubit-1)
    for j in range(n):
        for m in range(j):
            circ.cu1(-math.pi/float(2**(j-m)), m, j)
        circ.h(j)
We then measure the counting register:
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
    qpe.measure(n,n)
qpe.draw()
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
We see we get one result (001) with certainty, which translates to the decimal 1. We now need to divide our result (1) by $2^n$ to get $\theta$:

$$ \theta = \frac{1}{2^3} = 0.125 $$
This is exactly the result we expected!
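Spelled out as code, that post-processing step is tiny; the counts dictionary below simply mirrors the simulator output shown above:

```python
counts = {'001': 2048}   # mirrors the simulator output above: all shots on '001'
n = 3                    # number of counting qubits

measured = max(counts, key=counts.get)   # most frequent bitstring
theta = int(measured, 2) / 2**n          # decimal value divided by 2^n

print(measured, '->', theta)   # 001 -> 0.125
```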
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)

# Apply H-Gates to counting qubits:
for qubit in range(3):
    qpe2.h(qubit)

# Prepare our eigenstate |psi>:
qpe2.x(3)

# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
    for i in range(repetitions):
        qpe2.cu1(angle, counting_qubit, 3);
    repetitions *= 2

# Do the inverse QFT:
qft_dagger(qpe2, 3)

# Measure of course!
for n in range(3):
    qpe2.measure(n,n)

qpe2.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are 010(bin) = 2(dec) and 011(bin) = 3(dec). These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision.
3.2 The Solution
To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)

# Apply H-Gates to counting qubits:
for qubit in range(5):
    qpe3.h(qubit)

# Prepare our eigenstate |psi>:
qpe3.x(5)

# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
    for i in range(repetitions):
        qpe3.cu1(angle, counting_qubit, 5);
    repetitions *= 2

# Do the inverse QFT:
qft_dagger(qpe3, 5)

# Measure of course!
qpe3.barrier()
for n in range(5):
    qpe3.measure(n,n)

qpe3.draw()
### Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
The two most likely measurements are now 01011 (decimal 11) and 01010 (decimal 10). Measuring these results would tell us $\theta$ is:

$$ \theta = \frac{11}{2^5} = 0.34375 \quad \text{or} \quad \theta = \frac{10}{2^5} = 0.3125 $$
These two results differ from $\frac{1}{3}$ by 3% and 6% respectively. A much better precision!
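The pattern generalizes: an $n$-bit counting register can only encode multiples of $1/2^n$, so the best possible estimate is the nearest such multiple. A quick check of the resolution for a few register sizes:

```python
theta = 1 / 3
estimates = {}

for n in (3, 5, 8):
    best = round(theta * 2**n) / 2**n   # closest value an n-bit register encodes
    err = abs(best - theta) / theta     # relative error of that best estimate
    estimates[n] = best
    print(n, best, f"{err:.1%}")

# 3 0.375 12.5%
# 5 0.34375 3.1%
# 8 0.33203125 0.4%
```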
qpe.draw()
# Load our saved IBMQ accounts and get the least busy backend device with less than or equal to n qubits
IBMQ.load_account()
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend('ibmq_vigo')

# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=2048, optimization_level=3)
job_monitor(job)
Job Status: job has successfully run
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
We can hopefully see that the most likely result is 001, which is the result we would expect from the simulator. Unlike the simulator, there is a probability of measuring something other than 001; this is due to noise and gate errors in the quantum computer.
6. Looking Forward
The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don't know $\theta$, and for which learning $\theta$ can tell us something very useful (most famously, how to factor a number!)
Package Details: abraca-git 0.8.2-2
Dependencies (6)
- libgee (libgee-git)
- xmms2 (xmms2-git)
- git (git-git) (make)
- waf (waf-git) (make)
- vala>=0.24 (vala0.26, vala-git) (make)
Required by (0)
Sources (2)
Latest Comments
XZS commented on 2017-01-04 08:56
philb38 commented on 2016-12-17 13:16
I've got an error compiling this package:

src/abraca-application.c:10:26: fatal error: build-config.h: No such file or directory
#include <build-config.h>
^
compilation terminated.

although the file is present in the build directory Waf is in.

Anybody's help is welcome :)
XZS commented on 2014-06-22 17:10
It now uses archlinux' distributed waf instead of the one included and should build again.
mandog commented on 2014-06-20 13:41
This package does not build does not get past the download stage please fix
XZS commented on 2013-12-06 11:21
Last time I checked, it worked with neither libgee nor libgee06. Now it does. Thanks for the notice.
vbmithr commented on 2013-12-05 18:59
Please update the libgee dependency to libgee06
XZS commented on 2013-04-12 12:10
Resolved the subzero problem by pulling it in as a git submodule. A separate package would probably be more clean, but I think the wscripts tie it in too deeply to be worth the effort.
Removed waf from the dependencies again. All necessary waf is completely included in the project.
TrialnError commented on 2013-04-05 23:11
Updated the PKGBuild for pacman4.1
I also updated the installing procedure (from scons to waf)
But there remains a problem with vala/subzero
losinggeneration commented on 2012-11-09 16:20
Adopted.
Changes: Fixed the git checkout (updated it from PKGBUILD-git.proto), split build/package functions, use a build directory, updated description (from GTK2->GTK3), added install file to update icon cache and desktop database, & depend on hicolor icon theme.
Other thoughts: is the build-time dependency on scons needed? Basically, I don't know if the scons in the repository can function without any installed scons.
hi117 commented on 2012-10-16 20:44
Here is a diff of the changes I made
10c10
< depends=('gtk3' 'libgee' 'xmms2')
---
> depends=('gtk2' 'libgee' 'xmms2')
15c15
< _gitroot='git://github.com/Abraca/Abraca.git'
---
> _gitroot='git://git.xmms.se/xmms2/abraca.git'
Anonymous comment on 2012-07-04 20:46
the project moved to github:
Anonymous comment on 2010-06-15 02:54
losinggeneration
Nice catch, thanks. Missed it when i took over package I guess.
losinggeneration commented on 2010-06-14 15:47
git-pull should be changed to git pull
The error was introduced due to the update to waf version 1.9. I included a patch to fix it.
Thank you for the notice. | https://aur.archlinux.org/packages/abraca-git/?comments=all | CC-MAIN-2018-13 | refinedweb | 482 | 65.12 |
This file documents the revision history for Perl extension Gantry 3.64 Wed Jan 13 09:30:55 PST 2010 - fix session plugin test, reported by Andreas Koenig via RT 3.63 Wed Dec 05 12:46:00 CDT 2009 - Fixed perl version in Build.PL to work with newer Module::Build. Thanks to Matt We...lol... sorry... almost made it through without laughing. 3.62 Wed Dec 02 10:36:00 CDT 2009 - Added plugin to handle shibboleth authentication. 3.61 Thu Oct 15 14:13:00 CDT 2009 - Added an Engine method that returns whether the current request is being served by an SSL-enabled host. 3.60 Wed Oct 14 17:03:00 CDT 2009 - Added ability to test cache set methods. This is useful for cases where you need to ensure that cache sets/gets are working correctly. For example, Cache::Memcached does not return any errors if it cannot connect to the specified memcached server(s). - Added aditional parameter, expires, to the cache_set method in the Gantry::Plugins::Cache::Memcached module. Because the underlying memcached module supports setting individual expire times for items in the case, you can now specificy them. If a expire time is not set for the item, then the global cache_expires value is used (this maintains backwards-compatibility). 3.59 Fri July 10 17:45:00 CDT 2009 - Added gantry_secret to serve as a default secret key so you don't have to specify different keys for the authcookie, session, cookie check etc. - Modify ap_req / apache_request so that it never recreates the Apache::Request object. libapreq2 does not like it when you do that and specify a max post size. It will throw a conflicting information error. - Removed code that was setting a test cookie. The code has been replaced by a cookie check plugin. - Preserve all parameters when redirecting during login so that the original request can be fulfilled after login. Also encrypt url during redirect to conceal any posted values you don't want appearing in the url. - Added methods to url encode / decode a specified value. 
- Modify plugin handling. Plugin method can now be imported into the symbol table of the application using the specified plugin namespace. This allows you to use otherwise incompatible plugins within the same server. - Allow import lists to be added to plugins so that the list of symbols imported can be controlled. - Removed sort of plugin callbacks. It was sorting sub references which basically equated to completely random ordering. - Added PRE_ACTION and POST_ACTION states where plugin callbacks can be registered. - Remove unneeded apreq parse call during file uploads. - Update form templates to decode unsafe html characters before sending them through HTML::SuperForm as it takes care of encoding the unsafe characters. 3.58 Thurs July 02 16:33:00 CDT 2009 - Fix AuthCookie.pm so that if get_auth_schema hasn't been imported then it falls back to using get_schema. 3.57 Weds July 01 15:33:00 CDT 2009 - Call find_redirect after calling form_add/form_edit. This allows the results of form methods to influence the final redirect location if needed. 3.56 Thurs June 25 16:09:00 CDT 2009 - Removed Gantry::Plugins::Session since its being maintained externally now. - Also removed session.tt template. 3.55 Tues June 22 15:24:00 CDT 2009 - Update AuthCookie to first check for matching user/pass then if that fails check for the user name by itself. This is needed so that encrypted passwords can be used. - Add reload_config option to Gantry::Conf to force config file to reload. Add reload_config option in 02flat.t test to fix test failures do to config file not reloading. - Modified Gantry::Conf to be able to parse its config file on import. Also modified MP20/MP13 db connection helpers to use cached Gantry::Conf. This fixes a performance issue where the entire gantry config could be reloaded up to 6 times per request. - Fixed get_auth_conn_info so that if gantry conf is being used but the auth db info is in the apache conf everything still works. 
- Fixed bug in Build.pm where files without an extension would not be installed and where directories containing a . would cause the build to die. - Trim leading / trailing whitespace from incoming parameters by default. - Added sfbb/form.tt template. - Modified CRUD.pm plugin to allow a default template to be specified in the config. The config option is default_form_template. - Fixed form.tt so that you can specify your own parameters in an onchange event. - Skip querying of foreign tables unless a foreign key is actually used on the form. This gives a huge speed increase when dealing with foreign tables with many rows. 3.54 Sun Mar 22 11:48:54 PDT 2009 - build CPAN dist - All request parameter values will be filtered by default from now on. This is done to prevent XSS vulnerabilities. The filtering is pretty simple, as it is just translating angle brackets, < and >, and quotes, " and ', into their named-entity equivalents <\ and >. If for some an application requires that all request parameter values be unfiltered (which is not safe, and opens the application up to XSS vulnerabilities), then they can specify a config option named 'unfiltered_params'. This value can be set to either '1' or 'on'. Also, if access to a request parameter is needed unfiltered, then the uf_params() method may be used. - Explicitly close auth database handle in Authorization and Authentication handlers on failure so that connections aren't left hanging around when Apache::DBI isn't being used. - Explicitly close database handles in Gantry cleanup() method so that connections aren't left hanging around when Apache::DBI isn't being used. - Change use of DBI->connect_cached() in Gantry::Utils::ModelHelper to DBI->connect() to prevent "idle in transaction" cases when Apache::DBI isn't being used and auto commit is turned off. 
What ends up happening is when a rollback or commit isn't explicitly issued, the transaction is left open on the database server in a "idle in transaction" state, even though the request has finished being processed. - Enhance results.tt - Add no_options configuration item to suppress header and row options. - Add options for adding a pre header row. - Allow more customizing via classes. - Add ability to specifiy a plugin directory. - Documentation cleanup. - Allow a custom template to be specified for CRUD delete action. - Allow foreign_display_rows to be constrained. - Don't read all rows into memory before processing. - Fix bug in is_date. check_date is not a class method. - Fix a bug with cgi engine. Post params were not being included as part of the cgi object because $self->get_post_body() gets the post body from the cgi object, which hadn't been created yet. - Fixed a bug with form.tt where input_value wasn't being reset. - Changed form.tt to not output a value attribute when the field has no value. - Changed FormMunger to throw an error when an invalid form field is specified. This prevents the form from getting corrupted. - Add option to allow form validation errors to be grouped together by the field. This allows for more versbose error reporting. - Skip form validation for non post requests. The results were being discarded anyways. - Add *.* to list of web_dirs so that files in the top level directory are also installed. - Fix bug in MP13.pm where adding headers would overwrite the previous header instead of appending. - update mod_perl2 test -- skip tests if Apache2::Request does not exist - Updated the documentation, add an external exception handler. Works correctly with the standalone server. - Added some experimental code to throw exceptions and catching them in a state machine. This would allow redirects and such to become exceptions, which could then be handled locally or within the handler. 
3.53 Sun Jul 6 11:50:58 PDT 2008 - modify Gantry::Server and Gantry::Engine::CGI to work with the new state machines. - add Gantry::State::Simple state machine - replace unchecked evals around plugin callback execution (so that errors aren't thrown away) with a conditional to see if the arrayref is even defined. - moved engine_init() and post_engine_init plugin callback execution out of init() and into handler() above the init() call so that $r or CGI is available to any pre_init callback methods, and so that any errors that occur within those callbacks are caught correctly with cast_custom_error(). 3.52 - add relocate_permanently() to support 301 redirects (Stas Bekman) - patch Gantry.pm - add a simple sort to the plugin calls - add patches for Cache plugins - add Gantry::Utils::CRON - add setter/getter for Gantry::Utils::Crypt errors - explicitly call the MP20 engine import for the Auth modules - add handle_request_test_post for testing 3.51 Tue Aug 28 11:42:48 CDT 2007 - fix bug in Gantry::Server::handle_request_test - fix some warning bugs in form.tt - modify AuthCookie plugin to support testing with a logged in user - add test method to gantry - add multiselect option to Gantry::Utils::Threeway - add serialize_params base method. 3.50 Tue Jun 19 14:04:10 CDT 2007 - fixes to Gantry::Plugins::Session (Kevin Esteb) - add cache_purge option to Caching pluings (Kevin Esteb) - fix Gantry::Utils::DBIxClass util. Calling mk_classdata instead of mk_classaccessor. - pass orm rom to AutoCRUD callbacks to text_descr. can used to produce a better description on the edit or delete page. 
- add ability to override the file name when using the write_file method in Gantry::Utils::CRUDHelp - AuthCookie bug, fix root page login redirection - modify AuthCookie login failure error message - fix post_max in the CGI engine - add Gantry::Utils::Captcha - add Gantry::Utils::Crypt 3.49 Mon Apr 30 16:45:07 CDT 2007 - add datetime_now db-agnostic wrapper to Gantry::Utils::DBIxClass - add post_body parsing to Plugins::SOAP::Doc - modify Gantry handler to work with the PageCache plugin - add Gantry::Plugins::PageCache - modify cache plugins - tie Gantry::Conf into Gantry cache. Specifying a cache plugin will enable caching for Gantry::Conf - add engine_cycle method to the Gantry object. - add javascript helper for YUI popup calendaring - factor out login, logout routines in AuthCookie. AuthCookie now provides login/logout methods that can be called at anytime from anywhere in the app. (think registration form) - add add row level error messages to moxie form.tt - modify Gantry handler to store action - add js_root and js_rootp - add search cpan link to pod/doc viewing module - patch uninitialized warning in CGI engine - replaced dojo with jquery javascript libraries for the Gantry samples. - patch Gantry::Utils::DBConnHelper::MP20, the server starting test does not exist in mod_perl 2. Test was useless. - fixed file upload bug, the full extension for .tar.gz was not being matched - modified mod_perl2 test - fix link in doc - add patch from Stas for fix the permissions problem with Init.pm - fix gantry base_root bug -- base_root is now add just before template object creation. - add JSON to the requires list, JSON is not integrated into the CRUD do main method. 
- added row level permissions by logged in user - added consume_post_body and get_post_body to engines, plugins can use this to preempt normal form parsing of the post body - changed soap plugins to use consume_post_body and get_post_body, making them engine agnostic 3.48 Wed Feb 21 13:52:00 CST 2007 - add Gantry::Utils::Threeway - add save and add another to CRUD and AutoCRUD. To get it from bigtop set 'save_and_add_another' to a true value in your form methods 'extra_keys'. - fix crud so that if the form field is display then don't validate - add hints to form.tt - Revised tutorial to reflect current preferred practices. This involved switching to DBIx::Class and discussing tentmaker. - Modify MP13 and MP20 engine to handle multiple entries for form parameters. Form parameters that contain multiple values will be joined with nulls ( "\0" ). This is exactly how CGI.pm behaves. - changed sample_wrapper stylesheet example to use doc_rootp so it can work with stand alone server. - Added FormMunger util to massage form fields needed by form.tt. - Added no_cancel flag to form.tt, set it if you don't want the Cancel button. In bigtop, set this as one of your form methods 'extra_keys'. - Added onchange to form.tt. Give it the name of a javascript function to trigger when a select list changes (type must be select). You can set this in bigtop with the new html_form_onchange statement. 3.47 Mon Jan 22 09:11:06 CST 2007 - Added Gantry::Plugins::Session - Added Ganty SOAP Support - AuthCookie will not redirect to a full url if the 'url' param is passed. i.e. <input type="hidden" name="url" value="" /> - added test for AjaxFORM - Added auth_cookie_name and auth_cookie_domain as optional conf based accessors for AuthCookie plugin. - Added log_error method to CGI engine, so all engines have it. - Added AJAX form plugin. - Added Session plugin with pluggable caching storage mechanism. 
(thanks to Kevin Esteb for all the plugins in this release except SOAP) - Added SOAP plugin (mod_perl 2 only). - Changed how plugin namespaces work. Use the new -PluginNamespace wherever you use -Engine, follow that with the plugins you want. 3.46 Wed Dec 20 12:18:43 CST 2006 - Fixed uninitialized warnings in the TT template engine. - Fixed CRUD plugin's validator callback use in edit. - Added Gantry::Build to simplify Build.PLs that need to install web content. - Added Gantry::Plugins::AjaxCRUD, thanks to Kevin Esteb for submitting it. 3.45 Mon Dec 11 10:10:13 CST 2006 - Arranged documenation - Fixed namespace issue with Gantry::Plugins::AuthCookie 3.44 Tue Dec 5 09:42:01 CST 2006 - Added Gantry::Utils::FormErrors to manifest. 3.43 Tue Dec 5 09:09:00 CST 2006 - moved uri, location, path_info, method above the init callback - Param cleaning for CRUD and AutoCRUD no longer makes 0 ints null. But it does make blank ints null. - DBIx::Class autocrud helper now uses transactions so you don't need to set AutoCommit => 1 to use it. - added Gantry::Plugins::AuthCookie ( cookie based auth ). Supports DBIx::Class user models and Apache htpasswd files. - added Template::Plugin::GantryAuthCookie for decrypting auth cookies from Apache::Templates. - added plugin functionality to various phases of the request. Currently you can register the plugin and its callback at the init, pre_init, post_init, pre_process and post_process phase. See Gantry::Plugins::AuthCookie for an example. - Gantry::Plugins::CRUD has a new callback: validator in case you don't want or can't use Data::FormValidator. 3.42 Thu Oct 19 10:05:57 CDT 2006 - Added Samples. - Missing and Invalid fields on form.tt now have their labels in the error message instead of their names. - Gantry.pm's set_auth_model now requires the model for you. - CRUD add, edit, and delete methods now allow their invokers to set $self->stash->view->title to override default window and table titles. 
- Added support for generic errors on form.tt (not just missing or invalid). - Added field type 'display' to form.tt for fields which should be displayed, but not in an input element. - Added content key to form.tt's fields hashes. This helps when you have a Question #. Text label for input elements, but don't want all that text reported in errors. - Added file upload. 3.41 Wed Oct 4 14:58:22 CDT 2006 - Corrected response header handling in CGI engine. - Corrected Gantry::Conf so it won't be tested if it can't be used. - Added main.tt to the MANIFEST. Bigtop uses it for defaul pages. - Tried again to make prompts visible during CPAN shell install. 3.40 Fri Sep 22 08:14:15 CDT 2006 - Restored and expanded warning avoidance from Gantry.pm accessors. If you ask for a path, you no longer get undef. At least you get ''. 3.39 Thu Sep 21 14:20:17 CDT 2006 - switched plugin, engine loading from string eval to require - added cookie support to the cgi engine. - bug fix to default form.tt - form didn't always select the correct default/previous value in the select lists. - improved Gantry::Conf so gantry.d includes work and instances can conf directly in the gantry.conf or gantry.d instance block. - added Net::Server support to stand alone server (it's slow) - suppressed some annoying undefined value warnings. 3.38 Wed Aug 16 16:26:03 CDT 2006 - Gantry supports DBIx::Class - added tests for auth handlers and models using SQLite - added style to auth pages - add extra testing features to the standalone server. 3.36 Thu Aug 10 09:25:12 CDT 2006 - Took another crack at DBIx::Class 3.35 Never installed - Corrected DBIx::Class interactions 3.34 Tue Aug 8 13:27:57 CDT 2006 - Added a default Gantry::Init for use during testing. 3.33 Tue Aug 8 10:39:14 CDT 2006 - Modify Build.PL to write the Gantry::Init file during the install. This module contains the install options that can be referenced later. 
- Added action (cancel or submit) and user action (add, edit, or delete) to the end of the parameter list for the redirect callback in CRUD. - Added create method to Gantry::Utils::DBIxClass, calling it (instead of the one provided by DBIx::Class::ResultSet) returns a row with a valid id, even when the id is generated by a sequence within postgres. 3.32 Mon Jul 17 13:49:33 CDT 2006 - added Perlbal client ip fix - Removed ident from essential column list for auth_users table. This makes the model classes compatible with prior auth schemes which did not have that column. - Changed the Apache basic auth scheme slightly to accomodate apps which rely on the Class::DBI. Users now append either Regular or CDBI to all module names used for auth (except Access which never used auth databases). - Added stringify, and an overload to call it, for the DBIxClass Util module. Without it, form.tt can't show selected items foreign keys point to. 3.31 Jun 2 14:46:36 CDT 2006 - Doc updates to Gantry.pm, Tutorial and FAQ - Added support for DBIx::Class. (AutoCRUD still only works for ORMs with Class::DBI's API.) - Converted Gantry::Control::C::* auth modules to use Gantry native models. - Cleaned up (that is reduced) the prerequist list in Build.PL. - Cleaned up Web Directory questions during ./Build install. - Corrected problem with Gantry::Conf parameter fishing in the CGI engine. 3.30 Wed May 10 13:01:44 CDT 2006 - Corrected bug in CGI engine which made the constructor demand a valid hash as an argment. Now an empty one is supplied by default, making it work as the QuickStart doc says it should. Thanks to Краси Беров for pointing this out. 3.29 - Restored access to dircopy from File::Copy::Recursive to custom install code. 3.27-8 Tue Apr 18 15:29:15 CDT 2006 - Tue Apr 18 15:45:32 CDT 2006 - Made failure due to absence of File::Copy::Recursive gentler. 
3.26 Mon Apr 17 14:25:48 CDT 2006 - Updates to custom error page 3.25 Fri Apr 7 12:53:20 CDT 2006 - Added Test 3.24 Thu Apr 6 15:12:12 CDT 2006 - Rearranged/Added more tests 3.23 Thu Apr 6 08:44:44 CDT 2006 - Fixes to the db connection methods 3.22 Thu Mar 30 12:41:36 CST 200 - Added test method to the Gantry server to be used for to run a test on a Gantry application 3.21 Wed Mar 29 12:41:36 CST 200 - Updates to documentation 3.20 - Added a stand alone web server, Gantry::Server - Updates to documentation. 3.19 - Modified Build.PL. Now copies all files in the root folder. - Updated the Gantry::Conf api so you can get location level config. - Corrected the mechanism for using Gantry::Conf so mulitple instances in the same server can really have different conf. 3.18 - Documentation updates 3.17 - tied Gantry::Conf to the framework - added initial Gantry::Conf code and stubs for the future - Updated constraint handling in AutoCRUD and CRUD to use the current Data::FormValidator constraint_methods key. - Updated Gantry::Conf to use ConfigureVia Method insteadn of ConfigureViaMethod which allows for multiple ConfigureVia statements in the same instance block and easier dispatching. 
3.16 - added PODViewer - CGI Engine fix, casting error to wrong method - fixed prompt problem when installing within CPAN shell 3.15 Thu Feb 16 09:22:03 CST 2006 - add paging.ttc ( paging for the results.tt ) - added cgi parameter parsing code - Added lots of Docs modules - Refactored handler and init so that all conf fishing and other engine specific code is actually in the engines not Gantry.pm 3.14 Wed Feb 1 08:09:54 CST 2006 - modified engines - added CGI::Simple object to all - improved cgi error messages - restored connection info scheme having found the bug (mod_perl helpers were still caching the conn_info data, so everyone was going to the first database hit) - renamed Gantry::Utils::ModelHelper to Gantry::Utils::DBConnHelper to better reflect what it helps with (renamed all it subclasses too) - added Gantry::Utils::Model as a Class::DBI replacement - added Gantry::Utils::ModelHelper to provide db_Main and other useful methods as mixins to Gantry::Utils::Model::* modules. Note that no production app uses Gantry::Utils::Model yet. - removed accessors in Gantry.pm for dbconn, dbuser, and dbpass 3.13 Fri Jan 20 13:27:21 CST 2006 - Added script auth connection handling to db_Main in AuthCDBI util. - Added doc stubs in modules. 3.12 Thu Jan 19 10:44:31 CST 2006 - Added retreive_all_for_main_listing to Gantry::Utils::AuthCDBI which we needed to do since we separated Malcolm's auth models from the regular ones. 3.09 - 3.11 - AuthCDBI, CBDI testing 3.08 - Add Access auth module ( Gantry::Control::C::Access ) provides IP authentication for mod_perl. - updates to pod - updates to Template Default - Changed connection info scheme for regular and cgi scripts in Gantry::Utils::CDBI. 
This required moving connection info fishing from init_cgi in Gantry.pm to Gantry::Engine::CGI->new - Expanded the connection info scheme change to all engine types by introducting Gantry::Utils::ModelHelper as a (mostly) abstract base class from which a new module for each engine inherits - Expanded connection info scheme to the auth db connecitons. CGI may not be working correctly under the new scheme. 3.07 Mon Dec 26 12:19:05 CST 2005 - Add CGI Engine. ( Gantry::Engine::CGI ); Gantry apps can now run under CGI (including Fast-CGI) 3.06 Tue Dec 20 12:47:47 CST 2005 - Revised Build.PL so that it complains more quietly if File::Recursive::Copy is not installed. - Revised use test section for mod_perl 1.9x so it checks for Apache::Request before trying to use the engine, yielding a cleaner error message. - Change auth front-end modules to use $self->location for redirect_to_main instead of $self->apps_rootp . '/blah' 3.05 Thu Dec 15 12:08:42 CST 2005 - template toolkit object is now unique per location block. - converted front-end auth modules to use Gantry CRUD - added editor_init.ttc ( default tinymce editor componet ) - modified form.tt to handle the editor_init.ttc componet 3.04 Fri Dec 9 17:04:05 CST 2005 - Corrected template error which gave very small widths to text input fields by default (hint: TT uses size as a pseudo-method) 3.03 Fri Dec 9 14:04:05 CST 2005 - Minor template revisions 3.02 Wed Dec 7 16:07:36 CST 2005 - Removed required asterisks from checkbox fields 3.01 Tue Dec 6 13:31:16 CST 2005 - Improved various templates (made form tables wider by default) - Added optional javascript based html editor to form.tt 3.0 Thu Dec 1 13:49:28 CST 2005 - Corrected templates so their html is more standards compliant. - Increased version to indicate that we are using this third generation of our framework in production for multiple apps. 0.26 Wed Nov 16 16:30:39 CST 2005 - Changed Gantry::user_row to return nothing when it can't find a user. 
0.25 Wed Nov 16 11:46:53 CST 2005 - Added Gantry::Plugins::CRUD for more control. - Corrected relocate so you have more control of where it takes you from successful AutoCRUD actions. - Corrected error which had boolean select lists always showing their default values instead of showing those only when the database didn't have a value. 0.24 Thu Nov 3 11:21:28 CST 2005 - Fixed Gantry::Utils::CDBI module to work with mp1 and 2. - Modified Build.PL 0.23 Mon Oct 24 15:46:09 CDT 2005 - Add a Class::DBI base module to Gantry. fixes the problem regarging the clashing database handles. - added the log_error method 0.22 Mon Oct 17 20:12:30 EDT 2005 - Switch accessors to methods calls. 0.21 Wed Aug 24 11:00:41 CDT 2005 - Name change to Gantry 0.20 Mon Aug 8 11:30:08 CDT 2005 - Added Gantry::Conrol to framework (Authen, Authz, PageBased) - Added generic templates for mvc processing 0.18 Sat Jun 11 21:08:58 CDT 2005 - Add plugin ablility to frame work - Added compatability for mod_perl 1, 1.99, 2.0 - Add Template::Toolkit support 0.17 Sat Jun 11 21:08:58 CDT 2005 - Branched from KrKit version 0.16 ( ) | https://metacpan.org/changes/distribution/Gantry | CC-MAIN-2019-43 | refinedweb | 4,198 | 66.33 |
Subject: Re: [boost] [review][assign] Formal review of Assign v2 ongoing
From: er (er.ci.2020_at_[hidden])
Date: 2011-06-21 17:05:58
> About your other comments:
> - operator| has a similar meaning in Boost.Range under adaptors.
> - Feel free to suggest another prefix than 'do'.
> - Dot is borrowed from Boost.Assign (1.0).
Did you mean dot or %? The dot is a small price to pay for alternating
between various ways to insert elements in a container, within one
statement:
put( cont )( x, y, z ).for_each( range1 )( a, b ).for_each( range2 );
The answer to "[Is it] just a quest to type less when using standard
containers?" is yes, as illustrated just above, but it is quite a broad
definition of standard containers. Version 2.0 provides macros to
broaden it further (which was used to support Boost.MultiArray, for
example).
As for why such a library should exist, it depends on the degree to
which you value the syntax above, which is very similar to Boost.Assign
(1.0), in this case.
You say "I ordinarily only initialize containers to literals when
writing unit tests.". In this case, I think you are right that you can't
beat C++0x initializer lists. But, you still may need to fill a
container, as above. And also, consider the cases below:
#include <vector>
#include <queue>
#include <string>
#include <tuple>
#include <boost/assign/v2/include/csv_deque_ext.hpp>
int main()
{
typedef std::string s_;
{
typedef std::tuple<s_, int> t_;
typedef std::vector<t_> v_;
v_ v1 = {
t_( "a", 1 ),
t_( "b", 2 ),
t_( "c", 3 ),
t_( "d", 4 ),
t_( "e", 5 )
};
using namespace boost::assign::v2;
v_ v2 = converter(
csv_deque<t_, 2>( "a", 1, "b", 2, "c", 3, "d", 4, "e", 5)
);
}
{
typedef std::queue<int> q_;
// Not aware that an initializer list works here
using namespace boost::assign::v2;
q_ q = converter(
csv_deque<int, 1>( 1, 2, 3, 4, 5 )
);
}
return 0;
}
> - Prefix _ is reserved for const objects (not sure the proper word
for it)
And I think this convention appears elsewhere in Boost, such as
Boost.Parameter.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2011/06/183014.php | CC-MAIN-2019-51 | refinedweb | 370 | 65.42 |
#include <map>
#include "lex_string.h"
#include "mem_root_array.h"
#include "sql/dd/string_type.h"
Go to the source code of this file.
Type for the list of Sql_check_constraint_share elements.
Type for the list of Sql_check_constraint_spec elements.
Type for the list of Sql_table_check_constraint elements.
Method to check if server is a slave server and master server is on a version not supporting check constraints feature.
Check constraint support is introduced in server version 80016.
Method is used by methods prepare_check_constraints_for_create() and prepare_check_constraints_for_alter(). Check constraints are not prepared (and specification list is cleared) when this method returns to true. In older versions, check constraint syntax was supported but check constraint feature was not supported. So if master is on older version and slave gets event with check constraint syntax then on slave supporting check constraint, query is parsed but during prepare time the specifications are ignored for the statement(event). | https://dev.mysql.com/doc/dev/mysql-server/latest/sql__check__constraint_8h.html | CC-MAIN-2019-35 | refinedweb | 147 | 60.51 |
While struggling with some misunderstandings of the OCaml module system, I decided to put my interpreter on temporary hold to try some Haskell. It's a language that's been on my radar and been a goal to learn. A combination of Haskell's terse notation (it turns out
`sum . take 10 $ (*) <$> [2,4,6] <*> [1,2,3,4,5]` is a completely legitimate, if daunting, line of code) and the new terminology (cue monad tutorial) caused some delay. But after deciding to commit, and with OCaml's slightly friendlier entry to FP behind me, I felt capable of getting my hands dirty. I felt particularly inspired after seeing code written for GitHub's Semantic project. At some point, I'd like to work on some meaty, productive Haskell code, and there's no place to start like starting.
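For what it's worth, that one-liner gets less daunting when evaluated in stages. Here's a small walkthrough of the exact expression above:

```haskell
-- Unpacking: sum . take 10 $ (*) <$> [2,4,6] <*> [1,2,3,4,5]
main :: IO ()
main = do
  -- (*) <$> [2,4,6] partially applies (*), giving [(2*), (4*), (6*)];
  -- <*> then applies each of those functions to every element of [1..5]:
  print ((*) <$> [2,4,6] <*> [1,2,3,4,5])
  -- [2,4,6,8,10,4,8,12,16,20,6,12,18,24,30]
  print (take 10 $ (*) <$> [2,4,6] <*> [1,2,3,4,5])
  -- [2,4,6,8,10,4,8,12,16,20]
  print (sum . take 10 $ (*) <$> [2,4,6] <*> [1,2,3,4,5])
  -- 90
```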
After reading a lot of Learn You a Haskell for Great Good! and watching some YouTube videos, I felt reasonably comfortable writing a small Tictactoe CLI game. I went with Stack as my build tool of choice.
Like OCaml's, Haskell's type system encourages domain modeling early. I decided to make my board a list of 9 elements, where each element could be either an X, an O, or empty. Since Haskell doesn't really have a null type, I decided to create both a `Move` and a `Cell`:

```haskell
import System.Environment
import Data.List

-- deriving (Eq) is needed later: verifyIsFree compares cells with (==)
data Move = X | O deriving (Eq)
data Cell = Occupied Move | Empty deriving (Eq)
```
We're taking `System.Environment` because we'll need some IO behavior, and `Data.List` for some future functions.
I could have made `Cell` a Maybe type, but chose a more descriptive way to express a cell. This way, I can keep track of the move to play as well as see what was in the cell. I also needed a way to render this board out. Since I'm using custom types, I needed to create instances of the Show typeclass.

```haskell
instance Show Move where
  show X = "X"
  show O = "O"

instance Show Cell where
  show (Occupied X) = "X"
  show (Occupied O) = "O"
  show Empty        = " "
```
I'm semi-positive I could have used `deriving (Show)` on my `Move` type, but that'll be for a later refactor. Today's primary goal was just writing code. My next plan was to get some board-rendering code up. I needed a function that simply took my board, my `[Cell]`, and output something pretty.

```haskell
renderRow :: [Cell] -> String
renderRow row = intercalate " | " $ fmap show row

dividingLine :: String
dividingLine = "----------"

renderBoard :: [Cell] -> IO ()
renderBoard board = do
  putStrLn $ renderRow firstRow
  putStrLn dividingLine
  putStrLn $ renderRow secondRow
  putStrLn dividingLine
  putStrLn $ renderRow thirdRow
  where
    firstRow  = take 3 board
    secondRow = drop 3 . take 6 $ board
    thirdRow  = drop 6 board
```
renderRow takes a list of cells and returns the readable version joined by pipes.
renderBoard just does some list-slicing to render three rows of 3 elements each. Since I'm writing to the console, I'll need to return an
IO (), the IO monad. Without getting too far into the weeds, I/O is considered a side-effect, and therefore Haskell forces you to wrap it in a monad.
If I were to call
renderBoard with a list of empty elements
[Empty, Empty, Empty, Empty, Empty, Empty, Empty, Empty, Empty] I would get a very pretty
  |   |  
----------
  |   |  
----------
  |   |  
My next goal was some idea of assignment. I needed to be able to take a
Move and a
[Cell] and return an updated board. There are a couple of rules to this
1) The selected cell must be within bounds.
2) The selected cell must be free.
Given this, I decided to simply create a map of input strings to List indices. Is it pretty? Nope. But it works fine for this case.
getBoardIndex :: String -> Maybe Int
getBoardIndex "A1" = Just 0
getBoardIndex "A2" = Just 1
getBoardIndex "A3" = Just 2
getBoardIndex "B1" = Just 3
getBoardIndex "B2" = Just 4
getBoardIndex "B3" = Just 5
getBoardIndex "C1" = Just 6
getBoardIndex "C2" = Just 7
getBoardIndex "C3" = Just 8
getBoardIndex _ = Nothing
Pattern matching in Haskell is a little more terse than in OCaml, in that I don't need a match statement. I simply write an equation for every possibility, similar to Elixir's matching. You'll also see I'm returning a
Maybe Int - I chose this because not only do I care whether a board index is real, but also whether it's free. That's two checks to chain, so I can use monadic binding, or the
>>= operator. For reference:
(>>=) :: m a -> (a -> m b) -> m b
What this says is "Give me a monad of some
a, and give me a function that turns some
a into a monad of
b and I'll return a monad of
b." If I have a
Maybe Int from
getBoardIndex and my function for "is that cell free to assign" takes an
Int and returns a
Maybe then I can use this binding.
data CellTransform = Success [Cell] | Fail String [Cell]

verifyIsFree :: [Cell] -> Int -> Maybe Int
verifyIsFree board ix = if board !! ix == Empty then Just ix else Nothing

assignCell :: String -> Move -> [Cell] -> CellTransform
assignCell location move board =
  case getBoardIndex location >>= verifyIsFree board of
    Nothing -> Fail "Invalid move" board
    Just i  -> Success ((take i board) ++ [Occupied move] ++ (drop (i+1) board))
You'll see this new
CellTransform type. I added a new type just to carry along an error message and an unmodified board when the move is invalid. So my
verifyIsFree takes a board and an index, and if the cell at that index is free, returns the index in a Maybe; otherwise it returns Nothing. Since I'm doing equality checks on a custom data type, I'll need to make sure that
Cell is also an instance of the Eq typeclass
instance Eq Cell where
  Occupied X == Occupied X = True
  Occupied O == Occupied O = True
  Empty == Empty = True
  _ == _ = False
This just sets my equality operator for all possible states of a
Cell.
Lastly, my actual game. I need my game to
1) Ask for input
2) Try to assign the cell
3a) If the cell is invalid, tell the user and let them pick again
3b) If the cell is valid, check for a winner
4a) If there's a winner, alert them and end the game
4b) If there's no winner, hand over to the next player
Let's get this coded out
playRound :: Move -> [Cell] -> IO ()
playRound move board = do
  putStrLn $ (show move) ++ " 's turn."
  putStrLn $ "Pick a cell from A1 to C3."
  renderBoard board
  putStr "\nInput: "
  cell <- getLine
  case assignCell cell move board of
    Fail err board -> do
      putStrLn err
      playRound move board
    Success newBoard -> do
      if isThereAWinner move newBoard
        then do
          putStrLn $ ("Winner! " ++ (show move) ++ " has won!")
          renderBoard newBoard
          return ()
        else playRound (nextMove move) newBoard
Since we're using I/O again, we need to return an IO Monad. That also gives us the
do notation benefits of some slightly more imperative-reading code.
You'll see some fake functions -
isThereAWinner and
nextMove move. We can code these out.
nextMove :: Move -> Move
nextMove X = O
nextMove O = X

isThereAWinner :: Move -> [Cell] -> Bool
isThereAWinner move board = or [
    -- check top row
    board !! 0 == (Occupied move) && board !! 1 == (Occupied move) && board !! 2 == (Occupied move),
    -- check middle row
    board !! 3 == (Occupied move) && board !! 4 == (Occupied move) && board !! 5 == (Occupied move),
    -- check bottom row
    board !! 6 == (Occupied move) && board !! 7 == (Occupied move) && board !! 8 == (Occupied move),
    -- check left column
    board !! 0 == (Occupied move) && board !! 3 == (Occupied move) && board !! 6 == (Occupied move),
    -- check middle column
    board !! 1 == (Occupied move) && board !! 4 == (Occupied move) && board !! 7 == (Occupied move),
    -- check right column
    board !! 2 == (Occupied move) && board !! 5 == (Occupied move) && board !! 8 == (Occupied move),
    -- check top left -> bottom right
    board !! 0 == (Occupied move) && board !! 4 == (Occupied move) && board !! 8 == (Occupied move),
    -- check bottom left -> top right
    board !! 6 == (Occupied move) && board !! 4 == (Occupied move) && board !! 2 == (Occupied move)
  ]
This is my least-favorite function here. It's readable with comments, but definitely isn't pleasant. I can't imagine changing this into a 5x5 tic-tac-toe board or something. But once we have this, we can create a
main function.
main :: IO ()
main = do
  putStrLn $ "The game is beginning."
  let newBoard = replicate 9 Empty
  playRound X newBoard
And we can build our
tictactoe.hs with
stack ghc tictactoe.hs and run
./tictactoe to play!
This was a fun experiment. I tried to avoid digging too deeply into monadic operators, the State monad, or advanced Haskell techniques. My primary focus was just to type code and try to get comfortable with the syntax. The compiler is pretty helpful, but not as explicit as Elm's compiler. Since my career goal is to get a job writing backend ML-family code (Scala, OCaml, Haskell, etc.), I'll keep on practicing. I'd love any project ideas. I might try to write a Lisp interpreter for a bigger, meatier project.
Discussion (3)
Very nice article, just two small tips:
1) You never need to write an Eq instance like that by hand; just derive it.
2) The win check can be written as a list comprehension, which additionally scales easily to bigger boards by changing n.
I am not completely sure if the code is correct as I hadn't had the chance to test it yet.
Thanks Jan! That list comprehension is really readable. I’ll be sure to play with that and practice. Coming from a largely-JS/Ruby background, list comprehensions are definitely something I have to further internalize.
Thanks for noting the Eq derivation as well. Actually after writing this, I re-read that chapter of Learn You A Haskell and realized the same. I’ll update my source and leave an edit in this article.
Thanks for the addition. There is a small typo in line 3, 'colums' should be 'columns'. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/nt591/writing-a-tictactoe-game-in-haskell-545e | CC-MAIN-2022-33 | refinedweb | 1,617 | 73.37 |
SOAP Scripting returning multiple values using getXMLFromString
I did see a discussion thread a while back saying that getXMLFromString works with multiple values.
Discussion: One to Many Data with XML Data Bank and Writable Data Source.
I have tried to apply the same concept in Groovy, but it doesn't seem to work. I am planning to put the data into a writable data source. I know splitting this into two separate method calls would work; I was just wondering if there is a better way than doing that.
Here's the script example:
import soaptest.api.*;
public activateRegression( input, context ) {
Regression = "TRUE"
myPath = "FALSE"
return SOAPUtil.getXMLFromString( [myPath, Regression ] )
}
Here's the error. It does not support an array list.
Error Message:
DataSource: X (row 1): Error during script execution. View Details for more information.
No signature of method: static soaptest.api.SOAPUtil.getXMLFromString() is applicable for argument
types: (java.util.ArrayList) values: [[FALSE, TRUE]]
Possible solutions: getXMLFromString([Ljava.lang.String;), getXMLFromString([Ljava.lang.String;,
boolean)
Additional Details:
No signature of method: static soaptest.api.SOAPUtil.getXMLFromString() is applicable for argument
types: (java.util.ArrayList) values: [[FALSE, TRUE]]
I did see another thread suggesting to build a JSON output instead of XML and write that to the data bank. I managed to resolve the issue this way. Thanks.
As the error message suggests, SOAPUtil.getXMLFromString() does not take an ArrayList as an argument. Instead, it takes a String array.
In case this helps, there's an answer on Stackoverflow:
How do I convert a Groovy String collection to a Java String Array?
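As a plain-Java illustration of the mismatch the error message is pointing at (the names and values below are placeholders, and this uses only standard java.util, not the Parasoft API): a Groovy list literal arrives as an ArrayList, and a method that declares a String[] parameter won't accept it until you convert.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListToArrayDemo {
    public static void main(String[] args) {
        // A Groovy literal like [myPath, Regression] is a java.util.ArrayList,
        // which does not match a parameter declared as String[].
        List<String> values = new ArrayList<>(Arrays.asList("FALSE", "TRUE"));

        // Converting the list to a String[] satisfies the String[] overload.
        String[] asArray = values.toArray(new String[0]);

        System.out.println(asArray.length); // 2
        System.out.println(asArray[0]);     // FALSE
    }
}
```

In Groovy the same conversion can be written inline by coercing the list to an array before making the call.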
thanks benken_parasoft, you are right it's treating it as a list. | https://forums.parasoft.com/discussion/comment/10956 | CC-MAIN-2019-18 | refinedweb | 270 | 61.53 |
Write a program to process bowling scores for players on a team. Calculate each player's series (the sum of his bowling games) and his average game score. Sum the players' scores for each game, and calculate the team's total for each game, the team's series, and the team's average game score. For each player and each team, print the scores for each game, series, and average. Write the program so it can handle as many teams as the user wants to process or no teams at all. Allow the user to continue entering team scores until he indicates there are no more teams to process. If one or more teams' scores are entered, print the team number with the highest series and its series score at the end of the program. Name your source code file LetsGoBowling.cpp.
1.
Print an overview of the program's purpose. This should be in terms a user would understand. In the purpose, tell the user how many players are on each team and how many bowling scores there are per player. Make sure the purpose is printed only once per program execution.
2.
Ask the user if he/she wants to continue. He/she must be able to quit without having to enter a single team's score. As long as the user wants to continue entering data, prompt him/her for team's scores. There is no constant number of teams. The user must tell the program when he/she wants to quit.
3.
Process one team at a time. Prompt the user for the first team's scores by first asking for player one's score for game one, then game two, and then game three. Edit each bowling score, and do not continue the program until a valid value is entered. A bowling score can be between 0 and 300, inclusive. If an invalid score is entered, print an error message with the invalid score and request a valid score be entered. After player's one scores are correctly entered, calculate the series total by adding his/her games' scores. Calculate the player's game average (2 decimal precision) by dividing series by number of games. Then print that player's individual games' scores, his series, and his game average. Although three is generally recognized as the number of games in a bowling series, use a symbolic constant for number of games. For this program, assign the number three to the symbolic constant for number of games in a series. Code the program in a way that if the number of games ever changed in series, the only change to your program would be a change to the value of the symbolic constant.
4.
After the first player's processing is complete, start processing the other players' scores on the team. For this program's purpose, use a symbolic constant for the number of players on a team and assign it a value of four. Again, if number of players per team ever changes, the only change to your program should be the value of your symbolic constant.
5.
After all player's scores have been processed and printed, print the team totals for each game and series. Calculate the team's game average (2 decimal precision) by dividing the series total by the number of games in a series. Print team's game average.
6.
Ask the user if he wants to enter more data. Process each team as previously described if he/she does want to enter more data.
7.
When the user decides he/she doesn't want to continue, print the team number with the highest series and its series. For the purposes of this program, do not worry about teams tying for highest series. If two teams did have the same series, the honor of high series would remain with the first of the teams processed by the program. If the user quit the program before entering any teams, do not print any information on high series.
8.
Print a closing message to the user.
9.
Use an array to store the bowling scores. You may need more than one array. You do not need to use a two dimensional array but you can if you want. You must have at least 1 function that accepts an array as an argument and updates the array in the function. You may have more than one function that accepts an array as an argument. Do not use any global variables. Symbolic constants may be used and these may be global.
My code so far is:
#include <cstdlib>
#include <iostream>
#include <iomanip>
#include <stdlib.h>

using namespace std;

void printintromessage ();
void getUserInput (char& Y);
void printplayerinfo (const int& numofgamesinseries, const int& numofplayersonteam,
                      int& i, double& scoresinput);

int main(int argc, char *argv[])
{
    const int numofgamesinseries = 3;
    int i, score[numofgamesinseries];
    const int numofplayersonteam = 4;
    int p, player[numofplayersonteam];
    double inputscores[numofgamesinseries] = {};
    char Y = Y;
    double scoresinput;

    printintromessage ();
    getUserInput (Y);

    do {
        if (Y == 'Y' || Y == 'y') {
            printplayerinfo (numofgamesinseries, numofplayersonteam, i, scoresinput);
        }
        else {
            cout << "All finished ? Thank you !" << endl;
        }
    } while ((Y == 'Y' || Y == 'y'));

    system("PAUSE");
    return EXIT_SUCCESS;
}

void printintromessage ()
{
    cout << "This program processes bowling scores." << endl;
    cout << "The user is asked to enter 3 bowling scores" << endl;
    cout << "for each person on the team. There are 4 people" << endl;
    cout << "per team. After all the scores are entered" << endl;
    cout << "for a player, the program will list the individual" << endl;
    cout << "game scores, the player's total score (series), and" << endl;
    cout << "the player's average. After each team's input is complete," << endl;
    cout << "the program will list the team's total scores by game," << endl;
    cout << "the team's total score (series) and the team's average." << endl;
    cout << "The user will then be asked if he/she wants to" << endl;
    cout << "enter more data. If not, the program will list the" << endl;
    cout << "team with the highest total score (series) and that " << endl;
    cout << "total. If the user wants to enter more data," << endl;
    cout << "the above process is repeated." << endl;
    cout << " " << endl;
}

void getUserInput (char& Y)
{
    cout << "Do you want to enter (more) data?" << endl;
    cout << "Enter Y to continue, anything else to quit: ";
    cin >> Y;
    cout << " " << endl;
}

void printplayerinfo (const int& numofgamesinseries, const int& numofplayersonteam,
                      int& i, double& scoresinput)
{
    for (i = 1; i <= numofgamesinseries; i++) {
        cout << "Enter player " << 1 << "'s score " << i << ": ";
        cin >> scoresinput;
        if (scoresinput > 0 && scoresinput <= 300)
            cout << "good" << endl;
        else
            cout << scoresinput << " is not a valid score ! Score must be from 0 to 300" << endl;
    }
}
It's not complete and I know i'm not close, but the problems I am facing are causing me to not be able to move on;
My issues are:
1. How do I store the scores in arrays, then recall them?
2. How do I prompt the user for 3 scores from 4 players before it prints the scores?
3. I have the counter down for player's score, but how do I set one for the player, such as player 1 score - 3, then go to player 2 score 1 - 3?
What I am trying to do is input all the data into the array or if need me multiple arrays and then have them print and I am having lots of issues please help!!! | https://www.daniweb.com/programming/software-development/threads/270525/i-need-help-with-bowling-program | CC-MAIN-2021-43 | refinedweb | 1,225 | 69.92 |
WCF RIA Services Exception Handling
Note: The examples in this blog post are based on the WCF RIA Services PDC beta, and changes to the framework can be done until it hits RTM.
In a preview blog post I wrote about how to handle exception when using .NET RIA Services and the Load method. This blog post will be about the same but instead based on the WCF RIA Services.
If you want add a generic way to log exceptions thrown on the server-side, you can override the DomainService OnError method:
[EnableClientAccess()]
public class DomainService1 : DomainService
{
public IEnumerable<Customer> GetCustomers()
{
throw new ApplicationException("My exception");
}
protected override void OnError(DomainServiceErrorInfo errorInfo)
{
//Log exception errorInfo.Error
}
}
When you make a call to the Load method of the DomainContext on the client-side and the Load operation fails, an exception will be thrown when the Load operation is completed. If you use Silverlight as the client and you don't handle the exception on the client-side, the App's Application_UnhandledException will be executed. This is new to WCF RIA Services; in .NET RIA Services, no exception was thrown on the client-side.
Something to be aware of is that WCF RIA Services will use the customErrors section in the web.config to decide whether or not to pass a detailed server-side exception message to the client.
<customErrors mode="On" defaultRedirect="GenericErrorPage.htm"/>
If customErrors is on or remoteOnly (and you aren’t running the app locally on the remote machine), the message of the exception throw on the server-side will not be passed to the client. You will still get an exception, but the information you will get is the name of the server-side method that throw an exception “Load operation failed for query ‘GetCustomers’. …..”. The type of the exception is System.Windows.Ria.DomainOperationException. You will get the same exception type even if the customErrors mode is set to Off or remoteOnly (When you are running the app locally on the remote machine), but after the name of the method, you will also get the server-side exception message “Load operation failed for query ‘GetCustomers’: My exception”.
NOTE: Don’t include sensitive information in the exception message that can be used by a hacker, so think through what kind of message you want to send to the client. In most cases a simple message like “Retrieving customers failed, please try again, if you see the same message please contact an administrator”. Make sure you log the original message so you have something to analyze if a user will contact you.
There are several ways to check whether a server-side load operation failed when using the DomainContext Load method. You can either use the LoadOperation object returned from the Load method and hook up to its completed event, or pass in a callback to the Load method. I prefer to use a callback. Here is an example where a MessageBox will show an exception message if the GetCustomers method fails:
customerDomainContext.Load<Customer>(ds.GetCustomersQuery(),
loadOperation =>
{
if (loadOperation.HasError)
{
MessageBox.Show(loadOperation.Error.Message);
loadOperation.MarkErrorAsHandled();
}
}
,null);
The LoadOperation has a HasError property; you can use this property to see if the load operation has failed. You can then use the Error property of the LoadOperation to get the error message from the server-side (remember the customErrors section mentioned earlier, it can prevent you from getting the message thrown on the server-side). By using the LoadOperation's MarkErrorAsHandled method, you tell WCF RIA Services that you have handled the exception, so there is no reason to pass it along. There is also a property which you can use to see or specify that the exception is handled, and that property is IsErrorHandled.
If you want to know when I publish a new blog post, you can follow me on twitter: | http://weblogs.asp.net/fredriknormen/wcf-ria-services-exception-handling | CC-MAIN-2015-48 | refinedweb | 642 | 50.26 |
/* --------------------
 * ChartMouseEvent.java
 * --------------------
 * (C) Copyright 2002, 2003, by Object Refinery Limited and Contributors.
 *
 * Original Author:  David Gilbert (for Object Refinery Limited);
 * Contributor(s):   Alex Weber;
 *
 * $Id: ChartMouseEvent.java,v 1.3 2003/06/12 16:53:57 mungady Exp $
 *
 * Changes
 * -------
 * 27-May-2002 : Version 1, incorporating code and ideas by Alex Weber (DG);
 * 13-Jun-2002 : Added Javadoc comments (DG);
 * 26-Sep-2002 : Fixed errors reported by Checkstyle (DG);
 * 05-Nov-2002 : Added a reference to the source chart (DG);
 */

package org.jfree.chart;

import java.awt.event.MouseEvent;

import org.jfree.chart.entity.ChartEntity;

/**
 * A mouse event for a chart that is displayed in a ChartPanel.
 *
 * @author David Gilbert
 */
public class ChartMouseEvent {

    /** The chart that the mouse event relates to. */
    private JFreeChart chart;

    /** The Java mouse event that triggered this event. */
    private MouseEvent trigger;

    /** The chart entity (if any). */
    private ChartEntity entity;

    /**
     * Constructs a new event.
     *
     * @param chart  the source chart.
     * @param trigger  the mouse event that triggered this event.
     * @param entity  the chart entity (if any) under the mouse point.
     */
    public ChartMouseEvent(JFreeChart chart, MouseEvent trigger, ChartEntity entity) {
        this.chart = chart;
        this.trigger = trigger;
        this.entity = entity;
    }

    /**
     * Returns the chart that the mouse event relates to.
     *
     * @return the chart.
     */
    public JFreeChart getChart() {
        return this.chart;
    }

    /**
     * Returns the mouse event that triggered this event.
     *
     * @return the event.
     */
    public MouseEvent getTrigger() {
        return this.trigger;
    }

    /**
     * Returns the chart entity (if any) under the mouse point.
     *
     * @return the chart entity.
     */
    public ChartEntity getEntity() {
        return this.entity;
    }

}
(Not So) Stupid Questions: (2) String Equality
"The side-effects of String equality don't make sense"
One of our readers submitted the following code, which had us scrambling for our javadocs and a copy of the Java Language Specification. Compile the following:
public class StringTester {
public static void main(String args[]){
String aString = "myValue";
String bString = "myValue";
String cString = "";
if (args.length ==1 ) cString = args[0];
boolean test1 = aString.equals(bString);
System.out.println("a.equals(b): " +
aString + ".equals("+bString+") is " + test1);
boolean test2 = aString == bString;
System.out.println("a==b: " +
aString + " == " + bString+" is " + test2);
boolean test3 = aString.equals(cString);
System.out.println("a.equals(c): " +
aString + ".equals("+cString+") is " + test3);
boolean test4 = aString == cString;
System.out.println("a==c: " +
aString + " == " + cString+" is " + test4);
}
}
When run with
myValue as the command-line argument, this produces the following output:
a.equals(b): myValue.equals(myValue) is true
a==b: myValue == myValue is true
a.equals(c): myValue.equals(myValue) is true
a==c: myValue == myValue is false
So, the two constants,
aString and
bString are not only equivalent, they're the same object, yet
cString is equivalent but is a different object. My question is:
What's the deal with String equality?
First thoughts:
We can see that
aString and
bString are the same object. Doesn't the spec tell us that
- All Strings are immutable.
- All Strings are held in a "String pool", with one unique instance of each string of characters.
In other words, doesn't this explain the object equality of
aString and
bString; they're the same run of characters, so there's one object in the String pool pointed to by both
aString and
bString.
The fact that
aString and
cString are equivalent but are different objects seems to indicate that point #2 is not entirely true. Since
cString consists of the characters
myValue, just like
aString and
bString, it should point to the same member of the String pool, and thus have pointer equality, right?
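A quick standalone experiment (my own, not part of the reader's submission) shows where the pool actually comes into play: the compiler interns literals and constant expressions, but an explicitly constructed String is a fresh object.

```java
public class PoolDemo {
    public static void main(String[] args) {
        String a = "myValue";             // literal: interned at class-load time
        String b = "my" + "Value";        // constant expression: folded and interned by the compiler
        String c = new String("myValue"); // explicitly constructed: a fresh heap object

        System.out.println(a == b);      // true  -- same pooled instance
        System.out.println(a == c);      // false -- distinct object, equal value
        System.out.println(a.equals(c)); // true
    }
}
```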
So,
aString,
bString, and
cString have identical values and yet are not equal. It seems that
cString is not working with the String pool as it is a unique object with the same value as the other two.
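One way to confirm this hypothesis (again, my own experiment rather than part of the submitted code) is String.intern(), which returns the pooled instance for a given run of characters; a runtime-built string sits outside the pool until you intern it.

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = "myValue"; // literal: already in the pool
        // Built at runtime, much like a value read from args[0]:
        String c = new StringBuilder("my").append("Value").toString();

        System.out.println(a == c);          // false: c is not the pooled instance
        System.out.println(a == c.intern()); // true: intern() maps c to the pooled "myValue"
    }
}
```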
Strings are unusual in Java because often developers treat them as primitives when they are really objects. Part of this confusion comes up for new developers because a String can be instantiated without using
new as
String a = "hello". But we have more of this possible confusion coming with autoboxing and autounboxing. Are there going to be problems with equals in the future that arise from boxing and unboxing?
This brings me to three questions:
1. What is going on with String equality?
And the two follow-on questions:
2. Why might this be the desired behavior? and
3. Are we going to have more problems with equals starting in J2SE 1.5 that result from the autoboxing and autounboxing?
(Not So) Stupid Questions is where we feature the questions you want to ask but aren't sure how.
In the world of React, there are two ways of writing a React component. One uses a function and the other uses a class. Recently functional components are becoming more and more popular, so why is that?
This article will help you understand the differences between functional and class components by walking through each one with sample code so that you can dive into the world of modern React!
Rendering JSX
First of all, the clear difference is the syntax. Just like in their names, a functional component is just a plain JavaScript function that returns JSX. A class component is a JavaScript class that extends
React.Component which has a render method. A bit confusing? Let’s take a look into a simple example.
import React from "react"; const FunctionalComponent = () => { return <h1>Hello, world</h1>; };
As you can see, a functional component is a function that returns JSX. If you are not familiar with arrow functions introduced in ES6, you can also check out the example below without.
import React from "react"; function FunctionalComponent() { return <h1>Hello, world</h1>; }
See render with functional component in CodePen
On the other hand, when defining a class component, you have to make a class that extends
React.Component. The JSX to render will be returned inside the render method.
import React, { Component } from "react"; class ClassComponent extends Component { render() { return <h1>Hello, world</h1>; } }
Below is the same example but without using destructuring. If you are not familiar with destructuring, you can learn more about destructuring and arrow functions introduced in ES6!
import React from "react"; class ClassComponent extends React.Component { render() { return <h1>Hello, world</h1>; } }
See render with class component in CodePen
Passing props
Passing props can be confusing, but let’s see how they are written in both class and functional components. Let’s say we are passing props of the name “Shiori” like below.
<Component name="Shiori" />
const FunctionalComponent = ({ name }) => { return <h1>Hello, {name}</h1>; };
Inside a functional component, we are passing props as an argument of the function. Note that we are using destructuring here. Alternatively, we can write it without as well.
const FunctionalComponent = (props) => { return <h1>Hello, {props.name}</h1>; };
In this case, you have to use
props.name instead of name.
See props with functional component in CodePen
class ClassComponent extends React.Component { render() { const { name } = this.props; return <h1>Hello, { name }</h1>; } }
Since it is a class, you need to use
this to refer to props. And of course, we can use destructuring to get
name inside props while utilizing class-based components.
See props with class component in CodePen
Handling state
Now we all know that we cannot avoid dealing with state variables in a React project. Handling state was only doable in a class component until recently, but since React 16.8, the useState Hook allows developers to write stateful functional components. You can learn more about Hooks from the official documentation. Here we are going to make a simple counter that starts from 0, and each click on the button will increment the count by 1.
Handling state in functional components
const FunctionalComponent = () => { const [count, setCount] = React.useState(0); return ( <div> <p>count: {count}</p> <button onClick={() => setCount(count + 1)}>Click</button> </div> ); };
To use state variables in a functional component, we need to use
useState Hook, which takes the initial state as an argument. In this case we start with 0 clicks, so the initial state of count will be 0.
Of course you can have more variety of initial state including
null,
string, or even
object - any type that JavaScript allows! And on the left side, as
useState returns the current state and a function that updates it, we are destructuring the array like this. If you are a bit confused about the two elements of the array, you can consider them as a state and its setter. In this example we named them
count and
setCount to make it easy to understand the connection between the two.
See state with functional component in CodePen
Handling state in class components
class ClassComponent extends React.Component { constructor(props) { super(props); this.state = { count: 0 }; } render() { return ( <div> <p>count: {this.state.count} times</p> <button onClick={() => this.setState({ count: this.state.count + 1 })}> Click </button> </div> ); } }
The idea is still the same but a class component handles state a bit differently. Firstly, we need to understand the importance of the
React.Component constructor. Here is the definition from the official documentation:
“The constructor for a React component is called before it is mounted. When implementing the constructor for a React.Component subclass, you should call super(props) before any other statement. Otherwise, this.props will be undefined in the constructor, which can lead to bugs.”
Basically, without implementing the constructor and calling super(props), all the state variables that you are trying to use will be undefined. So let’s define the constructor first. Inside the constructor, you will make a state object with a state key and initial value. And inside JSX, we use
this.state.count to access the value of the state key we defined in the constructor to display the count. Setter is pretty much the same, just different syntax.
Alternatively, you can write an
onClick function. Remember, the
setState function takes argument(s) of state, props(optional) if needed.
onClick={() => this.setState((state) => { return { count: state.count + 1 }; }) }
See state with class component in Codepen
Lifecycle Methods
Finally, let’s talk about lifecycles. Hang on, we are almost there! As you already know, lifecycles play an important role in the timing of rendering. For those of you who are migrating from class components to functional components, you must be wondering what could replace lifecycle methods such as
componentDidMount() in a class component. And yes, there is a hook that works perfectly for the purpose, let’s check it out!
On Mounting (componentDidMount)
The lifecycle method
componentDidMount is called right after the first render completes. There used to be a
componentWillMount that happens before the first render, but it is considered legacy and not recommended to use in newer versions of React.
const FunctionalComponent = () => { React.useEffect(() => { console.log("Hello"); }, []); return <h1>Hello, World</h1>; };
Replacing
componentDidMount, we use the
useEffect hook with the second argument of
[]. The second argument of the
useState hook is normally an array of a state(s) that changes, and
useEffect will be only called on these selected changes. But when it’s an empty array like this example, it will be called once on mounting. This is a perfect replacement for a
componentDidMount.
class ClassComponent extends React.Component { componentDidMount() { console.log("Hello"); } render() { return <h1>Hello, World</h1>; } }
Basically the same thing happens here:
componentDidMount is a lifecycle method that is called once after the first render.
On Unmounting (componentWillUnmount)
const FunctionalComponent = () => { React.useEffect(() => { return () => { console.log("Bye"); }; }, []); return <h1>Bye, World</h1>; };
I am happy to tell you that we can also use a
useState hook for unmounting as well. But be careful, the syntax is a bit different. What you need to do is return a function that runs on unmounting inside the
useEffect function. This is especially useful when you have to clean up the subscriptions such as a
clearInterval function, otherwise it can cause a severe memory leak on a bigger project. One advantage of using
useEffect is that we can write functions for both mounting and unmounting in the same place.
class ClassComponent extends React.Component { componentWillUnmount() { console.log("Bye"); } render() { return <h1>Bye, World</h1>; } }
See lifecycle with functional component in Codepen
See life cycle with class component in Codepen
Conclusion
There are pros and cons in both styles but I would like to conclude that functional components are taking over modern React in the foreseeable future.
As we noticed in the examples, a functional component is written shorter and simpler, which makes it easier to develop, understand, and test. Class components can also be confusing with so many uses of
this. Using functional components can easily avoid this kind of mess and keep everything clean.
It should also be noted that the React team is supporting more React hooks for functional components that replace or even improve upon class components. To follow up, the React team mentioned in earlier days that they will make performance optimizations in functional components by avoiding unnecessary checks and memory allocations. And as promising as it sounds, new hooks are recently introduced for functional components such as
useState or
useEffect while also promising that they are not going to obsolete class components. The team is seeking to gradually adopt functional components with hooks in newer cases, which means that there is no need to switch over the existing projects that utilize class components to the entire rewrite with functional components so that they can remain consistent.
Again, there are a lot of valid coding styles in React. Yet I prefer using functional components over class components for those reasons listed above. I hope this article helped you get more familiar with modern React. To learn more, check out the official documentation! You can also check out our post on building a Twilio Video app see a practical use of functional components with hooks.
Shiori Yamazaki is a Software Engineering Intern on the Platform Experience team. She loves to develop modern web applications. She can be reached at syamazaki [at] twilio.com or LinkedIn. | https://www.twilio.com/blog/react-choose-functional-components | CC-MAIN-2021-31 | refinedweb | 1,576 | 56.66 |
Reader:
CLSID clsid; IPersistFile* ppf = ...; HRESULT hr = ppf->lpVtbl->GetClassID(ppf, &clsid);
The above macro at least removes the error potential of
passing the wrong
this pointer:.
I’ve used COM from C a lot, because it’s quite fun.
However, it becomes quite a pain in the backside when people distribute COM components with only a Type Library, and no C header: To use the functions statically, a C programmer has to regenerate the header by fiddling with OLEVIEW and MIDL!
Seems that the emphasis on "plain C" in original question is a bit misplaced. I’m pretty sure Will was interested more in second part – "Windows API" code. Which I presume means not using MFC/WTL/whatever.
If you look at the Wine source, you will see tons of this stuff, Wine being written in C and constructing its COM objects "by hand", as it were.
When you step into the window API with a stripped PDB you can easily see a lot of C++ code (I did not step into the kernel yet, just user-land code).
Does the #import directive work with .C files? If so, you could just #import "foo.tlb" no_namespace and have the compiler generate the class definitions for you.
"However, it becomes quite a pain in the backside when people distribute COM components with only a Type Library, and no C header: To use the functions statically, a C programmer has to regenerate the header by fiddling with OLEVIEW and MIDL!"
I think maybe he was actually asking if the Vista look requires using COM APIs.
Where’s the link to the question? The link above is for some site that sells a programing language. I don’t get it.
Unless you insist on using C++ exported name mangled entrypoints, where no two compilers agree.
Trying to read between the line, I assume Mr. Rayer was confused by the hype around WPF (codename Avalon), and wondering whether you have to use that to get all the new-fangled transparencies and stuff.
I remember similar confusion when Windows Server 2003 was called "Windows .NET Server" – some people thought it was written in .NET, and that you must program in .NET to run on it. Over-hyping has its perils.
No, it doesn’t work with C files, that’s the problem.
Also, you have the same problem if you want to program in C++ with MinGW, which doesn’t support #import either.
I’m pretty sure he was asking about the *API* and not the language.
Interesting that you wrote an entire article on what is essentially a gigantic “Nitpicker’s Corner” of your own!!!!
@Gabe: far from some other language being required for Vista APIs, .NET code actually has to call an unmanaged entry point to create ‘glass’ areas of a window: DwmExtendFrameIntoClientArea.
The new ‘big buttons’ dialog boxes are created with the (C-style) TaskDialog function, or TaskDialogIndirect.
The newest ‘open file’ and ‘save file’ dialogs are COM-based, however.
I once worked on a project with a mandate that Ada be the programming language used for most of the code. We were able to create data types in Ada that looked like the COM object layout. The Ada code was then called from MFC-based user interface code using COM. By the time the code hit the CPU it didn’t matter that the application was a weird mix of C++ and Ada.
What about if/when .NET is used for the API being called? Can C directly call code in .Net assemblies? All the examples I could find involve a C++/CLR wrapper that exposes an unmanaged interface being used.
I appreciate that it must be technically possible – but is it feasible?
@Joe
You could use the hosting (COM) API to host the CLR in your application, load the relevant assemblies and call their code.
It seems unnecessarily painful though.
@Joe, @Sunil Joshi: If I remember correctly, if a .Net assembly is COM-Visible, it can be called directly from an unmanaged application via COM, without having to call the hosting API from the client code (the hosting API is called by the COM infrastructure itself).
Great article!
(Nit-pick Warning:) I’d say this isn’t the case. It reminds me of a newsgroup war over what language Doom was written in. Anyone reasonable knowledgeable could easily tell it was written in C via the .exe, without even disassembling it.
However I get the point: It’s all machine code after the compiler is done (well, at least IL code in the case of .NET; which is pretty much the same thing). Neat concept really.
I’m surprised you attempted to answer this.
I read the question about 10 times and still don’t have even a guess as to what Will is asking. I would say the degree to which the code works under Vista is the same as XP – very well.
Whilst I get the idea of what you’re saying, it’s not entirely accurate.
Although the CPU can’t ‘tell the difference’ it’s very easy to fire up a disassembler and take a look for yourself.
Code output by a Delphi compiler looks different to that output by a C++ compiler, which looks different again to that output by a C compiler, and different again to that output by an assembler.
Just because they all end up at the same level at the end doesn’t mean that “the identity of the source language is long gone”. It still continues to linger.
I guess it depends on what you define as the ‘identity’. I would say that because the original language can usually be identified under normal circumstances that at least some ‘identity’ continues to linger.
Of course someone could go to the trouble of ‘faking’ the language, but even you have to admit seeing that in the wild would be very unlikely.
My point was not that it was foolproof (naturally, there is lots of information that is unknown to you when disassembling a piece of software), but rather, that under normal circumstances there is still information left over that can identify the language in which the software was written.
Jim, I’ve actually found Wine quite useful when I couldn’t figure out how to use some set of functions using just the SDK.
For those who use MinGW who have mangling issues, the killat flag can help fix that in certain scenarios.
Also, I think Raymond’s point was not whether you could in principle determine with a certain amount of certainty what the original language was (if it was VB you will spot the pcode) but that when the code is running, it’s all machine code, no matter what compiler (or even interpreter, after all even interpreters have to execute corresponding machine code at some point) was used.
Well, if you want to take the stance that you’re talking about the CPU not knowing the difference, not humans, that’s something different entirely.
Of course the CPU will be indifferent, it’s a machine. ;)
I get what you’re saying though. :)
The CPU doesn’t know the program is compiled from C-code. It doesn’t help if the pe/coff format would have had a "original programming language"-property or even if you try to send the string "this program is compiled from C-code" into the cpu. The cpu cannot understand such attributes. It lacks the ability to be able to know such things.
As others have pointed out, the original question sounds like they were talking about the library support, not the language itself. (It’s more complicated to write a Windows app in plain C because there are less libraries like MFC/WTL available.)
And regarding calling into .NET code from native code: by far the simplest way is to write a mixed-mode C++ "wrapper" (either directly in the app or in a DLL). You can pretty much just call the .NET code directly then; the compiler sorts out the details for you.
The only way to fix this is if the compiler manufacturers embedded some kind of crypto certificate manifest in executable files/libraries (and checksum signed the file also), which the OS have to read and confirm which compiler was used. Then the OS can alter the code for whatever reason. This is currently not supported, a flaw.
The question really was about getting “native look and feel” in unmanaged code with maybe some new, but old-style, API (i.e. Win64).
With the .NET hype, one may think that WPF + a managed Microsoft language is required to make applications benefit from voice recognition, nice transparency effects in windows, and everything new in Vista, making MinGW and non-Microsoft C language implementations unable to access those extra features.
Analogically, one can say that the MS-DOS “int 21h” API was well supported in Windows 95, and even had been extended but would produce applications with non-native look & feel (e.g. no Win32-like GUI support).
"Imagine if only WPF applications got "native look and feel". How many applications would have native look and feel? (What about Explorer?)"
I know that the answer obviously is "yes, you can get a native look & feel with the Win64 or Win32 API", but, at least, the question makes some sense. | https://blogs.msdn.microsoft.com/oldnewthing/20090824-00/?p=17023 | CC-MAIN-2016-36 | refinedweb | 1,567 | 71.75 |
Loaders
TypeScript isn't core JavaScript so webpack needs a bit of extra help to parse the
.ts files. It does this through the use of loaders. Loaders are a way of configuring how webpack transforms the outputs of specific files in our bundles. Our
ts-loader package is handling this transformation for TypeScript files.
Inline
Loaders can be configured – inline – when requiring/importing a module:
const app = require('ts!./src/index.ts');
The loader is specified by using the
! character to separate the module reference and the loader that it will be run through. More than one loader can be used and those are separated with
! in the same way. Loaders are executed right to left.
const app = require('ts!tslint!./src/index.ts');
Although the packages are named
ts-loader,
tslint-loader,
style-loader, we don't need to include the
-loaderpart in our config.
Be careful when configuring loaders this way – it couples implementation details of different stages of your application together so it might not be the right choice in a lot of cases.
Webpack Config
The preferred method is to configure loaders through the
webpack.config.js file. For example, the TypeScript loader task will look something like this:
{ test: /\.ts$/, loader: 'ts-loader', exclude: /node_modules/ }
This runs the typescript compiler which respects our configuration settings as specified above. We want to be able to handle other files and not just TypeScript files, so we need to specify a list of loaders. This is done by creating an array of tasks.
Tasks specified in this array are chained. If a file matches multiple conditions, it will be processed using each task in order.
{ ... module: { rules: [ { test: /\.ts$/, loader: 'tslint' }, { test: /\.ts$/, loader: 'ts', exclude: /node_modules/ }, { test: /\.html$/, loader: 'raw' }, { test: /\.css$/, loader: 'style!css?sourceMap' }, { test: /\.svg/, loader: 'url' }, { test: /\.eot/, loader: 'url' }, { test: /\.woff/, loader: 'url' }, { test: /\.woff2/, loader: 'url' }, { test: /\.ttf/, loader: 'url' }, ], noParse: [ /zone\.js\/dist\/.+/, /angular2\/bundles\/.+/ ] } ... }
Each task has a few configuration options:
test - The file path must match this condition to be handled. This is commonly used to test file extensions eg.
/\.ts$/.
loader - The loaders that will be used to transform the input. This follows the syntax specified above.
exclude - The file path must not match this condition to be handled. This is commonly used to exclude file folders, e.g.
/node_modules/.
include - The file path must match this condition to be handled. This is commonly used to include file folders. eg.
path.resolve(__dirname, 'app/src').
Pre-Loaders
The preLoaders array works just like the loaders array only it is a separate task chain that is executed before the loaders task chain.
Non JavaScript Assets
Webpack also allows us to load non JavaScript assets such as: CSS, SVG, font files, etc. In order to attach these assets to our bundle we must require/import them within our app modules. For example:
import './styles/style.css'; // or const STYLES = require('./styles/style.css');
Other Commonly Used Loaders
raw-loader - returns the file content as a string.
url-loader - returns a base64 encoded data URL if the file size is under a certain threshold, otherwise it just returns the file.
css-loader - resolves
@importand
urlreferences in CSS files as modules.
style-loader - injects a style tag with the bundled CSS in the
<head>tag. | https://angular-2-training-book.rangle.io/handout/project-setup/loaders.html | CC-MAIN-2018-09 | refinedweb | 555 | 68.77 |
Hi I am accessing hardware from Python. I am running Linux.
I have a "C" library compiled as libvv_pts.so that I load using CDLL ...
self.lib = CDLL('./libvv_pts.so')
and in that library there is a function as follows ...
/**
* =============================
* @brief read data from the application VME address space
* @param handle returned from a successfull open
* @param the 4 byte aligned byte offset where D32 reads will start
* @param buf points to an array where the read data will be stored
* @param size is the size in bytes of the read transaction to be performed
* @return -1 on error (see errno) or 0 if closed OK
*/
int vv_read(void *handle, int byte_offset, int *ibuf, size_t size)
I implement vv_pts.py that calls the C library as follows ...
def vv_read(self, byte_offset):
""" Read from the application FPGA wishbone bus
The byte offset will be aligned to D32
The value will contain the 32 bit integer read
"""
x = c_int(0)
cc = self.lib.vv_read(self.handle, byte_offset, byref(x), 4)
value = x.value
if cc != 0:
raise BusException("Failed vv_read: offset:0x%X" % (byte_offset))
return value & 0xFFFFFFFF
and call it in my program, and this all works just fine but is horribly slow. So I would like to read an array. I could even do a DMA tarnsfer if I could figure out a way of pointing it at Python memory, but I am trying a more simple array access as follows ...
def vv_read_array(self, byte_offset, buf, size):
""" Read size bytes into the string array buf
The byte_offset and size will be D32 aligned
"""
cc = self.lib.vv_read(self.handle, byte_offset, id(buf), size)
if cc != 0:
raise BusException("Failed vv_read_array: offset:0x%X size:%d" % (byte_offset, size))
return cc
So I tried "by_ref" and that didn't work, so I am using the id(buf) to get the address to copy to. From my "big integers" question I discover Python doesn't represent data in raw format and my read array function destroys the interpreter.
Can anyone make a suggestion as to how to do a hardware array transfer into Python memory ...
Thanks | http://forums.devshed.com/python-programming-11/dma-python-930972.html | CC-MAIN-2016-22 | refinedweb | 352 | 70.23 |
Ingredients:
1 bag tortilla chips
1 jar hot salsa
1 cup grated cheddar cheese
1 4 oz. can of chopped green chiles
1 can refried beans
Directions:
Spread your chips out on a cookie sheet, two layers deep. Sprinkle the green chiles and cheese over them. Place appropriate amounts of refried beans where they seem to be most needed. Do the same with the salsa.

Place this disturbing-looking mass into your oven and bake at whatever temperature you like. I usually put them in for 5 minutes at 400 degrees Fahrenheit. You may want to do something different.

When they're done, take them out of the oven and transfer them onto a plate. Some toppings to add after they're heated, if you're so inclined, are: black olives, sour cream, or chili.
An extremely simplified instructional operating system used almost exclusively in courses about operating systems. It runs natively on DEC MIPS workstations and little else, although a MIPS simulator is included in the distribution, so it can run on top of most Unix systems, including Linux. It is written in a subset of C++; that is, no polymorphism, C++ streams, operator overloading, or function overloading is used.
NachOS was developed at Berkeley by Wayne Christopher, Steven Procter, and Thomas Anderson. It is freely available on the web.
NachOS features simplified versions of a filesystem, message-passing networking, and multithreading. The code is extremely simple and easy to modify (from first-hand experience). It's also very easy to test new features, since it has several built-in and modifiable self-test functions.
The simulator is deterministic, meaning that the same code will behave identically each time it's run. This makes things easy to debug, but the system as a whole slightly less realistic. However, the networking simulator will drop random packets - the same random packets each time the code is run. The design decisions in NachOS almost all swing on the side of simplicity instead of power or realism. The point of it all, of course, is to show people the basic issues and problems at hand when writing an operating system.
If you want to poke around in the guts of an operating system, to see how your theories about process scheduling work or something, or if you're the teacher of an OS class, I'd highly recommend you take a look at NachOS. It's written for precisely that purpose, and it's a joy (well, maybe just not a pain) to work with.
Nachos / Tortilla Chips
There is no difference between the two, except that tortilla chips become nachos when other condiments are added. Chips alone are just tortilla chips. Tortilla chips are very easy to make, and fresh ones are far better than anything out of a bag. I make these for guests and have received many compliments. Here's a chip recipe, and some more to go with it.
Tortilla Chips
Heat the oil in a large skillet or a wok for frying.
Cut the tortillas into pie-shaped wedges.
Fry the tortillas in hot oil until crisp.
Remove and drain on paper towels.
Salt to taste and serve with guacamole and salsa.
Taco Dip
Brown hamburger. Add taco seasoning, taco sauce and refried beans. Mix well. Pour into loaf pan; spoon sour cream over and sprinkle with the cheese.
Bake at 350 degrees for 30 minutes. Eat with tortilla chips.
Jalapeno Bean Dip
Cut off tops of peppers. Combine whole peppers and remaining ingredients in saucepan. Simmer 15 minutes, adding water if needed, to keep beans from sticking to bottom of pan. Allow mixture to cool and remove peppers. Serve dip with fresh vegetables.
NOTE: This is a Mexican dish, but can easily be an appetizer when served with crackers.
Salsa Cruda
Guacamole
Cut the avocado lengthwise down to the seed and pull the two halves apart.
Remove the seed.
Scoop out the avocado meat.
Mash with lemon juice.
Mix in remaining ingredients, use right away.
Nachos is an instructional, monolithic-kernel operating system. When I took an operating systems class, NACHOS was portrayed as an acronym for "Not Another Completely Heuristic Operating System". However, I don't see this documented at what is apparently the Nachos page.1 Perhaps this is for good reason because NachOS is not entirely an operating system; it is a user space program which simulates a simple MIPS machine and an operating system on top of that. Since the name is unclear, I'll write "Nachos" with minimal capitalization.
Nachos was initially developed at Berkeley by Wayne A. Christopher, Steven J. Procter, and Thomas E. Anderson from 1991 through the Spring of 1992.2 Up through version 3.4, Nachos restricted itself to a subset of C++. A 23-page paper is included with Nachos 3.4 and 4.0 which introduces this subset to those with moderate proficiency in C. Version 4.0 is written in a slightly broader subset, utilizing templates to reduce repeated code. Judging by comments in its source files, 4.0 was finished in 1996. Further, a Java version, 5.0j, has been developed at Berkeley by Dan Hettena and Rick Cox.3 5.0j is a nearly total rewrite with a similar structure to 4.0. It fixes many old bugs and adds an autograder under which everything runs. Judging from its license, 5.0j was developed in 2001.
Nachos lacks many important components of an operating system including synchronization, system calls, support for multiple, simultaneous user programs, networking, a proper filesystem, and virtual memory. Instead, a framework for the introduction of these facilities exists. The job of a student learning about operating systems with Nachos is to actually code most of the kernel of this operating system. Many classes which teach Nachos 3.4 utilize the Nachos roadmap by Thomas Narten4,5 which is a guide to the underlying functions and behavior of Nachos with valuable hints. Since Nachos 4.0 is a large rewrite, Narten's roadmap does not accurately reflect its classes, and so 4.0's author recommends continued use of 3.4 for pedagogical purposes.
Not surprisingly, Nachos is licensed under a variant of the BSD license. It is reproduced below, both to be informative and to be in accordance with its terms, so that I may reproduce its code without concerns for fair use. Nachos 4.0 doesn't extend the date of copyright even though it clearly was developed beyond 1993. Nachos 5.0j's license extends its copyright date through 2001 but is otherwise identical.
I took an operating systems class wherein we utilized Nachos. We used Nachos 3.4 but my commentary attempts to cover all material relating to 4.0 and 5.0j as well. My view of Nachos is somewhat biased because those who worked on it in my class elected to do so in lieu of doing the (generally easier) projects most people did. Therefore, the material in class did not track with Nachos and much research had to be done to even understand the assignments. Case in point, the first thing we needed to implement were condition variables. None of our group of three had heard of these previously, and they weren't covered in class; that was one small part of the first project. From there, things got worse. See Issues below for elaboration.
All versions of Nachos feature a multi-threaded kernel which is constrained so that it only runs a single thread at once. In 3.4/4.0, this is done using purely user-mode threads; switching is done with a small trick of assembly. In the Java version, native threads (from java.lang.Thread) are used, but Nachos ensures that only one of its threads runs at once. Each thread stores its registers and its process and simulated-machine state before it switches out and restores them when it begins executing again. This setup makes debugging relatively easy, with a few caveats. Most notably, in 3.4/4.0 one should never use until in gdb, since there is a fair chance that a thread switch will occur before the current block returns. If this happens, gdb loses track of the block, and Nachos runs to completion.
Hardware and software interrupts are all handled by an interrupt queue. The queue is necessary since Nachos is a simulation, so interrupts cannot be generated in real time. Instead, an interrupt is scheduled to occur at some point in the simulated future. Whenever the simulation time is advanced, all due interrupt handlers are called before control is returned to the previously executing code. If no threads are ready, Nachos repeatedly advances time to the time of the first interrupt in the queue and executes its handler.
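That pending-interrupt machinery is small enough to sketch. The class and member names below are illustrative, not Nachos' real interface; the sketch only shows the shape of the scheme: handlers are queued at future simulated times and fired whenever the clock advances past them.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Hypothetical sketch of Nachos-style deferred interrupts: handlers are
// scheduled at a future simulated time, and advancing the clock fires
// every handler whose time has come before control returns.
struct Pending {
    long when;                      // simulated tick at which to fire
    std::function<void()> handler;  // interrupt service routine
    bool operator>(const Pending& o) const { return when > o.when; }
};

class InterruptSim {
public:
    long now = 0;
    void Schedule(long delay, std::function<void()> h) {
        queue.push({now + delay, std::move(h)});
    }
    // Advance simulated time by `ticks`, firing due handlers in order.
    void Advance(long ticks) {
        now += ticks;
        while (!queue.empty() && queue.top().when <= now) {
            auto p = queue.top();
            queue.pop();
            p.handler();
        }
    }
    // When no thread is ready, jump straight to the next interrupt.
    void Idle() {
        if (!queue.empty()) Advance(queue.top().when - now);
    }
private:
    std::priority_queue<Pending, std::vector<Pending>,
                        std::greater<Pending>> queue;
};
```

Note that Idle() is what makes the simulation skip dead time: rather than spinning, it jumps the clock directly to the next scheduled interrupt.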
This configuration has a small problem with realism. Simulated machine time is only advanced after executing a user-program instruction and each time interrupts are enabled. Further, context switches only occur when time is advanced. Due to these restrictions, some incorrect code will never fail. It is possible to make unlocked accesses to shared data that will never fail so long as no synchronization is done within the code. Further, it's possible to write non-reentrant exception handlers without Nachos ever having a problem. A diligent teacher will hopefully check for such shenanigans; mine didn't.
As with all modern day OSes, Nachos utilizes dual mode operation. Some code runs in kernel mode wherein all of the machine's assets can be directly accessed. Some code runs in user mode which can only see its personal portion of memory and can only access hardware by calling functions in the kernel through syscalls.
This division is greater than normal because user programs are actually interpreted by Nachos and presented with an emulated MIPS machine. Memory of the machine is a large array of bytes named mainMemory (Machine::mainMemory in C++; Processor.mainMemory in Java). When an exception or syscall occurs, it is handled by kernel code running natively.
To start executing a user program, the path to an executable file is passed with the '-x' command line option. The file is loaded into memory, and then a never-ending loop starts. This loop loads the next instruction, executes it, advances the simulated clock a small amount, and then repeats. This is done in Processor.run() in 5.0j and Machine::Run() in 3.4/4.0. Aside from the fact that a thread running a user program is always running in this simple loop, it is no different from any other kernel thread.
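The shape of that loop can be sketched with a toy machine. This is not Nachos' real decoder or instruction set (the real Machine::Run interprets actual MIPS); the two-opcode "ISA" and all names here are invented for illustration.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

enum Op : uint8_t { OP_HALT = 0, OP_ADDI = 1 };  // tiny illustrative ISA

struct ToyMachine {
    std::vector<uint8_t> mainMemory;  // memory is just an array of bytes
    long tick = 0;
    int pc = 0;
    int acc = 0;

    // The never-ending loop: fetch, execute, advance the clock, repeat.
    void Run() {
        for (;;) {
            uint8_t op = mainMemory[pc++];               // fetch
            if (op == OP_HALT) return;                   // halt leaves the loop
            if (op == OP_ADDI) acc += mainMemory[pc++];  // execute
            ++tick;  // the point where Nachos advances simulated time
                     // and pending interrupts get a chance to fire
        }
    }
};
```

The key observation is that a thread running a user program spends its whole life inside this loop, yet from the kernel's point of view it is an ordinary thread like any other.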
In versions 3.4 and prior, there were no self test functions. This was a little unfortunate for my group. Luckily, Nachos 4.0 provides a SelfTest member function for all of its classes. Nachos 5.0j continues this tradition, but makes sure it is special by naming these functions selfTest. Such functions can be invaluable in regression testing, as any software engineer could tell you.
To ease debugging, Nachos always behaves the same if called with the same arguments. Different context-switching behavior can be generated with the '-rs' flag. When Nachos is started with the '-rs' flag, an interrupt is scheduled every few ticks (an interval chosen randomly between 1 and 200 each time) to force a context switch. The random number generator is seeded with the number following the flag, so this variation of behavior is repeatable.
Without this flag, context switches occur only when executing user programs or when a thread explicitly yields. As such, race conditions may never be detected without the use of '-rs'. With this flag, whenever interrupts are enabled or disabled, which happens whenever a synchronization occurs, a switch may occur. This feature can uncover lurking synchronization problems when utilized in automated testing. We always used the following shell script to test for deadlocks and other weird behavior:
#!/bin/ksh
if [ -z "$1" ]
then
runTimes=10;
else
runTimes=$1;
shift;
args=$*;
fi
echo "Running nachos $runTimes times.\n\n";
while [ $runTimes -gt 0 ]
do
myRand=$RANDOM;
echo "nachos -rs $myRand $args";
runTimes=`expr $runTimes - 1`;
nachos -rs $myRand $args;
done
Nachos 3.4/4.0 has its own simplified executable file format: Nachos Object File Format (NOFF). Obviously, compilers don't output this format, so one uses gcc to (cross-)compile a user program to a MIPS COFF executable. Then the coff2noff program, a simple C program whose source comes with Nachos, converts the COFF file to NOFF. Unfortunately, NOFF is really limited. It contains exactly three sections, named .code, .initData, and .uninitData, and its entry point is always at address 0. Therefore, a COFF file with any sections beyond .(r)data, .(s)bss, and .text cannot be converted to NOFF.
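If memory serves, the header defined in Nachos' noff.h looks roughly like the following (treat the field details as approximate). The fixed three-segment layout is exactly why a COFF file with extra sections can't be converted:

```cpp
#include <cassert>

#define NOFFMAGIC 0xbadfad  // magic number marking a Nachos object file

// Descriptor for one of the three fixed segments.
typedef struct segment {
    int virtualAddr;  // location of segment in the virtual address space
    int inFileAddr;   // location of segment's bytes within this file
    int size;         // size of the segment in bytes
} Segment;

// The whole NOFF header: a magic number and exactly three segments.
typedef struct noffHeader {
    int noffMagic;       // should be NOFFMAGIC
    Segment code;        // executable code
    Segment initData;    // initialized data
    Segment uninitData;  // uninitialized data; only size matters,
                         // since there are no bytes to store
} NoffHeader;
```

Loading a NOFF file is then just a matter of reading this header, checking the magic number, and copying each segment from its inFileAddr into mainMemory at its virtualAddr.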
Nachos 5.0j avoids NOFF entirely. It directly uses the MIPS COFF executables.
Nachos 3.4/4.0 provides a built-in debugger for debugging user programs. It can single-step through a program, displaying the MIPS assembly of the next instruction and the current state of all registers. It can also skip forward until a given machine time. Unfortunately, this debugger is completely ignorant of symbols and any high-level programming concepts, and it does not support breakpoints. My group found it mildly useful for solving a problem where we corrupted a register during syscalls. Otherwise, it is not terribly useful, especially because user programs themselves are severely limited. See below under Issues: Toy Soldiers.
Nachos 5.0j did not see fit to include a user program debugger at all despite its greater ability to handle complex user programs.
Nachos is structured to allow a series of largely independent projects which can be completed in somewhat arbitrary order, once the first is completed. However, the actual recommended projects vary a little with each version of Nachos. Further, Nachos is open-ended enough that professors may design projects of almost any type. Therefore, consider the below to be a sampling of possible projects.
This project is necessary for all future projects since they all require synchronization to some degree.
First, implement locks, semaphores, and condition variables. Second, use these mechanisms to solve concurrency problems possibly including the consumer producer problem and the elevator problem.
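To give a flavor of the primitives involved: Nachos has students build these atop its own Thread and Scheduler classes, but the same semantics can be sketched with host threads and the C++ standard library instead. Here is a counting semaphore with the classic P/V interface (the names match Nachos' synch.h; the internals here are my own).

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

class Semaphore {
public:
    explicit Semaphore(int initial) : value(initial) {}
    void P() {  // wait: block until value > 0, then decrement
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return value > 0; });
        --value;
    }
    void V() {  // signal: increment and wake one waiter
        std::lock_guard<std::mutex> lk(m);
        ++value;
        cv.notify_one();
    }
private:
    std::mutex m;
    std::condition_variable cv;
    int value;
};
```

A semaphore initialized to 1 acts as a lock; one initialized to 0 lets one thread block until another signals it, which is the building block for solving the producer-consumer problem.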
A stub filesystem is included with Nachos which maps file functions to the underlying OS file functions. It does not provide syscalls dealing with directories, but this is more of an issue with the next project. This project is not necessary to do any of the others.
One is provided with a simple filesystem which utilizes a single file on the host system as its "disk". First, make access to the disk threadsafe. Next allow for larger files, allow for files which can grow after being created, improve disk performance, and add a directory structure.
Nachos can already load a single program and run it. First, add the ability to load multiple user programs at once and implement handlers for all exceptions and most defined syscalls. Add a process scheduling policy. Secondly, write useful user programs. Thirdly, extend the exec syscall to allow passing of command line arguments; add the fork and yield syscalls.
This assignment requires Multiprogramming to be completed first.
First, manage the TLB appropriately and add virtual memory. Second, evaluate performance of your system and write some user programs which test it.
A simple 'network' exists which allows multiple instances of Nachos running on one machine to pass messages between each other and a flag exists which lets one specify what percent of packets should be dropped. First, implement reliable message passing with no size limitations over the unreliable network. Secondly, implement a distributed application over this network.
Nachos 4.0 notably drops the file system project from its recommended list, even though support for the project remains. On the other hand, the authors of Nachos 5.0j did not port the filesystem project. The projects are largely similar to those for Nachos 3.4 and I will reference those in the below descriptions.
First, re-implement condition variables and use them to solve the consumer producer problem. Secondly, implement Thread::Join(), an alarm clock class, and a preemptive priority scheduler. Finally, use the finished pieces to solve the elevator problem.
Identical to the assignment in 3.4, but requires using either multilevel feedback or lottery scheduling.
First, do the same as in 3.4. Secondly, implement memory-mapped files like mmap supplies. Thirdly, implement an sbrk-like system call which allows increasing the size of the heap and of mmapped files.
First, do the same as in 3.4's Networking project. Secondly, implement an n-way chat application. Thirdly, implement distributed shared memory.
Nachos 5.0j inherits all of the same basic projects from Nachos 4.0, but naturally has variation in the specific structure.
This project is nearly identical to 4.0's version save the final portion and slight order differences. Instead of an elevator problem at its end, a problem involving Hawaiians and boats is to be solved.
Again, this project is the same as 4.0's version save two differences: User programs need not be written since a variety are already available, and multilevel feedback scheduling is not an option.
First, do as in 3.4. Secondly, implement lazy loading of user programs.
Identical to 4.0 in assignment, though the methods are different.
When working with Nachos my group repeatedly found flaws which made it much more difficult to work with than it should have been. Some might argue it is more educational to retain incidental flaws for the educational value of solving them. You may review the problems I list below and determine if this is accurate in all cases. Regardless, review of these flaws should be instructional to anyone working on an operating system or machine simulator, including Nachos.
See also: "Issues: Strings? What're those?" below which was resolved before this issue was discovered.
The greatest annoyance I had with Nachos wasn't even a design flaw or unfixable. My problem was that my group cumulatively spent hundreds of hours working to get it to resemble a somewhat realistic operating system, and, after all of that effort, it couldn't even handle a trivial clone of expr that I wrote. The problem? We had yet to implement virtual memory, so the largest program Nachos would accept was 3k! You're not misreading, that's 3072 bytes.
After spending days of blood, sweat, and tears, we had developed a massive, futuristic arena complete with chainsaws, rabid monkeys, and a sliding window network protocol for our wargames, and we were left to play with toy soldiers. This happened because, by default, Nachos 3.4 and 4.0 provide a grand total of 32 pages of memory, each 128 bytes in size. Further, 4 of these pages are used for each program's stack, and one is used for the stub that starts the executable and maps functions to syscalls. Luckily, Nachos 5j greatly expands memory available for projects prior to the implementation of virtual memory. The hard statistics are below:
Nachos user program size limitations

                3.4/4.0    5.0j
Page size:      128 B      1024 B
Pages:          32         varies per project (2: 64, 3: 16, 4: 16)
Stack pages:    4          8
TLB size:       4          4
Memory restriction problems can be fixed in 3.4/4.0 by changing the #define for NumPhysPages found in machine.h. Unfortunately, this necessitates recompiling nearly all of Nachos. It would be wonderful if this were a global const int, so that memory size could be adjusted without going through a full build cycle. Nachos 5.0j simply makes it a variable read from a configuration file, an even more refreshing solution.
Also neglected for Nachos 3.4 and 4.0 is a standard library. Simple string functions such as strlen(), strcat(), atoi() are strangely absent. Although it is a small pain in the side to write these, it is survivable. In fact, it can be seen as a learning experience. But implementing one's own standard library without dynamic link libraries only exacerbates problems with limited memory. Nachos 5.0j saw fit to provide a relatively complete standard library including an implementation of printf().
Of course, after fixing all of this, my expr still wouldn't actually add because Nachos 3.4/4.0 doesn't completely emulate the LWR instruction. Yet again, Nachos 5.0j fixes this.
A sample run of Nachos may produce the following output:
HHHHHeeeeellllllllllooooo WWWWWooooorrrrrlllllddddd!!!!!
No threads ready or runnable, and no pending interrupts.
Assuming the program completed.
Machine halting!
Ticks: total 8330, idle 0, system 8330, user 0
Disk I/O: reads 0, writes 0
Console I/O: reads 0, writes 0
Paging: faults 0
Network I/O: packets received 0, sent 0
Cleaning up...
Note how it detects that it is finished and stops, printing valuable statistics -- important statistics you often need to know when debugging. Unfortunately, Nachos sometimes predicts the end of execution prematurely. Nachos determines it is finished when a thread sleeps, there is zero or one pending interrupt, and all other threads are sleeping. This decision is made in Interrupt::CheckIfDue() found in code/machine/interrupt.cc called from Interrupt::Idle() called from Thread::Sleep().
The reason for "zero or one" interrupts being the end is because of a "clock" interrupt that is often present. See Design:Deterministic above for an explanation.
Unfortunately, the "zero or one" behavior causes problems if one runs Nachos without the -rs switch. In our case, we were solving a problem involving an elevator and passengers. The elevator would be called at a time in the future via a scheduled interrupt...er, passenger. Unfortunately, the elevator would never pick up the last passenger. The elevator process would sleep, waiting to be awoken by an interrupt, the next-event interrupt handler would see only one remaining interrupt and our kernel would terminate. Whoops.
Ultimately, this problem can be fixed, but two things need to be done. First, hardware and software interrupts must be given separate queues. Secondly, a blocked state for threads must be added which threads will enter when waiting on an interrupt. The end of execution would then occur when all threads are sleeping and the software interrupt queue is empty. That solution is overkill for this one case — we solved it by always running with the -rs switch. But it came back to bite us.
As with most issues, this problem did not carry over to the Java version of Nachos. In 5.0j, the machine is always explicitly terminated when it needs to be finished.
Normally Nachos prints out useful statistics when it finishes execution. Unfortunately, it only prints these statistics on a normal exit. If one presses Ctrl-C to terminate Nachos, it will simply print:
Cleaning up...
Then it exits. This is an issue because of the way in which Nachos detects that it has finished execution. Simulation end is detected when there is zero or one pending interrupt and no ready threads. But each simulated piece of hardware adds an interrupt. For the first project, there is just the hardware timer, so this works fine. Once a console, network, or filesystem is added, a new issue arises. Nachos never detects that it should stop. Therefore, one has to explicitly call Machine::Halt() to end the simulation. This, combined with the "Done Already?" issue, makes Nachos' detection of its finish mostly useless.
Since Nachos 5.0j doesn't try to detect the end of simulation, this is not an issue with it at all.
A small amount of statistics are kept about Nachos' operation to aid in debugging and performance comparisons. One of these statistics is totalTicks, found in the Stats class in Java and in the Statistics class in C++. This statistic is also used as the current machine time. In Nachos 3.4/4.0 it is a signed int, while it is a Long in 5.0j. On a 32-bit machine, this allows it to hold values up to around 2.15 billion. However, if a signed number is near its maximum value and something is added to it, it will overflow and become a very large negative number.
All versions of Nachos eventually lock up in programs which accept user input given enough time. The problem is that the interrupt simulator continually advances to the next interrupt. When user input is being waited for and all threads are idle, time continually advances to the next interrupt which is often the interrupt which checks for console input. Since interrupts are scheduled to occur at the time totalTicks + fromNow, once totalTicks is advanced far enough, interrupts will inevitably be scheduled in the past. Once this happens, Interrupt::Idle() in C++ or Interrupt.checkIfDue() in Java will enter an infinite loop.
The recurring console read interrupt will be run because its time to run is less than the current time. The interrupt handler will then schedule the interrupt again, to occur at some time from the current time. Due to overflow, that scheduled time will also be negative. Therefore, that interrupt handler will then be run as well, rescheduling itself at the exact same time. This will continue ad infinitum and Nachos will have to be forcefully terminated.
The situation can be improved by using unsigned long longs. As it stands, it takes approximately two minutes on a reasonably fast Pentium 4 for Nachos 3.4 to lock up in this fashion; with an unsigned long long, this time would extend to about 32,000 years. This is likely a complete enough solution, but an ultimate solution to this problem would be to make all of the tick values be BigNum objects which dynamically resize themselves to prevent overflow.
Nachos 3.4 and 4.0 utilize NOFF executable files, described above in Design:Special Executable Format. To make it easier to convert COFF to NOFF, when user programs are linked, a script is passed which tells ld to place nearly everything into the .data and .text sections. Unfortunately, some things are missed by this script: for instance, constant strings and jump tables for case statements. gcc places these symbols in a section called .rodata, which stands for read-only data. The result is that the following program cannot be used in stock Nachos:
#include "syscall.h"
int main()
{
Write("Hello World!\n", 13, ConsoleOutput);
Exit(0);
}
The solution to this problem is to add a single line in the script file, reproduced below in unified diff format.
.data . : {
*(.data)
+ *(.rodata)
CONSTRUCTORS
}
Because Nachos 5.0j uses COFF files directly, it does not have this issue.
Nachos provides a sample user program to use for testing. It sorts an array of 1024 integers using a bubble sort. This is designed to test the effectiveness of one's virtual memory system. Aside from the fact that it runs incredibly slowly, it also is incorrect in versions 3.4 and 4.0 of Nachos. The code for main() is below:
int A[1024]; /* size of physical memory; with code, we'll run out of space!*/
int
main()
{
int i, j, tmp;
/* first initialize the array, in reverse sorted order */
for (i = 0; i < 1024; i++)
A[i] = 1024 - i;
/* then sort! */
for (i = 0; i < 1023; i++)
for (j = i; j < (1023 - i); j++)
if (A[j] > A[j + 1]) { /* out of order -> need to swap ! */
tmp = A[j];
A[j] = A[j + 1];
A[j + 1] = tmp;
}
Exit(A[0]); /* and then we're done -- should be 0! */
}
The observant reader will note there are several errors in this code. One, the array is filled with values from 1024 down to 1, so if it sorted properly, the program would return 1, not 0. Two, it oversteps the bounds of the array A by comparing A[1023] with A[1024] the first time through the first for loop. Three, it doesn't sort properly because it shortens its sorting range by two elements instead of one on each pass through the first for loop. The line:

for (j = i; j < (1023 - i); j++)

should be:

for (j = 0; j < (1022 - i); j++)
Nachos 5.0j not only solves these problems in its provided sort.c, but it also verifies whether the sort was correct. Then it returns either 0 to indicate success or 1 to indicate failure.
While debugging an interactive program, I frequently find it convenient to use a file for input rather than typing. This helps ensure that tests remain consistent and saves me from much odious typing. A small side effect of doing this is that when the end of the input file is reached, an EOF is sent to the running program. One can manually send an EOF by pressing Ctrl-D, but this is normally not useful.
Unfortunately, either way of sending an EOF will cause Nachos 3.4 and 4.0 to die with an assertion failure. The problem is that select() is used to check whether input is available to be read. When select() returns a positive value, it indicates that an attempt to read input will not block. This means that there is either input available or the pipe is closed — an end of file. Nachos does not account for the possibility that an EOF was received; it attempts to read a single byte from input, finds zero bytes were read, and causes an assertion failure.
A solution to this problem for Nachos 3.4 would be to alter Console::CheckCharAvail() in console.cc as follows:
// do nothing if character is already buffered, or none to be read
if ((incoming != EOF) || !PollFile(readFileNo))
return;
// otherwise, read character and tell user about it
- Read(readFileNo, &c, sizeof(char));
+ int rBytes = ReadPartial(readFileNo, &c, sizeof(char));
+ // On EOF, return.
+ if (rBytes == 0)
+ return;
incoming = c ;
Not having worked with Nachos 4.0, I have not developed a solution for this problem for that version. Since its code is very similar to 3.4, it should require a similar fix in ConsoleInput::CallBack() in console.cc.
As with most issues, Nachos 5.0j gracefully handles this one. EOFs are silently discarded already.
MEF is still a mystery to me.
With Unity I could have an IUnityContainer injected into my module's constructor and from there I would simply register my IService with a Service type by utilizing the UnityContainer.
I couldn't find a single MEF example how to do this within MEF driven Modules. Modules don't get an equivalent to IUnityContainer injected and hence the service can't be resolved within the module.
In a way it seems MEF is used within the BootStrapper class only, while the container that registers everything remains referenced in there only. Other Modules don't have access to it and can't resolve anything.
How do I register my IService with Service type and resolve it within a module please? Any code snippets?
Many Thanks for clarification,
Houman
MEF is initialized from the MEF Bootstrapper from there you can do everything that Unity can do. Now for Resolving types...
As long as the class is attributed with [Export], or the variable is tagged with [Import], then it will be resolved. If it can't find a matching item in the container dictionary, it will explode at run time and throw errors for you to try and interpret.
[Export(typeof(IService))] // typeof(IService) is suggested, not required; but to be resolved from the container the class must have the [Export] attribute.
public class Service : IService {
    [ImportingConstructor]
    public Service() { }
    // do whatever here...
}

public interface IService {
    // stuff here if necessary.
}

// ViewModel sample code.
[Export(typeof(ISomeViewModel))]
public class SomeViewModel : NotificationObject {
    private IService service;

    [ImportingConstructor]
    public SomeViewModel(IService service) {
        this.service = service;
    }
}
Hi,
Many Thanks for your response. I was still stuck with some other issues in Prism that I had to overcome first.
Now while this makes sense, what if I have two implementations of IService (obviously, to mock it for unit testing)? Both services would be exported for the type of the interface: [Export(typeof(IService))].
But how does it know which one to pick at the constructor injection within the view model?
Many Thanks,
Houman
Houman,
I guess the simplest answer would be to use ExportMetadata, or even to go as far as strongly typing your exports.
Examples can be found here:
Morgan.
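For what it's worth, a minimal sketch of the metadata approach (illustrative only: the Flavor key, the class names, and the selection logic are my assumptions, not taken from the linked examples):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.Linq;

public interface IService { }

// Metadata view: MEF maps each ExportMetadata key to a matching read-only property.
public interface IServiceMetadata { string Flavor { get; } }

[Export(typeof(IService))]
[ExportMetadata("Flavor", "Real")]
public class Service : IService { }

[Export(typeof(IService))]
[ExportMetadata("Flavor", "Mock")]
public class MockService : IService { }

[Export]
public class SomeViewModel
{
    private readonly IService service;

    [ImportingConstructor]
    public SomeViewModel([ImportMany] IEnumerable<Lazy<IService, IServiceMetadata>> services)
    {
        // ImportMany sidesteps the ambiguous single-import problem; the
        // metadata lets us pick one implementation without instantiating the rest.
        this.service = services.First(s => s.Metadata.Flavor == "Real").Value;
    }
}
```

A test assembly could make the same choice with `Flavor == "Mock"`, so both exports can coexist in the catalog.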
Hi Morgan,
Thanks for the tip. I have looked into it, but I still don't see the light.
Looking at Shawn Wildermuth's example here:
He has an [Export] ViewModel that consumes a [Import]Model.
In his unit test class he is retrieving a new instance (non Shared) of the ViewModel and does his unit testing.
However how does MEF magically know that it has to inject the MockModel inside the ViewModel instead of the GamesModel? There are no attributes as such. Both GamesModel and MockModel implement only the IGamesModel interface. The Export happens to be [Export(typeof(IGamesModel))] on both models. No indication of distinctive attributes between the two implementations.
Thanks,
Houman
MailTrap has been renamed to Sendria. Use Sendria now! An SMTP server that makes all received mails accessible via a web interface and REST API.
Project description
MailTrap has been renamed to Sendria.
Please use Sendria now, MailTrap is abandoned.
Sendria
Sendria (formerly MailTrap) is an SMTP server designed to run in your dev/test environment. It catches any email you or your application is sending and displays it in a web interface instead of sending it to the real world.
It helps you prevent sending any dev/test emails to real people, no matter what address you provide.
Just point your app/email client to smtp://127.0.0.1:1025 and look at your emails on 127.0.0.1:1080.
Sendria is built on shoulders of:
- MailCatcher - the original idea for this tool comes from MailCatcher, by Samuel Cochran.
- MailDump - the base source code of Sendria (pre-1.0.0 versions), by Adrian Mönnich.
If you like this tool, just say thanks.
Current stable version
1.0.0
Features
- Catch all emails and store it for display.
- Full support for multipart messages.
- View HTML and plain text parts of messages (if given part exists).
- View source of email.
- Lists attachments and allows separate downloading of parts.
- Download original email to view in your native mail client(s).
- Mail appears instantly if your browser supports WebSockets.
- Optionally, send webhook on every received message.
- Runs as a daemon in the background, optionally in foreground.
- Keyboard navigation between messages.
- Optionally password protected access to web interface.
- Optionally password protected access to SMTP (SMTP AUTH).
- It's all Python!
Installation
Sendria should work on any POSIX platform where Python is available, meaning Linux, macOS/OSX, etc.
The simplest way to install is to use Python's built-in package system:
python3 -m pip install sendria
You can also use pipx if you don't want to mess with system packages, and install Sendria in a virtual environment:
pipx install sendria
Voila!
Python version
Sendria is tested against Python 3.7+. Older Python versions may work, or may not.
If you want to run this software on Python 2.6+, just use MailDump.
How to use
After installing Sendria, just run the command:
sendria --db mails.sqlite
Now send emails through smtp://127.0.0.1:1025, e.g.:
echo 'From: Sendria <sendria@example.com>\n'\ 'To: You <you@exampl.com>\n'\ 'Subject: Welcome!\n\n'\ 'Welcome to Sendria!' | \ curl smtp://localhost:1025 --mail-from sendria@example.com \ --mail-rcpt you@example.com --upload-file -
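The same test mail can be sent from Python's standard library. A sketch using the illustrative addresses above and Sendria's default SMTP port:

```python
import smtplib
from email.message import EmailMessage


def build_message():
    """Assemble the same test message as the curl example."""
    msg = EmailMessage()
    msg["From"] = "Sendria <sendria@example.com>"
    msg["To"] = "You <you@example.com>"
    msg["Subject"] = "Welcome!"
    msg.set_content("Welcome to Sendria!")
    return msg


def send_via_sendria(msg, host="127.0.0.1", port=1025):
    """Hand the message to Sendria; nothing reaches the real world."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)
```

`send_via_sendria(build_message())` requires a running Sendria instance; building the message does not.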
And finally look at the Sendria GUI on 127.0.0.1:1080.
If you want more details, e.g. how to protect access to the GUI, run:

sendria --help
API
Sendria offers RESTful API you can use to fetch list of messages or particular message, ie. for testing purposes.
You can use the excellent httpie tool:
% http localhost:1080/api/messages/ HTTP/1.1 200 OK Content-Length: 620 Content-Type: application/json; charset=utf-8 Date: Wed, 22 Jul 2020 20:04:46 GMT Server: Sendria/1.0.0 () { "code": "OK", "data": [ { "created_at": "2020-07-22T20:04:41", "id": 1, "peer": "127.0.0.1:59872", "recipients_envelope": [ "you@example.com" ], "recipients_message_bcc": [], "recipients_message_cc": [], "recipients_message_to": [ "You <you@exampl.com>" ], "sender_envelope": "sendria@example.com", "sender_message": "Sendria <sendria@example.com>", "size": 191, "source": "From: Sendria <sendria@example.com>\nTo: You <you@exampl.com>\nSubject: Welcome!\nX-Peer: ('127.0.0.1', 59872)\nX-MailFrom: sendria@example.com\nX-RcptTo: you@example.com\n\nWelcome to Sendria!\n", "subject": "Welcome!", "type": "text/plain" } ] }
The available endpoints are:

- GET /api/messages/ - fetch the list of all emails
- DELETE /api/messages/ - delete all emails
- GET /api/messages/{message_id}.json - fetch email metadata
- GET /api/messages/{message_id}.plain - fetch the plain-text part of an email
- GET /api/messages/{message_id}.html - fetch the HTML part of an email
- GET /api/messages/{message_id}.source - fetch the source of an email
- GET /api/messages/{message_id}.eml - download the whole email as an EML file
- GET /api/messages/{message_id}/parts/{cid} - download a particular attachment
- DELETE /api/messages/{message_id} - delete a single email
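For scripted checks, the list endpoint can also be driven from Python's standard library. A minimal sketch, assuming Sendria's default web port; the JSON shape matches the sample response above:

```python
import json
import urllib.request

BASE = "http://localhost:1080/api"  # Sendria's default web interface port


def message_subjects(payload):
    """Extract subject lines from a /api/messages/ response body."""
    return [m["subject"] for m in payload.get("data", [])]


def fetch_messages():
    """GET the full message list as parsed JSON (requires a running Sendria)."""
    with urllib.request.urlopen(BASE + "/messages/") as resp:
        return json.load(resp)
```

`message_subjects()` is pure, so it is easy to unit test; `fetch_messages()` needs a live instance.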
Docker
There is also available Docker image of Sendria. If you want to try, just run:
docker run -p 1025:1025 -p 1080:1080 msztolcman/sendria
I'm a backend developer, not a frontend guy nor a designer... If you are, and want to help, just mail me! I think the GUI should be redesigned, or at least a few minor issues could be solved. Also, the project requires some logo and/or icon. Again, do not hesitate to mail me if you want to and can help :)
Configure Rails
For your Rails application, just set in your environments/development.rb:
Configure Django
To configure Django to work with Sendria, add the following to your project's settings.py:
if DEBUG: EMAIL_HOST = '127.0.0.1' EMAIL_HOST_USER = '' EMAIL_HOST_PASSWORD = '' EMAIL_PORT = 1025 EMAIL_USE_TLS = False
Behind nginx
If you want to hide Sendria behind nginx (e.g. to terminate SSL), then you can use the example config (see in addons).
Supervisord
To start Sendria automatically with Supervisor, there is an example config file for this purpose in addons.
Authors
- Marcin Sztolcman (marcin@urzenia.net)
- Adrian Mönnich (author of MailDump, the base of Sendria)
If you like or dislike this software, please do not hesitate to tell me via email (marcin@urzenia.net).
If you find bug or have an idea to enhance this tool, please use GitHub's issues.
ChangeLog
v1.0.0
- complete rewrite of backend part. Sendria is using asyncio and aio-libs now:
- using asynchronous version of libraries drastically improved performance
- Sendria now can send a webhook about every received message
- show in GUI information about envelope sender and recipients
- all API requests have their own namespace now: /api
- allow to replace name of application or url in template
- block truncating all messages from GUI (on demand)
- fixed issues with WebSockets; the mails list should now refresh and reconnect if disconnected
- fixed issues with autobuilding assets
- many cleanups and reformatting code
- addons for nginx and supervisor
Backward incompatible changes:
- all API requests are now prefixed with /api (look at the API section)
- the --htpasswd CLI param is renamed to --http-auth
v0.1.6
- fixed issue with old call to gevent.signal
- minimum gevent version set to 1.5.0
v0.1.4
- bumped dependencies - security issues (dependabot)
v0.1.3
- fixed layout issues (radoslawhryciow)
v0.1.2
- fixed encoding issues
v0.1.0
- better support for macOS/OSX
- links now open in a new tab/window (added 'target="blank"')
- show a message if there are no assets generated, plus info on how to generate them
- added debugs for SMTP when in debug mode
- added support for Pipenv
- HTML tab is default now when looking at particular message
- converted to support Python 3.6+, drop support for lower Python versions
- added SMTP auth support (look at pull request 28 )
- copy from MailDump v0.5.6
On Sat, Mar 07, 2009 at 03:02:22AM +0100, Jukka Zitting wrote:
> > Please give me two to three months to make the next dev release of KinoSearch.
>
> What will happen then?
The next dev release of KS will present real world implementations of many
designs that have been discussed in Lucene and Lucy forums over the last year.
Some might see that as "progress". ;)
> When and how is Lucy development going to start?
It *is* actively progressing. It's just that neither you nor Grant are
willing to acknowledge that any of the design work I just did (in happy
collaboration with Java Lucene devs) applies to Lucy.
Please go read <> and see if
you can still assert after you read it that no work is being done on Lucy. I
warn you, it is a long thread. :)
> You mention that in many cases other forums have been better for
> discussing related design issues. What's the benefit of keeping the
> Lucy project alive if there's next to no code or even discussion
> there?
The proposal remains sound, and there is a deep hunger out there for a solid C
IR library similar to Lucene. The KS-then-Lucy progression is the fastest and
best way to get there.
Things would have gone more smoothly and quickly if Dave Balmain had been able
to contribute more, but even with that setback, we will still reach the
finish.
> I'm sure that everyone here would love to see Lucy become more active.
> How could we help make that happen?
Help Mike McCandless and Jason Rutherglen finish up their work on the designs
we've all been discussing. This is a multi-way collaboration, and Lucy
benefits when I'm able to study alternatate implementations, just as Java
Lucene benefits from being able to see what other projects have done.
Cross-pollination has worked very well in the past. The indexing speedups a
while back started with McCandless riffing on the KinoSearch merge model. (He
followed that up with plenty of interesting innovating on his own.)
> As a wild idea: would there be interest in bringing the KinoSearch
> codebase over to Apache through incubation?
My main reservation is that I really want to see KS and Lucy play out
sequentially, because I want Lucy to benefit from having seen how the features
now in KS work in the real world. There's no sane versioning under Perl/CPAN.
You can't move from Lucy version 1 to Lucy version 2 without screwing over
your users, and therefore I don't want to merge the two projects into one
namespace. If we did that, the unified project has to stay as an "alpha" for
that much longer, and it never really gets the benefit of seeing how a
real-world release goes.
If, then, we're proceeding sequentially as I recommend, I don't see how
putting KS through incubation does anything but slow us down. All we're doing
is adding extra hoops to jump through. It might be politically expedient, but
the engineer in me rebels at the waste, as does the loyal employee.
From my perspective, what we have is an optics problem. I'm working full
time, and I've been plenty active in the Lucene forums, but you and Grant only
see a big fat zero. :(
Marvin Humphrey | http://mail-archives.apache.org/mod_mbox/lucene-general/200903.mbox/%3C20090307085402.GB11045@rectangular.com%3E | CC-MAIN-2016-50 | refinedweb | 562 | 70.73 |
Help and steps for configuring your tool account are provided below. The configuration files themselves are stored in your tool account in the $HOME/.pywikibot directory, or another directory, where they can be used via the -dir option (all of this is described in more detail in the instructions).
If you are a developer and/or would like to control when the code is updated, or if you would like to use the 'compat' branch instead of 'core' (not all the Pywikibot scripts have been ported to 'core'), edit your $HOME/.bash_profile to include the following line. The path should be on one line, though it may appear to be on multiple lines depending on your screen width. When you save the .bash_profile file, your settings will be updated for all future shell sessions:

export PYTHONPATH=/data/project/shared/pywikibot/stable:/data/project/shared/pywikibot/stable/scripts

You can check the installation by running version.py directly:

$ python3 /data/project/shared/pywikibot/stable/scripts/version.py
Pywikibot: [https] r-pywikibot-core.git (df69134, g1, 2020/03/30, 11:17:54, OUTDATED)
Release version: 3.1.dev0
requests version: 2.12.4
cacerts: /etc/ssl/certs/ca-certificates.crt
certificate test: ok
Python: 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516]
Note that you do not run scripts using pwb.py, but run scripts directly, e.g.,
python3 /data/project/shared/pywikibot/stable/scripts/version.py. Setting PYTHONPATH means that you no longer need the pwb.py helper script to make, say,
import pywikibot work.
If you need to use multiple user-config.py files, you can do so by adding -dir:<path where you want your user-config.py> to every python command. To use the local directory, use -dir:. (colon dot).
For more information about Pywikibot, please see the Pywikibot documentation. The pywikibot mailing list (pywikibot@lists.wikimedia.org) and the IRC channel are good places to ask questions about pywikibot-core.
Setup a Python virtual environment for library dependencies
When using a local pywikibot install, we recommend that you work inside a Python virtual environment (named pwb below) and upgrade pip, setuptools (pinned below 50.0), and wheel inside it before installing:

(pwb) $ pip install --upgrade pip 'setuptools<50.0' wheel
...
Successfully installed pip-20.2.2 setuptools-49.6.0 wheel-0.35.1
(pwb) $ cd $HOME/pywikibot-core
(pwb) $ python3 setup.py develop
...
Finished processing dependencies for pywikibot==4.2.0

Note: the setuptools<50 pin above is intentional.
Setup job submission
After installing, you can run your bot directly via a shell command, though this is highly discouraged. You should use the grid to run jobs instead.
In order to setup the submission of the jobs you want to execute and use the grid engine you should first read Help:Toolforge/Grid.
To run a bot using the grid you typically write a small wrapper script (being in the pywikibot directory is not needed). The following example script (versiontest.sh) is used to run version.py:

$ cat versiontest.sh
#!/bin/bash
python3 /data/project/shared/pywikibot/stable/scripts/version.py

Submit it with jsub. The grid writes the job's output and errors to files named after the job (versiontest.out and versiontest.err in this example):

$ cat ~/versiontest.out
pywikibot [https] r/pywikibot/compat (r10211, 8fe6bdc, 2013/08/18, 14:00:57, ok)
Python 2.7.3 (default, Aug 1 2012, 05:14:39) [GCC 4.6.3]
config-settings:
use_api = True
use_api_login = True
unicode test: ok
Example
An infinitely running job such as an irc-bot can be started like this:
$ jsub -once -continuous -l h_vmem=256M -N script_wui python3 $HOME/pywikibot-core/pwb.py script_wui.py -log
or shorter
$ jstart -l h_vmem=256M -N script_wui python3 $HOME/pywikibot-core/pwb.py script_wui.py -log

To run the job on a schedule, add a crontab entry such as:

# Run script_wui.py at 00:17 UTC each day
17 0 * * * jstart -l h_vmem=512M -N script_wui python3 $HOME/pywikibot-core/pwb.py script_wui.py -log

When your bot lives in a virtual environment, the normal invocation looks like:

$ python3 foo/bar/pwb.py SCRIPTNAME -page:"SOMEPAGE"
The venv does not get automatically activated in Grid job submissions. Two common workarounds are having wrapping shell scripts that activates the venv, or use absolute paths to the binaries within:
$ jstart -N jobname venv/bin/python3 foo/bar/pwb.py SCRIPTNAME -page:"SOMEPAGE"
The
makeServicePlugin method creates a vuex plugin which connects a Feathers service to the Vuex store. Once you create a plugin, you must register it in the Vuex store's
plugins section.
See the setup documentation to learn the basics of setting up a Service Plugin.
# Configuration
The following options are supported on
makeServicePlugin.
{
  idField: '_id', // The field in each record that will contain the id
  nameStyle: 'path', // Use the full service path as the Vuex module name, instead of just the last section
  namespace: 'custom-namespace', // Customize the Vuex module name. Overrides nameStyle.
  debug: true, // Enable some logging for debugging
  servicePath: '', // Not all Feathers service plugins expose the service path, so it can be manually specified when missing.
  instanceDefaults: () => ({}), // Override this method to provide default data for new instances. If using Model classes, specify this as a static class property.
  setupInstance: instance => instance, // Override this method to setup data types or related data on an instance. If using Model classes, specify this as a static class property.
  autoRemove: true, // Automatically remove records missing from responses (only use with feathers-rest)
  enableEvents: false, // Turn off socket event listeners. It's true by default
  handleEvents: {
    created: (item, { model, models }) => options.enableEvents, // handle `created` events, return true to add to the store
    patched: (item, { model, models }) => options.enableEvents, // handle `patched` events, return true to update in the store
    updated: (item, { model, models }) => options.enableEvents, // handle `updated` events, return true to update in the store
    removed: (item, { model, models }) => options.enableEvents // handle `removed` events, return true to remove from the store
  },
  addOnUpsert: true, // Add new records pushed by 'updated/patched' socketio events into store, instead of discarding them. It's false by default
  replaceItems: true, // If true, updates & patches replace the record in the store. Default is false, which merges in changes
  skipRequestIfExists: true, // For get action, if the record already exists in store, skip the remote request. It's false by default
  modelName: 'OldTask' // Default modelName would have been 'Task'
}
# Realtime by Default
Service plugins automatically listen to all socket messages received by the Feathers Client. This can be disabled by setting
enableEvents: false in the options, as shown above.
# New in Feathers-Vuex 2.0
Feathers-Vuex 2.0 includes a few breaking changes to the service plugin. Some of these changes are being made to prepare for future compatibility beyond FeathersJS
- The service method is now called makeServicePlugin
- The Feathers Client service is no longer created, internally, so a Feathers service object must be provided instead of just the path string.
- A Model class is now required. The instanceDefaults API has been moved into the Model class. You can find a basic example of a minimal Model class in the Data Modeling docs.
# The FeathersClient Service
Once the service plugin has been registered with Vuex, the FeathersClient Service will have a new
service.FeathersVuexModel property. This provides access to the service's Model class.
import { models } from 'feathers-vuex'

feathersClient.service('todos').FeathersVuexModel === models.api.Todo // true
# Service State
Each service comes loaded with the following default state:
{
  ids: [],
  keyedById: {}, // A hash map, keyed by id of each item
  idField: 'id',
  servicePath: 'v1/todos', // The full service path
  autoRemove: false, // Indicates that this service will not automatically remove results missing from subsequent requests.
  replaceItems: false, // When set to true, updates and patches will replace the record in the store instead of merging changes
  paginate: false, // Indicates if pagination is enabled on the Feathers service.
  paramsForServer: [], // Custom query operators that are ignored in the find getter, but will pass through to the server.
  whitelist: [], // Custom query operators that will be allowed in the find getter.
  isFindPending: false,
  isGetPending: false,
  isCreatePending: false,
  isUpdatePending: false,
  isPatchPending: false,
  isRemovePending: false,
  errorOnFind: undefined,
  errorOnGet: undefined,
  errorOnCreate: undefined,
  errorOnUpdate: undefined,
  errorOnPatch: undefined,
  errorOnRemove: undefined
}
The following attributes are available in each service module's state:
ids {Array}- an array of plain ids representing the ids that belong to each object in the keyedById map.
keyedById {Object}- a hash map keyed by the id of each item.
servicePath {String}- the full service path, even if you alias the namespace to something else.
modelName {String}- the key in the $FeathersVuex plugin where the model will be found.
autoRemove {Boolean}- indicates that this service will not automatically remove results missing from subsequent requests. Only use with feathers-rest. Default is false.
replaceItems {Boolean}- When set to true, updates and patches will replace the record in the store instead of merging changes. Default is false
idField {String}- the name of the field that holds each item's id. Default:
'id'
paginate {Boolean}- Indicates if the service has pagination turned on.
The following state attributes allow you to bind to the pending state of requests:
isFindPending {Boolean}- true if there's a pending find request. false if not.
isGetPending {Boolean}- true if there's a pending get request. false if not.
isCreatePending {Boolean}- true if there's a pending create request. false if not.
isUpdatePending {Boolean}- true if there's a pending update request. false if not.
isPatchPending {Boolean}- true if there's a pending patch request. false if not.
isRemovePending {Boolean}- true if there's a pending remove request. false if not.
The following state attribute will be populated with any request error, serialized as a plain object:
errorOnFind {Error}
errorOnGet {Error}
errorOnCreate {Error}
errorOnUpdate {Error}
errorOnPatch {Error}
errorOnRemove {Error}
# Service Getters
Service modules include the following getters:
list {Array}- an array of items. The array form of keyedById. Read only.
find(params) {Function}- a helper function that allows you to use the Feathers Adapter Common API and Query API to pull data from the store. This allows you to treat the store just like a local Feathers database adapter (but without hooks).
params {Object}- an object with a query object and an optional paginate boolean property. The query is in the FeathersJS query format. You can set params.paginate to false to disable pagination for a single request.
get(id[, params]) {Function}- a function that allows you to query the store for a single item, by id. It works the same way as get requests in Feathers database adapters.

id {Number|String}- the id of the data to be retrieved by id from the store.
params {Object}- an object containing a Feathers query object.
# Service Mutations
The following mutations are included in each service module.
Note: you would typically not call these directly, but instead with
store.commit('removeItem', 'itemId'). Using vuex's mapMutations on a Vue component can simplify that to
this.removeItem('itemId')
#
addItem(state, item)
Adds a single item to the
keyedById map.
item {Object}- The item to be added to the store.
#
addItems(state, items)
Adds an array of items to the
keyedById map.
items {Array}- the items to be added to the store.
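For illustration, the bookkeeping these mutations perform on ids and keyedById can be sketched in plain JavaScript. This is a hypothetical stand-in, not the library's actual implementation:

```javascript
// Sketch of addItems: keep `ids` and `keyedById` in sync.
// `state.idField` mirrors the service option of the same name.
function addItems (state, items) {
  for (const item of items) {
    const id = item[state.idField]
    if (!(id in state.keyedById)) {
      state.ids.push(id) // new record: remember its id
    }
    state.keyedById[id] = item // add or overwrite the record
  }
}

const state = { idField: 'id', ids: [], keyedById: {} }
addItems(state, [{ id: 1, text: 'a' }, { id: 2, text: 'b' }])
addItems(state, [{ id: 2, text: 'b (edited)' }]) // updates, no duplicate id
```

Calling it a second time with an existing id updates the stored record without duplicating the id, which is the same invariant addItem and updateItem rely on.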
#
updateItem(state, item)
Updates an item in the store to match the passed in
item.
item {Object}- the item, including id, to replace the currently-stored item.
#
updateItems(state, items)
Updates multiple items in the store to match the passed in array of items.
items {Array}- An array of items.
#
removeItem(state, item)
Removes a single item. item can be either the record itself or its id:
item {Number|String|Object}- The item or id of the item to be deleted.
#
removeItems(state, items)
Removes the passed in items or ids from the store.
items {Array}- An array of ids or of objects with ids that will be removed from the data store.
#
clearList(state)
Removed in 2.0
Clears the
list, excepting the
current item.
#
clearAll(state)
Clears all data from
ids,
keyedById, and
currentId
# Mutations for Managing Pending State
The following mutations are called automatically by the service actions, and will rarely, if ever, need to be used manually.
Before Feathers-Vuex 2.0, these were the available mutations:
setFindPending(state)- sets isFindPending to true
unsetFindPending(state)- sets isFindPending to false
setGetPending(state)- sets isGetPending to true
unsetGetPending(state)- sets isGetPending to false
setCreatePending(state)- sets isCreatePending to true
unsetCreatePending(state)- sets isCreatePending to false
setUpdatePending(state)- sets isUpdatePending to true
unsetUpdatePending(state)- sets isUpdatePending to false
setPatchPending(state)- sets isPatchPending to true
unsetPatchPending(state)- sets isPatchPending to false
setRemovePending(state)- sets isRemovePending to true
unsetRemovePending(state)- sets isRemovePending to false
In Feathers-Vuex 2.0, these have changed to only two mutations:
setPending(state, method)- sets the is${method}Pending attribute to true
unsetPending(state, method)- sets the is${method}Pending attribute to false
# Mutations for Managing Errors
The following mutations are called automatically by the service actions, and will rarely need to be used manually.
Before Feathers-Vuex 2.0, these were the available mutations:
setFindError(state, error)
clearFindError(state)
setGetError(state, error)
clearGetError(state)
setCreateError(state, error)
clearCreateError(state)
setUpdateError(state, error)
clearUpdateError(state)
setPatchError(state, error)
clearPatchError(state)
setRemoveError(state, error)
clearRemoveError(state)
In Feathers-Vuex 2.0, these have changed to only two mutations:
setError(state, { method, error })- sets the errorOn${method} attribute to the error
clearError(state, method)- sets the errorOn${method} attribute to
null
# Service Actions
An action is included for each of the Feathers service interface methods. These actions will affect changes in both the Feathers API server and the Vuex store.
All of the Feathers Service Methods are supported. Because Vuex only supports providing a single argument to actions, there is a slight change in syntax that works well. If you need to pass multiple arguments to a service method, pass an array to the action with the order of the array elements matching the order of the arguments. See each method for examples.
Note: If you use the Feathers service methods directly, the store will not change. Only the actions will cause store changes.
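Since Vuex delivers a single payload, an action that accepts the array form has to unpack it itself. A hypothetical helper (the name and shape are illustrative, not Feathers-Vuex internals) shows the convention:

```javascript
// Unpack the [id, data, params] convention used by update/patch payloads.
// A bare id (number or string) is treated like [id].
function unpackPayload (payload) {
  const [id, data = null, params = {}] = Array.isArray(payload)
    ? payload
    : [payload]
  return { id, data, params }
}

unpackPayload([1, { completed: true }, {}])
// → { id: 1, data: { completed: true }, params: {} }
unpackPayload(5)
// → { id: 5, data: null, params: {} }
```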
#
find(params)
Query an array of records from the server & add to the Vuex store.
params {Object}- An object containing a query object and an optional paginate boolean. You can set params.paginate to false to disable pagination for a single request.
let params = { query: { completed: true } }
store.dispatch('todos/find', params)
See the section about pagination, below, for more information that is applicable to the
find action. Make sure your returned records have a unique field that matches the
idField option for the service plugin.
#
afterFind(response)
The
afterFind action is called by the
find action after a successful response is added to the store. It is called with the current response. By default, it is a no-op (it literally does nothing), and is just a placeholder for you to use when necessary. See the sections on customizing the default store and Handling custom server responses for example usage.
#
get(id) or
get([id, params])
Query a single record from the server & add to Vuex store
id {Number|String}- the id of the record being requested from the API server.
params {Object}- An object containing a query object.
store.dispatch('todos/get', 1)

// Use an array to pass params
let params = {}
store.dispatch('todos/get', [1, params])
Make sure your returned records have a unique field that matches the
idField option for the service plugin.
#
create(data|ParamArray)
Create one or multiple records. Note that the method is overloaded to accept two types of arguments. If you want a consistent interface for creating single or multiple records, use the array syntax, described below. Creating multiple records requires using the
paramArray syntax.
data {Object|ParamArray}- if an object is provided, a single record will be created.
let newTodo = { description: 'write good tests' }
store.dispatch('todos/create', newTodo)
data {ParamArray}- if an array is provided, it is assumed to have this structure:
ParamArray {Array}- array containing the two parameters that Feathers' service.create method accepts.

data {Object|Array}- the data to create. Providing an object creates a single record. Providing an array of objects creates multiple records.
params {Object}- optional - an object containing a query object. Can be useful in rare situations.
Make sure your returned records have a unique field that matches the
idField option for the service plugin.
#
update(paramArray)
Update (overwrite) a record.
paramArray {Array}- array containing the three parameters update accepts.
id {Number|String}- the id of the existing record being requested from the API server.
data {Object}- the data that will overwrite the existing record
params {Object}- An object containing a query object.
let data = { id: 5, description: 'write your tests', completed: true }
let params = {}

// Overwrite item 1 with the above data (FYI: Most databases won't let you change the id.)
store.dispatch('todos/update', [1, data, params])
Alternatively in a Vue component
import { mapActions } from 'vuex'

export default {
  methods: {
    ...mapActions('todos', ['update']),
    addTodo () {
      let data = { id: 5, description: 'write your tests', completed: true }
      this.update([1, data, {}])
    }
  }
}
Make sure your returned records have a unique field that matches the
idField option for the service plugin.
#
patch(paramArray)
Patch (merge in changes) one or more records
paramArray {Array}- array containing the three parameters patch takes.
id {Number|String}- the id of the existing record being requested from the API server.
data {Object}- the data that will be merged into the existing record
params {Object}- An object containing a query object.
let data = { description: 'write your tests', completed: true }
let params = {}
store.dispatch('todos/patch', [1, data, params])
Make sure your returned records have a unique field that matches the
idField option for the service plugin.
#
remove(id)
Remove/delete the record with the given id.

id {Number|String}- the id of the existing record being requested from the API server.
store.dispatch('todos/remove', 1)
Make sure your returned records have a unique field that matches the
idField option for the service plugin.
# Service Events
By default, the service plugin listens to all of the FeathersJS events:
createdevents will add new record to the store.
patchedevents will add (if new) or update (if present) the record in the store.
updatedevents will add (if new) or update (if present) the record in the store.
removedevents will remove the record from the store, if present.
This behavior can be turned off completely by passing
enableEvents: false in either the global Feathers-Vuex options or in the service plugin options. If you configure this at the global level, the service plugin level will override it. For example, if you turn off events at the global level, you can enable them for a specific service by setting
enableEvents: true on that service's options.
# Custom Event Handlers 3.1.0+
As of version 3.1, you can customize the behavior of the event handlers, or even perform side effects based on the event data. This is handled through the new
handleEvents option on the service plugin. Here is an example of how you might use this:
handleEvents: {
  created: (item, { model, models }) => {
    // Perform a side effect to remove any record with the same `name`
    const existing = Model.findInStore({ query: { name: item.name } }).data[0]
    if (existing) {
      existing.remove()
    }
    // Perform side effects with other models.
    const { SomeModel } = models.api
    new SomeModel({ /* some custom data */ }).save()
    // Access the store through model.store
    const modelState = model.store.state[model.namespace]
    if (modelState.keyedById[5]) {
      console.log('we accessed the vuex store')
    }
    // If true, the new item will be stored.
    return true
  },
  updated: () => false, // Ignore `updated` events.
  patched: item => item.hasPatchedAttribute && item.isWorthKeeping,
  removed: item => true // The default value, will remove the record from the store
}
As shown above, each handler has two possible uses:
- Control the default behavior of the event by returning a boolean.
- For created, patched, and updated, a truthy return will add or update the item in the store.
- For removed, a truthy return will remove the item from the store, if present.
- Perform side effects using the current service model or with other models. The models object is the same as the $FeathersVuex object in the Vue plugin.
Each handler receives the following arguments:
item: the record sent from the API server
utils: an object containing the following properties
model - The current service's Model class.
models - The same as the $FeathersVuex object, gives you access to each api with their respective model classes.
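The veto/side-effect contract described above can be sketched with a plain object standing in for the store. This is a simplified, hypothetical illustration of the event plumbing, not the library's real implementation:

```javascript
// Run a socket event through its handler; a truthy return applies
// the default store behavior, a falsy return skips it.
function applyEvent (keyedById, handleEvents, event, item) {
  const handler = handleEvents[event] || (() => true)
  if (!handler(item)) return // handler vetoed the default behavior
  if (event === 'removed') {
    delete keyedById[item.id]
  } else {
    keyedById[item.id] = item // created/updated/patched: upsert
  }
}

const keyedById = {}
const handleEvents = { updated: () => false } // ignore `updated` events
applyEvent(keyedById, handleEvents, 'created', { id: 1, v: 1 })
applyEvent(keyedById, handleEvents, 'updated', { id: 1, v: 2 }) // ignored
applyEvent(keyedById, handleEvents, 'removed', { id: 1 })
```

Returning `false` from a handler only skips the store write; any side effects the handler performed before returning still happen.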
# Pagination and the
find action
Both the
find action and the
find getter support pagination. There are differences in how they work.
Important: For the built in pagination features to work, you must not directly manipulate the
context.params object in any before hooks. You can still use before hooks as long as you clone the params object, then make changes to the clone.
# The
find action
The
find action queries data from the remote server. It returns a promise that resolves to the response from the server. The presence of pagination data will be determined by the server.
feathers-vuex@1.0.0 can store pagination data on a per-query basis. The
pagination store attribute maps queries to their most-recent pagination data. The default pagination state looks like this:
{
  pagination: {
    defaultLimit: null,
    defaultSkip: null
  }
}
You should never manually change these values. They are managed internally.
There's not a lot going on, by default. The
defaultLimit and
defaultSkip properties are null until a query is made on the service without
$limit or
$skip. In other words, they remain
null until an empty query comes through, like this one:
params = { query: {} }
{
  pagination: {
    defaultLimit: 25,
    defaultSkip: 0,
    default: { /* pagination data for the default qid */ }
  }
}
It looks like a lot just happened, so let's walk through it. First, notice that we have values for
defaultLimit and
defaultSkip. These come in handy for the
find getter, which will be covered later.
# The
qid
The state now also contains a property called
default. This is the default
qid, which is a "query identifier" that you choose. Unless you're building a small demo, your app will require storing pagination information for more than one query. For example, two components could make two distinct queries against this service. You can use the
params.qid (query identifier) property to assign an identifier to the query. If you set a
qid of
mainListView, for example, the pagination for this query will show up under
pagination.mainListView. The
pagination.default property will be used any time a
params.qid is not provided. Here's an example of what this might look like:
params = { query: {}, qid: 'mainListView' }
// Data in the store
{
  pagination: {
    defaultLimit: 25,
    defaultSkip: 0,
    mainListView: { /* pagination data for this qid */ }
  }
}
The above example is almost exactly the same as the previous one. The only difference is that the
default key is now called
mainListView. This is because we provided that value as the
qid in the params. Let's move on to the properties under the
qid.
# The
mostRecent object
The
mostRecent property contains information about the most recent query. These properties provide insight into how pagination works. The two most important properties are the
queryId and the
pageId.
- The queryId describes the set of data we're querying. It's a stable, stringified version of all of the query params except for $limit and $skip.
- The pageId holds information about the current "page" (as in "page-ination"). A page is described using $limit and $skip.
The
queryParams and
pageParams are the non-stringified
queryId and
pageId. The
query attribute is the original query that was provided in the request params. Finally, the
queriedAt is a timestamp of when the query was performed.
# The
queryId and
pageId tree
The rest of the
qid object is keyed by
queryId strings. Currently, we only have a single
queryId of
'{}'. In the
queryId object we have the
total number of records (as reported by the server) and the
pageId of
'{$limit:25,$skip:0}'
'{}': { // queryId
  total: 155,
  queryParams: {},
  '{$limit:25,$skip:0}': { // pageId
    pageParams: { $limit: 25, $skip: 0 },
    ids: [ 1, 2, 3, 4, '...etc', 25 ],
    queriedAt: 1538594642481
  }
}
The
pageId object contains the
queriedAt timestamp of when we last queried this page of data. It also contains an array of
ids, holding only the
ids of the records returned from the server.
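As a rough illustration, splitting params into these two identifiers can be done like this. Note this is a sketch: the library's actual stringification is more compact than JSON.stringify, as the quote-less ids above show:

```javascript
// Derive a queryId/pageId pair: everything except $limit and $skip
// identifies the data set; $limit and $skip identify the page.
function describeQuery (query) {
  const { $limit, $skip, ...queryParams } = query
  const pageParams = { $limit, $skip }
  return {
    queryParams,
    pageParams,
    queryId: JSON.stringify(queryParams),
    pageId: JSON.stringify(pageParams)
  }
}

describeQuery({ $limit: 25, $skip: 0, isCompleted: true })
// → queryId '{"isCompleted":true}', pageId '{"$limit":25,"$skip":0}'
```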
# Additional Queries and Pages
As more queries are made, the pagination data will grow to represent what we have in the store. In the following example, we've made an additional query for sorted data in the
mainListView
qid. We haven't filtered the list down any, so the
total is the same as before. We have sorted the data by the
isComplete attribute, which changes the
queryId. You can see the second
queryId object added to the
mainListView
qid:
params = { query: {}, qid: 'mainListView' }
params = { query: { $limit: 10, $sort: { isCompleted: 1 } }, qid: 'mainListView' }
// Data in the store
{
  pagination: {
    defaultLimit: 25,
    defaultSkip: 0,
    mainListView: {
      mostRecent: {
        query: { $sort: { isCompleted: 1 } },
        queryId: '{$sort:{isCompleted:1}}',
        queryParams: { $sort: { isCompleted: 1 } },
        pageId: '{$limit:10,$skip:0}',
        pageParams: { $limit: 10, $skip: 0 },
        queriedAt: 1538595856481
      },
      '{}': {
        total: 155,
        queryParams: {},
        '{$limit:25,$skip:0}': {
          pageParams: { $limit: 25, $skip: 0 },
          ids: [ 1, 2, 3, 4, '...etc', 25 ],
          queriedAt: 1538594642481
        }
      },
      '{$sort:{isCompleted:1}}': {
        total: 155,
        queryParams: {},
        '{$limit:10,$skip:0}': {
          pageParams: { $limit: 10, $skip: 0 },
          ids: [ 4, 21, 19, 29, 1, 95, 62, 21, 67, 125 ],
          queriedAt: 1538594642481
        }
      }
    }
  }
}
In summary, any time a query param other than
$limit and
$skip changes, we get a new
queryId. Whenever
$limit and
$skip change, we get a new
pageId inside the current
queryId.
# Why use this pagination structure
Now that we've reviewed how pagination tracking works under the hood, you might be asking "Why?" There are a few reasons:
- Improve performance with cacheing. It's now possible to skip making a query if we already have valid data for the current query. The makeFindMixin mixin makes this very easy with its built-in queryWhen feature.
- Allow fall-through cacheing of paginated data. A common challenge occurs when you provide the same query params to the find action and the find getter. As you'll learn in the next section, the find getter allows you to make queries against the Vuex store as though it were a Feathers database adapter. But what happens when you pass { $limit: 10, $skip: 10 } to the action and getter?
First, let's review what happens with the find action. The database is aware of all 155 records, so it skips the first 10 and returns the next 10 records. Those records get populated in the store, so the store now has 10 records. Now we pass the query to the find getter and tell it to $skip: 10. It skips the only 10 records that are in the store and returns an empty array! That's definitely not what we wanted.
Since we're now storing this pagination structure, we can build a utility around the find getter which will allow us to return the same data with the same query. The data is still reactive and will automatically update when a record changes.
There's one limitation to this solution. What happens when you add a new record that matches the current query? Depending on where the new record would be sorted into the current query, part or all of the cache is no longer valid. It will stay this way until a new query is made. To get live (reactive) lists, you have to use the
find getter with its own distinct query, removing the
$limit and
$skip values. This way, when a new record is created, it will automatically get added to the array in the proper place.
# Pagination and the
find getter
The
find getter queries data from the local store using the same Feathers query syntax as on the server. It is synchronous and returns the results of the query with pagination. Pagination cannot be disabled. It accepts a params object with a
query attribute. It does not use any other special attributes. The returned object looks just like a paginated result that you would receive from the server:
params = { query: {} }
// The returned results object
{
  data: [{ _id: 1, ...etc }, ...etc],
  limit: 0,
  skip: 0,
  total: 3
}
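Stripped to its essence, the getter filters the keyedById map and wraps the result in that same paginated shape. A simplified sketch (equality filters only; the real getter supports the full Feathers query syntax):

```javascript
// Query a keyedById map locally and return a server-like result.
function localFind (keyedById, query = {}) {
  const { $limit, $skip = 0, ...filters } = query
  const matches = Object.values(keyedById).filter(item =>
    Object.entries(filters).every(([key, value]) => item[key] === value)
  )
  const end = $limit == null ? undefined : $skip + $limit
  return {
    data: matches.slice($skip, end),
    limit: $limit == null ? 0 : $limit,
    skip: $skip,
    total: matches.length
  }
}

const keyedById = {
  1: { id: 1, completed: true },
  2: { id: 2, completed: false },
  3: { id: 3, completed: true }
}
localFind(keyedById, { completed: true, $limit: 1 })
// → { data: [{ id: 1, completed: true }], limit: 1, skip: 0, total: 2 }
```

Note that total reports the count of all matching records, not just the page that was returned — exactly like a paginated server response.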
# Customizing a Service's Default Store
As shown in the first example, the service module allows you to customize its store:
const store = new Vuex.Store({
  plugins: [
    // Add custom state, getters, mutations, or actions, if needed
    service('things', {
      state: {
        test: true
      },
      getters: {
        getSomeData () {
          return 'some data'
        }
      },
      mutations: {
        setTestToFalse (state) {
          state.test = false
        },
        setTestToTrue (state) {
          state.test = true
        }
      },
      actions: {
        // Overwriting the built-in `afterFind` action.
        afterFind ({ commit, dispatch, getters, state }, response) {
          // Do something with the response.
          // Keep in mind that the data is already in the store.
        },
        asyncStuff ({ commit, dispatch }, args) {
          commit('setTestToTrue')
          return doSomethingAsync(id, params)
            .then(result => {
              commit('setTestToFalse')
              return dispatch('otherAsyncStuff', result)
            })
        },
        otherAsyncStuff ({ commit }, args) {
          return Promise.resolve(result)
        }
      }
    })
  ]
})

assert(store.getters['todos/oneTwoThree'] === 123, 'the custom getter was available')
store.dispatch('todos/trigger')
assert(store.state.todos.isTrue === true, 'the custom action was run')
ASP.NET Core in .NET 6 - Part 03 - Support for IAsyncDisposable in MVC
Jürgen Gutsch - 29 March, 2021
This is the third part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look at the Support for
IAsyncDisposable in MVC.
The
IAsyncDisposable is a thing since .NET Core 3.0. If I'm right, we got it together with async streams, to release those kinds of streams asynchronously. Now MVC supports this interface as well, and you can use it anywhere in your code on controllers, classes, etc. to release async resources.
When should I use IAsyncDisposable?
When you work with asynchronous enumerators, as in async streams, and when you work with instances of unmanaged resources that need resource-intensive I/O operations to release.
When implementing this interface, you can use the DisposeAsync method to release those kinds of resources.
Let's try it
Let's assume we have a controller that creates and uses a
Utf8JsonWriter, which is an
IAsyncDisposable resource as well:
public class HomeController : Controller, IAsyncDisposable
{
    private Utf8JsonWriter _jsonWriter;
    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
        _jsonWriter = new Utf8JsonWriter(new MemoryStream());
    }
The interface needs us to implement the
DisposeAsync method. This should be done like this:
public async ValueTask DisposeAsync()
{
    // Perform async cleanup.
    await DisposeAsyncCore();

    // Dispose of unmanaged resources.
    Dispose(false);

    // Suppress GC to call the finalizer.
    GC.SuppressFinalize(this);
}
This is a higher level method that calls a DisposeAsyncCore that actually does the async cleanup. It also calls the regular Dispose method to release other unmanaged resources and it tells the garbage collector not to call the finalizer. I guess this could release the instance before the async cleanup finishes.
This needs us to add another method called DisposeAsyncCore():
protected async virtual ValueTask DisposeAsyncCore()
{
    if (_jsonWriter is not null)
    {
        await _jsonWriter.DisposeAsync();
    }

    _jsonWriter = null;
}
This will actually dispose the async resource.
Further reading
Microsoft has some really detailed docs about it.
What's next?
In the next part I'm going to look into the support for DynamicComponent in Blazor.
In this tutorial, we'll learn all the things that are necessary to build an accessible pagination component with a good User Experience using React. The source code is available on Github.
What is a Pagination component and where is it used?
A Pagination component helps in fetching and showing data in steps. It's mainly useful in the following scenarios:
- Lists or tables which consist of a large set of data
- Cases where we might want to show data in steps
- Cases where fetching and showing the whole data would take a long time
Difference between Pagination and Infinite Scrolling
A Pagination component is useful for fetching fragments of data and showing it to the user. The user can control when they want to view the next set of data. In a Pagination component, the user can also control the amount of data that they want to view.
In most cases, like in Ant Design and Semantic UI, the user can also view the data for a particular page directly by typing the number of that page in a small input box.
In an Infinite Scrolling component, initially, a subset of the whole data is shown. However, when the user scrolls to the bottom of the list or table, the next set of data is fetched from the server and shown on the browser.
For example, initially a set of 5 records out of 100 records will be fetched from the server and rendered on the list or table. When the user scrolls to the bottom of the list or table, the next 5 records are fetched from the server and shown.
It's also possible that the whole data is fetched initially but the records are shown lazily. This is done in order to reduce Cognitive Overload.
When should we use Pagination instead of Infinite Scrolling?
This article explains the answer to this question in detail. However, if an application needs to remember the scroll position of the user during page transitions, then it's always better to use pagination. For example, if we have an e-commerce application, we'll need to remember the scroll position of the user in case of a page transition.
Types of Pagination
Effectively, there are two types of pagination:
- Client-side pagination
- Server-side pagination
We'll be talking about both of these in detail.
Client-side Pagination
In this type of pagination, the whole data is fetched from the server but is shown on the browser in steps. There can be certain cases in which the pagination can't be implemented on the server. In those cases, the pagination is generally done on the client-side.
This type of pagination can be achieved in the following ways:
- Fetch all the records from the server
- Divide the records into certain parts based on a certain limit per page
- Show a set of records on each page
- Show the next set of records when the user clicks on the next page and so on
- Sorting, searching and filtering of records is done on the client-side
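The slicing step above can be sketched as a small plain-JavaScript helper (the function name is illustrative, not from the tutorial):

```javascript
// Client-side pagination: all records are already in memory; we just
// slice out the window for the requested page.
const paginate = (records, page, perPage) => {
  const start = (page - 1) * perPage;
  return records.slice(start, start + perPage);
};

// 10 records, 3 per page: page 2 holds records 4..6.
const records = Array.from({ length: 10 }, (_, i) => i + 1);
console.log(paginate(records, 2, 3)); // → [4, 5, 6]
```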
Server-side Pagination
In this type of pagination, the records are fetched from the server in steps. This is a better way of doing pagination because of the following:
- API requests are smaller, since data is fetched in chunks
- The load on both the server and the client is smaller, since they have less data to deal with. On the server side, since the number of records is smaller, the computing time for generating those records will be less than for computing all the records. This might not be noticeable when the total number of records is small, but it will be when we're fetching a thousand records vs ten records. On the client side, if the number of records is smaller, there is less load on the browser to parse the data and show it to the user.
This type of pagination can be achieved in the following ways:
- Fetch only the necessary records from the server for a particular page
- Show the fetched records for that particular page
- Fetch and show the next set of records when the user clicks on the next page and so on
- Sorting, searching and filtering of records is done on the server-side
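On the server side, the bookkeeping for "fetch only the necessary records" usually reduces to an offset/limit pair passed to the data store. A minimal sketch (helper name assumed):

```javascript
// Translate a page number into the offset/limit a typical SQL query or
// ORM call expects, e.g. SELECT ... LIMIT :limit OFFSET :offset.
const toQuery = (page, perPage) => ({
  offset: (page - 1) * perPage,
  limit: perPage,
});

console.log(toQuery(3, 20)); // → { offset: 40, limit: 20 }
```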
Building a simple Pagination component using React
In this step, we'll build a simple Pagination component using React and the Hacker News API. We'll be building a server-side paginated component.
Let's start with creating a new directory for our component:
mkdir react-pagination-component
The above command will create a new directory called react-pagination-component.
We'll be using Yarn to manage our dependencies.
If you don't have Yarn installed on your machine, you can check out their Installation guide.
Let's initialize our package using Yarn inside our react-pagination-component directory:
cd react-pagination-component && yarn init
The above command will show an output similar to the following on our terminal:
question name (react-pagination-component):
question version (1.0.0): 0.0.1
question description: A simple pagination component for React.js
question entry point (index.js):
question repository url:
question author: Nirmalya Ghosh (nirmalya.email@gmail.com)
question license (MIT):
question private: false
success Saved package.json
✨ Done in 95.33s.
It'll also generate the following package.json file:
{
  "name": "react-pagination-component",
  "version": "0.0.1",
  "description": "A simple pagination component for React.js",
  "main": "index.js",
  "author": "Nirmalya Ghosh (nirmalya.email@gmail.com)",
  "license": "MIT",
  "private": false
}
Doing the above steps would help us in installing npm packages. We'll be installing React and other packages from npm. npm stands for Node Package Manager and is the world’s largest software registry.
Although it's very easy to add React to a website, managing and upgrading packages would be much easier using npm.
We'd also use Parcel for bundling our application.
yarn add react
yarn add react-dom
yarn add --dev parcel-bundler @babel/preset-react @babel/preset-env
The above command will add react and react-dom to our list of dependencies. It'll add parcel-bundler to our list of dev-dependencies.
We've added parcel-bundler, @babel/preset-react and @babel/preset-env to our dev-dependencies since we won't need them in production. Our application code will be bundled by Parcel and we'll deploy those assets (HTML, CSS and JavaScript) to our server.
We also need to add the following script to our package.json file:
"scripts": {
  "start": "parcel index.html"
}
Our package.json file should now contain these:
{
  ....
  "scripts": {
    "start": "parcel index.html"
  },
  "dependencies": {
    "react": "^16.13.1",
    "react-dom": "^16.13.1"
  },
  "devDependencies": {
    "@babel/preset-env": "^7.10.2",
    "@babel/preset-react": "^7.10.1",
    "parcel-bundler": "^1.12.4"
  }
}
We also need to create an index.html file in the root of our project and add a reference to our JavaScript entry point:
<!DOCTYPE html>
<html lang="en">
  <head>
  </head>
  <body>
    <div id="app"></div>
    <!-- Here 👇 -->
    <script src="./index.js"></script>
  </body>
</html>
Let's also create a new index.js which will be our JavaScript entry point:
console.log("Hello from Parcel!");
Now, we can start our application by running the following command:
yarn start
The above application will start running at.
Let's start by fetching a list of news from Hacker News API and show it on our browser. We need to update our index.js file with the following code:
import React, { useEffect } from "react";
import ReactDOM from "react-dom";

const ReactPaginationComponent = () => {
  useEffect(() => {
    fetchData();
  }, []);

  const fetchData = async () => {
    const response = await fetch(
      ""
    );
    console.log(response);
  };

  return <>Hello</>;
};

ReactDOM.render(<ReactPaginationComponent />, document.getElementById("app"));
Now, if we visit, we'll see the following error:
Uncaught ReferenceError: regeneratorRuntime is not defined
To fix that, we need to add a couple of babel plugins:
yarn add @babel/plugin-transform-runtime @babel/runtime --dev
If we visit, we should be able to view the page without any errors.
Let's show the data fetched in a list. To do that, we'll need to modify our index.js file:
import React, { useEffect, useState } from "react";
import ReactDOM from "react-dom";

const ReactPaginationComponent = () => {
  const [data, setData] = useState([]);

  useEffect(() => {
    fetchData();
  }, []);

  const fetchData = async () => {
    const response = await fetch(
      ""
    );
    const data = await response.json();
    setData(data.hits);
  };

  const listNode = () => {
    return (
      <ul>
        {data.map((datum, index) => {
          return <li key={index}>{datum.title}</li>;
        })}
      </ul>
    );
  };

  return <div>{listNode()}</div>;
};

ReactDOM.render(<ReactPaginationComponent />, document.getElementById("app"));
Here, we're fetching the data from the server and storing it in the data state. In the listNode function, we're iterating over this data and showing it in a list.
If we visit, we should be able to view the following:
Let's now add the logic to fetch records from the server based on the selected page.
First, we need to define two states to store the totalPages and the currentPage:
const [totalPages, setTotalPages] = useState(1);
const [currentPage, setCurrentPage] = useState(1);
We also need to update the fetchData function to use the currentPage:
const fetchData = async () => {
  const response = await fetch(
    `${currentPage}`
  );
  const data = await response.json();
  setData(data.hits);
  setTotalPages(data.nbPages);
};
We also need to add a new paginationNode function to show the pagination:
const paginationNode = () => {
  return (
    <ul>
      {[...Array(totalPages)].map((_, index) => {
        return (
          <li key={index + 1}>
            <button onClick={() => handlePageChange(index + 1)}>
              {index + 1}
            </button>
          </li>
        );
      })}
    </ul>
  );
};
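The handlePageChange callback used above isn't shown in this excerpt; at minimum it stores the clicked page in state. Clamping the value first (the helper below is an assumption, not from the tutorial) keeps out-of-range requests harmless:

```javascript
// Clamp a requested page into the valid 1..totalPages range.
const clampPage = (page, totalPages) =>
  Math.min(Math.max(page, 1), totalPages);

// Inside the component this would back the callback:
//   const handlePageChange = (page) =>
//     setCurrentPage(clampPage(page, totalPages));

console.log(clampPage(0, 5)); // → 1
```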
Finally, let's update our return statement and add the paginationNode function to it:
return (
  <div>
    {listNode()}
    {paginationNode()}
  </div>
);
Adding styles to our Pagination component
In this section, we'll be adding some styles to our pagination component using Theme UI.
Theme UI is a library for creating themeable user interfaces based on constraint-based design principles. Build custom component libraries, design systems, web applications, Gatsby themes, and more with a flexible API for best-in-class developer ergonomics.
Let's install the package first:
yarn add theme-ui
Once the package is installed, we need to import ThemeProvider and add it to our app first:
import { ThemeProvider } from "theme-ui";

const theme = {
  colors: {
    text,
    background,
    primary,
  },
};

....

return (
  <ThemeProvider theme={theme}>
    {listNode()}
    {paginationNode()}
  </ThemeProvider>
);
The values for the text, background and primary colors are props that can be passed to our component. We can define defaults for those values:
const ReactPaginationComponent = ({
  text = "#000",
  background = "#fff",
  primary = "#33e",
}) => {
  ....
We need to update our listNode and paginationNode functions to use the Box, Button and Flex components from Theme UI:
....

const listNode = () => {
  return (
    <Box
      as="ul"
      sx={{
        listStyleType: "none",
        p: 0,
      }}
    >
      {data.map((datum, index) => {
        return (
          <Box key={index} as="li" mb={2}>
            {datum.title}
          </Box>
        );
      })}
    </Box>
  );
};

const paginationNode = () => {
  return (
    <Flex as="ul" sx={{ listStyleType: "none", p: 0, flexWrap: "wrap" }}>
      {[...Array(totalPages)].map((_, index) => {
        return (
          <Box key={index + 1} as="li" mr={2}>
            <Button onClick={() => handlePageChange(index + 1)}>
              {index + 1}
            </Button>
          </Box>
        );
      })}
    </Flex>
  );
};

....
Let's update our pagination logic for a better User Experience.
....

const paginationNode = () => {
  // ...

  return (
    <Flex as="ul" sx={{ listStyleType: "none", p: 0, flexWrap: "wrap" }}>
      {rangeWithDots.map((pageNumber, index) => {
        return (
          <Box key={index} as="li" mr={2}>
            <Button
              onClick={() => handlePageChange(pageNumber)}
              disabled={pageNumber === "..."}
            >
              {pageNumber}
            </Button>
          </Box>
        );
      })}
    </Flex>
  );
};

....
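The computation of rangeWithDots is elided in the snippet above; a standalone sketch of the usual windowing logic (the helper name and the delta default are assumptions) looks like this:

```javascript
// Build the list of page labels to render: always keep the first and last
// page plus a window of `delta` pages around the current one, and collapse
// any gap between kept pages into a "..." placeholder.
const getPageRange = (current, total, delta = 2) => {
  const range = [];
  for (let i = 1; i <= total; i++) {
    if (i === 1 || i === total || Math.abs(i - current) <= delta) {
      range.push(i);
    }
  }

  const withDots = [];
  let prev = 0;
  for (const page of range) {
    if (prev && page - prev > 1) {
      withDots.push("...");
    }
    withDots.push(page);
    prev = page;
  }
  return withDots;
};

console.log(getPageRange(5, 10)); // → [1, "...", 3, 4, 5, 6, 7, "...", 10]
```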
Now, it looks much better.
Making our Pagination component theme-able using Theme UI
As we've used the ThemeProvider component from Theme UI, our pagination component supports theming out of the box. We can do so by passing a primary prop to our ReactPaginationComponent:
<ReactPaginationComponent primary="green" />
We can also pass the text prop to update the color of the text on the list items.
Conclusion
In this tutorial, we learnt how to create a basic pagination component using React.js. We also learnt how to make our component themeable using Theme UI. The source code is available on Github.
Please note that the code for this tutorial is up to date till this commit. Further changes to this Github repository might make the code different, so it's recommended that you check the code as of that commit.
nearbyint, nearbyintf, nearbyintl, rint, rintf, rintl − round to nearest integer
#include <math.h>

double nearbyint(double x);
float nearbyintf(float x);
long double nearbyintl(long double x);

double rint(double x);
float rintf(float x);
long double rintl(long double x);
Link with −lm.
The nearbyint() functions round their argument to an integer value in floating-point format, using the current rounding direction (see fesetround(3)) and without raising the inexact exception.
The rint() functions do the same, but will raise the inexact exception when the result differs in value from the argument.
No errors occur. POSIX.1-2001 documents a range error for overflows, but see NOTES.
For an explanation of the terms used in this section, see attributes(7).
C99, POSIX.1-2001.
ceil(3), floor(3), lrint(3), round(3), trunc(3)
This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man−pages/.
Java Performance Tuning, 2nd Ed.

There are more factors that you don't control than ones that you do, and it can be difficult to say whether one piece of code will be "more efficient" than another without testing with actual usage patterns. The second edition of Java Performance Tuning provides substantial benchmarks (not just simple microbenchmarks).
Chapters 4-9 cover the nuts and bolts, code-level optimizations that you can implement. Chapter 4 discusses various object allocation tweaks including: lazy initialization, canonicalizing objects, and how to use the different types of references (Phantom, Soft, and Weak) to implement priority object pooling. Chapter 5 tells you more about handling Strings in Java than you ever wanted to know. Converting numbers (floats, decimals, etc) to Strings efficiently, string matching -- it's all here in gory detail with timings and sample code.
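The lazy-initialization tweak mentioned for Chapter 4 can be sketched in a few lines (class and field names here are illustrative, not from the book):

```java
// Lazy initialization: defer building an expensive object until the first
// time it is actually requested, rather than at class-load or startup time.
class ExpensiveResource {
    private static ExpensiveResource instance;

    private ExpensiveResource() {
        // imagine costly setup work here
    }

    // synchronized so two threads can't both build the instance
    static synchronized ExpensiveResource getInstance() {
        if (instance == null) {
            instance = new ExpensiveResource();
        }
        return instance;
    }
}
```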
This chapter also shows the author's depth and maturity; when presenting his algorithm to convert integers to Strings, he notes that while his implementation previously beat the pants off of Sun's implementation, in 1.3.1/1.4.0 Sun implemented a change that now beats his code. He analyzes the new implementation, discusses why it's faster without losing face. That is just one of many gems in this updated edition of the book. Chapter 6 covers the cost of throwing and catching exceptions, passing parameters to methods and accessing variables of different scopes (instance vs. local) and different types (scalar vs. array). Chapter 7 covers loop optimization with a java bent. The author offers proof that an exception terminated loop, while bad programming style, can offer better performance than more accepted practices.
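The "exception terminated loop" the reviewer mentions looks roughly like this (widely considered bad style; shown only to illustrate the claim, with an assumed method name):

```java
// Iterate without an explicit bounds test and let the JVM's own
// array-bounds check terminate the loop via an exception.
class LoopDemo {
    static int sum(int[] data) {
        int total = 0;
        try {
            for (int i = 0; ; i++) {
                total += data[i];
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            // the out-of-bounds access ends the loop
        }
        return total;
    }
}
```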
Chapter 8 covers IO, focusing in on using the proper flavor of java.io class (stream vs. reader, buffered vs. unbuffered) to achieve the best performance for a given situation. The author also covers performance issues with object serialization (used under the hood in most Java distributed computing mechanisms) in detail and wraps up the chapter with a 12 page discussion of how best to use the "new IO" package (java.nio) that was introduced with Java 1.4. Sadly, the author doesn't offer a detailed timing comparison of the 1.4 NIO API to the existing IO API. Chapter 9 covers Java's native sorting implementations and how to extend their framework for your specific application.
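Chapter 8's buffered-vs-unbuffered point boils down to wrapping a stream so that many small read() calls hit an in-memory buffer instead of the OS (the class below is a sketch with an assumed name, not the book's code):

```java
import java.io.*;

class CopyDemo {
    // Count bytes one read() at a time; the BufferedInputStream wrapper
    // means most of those reads never touch the underlying stream.
    static long countBytes(InputStream raw) {
        long n = 0;
        try (InputStream in = new BufferedInputStream(raw)) {
            while (in.read() != -1) {
                n++;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return n;
    }
}
```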
PART 3 : Threads, Distributed Computing and Other Topics
Chapters 10-14 cover a grab bag of topics, including threading, proper Collections use, distributed computing paradigms, and an optimization primer that covers full life cycle approaches to optimization. Chapter 10 does a great job of presenting threading, common threading pitfalls (deadlocks, race conditions), and how to solve them for optimal performance (e.g. proper scope of locks, etc).
Chapter 11 provides a wonderful discussion about one of the most powerful parts of the JDK, the Collections API. It includes detailed timings of using ArrayList vs. LinkedList when traversing and building collections. To close the chapter, the author discusses different object caching implementations and their individual performance results.
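The ArrayList-vs-LinkedList traversal point from Chapter 11 is easy to demonstrate: indexed access is O(1) on an ArrayList but O(n) on a LinkedList, so a get(i) loop over a LinkedList is quadratic overall. A sketch (class name assumed):

```java
import java.util.*;

class TraversalDemo {
    // Preferred: the iterator walks the links once, O(n) total.
    static long sumByIterator(List<Integer> list) {
        long total = 0;
        for (int value : list) {
            total += value;
        }
        return total;
    }

    // On a LinkedList each get(i) walks from the head: O(n^2) total.
    static long sumByIndex(List<Integer> list) {
        long total = 0;
        for (int i = 0; i < list.size(); i++) {
            total += list.get(i);
        }
        return total;
    }
}
```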
Chapter 12 gives some general optimization principles (with code samples) for speeding up distributed computing including techniques to minimize the amount of data transferred along with some more practical advice for designing web services and using JDBC.
Chapter 13 deals specifically with designing/architecting applications for performance. It discusses how performance should be addressed in each phase of the development cycle (analysis, design, development, deployment), and offers tips and a checklist for your performance initiatives. The puzzling thing about this chapter is why it is presented at the end of the book instead of towards the front, with all of the other process-related material. It makes much more sense to put this material together up front.
Chapter 14 covers various hardware and network aspects that can impact application performance including: network topology, DNS lookups, and machine specs (CPU speed, RAM, disk).
PART 4 : J2EE Performance
Chapters 15-18 deal with performance specifically in the J2EE APIs: EJBs, JDBC, Servlets and JSPs. These chapters are essentially tips or suggested patterns (use coarse-grained EJBs, apply the Value Object pattern, etc) instead of the very low-level performance tips and metrics provided in earlier chapters. You could say that the author is getting lazy, but the truth is that due to the huge number of appserver/database vendor combinations, it would be very difficult to establish a meaningful performance baseline without a large testbed.
Chapter 15 is a reiteration of Chapter 1, Tuning Strategy, re-tooled with a J2EE focus. The author reiterates that a good testing strategy determines what to measure, how to measure it, and what the expectations are. From here, the author presents possible solutions including load balancing. This chapter also contains about 1.5 pages about tuning JMS, which seems to have been added to be J2EE 1.3 acronym compliant.
Chapter 16 provides excellent information about JDBC performance strategies. The author presents a proxy implementation to capture accurate profiling data and minimize changes to your code once the profiling effort is over. The author also covers data caching, batch processing and how the different transaction levels can affect JDBC performance.
Chapter 17 covers JSPs and servlets, with very little earth shattering information. The author presents tips such as consider GZipping the content before returning it to the client, and minimize custom tags. This chapter is easily the weakest section of the book: Admittedly, it's difficult to optimize JSPs since much of the actual running code is produced by the interpreter/compiler, but this chapter either needs to be beefed up or dropped from future editions.
Finally, chapter 18 provides a design/architecture-time approach towards EJB performance. The author presents standard EJB patterns that lend themselves towards squeezing greater performance out of the often maligned EJB. The patterns include: data access object, page iterator, service locator, message facade, and others. Again, there's nothing earth shattering in this chapter. Chapter 19 is list of resources with links to articles, books and profiling/optimizing projects and products.
What's Bad?
Since the book has been published, the 1.4.1 VM has been released with the much anticipated concurrent garbage collector. The author mentions that he received an early version of 1.4.1 from Sun to test with. However, the text doesn't state that he used the concurrent garbage collector, so the performance of this new feature isn't indicated by this text.
The J2EE performance chapters aren't as strong as the J2SE chapters. After seeing the statistics and extensive code samples of the J2SE sections, I expected a similar treatment for J2EE. Many of the J2SE performance practices still apply for J2EE (serialization most notably, since that his how EJB, JMS, and RMI ship method parameters/results across the wire), but it would be useful to fortify these chapters with actual performance metrics.
So What's In It For Me?
This book is indispensable for the architect drafting the performance requirements/testing process, and contains sage advice for the programmer as well. It's the most up to date publication dealing specifically with performance of Java applications, and is a one-of-a-kind resource.
You can purchase Java Performance Tuning, 2nd Edition from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Web pages (Score:1, Insightful)
Networking and secure transactions aside, I have a major problem with things like scrolling-text Java applets. The problem is I see it too much.
Isn't this the compiler's job? (Score:5, Insightful)
I've often found that with bytecode languages (Java, C#...) the bytecode instructions are made for the language so that the compiler can just throw them out easy peasy, but they seem to overlook the sort of optimizations that C compilers, for example, work hard to implement.
Re:Isn't this the compiler's job? (Score:5, Informative)
Java (and other bytecode languages) were designed to run well not just on a single platform, but on a variety of platforms. So as a trade-off, you lose environment-specific optimizations at compile time.
JIT JRE/compilers can work to prevent this. They can further optimize the bytecodes at execution time because they are platform specific.
Re:Isn't this the compiler's job? (Score:5, Insightful)
If all these performance hacks are documented, why doesn't the compiler implement them?
The most common reason is that most performance hacks and optimizations are not decidable, and you want a compiler to implement only decidable algorithms because those are the ones that enable a compiler to be deterministic. It is usually much easier for a person, i.e., a human, to determine what can be done, than it is for a machine to determine that exact same thing.
Consider the following piece of code.
boolean f(int[] a, int[] b)
{
    int x = a[0];
    b[0] = a[0] + 2;
    int y = a[0];
    return (x == y);
}
Does f always return true? Only if we can prove that a and b never point to the same array. A person may be able to do this, but a machine would have great difficulty (assuming the machine could even do it).
So to summarize, compilers don't implement many optimization hacks because then they might not be deterministic, and that is a bad thing.
Re:Isn't this the compiler's job? (Score:4, Informative)
When a program thrashes strings around, why doesn't the compiler detect that, and switch to a string buffer object to perform those operations, and then convert the final result back to a string?
String/StringBuffer (Score:5, Informative)
For concatenating two strings, the concat() method can be faster than using StringBuffer, since it only needs to create a new char[] and do a (fast) arraycopy from the two internal arrays.
Also, everyone should be aware of the 1.4.1 memory leak associated with using StringBuffer's toString() and setLength() methods.
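The commenter's two-string point can be made concrete (class and method names are illustrative):

```java
class ConcatDemo {
    // For exactly two strings, concat() builds the result with a single
    // new char[] and arraycopy -- no intermediate StringBuffer.
    static String joinTwo(String a, String b) {
        return a.concat(b);
    }

    // For many pieces, an append loop avoids creating a new String per "+".
    static String joinMany(String... parts) {
        StringBuffer sb = new StringBuffer(); // StringBuilder on later JDKs
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }
}
```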
Re:Isn't this the compiler's job? (Score:4, Interesting)
The optimizations the book proposes are all hit-or-miss adventures. Even for a programmer with intimate knowledge of the code, it is sometimes difficult to predict if a change will help or impair performance. The compiler has even less chance to do so correctly -- and nobody likes a compiler which slows down their code trying to optimize it.
Definite purchase (Score:3, Informative)
As a side note I would disagree about performance being an albatross for Java. Well written Java code can perform very well, just as poorly written code in ANY language can perform slowly. Many of the performance issues associated with Java are inexperienced developers using inappropriate methods and objects.
Correct ISBN is 0596003773 (Score:5, Informative)
0596003773
Process (Score:3, Insightful)
The book starts off with three chapters of sage advice about the tools and process of profiling/tuning. Before you spend any time profiling, you have to have a process and a goal. Without setting goals, the tuning process will never end and it will likely never be successful.
No, you have to profile first. Profiling will tell you whether there is even any point in tuning, and, if so, what goals are reasonable.
Pre-written appendix for Java Tuning (Score:1)
Keith
Re:Sysadmins don't buy into this article. (Score:5, Interesting)
I challenge you to make a C++/C# application that is thread-safe and can scale to millions of pageviews per day without writing a ton of supporting code. With a good J2EE app server, a Java coder essentially just has to wrap his thread-unsafe code in a synchronized() statement, and he's done-- his app is now thread-safe.
Additionally, the "cross-platform doesn't matter for sysadmins" is a false statement; our CIO asked our net ops group "what would be the impact of us moving to an Intel platform?" and our sysadmins (after consulting with the coders) replied "absolutely no impact". That made our CIO very, very happy. Again, I challenge you to move your C++ apps from Solaris to Linux, or even to Windows, without any hiccup.
All of these other arguments are very specious: "I don't have enough RAM" will get you a reply of "go down to Fry's and spend $125 on another GB" every time. Processor speeds, even on Sun boxes, are getting to the point where the processor will never be a bottleneck for anything. Sure, java won't run as fast as a natively-compiled app. Neither will perl, php, tcl, or what have you. Raw processor speed is not as important when you have a couple of GHz to play with.
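The "wrap thread-unsafe code in synchronized" idiom the commenter describes looks like this in miniature (class name is illustrative):

```java
// A counter made thread-safe by guarding all access with one lock:
// only one thread at a time can be inside either synchronized block.
class Counter {
    private int count;
    private final Object lock = new Object();

    void increment() {
        synchronized (lock) {
            count++;
        }
    }

    int value() {
        synchronized (lock) {
            return count;
        }
    }
}
```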
Re:Pre-written appendix for Java Tuning (Score:5, Informative)
Then he invents other ways to talk about the startup time without seeming to talk about the startup time (for instance, trussing Hello World results in a ton of output, but naturally that's Java starting up and loading its classes. Again, do you consider what the machine has to do to boot itself up when you're talking about C programs?). I will point out again that Java's startup time is almost irrelevant, especially in a server environment (which is what he's talking about).
The rest of the article is picking on the "jar" tool. jar is a program written in Java. Criticisms against the jar tool no more reflect on Java than criticisms against gzip reflect on C. The fact that jar doesn't do a good job of reporting errors is (A) irrelevant, because it's a developer tool and we know how to read exceptions, and (B) still more irrelevant, because how well it reports errors has nothing to do with what language it was written in. Tons of C programs have lousy error reporting as well, such as a number of Unix utilities I might name.
Further, this article is obviously very old. He's talking about Java 1.1.8, which is what, five years old now? Might as well criticize Linux by talking about obscure video driver bugs that were fixed five years ago. Obviously, that's not the article's fault for having been written so long ago, but it is the parent poster's fault for bringing it up as if it is somehow still relevant.
2nd edition? (Score:1)
The only frustration is that I use safari.oreilly.com and love it, but they don't seem to have the 2nd edition from what I can tell... oh well - I'll add the edition that comes up in the search and that's better than nothing.
Java doesn't cut it (Score:3, Interesting)
We also ported some of our backend tools for use with Mono. In use with the newly released Mono JIT runtime, Mini [ximian.com], we've achieved some truly stunning results. It turns out that some of the optimisations in the new JIT are better than those used by GCC, so once the code is loaded in memory, it performs better than raw C code. Although I don't yet have hard numbers to back up these results (the transition is still in progress), it has to be said that Mono is the real answer to Java performance. Being Open Source, we can also contribute back to the runtime to make it better suit our needs. It also plays nicely with RedHat 9's NPTL threading implementation, which is more than I can say for the current crop of Java JREs.
Re:Java doesn't cut it (Score:5, Insightful)
This might become an option in a few years, but the GNU classpath is as yet not complete enough for our needs. We actually didn't find gcj output that performant, despite it being compiled to native code. The JRE still beat it in many cases.
Use SWT with Java. SWT uses Windows native widgets on Windows or GTK on Linux.
We also investigated this. SWT is a _horrendous_ API which offers very little abstraction. You end up writing your code once for the Gtk+ target, and again for the native Windows target. It isn't really a cross-platform abstraction like WxWindows, and it's probably the reason why the Eclipse codebase is so large. You end up writing your application for each UI target platform. Gtk# runs and integrates with the platform instead, so you only write your code once.
Either you're telling a big lie or don't have your facts straight. Unless you can show hard facts you're not going to sway anyone into believing interpreted code outperformed compiled.

I did mention the results are empirical, but they're also pretty obvious from where I stand. You don't need benchmarks when something performs, in some cases, eight times faster than the original implementation. I may well put together some benchmarks and post them to mono-list or linuxtoday.com. I don't have benchmarks yet; does that make me a liar? Sigh.
What is exactly wrong with Java's use of native threads on Linux boxes?
It's pointless to interface with the threads layer directly when pthreads exists. It makes the runtime essentially unportable to other unices/operating systems. Mono plays nicely with the environment, so the runtime can just be compiled on any POSIX-compliant system. Linux is great, but being attached to it so firmly that your application breaks when Linus changes some internal interfaces is not.
Inherent performance issues (Score:3, Informative)
That said, for most network-centric applications Java is plenty fast. Now if we could only stop introducing the unbelievable overhead of XML's excessive verbosity...
idiots.; (Score:2, Interesting)
()
Re:idiots.; (Score:5, Insightful)
()
Those of us who can program in more than one language and know that sometimes it's a matter of choosing the right tool for the job (peanut butter for sandwiches, masonry paint for walls) tend to go through three stages:
1) Try to engage in such discussions on the premise that there's actual intelligent debate going on.
2) Discover ourselves becoming violently opposed to whatever rant we're reading at the time, writing tracts about how Java sucks when we're reading the work of a Java fanatic and drooling about the glory of Java when faced with a C++-toting moron.
3) Either give up in disgust and let the language fanboys get on with it, or sit on the sidelines and snipe at both sides - similar to stage 2, but more consciously applied. Normally that progresses towards giving up, though, since the zealots are just too easy and predictable...
Finally (Score:1)
()
More efficient != better (Score:3, Insightful)
(Last Journal: Wednesday December 17 2003, @09:23AM)
Perhaps it is more efficient. I say, let the compiler do it for me. Code like this: is much more readable/maintainable than
Re:More efficient != better (Score:5, Informative)
J2SE has more coverage... (Score:1)
J2SE has more coverage, because this is the area where Sun is focusing right now on improving speed. J2EE has been fairly successful - also, since CPU, RAM, HD resources tend to be more excessive on servers than desktops, J2EE speed on the server isn't as critical than J2SE speed on the desktop. Getting Java-based desktop apps to perform as well as their C/C++ brethern is the 'holy grail' of Java/J2SE development right now, so the focal point of this book makes perfect sense.
Who cares? (Score:5, Insightful)
flippant comment but let's think about this for a second: The majority of the time the alleged efficiency advantage is small or, as is generally the case, a pointless optimisation. Java coders seem to have the major efficiency/speed hangup - they use it to lord it over scripting programmers but they want/lack/desire the swiftness of C. (And yes, I do program in Java.)
To my mind, this is approching the problem from entirely the wrong direction: CPU time and CPU power are far cheaper than developer time and designer time. Therefore, rather than use some cobbled-together hack, use the standard implementations and take the performance hit.
This will be cheaper, probably 95% as efficient and, most importantly, be 195% easier to maintain or change at a later date. Consider the big picture rather than a single aspect.
NB - YMMV, for certain apps, it really does make sense to break all of the above ideas and principles, but if you REALLY need it to run that fast, you should be using C anyway.
Elgon
Re:Who cares? (Score:4, Insightful)
()
That is the value I see from books like this.
Re:Who cares? (Score:4, Informative)
( | Last Journal: Wednesday April 11 2007, @09:55AM)
Which just brings me to my biggest beef about Java: no syntactic sugar. Operator overloading should be a part of Java, and bugger whatever the purists say. I want to save time typing dammit!
I'm offended! (Score:1, Funny)
Free Software in Java? (Score:1, Offtopic)
I have Limewire.org and DVarchive already. I know about Moneydance, which might be popular someday. Freenet might work well enough someday to qualify. Anything else? If you got 'em, post 'em.
Albatrosses (Score:2, Interesting)
()
Ick.
The albatross doesn't need killing -- it's already dead. The albatross was hanging from the mariners neck because he had killed it, and by doing so had brought bad luck upon his ship.
Quoting from memory here, because I can't be bothered to go find my copy of the poem:
As I said, that's from memory, so there are probably plenty of mistakes in there, but I'm sure a little googling will turn up a proper copy of the poem.
Killing the Albatross (Score:2)
()
It's all about the VM (Score:5, Insightful)
Java is plenty fast (Score:2, Informative)
Even with CPU specific optimisations, advanced compiler options etc, the Java version is 30-80% faster than GCC's binary. (this is on both AMD and Intel CPUs) To get anything faster, you'd have to pay for it.
I also do server side programming, and I don't see why so many Linux users complain about Java's performance, while using/promoting Perl and PHP. If you want a high performance, responsive site, Java completely blows Perl and PHP away. I've only used JSP and servlets so far but they're all most web sites need anyway.
Re:Java is plenty fast (Score:5, Informative)
Java Deserves more Credit (Score:1)
Native Code vs. HotSpot (Score:1)
I seem to remember seeing some benchmark that said that native compiled code was actually slower than the Hotspot JRE.
Can any confirm this and/or explain how this is possible?
Java performance is second priority for us (Score:3, Insightful)
( | Last Journal: Wednesday September 17 2003, @11:59AM)
Our bottleneck is how fast we can execute lots and lots of stored procedures in our SQL and Oracle databases.
It really hasn't mattered if one of our coders has been terminating loops via try{}catch{}, or ending on a condition.
The most important thing has been, "Does each line, each method, each class do what it's actually supposed to do?"
Our bottlenecks have always been flow back and forth between different systems, including Lotus Domino, Oracle, MS SQL Server, Websphere, etc. etc.
Java is a small player in all this... C++, C#, Fortran, Lisp would not speed this up for us.
Blah blah performance tuning... (Score:3, Interesting)
How much does that extra development time cost?
Writing ones' own java.lang.String takes time. Writing routines to convert com.donkeybollocks.String to java.lang.String and back again takes time. Supporting it takes time. And time is money. Me, I'd rather spend an extra £100 on a faster processor, or a Gb of RAM, and take a 25% performance improvement.
Come on guys, one of the major wins of the OO methodology is code reuse. Time was when programmers would always have to write their own I/O routines - I thought those days were long-gone. Rewriting fundamental parts of the Java API is just plain silly, unless it has a bug or a serious limitation (eg, it's non-threadsafe).
99% for me..... (Score:1)
I primarily use java, both for work and home projects, and I have to say that I have almost no performance issues with java. I think that the way CPU speed has increased (and will probably continue to do so) that bytecode interpreted languages now have less of a performance gap againts native compiled code.
Profiling as Early as Possible (Score:1)
Random performance finding (Score:1)
()
To all people claiming speed doesn't matter (Score:1)
( | Last Journal: Saturday May 03 2003, @11:59AM)
Well, yes it is, but it's not always that simple.
I have a Java app here I'm performance tuning for my PhD that allocates frequency hop sets to mobile phone networks. Running on a 2.2Ghz Athlon wih 512Mb RAM for a 15-transmitter test case takes an hour, it scales exponentially with transmitter size, and I want to address a 458 transmitter case. It's about 10^500 calculations, or it was before I started improving the algorithm. Even so, it's still going to be billions of iterations through the inner loop. Even a 1% speedup is hours off my runtime and that's a big thing for me.
So when you dismiss all performance tuning with a wave of your hand, remember us poor beleagured scientists. We actually need all this stuff.
Not a bad albatross (Score:3, Insightful)
When I write a Java program... if it's too slow today, then, in time, the problem will go away without any more effort on the part of the programmer. In a year from now, we'll certainly have faster computers, which will make up for any speed problems.
On the other hand...
A year from now, we will almost certainly not have CPUs that are suddenly immune from dangling pointers and memory leaks.
In other words, there are not plausible, near-future-forseeable advancements in computing hardware that could fix the worst problems of C/C++. Meanwhile, the near-future advancements in hardware are almost guranteed to fix Java's worst problem.
The same holds true for doing your computing today... regardless of what hardware is available a year from now. Personally, I'd rather have a slow program that could keep running than one that was really fast, but crashed before I could save my work.
can any one validate this? (Score:1)
()
Re:Oxymoron ? (Score:3, Insightful)
( | Last Journal: Wednesday April 11 2007, @09:55AM)
That said, "slow" performing Java GUI aps are not so much the fault of the platform itself as they are the fault of the Java programmer's inability to deal efficiently with threads.
Re:Oxymoron ? (Score:5, Interesting)
()
Previously, the startup slowdown was due to the system having to load, verify, and link the twenty or so classes a simple program depends upon. Pjava and J2ME-CDC solved that by storing an image of the heap with the system classes already loaded, verified, and linked (and quickened) so the system was run-ready almost immediately. I wonder if the J2SE folks picked up on that? Alternatively, they could just be skipping the verify for those classes in the signed rt.jar, and offline preverify them prior to signature - the verifier always was the slow part of the process.
Your point about threads is well taken, and applies more generally to much of java programming. Java's language and libraries make it all to easy to write architecturally-slow programs - you really still have to fully understand what you're doing in order to write a decent program, regardless of the language.
Re:Don't use Java.... (Score:3, Insightful)
()
The best thing about java is the richness of the api. And the size of the documentation. C++/C should take a page from java's book in this department.
You don't have to use the standard classes, go ahead and write the classes you need.
Jonathan
Re:Don't use Java.... (Score:4, Insightful)
(Last Journal: Friday November 26 2004, @05:49PM)
Well, gosh, you go right ahead and write your own replacement classes for everything that Sun has done already. What's stopping you?
That's exactly why I like Java. They have a lot of good built-in libraries that cover a wide-range of applications. I don't have to reinvent the freaking wheel every time I write an app.
Re:So (Score:2)
Re:Many Java projects rewritten in other languages (Score:1)
By the way - how much of Google or Yahoo is written in Java... let's see - none of it.
Every site makes their own choice of tools and technologies. Just because Google and Yahoo doesn't use Java, doesn't mean we can conclude it is useless
Also, where does the "Gig of RAM per web page view" figure come from?
Re:Many Java projects rewritten in other languages (Score:1)
()
yada yada yada
Gig of ram per web view? You're totally clueless...
Re:A warning sign (Score:1)
Lets just remove needless things such as:
oh yeah, they already stripped those out before it was ever released.
Actually I would much rather see a slower JVM and be able to have at least multiple inheritance. But alas, I am stuck with workarounds invloving interfaces.
The huge market for Java tuning books is probably more acurately attributed to coders' over emphasis on software tuning.
Tuning should be done wherever most time is spent in the software, not just willy nilly where a book tells you too.
Re:Cost of Java Performance (Score:1)
How do you "rip off" something in the public domain? Doesn't "public domain" mean "no copyright"? | http://books.slashdot.org/books/03/04/02/1847204.shtml | crawl-002 | refinedweb | 5,138 | 63.09 |
why java file save as its class name
why java file save as its class name hi,this is subbareddy vajrala.my doubt is why java files save as its class names
Java Compiler
. The file name must be the same as the class name, as classname.java.
When...;
The output from a Java compiler comes in the form of Java
class files (with .class... for the java source file and saves in a class file with a
.class extension
Name
all the name of that character
E.g User give character "B".
The program shows all the human names which starts from B.
Java show all names... the user and show all the names that starts with that character.
class
Java class name and file name are different
Java class name and file name are different Can we run a Java program having class name and file name different?
Hi Friend,
No, you can't.
It is necessary to save the java file with its class name otherwise
Java Compiler
compiler comes in the form of Java
class files (with .class extension). The java source code contained in files end with the .java
extension. The file name must be the same as the class name, as classname.java.
When the javac compiles
Java Compiler,Java Compiler Example
. The file name must be the same as the class name, as classname.java.
When the javac... from a Java compiler comes in the form of Java
class files (with .class extension... for the java source file and saves in a class file with a
.class extension.
The most
Java file get name
Java file get name
In this section, you will learn how to get the name... the name of the file.
Here is the code:
import java.io.*;
public class... name is: " + st);
}
}
Through the method getName() of File class, you can
java compiler
java compiler how we can convert .java file to a .class file without using javac
Java Program MY NAME
Java Program MY NAME Write a class that displays your first name...() { }, Then, method main should create an object of your class, then call the methods... should have the lines of code needed to display that
letter in a method
My name, java project
My name, java project Write a class that displays your first name... letter being displayed should have the lines of code needed to
display that letter in a method with the following name (for example, for
letter A,
public
Master Java In A Week
Master Java In A Week
Master Java Programming Language in a week... the
significance of Java Compiler. When we write any program in a text editor
like Notepad, we use Java compiler to compile it.
Interpreter
java compiler error
as well.But putting class B in different file named B.java compiler complains about...java compiler error I am trying to compile a simple program which... is in file B.java which is as follows:
public class B
{
public B
Java Get File Name
Java Get File Name
In this section, you will study how to obtain the name of file..."
to the constructor of class File. The method getName() returns the
need solution to get file name from path which should give by user
need solution to get file name from path which should give by user how do i write a program in java that i should get file path from arguments.currently this code creates the file at d:\ but i need to get this file name
Java Compiler Error - Java Beginners
Java Compiler Error I get this error when i compile this Java inheritance OOP. What I'm i doing wrong.
F:\Java\WorkerDemo.java:9: cannot find symbol
symbol : constructor ProductionWorker(java.lang.String)
location: class
Constructing a File Name path
Constructing a File Name path
... construct a file name path. By using the constructing
filename path it is possible to set dynamic path, which is helpful for mapping
local file name
Constructing a File Name path in Java
Constructing a File Name path
... is helpful for mapping
local file name with the actual path of the file using... will see how the
same program constructs a File object from a more complicated
Get Property by Name
to get Property by Name. For this we have a class name "Get
Property By Name". Inside the main method we have the list of method that
help you... by Name
Get computer name in java
Get computer name in java
We can get the computer name by the java code program.
For getting computer name we have used java.net.InetAddress class. We
will use static
class name
class name what is the class name of circle.java
how do i complie my jdk file, what happen s if i put in the correct commands and it still does not complie, what do i do next
Java example to get Object class name at runtime
Java example to get Object class name at runtime
java get Object class name
In java...;java RoseIndia
Constructor Called
Object's Class name =>
CORE JAVA get middle name
CORE JAVA get middle name hello sir...how to get middle name using string tokenizer....???
eg..like name ANKIT it will select only K...!!!!
... character from the name.
import java.util.*;
class GetMiddleCharacter
{
public
NAME SORTING. . .anyone? - Java Beginners
NAME SORTING. . .anyone? how can I sort names without using the 'name.sort' method?
please help. . .anyone?
the program should sort the first three(3) letters of the names
tnx java masters out there!! (^_^) cVm Hi
Find Name of Excel Sheet
This is basically stores the sheet name and tells where the
beginning of file record is within the HSSF file.
getSheetname():
This method is used to find the name...
Find Name of Excel Sheet
name of year in chinese
name of year in chinese hello my name kiemtheng i'm come from Cambodia, i would to write java programming to calculate a name of year for each...*;
import java.text.*;
public class FormatDate{
public static void main(String
Duplicate name in Manifest - Struts
Duplicate name in Manifest Hello I'm facing the following error... java.util.jar.Attributes read
WARNING: Duplicate name in Manifest: Depends-On.
Ensure... file.
how can I solve it?
Please help me..
Thanks. Hi friend
Java Compiler error - Swing AWT
Java Compiler error Hi,
I try to add quartz Lib in my HelloQuartz... can add Jar File in Java Build Path Library. i get this message : " No entries... org.quartz.impl.StdSchedulerFactory;
public class HelloSchedule {
public
Compiler errors in java
Compiler errors in java Hi,
I used GenerateRDF java file.
Am getting errors when i run this code.
I used command prompt only.
getting errors as no package exist.
i followed your instructions properly.
Please help me out
Logger in Java
into the log file.
The name of Logger are dot-separated and should be the package name or class
name. getLogger factory method provides the object of Logger...In this section we will learn how to use the Logger in
Java program. Logger
Java program to get class name without package
Java program to get class name without package ...:\javaexamples>java GetClassWithoutPackage
Full class name =java.lang.String... example which
describes you that how you can get the class name without package
Why PHP ?
Why PHP?
Reasons to use PHP are given below:
1. PHP is open.... Virtual hosting, large file support are the main features of web servers. Every webserver has an IP address and possibly a domain name. If you enter the URL
Master java in a week
;
Class Declaration:
Class is the building block in Java, each
and every methods & variable exists within the class or object... to
instantiate an object of the class. This is used by the Java interpreter
Use of name() function in XPath
This xml file contains various nodes and we have used name() function in
our XPathName class to retrieve name of different nodes according to
queries:
Here... Use of name() function in XPath
to retrive e mails as per user name - Java Beginners
mails as per user "user name "
for ex:
class Mail{
private String subject;
private String message;
private Date sentDate;
}
///
class...
output : should be like this
--------------
output:
-----------
username :bala
Duplicate name in Manifest - JSP-Servlet
Duplicate name in Manifest Hello,
I don't have the classpath... just put the current JDK that I'm using now.
C:\Program Files\Java\jdk1.6.0_12...
WARNING: Duplicate name in Manifest: Depends-On.
Ensure that the manifest does
Why we should use string args[] in main method in java?
Why we should use string args[] in main method in java? we use only string in the main method not any other one.. specify the reason...
and tell me...;import javax.swing.JOptionPane;
public class a3c4e23
{
public static void main
Java Constructor
to
instance variables of the class. Name of the constructor is same as class name... used
is "another" while second
is the main class by the name...; constructor.
Example of Java Constructor:
class another{
int x,y;
another
personal name aliases
personal name aliases enter the name search result in internet connection using java code
Please visit the following links:
Retrieving the class name through Reflection API
Retrieving
the class name through Reflection API
... of the class (that is used in the program) that reflects the
package name by using... the reference of the
class java.util.Integer.class to it. Now retrieve the class name
How to read text file to two different name array
How to read text file to two different name array I have those numbers:12,4,9,5
numbers:19,12,1,1
how to put it in two different name array in text file to java
Class
. Constructor is the method which name
is same to the class. But there are many difference... we assign the name of the method same as
class name. Remember...;
class_name object_name = new class_name();
Output of the program :
how to get java path name
Java error cannot find symbol
Whenever a Compiler does not recognize a class name, Java displays an error... the java error cannot find symbol. In this example a class name 'cannot find... is:
A programmer has misspelled the name of the class.
A programmer
Java program to get domain name by URL
Java program to get domain name by URL
We can also get the domain name by the java...:\javaexamples>java GetDomainName
Domain name by URL "
Finding out the super class name of the class
Finding out the super class name of the class
... name by using the
getSuperclass() method. The given example demonstrates
the use of getSuperclass() method in more detail.
Create a class "Fsupercls"
upload image and fields.....fields is id name.....
upload image and fields.....fields is id name..... Get Data using Java Servlet
The frame takes following input..
Id:
Name:
browse:Image are file
Java error cannot find symbol
;
The java error cannot find symbol occurred when a Compiler does not recognize
a class name. The following are the reason for such an error -
1)When a programmer misspelled the name of the class.
2When a programmer
class name
class name how to create a class named box thaT INCLUDES integer data fields for length,width,and height, include three contructors that require one,two,three arguments
Java Get Host Name
Java Get Host Name
In this Example you will learn how to get host name in Java. Go through...;
Java code to get host name
import java.net.
How to extract name,surname, doamin name from mailid
How to extract name,surname, doamin name from mailid Hi sir
How to extract name,surname, doamin name from mailid using java coding?
for example... name as java
plz tel me
Set the mapping name
in a base class. Why is this so important? Because, to use some types of aggregations, we should extend our action class from a specific subclass of Action...;
2. Set the action name to the action attribute
Find Your Host Name
Find Your Host Name
... the
host name of the local system in a very simple example. Here we are just call
the InetAddress class and after made a object of that class we call a
getLocalHost
Java Interpreter
options] class name [program arguments]
The class should be specified... interpreter. That is why Java applications are platform
independent. Java interpreter... virtual machine and runs Java applications. As the Java compiler compiles
Why <servlet-name> use in web.xml - JSP-Servlet
Why <servlet-name> use in web.xml WHY DO WE USE THE <SERVLET-NAME>TAG IN WEB.XML APPLCATION WHY DO WE USE THE <SERVLET...;web.xml > tag <servlet-name> tag specifies the name of the servlet
abstract class and interface - Java Beginners
? when should we use an abstract class?
when should we use interface instead of abstract class?
Hi friend,
Abstract classes
In java...() { System.out.println ("Moo! Moo!"); }
}
Why not declare an abstract class as an interface
Find Your Host Name/IP Address
the getLocalHost()
method to print the Host name as well as the IP Address of the local system. For
do for the same we have to call InetAddress class then we need to create...
Find Your Host Name/IP Address
Overview of Networking through JAVA,Find Your Host Name
Find Your Host Name
... the
host name of the local system in a very simple example. Here we are just call
the InetAddress class and after made a object of that class we call a
getLocalHost
Reading xml file using dom parser in java with out using getelementby tag name
Reading xml file using dom parser in java with out using getelementby tag name Hi, How to read the xml file using java with dom parser, but without using getelementbytag name, and also read the attribute values also.
I had
missing some import class(may be) - Java Beginners
on the class name"class shape is public,should be declared in a file named shape.java",
what i missed? Hi,
You have to change the java file name to "shape.java".
In java file name and class name should the the same
Difference between Java 6 and Java 7
, JDBC 4.0 API etc. Apart from theses Java 6 was also equipped with Java Compiler... at performance levels near to that of the Java language itself - Strict class-file checking: Class files of version 51 (SE 7) or later must be verified
Java get class directory
Java get class directory
... to fetch the class directory's path by the class name. To get the class
directory's...:file:/C:/Java/jdk1.6.0_03/jre/lib/rt.jar!/java/lang/Object.class
Download
why we use abstract class in java?
why we use abstract class in java? what is the the purpose of abstract class.Give example when to use abstract and when use interface
different output trying to execute same java code
different output trying to execute same java code i am using net... on the jar file and tried to run, JOptionPane showing "portList : false"
Why there is different output trying to execute same java code
Match the name with a description of purpose or functionality, for each of the
following deployment descriptor elements: ejb-name,
abstract-schema-name,
ejb-relation,
ejb-relat
Java identifier and MUST
be UNIQUE within the ejb-jar file.... The
abstract-schema-name MUST be a valid Java identifier and
MUST be unique within the ejb-jar file.
The abstract-schema-name element
Getting the Parents of file name path in Java
Getting the Parents of file name
path in Java... uses getParent() method of the File
class object to find the parent directory
name of the current directory.
getParent():
Above code returns the parent
Getting the method name used in the Application
Getting the method name used in the Application
... name
by using the
getMethods() method. Here is an example that
demonstrates the use of the getMethods() method in more detail.
Define a class named "
different output trying to execute same java code
" portList : false"
Why there is different output trying to execute same java...different output trying to execute same java code i am using net beans 7 ide and java 6 to develop my java projects. i used the following coding
Change Column Name of a Table
Change Column Name of a Table
... for
renaming a column name in a database table. As we know that each table keeps
contents in rows and column format. While making a table we specify the name
Class in Java
Class in Java
... structure.
First of all learn: what is a class in
java and then move on to its... for constructing java program:
package package_name
java client server program for playing video file(stored in folder in the same workspace) using swings
java client server program for playing video file(stored in folder in the same... file name");
l.setBounds(50, 30, 100, 50);
getContentPane().add(l... a client server program to play a video file, when I run both client and server
Class and interface in same file and accessing the class from a main class in different package
Class and interface in same file and accessing the class from a main class... a class named CantFly in same file. A main method CheckAnimal.java. Below... pattern videos on YouTube, however I have a small doubt on some basic Java concept
Domain Name
. More than one domain name can be mapped to the same Internet address. This allows... identities while sharing the same Internet server.
Getting a domain name....
So, domain name is must for a business and business owners should understand
How to get month name from date(like-25/06/2012) using java?
How to get month name from date(like-25/06/2012) using java? How to get month name from date(like-25/06/2012) using java
Java file class
Java file class What is the purpose of the File class
Java class file
Java class file How do i open a java class file using eclipse
Getting ISD code when user input country name
have created a text file which includes pipe delimited with country name and ISD code, from that list i have to read the file and when user input any country name... program which user needs by typing Country name he will get ISD code of that country... cannot find servlet class org.apache.jsp.index_jsp or a class it depends
Set the mapping name
and the same Action class name can be used.
What are the disadvantages... mapping can be created while the action class remains the same. ... the other does not but both uses the same Action class. In such cases, two new
Find the Host name in reverse of given IP address
Find the Host name in reverse of given IP address... to
retrieve the information about the local host name using the getHostName()
method... detail.
Create a class "IPReverse". Call the
InetAddress
Change Background of Master Slide Using Java
Change Background of Master Slide Using Java
... to create a slide then change background of the
master slide.
In this example we are creating a slide master for the slide show. To create
slide show we
Pkg Inheritance under same pkg - Java Beginners
Pkg Inheritance under same pkg
Hi Friends
I want to extend the Predefined ( . java File ) class in a another inherited class( .java file ) in the same pkg
If this is allowed in the same pkg, please tell me how
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/37432 | CC-MAIN-2015-40 | refinedweb | 3,288 | 72.87 |
Here are some notes that are intended to help you to successfully complete the projects and programming assignments in our classes. They cover various aspects of software development, e.g. the choice of development platform, source code management tools and testing strategy. This page does not teach you e.g. C++ (how could it?), but is rather intended to refer you to material and tools that are helpful and prevent you from making bad decisions when starting a new project.
This page is work in progress, but please contact me if you have any comments.
You can use whatever operating system you want for development (of course), as long as your programs can be built and run on Linux. Especially when low-level code is involved (as e.g. in the
Please write your programs in C++ (C++11, to be precise). I would claim that C++ is the best choice for writing a database system as it gives you control over low-level details such as memory allocation and layout and allows you to write high-performance programs (unlike Java and other managed languages). Additionally, it is more expressive than C and has various features that simplify your life (unlike low-level languages). The downside is its steep learning curve. Nevertheless, I would recommend using the projects/assignments as an opportunity to learn C++ or improve your skills. C++ skills are rare and in demand.
In particular, make use of these C++11 features:

- auto type inference
- range-based for loops
- unique_ptrs. There are typically very few cases where you cannot use either a reference or a unique_ptr (also important to mention in this context: move semantics).
- the hash-based containers unordered_set, unordered_map, unordered_multiset and unordered_multimap
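To illustrate these features, here is a small self-contained sketch (the type and function names are made up for illustration, not part of any assignment skeleton):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Illustrative record type.
struct Page {
    unsigned id;
    std::string data;
};

// unique_ptr expresses exclusive ownership; no manual delete needed.
// (std::make_unique is C++14, so we call the constructor directly here.)
std::unique_ptr<Page> makePage(unsigned id, std::string data) {
    // std::move avoids copying the string into the new Page
    return std::unique_ptr<Page>(new Page{id, std::move(data)});
}

// Count how often each word occurs, using a hash-based container.
std::unordered_map<std::string, unsigned> countWords(const std::vector<std::string>& words) {
    std::unordered_map<std::string, unsigned> counts;
    // range-based for loop; auto infers the element type
    for (const auto& w : words)
        ++counts[w];
    return counts;
}
```

Passing the Page around by unique_ptr (or handing out plain references to it) makes the ownership obvious from the signature alone.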
Please refrain from using any libraries other than the STL (and googletest for unit testing) in your projects/assignments unless you have checked with me first.
Please comment your code. Comment all class definitions, non-trivial member functions and variables, and steps in your algorithms. Also, please use a consistent coding style across your project. I don't want to overspecify this, but be consistent with indentation (tabs or spaces) and naming schemes (e.g.
UpperCaseCamelCase for classes,
lowerCaseCamelCase for variables/methods/functions).
CppQuiz.org is a great resource for testing your understanding of the language!
Keep your project tidy on a file system level by using subfolders for different parts of you code. Example:
MyDBMS
+-Makefile
+-.gitignore
+-README.md
+-bin
+-index
| +-HashTable.hpp
| +-HashTable.cpp
| +-BTree.hpp
| +-BTree.cpp
+-buffer
| +-BufferManager.hpp
| +-BufferManager.cpp
| +-AsyncWriter.hpp
| +-AsyncWriter.cpp
+-testing
  +-BufferManagerTest.cpp
  +-HashTableTest.cpp
You don't have to replicate the exact same structure as depicted above, but should ensure the following: Separate binary files from source files, split your code into different components (here: index structures and buffer management) and separate production code from unit tests. If you build libraries, you should also separate
.hpp files from
.cpp files (this is not done in the example).
There are some files that make sense to keep at the top level:
Makefile is a file used by the build system make, and
.gitignore is an ignore-list for your Git repository.
README.md does not need to include a lengthy description of what your project is about (I know this anyways), but should briefly specify how your project can be built and run (including which parameters), how you tested it (platform, configuration, parameters, etc) and what issues you are aware of.
There are many source code management systems out there -- I have a clear favorite: Git. It has numerous advantages compared to its competitors and is lightweight and easy to set up & use. There is a great free book, a free, interactive tutorial and there are great cheat sheets available to get you started. Plus, github and Bitbucket give out free, private repositories to university students. Even though you do not need these in order to use Git, they can be helpful for collaboration and as a backup.
Once installed, setting up Git for you project is as easy as
git init (make this directory a Git repository),
git add <file1> <file2> ... <fileN> (add file1 to fileN to repository) and
git commit -m "initial commit" (committing the changes, i.e. the addition of file1 to fileN). Refer to the resources mentioned above to learn how to take it from here. Please maintain a .gitignore file to exclude any unwanted files (e.g. the directory with the binary files, backup/temporary files of your editor/IDE, large files containing generated test data, ...) from your repository.
When submitting your code or your solution of an assignment, instead of emailing me your (compressed) project files, you can simply refer me to your Git repository (I prefer that). Make sure I have read access to your repository, then send me an email (before the project is due) with the repository information and indicate which branch and commit ID you want me to grade.
Any publicly available build system is okay with me (as long as it runs on Linux), but especially for single-platform projects, good old make is an excellent choice. In a
Makefile, you specify how your system can be built in an easy format:
target: dependency1 dependency2 dependencyN
	command1
	command2
	commandM
This tells make that the target (usually an object file or a binary) depends on the files dependency1 to dependencyN. I.e. if one of these dependencies has been updated since the last build, the target has to be rebuilt, and this can be done by invoking the commands command1 to commandM. The only pitfall is that the commands have to be indented using tabs, not spaces.
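As a concrete illustration (file names taken from the example layout above; the compiler flags are just one possible choice), a minimal Makefile for one test binary could look like this — note that the indented command lines must start with a real tab:

```make
CXX      = g++
CXXFLAGS = -std=c++11 -g -Wall -Werror

bin/BufferManagerTest: buffer/BufferManager.o testing/BufferManagerTest.o
	$(CXX) $(CXXFLAGS) -o $@ $^

# Pattern rule: any .o file is built from its .cpp file.
%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c -o $@ $<
```

Here $@ expands to the target, $^ to all dependencies and $< to the first dependency.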
I have not found the perfect
make tutorial yet, but this one seems decent and brief.
make has many powerful functions that help you keep your
Makefile concise. However, if you are a
make-novice, it's best to stay away from stuff you don't fully understand, as debugging
Makefiles is no fun.
GCC's
g++ is popular and proven, while LLVM's
clang++ is also a great, free C++ compiler and a promising challenger to
g++ (especially its comprehensible error and warning messages are compelling). For both, I recommend the latest version, in particular because C++11 support is constantly being improved.
Useful compiler flags:
-std=c++11/
-std=c++0x: Enable (experimental) C++11 (C++0x) support
-g: Generate debug symbols
-O0: Disable optimizations to allow for more reliable debugging (update: use
-Og, if supported by your compiler). Use
-O3 when running benchmarks.
-Wall (GCC) resp.
-Weverything (Clang): Generate helpful warnings. Do not ignore them! In fact, force yourself to deal with warnings by turning them into errors with
-Werror.
Use a debugger to find bugs, don't rely on debug output. Good debuggers: GDB (the GNU debugger) and LLVM's LLDB. Most IDEs have a graphical debugger front-end, but the command line can already be very helpful when your program crashes. There's a curses-based interface for
gdb, called
cgdb that I can recommend. Little known fact: GDB now supports (limited) reverse debugging.
If you have never used a debugger, check out this toy example:
#include <iostream>
#include <cstdlib>

int bar(int len, char* args[]) {
   int sum = 0;
   for (unsigned i=0; i<len; ++i)
      sum += std::atoi(args[i]);
   return sum;
}

void foo(int len, char* array[]) {
   if (len > 1)
      std::cout << bar(len,array+1) << std::endl;
   else
      std::cout << 0 << std::endl;
}

int main(int argc, char* argv[]) {
   foo(argc, argv);
   return 0;
}
me@mymachine:/tmp$ ./test 123 456 789
Segmentation fault (core dumped)
me@mymachine:/tmp$ gdb --args ./test 123 456 789
GNU gdb (GDB) 7.5-ubuntu
Copyright (C) 2012 Free Software Foundation, Inc.
...
Reading symbols from /tmp/test...done.
(gdb) run
Starting program: /tmp/test 123 456 789

Program received signal SIGSEGV, Segmentation fault.
____strtol_l_internal (nptr=0x0, endptr=0x0, base=10, group=<optimized out>, loc=0x7ffff7ad1040 <_nl_global_locale>) at ../stdlib/strtol_l.c:298
(gdb) bt
#0  ____strtol_l_internal (nptr=0x0, endptr=0x0, base=10, group=<optimized out>, loc=0x7ffff7ad1040 <_nl_global_locale>) at ../stdlib/strtol_l.c:298
#1  0x00007ffff77519e0 in atoi (nptr=<optimized out>) at atoi.c:28
#2  0x0000000000400898 in bar (len=4, args=0x7fffffffe040) at test.cpp:7
#3  0x00000000004008db in foo (len=4, array=0x7fffffffe038) at test.cpp:13
#4  0x0000000000400934 in main (argc=4, argv=0x7fffffffe038) at test.cpp:19
(gdb) f 2
#2  0x0000000000400898 in bar (len=4, args=0x7fffffffe040) at test.cpp:7
7	      sum += std::atoi(args[i]);
(gdb) p i
$1 = 3
(gdb) quit
By default, printing objects works well for built-in types in GDB, but is often less helpful for STL data structures (e.g.
std::string or
std::vector) as you only see memory addresses, not the content of the container. Here is some information on how to change this for GDB. LLDB has a similar feature called synthetic children.
If your program behaves somehow "indeterministic" or "mysterious", Valgrind is your friend. Valgrind's memcheck finds illegal accesses to memory, uninitialized reads and much more. The option
--db-attach=yes starts the debugger when an error is found. Check out this blog post on the interaction between GDB and Valgrind.
Valgrind's Helgrind and DRD can help you find thread-related problems. This short blog post gives some helpful advice on how to detect the cause of a deadlock. A significantly faster and only marginally less thorough alternative to Valgrind's
memcheck is AddressSanitizer.
Before making a commit in your SCM system, make sure your program is
memchecked and passes all unit tests.
If you prefer an IDE over a setup with just an editor and a command line, Eclipse with CDT is a good (but heavyweight) cross-platform IDE. KDevelop is also a good choice for KDE users. Both have the advantage that you can easily import
make-based projects and build your programs from within the IDE using make. C++ guru (and Microsoft employee) Herb Sutter recommends the free version of Microsoft Visual C++ for Windows users.
Unit tests help to improve the correctness of your code and prevent regressions. googletest is a great unit-testing framework for C++. Use it to write test cases for each class/algorithm that actually try to break your code.
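A minimal sketch of the idea (the divide function is a made-up stand-in for your own classes, and plain asserts are used instead of googletest's TEST/EXPECT macros so the snippet has no dependencies):

```cpp
#include <cassert>
#include <stdexcept>

// Toy "production" code standing in for a class under test.
int divide(int a, int b) {
   if (b == 0)
      throw std::invalid_argument("division by zero");
   return a / b;
}

// A good test case probes the edge cases, not just the happy path.
// With googletest this would be a TEST(...) block using EXPECT_THROW.
bool divideThrowsOnZero() {
   try {
      (void)divide(1, 0);
   } catch (const std::invalid_argument&) {
      return true; // the expected failure mode
   }
   return false;
}
```

The point is to actively look for inputs (empty containers, zero, duplicates, boundary sizes) where your implementation is most likely to be wrong.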
bcov is a code coverage analysis tool that tells you how much of your code is covered by your unit tests. Using a code coverage tool is probably a case of using a sledgehammer to crack a nut for your (smallish) project, but I found it worth mentioning... especially since my boss wrote it.
Profilers help you to understand the performance of your program (and the environment it is running in). As profiling is probably not required for assignments/projects (but may help!), I keep this section short. To put it bluntly: Avoid
oprofile (and
gprof), prefer
perf. It is easy to use
perf and it helps you understand where you spend your CPU cycles, how many cache misses you produce and much more.
If you qualify for a student license, you can get Intel® VTune™ for free. Check it out, it's complex but amazing.
Great! Talk to me about a student job. | https://db.in.tum.de/~funke/projects.shtml | CC-MAIN-2018-47 | refinedweb | 1,839 | 64.61 |
SVG links
Contents
- 1 Definition
- 2 Tools
- 3 Standards
- 4 Manuals and Textbooks
- 5 Example sites
- 6 Tutorials
- 7 General SVG resource pages
- 8 Communities
1 Definition
This is a links page about SVG. According to Wikipedia,
SVG images can be produced by the use of a vector graphics editor, such as Inkscape, Adobe Illustrator, or CorelDRAW, and rendered to common raster image formats such as PNG using the same software.
Software can be programmed to render SVG images by using a library such as librsvg or Batik. SVG images can also be rendered to any desired popular image format by using the free software command-line utility ImageMagick. (retrieved 18:45, 28 March 2012 (CEST))
This page should help people finding tools, texts (introductions, tutorials, manuals, ...) and other resources. Read the SVG article for a short technical overview.
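For orientation, a minimal stand-alone SVG file (hand-written here just as an illustration, not taken from the sources below) looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <!-- a blue circle with a black outline -->
  <circle cx="50" cy="50" r="40" fill="blue" stroke="black" stroke-width="2"/>
</svg>
```

Any of the viewers and editors listed on this page should render it as-is.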
Updated: Checked all (well most) links on 18:45, 28 March 2012 (CEST) - Daniel K. Schneider
2 Tools
2.1 Indexes for SVG tools
- Comparison of vector graphics editors (Wikipedia). Start by looking at File format support
2.2 Editors and drawing tools with SVG support
- Text (programming) editors with SVG support
- You can hand code SVG with any XML editor
- Free drawing programs with native SVG
- Inkscape. Inkscape is a free (GPL) open source SVG editor with capabilities similar to Illustrator, CorelDraw, Visio, etc. Supported SVG features include basic shapes, paths, text, alpha blending, transforms, gradients, node editing, svg-to-png export, grouping, and more. Available for Win, Linux, source. Recommended, but the tool requires learning and SVG code is not optimised for the web.
- Free online drawing programs with native SVG
- SVG Edit is an online vector graphics editor that uses only JS, HTML5, CSS and SVG (i.e. no server-side functionality). You can use this online web 2.0 tool without installing anything on your computer, then "save as" in your navigator. Recommended: This tool is the best thing for learning SVG and for creating shapes that you later want to hand edit for animation and scripting
DrawSvg.org, another online SVG editor.
- Imagebot by Flamingtext is a simple to use SVG editor. No source code editing but you can import SVG and save-as svg.
- Pilat drawing tool (online)
- Free drawing programs that can import/export SVG
Skencil, a free Linux/Unix vector drawing program, can import/export SVG (formerly "Sketch", not tested).
- OpenOffice and Libre Office (a fork of OpenOffice) can import/export SVG.
- DIA, a popular open source diagramming tool can import/export SVG (Win/Mac/Linux). There are also portable app (Windows) and zip versions (good for teachers who live in a restrictive environment...). Recommended.
- Commercial drawing programs with native SVG
- Sketsa SVG Graphics Editor by Kiyut, $89 Payware, Java 1.7+ based, demo and Java Webstart version available.
- As of Jan 2014, this is probably the best pure SVG 1.0 tool, i.e. it is best suited for creating web graphics and adding animation and interactivity. In terms of pure drawing operations, it can do less than Inkscape, but it is much easier to use and also produces pure SVG, i.e. web ready.
- There is no animation and interactivity support (as in Flash CS6), but both the built-in text and DOM editors allow to add SMIL animation code without problems.
- Versions 6.6 (jan 2012) and 7.1x (jan 2014) tested with Ubuntu 10.4 and Win7. Claims to support SVG 1.1 full profile. You can edit code with a structure editor, e.g. for doing SMIL animations.
- Hairy models from openclipart.org import rather well.
- Commercial drawing programs with SVG import/export
These programs can usually both import/export SVG. However, you may loose information in both cases.
- Adobe Illustrator can import/export SVG
- SVG Kit for Adobe Creative Suite is a plug-in, which adds full-functional SVG and SVGZ file support to Adobe Creative Suite (Photoshop, PS Elements and InDesign).
- Corel Draw can import/export SVG (not tested recently)
- Microsoft Viso can import/export SVG (look at DIA above if you want a free solution)
- SiteSpinner. (formerly WebDwarf). Drag-and-Drop Visual authoring and animation for HTML, DHTML, and SVG. ($50) - 18:45, 28 March 2012 (CEST)
- Scribus, an open source document processing tool that can (only) import SVG
2.3 SVG support in web browsers
As of Jan 2014 all modern browsers do provide basic SVG support. None is fully complete, i.e. CanIUse (Jan 2014) computes overall compliance of 80 to 90 % for each major browser, except IE.
IE 11, Opera mini and IE Mobile still do not support SMIL-type animation.
Can I use provides more details.
See also Comparison of layout engines (SVG) (Wikipedia). There is also an interesting SVG test suite by Marek Raida.
2.4 Other viewing software
- Plugins for web browsers
Since 2010, SVG plugins are no longer needed! Information below is kept for historical reasons.
The Adobe SVG Viewer, worked with IE and Firefox. Adobe discontinued support after they finally managed to acquire Macromedia (and Flash). The quality of this plugin was impressive. It implemented all of SVG 1.0. After Adobe dropped the project, getting it to work with IE 7 and Firefox 3 (or 4?) was quite a nightmare...
- Special purpose web browsers
The X-smiles XML-browser (a research browser made to explore various new web formats) supported SVG (uses CSIRO SVG viewer), also SMIL, XForms, X3D, etc. (2008)
- Viewers
- Batik is mainly a toolkit for server-side applications, but it also includes a good stand-alone SVG browser called Squiggle that fully implements static SVG and scripting (but you may as well use Firefox or Opera for this). We use this toolkit for this wiki in order to render SVG graphics as PNG, but you also may use it as a helper application for your web browser.
2.5 SVG emulators
- SVGWeb. A google project that translates SVG to Flash. SVG Web is a JavaScript library which provides SVG support on many browsers, including Internet Explorer, Firefox, and Safari. Using the library plus native SVG support you can instantly target ~95% of the existing installed web base.
2.6 Convertors
- Most vector graphic software can import/export in several formats (see above). E.g. to import SVG in Flash one can open the SVG in Illustrator and then select all/copy/paste to CS3/4/5
- SVG Import Filter for OpenOffice 2 (tested with OO 3.1 / Ubuntu: Import worked, but I did not test much - Daniel K. Schneider 13:51, 12 May 2010 (UTC)). An imported image then can be exported in other formats, e.g. WMF.
- SVG to PNG converter (old Firefox 3 extension)
- Vector Magic. Popular online tool for converting Bitmap Images To Clean Vector Art (not tested - Daniel K. Schneider 18:45, 28 March 2012 (CEST)).
- SVG Maker Commercial multi-format to SVG translator (not tested - Daniel K. Schneider 18:45, 28 March 2012 (CEST) !)
2.7 Validation and cleanup tools
- W3C Validator
- Simplify SVG for HTML5 App Debugging. A "simple" bash shell script to clean up Inkscape code.
2.8 Online editors
2.9 Toolkits for programmers
(needs updating - Daniel K. Schneider 18:45, 28 March 2012 (CEST) !)
- CariSVG (zoom, animation, drag and drop, etc.)
- Batik SVG Toolkit. It is mostly used for server-side applications. E.g. this Wiki uses Batik to translate SVG into PNG. There is also a stand-alone batik viewer.
phpHtmLLib. A set of PHP classes and library functions to help facilitate building, debugging, and rendering of XML, HTML, XHTML, WAP/WML Documents, and SVG, and complex widgets. Stable and popular - Version 3.02 (March 2008). Supports basic SVG functionality, an image and a graph class. E.g. see this SVG Graph
- SVG-TT-Graph , a popular Perl module for line graphs, bars and pies [1/2004, not tested]
- More PHP kits can be found on the web, see also our php example directory (some code is disabled for security reasons).
- Pergola JavaScript SVG Library
3 Standards
- The official SVG specification at W3C. (Scalable Vector Graphics (SVG) 1.1 (Second Edition), W3C Recommendation 16 August 2011)
- Spécification SVG 1.0 (français)
- Mobile SVG Profiles: SVG Tiny and SVG Basic W3C Recommendation 14 January 2003, edited in place 15 June 2009 (SVG on the road !)
- W3C Validator. This popular online validation tool can validate SVG up to a certain extent. It will mostly just check syntax, i.e. it cannot detect bad attribute values.
- SMIL animation SVG has incorporated SMIL-type animations (a superset). This means that SMIL tutorials will also be useful to you.
- ECMAScript Language Binding for SVG (just SVG specific stuff, SVGElement has all the properties and methods of dom::Element !)
- sXBL (W3C Working Draft 15 August 2005). SVG's XML Binding Language (sXBL) is proposed as a mechanism for defining the presentation and interactive behavior of elements described in a namespace other than SVG's. (This will allow definition of high-level tagsets). [9/2004]
4 Manuals and Textbooks
4.1 Free Online Textbooks
- SVG Essentials The free wiki version of O'Reilly's SVG Essentials book by J. David Eisenberg (2002)
- Javascript DOM and SVG database (from Pilat). This a searchable and browsable database of JavaScript tricks including SVG DOM . Also includes examples.
- Learn SVG from the authors of the same print book.
- The tutorials include an interface lists attributes for tags and then provides both examples and code. Recommended.
- Start from Basics.
- For DOM Scripting, see Scripting tutorial
- You also can downlaod chapters of the original book. However, some examples probably will need fixing.
- An SVG Primer for Today's Browsers W3C Working Draft — September 2010. (200+ pages)
- Learn SVG Interactively by Jay Nick. This is a free IBook (for Mac or iOS device only).
4.2 Online references
(easier than looking at standards)
- SVG at Mozilla Developper Network. As of April 2012, probably the most useful reference.
- At Zvon (also includes examples):
- SVG Scripting Reference Internet Explorer Developer Center (Microsoft)
- SVG reference at W3Schools. As of March 2012, probably too concise to be useful to beginners.
4.3 Books
- HTML5 Graphics with SVG & CSS3, Drawing with Markup. By Kurt Cagle. Publisher: O'Reilly Media. Released: May 2012 (est.)
- Kurt Cagle is an SVG expert. The book should be good (when it comes out ...)
- SVG Essentials. By J. Eisenberg. Publisher: O'Reilly Media. Released: February 2002. Pages: 358
Note: you may need to add the attribute xmlns:xlink="http://www.w3.org/1999/xlink" within the first SVG tag of examples that break with a "prefix not bound to a namespace" message.
- Building Web Applications with SVG, by David Dailey, Jon Frost and Domencio Strazzullo, Publisher: Microsoft Press. Also available as e-book at O'Reilly. There is a sampler.
5 Example sites
Once you have some basic understanding of SVG after reading a tutorial or two, you probably will learn best by example.
5.1 General
SVG animation with JavaScript and SMIL by David Dailey & al. (last checked 2013, and at this time (still) one of the best example sites). D. Dailey is the author of the "Building Web Applications with SVG" book.
- Official SVG Test suites (includes good examples, but browsing and finding takes some time....)
LearnSVG. Good sample extracts from the "Learn SVG: The Web Graphics Standard" by Jon Frost, Stefan Goessner and Michel Hirtzler book. Revisited by Robert DiBlasi and Tobias Reif. (Some examples may lack namespace definitions and will not run unless you fix these - but that may have been fixed by now - 18:45, 28 March 2012 (CEST)).
- Marek Raida's SVG blog. Includes some good recent examples and also some kind of benchmarking suites. (last checked Mars 2011).
- SVG - Learning By Coding Site with many very nice examples including advanced topics (in German), easy browsing and looking at code.
Croczilla SVG resources has a good list of examples, in particular XHTML/SVG integration and DOM scripting
- Pilat Several kinds of examples in french (e.g. SVG/Javascript, complex applications. There is an English version. This website has quite a lot of information.
- TECFA example directory (not very interesting, but I use them in my teaching slides) - Daniel K. Schneider.
Wikipedia German includes 2 interesting animation examples
- HTML 5 demos that may include SVG
- Chrome Experiments (Not your mother's JavaScript): E.g. SVG Mazes and Rotating Spiral
- Mozilla labs gaming includes some examples made with SVG, e.g. SVG-EDU
5.2 SVG Clipart
- Open Clip Art Library (openclipart.org). Recommended
5.3 Code fragments
- DOM tree output ECMA Script from MecXpert
- Web Solutions for Mechanical Engineering, has a nice list of code fragments.
6 Tutorials
6.1 General overviews
- Scalable Vector Graphics (Wikipedia article).
- Prince SVG Tutorial (essentials)
6.2 Basic Tutorials
- Updated, i.e. example code should work in modern web browsers
- SVG Tutorial, by Matthew Bystedt, November 2012
- SVG Essentials The free wiki version of O'Reilly's SVG Essentials book by J. David Eisenberg (2002)
- Pilat Several tutorials + examples + complex applications.
- Using SVG For Flexible, Scalable, and Fun Backgrounds, Part I by Shelley Powers, A LIST apart. Jan 2010 and Using SVG for Flexible, Scalable, and Fun Backgrounds, Part II Jan 2010. SVG and XHTML.
- Basic Shapes, chapter three of David Eisenberg's "SVG Essentials" O'Reilly book. (2002)
- See also: Example SVG sites (above). Many are in made for learning, e.g. include explanations and comments.
- Create client-side diagrammatic interaction in Web applications with SVG by Cameron Laird, IBM Developerworks (2010), retrieved 18:39, 21 March 2011 (CET).
- SVG Tutorial at W3Schools
- Older materials
Code may be broken, but these texts may still be useful
- cours SVG de 2 jours (in french) made by E.Sierra and M-A Thibaut 2003 ("learning by teaching", some code broken)
- Introduction à SVG (my own slides)
- SVG at Dev.Opera. Several tutorials (including SMIL tag animation, use Opera to view this.)
- 2 day SVG course in french made by N. Nova and Y. Grassioulet 2002 ("learning by teaching"). Needs updating !
- An Introduction to Scalable Vector Graphics by J.D. Eisenberg, XML.com, 2001.
- Scalable Vector Graphics: The Art is in the Code Eddie Traversa, Webreference (2001)
- Developer tutorials (incl. Dynamic SVG with JavaScript), part of adobe's illustrator/svg zone
- KevLindeV Kevin Lindev's SVG site (tutorials and examples)
- SVG - Open for Business, a (short) webreference.com tutorial (2002)
- Introduction to Scalable Vector Graphics by Nicholas Chase, IBM developerWorks, 2004.
- Pike's SVG tutorial
6.3 SVG with JavaScript and DOM
- Add interactivity to your SVG by Brian John Venn, IBM developperworks article, Aug 2003. Good introductory SVG/DOM/JS tutorial.
- Manipulating SVG Documents Using ECMAScript (Javascript) and the DOM at carto:net (2009). This page includes a set of interactive commented examples.
6.4 SMIL Animation
(as of fall 2011 supported in all modern browsers)
- Animate Your HTML5. A tour of HTML5 animation techniques with CSS3, SVG, Canvas and WebGL, by Martin Gorner - Google developer relations, last checked on Jan 2014.
- SVG animation with SMIL (Mozilla dev network). Ok, but too short as of March 2012.
- SVG Animation (Inkscape Wiki)
- Digging Animation, by Antoine Quint, XML.com, 2002 (SMIL animation, most examples seem to be broken as of 18:45, 28 March 2012 (CEST)).
- Rather see the various example sites
6.5 Server-side SVG
- Combiner SVG avec PHP JDNet développeurs article de Xavier Boerderie 12/2002.
- Creating Scalable Vector Graphics with Perl by K. Hamptom, xml.com, 2001.
- Example for Serverside SVG generation with PHP from the excellent carto.net website.
- Google SVG Search a webreference.com XML column
- Example for getURL & parseXML of the Adobe plugin. The same tutorial gives additional links, e.g. for postURL() that can be used to save data. (Outdated, since the Adobe plugin is dead!)
6.6 SVG and XSLT
- Client-side image generation with SVG and XSLT by Inigo Surguy. Interesting
Component-Based Page Layouts xml.com article by Didier Martin showing how to embed SVG in XHTML using XSLT processing
6.7 Various articles (not classified)
- W3C Graphics Activity Statement. This shows the place of SVG in the overall W3C strategy.
- W3C Scalable Verctor Graphics (SVG). This is the overview page (includes News and Pointers)
- Picture Perfect xml.com, (2001)
- SVG: A Sure Bet by Paul Prescod, xml.com (July 2003)
- Flash is Evil by dack (not related to svg but food for thought)
Vector-based Web Cartography: Enabler SVG (Scalable Vector Graphics) (french & german version also available). This is an early (good) propaganda article for SVG.
- SVG - a new dimension in producing interactive netbooks by Vladimir BATAGELJ and Matjaž ZAVERŠNIK (PDF article). This is an overview paper
7 General SVG resource pages
- The Ultimate Guide to SVG, 2015
- User Documentation at Inkscape. Mostly documentation for Inkscape, but also includes some links to general SVG tutorials, e.g. the SVG Animation in its own wiki.
SVG - Learning By Coding Site with many very nice examples including advanced topics (in German), easy browsing and looking at code. Many examples need a state-of-the-art browser (e.g. Chrome 10) or else the old Adobe plugin. Best site in German.
Pilat Informative Educative [French]. SVG - PHP - MySQL - Javascript. The best French site. Also includes an English variant with (less?) contents.
- svg.pagina.nl Good dashboard page page by Ruud Steltenpool. Many good links.
- SVG Page at DMOZ (for nostalgic people, when links pages were made by humans)
- carto:paperssvg (A.M. Winter & A. Neumann). VERY good SVG resources, examples, etc. Swiss made :) .... but less active since 2007 :(
- W3C Scalable Verctor Graphics (SVG). This is the W3C SVG overview page (includes News and Pointers)
- Sacré SVG, a XML.com column (not updated since 2005).
- SVG Open conference series. Usually, papers are open content.
- dev.opera includes several good svg articles (including some good live examples, but you will have to search ...)
- SVG magazine. The quarterly publication covering technique, coding, art, reviews and more. So far, there are 2 issues - 18:45, 28 March 2012 (CEST).
- IBM Developer Works (search for "SVG" gives 709 results - 18:45, 28 March 2012 (CEST)).
8 Communities
- SVG-Developers (Yahoo group) | http://edutechwiki.unige.ch/en/SVG_links | CC-MAIN-2018-26 | refinedweb | 3,010 | 59.5 |
Captures a screenshot of the game view into a Texture2D object.
When the superSize parameter is larger than 1, a larger resolution screenshot will be produced. For example, passing 4 will make the screenshot be 4x4 larger than it normally would. This is useful to produce screenshots for printing.
To get a reliable output from this method you must make sure it is called once the frame rendering has ended, and not during the rendering process. A simple way of ensuring this is to call it from a coroutine that yields on WaitForEndOfFrame. If you call this method during the rendering process you will get unpredictable and undefined results.
using UnityEngine;
using System.Collections;

public class ScreenShotter : MonoBehaviour
{
    IEnumerator RecordFrame()
    {
        yield return new WaitForEndOfFrame();
        var texture = ScreenCapture.CaptureScreenshotAsTexture();
        // do something with texture

        // cleanup
        Object.Destroy(texture);
    }

    public void LateUpdate()
    {
        StartCoroutine(RecordFrame());
    }
}
Writes Cynthia Lee in "I'm a woman in computer science. Let me ladysplain the Google memo to you" (Vox).
Read that. I'm not going to ladysplain the ladysplaining.
I tried to help you guys long ago when I devised the rule that many people have repeated and called The Althouse Rule (e.g., Instapundit, 3 days ago). The rule, as I put it in November 2005:
Scientists: remember to portray whatever you find to be true of women as superior.I've mainly used this rule to make fun of reports that follow this rule. I said:.
It's patronizing. And it's unscientific! I understand the motivation of the scientists, though. I think they have reason to be afraid not to couch their findings this way.
I'd been thinking that if only Damore had followed the rule, maybe he wouldn't have gotten fired. I have been tempted to take his memo and rewrite it following The Althouse Rule. I'm rereading the memo, however, looking for a good paragraph to make an example of, and it's not that easy to find one. I think Damore did try to assume a neutral pose and balanced the aptitudes he ascribed to the 2 gender stereotypes.
157 comments:
"No man, no fear" -- A nice riff on Stalin's "No man, no problem.".
Google has lost its credibility, by catering to some leftwing feminist bad ideas.
I take things and competition . After the fun is finished, you can always find a kind and intelligent women who will put up with you.
Brooks hits it out of the park.
Things and competition, of course. Generally speaking, people suck. And generally speaking, "cooperation" means I do the work while the group shares the credit.
Damore, I am more convinced than ever, knew exactly what was going to happen to him when he wrote it and published it. In fact, I think that if he didn't believe that, he wouldn't have written it in the first place.
It is pretty damned clear to me that he intended to make a statement, and he knew Goolag and a lot of its employees were going to prove his statement for him.
Fucking whatever. The only question that (ISTM) matters is, would you rather have the love of a woman or the love of a man? Me for Column A.
But yeah, not to be nonresponsive, things and competition, every time.
wouldn't you pick female?
A woman decides being a woman is better than being a man.
This is my shocked face.
things and competition, every time.
Hmm, this may be a lie. But, you see, men can do that too. Before women entered the workforce, men just did everything. Men used to be secretaries and nurses and teachers.
On average, people should do what they're good at, unless perhaps they're only good at evil.
He wasn't doing the stereotypes though. He was describing what happens mathematically in the tails where Google hires technical staff. Lots of men, few women.
Blogger Ken B said...
Brooks hits it out of the park.
Since paywall, could you just tell us what he said?
When you say that one normally distributed population has a mean value of some attribute that is greater than that of another normally distributed population, it sounds less strident to note that the two distributions overlap quite a bit.
But places like Google are hiring from the far right-hand tail of the curve, where differences in the mean can have more dramatic implications (differences in the distributions can, too).
Things and competition suit the male ethic of obligation.
Obligations usually to women and children. A man with no obligations is generally restless and unhappy.
Note that competition in males is quickly resolved on a micro-level - males, under proper conditions, self organize into teams and assign roles and status near automatically, and retain those roles, if satisfactory in spite of status, with equanimity. Men are not that far from being social insects.
Women much less so, IMHO. It's much easier to manage men.
I can't describe how many happy hours and weekends I spent long ago on huge mainframes doing math and physics.
No waiting for job turnaround, you run it yourself on weekends.
Exactly what Megan Mcwhatshername said was not her people. But there was a steady bunch of guys every weekend.
The hysterical response proves Damore correct.
Isn't there a genre of books on how women can succeed in the corporate world (Ex:) embracing the idea that men and women tend to really be different?
There wasn't competition in my experience, beyond the pajama boys looking for advancement opportunities. The real science guys wanted to stay where they were.
Cooperation was the norm, if you can help with his project you did so. But you have your own project, or share completely with a co-worker who's working with you. Curiosity is the thing.
Maybe Damore should have spent more time on the few "really bright stars" in a Summers distribution?
The real science guys wanted to stay where they were.
Also true in other fields. Let somebody else run the business and see to it that I am paid what I am worth. I just want to work and be good at it.
The uncomfortable fact is simply this- most cutting edge things in technology and science cannot be developed by people who don't have IQs above 130. The right hand tail of the intelligence spectrum is really all that matters here.
'splaining is not explaining. 'splaining is using your pretended authority to shut people up.
Women outnumber men in the nursing profession by about 8:1.
Does this mean that men are incapable of being nurses? Even top-of-the-profession nurses? Of course not. It only means that, for whatever reason, men are much less interested in becoming nurses than women are. This is the point that Damore was making. Immediately he was accused by our intellectual elites of saying something entirely different -- that woman were not suited to technical work due to their nature as women.
If it wasn't for straw men and ad hominem, the elites would have no arguments at all.
Our 'elites' are, by and large, dimwits.
As a certified scientist I can tell you that Althouse's rule breaks down re peeing.
It's impossible to argue that peeing via a cock is not better than doing so from your ass.
Other than that, women are better.
I think the proposition is mixed wrongly; it is People and competition or Things and cooperation.
Cynthia Lee's article is pretty (unintentionally) funny, and it really highlights the underlying divide. Damore brings logic and the current state of research, and she tells us he's wrong because it makes her feel bad.
Also, her third point is incoherent. If you have two overlapping distributions at the very tail one of them will dominate, i.e. if men are better programmers on average and the curve for both is roughly the same shape, all the very best programmers will be men bar very rare exceptions. That is, the fact that Google is far more selective than the average company means of all workplaces it's the least likely to have a 50/50 distribution by sex. The fact that you can get published on a relatively high traffic site addressing this subject without understanding it is just depressing.
Bad Lt:
Well, I can't just cut & paste.
He discusses the memo briefly and fairly, noting its science seems decent. He discusses why women at Google might be upset.
Here's part of his conclusion:
"The fourth actor is the media. The coverage of the memo has been atrocious. ... The mob that hounded Damore was like the mobs we’ve seen on a lot of college campuses. We all have our theories about why these moral crazes are suddenly so common.
...
Which brings us to Pichai, the supposed grown-up in the room ....... Pichai...."
The author of the Vox article misses the point of the memo.
It's not that he wants to end Google's outreach programs because they may possibly be illegal. He wants Google to acknowledge those programs don't work and they need to take a different approach towards changing Google to be more accommodating of everyone that doesn't fall into the "ideal male programmer" model, which includes men and women.
Bad Lieutenant said...
Since paywall,
I never see that, dunno why, but it's definitely not because I gave them money.
could you just tell us what he said?
That Damore's paper was basically correct as well as harmless, and that Sundar Pichai lied about it.
Sailer's got a few articles from California employment lawyers saying that google probably broke the law in firing Damore.
Bob Loblaw: Cynthia Lee's article is pretty (unintentionally) funny, and it really highlights the underlying divide. Damore brings logic and the current state of research, and she tells us he's wrong because it makes her feel bad.
A woman producer on the John and Ken show reported on a pee-standing-up accessory that it worked but it was traumatic.
John asked if it would be okay for a woman less psychologically unbalanced than you.
No, she said, it's traumatic to everybody. Too competitive, aiming and stuff. Sitting down is fine.
The paragraph most in need of rewrite to conform to the Althouse Rule:
"Women, on average, have more: ... Neuroticism (higher anxiety, lower stress tolerance). This may contribute to the higher levels of anxiety women report on Googlegeist and to the lower number of women in high stress jobs."
PB, if I ever pee from my ass, I will call an ambulance immediately. Agree that penises make peeing much more efficient.
Not ever having to buy tampons and pads would be a big plus too.
Google is okay on employment law. Men are not a protected group.
They're vulnerable on defamation. They not only fired Damore, which they're free to for any reason, but they said why, and their why is not true and exposes him to public damage.
Big payoff.
Your hypothetical is hard to imagine objectively, as we are already formed in one gender or another, such that we cannot readily perceive the other state. Although I love women, the more feminine the better, I have never understood their point of view. My wife has a completely different set of thought processes than I do; she considers so many things, especially about the welfare of our boys, that simply never occur to me. I see the value and appreciate it, but I cannot envision having that mindset myself.
We are complimentary, more complete as a team, an example of diversity functioning when unified by shared goals. When you see men and women as competing against each other you've failed already. The joy is the team.
Although this is most frequently seen in marriage and family it can exist in the work place. I'm in a technical field (Computational Biology) and my boss is a woman. She's frankly the best boss I've ever had in that she has assembled a hard working, high functioning devoted team. It's all about merit and productivity. This place functions more like a family than any environment I have been in before. Perhaps the fact that she and about half the department are first generation Chinese immigrants is why it works; I think they are less culturally influenced by PC thinking.
In order to get gains from trade, the parties have to be different. Hence men and women.
PB, if I ever pee from my ass, I will call an ambulance immediately.
Surface tension. The effect depends on anatomical details in the individual case.
Google has made a larger unsafe space for its employees and even non-googlers. Aside from the risk of firing, who would want the firestorm of attention this dissident got? Talking about normal people, not SJWs.
They now must stew in silence.
rhhardin said...
Google is okay on employment law.
No they're not.
Men are not a protected group.
Irrelevant.
They not only fired Damore, which they're free to for any reason,
No they're not.
The ultimate "I choose things and competition" scene:
Another difference between me and women is that if I am not paid what I think I am worth, I am soon gone, while women tend to stay and just complain about it.
"And generally speaking, "cooperation" means I do the work while the group shares the credit."
The leaps and bounds of human progress are the proprietorship of individuals, not groups.
I've taught at least four different programming languages, including assembly.
She thinks assembly is hard. The language is easy. Whether you can think like a machine without thinking and see a path right away to do what you want governs how much fun it is.
hey - a FAP homework assignment of mine in 1963, saved by a colleague and sent to me when he retired
code
run
I remember spending hours optimizing the code for speed and size, a huge entertainment.
Althouse said...
Of course you would! Esp. if you knew you'd be big into STEM stuff; from a Rawlsian perspective you'd choose "woman" every time!
That's one reason we he-man woman-haters make so much fun of the cult of victimhood that says women deserve constant compensation for their oppressed status. Women are vastly underrepresented in the dirty, dangerous jobs and vastly overrepresented in the clean, safe jobs. That women agitate for MORE representation in certain high-status clean jobs is little more than a beacon of their own hypocrisy! Where are the protests for equal representation, by sex, in sanitation workers? Those jobs pay pretty well on average (they have to, to get anyone to take 'em) but they're almost exclusively taken by men. No one seems to mind! But computer engineers...
It's bullshit. We die earlier, we kill ourselves at a much higher rate, we're the victims of violence at a much higher rate, we serve longer sentences in less-safe prisons for the same crimes...and on and on.
Don't mention that, though, or it's "whining."
When women talk about fairness hang on to your wallet.
I think it's a Bill Burr line, but it's true: the only reason we put up with most of this BS is 'cause we want to fuck 'em.
Women have successfully weaponized traditional male regard and deference/"chivalry" and continue to use that to their political advantage. Women as a group are, no shock here, one of the Left's favorite special interest groups.
Of course it's better to be a woman!
Yes, I’m a woman in tech
You were. Now you're in academia, a less stressful and time-consuming career. Why'd you leave?
Her fisk was not convincing. Bad ladysplaining.
Brooks hits it out of the park.
And, of course, he followed the Althouse Rule: "But there are some ways that male and female brains are, on average, different. There seems to be more connectivity between the hemispheres, on average, in female brains."
stutefish said...
Ding ding ding! Nailed it.
Add on the fact that "things that make women feel bad" are a form of violence and "feeling bad about something" is a form of trauma and it all lines up: things that make women feel bad--true or not!--cannot be allowed. Preventing trauma is more important than finding truth.
All in the name of empathy. Thanks, nice centrist people!
optimizing the code for speed and size
Isn't that also an obsolete skill?
The first little bit of assembly I did was in hexadecimal, so I immediately put a negative voltage into the $300 microprocessor and burned it out, after the prof said not to.
How do you know we aren't living out the life we had chosen? That theory with some naturally-occurring buyer's remorse would seem to fit the known universe quite nicely.
Well some things have to be optimized still, but owing to huge amounts of RAM allowing huge arrays rather than running out of space.
Some optimizations are not obvious but pay off - optimizing so that cache hits happen often instead of seldom, for example.
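The cache-hit point can be illustrated with the classic row-major versus column-major traversal. A sketch (the array size and function names are illustrative, not from any particular codebase):

```c
#include <stddef.h>

#define N 1024  /* 1024x1024 ints = 4 MB, far larger than L1/L2 cache */

/* Row-major traversal: walks memory sequentially, so cache hits happen often. */
long sum_rows(int a[N][N]) {
    long s = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major traversal of the same data: each access strides N*sizeof(int)
   bytes, so on a large array nearly every access misses the cache. */
long sum_cols(int a[N][N]) {
    long s = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}
```

Both functions compute the same sum; only the memory access order differs, and on typical hardware the row-major version runs several times faster.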
"The fact that you can get published on a relatively high traffic site addressing this subject without understanding it is just depressing."
The fact that a woman can understand statistics well enough to teach it at Stanford, without having the least notion of how it applies in the real world, is even more depressing.
That, by far, is one of the best critiques I've read of the Damore essay; the point about how Damore references race is short and eviscerating.
One thing that bugs me is the extent to which the author accepts Google's corporate status quo. For example:
3) The author cites science about “averages.” But Google isn’t average.
No. It's far worse (better?) than average. Whatever biases exist in computer science will be magnified at Google. This is a huge problem with Damore's original essay -- his love-affair with evolutionary psychology leads him to assign strong causal force (on average he says) to biological determinism. But Lee's generic statistics about female CS graduates are equally meaningless in interpreting Google's 19% female work force. It's a disappointingly tepid response.
One of the truly weird things about this fiasco is the focus on the environment at Google Headquarters when Google has a world-wide workforce including numerous off-shored development teams. It's as if they don't exist. So here's a question that also goes unanswered: what does the data look like when you break it down by nationality?
This is a PR nightmare for Google. I can tell because none of my left-of-center friends on social media have made a peep about it.
They are all-in whether you are on Team Trump or Team Jong-Un in the Great Nuke War of 2017.
She thinks assembly is hard.
She thinks assembly is fundamental.
In advertising, there are much fewer good creative women than there are men.
However, there are much, much more mediocre men than women.
Brooks left out another possibility, that Pichai didn't read the memo but listened to its mischaracterization by others. That's most likely to me.
Is any of this relevant to Colin Kaepernick and his ongoing struggle to land a qb job in the NFL? I'd like some reporter to ask a google exec that question or,even better, Colin Kaepernick.
The last assembler I wrote was via C in 2011
#include <stdio.h>
#include <stdlib.h>   /* srandom, random */
#include <time.h>     /* time */
#include <unistd.h>   /* getpid */

int main(void)
{
    unsigned long long x;
    /* RDTSC emitted as raw opcode bytes; "=A" packs edx:eax (32-bit x86) */
    __asm__ volatile (".byte 0x0f, 0x31" : "=A" (x));
    srandom(time(0) * 991 + getpid() * 99991);
    srandom(random() * 99 ^ (int)x);
    printf("%ld\n", random() & (~(unsigned)0 >> 1));
    return 0;
}
Nerds don't lie. They can't. They lack the social skills to pull it off and they know it so they don't try.
Microcode is fundamental. The VAX 11/780 let you write microcode.
"She thinks assembly is hard."
She thinks assembly is fundamental.
No you wouldn't list a fundamental course separately. You'd list the esoteric or difficult ones.
From a guy's point of view, the fun ones. It's a tell.
$ ./a.out
2011820918
I don't think Henry knows what eviscerating means.
""I have known, worked for, and taught countless men who could have written the now-infamous Google 'manifesto' — or who are on some level persuaded by it."
"Given these facts,"
Uhhh...that's called an opinion
It's a random number generator that uses the cpu clock as well as time of day to seed the random number, so it can be called more than once per second (in a shell script, say).
And women, too. Of a kind, anyway. The female chauvinists shamed, lured, and indentured women who would have otherwise chosen to look after the household, the children, community affairs, and other interests in order of priority.
I don't think Henry knows what eviscerating means.
I think we have a difference of opinion.
No you wouldn't list a fundamental course separately. You'd list the esoteric or difficult ones.
Assembly is esoteric. How many software developers have written assembly in the past year? A show of hands, please.
Assembler is what serious coders wrote. Fortran was for stuff that didn't care about fitting or efficiency.
First courses were in binary, octal and assembler.
It wasn't hard, just something to learn. In turn it created interesting puzzles, if you're that kind of person.
"When women talk about fairness hang on to your wallet. "
At one point in the Peterson video he comments that feminists seem only interested in high status high income jobs. Nobody is complaining that 98% of garbage collectors are men.
The IBM 360 brought the plague of hexadecimal.
Henry, that was rhhardin's point. It isn't fundamental.
Pichai got a pay package worth $199 million (!) for the year 2016. He is apparently a brilliant product developer and technician. In other words, the high-end prototype of the male tech engineer. But it's pretty clear from this incident that he got about everything wrong from a social, political, publicity, worker morale, reputation, fairness and human development perspective. It's a massive mistake, especially because he was handed a great opportunity to do something positive, and completely blew it. Civil War seems to be brewing at Google.
Unless it is as dysfunctional as the executive suite, the Google board of directors has to be searching the horizon for replacements asap.
Google is one of the best investments I ever made. (I blew it on Amazon and sold my cheaply acquired Apple stock a decade too soon.) But now I am casting a suspicious eye on Google.
How many software developers have written assembly in the past year? A show of hands, please.
It's not so much that it's esoteric, more like it's tedious unnecessary wheel reinventing. How many times does a bubble sort or a binary search need to be coded in assembly? Unless you're literally coding firmware for a novel piece of hardware or something.
It was a Kim microprocessor, IIRC. We did some crude things with it (not anal).
I looked at the Cynthia Lee article at Vox and it makes some good points.
But I am left with the fact that Google didn't pursue discussion on the points Damore raised, even to the extent of defending the corporate position and explaining why including maybe the perspectives that Lee articulates.
None of that; they just summarily fired him. People (i.e., me, but I think I am pretty representative)who are upset about all this are not upset because Damore was correct in every respect, they are upset over the way he was treated.
So my conclusion about the Cynthia Lee/Vox piece is it misses the point. And it insults my intelligence by trying to pull a fast one on me.
You have your one shot at living the life of a human being: Do you want to be oriented toward people and cooperation or to things and competition?
That's a false choice. Competition and cooperation are not mutually exclusive, and working with things almost always means working in collaboration (and often -- at the same time -- in friendly competition) with a group of people sharing a common goal.
To step away from software, think about, say, the Beatles writing and recording Sgt Pepper. A bunch of guys messing around with music and words and, yes, instruments and electronic technology -- helping each other work out songs, but also trying to outdo and push each other in order to make something great. Would you consider that process people-oriented or thing-oriented? Or is that, perhaps, the wrong question?
Four of the 15 directors of Google are female. The only female director with a technical engineering background is Diane Greene. The female directors are all very accomplished and well qualified. Shirley Tilghman, former President of Princeton and a director, has a background in biological science but not tech.
C replaced assembler ("All the power of assembler with all the convenience of assembler") through advances in efficient code generation. The compiler can do better than a human by making also global changes; but it can only use operations that it knows about the machine having, which leaves an opening for humans here and there.
> How many software developers have written assembly in the past year?
Back in the day I spent 24 hours writing a disk driver in assembly. These days I use python when I need some computation, C when I need python extensions, but spend most of my time reviewing other peoples code and doing releases.
I feel Cynthia Lee's ladysplained empathy over the fact that Damore got fired.
Huh, as a 20-year programmer I've felt all that too, even the part about the anxiety of not being good enough, of not being passionate enough. I'm pretty sure everyone feels that.
Are we having a contest about esoteric programming skills?
Let's see, my first major job involved writing custom 56-bit-wide microcode for an aircraft control interface computer.
Before that I wrote a modest amount of IBM JCL (oh, joy) while learning PL/1 on a 360/65.
Assembly? Shoot, that was easy.
Althouse has gotten a lot of posts out of this brouhaha, so we can guess how much fur is flying at Google.
Nice of her to be "charitable" to men.
What a fucking twat.
I read the Vox article and am unconvinced.
1) She claims to understand statistics, and I hope she does given that she lectures at Stanford. But she doesn't seem to understand the effects of a higher standard deviation at the right tail of the distribution. Actually, I bet she does understand it in other circumstances, but her belief system doesn't allow her to see it here.
2) Maybe people would be less skeptical of women in tech if there weren't such a push to hire more women in tech. It will always put the idea in others' minds that the person was hired to fill a quota. I include my current boss in this category.
3) Nearly 50% of the tech majors at Harvey Mudd are female. How many will choose to stay in the industry? Were they accepted to fill one of those quotas?
4) She sure seems to spend a lot of time making Damore out to be a nefarious character using soothing language to fool people into believing his evil ideas.
The content of the memo doesn't matter to the Red Guards. That should be obvious given the characterizations of it in the media. It was merely a pretext to Purge the White Male Oppressor.
Known Unknown (8/11/17, 1:24 PM) says "In advertising, there are much fewer good creative women than there are men. However, there are much, much more mediocre men than women."
I manage a university lab. I got our Business College's PR course to work on a project to bring more students into my field, and learned that there are very few men in PR and that PR firms often hire token men to interface with customers (think cigar-smoking mogul stereotypes who don't want to be led by the little lady). Pretty sure whenever you hire tokens, a large fraction of them (not all; this is the averages argument again) are subpar, because you weren't looking for the best.
In defense of the Vox article: Google hires out of the right-hand tail, but they don't hire EVERYONE out of that right-hand tail, only a small fraction of those people. They could hoover up enough to get a 50/50 sex ratio if they outbid other employers for the underrepresented sex.
Whether that would make financial sense I do not say, but it makes statistical sense.
You guys are old.
"To be a woman in tech is also to always and forever be faced with skepticism that I do and feel all those things authentically enough to truly belong. There is always a jury, and it’s always still out."
What Lance said. She appears to believe that she has a right to impress everyone she meets. You know how many times I've interviewed for a position and not been chosen? I guess I should have gotten the interviewers fired for failing to validate me. Oh, wait, I have a dick. Sorry, my mistake.
So am I, but I started out in visual design.
Ralph L. I think fundamental is a perfectly good word, in the sense that it is the language that exposes the hardware and that higher-level languages compile to. Esoteric is a good word as well.
I had no idea there were so many bare metal programmers still walking around. Back when I started out in the racket there were guys who had worked on the abacus.
"You guys are old."
Yes, some of us are. My first computer programming class was Apple Basic on the Apple ][, my second was FORTRAN IV using punch cards and a time share on a mainframe 80 miles away. (My third programming class was Z-80 assembly.)
All but the Z-80 assembly class bored me so I majored in Film, but still ended up in a career as a programmer, I mean Software Engineer!
IIRC, most Google employees are not in the US.
And are not subject to counting diversity noses.
> You guys are old.
Why yes, yes we are, and here we sit, waiting on the bank of the Styx and passing time at Ann's salon for the ancient. What's a nice kid like you doing here ;)
Assembly is esoteric. How many software developers have written assembly in the past year? A show of hands, please.
I don't myself, but I know a couple guys who do. Embedded stuff is still mostly written in assembly.
It's not a large part of the job market for programmers, because the market is probably 85%+ CRUD applications, but "esoteric" is going a bit far.
"How many software developers have written assembly in the past year?"
I haven't, but just this week, I looked at assembly output from C++ to understand what something was doing.
Almost 20 years ago, I wrote a highly optimized lossless audio compressor using 80x86 assembly. I then rewrote my "C" version to be highly optimized and got it to run within 10% of the performance of the assembly version.
About five years ago, I wrote a highly optimized CRC algorithm using SSE, which is essentially assembly. It runs 300% faster than the C/C++ algorithm.
Geek out.
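For the curious, CRC work like that described above typically leans on the SSE4.2 CRC32 instruction, exposed as compiler intrinsics. A minimal sketch (function names are mine; note this computes CRC-32C, the Castagnoli polynomial the instruction implements, not the zlib CRC-32):

```c
#include <stdint.h>
#include <stddef.h>

/* Bit-by-bit software CRC-32C (reflected polynomial 0x82F63B78). */
uint32_t crc32c_soft(const uint8_t *buf, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

#ifdef __SSE4_2__
#include <nmmintrin.h>
/* Hardware CRC-32C, one byte per instruction (compile with -msse4.2). */
uint32_t crc32c_hw(const uint8_t *buf, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++)
        crc = _mm_crc32_u8(crc, buf[i]);
    return ~crc;
}
#endif
```

The big speedups come from feeding the instruction 8 bytes at a time (`_mm_crc32_u64`) rather than byte-by-byte as shown here.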
"To be a woman in tech is also to always and forever be faced with skepticism that I do and feel all those things authentically enough to truly belong. There is always a jury, and it’s always still out."
That will be the case in any situation where it is assumed that the group you are a member of receives preferential treatment.
I too learned to program writing BASIC on an Apple II.
My freshman year in college I took a CS 101 class out of my major. Pascal. Hated it. Hated waiting for my pointless Hello World programs to compile. Hated paper tests where points were deducted for syntax errors. Pascal stole the joy of programming from my heart.
Started programming again about 15 years later -- about 15 years ago -- when javascript showed up.
Ah, THAT Pascal class on Apples and written tests. Yet another class that sucked the joy out of programming.
javascript: The language that makes Pascal look good. I kid, I kid!
* * *
"Embedded stuff is still mostly written in assembly."
I work heavily in the embedded space and C++ is most widely used. Some micro-controller firmware still uses assembly, but that's pretty rare.
OTOH, learning assembly is an excellent way to really understand computers, but C and/or procedural C++ work nearly as well.
That will be the case in any situation where it is assumed that the group you are a member of receives preferential treatment.
Ding ding ding! Give that lady a prize!
Some more phlegm for Ann: now women aren't scientists because they care about humanity, but scientists only care about things. The more you try to rationalize why men dominate hard science and engineering, the sillier you get. Methinks the empress has no clothes.
Shhhhhh! Are you trying to get her fired too?!
Comparing the percentages of computer science majors in elite college programs to those working at elite tech companies is not logical. One, there are far fewer people that make up these college programs, so it's easier to fill larger percentages with women. What are the percentages of women in lower-tier colleges studying comp sci?
Two, private industry has to produce, so if you've given preference to women at the college level, that will not necessarily translate into equality at producing in the private sector. There is too much focus on credentialism in this comparison. Fancy college is nice. Creating fancy things regardless of where one went to college is what counts after.
Three, not all of the top computer science grads go into computer science, especially those from elite programs. Being a quant can pay big. Staying in comp sci may make one a greater outlier than simply being someone who majors in comp sci.
Four, there's a difference between pursuing something for a limited time and pursuing something for a lifetime. Even if you don't particularly like the work, you may stick with a certain college major in the interest of finishing your degree. How long will a person stick with a career he doesn't particularly like if he has other options?
"How many software developers have written assembly in the past year?"
Years ago, writing software for real-time machine control (using 16-bit Intel processors), we always used assembly for the math calculations and C for the rest of the program. Nowadays, with more powerful processors, optimized compilers and improved algorithms for math functions, I am sure the same program will be written without any assembly routines.
"Nowadays, with more powerful processors, optimized compilers and improved algorithms for math functions, I am sure the same program will be written without any assembly routines."
Well, you're wrong. The highly vectorized code used in x86 math libraries is all written in assembly. Or, embedded assembly and/or intrinsics in C/C++ programs. You can't trust a compiler to use all the special new instructions correctly.
Nowadays, with more powerful processors, optimized compilers and improved algorithms for math functions, I am sure the same program will be written without any assembly routines.
Yes, but on the small end microcontrollers are so cheap engineers are using them for things like debouncing switches. Last time I looked you could get one with 512 bytes of EEPROM and 256 bytes of RAM for (IIRC) about eleven cents in quantity. You can't realistically use anything but assembly for something that small.
Damore didn't "publish" a memo.
He sent his piece to an internal "Skeptics" discussion group as a kind of "change my view" exercise.
It got leaked and everything went crazy.
It's not as if he hit "CC: all" on an email.
"My first computer programming class was Apple Basic on the Apple ][, my second was FORTRAN "
Mine was not a class but a job writing in IBM decimal on an IBM 650. 2K memory.
The main plant (Douglas AC) across the street finally got a transistor version of the 704 the year I went back to do premed.
My girlfriend at the time programmed the analog.
The point of all this is that, for guys, it's all fun.
I've never known a woman to like it that well, and there were lots of women. They don't come in on holidays and weekends.
After a few years of this difference, the guys have a far superior skill level.
One way to equalize things is to make programming not fun, by the way.
"Do you want to be oriented toward people and cooperation or to things and competition?"
Depends on whether or not you want to get laid by slugs (toward people) or hot babes (competition).
Comp sci courses are one way to make programming not fun, the way algebra courses kill off your interest in solving equations that had until then been interesting.
Lance said...
"Huh, as a 20-year programmer I've felt all that too, even the part about the anxiety of not being good enough, of not being passionate enough. I'm pretty sure everyone feels that."
Everyone feels it. But we have a generation of whiners trained to complain about it for 16-18 years in our public education system.
The worm is turning on these people. Finally.
I don't bother reading things by people with that attitude. Does that make me a bigot?
Back in 2007, when Rush could discuss subjects not related to Trump, he boiled down the difference between men and women:
RUSH: Hubba hubba. I found this story. It’s from SpringerScience.com .... Let me just read the first paragraph. I’m going to summarize what the story says. “New evidence on sex differences in people’s brains and behaviors emerges with the publication of results from the BBC’s sex ID Internet survey. Survey questions and tests focused on participants’ sex-linked cognitive abilities, personality traits, interests, sexual attitudes and behavior, as well as physical traits. The archives of sexual behavior has devoted a special section in its April 2007 issue to research papers based on the BBC data.” Now, this is sort of high end in terms of its literature, so let me just summarize this. They conclude based on their massive survey that men are different than women. Now, why would somebody have to do a survey to conclude this ...[?] I’ll never forget TIME Magazine in the late nineties, or in the mid-nineties, actually ran a cover as though they were shocked, “New research indicates men and women are born different.”
She also leaves off that even if you only consider the tail, there's a huge spread in the tail. Google is likely a tail of the tail sort of place for interest.
Why do legal minds so often counsel cowardice, and holding onto principles like truth with a light grip so that you don't get overburdened if they get a little heavy.
He's lucky to have gotten fired. It's like getting thrown out of North Korea. Google needs new leadership at the top too.
Ok. I get it. In spite of extending a charitable impulse at the outset, she still strongly disagreed with him. That doesn't address the question whether he should have been fired for it. And that's the ONLY question that requires an answer.
I never realized that a different point of view....even a wrong one...was a fireable offense. As Bernie Goldberg wrote:
If Google had fired Damore for being naive, they'd have a case. But they didn't. They fired him for having an unacceptable opinion, and that just can't be tolerated at such an open-minded place as Google, a place that welcomes a wide array of points of view -- as long as they're acceptable liberal points of view.
- Krumhorn
I've never known a woman to like it that well, and there were lots of women. They don't come in holidays and weekends.
After a few years of this difference, the guys have a far superior skill level.
I think that captures it.
Tails of tails of interest. Maybe even tails of tails of tails sometimes.
I didn't care for Pascal either, and I never got beyond BASIC and Fortran on Ms-Dos, so I don't have much to compare it to.
The first practice program in Pascal that I wrote in college filled the screen with [something-something] butt-fuck Dr. Roberts (our prof) until his navel bleeds. Of course, he walked up behind the girl next to me who'd just seen my screen. Thankfully, he didn't.
I am not Laslo
This reminds me of the brawl that broke out after the NY TIMES published Bret Stephens' oped piece on AGW.
Adriana Heguy, a genomics scientist and professor of pathology at NYU, urged her colleagues to scrap their subscriptions.
“Composing my letter to the editor today and canceling @nytimes,” she tweeted. “‘Balance’ means a VALID alternative opinion, not pseudoscience. I’m so sad.”
That was the perfect illustration of the issue. The lefties are more than happy to embrace alternate views to their cherished catechism so long as they have the right to approve what a valid contrary opinion might be.
- Krumhorn
Get Smart (2008) is about the interest difference of men and women.
The skilled and feminisst-egotistical spy Anne Hathaway falls for the novice but natural spy Steve Carell who loves the job.
veni vidi vici said...
Nice of her to be "charitable" to men.
What a fucking twat.
Steve Hsu has some non-trash, er, non-Vox, articles.
Stanford Neuroscience Professor Nirao Shah and Diane Halpern, past president of the American Psychological Association, would both make excellent expert witnesses in the Trial of the Century.
"Two minds: The cognitive differences between men and women"
He WANTED to get fired. He's set up for a very nice law suit, he's attracted the attention of every potential serious employer, and he gave Google a well deserved flip of the finger. People are getting fed up with this bullshit and if the judiciary won't sort it out like adults, we the people will..
-Ann Althouse
Brilliant!!
Coming to the conversation late, I note that rhhardin had some of the same experience I did when I started programming in the early 60s. I almost lost my family from spending night after night staying up 'til dawn with the computer instead of being home in bed with my wife.
One thing in Lee's Vox article struck me. She complains that no diversity program would satisfy Damore. I can think of one he would love: none at all. Hiring strictly on job-related qualifications should do it. The workforce would be diverse to the exact extent that individuals of either (or any) sex, race, national origin, or previous condition of servitude could perform job-specific tasks at the level required.
To my feeble mind it seems to be a no-brainer that any diversity program, by its nature, discriminates against as well as for. It does not surprise me that this would not occur to someone writing at Vox.
More opportunity for those in the early stages of the educational system so as to create a deeper pool of qualified candidates at the output end would be much more productive, IMHO.
"he's attracted the attention of every potential serious employer"
I think he is blackballed with everyone in the F1000 and any contractors to these.
He may have problems with academic jobs also.
Granted, I don't know exactly what it is he does.
I hope he gets a good settlement as his career is going to suffer for a while.
> You guys are old.
and way out of date.
In cloud computing the valuable problems will be about data - AI, functional programming, devops, and glueing parts together from a software utility.
Seems there is a "gay gene", but no gene separating men and women...
"Steve Hsu has some non-trash, er, non-Vox, articles."
Steve Hsu gets away with what he has been doing for decades as he is Chinese, and has a public academic job. He is unfireable.
The big thing lacking here? Common sense. By anybody.
"I've never known a woman to like it that well, and there were lots of women. They don't come in holidays and weekends."
Very brilliant men often have a single-minded obsessiveness about their work that women do not generally exhibit.
Isaac Newton wouldn't have been much fun at a cocktail party.
"Very brilliant men often have a single-minded obsessiveness about their work that women do not generally exhibit."
Are you a brilliant man if you have a single-minded obsessiveness about women?
Or does that make you a psycho?
If the latter, I'm asking for a friend.
You see what is happening with social media/news in relation to computer tech and social justice-
Yet, many can't wait to usher in the age of the AI/robots and driver-less cars.
Snowden and Damore are two canaries in the same coal mine. I will now try to prove to a computer that I'm not a robot to post this comment...
On a lighter note, re l'affaire Goolag.
"Are you a brilliant man if you have a single-minded obsessiveness about women?"
Define brilliant. This fellow fits the bill, to a degree, and has attracted enormous attention for centuries -
and his work -
Which is worth reading. Modern English edition is quite good IMHO.
"History of My Life" Willard R. Trask
Terribly long (12 vol) and expensive for the whole thing, but there is an abridged edition (quite reasonably priced) that gets you the picture pretty well. If you want a unique perspective on the long-gone world of 18th century Europe. It is quite often a page-turner. No Kindle or Audio.
BTW, many here have noted that Althouse has really run w/ this theme of giving a nerd/loser a bunch of fuss re the world that supposedly oppresses him.
Ha ha ha ha ha.
As if this blog isn't doing that every single day, for it's readers and commenters.
Funny stuff.
Buw,
Thanks for the tip.
If I change into a nerd/beta who reads instead of being a strong/F-ing/rich alpha, I'll file that away re stuff losers do, i.e. read.
Um, the correct word is broadsplain'
Hugh Hefner made his singleminded obsessiveness about women work for him, didn't he?
It's Really Funny, but: my Failure courses in school were those in the domain of Translatology. (Which is why I don't have a Masters, nor a Bachelor in Translation.)
PB you don't come across as a rich alpha. you come across as permanently stuck in adolescence.
It's certainly evident that you are not much of a reader.
In cloud computing the valuable problems will be about data - AI, functional programming, devops, and glueing parts together from a software utility.
People are making millions of dollars trying to sell that to big companies. It's a recipe for generating large amounts of very low quality software to tight deadlines. In a few years the winds will shift again when companies realize they can't maintain it.
reading is for nerds.
The one undeniably good thing about DJT is that he doesn't read much.
Technically, he as other positive attributes:
He Fs models.
And, he's rich.
But, he's not hot, so he missed the triad. Triad = strong/Fs hotness/rich.
Up your game, man. Larger than life is a tall category.
I recently had an opportunity to observe this bullshit up close, at Intel (Adios, assholes! Thank God, I found some humans to work for). The really amusing thing was when the diversicrats would try to make the business case for "diversity". Remember, "diversity" at Intel is code for ... well, for example, dizzy twits wrote things on their little subsidized internal woman-in-Tech blogs like "as a diverse person, I feel ...", without a hint of irony. See, they are "diverse". You are -- whatever is the opposite. Universe, I guess.
Anyway, the business case for diversity, as these clowns tell the story, is that if Intel is going to make widgets that appeal to a diverse customer base, those products must be designed and produced by a diverse workforce, since only diverse people know what appeals to diverse people. Ahem.
The peculiarity of saying that there are no important differences between what men can do, and what women can do, while simultaneously claiming that there are certain very important things that only women can do, did not seem to cross their lo-res screens.
Actually, I think characterizing them as "lo-res" is unnecessarily charitable. The fact is, these witches know perfectly well that the shit they are shoveling makes no sense at all. That is why it is so important to them to ensure that no rational discussion is ever allowed on these topics.
They're not stupid. They're evil. Burn 'em!
PB&J,
I concede you are my superior in that personality style. It is in fashion.
And it is true that reading is out of fashion. The last generation of leaders who were great readers is on its way out. Perhaps the last chieftain-scholar, Gen Matt is, has his position for a few years, but he has no one to follow.
Statesmen-scholars and Warrior-scholars and even businessmen-scholars were quite common in their day.
Obviously, everything is better run these days, so good riddance eh?
PB, Trump was good looking when he was a young man. The only man on earth who was still hot at age 70 was Paul Newman.
But Buwaya, Winston Churchill might have been a great reader and superb writer, but he wasn't hot. PB has his priorities.
The only man on earth who was hot at 70? Paul Newman?
Bah.
I'm not 70, yet, but I like my chances.
"I'm not 70, yet, but I like my chances."
I have no way of judging that, buwaya, but I'll take your word for it. :)
Which brings us to the larger question. At what point do similar arguments intrude on all scientific pronouncements, those that argue from authority some true and not positions? Why even bother with the Scientific method? Measurement and inference from a clear truth. Newton for instance. Some laws are unwritten but still laws.
Bad Lieutenant said...
"On average, people should do what they're good at .....?"
8/11/17, 11:44 AM
In the real word my friend, "..people.." pick jobs/work that they like. That doesn't mean they are good at it, in fact, by a large most people suck at their chosen jobs no matter what they think.
Women who graduate college usually avoid STEM careers and pick (top 10) careers in fields as follows: Fashion Design, Interior Design, Elementary Education, Social Work, Nursing, Occupational Therapy, Art History, Medical Technology. Food and Nutrition and Health Care Administration. As compared to men (top 10) careers in fields as follows: Construction Management, Mechanical Engineering, Electrical Engineering, Physics, Aerospace Engineering, Civil Engineering, Computer Science, Landscape Architecture, Agriculture and Chemical Engineering.
This should tell everyone on the feminist "equal pay" bandwagon that it is mostly choice bias for women/men in picking careers, not institutional discrimination. Just another bubble being burst by the feminists own hands - LOL! | http://althouse.blogspot.com.au/2017/08/i-have-known-worked-for-and-taught.html | CC-MAIN-2017-43 | refinedweb | 8,649 | 65.93 |
Eric.
This is great information.
Can I do this without Visual Studio 2005?
Yep - the steps should be the same if you are using Visual Studio .NET (2001/2003). I am not sure about previous versions of VS (6, etc.)
If you aren't using Visual Studio at all, check your C++ compiler documentation to see if it supports building COM add-ins for Office. If it doesn't have explicit support, you may have to write a lot of code by hand in order to duplicate all the stuff that Visual Studio generates automatically.
Could you please upload the sample project?
I sent the example project to Jensen, but it looks like he might have just left for a nice long vacation. He probably won't be around to update the article for a while, so in the meantime you can download the sample project here:
I'll post the solution on officeblogs.net and link it to the article tonight.
OK, it's posted. Check the last line of the updated post.
This is great. It's exactly what I've been looking for.
Can I do this with the Express Editions from MS? This is great to be able to customize.
I don't have the Express Editions installed to test this so I'm not absolutely sure, but according to this VS feature matrix, it appears that the Express Editions do not support "writing add-ins" under Extensibility :(
This is excellent information, this is the only place that I have seen with comprehensive steps for adding ribbon support in unmanaged code. I do have one question though, I have everything running well but cannot seem to get the Id or Tag from the IRibbonControl in any of my callbacks. I am able to find the dispid without a problem using getidsofnames with “Id” and “Tag” but invoke always fails. Is there something that I am doing blatantly wrong. Could you post a code snippet for doing this? Thank you very much!
I'm not sure why it isn't working - are you using DISPATCH_PROPERTYGET instead of DISPATCH_METHOD?
Normally I wouldn't use the IDispatch interface - in this case I'd just use the IRibbonControl interface, like this:
Office::IRibbonControl *pRibbonControl;
pDispatch->QueryInterface(Office::IID_IRibbonControl, (void**)(&pRibbonControl));
pRibbonControl->get_Tag(&bstrTag);
I don't need to own office 12 to use MSO.DLL do I? Can I install my application that uses these cool controls on computers that don't have office 12?
Is there a redist package?
MSO.DLL is part of Office, and RibbonX only applies to 'eXtending' the built-in UI of Office, so it's not actually a full-featured set of components that can be reused in other apps.
But, the UI can be licensed for free, see this post for details:
There are several 3rd-party component vendors that provide controls you can use in your own apps. For example:
- DotNetBar ()
- SandRibbon ()
- etc..
In order to create Office Business Applications (OBAs), you need to understand the basics. There are
Great code ! I tried to do exactly the same thing in a shim DLL generated with the Shim Wizard v2 (my addin is in C#, but I need a C++/ATL DLL to deploy it in a better way). For an unknown reason, my callbacks aren't working. Actually, the GetCustomUI method is found correctly but my "OnAction" and "GetImage" callbacks are still not found. Any idea ?
Note that GetCustomUI is called directly on the interface, but the callbacks use IDispatch, as discussed above. Are you forwarding all of the IDispatch methods (GetIDsOfNames, Invoke(), etc) from your shim to your managed DLL?
It works ! I used the IDispatchImpl class of ATL which implements the GetIDsOfNames(), Invoke() for me so it wasn't the problem.
I finally managed to have the callbacks to work recreating a blank shared add-in with the wizard (like you did), implementing IRibbonExtensibility and ICallbackInterface and then including the CLR Loading methods of the Shim in this new addin. There was probably something wrong in the ATL options of the project generated by the Shim wizard.
."
Er, you do realise that this violates the transitivity requirement for QueryInterface?
()
Yes, if an add-in were to actually do this it would violate QI transitivity. It's not recommended and most tools (ATL, CLR-COM interop, etc.) won't let you do it, but the option is there for complex C++ add-ins if they need it.
I am new to C++ and ATL; but, how the heck would I use late binding to do this stuff so my addin works in older versions of outlook? I appreciate any feedback or tips. Thanks!
Since you're using C++ it should be pretty straightforward to make your add-in work on both Outlook 2007 and older versions (at least as far as RibbonX is concerned).
Previous versions of Outlook will simply not query for IID_IRibbonExtensibility, so you can have all that code there but it just won't run.
Your best best will probably be to link your add-in against the OFFICE11 version of the MSO.DLL typelibrary and just manually copy over the GUID and definition of IRibbonExtensibility from the OFFICE12 version.
It might work to just link against the 12 version and deploy that on 11, but I haven't tried it (I know that will not work for managed .NET add-ins because of PIA signing, but I am not sure about unmanaged).
Thanks! I did not realize using IDispatchImpl was already taking care of all this. I did try using OFFICE12 dll and everything worked fine all the way back to Outlook 2000. I don't undertand COM all that weel although I have created many programs (funny I know). Maybe I will try the GUID idea just to make sure??? Thanks again for the quick response.
Is there any samples to get at the office button through code? It says its possible in the customization guide for developers, doesn't tell you how.
It's just a tag in the XML under <ribbon> so you can get at it just like it's another tab:
<customUI ...>
<ribbon>
<officeMenu>
<!-- put your controls here -->
</officeMenu>
</ribbon>
</customUI>
Note that the Office Button itself cannot be altered, but the contents of the Office Menu can be (clicking the Button drops the Menu - the nomenclature is a bit confusing)
Were looking for a UI Engineer, I thought you might be interested in this job.
If you are, please send your word doc resume asap. Also they need someone who knows .Net
Please see the following job description below. If your not interested if you can referr someone that would be great.
Thanks Sean!
Michelle Yang
Technical Recruiter
Senior UI Engineer - Build the future system for a fast growing technology company - San Francisco
Location: San Francisco.
CALL TO ACTION
Keywords: Java, Swing, .Net, UI, multithreading, garbage collection, Core Java, Capital Markets, Fixed Income, real time applications, design and development, San Francisco
yangmichell@gmail.com
I don't know how to implement like "home style in word 2007" ,I have known gallery,but I have tried many times and failed. I have read your html,but i have not found xml descption.Could you tell me how to
implment it ?
thanks
I know this isn't your doing, but hopefully someone in Microsoft is escalating this issue:
We just can't believe how absurd this is.
Jensen Harris’ blog hosts an interesting article on Using RibbonX with C++ and ATL . RibbonX is the user
I'm using C++ 6.0 I have implememnted GetIDsFromNames and Invoke. The problem I have is I can't get my Invoke to work correctly.
For example OnAction callback I can do a GetIDsFromNames for ID (returns dispid 1) but when I try to Invoke it I get back an errorcode of 800a01a8
In fact any callback gives me the same problem.
Any ideas please ?
I don't know what that error code is. Where does it come from? I'm confused about how you are calling Invoke, Office should call it automatically after GetIDsOfNames. Do you have the "Show add-in user interface errors" option turned on, and does it give you any more information?
You are correct, Outlook is calling my Invoke method.
Sorry I wasn't clear. It after this point that things are not working correctly.
In my Invoke routine called by Outlook after makeing sure its the correct dispid I then do the following to get the Id of the RibbonControl - I know I canb use SmartPointers but I still get an error and I'm trying to track things down.
Ribbon::IRibbonControl * pCtrl = NULL;
LPDISPATCH pDisp = pDispParams->rgvarg[0].pdispVal;
pDisp->QueryInterface(Ribbon::IID_IRibbonControl, (LPVOID *)&pCtrl);
OLECHAR * szId = L"Id";
DISPPARAMS dispparamsNoArgs = {NULL, NULL, 0, 0};
DISPID dspid;
VARIANT vtResult;
// this returns S_OK and dispid = 1
hr = pCtrl->GetIDsOfNames(IID_NULL, &szId, 1, LOCALE_SYSTEM_DEFAULT, &dspid);
// this returns 800a01a8
hr = pCtrl->Invoke(dspid, IID_NULL, lcid, DISPATCH_PROPERTYGET, &dispparamsNoArgs, &vtResult, NULL, NULL);
Ah, I see now. I'm not sure why it's returning that error, but maybe it is related to the LCID you are passing in (I don't see where that's defined).
But, since you already have the IRibbonControl pointer, why don't you just call pCtrl->get_Id(&bstrId) instead of doing all of the IDispatch stuff?
The lcid is passed to me by Outlook - I have tried the default and it makes no difference.
pCtrl->get_Id(&bstrID) also fails with the same error.
I'm doing the IDispatch stuff to try to figure out where there error is happening.
The error code of 800a01a8 means something about Object Required - which I don't understand because I was passed the pointer to IRibbonControl.
I am looking at the code for get_ID() and it can only return these values:
0x80007000 E_OUTOFMEMORY
0x80004005 E_FAIL
0x80004003 E_POINTER
0x00000000 S_OK
So I have no idea where 0x800A01A8 is coming from (?).
What kind of control is this and where is it? (Button, gallery, etc, in a custom tab on an Outlook inspector?)
Its a button control on a custom tab on the Outlook inspector.
I can't reproduce the problem here, so I don't know what else to try :(. I am using an ATL-generated IDispatch like discussed above, so it's possible it has something to do with your homemade IDispatch implementation, though I don't know what it would be.
Hopefully you can work around it without needing the ID property, or you could try packing up a simple repro case and posting it in the Office support forums. Someone there should be able to take a look at it.
Thanks for trying. I need the ptr to IRibbonControl - all my buttons use the same OnAction callback - I then use the Id to distinguish which button I'm working with.
An interesting thing that I have noted is that all my button callbacks, getVisible, getLabel etc are all passing me just one parameter in Invoke (which answers to a QueryInterface for IRibbonControl).
Looking at the docs, getVisible should return 2 parameters ...
So I'm not sure what is going on.
if (!RibbonXml)
return E_POINTER;
*RibbonXml = SysAllocString(
L"<customUI xmlns=\"\">"
L" <ribbon startFromScratch=\"flase\">"
L" <tabs>"
L" <tab idMso=\"TabNewMailMessage\">"
L" <group id=\"GroupTest\""
L" label=\"Test\""
L" insertAfterMso=\"GroupClipboard\">"
L" <button id=\"CustomButton\""
L" imageMso=\"HappyFace\""
L" size=\"large\""
L" label=\"Click me!\""
L" onAction=\"ButtonClicked\"/>"
L" </group>"
L" </tab>"
L" </tabs>"
L" </ribbon>"
L"</customUI>"
);
return (*RibbonXml ? S_OK : E_OUTOFMEMORY);
In outlook 2007,i am adding a group test next to the clipboard in the Message tab.My above code is not working..Plz help
Finally one more question?
Also i need use xml file separately...plz tell me how to do it with vc++ 2005.
Make sure to enable "Show add-in user interface errors" in the options dialog. Then it will show you where your XML fails to validate. I can see a couple of errors to fix, such as startFromScratch=\"flase\".
As for loading from a file, it's just a string, so it shouldn't be too difficult. Searching for "load string from file" ought to come up with some code examples.
how to get a PNG file,how to converted into IpicureDisp and how to return back to office. give some sample code...
one more question? why do we need to implement ICallbackInterface?
Plz
Explain in detail..
i am using VC++2005
For info on PNGs and IPictureDisps, see this other blog post:
In the example above, the ATL classes use the typeinfo defined by ICallbackInterface in order to know how to invoke the callbacks, so it's necessary if you want to use ATL to auto-generate your IDispatch implementation. If you're not using ATL, it's not necessary.
You might have better luck posting further questions to the official support groups linked at the bottom of the article above. This is not a support forum.
."
Can I download this file from microsoft web site?
best regards from
No, MSO.dll is part of Office. It doesn't make sense to download it separately because you could not test any code written against it unless you have Office 2007 installed.
Your code works great for creating new ribbon items, but I'm having problems repurposing an existing ribbon item.
I have added the following to my xml:
<commands>
<command idMso="FileSaveAs" onAction="BuiltInControlClicked"/>
<command idMso="FileSaveAsPowerPoint97_2003" onAction="BuiltInControlClicked"/>
</commands>
I have added the following to my idl:
[id(6), helpstring("method BuiltInControlClicked")] HRESULT BuiltInControlClicked([in] IDispatch* ribbonControl, [in,out] VARIANT_BOOL* cancel);
I have added the following to my CConnect class:
STDMETHOD(BuiltInControlClicked)(IDispatch* ribbonControl, VARIANT_BOOL* cancel);
However, I get an error when Office tries to call the callback ("An error occurred while calling the callback").
Any suggestions about what I might be doing wrong?
Thanks
This error message means that the IDispatch::Invoke() call returned a failure HRESULT code, but the EXCEPINFO was not filled in and the code was not E_INVALIDARG or DISP_E_BADPARAMCOUNT (each of those conditions would give you a different message).
So my first thought would be that your callback is successfully getting invoked, but maybe it's returning a failure code?
If not, then the error code must be coming from the ATL classes that implement your IDispatch for you (maybe ATL is determining that your IDL info is malformed or something like that). When you added the new method to your IDL, did you use the ATL wizard to do it? You can do it manually, but the wizard does several steps which are easy to miss when doing it manually (updating the IDL, the typeinfo, the interface, the implementation, etc.)
I have a breakpoint in my callback which is never reached so the ATL classes must be returning the error. I created the callback once manually firat and then created a second callback using the ATL wizard and got the same result both times. Is it possible that the documentation about the expected method signature is out of date?
Sorry about that - you're right that the signature in the documentation is out of date. For the second parameter it should be "VARIANT*" instead of "VARIANT_BOOL*" in order to work with ATL's IDispatch parameter marshaling.
I figured that was not the problem since I would expect E_INVALIDARG to be returned in this case, but ATL seems to return a generic error code instead.
I'll get that documentation page updated with a note to use VARIANT instead of VARIANT_BOOL if you're using ATL.
Thanks for pointing out this problem!
Changing the method signature fixed my problem.
Thanks!!
Hi,
This is an informative blog, but it's updates are too infrequent. Can you update on a more regualr basis?
Thanks!
Paul
I appreciate the article but one would have to ask why the chuck didn't MS just include the ability to drag and drop "ribbons/commands/buttons into the tabs. Hmm - before it was right click on the toolbar, customize - remove buttons I didn't use drag in buttons I did. Now if I just learn programming I could probably create a button in 5 minutes - times the 30 I would change - OH yes, much simpler - thanks MS!
One new subscriber from Anothr Alerts
Trademarks |
Privacy Statement | http://blogs.msdn.com/jensenh/archive/2006/12/08/using-ribbonx-with-c-and-atl.aspx | crawl-002 | refinedweb | 2,729 | 62.98 |
Normally, terminate. powerful target <sys/types.h> and <signal convention in your programs because other components of the GNU/Linux system assume this behavior. For instance, shells assume this convention when you connect multiple programs with the && (logical and) and || (logical or) operators. Therefore, you should explicitly return zero from your main function, unless an error occurs..
If you typed in and ran the fork and exec example in Listing 3.4, you may have noticed that the output from the ls program often appears after the "main program" has already completed. That's because the child process, in which ls is run, is scheduled information about the process that exited, and you can choose whether you care about which child process terminated. function returns CPU usage statistics about the exiting child process, and the wait4 function allows you to specify additional options about which processes to wait for..
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h> <defunct>, information is no longer available. Listing 3.7 is what it looks like for a program to use a SIGCHLD handler to clean up its child processes.
#include <signal.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>. | http://www.makelinux.net/alp/026 | CC-MAIN-2015-48 | refinedweb | 201 | 58.99 |
I am new to programming and ran into an interesting problem. I wrote a simple program to try out using switch statements and at the end of the program I put cin.get(); to make the program wait for the user to press return. When I run the program, it'll do everything fine except it skips over cin.get();. I am using C++ 2008 Express Edition to write, compile and run my code. Here is my code.
Thanks for everyone's help! :-DThanks for everyone's help! :-DCode:#include <iostream> using namespace std; int main() { int num; cout<<"Please pick a number from 1 to 3: "; cin>> num; switch ( num ) { case 1: cout<<"You are #1!\n"; break; case 2: cout<<"2 is the 2nd loneliest #...\n"; break; case 3: cout<<"3rd times a charm!\n"; break; default: cout<<"I did not understand you input.\n"; break; } cin.get(); } | http://cboard.cprogramming.com/cplusplus-programming/97834-cin-get-%3B-not-working-my-switch-statement.html | CC-MAIN-2015-32 | refinedweb | 150 | 86.3 |
Source: Devanagari (Unicode block)
Unicode is a standard for representing characters from different languages using numbers called code points. Each character is associated with a unique code point. In Python, code points in the Basic Multilingual Plane are written as \uXXXX, where \u indicates a Unicode escape and XXXX is a four digit hexadecimal number.
Nepali texts are written in Devanagari script. Unicode code points for characters used in Devanagari script range from \u0900 to \u097F.

Find the Unicode table for Devanagari texts at: Unicode/UTF-8
Nepali texts are encoded using the utf-8 encoding, one of the most widely used character encodings. UTF-8 is a variable-width encoding in which a character is represented using one to four 8-bit units.
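As a quick illustration (Python 3 syntax used here for brevity), a single Devanagari code point occupies three such 8-bit units in UTF-8:

```python
# Python 3: one Devanagari code point becomes three UTF-8 bytes.
ka = "\u0915"                 # क (DEVANAGARI LETTER KA)
encoded = ka.encode("utf-8")
print(encoded)                # b'\xe0\xa4\x95'
print(len(encoded))           # 3
```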
Handling UTF-8 Encoded Texts in Python
Default encoding for Python 2.x is ASCII. So, when working with non-ASCII texts, include the encoding type in the source header, in this case utf-8.
# -*- coding: utf-8 -*-
If the header is not included in the source then you will get a SyntaxError:
SyntaxError: Non-ASCII character '\xe0' in file example.py on line 3, but no encoding declared; see for details
You don't need the declaration in Python 3 source because the default encoding for Python 3.x is utf-8.
String Representations in Python
Python 2.x supports two different types of strings: str, which is an 8-bit string, and unicode, which is used for Unicode strings.
In Python 2.x unicode strings are prefixed with u and byte strings are written as normal strings.
#python 2.x
s = u'unicode string in Python 2'
b = 'byte string in Python 2'
Unicode character can also be represented in a string using escape sequences.
a = u'\u0905'
print (a)

OUTPUT: अ
The python string supported by Python 3.x, str holds Unicode data and two byte types: bytes and bytearray. In Python 3.x Unicode strings are written as normal strings and byte strings are prefixed with b.
#python 3.x
s = 'unicode string in Python 3'
b = b'byte string in Python 3'
Encoding and Decoding
When characters are stored, they are stored as bytes, not characters. This is where the concept of encoding fits in. The process of mapping characters into bytes is called encoding and decoding is the process of mapping the bytes back into characters.
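A minimal round trip (shown in Python 3 syntax) makes the two directions concrete:

```python
# Python 3: encoding maps characters to bytes; decoding maps them back.
s = "हरुले"
b = s.encode("utf-8")
assert isinstance(b, bytes)
assert b.decode("utf-8") == s
print("round trip ok")
```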
The str() function produces a byte string. Passing it a string in an encoding other than ASCII (the default encoding in 2.x) gives a UnicodeEncodeError. This is because the ASCII encoding represents characters in the range 0-127 only.
str(u'हरुले')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-4: ordinal not in range(128)
The Unicode string must be encoded in utf-8 by using the encode() function before passing it to the str() function. In this stage the Unicode text is converted to bytes.
utfstring = u'हरुले'
bytestring = utfstring.encode("utf8")
The unicode() function takes an 8-bit string and converts it to Unicode using the encoding that has been specified. If the encoding is not specified, the default ASCII encoding is used. A UnicodeDecodeError occurs when a character in the string is above the ASCII range (128).
utfstring = u'हरुले'
bytestring = utfstring.encode("utf8")
unicode(bytestring)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe0 in position 0: ordinal not in range(128)
The UnicodeDecodeError can be fixed using decode(). Here, byte strings are converted back to Unicode code points.
utfstring = u'हरुले'
bytestring = utfstring.encode("utf8")
unicode(bytestring.decode("utf8"))
The default encoding for Python 3.x is utf-8, so all string in Python 3.x source code are Unicode.
Inspect Unicode Properties
Properties of Unicode characters can be inspected using the python unicodedata module. The module includes Unicode Character Database, which contains information about Unicode code points.
#-*- coding: utf-8 -*-
import unicodedata
To find Unicode character code point in decimal use ord() function.
print ord(u"क")

OUTPUT: 2325
To find Unicode character's name use name() function.
print unicodedata.name(u"क")

OUTPUT: DEVANAGARI LETTER KA
Unicode in re
The re module provides regular expression support for both 8-bit and Unicode strings. Python 3.x matches Unicode strings by default. Nepali text processing using re is explained in detail in Applications of Regular Expression in Text Analysis. Python 2.x has a different approach to using regular expressions with Unicode strings.
The regex string in Python 2.x is changed into Unicode-escape string by prefixing the string with u.
string = u'नपाली कंग्रेसका'
To search and match Unicode strings, the regular expression pattern is prefixed with ur so that it becomes a raw Unicode string.
Unicode strings can be processed using re module.
import re
One common task of string processing is replacing character in string.
string = u'नपाली कंग्रेसका'
replaced_string = re.sub(ur'[\u0928-\u0929]+', u'ने', string)
print replaced_string

OUTPUT: नेपाली कंग्रेसका
In the above example, न, which lies within the Unicode range u0928-u0929 is replaced with ने.
Finding patterns in strings is another common task in string processing. The re.compile() function can be used to compile a regular expression pattern into a regular expression object. The findall() function can then be used to find the pattern in a Unicode string.
regex = re.compile(ur'[^\u092E-\u092E]+')
result = regex.findall(u'अमला')
print result

OUTPUT: [u'\u0905', u'\u0932\u093e'] //\u0905 = अ and \u0932\u093e = ला
match() function can be used to search raw Unicode string at the beginning of the search string. search() function can be used to search raw Unicode string anywhere in the search string.
wordlist = [u'चार', u'पाँच', u'छ']
print ([w for w in wordlist if re.match(ur'\u091A', w)])

OUTPUT: [u'\u091a\u093e\u0930'] // \u091a\u093e\u0930 = चार

wordlist = [u'चार', u'पाँच', u'छ', u'एक']
print ([w for w in wordlist if re.search(ur'\u093e\u0901', w)]) // \u093e\u0901 = ाँ

OUTPUT: [u'\u092a\u093e\u0901\u091a'] // \u092a\u093e\u0901\u091a = पाँच
The regular expression character classes, such as \w, \W, \b, \B, \d, \D, \s and \S, can also be used in Unicode processing. To use these character classes in Python 2.x you need to set the flag to re.U (re.UNICODE).
text = re.compile('[\W]+', re.UNICODE)
print text.findall(u'नपाली')

OUTPUT: [u'\u093e', u'\u0940'] // \u093e = ा and \u0940 = ी
\W character class matches non-alphanumeric characters. However, in the above example, \W class considers vowel modifiers as non-alphanumeric characters which is not correct for Nepali texts, so test the predefined character classes before using them.
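One safer alternative, sketched here in Python 3 syntax, is to match against an explicit Devanagari code-point range instead of the predefined classes, so vowel signs stay attached to their words:

```python
import re

# The \u0900-\u097F range is the whole Devanagari block, including
# vowel modifiers that \w / \W handle incorrectly for Nepali text.
words = re.findall(r"[\u0900-\u097F]+", "नेपाली कांग्रेस")
print(words)  # ['नेपाली', 'कांग्रेस']
```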
Reading and Writing Unicode
Reading and writing Unicode data involves reading from or writing into a file with a particular encoding. We can read from or write into an encoded file using the codecs (encoders and decoders) module in Python 2.x.
The codecs module provides functions for encoding and decoding with any text encodings. The text file with a particular encoding can be opened in read, write or both mode using codecs.open() function. The default mode opens file in read mode 'r'.
import codecs

#default mode
f = codecs.open('test.txt', encoding='utf-8')
codecs.open() function returns Unicode text, so the text returned by object f must be encoded using suitable encoding type.
for line in f:
    line = line.strip()
    print (line.encode("utf8"))
To print the read text in \uXXXX representations encode it using the python specific encoding called unicode_escape, which converts all the non-ascii characters in their respective \uXXXX forms. Code points above ASCII range(0-127) and below 256 are represented in two digit forms as \xXX.
INPUT: तराई-मधेसको

OUTPUT:
\u0924\u0930\u093e\u0908\u2013\u092e\u0927\u0947\u0938\u0915\u094b //python 2
b'\\u0924\\u0930\\u093e\\u0908\\u2013\\u092e\\u0927\\u0947\\u0938\\u0915\\u094b' //python 3
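The same escaping can be reproduced directly in Python 3:

```python
# unicode_escape turns every non-ASCII character into its \uXXXX escape.
s = "तराई"
print(s.encode("unicode_escape"))  # b'\\u0924\\u0930\\u093e\\u0908'
```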
write() function can be used to write the file.
f = codecs.open('test', encoding='utf-8', mode='w+')
f.write(u'\u092a\u0930\u093f रुपायन')
The codecs module still works in Python 3.x, but it is no longer needed because Python 3.x comes with a built-in open() function that can work with encoded files.
with open('test.txt', encoding='utf-8') as f:
    for line in f:
        print(repr(line))
write() function can be used to write the file.
with open('test', encoding='utf-8', mode='w+') as f:
    f.write('\u092a\u0930\u093f रुपायन')
noun
a plan for the successful operation of a business, identifying sources of revenue, the intended customer base, products, and details of financing.
During the Anti-Trust hearing with Mark Zuckerberg in April 2018, we got to watch Zuckerberg explain the Internet to lawmakers of the United States. One Senator, Orrin Hatch seemed especially confused by how exactly Facebook managed to buy Zuckerberg his expensive suit. Zuckerberg explained how advertisements form the backbone of Facebook’s return revenue. Orrin however didn’t seem to catch the drift as he proceeded to ask, “How do you maintain a business model whereby your users do…
The Coca-Cola company has embraced the reuse of its bottles and all the environmental and monetary benefits that come with that. When customers buy a Coke drink in glass bottles, they are rewarded upon returning the empty bottle. This got me thinking about all the plastic bottles and cans that do not warrant a reward, leading to them being tossed and wasted. There should be a way of automatically identifying Coca Cola bottles for reuse within the company.
Coca Cola bottles are easily discernable using the labels that have a large, “Coca Cola “ print on them. The print is…
Lung Cancer is the leading cancer killer in both men and women in America. It claims more lives than breast, colon, and prostate cancer combined. As with all the other kinds of cancer, a nodule in the lungs is what is indicative of cancer.
CT scans taken of the lungs are what the doctors usually use to determine the presence of nodules in the lungs. Note that several images are drawn from a single patient, so that a 3D image of the lungs can be formed. The images are of the sagittal, coronal and axial views. Hence for a…
As a recent believer in the works of dimensionality reduction and noise filtering, I would be loath to keep myself from preaching the good word of PCA.
Principal Component Analysis, like most other ML models, is self-descriptive. Take it to mean that we are taking the principal (or most important) components of the data that is provided to us. Thusly, PCA is essentially a dimensionality reduction algorithm.
from sklearn.decomposition import PCA
pca = PCA()
Higher-dimensional data can be reduced to lower dimensionality by zeroing some of the (principal) components. The purpose is mainly to maintain maximum data variance.
pca =…
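Since the author's snippet is cut off, here is an independent sketch of the same idea — dropping low-variance components while retaining most of the variance — using plain NumPy instead of scikit-learn (the data and the 95% threshold are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 4] = X[:, 0] * 2 + 0.01 * rng.normal(size=200)  # nearly redundant column

# Center the data and take its SVD; right singular vectors are the components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep the smallest number of components explaining 95% of the variance.
explained = (S ** 2) / np.sum(S ** 2)
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1

reduced = Xc @ Vt[:k].T
print(X.shape, "->", reduced.shape)  # (200, 5) -> (200, 4)
```

The near-duplicate fifth column contributes almost no independent variance, so it is the dimension that gets dropped.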
One of the playground datasets for Data Scientists is the House Prices dataset. I will use this to demonstrate how to go about completing a data science project involving Regression.
Before taking up any DS project, one should always try to get an understanding of what exactly the output of the project should be. You can do that in a couple of ways:
1. Check the sample submission files
2. Determine the target data
Doing this helps you determine whether you will employ the services of a Supervised or Unsupervised ML technique. If there is a target data (like in…
Riddle me this, what probably has two thumbs but is definitely the reason why you are so screwed up mentally? The answer, your African parent.
I recently decided to go for therapy after I had suffered an intense bit of depression. I can’t lie, things were looking pretty bleak for me. Let’s delve a bit into what was happening with me. I had just got(Notice I used ‘got’ and not ‘gotten’) out of a two year relationship after finding out that my girlfriend had been having fun times with honestly my closest friend. My mind was in shambles since I…
A neural network can be described as a series of algorithms that solve a problem by mimicking the way the human brain works. Neural networks adapt to different inputs without having to change the algorithm.
My lack of expertise in human Biology keeps me from delving too deep into the human brain. However, I believe it’s pretty much obvious that when we’re talking about neural networks, the inspiration comes from the neuron in human beings.
The main parts of a neuron are
Neurons take in inputs (p).
The input is weighted using a weight function (w).
…
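The inputs-weights-activation description above can be sketched in a few lines (the particular weights, bias and sigmoid activation here are illustrative choices, not the author's):

```python
import numpy as np

# Minimal neuron: weighted inputs plus a bias, passed through an activation.
def neuron(p, w, b):
    return 1.0 / (1.0 + np.exp(-(np.dot(w, p) + b)))  # sigmoid activation

# Two inputs, two weights, one bias term.
print(round(neuron(np.array([1.0, 0.5]), np.array([0.4, -0.2]), 0.1), 4))  # 0.5987
```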
For this we will use the Breast Cancer Wisconsin Dataset.
The aim here is to classify tumors of the breast as either ‘Malignant’ or ‘Benign’.
Firstly, I feel it is important to decide whether we need a Supervised or Unsupervised Machine Learning technique. Supervised ML techniques are used when we need to feed the algorithm with the target dataset (Usually labelled y_train) whereas in Unsupervised ML one does not assign the algorithm the target dataset but instead allows it to form associations of its own and classifies the datasets using the aforementioned associations.
The aim is clearly to classify the…
Anyone who has ever had a conversation on programming with me knows how I love to spam the phrase, “You have to be ready not to know anything”. Looking back I can still remember being a total greenhorn (Forgive my use of this cliché) on matters concerning DS. I am by no measure an expert on the matter but I wouldn’t count myself as a slouch either.
A lot of experts recommend a top-down approach when trying to learn Data Science and Machine Learning and I feel the same. …
I am an aspiring Data Scientist who enjoys talking about the topic. | https://tommytsuma7.medium.com/?source=post_internal_links---------7---------------------------- | CC-MAIN-2021-25 | refinedweb | 910 | 61.36 |
Because processor-specific assembly language is used, this library does not work on Netduino, ChipKIT or other advanced “Arduino-like” boards. Others may have written code and libraries for such boards, but we can’t provide technical support for any bugs or trouble there; that’s frontier stuff. Some of this is covered in the “Advanced Coding” section.
- Visit the Adafruit_NeoPixel library page at Github.com.
- Select the “Download ZIP” button, or simply click this link to download directly.
- Uncompress the ZIP file after it's finished downloading, rename the resulting folder to Adafruit_NeoPixel, and move it into your Arduino libraries folder.
- Re-start the Arduino IDE if it’s currently running.
Basic Connections

To get started, let’s assume you have some model of Arduino microcontroller connected to the computer’s USB port. We’ll elaborate on the finer points of powering NeoPixels later, but for now you should use a separate 5V DC power supply (or a 3.7V lithium-ion battery for a Flora wearable project).
Identify the “input” end of your NeoPixel strip, pixel(s) or other device. On some, there will be a solder pad labeled “DIN” or “DI” (data input). Others will have an arrow showing the direction that data moves. The data input can originate from any digital pin on the Arduino, but all the example code is set up for digital pin 6 by default. The NeoPixel shield comes wired this way.
If using Flora with an attached lithium-ion battery: connect the +5V input on the strip to the VBATT pad on Flora, GND from the strip to any GND pad on Flora, and DIN to Flora pin D6. 144 pixel strips are so tightly packed, there’s no room for labels other than –, + and the data direction arrows. Data is the un-labeled pad.
Can NeoPixels be powered directly from the Arduino’s 5V pin?
A Simple Code Example: strandtest

Launch the Arduino IDE, open the strandtest example sketch and upload it to your board; you should see a little light show.
Nothing happens!
All NeoPixel sketches begin by including the header file:
#include <Adafruit_NeoPixel.h>
Adafruit_NeoPixel strip = Adafruit_NeoPixel(60, PIN, NEO_GRB + NEO_KHZ800);
The last line declares a NeoPixel object. We’ll refer to this by name later to control the strip of pixels. There are three parameters or arguments in parenthesis:
- The number of sequential NeoPixels in the strip.
- The pin number to which the NeoPixel data input is connected.
- A value indicating the type of NeoPixels being connected (for example NEO_GRB + NEO_KHZ800).
void setup() {
  strip.begin();
  strip.show(); // Initialize all pixels to 'off'
}
There are two ways to set the color of a pixel. The first is:
strip.setPixelColor(n, red, green, blue);
The next three arguments are the pixel color, expressed as red, green and blue brightness levels, where 0 is dimmest (off) and 255 is maximum brightness.
To set the 12th pixel (#11, counting from 0) to magenta (red + blue), you could write:
strip.setPixelColor(11, 255, 0, 255);
strip.setPixelColor(n, color);
You can also convert separate red, green and blue values into a single 32-bit type for later use:
uint32_t magenta = strip.Color(255, 0, 255);
strip.show();
uint32_t color = strip.getPixelColor(11);
uint16_t n = strip.numPixels();
strip.setBrightness(64);
- forgetting to call strip.begin() in setup().
- forgetting to call strip.show() after setting pixel colors.
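Putting the calls covered above into one minimal sketch (the pin number, pixel count and color here are assumptions, not values prescribed by the guide):

```cpp
#include <Adafruit_NeoPixel.h>

#define PIN        6   // data input pin (assumption)
#define NUM_PIXELS 16  // length of the strip (assumption)

Adafruit_NeoPixel strip = Adafruit_NeoPixel(NUM_PIXELS, PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();             // don't forget this...
  strip.setBrightness(64);   // run at roughly 1/4 brightness
  strip.show();              // ...or this: start with all pixels off
}

void loop() {
  // Light the pixels magenta one at a time.
  for (uint16_t i = 0; i < strip.numPixels(); i++) {
    strip.setPixelColor(i, strip.Color(255, 0, 255));
    strip.show();
    delay(50);
  }
}
```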
Can I have multiple NeoPixel objects on different pins?
Adafruit_NeoPixel strip_a = Adafruit_NeoPixel(16, 5);
Adafruit_NeoPixel strip_b = Adafruit_NeoPixel(16, 6);
Pixels Gobble RAM

Each NeoPixel requires about 3 bytes of RAM.
The QTextIStream class is a convenience class for input streams. More...
#include <qtextstream.h>
Inherits QTextStream.
List of all member functions.
For simple tasks, code should be simple. Hence this class is a shorthand to avoid passing the mode argument to the normal QTextStream constructors.
This makes it easy, for example, to write things like this:
QString data = "123 456"; int a, b; QTextIStream(&data) >> a >> b;
See also QTextOStream.
Constructs a stream to read from the array ba.
Constructs a stream to read from string s.
Constructs a stream to read from the file f.
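A small usage sketch for the file-based constructor (Qt 2.x-era API; the file name is hypothetical):

```cpp
#include <qfile.h>
#include <qtextstream.h>

void readData()
{
    QFile f("data.txt");            // hypothetical file name
    if (f.open(IO_ReadOnly)) {      // Qt 2.x open-mode flag
        QString word;
        int n = 0;
        QTextIStream(&f) >> word >> n;  // read two whitespace-separated fields
        f.close();
    }
}
```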
This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved. | https://doc.qt.io/archives/2.3/qtextistream.html | CC-MAIN-2021-49 | refinedweb | 121 | 76.52 |
Hooking the System Service Dispatch Table (SSDT)
Introduction
In this article we’ll present how we can hook the System Service Dispatch Table, but first we have to establish what the SSDT actually is and how it is used by the operating system. In order to understand how and why the SSDT table is used, we must first talk about system calls.
We know two ways that a system call can be invoked:
- int 0x2e instruction: used mainly in older versions of Windows operating systems; the system call number is stored in the eax register and the corresponding routine is then invoked in the kernel.
- sysenter instruction: the sysenter instruction uses the MSRs in order to quickly call into the kernel and is mainly used in newer versions of Windows.
The SSDT table holds pointers to kernel functions, which are used upon system call invocation by either the "int 0x2e" or the sysenter instruction. The value stored in the eax register is the system call number, which will be invoked in the kernel. In the picture below we can see that sysenter is called in the ntdll.dll library and the system call number 0x25 will be invoked.
When calling the system call routine, the system call number is stored in the eax register, which is a 32-bit value. But how is that number then used? It can't be an index into a table of pointers, because if 32 bits were used as an index, it would mean the table is 4GB large, which is certainly not so. With a little bit of research we can find out that the system service number is broken up into three parts:
- bits 0-11: the system service number (SSN) to be invoked.
- bits 12-13: the service descriptor table (SDT).
- bits 14-31: not used.
Only the lower 12-bits are used as an index into the table, which means the table is 4096 bytes in size. The upper 18-bits are not used and the middle 2-bits are used to select the appropriate service descriptor table – therefore we can have a maximum of 4 system descriptor tables (SDT). In Windows operating systems, only two tables are used and they are called KeServiceDescriptorTable (middle bits set to 0x00) and KeServiceDescriptorTableShadow (middle bits set to 0x01).
This means that the value in the EAX register, which is the system service number, can hold the following values (presenting the 16-bit values):
- 0000xxxx xxxxxxxx: used by KeServiceDescriptorTable, where the x's can be 0 or 1, which further implies that the first table is used if the system service numbers are from 0x0 – 0xFFF.
- 0001yyyy yyyyyyyy: used by KeServiceDescriptorTableShadow, where y's can be 0 or 1, which further implies that the second table is used if the system service numbers are from 0x1000 – 0x1FFF.
This means that the system service numbers in the eax register can only be in the range 0x0000 – 0x1FFF, and all other values are invalid.
We can dump all the symbols which start with KeServiceDescriptor by using the "x nt!KeServiceDescriptor*" command in WinDbg. The result of running that command can be seen below.
Note that the KeServiceDescriptorTable is exported by ntoskrnl.exe, while the KeServiceDescriptorTableShadow is not exported. Both Service Descriptor Tables (SDTs) contain a structure called the System Service Table (SST), which has the layout presented below [2].
Every System Service Table (SST) contains the following fields:
- ServiceTable: points to an array of virtual addresses – the SSDT (System Service Dispatch Table), where each entry further points to a kernel routine.
- CounterTable: not used.
- ServiceLimit: number of entries in the SSDT table.
- ArgumentTable: points to an array of bytes – the SSDP (System Service Parameter Table), where each byte represents the number of bytes allocated for the function arguments of the corresponding SSDT routine.
Let’s present an overwrite of the process with a picture below, where we can see that “int 0x2e” as well as the sysenter instruction execute a system call based upon the SSN stored in the eax register. The Service Descriptor Table Number (SDTN) points to one of the 4 SDT tables, where only the first two are actually used and point to the SST. The KeServiceDescriptorTable points to one SST, which further points to the SSDT table. The KeServiceDescriptorTableShadow points to two SSTs where the first one points to the same SSDT table and the second one point to a secondary SSDT table.
Let’s now look at the whole process from a practical point of view and actually present all the previously described stuff on an actual Windows operating system. We’re already presented the KeServiceDescriptorTable and KeServiceDescriptorTableShadow, so we must now display the SST fields of the KeServiceDescriptorTable as well as the KeServiceDescriptorTableShadow, which we can do with the dps command as presented below. Notice that the first 4 bytes contain a pointer to the SSDT table KiServiceTable, while the last 4 bytes contain the pointer to the argument table KiArgumentTable.
To summarize the values above, let’s present all fields of:
- KeServiceDescriptorTable
- ServiceTable : 0x826af6f0
- CounterTable : not used
- ServiceLimit : 0x191 (401 in decimal)
- ArgumentTable : 0x826afd38
- KeServiceDescriptorTableShadow
- ServiceTable : 0x967a5000
- CounterTable : not used
- ServiceLimit : 0x339 (825 in decimal)
- ArgumentTable : 0x967a602c
Note that the KeServiceDescriptorTableShadow actually contains two SST entries, where the first one is the same as with KeServiceDescriptorTable, and a second SST is totally different. This is also the reason why we displayed 32 bytes in the second dps commad whereas we only displayed 16 bytes in the first dps command.
We can also see that there are 0x191 entries in the first SSDT table, while there are 0x339 entries in the second SSDT table. The maximum number of entries in a single SSDT table is 0x400 (or 1024 in decimal). To dump the whole KiServiceTable we can use the “dps nt!KiServiceTable L poi nt!KiServiceLimit” command, which will automatically dump the whole table since we’re using the “poi nt!KiServiceLimit”. The “dps nt!KiServiceLimit l1” command actually prints the value of 0x191, which is exactly the number of entries in the SSDT table.
We’re specifically interested in the KiServiceTable, which is the SSDT table we’ll be hooking in the remainder of the article. At this point I just wanted to say that the SSDT table is very similar to the IDT table we’ve met in real mode, which is still used before entering protected mode. Therefore, we must just overwrite the pointers in the SSDT table in order to hook any kernel routine stored in there.
Read-Only Memory and the SSDT Table
The first problem that we encounter when overwriting the SSDT entries is that the SSDT is located in a read-only memory, which means that we can’t just write a pointer to our function in the selected entry in the SSDT table.
To really understand what’s going on when we would like to read, write or execute from some virtual address, take a look at the picture below. The picture presents the whole process of translating a virtual address to its corresponding physical address and the security features along the way.
Let’s describe the picture above in detail: in the code the code segment and virtual address are used to reference certain memory location. At first, the WP flag in CR0 register is checked whether it contains the value 0 or 1. Basically the WP is used to protect the read-only memory from being written to in kernel-mode, which allows additional protection when we’ve gained access to protected mode. Note that the WP bit only takes effect in kernel-mode, while the user-mode code can never write to read-only pages, regardless of the value stored in WP bit. The WP bit can hold two values:
- 0: the kernel is allowed to write to read-only pages regardless of the R/W and U/S flags in PDEs and PTEs.
- 1: the kernel is not allowed to write to read-only pages due to the WP not being set; rather than that, the R/W and U/S flags in PDEs and PTEs are used to determine whether kernel has access to certain pages – it only has access to pages marked as writeable, but never to pages marked as read-only.
When checking whether a certain memory access is allowed, the following structures are consulted:
- Segment Table: if the DPL of the segment descriptor is numerically higher than or equal to the RPL of the code segment register, then access to that segment is allowed; otherwise it's denied. The DPL member in the segment descriptor is used to differentiate between privileged and unprivileged code. In the segment table there are two code and two data segments, one pair having DPL=0 and the other DPL=3. They are all mapped to the same base address 0x00000000, but are considered different segments. This is because the CS register can hold the value 0x08 (when privileged code executes) or 0x1B (when unprivileged code executes). If we're executing code while CS is set to 0x08, privileged code is being executed; if we're executing the same code while CS is set to 0x1B, it's considered unprivileged code.
- Page Directory Table / Page Table: if the WP flag in CR0 register is set to 1, then the R/W and U/S flags in page directory table and page table are used to defined access the kernel has to specific memory page. If the R/W (Read/Write) flag is set to 1, then the kernel can read and write to the page, otherwise it can merely read from it. The U/S (User/Supervisor) flag is set 1 for all pages that contain the kernel addresses, which are above 0x80000000. If an unprivileged code (from user-mode) tries to access such pages, an access violation occurs: a code that has CS register set to 0x1B doesn’t have access to such pages.
We’ve seen that kernel address protection is realized with the combination of segmentation and page-level protection. Basically the CS register is used to determine whether the code is given read/write access to the specific page in memory. Remember that we can execute privileged instructions from user-mode by using one of the following approaches [4]:
- “int 0x2e” instruction
- sysenter instruction
- far call
Now that we’ve got that all cleared out, let’s see what kind of display we have regarding the SSDT table. First we have to check the value stored in the 16-bit (WP – Write Protect) of the CR0 register. We can easily do so by using the .formats command to print the value of CR0 register in binary form as seen below. The highlighted bit is WP bit and is set to 1, which means that page directory table and page table are used to determine whether the CPU can read/write from/to pages. Because of this, the CPU won’t be able to write to read-only pages.
Let’s now display a single entry from the SSDT KiServiceTable by using the “dps nt!KiServiceTable l1” command, which displays the first entry from the SSDT table that’s located at address 0x826af6f0. After that, we used the “!pte 0x826af6f0” command to display various flags about the PDE/PTE in which the address is located.
On the picture above we can see the first column, which represents the PDE that has the following flags: DAKWEV and the second column, which represents the PTE that has the following flags: GAKREV. The flags in PDE/PTE entries printed by the !pte command are presented below [5]:
- Valid (V): the data is located in physical memory and has not been swapped out.
- Read/Write (R/W): the data is read-only or writeable.
- User/Kernel (U/K): the page is either owned by user-mode or kernel-mode.
- Writethrough (T): a writethrough caching policy.
- CacheDisable (N): whether or not the page can be cached.
- Accessed (A): set when the page has been accessed by either reading from it or writing in it.
- Dirty (D): the data in the page has been modified.
- LargePage (L): only used in the PDE and specifies whether large page sizes are used, which is true when PAE is enabled. If set, the page size is 4MB, otherwise it's 4KB.
- Global (G): affects the translation caching flushes and translation lookaside buffer cache.
- Prototype (P): a software field used by Windows.
- Executable (E): the instructions in the page can be executed.
Once we’ve studied the flags used by the !pte command, we can see that the PDE is marked as writeable, but PTE is read-only, but both are executable. This means that we cannot simply write some values into the PTE where the SSDT table is located. Now that we know that we’re dealing with a system service dispatch table located in read-only memory, we can start looking at a ways to bypass that limitation and mark the memory as writeable. There are three ways to get write access to the SSDT table [1]:
- Change CR0 WP Flag: if we set the WP flag in CR0 register to 0, the PDE/PTE restrictions are not considered when granting write access to the read-only pages when we’re in kernel-mode.
- Modify Registry: we can alter the “HKLMSYSTEMCurrentControlSetControlSession ManagerMemoryManagementEnforceWriteProtection” registry key, which allows us write access.
- MDL (Memory Descriptor List): the Windows operating system uses MDL to describe the physical page layout for a virtual memory buffer. To make the SSDT table writeable, we need to allocate our own MDL, which is associated with the physical memory of where the SSDT table is stored. Because we have allocated our own MDL, we can control it in any way we want and therefore can also change its flags accordingly.
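A sketch of the MDL approach (WDK-only code — it won't build outside a driver project; MmCreateMdl, MmBuildMdlForNonPagedPool and MmMapLockedPages are real WDK routines, but error handling and cleanup are omitted here):

```c
#include <ntddk.h>

PVOID g_mappedSSDT = NULL;
PMDL  g_mdl        = NULL;

/* Map the SSDT's physical pages through a second, writeable virtual address. */
NTSTATUS MapSSDTWriteable(PVOID ssdtBase, ULONG length)
{
    g_mdl = MmCreateMdl(NULL, ssdtBase, length);
    if (!g_mdl) return STATUS_INSUFFICIENT_RESOURCES;

    MmBuildMdlForNonPagedPool(g_mdl);            /* the SSDT is nonpaged    */
    g_mdl->MdlFlags |= MDL_MAPPED_TO_SYSTEM_VA;  /* we control this mapping */

    g_mappedSSDT = MmMapLockedPages(g_mdl, KernelMode);
    return g_mappedSSDT ? STATUS_SUCCESS : STATUS_UNSUCCESSFUL;
}
```

Writes then go through g_mappedSSDT instead of the original read-only address.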
In this article, we’ll take a look at how we can make the SSDT writeable through the first method by changing the WP bit in the CR0 register. I chose this method, because it’s the easiest to achieve and can be programming a few lines of assembly code. In the output below, I presented two functions for enabling and disabling the WP bit in the CR0 register.
[plain]
/*
* Disable the WP bit in CR0 register.
*/
void DisableWP() {
__asm {
push edx;
mov edx, cr0;
and edx, 0xFFFEFFFF;
mov cr0, edx;
pop edx;
}
}
/*
* Enable the WP bit in CR0 register.
*/
void EnableWP() {
__asm {
push edx;
mov edx, cr0;
or edx, 0x00010000;
mov cr0, edx;
pop edx;
}
}
[/plain]
The DisableWP function is storing the value of register edx on the stack, so we can later restore it: the push and pop instructions. Then we’re moving the value of register CR0 into register edx and performing an AND operation with 0xFFFEFFFF (notice the middle E). This effectively performs the AND operation with a binary number [1111 1111 1111 1110 1111 1111 1111 1111], which means that we’re keeping all the bits except the WP bit from the original CR0 register – basically we’re clearing the WP register. After the AND operation, we’re overwriting the value in CR0 register.
The EnableWP function does something very similar to the DisableWP function, except that it’s performing an OR operation with 0x00010000. This means that the OR operation with a binary number [0000 0000 0000 0001 0000 0000 0000 0000] is performed – this sets the WP flag back to 1.
The DisableWP function makes the memory where the SSDT table is contained writeable and in the next part of the article we can actually hook a function call by overwriting the address in the SSDT table. Remember that hooking the functions without somehow enabling kernel-mode to write to the read-only memory wouldn’t be possible.
Let's see what happens if we don't disable the WP bit in the CR0 register, but still try to write to the read-only memory. We can try the same program as defined in the hookssdt project, except that we comment out the DisableWP() function call in the HookSSDT function. At the time of the InterlockedExchange function call, the system will crash and we'll be left with the following message written in the WinDbg debugger.
From the message, we can see that the previous DbgPrint was still successfully executed, but the system crashed at the time of calling InterlockedExchange. At this time it’s not immediately clear why the crash occurred, but we can execute the “!analyze -v” command, where we can see the details about the error. Part of the output can be seen below, where we can see that the driver wanted to write to read-only memory. This happened because we didn’t call the DisableWP() function, which would disable the WP bit in the CR0 register and thus enable the kernel-mode to write to read-only memory.
Because we clearly don’t have write access to the read-only memory, the system crashed giving us the above error. We can also execute the “r cr0” after the crash to display the contents of the CR0 register to make sure that the WP bit is set to 1. If you look at the picture below, you can notice the middle 1 in the register value, which indicates the WP bit is set and the kernel doesn’t have read-write access to the read-only memory.
In this part of the article, we've shown why we need to use one of the three techniques to enable kernel-mode code to write to read-only memory. Additionally, we also presented how the operating system performs checks when going from user-mode to kernel-mode to prevent user-mode code from accessing privileged memory.
Setting up the Environment
At this point we also have to present what kind of environment we'll be working with and how to set it up. I was working with the Windows 7 operating system. We need to be aware of the fact that for kernel debugging we need two Windows operating systems (basically, with SoftICE we would only need one, but we'll be doing everything with WinDbg).
The first Windows operating system needs to be configured so it will start in debugging mode – this can be done by executing the following instructions in Windows cmd.exe under Administrator privileges. The commands below will set Windows to start in debugging mode, where we'll be able to debug Windows over a serial port.
[plain]
# bcdedit /set debug on
# bcdedit /set debugtype serial
# bcdedit /set debugport 1
# bcdedit /set baudrate 115200
# bcdedit /set {bootmgr} displaybootmenu yes
# bcdedit /timeout 10
[/plain]
In order to debug the Windows operating system, we must first start another Windows virtual machine with WinDbg installed and go to File – Kernel Debugging and accept the defaults as presented below. If we didn’t use exactly the same commands as outlined above, we need to change the settings in the Kernel Debugging dialog appropriately.
After pressing the OK button, the WinDbg will listen for incoming connections on a serial port. Because we’ve setup the Windows operating system in the other virtual machine to connect to the same serial port, we’ll be able to debug Windows from the started WinDbg debugger. More than that, we’ll be able to follow the execution of the whole operating system, not just the user-mode code. When debugging with Ida Pro, OllyDbg or ImmunityDebugger, we can’t see the kernel-mode instructions located at virtual addresses 0x80000000-0xFFFFFFFF being executed; we’re jumping right over them, because we’re running a user-mode debugger. In this case, we’ve specifically instructed our Windows operating system to connect to the serial port, where the WinDbg debugger is listening for an incoming connection. Therefore, we’re able to debug user-mode as well as kernel-mode instructions with ease.
At this point, we've effectively started the Windows operating system in debug mode and we can start/stop it at will through the WinDbg debugger. Let's first pause.
We’ve presented how we can go about kernel debugging in a Windows operating system. We need to use that knowledge in the next section where we’ll actually hook a function whose pointer is stored in the SSDT table. Remember that without kernel debugging, this kind of endeavour would have been much harder, if not impossible to achieve.
Hooking the SSDT
Up until now, we’ve been laying the ground preparing for the actual SSDT hooking and at this time we’ve done all preparations and can actually do it.
Remember that we mentioned that KeServiceDescriptorTable is exported by the ntoskrnl.exe, while the KeServiceDescriptorTableShadow is not? We’re going to need to know this detail later in the article, so you should make a note of it. Whenever a symbol is exported by the kernel, we can access it by using the “__declspec(dllimport)” declaration. The dllimport can be used to tell the compiler that a specified function is exported by the kernel and shouldn’t throw an error when using it in the program – usually the compiler would throw an error, because it doesn’t know anything about that function, so we must specifically tell it not to worry about it. When using an exported symbol like that, it is the job of a linker to find out its address and make appropriate changes by exchanging the symbol with its actual address. So, in order to be able to use the KeServiceDescriptorTable symbol, we need the following code.
[plain]
/* The structure representing the System Service Table. */
typedef struct SystemServiceTable {
UINT32* ServiceTable;
UINT32* CounterTable;
UINT32 ServiceLimit;
UINT32* ArgumentTable;
} SST;
/* Declaration of KeServiceDescriptorTable, which is exported by ntoskrnl.exe. */
__declspec(dllimport) SST KeServiceDescriptorTable;
[/plain]
Since the KeServiceDescriptorTable symbol is just a location in the memory, we need to define the structure and apply it to that memory location. This means that when referencing the KeServiceDescriptorTable, the 16-byte SystemServiceTable will automatically be applied to the contiguous memory to form a meaningful structure. Therefore, we’ll be able to access the member of the SystemServiceTable by using the dot notation.
We also have to discuss a few functions that we need to be aware of when hooking SSDT entries. The first function is InterlockedExchange, which sets a 32-bit variable to the specified value as an atomic operation. An atomic operation is an operation which completes in one go, no matter what. That means that when calling the InterlockedExchange function, the 32-bit value will be written to the specified location without being interrupted by some other processor. Note that there's only one SSDT table for all processors, so we need to ensure that a second processor cannot interrupt the first processor while it is writing the value, because we could end up with the first processor writing just a word, not a dword, to the destination, which could result in a system crash or some other error. The prototype of the InterlockedExchange function can be seen below [7].
The InterlockedExchange function takes 2 arguments as input [7]:
- Target: a pointer to the value to be exchanged.
- Value: the value to be exchanged with the value pointed to by the Target.
The InterlockedExchange function returns the initial value of the Target parameter.
The second function, which we’ll also be hooking, is the ZwQuerySystemInformation, which retrieves the specified system information. Note that the function is no longer available on the Windows 8 operating system. The prototype of the function is presented below [8]:
The function takes the following parameters [8]
- SystemInformationClass: specifies the type of system information that we would like to retrieve. The parameter can be one of the following values defined in the SYSTEM_INFORMATION_CLASS enum type; the most important are the highlighted values, which can be used to retrieve relevant system data.
  - SystemBasicInformation: the number of processes in the system.
  - SystemPerformanceInformation: returns information that can be used to generate a seed for a random number generator.
  - SystemTimeOfDayInformation: returns information that can be used to generate a seed for a random number generator.
  - SystemProcessInformation: returns an array of structures, one for each process running in the system, which can be used to get various information about a process, like its number of open handles, page-file usage, number of allocated memory pages, etc.
  - SystemProcessorPerformanceInformation: returns an array of structures, one for each processor in the system, used to get information about each processor.
  - SystemInterruptInformation: returns information that can be used to generate a seed for a random number generator.
  - SystemExceptionInformation: returns information that can be used to generate a seed for a random number generator.
  - SystemRegistryQuotaInformation: returns a SYSTEM_REGISTRY_QUOTA_INFORMATION structure.
  - SystemLookasideInformation: returns information that can be used to generate a seed for a random number generator.
- SystemInformation: a pointer to the buffer which will receive the requested information – the size and structure of the returned buffer depend entirely upon the SystemInformationClass.
- SystemInformationLength: the size of the buffer pointed to by the SystemInformation parameter, specified in bytes.
- ReturnLength: an optional parameter, where the actual size of the requested information is written.
When hooking a function, the very first thing we must do is define its original prototype, which we can do with the code presented below [1].
[plain]
NTSYSAPI NTSTATUS NTAPI ZwQuerySystemInformation(
ULONG SystemInformationClass,
PVOID SystemInformation,
ULONG SystemInformationLength,
PULONG ReturnLength
);
[/plain]
We also need to store the address of the old ZwQuerySystemInformation function, which is why we need an additional definition. This is needed so we can call the old routine from our hooking routine, so the functionality of the function is not removed, just altered a little bit.
[plain]
typedef NTSTATUS (*ZwQuerySystemInformationPrototype)(
ULONG SystemInformationClass,
PVOID SystemInformation,
ULONG SystemInformationLength,
PULONG ReturnLength
);
ZwQuerySystemInformationPrototype oldZwQuerySystemInformation = NULL;
[/plain]
The oldZwQuerySystemInformation global variable is used as a placeholder for saving the old address from SSDT that we’ve overwritten. That variable is set in the DriverEntry function when calling the HookSSDT function. The actual function call can be seen below, where we can see that the HookSSDT function returns the address of the old pointer.
[plain]
oldZwQuerySystemInformation = (ZwQuerySystemInformationPrototype)HookSSDT((PULONG)ZwQuerySystemInformation, (PULONG)Hook_ZwQuerySystemInformation);
[/plain]
The HookSSDT function can be seen below and accepts two parameters: the first parameter is a pointer to system function that we would like to hook and the second parameter is a pointer to a function which will be hooking the syscall routine. In the function, we’re first reserving some space for local variables, after which we’re calling the DisableWP function.
Then we’re using the KeServiceDescriptorTable.ServiceTable to get a pointer to the SSDT table. This is exactly why we had to assign the SST structure to the KeServiceDescriptorTable – by doing that we can use the dot notation to access the data members of the structure.
The "*((PULONG)(syscall + 0x1));" code line looks quite complicated, but it actually isn't. Basically, it identifies the number of the system call we're trying to hook. It does so in quite a quick and hacky way: it adds one byte to the pointer of the system call we're trying to hook. The way system calls are structured, the first instruction is always of the form "mov eax, 105h", where 105h is the system call number. The number is different for every system call, but it's always stored in the four bytes immediately following the one-byte mov opcode at the address of the system call – therefore, by adding 1 to the system call pointer, we're pointing right at the system call number. To read the number from that address, we must de-reference the pointer by using the *() syntax.
After that, we’re calculating the address to the function pointer in SSDT table and storing it in the target variable. At the end of the HookSSDT we’re calling the InterlockedExchange function to store the pointer to our hooking function into the SSDT table and return the previous value: the pointer to the old hooked function.
[plain]
PULONG HookSSDT(PUCHAR syscall, PUCHAR hookaddr) {
  /* local variables */
  UINT32 index;
  PLONG ssdt;
  PLONG target;
  /* disable WP bit in CR0 to enable writing to SSDT */
  DisableWP();
  DbgPrint("The WP flag in CR0 has been disabled.\r\n");
  /* identify the address of the SSDT table */
  ssdt = (PLONG)KeServiceDescriptorTable.ServiceTable;
  DbgPrint("The system call address is %x.\r\n", syscall);
  DbgPrint("The hook function address is %x.\r\n", hookaddr);
  DbgPrint("The address of the SSDT is: %x.\r\n", ssdt);
  /* identify the 'syscall' index into the SSDT table */
  index = *((PULONG)(syscall + 0x1));
  DbgPrint("The index into the SSDT table is: %d.\r\n", index);
  /* get the address of the service routine in SSDT */
  target = (PLONG)&(ssdt[index]);
  DbgPrint("The address of the SSDT routine to be hooked is: %x.\r\n", target);
  /* hook the service routine in SSDT, returning the old pointer */
  return (PULONG)InterlockedExchange(target, (LONG)hookaddr);
}
[/plain]
There’s still the Hook_ZwQuerySystemInformation function, which will be called instead of the original ZwQuerySystemInformation. The function accepts the same arguments as the original function. The function basically just prints the message, so we know that it’s being called, but we could have executed anything right now. At the end of the function, we still need to call the oldZwQuerySystemInformation, which stores a pointer to the old ZwQuerySystemInformation function.
[cpp]
NTSTATUS Hook_ZwQuerySystemInformation(ULONG SystemInformationClass, PVOID SystemInformation, ULONG SystemInformationLength, PULONG ReturnLength) {
  /* local variables */
  NTSTATUS status;
  /* execute the new instructions */
  DbgPrint("ZwQuerySystemInformation hook called.\r\n");
  /* call the old function */
  status = oldZwQuerySystemInformation(SystemInformationClass, SystemInformation, SystemInformationLength, ReturnLength);
  if(!NT_SUCCESS(status)) {
    DbgPrint("The call to the original ZwQuerySystemInformation did not succeed.\r\n");
  }
  return status;
}
[/cpp]
When unloading the driver from the kernel, we must restore the old function pointer to clean up after ourselves. If we don't restore the original pointer to the old ZwQuerySystemInformation, the system would probably crash while trying to execute a function which no longer exists. Once we unload the driver from the kernel, its code is no longer present in the kernel, but the SSDT still points to it. So if ZwQuerySystemInformation is called, the kernel would try to execute code from a memory address with unknown contents, which means the system would most likely crash completely.
[cpp]
/* restore the hook */
if(oldZwQuerySystemInformation != NULL) {
oldZwQuerySystemInformation = (ZwQuerySystemInformationPrototype)HookSSDT((PULONG)ZwQuerySystemInformation, (PULONG)oldZwQuerySystemInformation);
EnableWP();
DbgPrint("The original SSDT function restored.\r\n");
}
[/cpp]
Once we compile and load the driver into the kernel, the following will be printed into the WinDbg output. Notice how our DbgPrint is invoked every time the ZwQuerySystemInformation function is called? This means that our code is working and we’re able to intercept function calls to ZwQuerySystemInformation function.
Once we unload the driver from the kernel, the following is printed in WinDbg output, which clarifies that the hooked SSDT entry was restored to its original value.
Detecting SSDT Hooks
Here we'll try to describe how we can go about detecting SSDT hooks. We can download the GMER rootkit detector and remover from [9]. From its official web page, we can see that GMER is able to detect and remove rootkits while it scans for malicious activity in the following items:
- hidden processes
- hidden threads
- hidden modules
- hidden services
- hidden files
- hidden disk sectors (MBR)
- hidden Alternate Data Streams
- hidden registry keys
- drivers hooking SSDT
- drivers hooking IDT
- drivers hooking IRP calls
- inline hooks
After we’ve downloaded and installed GMER, we can start it normally. If we’ve already loaded the hookssdt driver into the kernel, the ZwQuerySystemInformation is already hooked at the time of running GMER. If that is the case, GMER will quickly identify that our ZwQuerySystemInformation was hooked, as we can see on the picture below.
In order to scan the system for rootkits, we have to run GMER and click on the Rootkit/Malware tab, then click the Scan button. The picture above also discloses that it is the mydriver.sys, which was used to hook the ZwQuerySystemInformation.
If we would like to detect this manually with our own program, we can do that quite easily. Remember that we don't have to run the thread on every processor in the system, because there is only one SSDT table and all processors share it. Also, we don't have to worry about writing to read-only memory, because we only need to read from it – this greatly simplifies the code. Therefore, the program used to detect whether the SSDT has been hooked can be quite simple: all we need to do is get the address of the SSDT table by reading KeServiceDescriptorTable.ServiceTable and traversing it. We only need to check whether all the pointers point into the ntoskrnl.exe module and not somewhere else. When hooking ZwQuerySystemInformation with the mydriver.sys driver, the pointer points into the mydriver.sys code, which makes it suspect. All the pointers not pointing into the ntoskrnl.exe memory space are considered hooked and can be detected.
Conclusion
In the article we’ve seen how we can hook SSDT function pointers in order to take over the execution when a system call is invoked. We’ve looked at an actual implementation, where we’ve hooked the ZwQuerySystemInformation function. In order to do so, we first had to disable the WP bit in CR0 register, so the kernel was able to access read-only memory. After being able to write to a read-only memory, we’ve overwritten the pointer to the ZwQuerySystemInformation function call in the SSDT table with a pointer to our own routine. This enables us to take control of execution every time the ZwQuerySystemInformation function is called.
Later in the article we also saw how we can detect the SSDT pointer being hooked. We used the GMER rootkit detector to detect our mydriver.sys driver. It’s quite easy to detect that the SSDT entry has been hooked, because the pointer doesn’t point to the ntoskrnl.exe module’s memory space.
We’ve covered quite some ground in this article, which is important when we would like to hook SSDT pointers. Remember that SSDT hooks can be easily detected and removed, so they are not extensively used anymore, but can still be useful when analyzing the code of some user-mode application. If we have to analyze a program, which is protected with various anti-debugging tricks, we can try to hook the SSDT table to get various information about which system calls it’s using. This can provide a lot of information in figuring out what the program actually does and will certainly help in its analysis.
There are a lot of cases where this information can be quite valuable; we just have to identify when to use it. All the code samples are also contained in my GitHub account, so they can easily be downloaded and modified for specific needs that may arise.
References
[1] Writing drivers to perform kernel-level SSDT hooking, Andrew Thomas.
[2] Rootkit Analysis: Hiding SSDT hooks.
[3] New reverse engineering technique using API hooking and sysenter hooking, and capturing of cash card access, NetAgent Co., Ltd, Kenji Aiko.
[4] Entering the kernel without a driver and getting interrupt information from APIC.
[5] Understanding !PTE, Part 2: Flags and Large Pages.
[6] Using MDLs.
[7] InterlockedExchange function.
[8] ZwQuerySystemInformation function.
[9] GMER.
in reply to
Re^7: memory leaks with threads
in thread memory leaks with threads
I'm running Linux, 2.6.18, 32bit on a turion x2 tl-60.
I just ran a similar program written in C.
It doesn't grow (stays exactly at a res memory of 944B), even after 15 million threads created and destroyed.
So I can at least say it's not related to my pthreads library.
I'll however wait a bit before submitting a bug report, maybe someone else knows a solution.
#include <iostream>
#include <cstdlib>
#include <pthread.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/time.h>
using namespace std;

static int threadscount = 0;
pthread_mutex_t threadscount_mutex = PTHREAD_MUTEX_INITIALIZER;

void* thread( void* data ){
    pthread_mutex_lock( &threadscount_mutex );
    threadscount--;
    //cout << threadscount << endl;
    pthread_mutex_unlock( &threadscount_mutex );
    pthread_exit(0);
}

int main(int argc, char *argv[]){
    int count = 0;
    struct timeval timeout;
    while (1){
        for ( int a = 1; a <= 10; a++ ){
            pthread_mutex_lock( &threadscount_mutex );
            threadscount++;
            pthread_mutex_unlock( &threadscount_mutex );
            pthread_t p_thread; /* thread's structure */
            pthread_create(&p_thread, 0, thread, (void*)&a);
            pthread_detach( p_thread );
            count++;
        }
        cout << "count: " << count << endl;
        int c;
        do {
            timeout.tv_usec = 100;
            timeout.tv_sec = 0;
            select( 0, 0, 0, 0, &timeout );
            pthread_mutex_lock( &threadscount_mutex );
            c = threadscount;
            pthread_mutex_unlock( &threadscount_mutex );
        } while ( c > 0 );
    }
}
It's no great surprise that it's not the system APIs that are at fault, but the perl source code on top of them.
I'll however wait a bit before submitting a bug report, maybe someone else knows a solution.
Okay, but if the last version of code I posted leaks that badly on 5.9.5, it's most unlikely that there is anything that can be done at Perl level (ie. in your code) to 'fix' the problem. You're going to have to wait at least until dave_the_m and co. at p5p track down and fix whatever is wrong. That's why I suggested the perlbug to ensure that a) it comes to their attention; b) they get all the relevant information they need.
Looking at the problem from a different angle. As you saw by comparing the speed and memory usage of pthreads at the C level, and ithreads at the perl level, ithreads are very heavy by comparison. That's easily explained by the benefits they give you--automatic memory management, explicitly shared, rather than implicitly shared variable spaces, and everything else that is Perl over C. And the work that is involved in achieving those--basically, starting a new interpreter and cloning the current environment each time you start a thread.
What all that is building up to, is that spawning threads for small amounts of work and then throwing them away--whilst very effective in C--is not the best way of tackling a threading problem using ithreads and perl. A better way is to use a pool of threads and re-use them.
micha@laptop ~/prog/perl/test $ time ./threads_benchmark.pl
real 0m36.666s
user 1m11.872s
sys 0m0.172s
(resident memory 6.3MB)
micha@laptop ~/prog/perl/test $ time ./threadpool.pl
real 0m33.001s
user 1m4.728s
sys 0m0.024s
(max res memory indefinite rising, 10MB)
micha@laptop ~/prog/perl/test $ time ./thread_pool_benchmark.pl
real 0m36.105s
user 1m9.780s
sys 0m0.048s
15M res memory
Is there a proposed way to publish such code, perhaps in Snippets?
Personally, I think adding your code (all 3 versions) to your post above in separate <readmore><code>...</code></readmore> blocks would be good. It'd put the benchmark figures into context, and make the code available for searching for those | http://www.perlmonks.org/index.pl?node_id=625705 | CC-MAIN-2015-06 | refinedweb | 598 | 67.25 |
change backlight brightness in linux with python
I'm working on a Python project which should be able to control my backlight brightness. I'm working with Ubuntu 17.04 and I've already located the file that shows my backlight brightness:
/sys/class/backlight/acpi_video0/brightness
The command that I can use in the bash terminal to change the value is
sudo su -c 'echo 12 > /sys/class/backlight/acpi_video0/brightness'
but I have no clue how to implement this in a Python project. Maybe this is also the wrong way to start.
Thank you Guys for probably helping me out.
In Ubuntu, I achieved this using the xbacklight package and Python's os.system(), imported from the os module.
Installation:
sudo apt install xbacklight
Python command:
os.system('xbacklight -set ' + str(value)) where value is the input.
You can use either os.system() or subprocess.Popen().
Not really recommended, but I see no harm in it for a personal project where the input isn't coming from an external source. That being said, one should take care using this, because you will be executing straight from your command line, so anything your CLI can do, this can do. You have been warned.
Using os.system():
import os
os.system('echo "your command goes here"')
If that doesn't work, pass the command through your shell explicitly (usually /bin/bash on Linux):
os.system('/bin/bash -c \'echo "your command goes here"\'')
Using subprocess.Popen() (note that a plain command string needs shell=True; prefer a list of arguments when possible):
import subprocess
subprocess.Popen('echo "your command goes here"', shell=True)
Once again, I will say, this is NOT recommended for frequent usage, especially where outside sources may affect the output of the command being run. Only use this when you KNOW what will be input into the command.
So I did a bit of research, and on this site I found the command
gdbus call --session --dest org.gnome.SettingsDaemon.Power --object-path /org/gnome/SettingsDaemon/Power --method org.freedesktop.DBus.Properties.Set org.gnome.SettingsDaemon.Power.Screen Brightness "<int32 50>"
I have no idea how this works, but it changed my backlight.
It only works on GNOME!! But because I use GNOME and the application should be for GNOME, it is OK for me.
My function now looks like this:
def change_brightness(self, value):
    os.system('gdbus call --session --dest org.gnome.SettingsDaemon.Power '
              '--object-path /org/gnome/SettingsDaemon/Power '
              '--method org.freedesktop.DBus.Properties.Set '
              'org.gnome.SettingsDaemon.Power.Screen Brightness "<int32 ' + str(value) + '>"')
The value must be between 0 and 100.
Try this:
def set_brightness(brightness):
    if int(brightness) > 15:
        raise TypeError("Need int 0 < and > 15")
    elif int(brightness) < 0:
        raise TypeError("Need int 0 < and > 15")
    with open("/sys/devices/pci0000:00/0000:00:02.0/backlight/acpi_video0/brightness", "w") as bright:
        bright.write(str(brightness))

set_brightness(0)  # Brightness 0-15
- Task 1: fix the permission of the files so you are able to change the brightness without sudo.
- Yes, I have just figured that out, but I also have no idea how to do this. I have found something useful here: raspberrypi.org/forums/viewtopic.php?t=134390&p=894761. Let me search around a bit, then I can post my code.
- This is sysfs not a real filesystem. You may not change its permissions. In Python just open the file, write to it and close it. | http://thetopsites.net/article/53783639.shtml | CC-MAIN-2020-50 | refinedweb | 1,154 | 55.84 |
Games, .NET, Performance, and More!
The program I used to replicate this behavior is shown below. I had to put the WriteLine's in, because the /optimize+ versions were omitting the n = i statement, since n was never used. While I found that portion of the code extremely smart, I found that the variable re-usage of all forms of compilation was extremely lacking. For example, the IL output from each function is as follows: ShowBlockScopedVariablesInILAsMethodScoped, this method demonstrates that loop scoped variables within a C# application actually manifest at the method level rather than somehow manifesting at the loop scope level. This makes perfect sense, so that upon entry into the method, the stack is set to the appropriate size as needed by the running code. 2 stack variables are allocated in this method, as expected.
ShowMultipleBlockScopedVariablesInILAsMultipleMethodScopedVariables, this method demonstrates that loop scoped variables within a C# application don't get re-used at all. Normally, you would expect any stack variable used in a loop, to become available for use by the program in another loop once the current loop has been exited. However, this isn't the case since there are 6 stack variables allocated 2 for each loop with one variable mapping to i in each loop and the other being n.
ShowMethodScopedVariablesReUsed, this method demonstrates that you can reduce your stack footprint by not using loop scoped variables and simply declaring method scoped variables and re-using them manually. This is somewhat counter-intuitive to me, since the language could optimize the loop scoped variables and re-use them, but doesn't.
Long story short, try not to use locally scoped variables for loop constructs unless you only have one loop in your code. Try to re-use variables where appropriate. This is the lesson I've learned. Maybe I'm trying to be too protective of the generated IL and the JIT is doing something behind the scenes. I notice that the Partition II document shipped with the Framework SDK demonstrates that ILAsm allows locally scoped variables within nested blocks to share the same location as a method scoped variable. You can read about this in 14.4.1.3. I even tried to create some IL that did this and compile it out:
.locals init (
    int32 V_0,
    int32 V_1,
    [0] int32 V_2,
    [1] int32 V_3,
    [0] int32 V_4,
    [1] int32 V_5)

The above compiled just fine under ILAsm. However, it did wind up producing IL that simply removed the remaining 4 variables and made sure the references pointed to V_0 through V_1. However, the really funny thing is that ldloc.{digit} and stloc.{digit} are used everywhere they would have normally been used, but the final two references that mapped V_4/V_5 to V_0/V_1 were listed as stloc.s/ldloc.s. I'm not sure there is any speed increase over any of the different versions of stloc/ldloc, but if there was, even the ILAsm compiler is lacking in performance optimizations.
Well, I'm done with this topic for the evening. I probably won't change very much about how I code based on any of this information. It may force me to use method level variables in places where I have lots of loop structures, but probably not.
using System;

public class LocalityOfVariables
{
    private static void Main(string[] args) { }

    private static void ShowBlockScopedVariablesInILAsMethodScoped()
    {
        for (int i = 0; i < 10; ++i) { int n = i; Console.WriteLine(n); }
    }

    private static void ShowMultipleBlockScopedVariablesInILAsMultipleMethodScopedVariables()
    {
        for (int i = 0; i < 10; ++i) { int n = i; Console.WriteLine(n); }
        for (int i = 0; i < 10; ++i) { int n = i; Console.WriteLine(n); }
        for (int i = 0; i < 10; ++i) { int n = i; Console.WriteLine(n); }
    }

    private static void ShowMethodScopedVariablesReUsed()
    {
        int i, n;
        for (i = 0; i < 10; ++i) { n = i; Console.WriteLine(n); }
        for (i = 0; i < 10; ++i) { n = i; Console.WriteLine(n); }
        for (i = 0; i < 10; ++i) { n = i; Console.WriteLine(n); }
    }
}
An exception in Java is a signal that indicates the occurrence of some important or unexpected condition during execution. For example, a requested file cannot be found, or an array index is out of bounds, or a network link failed. Explicit checks in the code for such conditions can easily result in incomprehensible code. Java provides an exception handling mechanism for systematically dealing with such error conditions.
The exception mechanism is built around the throw-and-catch paradigm. To throw an exception is to signal that an unexpected error condition has occurred. To catch an exception is to take appropriate action to deal with the exception. An exception is caught by an exception handler, and the exception need not be caught in the same context that it was thrown in. The runtime behavior of the program determines which exceptions are thrown and how they are caught. The throw-and-catch principle is embedded in the try-catch-finally construct.
Several threads can be executing in the JVM (see Chapter 9). Each thread has its own runtime stack (also called the call stack or the invocation stack) that is used to handle execution of methods. Each element on the stack (called an activation record or a stack frame) corresponds to a method call. Each new call results in a new activation record being pushed on the stack, which stores all the pertinent information such as storage for the local variables. The method with the activation record on top of the stack is the one currently executing. When this method finishes executing, its record is popped from the stack. Execution then continues in the method corresponding to the activation record which is now uncovered on top of the stack. The methods on the stack are said to be active, as their execution has not completed. At any given time, the active methods on a runtime stack comprise what is called the stack trace of a thread's execution.
Example 5.9 is a simple program to illustrate method execution. It calculates the average for a list of integers, given the sum of all the integers and the number of integers. It uses three methods:
The method main() which calls the method printAverage() with parameters giving the total sum of the integers and the total number of integers, (1).
The method printAverage() in its turn calls the method computeAverage(), (3).
The method computeAverage() uses integer division to calculate the average and returns the result, (7).
public class Average1 {
    public static void main(String[] args) {
        printAverage(100, 20);                               // (1)
        System.out.println("Exit main().");                  // (2)
    }

    public static void printAverage(int totalSum, int totalNumber) {
        int average = computeAverage(totalSum, totalNumber); // (3)
        System.out.println("Average = " +                    // (4)
            totalSum + " / " + totalNumber + " = " + average);
        System.out.println("Exit printAverage().");          // (5)
    }

    public static int computeAverage(int sum, int number) {
        System.out.println("Computing average.");            // (6)
        return sum/number;                                   // (7)
    }
}
Output of program execution:
Computing average. Average = 100 / 20 = 5 Exit printAverage(). Exit main().
Execution of Example 5.9 is illustrated in Figure 5.6. Each method execution is shown as a box with the local variables. The box height indicates how long a method is active. Before the call to the method System.out.println() at (6) in Figure 5.6, the stack trace comprises of the three active methods: main(), printAverage() and computeAverage(). The result 5 from the method computeAverage() is returned at (7) in Figure 5.6. The output from the program is in correspondence with the sequence of method calls in Figure 5.6.
If the method call at (1) in Example 5.9
printAverage(100, 20); // (1)
is replaced with
printAverage(100, 0); // (1)
and the program is run again, the output is as follows:
Computing average.
Exception in thread "main" java.lang.ArithmeticException: / by zero
        at Average1.computeAverage(Average1.java:18)
        at Average1.printAverage(Average1.java:10)
        at Average1.main(Average1.java:5)
Figure 5.7 illustrates the program execution. All goes well until the return statement at (7) in the method computeAverage() is executed. An error condition occurs in calculating the expression sum/number, because integer division by 0 is an illegal operation. This error condition is signalled by the JVM by throwing an ArithmeticException (see "Exception Types" on page 185). This exception is propagated by the JVM through the runtime stack as explained on the next page.
Figure 5.7 illustrates the case where an exception is thrown and the program does not take any explicit action to deal with the exception. In Figure 5.7, execution of the computeAverage() method is stopped at the point where the exception is thrown. The execution of the return statement at (7) never gets completed. Since this method does not have any code to deal with the exception, its execution is likewise terminated abruptly and its activation record popped. We say that the method completes abruptly. The exception is then offered to the method whose activation is now on top of the stack (method printAverage()). This method does not have any code to deal with the exception either, so its execution completes abruptly. Lines (4) and (5) in the method printAverage() never get executed. The exception now propagates to the last active method (method main()). This does not deal with the exception either. The main() method also completes abruptly. Line (2) in the main() method never gets executed. Since the exception is not caught by any of the active methods, it is dealt with by the main thread's default exception handler. The default exception handler usually prints the name of the exception, with an explanatory message, followed by a printout of the stack trace at the time the exception was thrown. An uncaught exception results in the death of the thread in which the exception occurred.
If an exception is thrown during the evaluation of the left-hand operand of a binary expression, then the right operand is not evaluated. Similarly if an exception is thrown during the evaluation of a list of expressions (for example, a list of actual parameters in a method call), then evaluation of the rest of the list is skipped.
If the line numbers in the stack trace are not printed in the output as shown previously, it is advisable to turn off the JIT (Just-in-Time) compilation feature of the JVM in the Java 2 SDK:
>java -Djava.compiler=NONE Average1 | http://etutorials.org/cert/java+certification/Chapter+5.+Control+Flow+Exception+Handling+and+Assertions/5.5+Stack-based+Execution+and+Exception+Propagation/ | CC-MAIN-2017-22 | refinedweb | 1,061 | 56.96 |
Serverless programming has been a buzzword in technology for a while now, first implemented for arbitrary code by Amazon on Amazon Web Services (AWS) in 2014, and first spoken about two years before that. The term normally refers to snippets of backend code running in environments that are wholly managed by the cloud provider, totally invisible to developers. This approach has some astounding benefits, enabling an entirely new paradigm of computing architecture. Now, let's understand the benefits and drawbacks of serverless computing.
Following are the benefits of serverless computing:
- Speed of development
- Zero management (near)
- Cost/cost flexibility
- Auto-scaling
Following are the drawbacks of serverless computing:
- Warmup latency
- Vendor lock-in
- Lack of control for specific use cases
Serverless code can be scaled to handle as much demand as your cloud provider's data center can handleâit is essentially infinite for all but the most demanding applications. However, the real key is elastic, unmanaged scaling. Rather than having to manually set the scale at which your code is running (for example, by spinning up extra virtual machines), serverless code will react to the demand, and scale appropriately. This means that you are charged according to computing resource usage, rather than paying in advance for a scale that you might need for an expected spike of users. It also means that serverless code needs no active management whatsoeverâonly monitoring. This has some profound impacts, and leads to an architecture that tends toward the microservices approach.
We will introduce this new architecture from the bottom up, starting by creating serverless code, and then building a serverless application. Our first objective will be to create a simple RESTful API with serverless code, before venturing into more interesting and unusual architectures that are unique to serverless code. This book will focus on Microsoft's serverless product Azure Functions.
By the end of this chapter, you will be able to:
- Identify the benefits and drawbacks of serverless computing
- Create an Azure Function
- Debug an Azure Function locally
- Deploy an Azure Function
- Explain the Azure Functions runtime
While serverless computing has a compelling set of benefits, it is no silver bullet. To architect effective solutions, you need to be aware of its strengths and weaknesses, which were listed at the start of this chapter.
We will discuss each of its main benefits in detail here.
Firstly, a major strength of serverless computing is its speed of development. The developer doesn't have to concern oneself with, or write any of the code for, the underlying architecture. The process of listening for HTTP messages (or other data inputs/events) and routing is done entirely by the cloud provider, with only some configuration provided by the developer. This allows a developer to simply write code that implements business logic, and deploy it straight to Azure. Each function can then be tested independently.
Secondly, serverless computing has automatic scaling by default. The routing system in a serverless service like Azure Functions will detect how many requests are coming in and deploy the serverless code to more servers when necessary. It will also reduce the number of servers all the way down to zero, if necessary. This is particularly useful if you are running an advertisement or a publicity stunt of some kindâhalf-time advertisements in the Premier League final, for example. Your backend can instantly scale to handle the massive influx of new users for this brief period of time, before returning to normal very quickly. It's also useful for separating out what would usually be a large, monolithic backend. Some of the parts of the backend will probably be used a lot more than others, but usually, you have to scale the whole thing manually to keep the most-utilized functions responsive. If each function is separated, automatically scaling it allows the developer to identify and optimize the most-used functions.
Continuing on the subject of being able to scale down to zero servers, the cost of serverless code is very flexible. Due to auto-scaling, providers are able to charge according to resource usage. This means that you only pay according to usage, which has benefits for all developers, but for small businesses most of all. If your business has lots of customers one month, you can scale accordingly and pay your higher bill with the higher revenue you have made. If you have a quieter month, your serverless computing bill will be proportionally lower for that lower revenue period.
Finally, serverless code requires very little active management. An application running on a shared server will usually require a significant amount of monitoring and management, for both the server itself and its resource allocation at any given moment. Containerized code, or code running on virtual machines, is better, but still requires either container management software or a person to monitor the usage and then scale down or scale up appropriately, even if the server itself no longer requires active management. Serverless code has no server visible to the developer and will scale according to demand, meaning that it requires monitoring only for exceptions or security issues, with no real active management to keep a normal service running.
Serverless code does have some weaknesses which are described here.
The first weakness is warmup latency. Although the traffic managers of various platforms (AWS, Azure, Google Cloud, and so on) are very good at responding to demand when there are already instances of serverless code running, it's a very different problem when there are no instances running at all. The traffic managers need to detect the message, allocate a server, and deploy the code to it before running it. This is necessarily slower than having a constantly running container or server. One way to combat this is to keep your code small and simple, as the biggest slowdown in this process can be transferring large code files.
Secondly, there is the issue of vendor lock-in. Each cloud provider implements its serverless service differently, so serverless code written for Azure is difficult to port over to AWS. If prices spike heavily in the future, then you will be locked in to that provider for your serverless architecture.
There's also the issue of languages. JavaScript is the only language that is universally available, with patchy service across providers for other languages, like C# and Java. There is a solution to this, however; it is called the serverless framework. This is a framework that you can use to write simple HTTP-triggered functions, which can then be deployed to all of the major cloud providers. Unfortunately, this means that you will miss out on a lot of the best features of Azure Functions, because their real power comes from deep integration with other Azure services.
Finally, there is the issue of a lack of low-level control. If you are writing a low latency trading platform, then you may be used to accessing networking ports directly, manually allocating memory, and executing some commands using processor-specific code. Your own application might require similar low-level access, and this isn't possible in serverless computing. One thing to bear in mind, however, is that it's possible to have part of the application running on a server that you have low-level access to, and background parts of it running in a serverless function.
If an Azure Function isn't executed for a while, the function stops being deployed and the server gets reallocated to other work. When the next request comes in the function needs to deploy the code, warm up the server and execute the code, so it's slower. Inevitably, this leads to latency when the functions are triggered again, making serverless computing somewhat unsuitable for use cases that demand continuous low-latency. Also, by its very nature, serverless computing prevents you from accessing low level commands and the performance benefits that they can give. It's important to emphasize that this doesn't mean serverless computing is unbearably slow; it just means that applications that demand the utmost performance are unlikely to be suitable for serverless computing.
Overall, there is a clear benefit to using serverless computing, and particularly, Azure Serverless, especially if you use some of the tips detailed in the weaknesses section. The benefits are strong for both the developer and the business.
The serverless framework can help with vendor lock-in by making your serverless functions cross-cloud.
In this section, we will create an Azure Function in Visual Studio, debug it locally, and deploy it to an Azure cloud instance. While doing this, we will cover material about the basics of serverless runtime and the high-level benefits and disadvantages of serverless computing.
The core of all serverless products is to get straight into development with minimal setup time, so that's what we'll do in this subtopic. You should have a serverless function in the cloud at the end of this topic, and once you've learned the ropes, you could create and deploy one in a few minutes.
Note
To develop Azure Functions for production, you need a computer running Windows and Visual Studio 2015 or later; however, the smoothest experience is present on Visual Studio 2017, version 15.4 or later. If your computer can run Visual Studio, it can handle Azure Function development.
An Azure Function can be written in several languages. At the time of writing, there are three languages with full support: C#, JavaScript, and F#. Generally, the most popular languages are C# and JavaScript. Java is a language in preview, but, being in preview, it is not ready for production yet, so it will not be covered.
Note
There are also lots of other languages available experimentally for Azure Function runtime version 1 (Python, PowerShell, and so on), but it is not advised to use these for your business architecture, and they will generally fall behind the curve for new features and support. There is also a version 2 of the runtime, but this is only in preview at the time of writing, and is therefore not ready for production. It's interesting to note that Azure Data Lake and Azure Data Lake Analytics could be considered serverless programming, too. These are designed for processing very large datasets using a new language, U-SQL. You can read about them at.
Visual Studio 2017 has a comprehensive suite of Azure tools, including Azure Function development. While it's possible to use a combination of any other IDE and the command-line interface to develop Azure Functions, the best tooling will always arrive on Visual Studio first.
Before you begin, confirm that you have Visual Studio 2017 version 15.4 installed; if not, download and install it. To do so, perform the following steps:

- Open the Visual Studio Installer, which will show you the version of Visual Studio that you have installed and allow you to select the Azure workload and install it if it is missing; then update Visual Studio to the latest version, if required:
Note
Refer to the complete code placed at
Code/Serverless-Architectures-with-Azure/Lesson1/BeginningAzureServerlessArchitecture/BeginningAzureServerlessArchitecture.csproj.
Go to the book's code repository to access the code.
Now, we'll create a new Azure Function as a part of our serverless architecture that listens to HTTP requests to a certain address. It will listen to HTTP requests to a certain address as its trigger. Let's begin by implementing the following steps:
- Create a new solution. The example is called BeginningAzureServerlessArchitecture, which is a logical wrapper for several functions that will get deployed to that namespace.
- Use the Visual C# | Cloud | Azure Functions template. Select the Empty trigger type and leave the default options, but set Storage to None. This will create a Function App, which is a logical wrapper for several functions that will get deployed and scaled together:
- You now have a solution with two files in it: host.json and local.settings.json. The local.settings.json file is used solely for local development, where it stores all of the details on connections to other Azure services.
Note
When uploading something to a public repository, be very careful not to commit unencrypted connection settings; by default, they will be unencrypted.
host.json is the only file required to configure any functions running as part of your Function App. This file can have settings that control the function timeout, security settings for every function, and a lot more.
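As an illustration, host.json is plain JSON; a minimal file that overrides just the function timeout might look like this (functionTimeout is a real setting, but the value chosen here is arbitrary):

```json
{
  "functionTimeout": "00:05:00"
}
```

Any setting not present in the file simply keeps the runtime's default, which is why an empty {} is a perfectly valid host.json for a new project.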
- Now, right-click on the project and select Add New Item. Once again, choose the Azure Function template:
- On the next screen, select Http trigger with parameters, and Access rights should be set to Anonymous. Right-click on your solution and select Enable NuGet Package Restore:
Note
An important thing to remember is that this template is different from the first one you used, because it is inside the solution. Call it PostTransactions, or something similar.
Note
Refer to the code for this example placed at
Code/Serverless-Architectures-with-Azure/Lesson 1/BeginningAzureServerlessArchitecture/PostTransactionsExA.cs. Go to the book's code repository to access the code.
Configuration as code is an important modern development practice. Rather than having servers reconfigured or configured manually by developers before code is deployed to them, configuration as code dictates that all of the configuration required to deploy an application to production be included in the source code.
This allows for variable replacement by your build/release agent, as you will (understandably) want slightly different settings, depending on your environment.
Azure Functions implement this principle: the attributes on your function are turned into configuration at build time. The first argument of the HttpTrigger attribute is the AuthorizationLevel, which controls who is allowed to send HTTP requests to the function. There are five levels: Anonymous, User, Function, System, and Admin.
Note

Key-based security of this kind is not ideal for enterprise apps. An improved approach is discussed in the next chapter: using Azure Active Directory.
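In a compiled C# Function App, the attributes on the Run method are turned into a generated function.json at build time. As an illustration of where the access level ends up, here is a sketch of such a binding; the field names follow the standard httpTrigger schema, but this exact file is hypothetical:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "post" ],
      "route": "transactions"
    }
  ]
}
```

With authLevel set to function, callers must supply a function key with each request, whereas the anonymous level used in this chapter accepts any request.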
The next parameters are the HTTP verbs that the function will respond to, such as GET, PUT, and POST. The final parameter is the Route; in the template it ends in /{name}. This will assign any text that appears where the curly braces are to a parameter called name.
This completes the HttpTrigger object. Other types of triggers, which we will touch on later, have different objects and different constructors. The body of the function can be seen in the following example, and you should see that it will take whatever name is put into it and send back an HTTP response saying Hello to that name. We will test this out in the next subtopic.
You now have a working Azure Function that can be deployed to Azure or run locally. We will first host and debug the function locally, to show the development cycle in action. At the moment, its logic is very basic and not particularly useful, so we will be developing this to store some data in a cloud-hosted database over the rest of the chapter.
In this section, we'll run an Azure Function locally and debug it, so that we can develop new functions and test their functionality before deploying to the public cloud. To follow along, you'll need the single function created from the HTTP trigger with parameters template.
Currently, your machine does not have the correct runtime to run an Azure Function, so we need to download it:
- Click the Play button in Visual Studio, and a dialog box should ask you if you want to download Azure Functions Core Tools; click on Yes. A Windows CMD window will open, with the lightning bolt logo of Azure Functions:
Note
It will bootstrap the environment and attach the debugger from Visual Studio. It will then list the endpoints the Function App is listening on.
- Open up the Postman app and copy and paste the endpoint into it, selecting either a POST or GET verb.
Note
You should get the response Hello {name}. Try changing the {name} in the path to your name, and you will see a different response.
You can download Postman from its website.
- Create a debug point in the Run method by clicking in the margin to the left of the code:
Note
Refer to the code for this example placed at
Code/Serverless-Architectures-with-Azure/Lesson 1/BeginningAzureServerlessArchitecture/PostTransactionsExA.cs.
Go to the book's code repository to access the code.
- Use Postman to send the request:
- You are now able to use standard Visual Studio debugging features and inspect the different objects as shown in the following screenshot:
- Set your verb to POST, and add a message in the payload. See if you can find the verb in the HttpRequestMessage object in debug mode. It should be in the Method property.
Note
If you need to download azure-functions-core-tools separately, you can use npm: npm install -g azure-functions-core-tools for version 1 (fully supported) and npm install -g azure-functions-core-tools@core for version 2 (beta). We will go into the differences in versions later in this chapter. You can then use the debug setup to set Visual Studio to call an external program with the command func host start when you click on the Debug button.

As you have seen, the Azure Functions container handles absolutely everything, leaving your code to simply do the business logic.
In this activity, we will add a JSON payload to the request and write code to parse that message into a C# object.
Prerequisites
You will require a function created from the HTTP trigger with the parameters template.
Scenario
You are creating the start of a personal finance application that allows users to add their own transactions, integrate with other applications and perhaps allow their credit card to directly log transactions. It will be able to scale elastically to any number of users, saving us money when we don't have any users.
Aim
Parse a JSON payload into a C# object, starting your RESTful API.
Steps for Completion
- Change the Route to transactions.
- Remove the get verb. Remove the string parameter called name:
- Add the Newtonsoft.Json package, if it isn't already present. You can do this by right-clicking the Solution | Manage NuGet Packages | Newtonsoft.Json.
- Right-click on the project and add a folder called Models, then add a C# class called Transaction. Add two properties to this class: a DateTime property called ExecutionTime, and a decimal property called Amount:
Note
Refer to the complete code placed at
Code/Serverless-Architectures-with-Azure/Lesson 1/BeginningAzureServerlessArchitecture/Models/Transaction.cs.
Go to the book's code repository to access the code.
- Use JsonConvert.DeserializeObject<Transaction>(req.Content.ReadAsStringAsync().Result) to deserialize the body of the HttpRequestMessage into an instantiation of this class. To do that, you need to import the Models namespace and Newtonsoft.Json. This will parse the JSON payload and use the Amount property to fill the corresponding property on the Transaction object:
Note
Refer to the complete code placed at
Code/Serverless-Architectures-with-Azure/Lesson 1/BeginningAzureServerlessArchitecture/PostTransactionsExC.cs.
Go to the book's code repository to access the code.
- Change the return message to use a property of the new Transaction object, for example, You entered a transaction of £47.32!
- Go to Postman, open the Body tab, and select raw.
- Enter in the following JSON object:
{ Amount: 47.32, ExecutionTime: "2018-01-01T09:00:00Z" }
- Run locally to test. Make sure you change the endpoint to /transactions in Postman.
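The parsing step above can be sketched outside of C# as well. Assuming a strictly valid JSON payload (quoted keys, unlike the relaxed form that Newtonsoft.Json accepts), a JavaScript version of the deserialize-and-respond logic would be:

```javascript
// Hypothetical sketch of the activity's parsing step in JavaScript.
function parseTransaction(json) {
  const data = JSON.parse(json);
  return {
    amount: data.Amount,                        // maps to Transaction.Amount
    executionTime: new Date(data.ExecutionTime) // maps to Transaction.ExecutionTime
  };
}

const t = parseTransaction(
  '{"Amount": 47.32, "ExecutionTime": "2018-01-01T09:00:00Z"}'
);
console.log('You entered a transaction of £' + t.amount.toFixed(2) + '!');
// → You entered a transaction of £47.32!
```

Keeping the parsing in a small pure function like this makes the business logic easy to test without sending any HTTP requests.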
Outcome
You have so far used azure-functions-core-tools only through command windows started in Visual Studio. If you want to use it independently, then you have to download it using npm: npm install -g azure-functions-core-tools for version 1 (fully supported) and npm install -g azure-functions-core-tools@core for version 2 (beta). We will go into the differences in versions later in this chapter.
Note
Developing in Visual Studio is much easier for Azure Functions, especially if it is your usual IDE, but it's perfectly possible to use a simple text editor and the CLI to develop. If you are developing for version 2 on a Mac or Linux machine, this setup is basically the only way to develop at the moment, although Visual Studio Code has a good Azure Functions extension. We will focus on using Visual Studio, but we will mention some of the equivalent methods in the CLI.
Note

You will need an Azure login with a valid Azure subscription.
In this section, we'll deploy our first function to the public cloud, and learn how to call it. We'll be going live with our Azure Function, starting to create our serverless architecture. And to ensure that it happens correctly, we'll need a function project and a valid Azure subscription. Let's begin by implementing the following steps:
- Right-click on your project and select Publish.... Now select Azure Function App | Create New, as shown in the following screenshot:
Note
If you've signed in to Visual Studio with a Microsoft ID that has an Azure subscription associated with it, then the next screen will be pre-populated with a subscription. If not, then you need to sign in to an account with an associated Azure subscription.
- Enter a memorable name and create a resource group and a consumption app service plan to match the following:
- Click on the Publish button to publish your function.
- Open a browser, navigate to the Azure portal, and find your function. You can use the search bar and search the name of your function. Click on your Function App, then click on the function name. Click the Get function URL link in the upper-right corner, paste the address of your function into Postman, and test it. Please bear in mind that if you have a paid subscription, these executions will cost a small amount of money; you are only charged for the compute resources that you actually use. On a free account, you get a million executions for free.
Outcome
You now have a fully deployed and working Azure Function in the cloud.
Note
Anyone can execute your function if they can work out the address, which is why the security we discussed previously is a good idea to implement. We will further discuss the optimal way to employ security for a large enterprise using Azure Active Directory in the next chapter.
The old way of deploying Azure Functions allowed you to edit the C# code in the portal, but all of those sections will come up as read-only. For proper development, you should rarely even interact with the portal; everything is possible in Visual Studio with the Cloud Explorer.
So, we successfully created an Azure Function, ran it on a local environment, and deployed it to the cloud.
Note
Azure Functions run on servers that have Azure WebJobs installed on them. Azure WebJobs allows DLLs, or any supported code, to be hot-swapped in and to take advantage of deep Azure integrations.
We'll cover the two runtimes in more depth now, but the key takeaway is that there is a fully-supported, Windows-based version, and also a beta cross-platform version. Microsoft are pushing towards the cross-platform version.
Version 1 is based on Azure WebJobs, which is written in C# and .NET. Azure WebJobs is essentially a service with a set of adapters for Azure products. These adapters usually go a lot deeper than a standard API. When code is deployed into it, Azure WebJobs reads host.json to create an environment with the same context every time. When you downloaded azure-functions-core-tools, it contained a full production Azure Functions environment, exactly the same as the ones used in Azure. That is why we can only develop on Windows machines for version 1.
Version 2 is based on the new Azure WebJobs, written in C# and .NET Core. It can run on any server that supports .NET Core, which includes all major distributions of Linux, as well as Windows.
Note
.NET Core is significantly faster than .NET and can run on any platform, allowing Microsoft to allocate more types of servers to Azure Functions work. Rewriting Azure WebJobs in .NET Core will remove support for the plethora of semi-supported, experimental languages, like Python and PowerShell, but will add Java support.
The containers that Azure Functions run in are inherently short-lived, with a maximum execution time of 10 minutes (this was set in host.json, as seen earlier), but it is generally advisable for HTTP requests to execute within two minutes. This short timeframe is part of what allows for the flexibility of Azure Functions, so if you need longer execution times but still want most of the convenience of serverless computing, then look at using the Azure WebJobs SDK, running on a VM or physical machine that you control, or Azure Data Lake Analytics, if you are processing large datasets.
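For reference, the execution timeout lives in host.json. A minimal fragment might look like the following sketch (the value shown assumes the 10-minute maximum; it is not a complete host.json file):

```json
{
  "functionTimeout": "00:10:00"
}
```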
The resources that Azure allocates to functions are set at the Function App level and split equally among the functions inside that app. Most of the time, this will have absolutely no practical impact on your work, and generally, you should keep logically connected functions in the same app. However, in cases where you suddenly demand massive scaling for one of your functions but not for the others, this can cause issues. If your suddenly popular function is one of twenty in an app, it's going to take longer to allocate enough resources to supply it. On the other hand, if you have a seldom-used function that you need better latency from, consider adding it to the Function App of a commonly-used one, so that the function doesn't have to warm up from cold.
Prerequisites
You will need Visual Studio installed, with Azure development workload.
Scenario
Creating a personalized application for users.
Aim
Create a function that stores user details.
Steps for Completion
- Create a new function, called PostUser, using the HttpTrigger template. If you are competent in another language, like JavaScript or F#, you can try creating it in that language.
- Create a new model called User. Think about what properties you'd like to include, but as a minimum, you will need an email address and a unique identifier (these can be the same property).
- Modify the function to decode a JSON-formatted request into this model.
Outcome
You have a function ready to store user details.
Note
Refer to the complete code placed at Code/Serverless-Architectures-with-Azure/Lesson 3/BeginningAzureServerlessArchitecture/Models/.
Before the transactions have taken place: User.cs.
After the transactions have taken place: PostUsers.cs.
In this chapter, you covered the benefits and disadvantages of serverless computing. Next, you learned how to create, debug, and deploy an Azure Function, Azure's primary serverless product. This is the basic building block of serverless applications, and the first step to creating fully serverless architectures. Finally, you learned about the technical basis of Azure Functions, and how you can utilize your knowledge of it to improve the performance of your functions.
In the next chapter, we will be covering integrations with other Azure services and Azure-based logging and security solutions. | https://www.packtpub.com/product/beginning-serverless-architectures-with-microsoft-azure/9781789537048 | CC-MAIN-2020-40 | refinedweb | 4,468 | 52.39 |
CodePlex Project Hosting for Open Source Software
Hi,
I am trying to open the Orchard solution in newly installed Visual Studio Express 2012 RC. So far, I've had issues that for some reason VS2012 doesn't know that the projects are MVC projects, so it won't add Views/Controllers. Got past this by adding the
following GUID to the .csproj file of each project in the solution:
{E53F8FEA-EAE0-44A6-8774-FFD645390401}
However, when I open a razor view, it shows these errors (among other related ones):
Error 20 The name 'model' does not exist in the current context c:\Users\willem\Documents\Visual Studio 2010\Projects\000-Orchard Development\src\Orchard.Web\Modules\EventManagement\Views\EditorTemplates\Parts\Event.cshtml
2 2 EventManagement
Error 21 The name 'T' does not exist in the current context c:\Users\willem\Documents\Visual Studio 2010\Projects\000-Orchard Development\src\Orchard.Web\Modules\EventManagement\Views\EditorTemplates\Parts\Event.cshtml
5 14 EventManagement
Error 22 'System.Web.WebPages.Html.HtmlHelper' does not contain a definition for 'LabelFor' and no extension method 'LabelFor' accepting a first argument of type 'System.Web.WebPages.Html.HtmlHelper'
could be found (are you missing a using directive or an assembly reference?) c:\Users\willem\Documents\Visual Studio 2010\Projects\000-Orchard Development\src\Orchard.Web\Modules\EventManagement\Views\EditorTemplates\Parts\Event.cshtml
6 11 EventManagement
Also, Intellisense seems to be working in the razor view, but it only gives limited amount of fields for the Html helper method. For instance, none of the model specific methods like LabelFor and TextboxFor.
Not sure if this is Orchard specific, but thought you may have come accross it already? Any help is appreciated.
Thanks
What version of Orchard is that?
I downloaded the source zip of Version 1.4.2 from codeplex.
I also realise now that when I open the orchard solution in VS2010 again it also gives the same issues! It definately worked in VS2010 before I installed VS2012.
So:
I think that the issue is that the view doesn't have access to the referenced libraries in the root config:
"/>
</namespaces>
</pages>
</system.web.webPages.razor>
It does at runtime, but the intellilsense and error console doesn't pick it up
Ah well, yes, that's by design. The solution can't depend on a specific version of MVC being installed on the machine.
Hi Bertrand, thanks for the reply. I'm not sure I follow? I have now tested on three different machines, one of which never had VS2012 installed, and it still happens (even in Visual Web Developer Express 2010). Is there a solution to it?
The version of VS doesn't matter. There isn't a solution because we must be able to load the solution even if MVC was not previously installed on the machine. But it works at runtime, right?
It did on one of the machines, but on two of them it throws compiler errors. The only way I've been able to fix it was to uninstall MVC4. That seems to fix all the problems. It's a bit annoying since I would like to get started developing in MVC4, but at
least all my Orchard solutions are now compiling.
Well, MVC 4 is not out yet. It is often recommended to use VMs with beta software.
You could fix this by modifying the binding redirect section in the web.config on Orchard.Web. Use a range like 0.0.0.0-4.0.0.0 => 3.0.0.0
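A sketch of what that binding redirect section in Orchard.Web's web.config might look like with the widened range (the publicKeyToken shown is the standard one for System.Web.Mvc; verify it against your own config):

```xml
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
      <bindingRedirect oldVersion="0.0.0.0-4.0.0.0" newVersion="3.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```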
TYPO ERROR
need to use them them. = Two "them".
:)
I fixed them them. :) Thanks!
Hi!
I thing this is the best place to place this comment.
I'm having trouble with my header file Header.h:
Extra.cpp:
And Blankslate.cpp:
I keep getting the errors:
1>Extra.obj : error LNK2005: "int birb::thing" (?thing@birb@@3HA) already defined in BlankSlate.obj
1>C:\Users\Bastiaan\source\repos\BlankSlate\Debug\BlankSlate.exe : fatal error LNK1169: one or more multiply defined symbols found
And if I comment out the #include "Header.h" in Extra.cpp the errors disappear.
I just don't understand why, I thought you could include headerfiles in multiple code files?
Can't you?
You can, but you can't define a variable twice.
If you include "Header.h" in 2 files, and `thing` is defined in "Header.h", `thing` will be defined once by "Blankslate.cpp" and once by "Extra.cpp".
You can declare `thing` `extern` and define it in "Extra.cpp".
`constexpr` variables can be defined multiple times, that's why `gravity` doesn't cause you trouble.
Perfect, thanks!
Thank you for clear explanation.
"Header guards are designed to ensure that the contents of a given header file are not copied more than once into any single file.
Note that header guards do not prevent the contents of a header file from being copied (once) into separate project files.
There are quite a few cases we’ll show you in the future where it’s necessary to put non-function definitions in a header file."
Then we may have the same definition in multiple files, which is fine for the compiler but not for the linker. How do we resolve this issue?
Not everything is affected by the one-definition-rule. For example classes and inline entities can be defined multiple times in separate files.
Can you please give an example?
Thanks
a.hpp
b.cpp
c.cpp
Hi! In this lesson you're using square.cpp as an example with the following contents:
I understand that not including the associated header file isn't a problem here but shouldn't this lesson conform to the best practice established in the previous lesson (2.11)?
Yup, I missed updating that example. Thanks for pointing this out. It's fixed now.
Thanks for the advice regarding the compatibility of "pragma once". Now it makes more sense to know when to use it.
Hi again. Regarding header guards on non-function definitions in header files (such as, user defined types), I understand the compiler will complain if duplicate definitions are found. But why is the linker more lenient in this case?
You had said previously that, after the pre-processor acts on a code file, the final outcome does not contain any pre-processor directives because the compiler won't understand them.
But, under the title "Updating our previous example with header guards", you showed a program containing the directives after having gone through the pre-processing phase. Kindly explain.
The example shows the code after resolving #includes, not after fully preprocessing.
I see. Wouldn't it be more complete if you mentioned in main.cpp the guard (imported from geometry.h) around the second inclusion of the content from square.h?
I agree, let's wait until @Alex sees your comment.
Agreed. Lesson updated. Thanks for your feedback!
When should we be using the stdafx.h file?
It is mentioned in some of the lessons.
I haven't been including it and my programs work fine.
stdafx.h is no longer used. It was used by vc++ (The compiler used by Visual Studio) to speed up compilation. It has been replaced by pch.h.
If you're not using VS, you don't need it. If you're using VS, disable precompiled headers and remove the include to allow cross-platform compatibility.
Hi!
I have a doubt regarding including iostream header file in main function and another function which is called from main.
Eg:
This is the main function.
Now my getUserInput function is :
Since both of the files include iostream, will there be a conflict due to multiple definitions at linking stage?
No, it's fine. You can include headers as many times as you want. Alex talks about why this works later in the tutorials.
Thanks a lot!
You guys are awesome. This is the complete guide to master this language. Thanks for sharing the content.Cheers!!
This is one of the best c++ tutorials site I have seen.
Hello the quiz question answer is in the text already and it has been explained before
Thanks, quiz removed until I can think up a new one.
I'm having trouble understanding the point of the forward declaration in square.h, could someone help explain that to me? Thanks
Hi Timothy!
Without the forward declarations, @main.cpp wouldn't know about the functions in @square.cpp
Alex, i don't normally comment on stuff online but this tutorial is awesome, of all the cpp tutorials I've searched up this one was by far the easiest to follow and didn't overwhelm me with 9 billion words per page. 10/10 would recommend.
I try to keep the word count to less than 1 billion words per page.
Thanks for visiting!
Hi ALEX,
CAN I SCREEN-RECORD CERTAIN PAGES FOR PERSONNEL USE ?
THANKS FOR THIS AWESOME WEBSITE....
Yes, you can do whatever you want with the content of the site for personal use only. It's redistribution or rebroadcasting that's harmful.
Hi there Alex! I am not able to understand the difference between declarations and definitions..
In the code:
Shouldn't that be a declaration? Or am I wrong?
* If you're actually allocating memory for something, it's a definition and a declaration.
* If you're just telling the compiler about the existence of something (e.g. a new type, a forward declaration) that you're going to use later, it's a pure declaration.
When you see int x used to create a local variable, that's acting as both a declaration and a definition. Because it's a definition, it's susceptible to the one definition rule. Thus, if you try to define int x; again in the same scope, you'll get an error.
But Am I not just telling the compiler about the existence of the integer x? How is it both a declaration and a definition?
`int x` also reserves space to store @x; you'll learn how to declare variables without defining them later on.
)."
It doesn't work for me. I have gcc compliant c++11. square.cpp must be included also in main.cpp, otherwise the program do not find the definition of functions defined (not just declared) in square.cpp
Hola amigo!
Don't include source files. Tell your compiler to compile both files at the same time.
For g++ the syntax is as follows, other compilers are similar
It's a bit late but HAPPY ANNIVERSARY! Your site is the best, sir!
I think to know how c++ doesn't work really deepen my understanding more than learning how it works
Hello Alex,
Don't you think we should now stick to #pragma once, just to make sure the code is readable, short and less error-prone?
Most of the compilers these days have support for #pragma once.
Regards,
Ishtmeet.
I generally try to recommend practices that are an official part of the C++ language, not compiler-specific extensions, even if they are widely adopted.
I wish C++ would formally adopt #pragma once because it is better than explicitly specifying header guards. But until they do, I won't recommend it, because it may leave your program unable to be compiled on particular compilers.
For what it's worth, the Google C++ style guide also recommends avoiding #pragma once.
Sorry if it's a noob question...
I'm confused about this code:
The preprocessor will look file by file and copy the contents of #includes inside each one that is doing a #includes right?
So when geometry.h does a #include "math.h", it copies to it the: int getSquareSides() definition right?
And when main.cpp does a #include "math.h" and #include "geometry.h", it is ALSO including int getSquareSides() definition right?
Shouldn't the linker complain about this duplicate definition? Or the geometry.h won't copy the contents of math.h to itself?
Help me find out what I'm missing here. Thanks.
Hi Paulo!
There's no duplicate definition, I've shown the include steps in this comment
"Now, when main.cpp #includes “math.h”, the preprocessor will see that MATH_H hasn’t been defined yet. The contents of math.h are copied into main.cpp, and MATH_H is defined. main.cpp then #includes “geometry.h”, which just #includes “math.h”. At this point, the preprocessor sees that MATH_H has previously been defined, and the contents between the header guards are skipped."
How can the preprocessor see that MATH_H has already been defined if there is no header guard for MATH.H in GEOMETRY.H?
Let's look at what happens when the files have been included:
Now, walk through the code line by line
So the final code looks like this
Hi nascardriver,
Please allow me to make up a similar example. What does the final code look like? and What is the output? Thank you.
#ifndef MATH_H // MATH_H isn't yet defined,
#define MATH_H // so we define it and go on.
std::cout << "AAA" << std::endl;
#endif // MATH_H
#ifndef GEOMETRY_H // GEOMETRY_H isn't yet defined,
#define GEOMETRY_H // so we define it and go on.
std::cout << "BBB" << std::endl;
#ifndef MATH_H // MATH_H is already defined
#define MATH_H // This code won't be compiled
#endif // MATH_H
#endif // GEOMETRY_H
int main()
{
std::cout << "CCC" << std::endl;
return 0;
}
Doesn't compile so no output.
You can check this yourself by generating pre-processed files. For g++ the command is g++ -E input.cpp -o output.ii
-E Stop after preprocessing
-o Output file
Output for your code
Dear Teacher, please permit me a suggestion: In program
change second return value to some number different than 5, to be clear that problem is same name of functions. Regards.
It's actually better the way it is. If I changed it, people might assume the compiler only complains if the function definition is different. In fact, the actual definition content don't matter -- it's the fact that there's more than one definition (duplicate or not) that's the issue.
Dear Teacher, please let me following question.
Header guards prevent duplicate header files. Is there way for prevent duplicate function declaration?
No. Duplicate function declarations aren't a problem:
However, duplicate function definitions are a problem. You can generally avoid that by ensuring you define all your functions in .cpp files.
How come square.cpp doesnt require that you forward declare the function when it compiles or am I forgetting or confusing something?
// It would be okay to #include square.h here if needed
// This program doesn't need to.
int getSquareSides() // actual definition for getSquareSides
{
return 4;
}
int getSquarePerimeter(int sideLength)
{
return sideLength * getSquareSides();
}
bcoz square.cpp is a different .cpp file and it doesn't include main function which requires forward declaration to compile if functions are defined after main()...
for example
Dear Teacher,
Please let me ask you whether following source code is valid c++'s iostream header file. Symbols //@ and /// are confusing me. Former is before second "{" and before first "}".
Regards.
Hi Georges!
There is no such thing as 'the iostream header', there are many different versions of it.
What you posted matches the iostream header installed on my system.
Dear nascardriver, please accept my thanks for you replied, and for your helpful answer. I understand that what I posted is just a version of "iostream" header file. Regards.
Anything that starts with // is a single line comment, so //@ is a comment, as is ///. Why they did this, I have no idea.
Dear Teacher, please accept my many thanks for you replied, and for your helpful answer.
Please let me say you another thing I do not understand. I understand that outer body {} belongs to function
but what about inner body {}?
Regards.
I'm not sure why they put those objects inside an inner block.
Dear Teacher, please accept my thanks for you replied, and for your motivating answer. It is motivation for me, for learn c++ object and block. Regards.
What's the difference between .h and .cpp files? Can we write code inside .h files too (as in the third example)?
Hi gSymer!
You can write code inside header files, but that's not what they're meant for.
Declarations go in header files and definitions in source files.
Dear Teacher, please let me say that word pragma is transliteration of greek word πραγμα. In general it means "thing" or "stuff". For more information see at
Regards.
Dear Teacher, please let me say that in section "Header guards do not prevent a header from being included once into different code files", in second (correct) program (project) in main.cpp file, comment is
I think correct is
By the way, color is not the usual green but same as #include ... Regards.
Name (required)
Website
Save my name, email, and website in this browser for the next time I comment. | https://www.learncpp.com/cpp-tutorial/header-guards/comment-page-3/ | CC-MAIN-2021-04 | refinedweb | 2,236 | 68.26 |
Overview
- Explanation of tree based algorithms from scratch in R and python
- Learn about tree based algorithms and ensemble methods. You can also check out the ‘Introduction to Data Science‘ course covering Python, Statistics and Predictive Modeling.
We will also cover some ensemble techniques using tree-based models below. To learn more about them and other ensemble learning techniques in a comprehensive manner, you can enrol in this free course: Ensemble Learning Course: Ensemble Learning and Ensemble Learning Techniques
Table of Contents
- What is a Decision Tree? How does it work?
- Regression Trees vs Classification Trees
- How does a tree based algorithms decide where to split?
- What are the key parameters of model building and how can we avoid over-fitting in tree based algorithms?
- Are tree based algorithms better than linear models?
- Working with Decision Trees in R and Python
- What are the ensemble methods of tree based algorithms?
What is a Decision Tree? How does it work?
Let’s look at the basic terminology used with decision trees:
Gini Index
You might often come across the term ‘Gini Impurity’ which is determined by subtracting the gini value from 1. So mathematically we can say,
Gini Impurity = 1 - G
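As a worked sketch of this formula (the node counts are assumed for illustration: a 30-student sample split on Gender):

```python
def gini(p):
    """Gini score for a node, where p is the proportion of 'success'."""
    q = 1 - p
    return p * p + q * q  # higher score = purer node

g_female = gini(2 / 10)   # Female node: 2 of 10 play cricket -> 0.68
g_male = gini(13 / 20)    # Male node: 13 of 20 play cricket  -> 0.545
# Weighted Gini for the split: population-weighted average of the node scores
weighted = (10 / 30) * g_female + (20 / 30) * g_male
print(round(weighted, 2))  # 0.59
```

The split with the higher weighted Gini score is the one the tree prefers.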
What are the key parameters of model building and how can we avoid over-fitting in tree based algorithms?

Overfitting is one of the key challenges faced while using tree based algorithms.
Let’s discuss both of these briefly.
Setting Constraints on Tree Size
Tree Pruning
Working with Decision Trees in R and Python
For R users and Python users, decision tree is quite easy to implement. Let’s quickly look at the set of codes that can get you started with this algorithm. For ease of use, I’ve shared standard codes where you’ll need to replace your data set name and variables to get started.
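The standard snippets mentioned above are not reproduced here, so as an illustration, here is a minimal scikit-learn sketch along those lines (toy data assumed; scikit-learn required):

```python
from sklearn import tree  # scikit-learn, assumed installed

# Toy data (assumed): one feature, binary target
X = [[1], [2], [3], [8], [9], [10]]
y = [0, 0, 0, 1, 1, 1]

# 'gini' is the default split criterion; 'entropy' is the alternative
model = tree.DecisionTreeClassifier(criterion='gini')
model.fit(X, y)

predicted = model.predict([[2], [9]])
print(predicted.tolist())  # [0, 1]
```

Replace the toy X and y with your own data set's features and target to get started.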
What are the ensemble methods of tree based algorithms?
The literary meaning of word ‘ensemble’ is group. Ensemble methods involve group of predictive models to achieve a better accuracy and model stability. Ensemble methods are known to impart supreme boost to tree based models.
Like every other model, a tree based algorithm can serve as the building block of an ensemble.
What is Bagging? How does it work?
It works in the following manner: multiple bootstrapped samples are created from the original data, a model is built on each sample, and the predictions of all the models are combined. Random Forest is the best-known example of bagging applied to decision trees.
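A pure-Python sketch of those three steps, using toy data and a trivial threshold "model" (everything here is assumed for illustration):

```python
import random

random.seed(42)

# Toy (feature, label) pairs, assumed for illustration
data = [(1, 0), (2, 0), (3, 0), (10, 1), (11, 1), (12, 1)]

def train_stump(sample):
    """A trivial 'model': a threshold halfway between the class means."""
    zeros = [x for x, y in sample if y == 0]
    ones = [x for x, y in sample if y == 1]
    if not zeros or not ones:          # bootstrap sample missed a class
        return 6.5                     # fall back to a fixed threshold
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

# 1) Create multiple datasets: bootstrap samples (sampling with replacement)
samples = [random.choices(data, k=len(data)) for _ in range(25)]
# 2) Build a model on each sample
thresholds = [train_stump(s) for s in samples]
# 3) Combine the models: majority vote over all 25 stumps
def predict(x):
    votes = sum(1 if x > t else 0 for t in thresholds)
    return 1 if votes > len(thresholds) / 2 else 0

print(predict(2), predict(11))  # 0 1
```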
To understand more in detail about this algorithm using a case study, please read this article “Introduction to Random forest – Simplified“.
Advantages of Random Forest
- It has an effective method for estimating missing data and maintains accuracy when a large proportion of the data are missing.
- It has methods for balancing errors in data sets.
Disadvantages of Random Forest
- It surely does a good job at classification, but not as good for regression problems, as it does not give precise continuous predictions.
Python & R implementation
Random forests have commonly known implementations in R packages and Python scikit-learn. Let’s look at the code of loading random forest model in R and Python below:
R Code
> library(randomForest)
> x <- cbind(x_train, y_train)
# Fitting model
> fit <- randomForest(Species ~ ., x, ntree=500)
> summary(fit)
# Predict Output
> predicted <- predict(fit, x_test)
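Python Code

A minimal scikit-learn sketch along the same lines (toy data and parameter values assumed):

```python
from sklearn.ensemble import RandomForestClassifier  # scikit-learn, assumed installed

# Toy, linearly separable data standing in for x_train / y_train
x_train = [[0], [1], [2], [10], [11], [12]]
y_train = [0, 0, 0, 1, 1, 1]

# Fitting model (n_estimators = number of trees in the forest)
model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(x_train, y_train)

# Predict output
predicted = model.predict([[1], [11]])
print(predicted.tolist())  # [0, 1]
```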
10. What is Boosting? How does it work?
Boosting is a sequential technique that combines many weak learners into a single strong learner. The predictions of the weak learners are combined by:
- Using average/weighted average
- Considering the prediction that has a higher vote
This is how the ensemble model is built.
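A tiny sketch of those two combination rules (all numbers assumed):

```python
preds = [1, 0, 1]          # predictions of three weak learners for one observation
weights = [0.5, 0.2, 0.3]  # accuracy-based learner weights (assumed)

# Rule 1: the class predicted by most learners wins (higher vote)
majority = max(set(preds), key=preds.count)

# Rule 2: weighted average of the predictions, thresholded at 0.5
weighted = sum(w * p for w, p in zip(weights, preds))
label = 1 if weighted >= 0.5 else 0

print(majority, label)  # 1 1
```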
Advantages of XGBoost over GBM:
- Regularization:
- Standard GBM implementation has no regularization like XGBoost, therefore it also helps to reduce overfitting.
- In fact, XGBoost is also known as ‘regularized boosting‘ technique.
- Parallel Processing:
- XGBoost implements parallel processing and is blazingly faster as compared to GBM.
- But hang on, we know that boosting is sequential process so how can it be parallelized? We know that each tree can be built only after the previous one, so what stops us from making a tree using all cores? I hope you get where I’m coming from. Check this link out to explore further.
- XGBoost also supports implementation on Hadoop.
- High Flexibility
- XGBoost allow users to define custom optimization objectives and evaluation criteria.
- This adds a whole new dimension to the model and there is no limit to what we can do.
- Handling Missing Values
- XGBoost has an in-built routine to handle missing values.
- User is required to supply a different value than other observations and pass that as a parameter. XGBoost tries different things as it encounters a missing value on each node and learns which path to take for missing values in future.
- Tree Pruning:
- A GBM would stop splitting a node when it encounters a negative loss in the split. Thus it is more of a greedy algorithm.
- XGBoost on the other hand make splits upto the max_depth specified and then start pruning the tree backwards and remove splits beyond which there is no positive gain.
- Another advantage is that sometimes a split of negative loss say -2 may be followed by a split of positive loss +10. GBM would stop as it encounters -2. But XGBoost will go deeper and it will see a combined effect of +8 of the split and keep both.
- Continue on Existing Model

We looked at these algorithms along with their practical implementation. It's time that you start working on them. Here are open practice problems where you can participate and check your live rankings on the leaderboard:
End Notes
Tree based algorithms are important for every data scientist to learn.
Note – The discussions of this article are going on at AV’s Discuss portal. Join here!
64 Comments
Lovely Manish!
Very inspiring. Your articles are very helpful.
Looking forward to your next,
Trinadh
Glad you found it helpful.
Thanks Trinadh!
Excellent Manish
Thanks Venky
Can I please have it in pdf or rather can you please make all your tutorials available in pdf.
Hi Hulisani
I’ll soon upload the pdf version of this article. Do keep a check.
Very detailed one Manish. Thank you.!
Nice writeup
Very clearly explained .. Good Job
Good job Manish, thank you.
It was nice
Thanks Darshit.
Very nice
Very clear explanations and examples. I have learned a lot from this. Thankyou.
Do you plan to write something similar on Conditional Logistic Regression, which is an area I also find interesting?
Welcome Joe. And, thanks for your suggestion. I guess I need to check this topic.
Amazing teacher you are..thanks for the great work
Thanks Kishore!
You are a really great teacher. Keep up the great work!
means a lot. Thank You!
Good Job Manish
Awesome post, Manish! Kudos to you! You are doing such a great service by imparting your knowledge to so many!
well described.
Perhaps you wish to tell us how many YEARS of experiment learning that you have that you can summarize in a few liners …
For your 30 students example it gives a best tree for the data from that particular school.
It is not clear how you test that fixed best tree for other data from other schools or where the fact of playing cricket, or not, is not known. How do you then establish how good the model is?
It seems that trees are biased towards correlating data, rather than establishing causes. The results for a country, say USA, that did not play much cricket or a school without a cricket pitch and equipments would give completely misleading answers. So the example tree has really just correlated data for a particualr Indian school but not investigated any cause of playing cricket.
It is an All in One tutorial. Really helpful. Thanks a lot.
Hi Manish,
This article is very informative. I have doubt in calculation of Gini Index.
You said “1. Calculate Gini for sub-nodes, using formula sum of square of probability for success and failure (p^2+q^2).”
But in Rpart related pdf in R , formula for Gini index = p(1-p).
Please correct me if anything wrong in my understanding.
Excellent introduction and explanation. You are very good at explaining things and sharing. Appreciate your hard work.
Venkatesh
Good job
Awesome!! Makes life so much easier for all of us.
Hi Manish,
Very detailed (both theory and examples). Really appreciate your work on this.
Keep up the good work.
Rajesh
Hi Manish, Thanks for the awesome post…
Please provide pdf version of this.
Varun
Very well drafted article on Decision tree for starters… Its indeed helped me. Thanks Manish, We’ll look for more 🙂
Good to know. Thanks Himanshu!
hi Manish… very effective and simple explanation on Tree Based Modeling. can you provide me with pdf version please ?
Thanks for the article! Can someone help me to how to address the below scenario!
Is it advisable to use Classification Tree techniques (CHAID / CART) when the class proportions is highly skewed.
For e.g. Class A is 98% of the base and Class B is only 2% of the population.
Awesome post, thank you!
I would like to know why some people use a tree to caterorize varibles and then with this categorized variables build a logistic regression?
Because in some way, a chaid tree defines best/optimal breaks in continuos variable, using points of break where chi test is more significant.
from scipy.stats import mode
mode(df[‘Gender’])
C:\Anaconda3\lib\site-packages\scipy\stats\stats.py:257: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
“values. nan values will be ignored.”, RuntimeWarning)
—————————————————————————
TypeError Traceback (most recent call last)
in ()
—-> 1 mode(df[‘Gender’])
C:\Anaconda3\lib\site-packages\scipy\stats\stats.py in mode(a, axis, nan_policy)
642 return mstats_basic.mode(a, axis)
643
–> 644 scores = np.unique(np.ravel(a)) # get ALL unique values
645 testshape = list(a.shape)
646 testshape[axis] = 1
C:\Anaconda3\lib\site-packages\numpy\lib\arraysetops.py in unique(ar, return_index, return_inverse, return_counts)
196 aux = ar[perm]
197 else:
–> 198 ar.sort()
199 aux = ar
200 flag = np.concatenate(([True], aux[1:] != aux[:-1]))
TypeError: unorderable types: str() > float()
can anybody help me on python..new in python..what should I do for this error
Please reply back as soon as possible. Thanks!!
Hi, is the formula ((p^2+q^2).) that you have for calculation of Gini Indx correct? Can you please provide reference of a published paper or standard book.
I am trying to use MLLIB on spark to implement decision tree. How do I determine the best depth without using sklearn ?
Sir, how to decide the number of trees to get a good result from random forest ??
THis is one of the nest explanation. I came across. Thanks a ton.
Is it possible if you could talk about the M5 rule-based algorithm
Hi Manish, I want to learn a more practical approach in R with some examples on control constraints, bias, variance and pruning. Can you please suggest something.
it was nice and beautiful article. i learned a lot as i am new to machine learning. it cleared many of my confusions on decision tree and RandomForest.
Thank you
you can refer to ISLR book for R code..
Thank you
Hi Manish,
your article is one of the best explanations of decision trees I have read so far. Very good examples which make clear the gains of different approaches.
Now some things are clearer for me. Thanks a lot!
The fact that I am reading this article at 4 AM and not feeling sleepy even a bit (in fact I lost sleep somewhere in the middle) and am getting ready to execute the code for my own dataset shows the worth of this article. Hats off. Looking forward to reading all your articles. Thanks a lot
Hi Manish,
Nicely written, good job
Is there a way to get a sample-to-root-node mapping?
Manish,
Very well and comprehensively written. Thanks for your efforts. So random forest is a special case of the bagging ensemble method with a decision tree as the classifier?
Thanks
Kishore
Very simple and nicely written. Good job.
What does CV mean?
Sorry for the confusion; the meaning of CV is cross-validation.
Cross-validation
CV = cross-validation. It took me a while to figure that one out too.
Spectacular article….Keep it up. Manish
Thanks a lot Manish for sharing. I have started my learning journey with your site, gradually building confidence.
Appreciate your efforts for enhancing knowledge across the world
How can you tell if the GBM or random forest did a good job in predicting the response?
What if I have a low R-squared but an AUC of 0.70? Can I assume my model is good at explaining the variability of my response categories?
Yes, indeed very informative.
Hi Manish,
Thanks for a wonderful tutorial. Is there a way to get the scored probability per student, where I can state that a particular student has an X% chance of playing cricket?
This is a great article! Very detailed and understandable compared to other introductions to these methods. Please post more like this! Appreciate it!
Amazingly well written ….
Thank you so much sir
Simple & excellent!
Thanks
Excellent | https://www.analyticsvidhya.com/blog/2016/04/tree-based-algorithms-complete-tutorial-scratch-in-python/?utm_source=blog&utm_medium=beginners-guide-random-forest-hyperparameter-tuning | CC-MAIN-2020-34 | refinedweb | 2,183 | 67.65 |
On Wed, Nov 16, 2011 at 1:29 AM, Nick Coghlan <ncoghlan at gmail.com> wrote: > So, without a clear answer to the question of "from module X, inside > package (or package portion) Y, find the nearest parent directory that > should be placed on sys.path" in a PEP 402 based world, I'm switching > to supporting PEP 382 as my preferred approach to namespace packages. > In this case, I think "explicit is better than implicit" means, "given > only a filesystem hierarchy, you should be able to figure out the > Python package hierarchy it contains". Only explicit markers (either > files or extensions) let you do that - with PEP 402, the filesystem > doesn't contain enough information to figure it out, you need to also > know the contents of sys.path. > After spending an hour or so reading through PEP 395 and trying to grok what it's doing, I actually come to the opposite conclusion: that PEP 395 is violating the ZofP by both guessing, and not encouraging One Obvious Way of invoking scripts-as-modules. For example, if somebody adds an __init__.py to their project directory, suddenly scripts that worked before will behave differently under PEP 395, creating a strange bit of "spooky action at a distance". (And yes, people add __init__.py files to their projects in odd places -- being setuptools maintainer, you get to see a LOT of weird looking project layouts.) While I think the __qname__ idea is fine, and it'd be good to have a way to avoid aliasing main (suggestion for how included below), I think that relative imports failing from inside a main module should offer an error message suggesting you use "-m" if you're running a script that's within a package, since that's the One Obvious Way of running a script that's also a module. (Albeit not obvious unless you're Dutch. ;-) ) For the import aliasing case, AFAICT it's only about cases where __name__ == '__main__', no?
Why not just save the file/importer used for __main__, and then have the import machinery check whether a module being imported is about to alias __main__? For that, you don't need to know in *advance* what the qualified name of __main__ is - you just spot it the first time somebody re-imports it. I think removing qname-guessing from PEP 395 (and replacing it with instructive/googleable error messages) would be an unqualified improvement, independent of what happens to PEPs 382 and 402.
forcibly inline python functions to their call site
This is a Python hack to let you specify functions as “inline” in much the same way you’d do in C. You save the cost of a function call by inlining the code of the function right at the call site. Of course, being Python, the inlining is done at runtime.
WARNING: Don’t use this in any real code. Seriously. It’s just a fun hack.
Now then, suppose you have some code like this:
def calculate(x):
    return 3*x*x - 2*x + (1/x)

def aggregate(items):
    total = 0
    for item in items:
        total += calculate(item)
    return total
This code pays the overhead of a function call for every item in the collection. You can get substantial speedups by inlining the calculate function like so:
def aggregate(items):
    total = 0
    for x in items:
        total += 3*x*x - 2*x + (1/x)
    return total
But now you’re paying the costs in terms of code quality and re-use. To get the best of both worlds simply declare that the calculate function should be inlined:
from atinline import inline

@inline
def calculate(x):
    return 3*x*x - 2*x + (1/x)

def aggregate(items):
    total = 0
    for item in items:
        total += calculate(item)
    return total
Now the first time the aggregate() function runs, it will detect that the calculate() function should be inlined, make the necessary bytecode hacks to do so, then continue on its way. Any subsequent calls to aggregate() will avoid the overhead of many function calls.
Currently only plain calls of top-level functions are supported; things won’t work correctly if you try to inline methods, functions looked up in a dict, or other stuff like this. It also doesn’t work with keyword arguments. These limitations might go away in future.
The heavy-lifting of bytecode regeneration is done by the fantastic “byteplay” module, which is atinline’s only dependency.
Chapter 15: Helping web2py
Helping web2py: Bugs, enhancements and documentation
web2py is very welcoming towards bug reports, documentation improvements and enhancements.
Google Group
The main forum for discussing bugs and new features is: web2py-users (URL is)
Filing a bug report via Google Code
If you have found a bug and discussed it on the group, you may be requested to file a bug report by creating an 'Issue' on Google Code
Reporting Security Bugs
When reporting security bugs, be careful about disclosing information which would assist a malicious exploitation, whether you post on the Google group or raise an issue at the Google Code site.

Contributing changes

Development tends to be at GitHub. web2py has two important versions: the current stable version, and the development snapshot. In git the development snapshot is usually referred to as "master", but the web2py project uses "trunk", which is the equivalent term from Mercurial. Fork the web2py repository on GitHub so that you can push your changes back to your GitHub fork
Add the main web2py repository as an upstream remote:
git remote add upstream
Next you need a branch name for your changes. git encourages lots of branch names, as specific as possible. For web2py, we recommend names like this:
- every bug-fixing commit should come from a branch named "issue/number_of_the_issue_on_google_code" (like issue/1684)
- every enhancement commit should come in a branch named "enhancement/title_of_the_enhancement" (like enhancement/trapped_links)
In your local environment, check out the branch for your changes (substitute CHANGE1 for your branch name):
git fetch upstream git checkout -b CHANGE1 upstream/master
... Make changes or cherry-pick from another local branch; commit if necessary. When you are ready to send your local changes to your web2py fork:
git push origin CHANGE1
<TODO insert note about collapsing several commits into one commit>
and now:
python web2py.py --run_system_tests
class TestVirtualFields(unittest.TestCase):
    def testRun(self):
        db = DAL(DEFAULT_URI, check_reserved=['all'])
        db.define_table('tt', Field('aa'))
        db.commit()
        db.tt.insert(aa="test")

        class Compute:
            def a_upper(row):
                return row.tt.aa.upper()

        db.tt.virtualfields.append(Compute())
        assert db(db.tt.id > 0).select().first().a_upper == 'TEST'
        db.tt.drop()
        db.commit()
Documentation: Updating the book
The book is also on GitHub and the same git workflow can be used. The book source contains sources in various languages, under the sources directory. The content is written in markmin.
def getcfs(key, filename, filter=None):
    """
    Caches the *filtered* file `filename` with `key` until the file
    is modified.

    Args:
        key(str): the cache key
        filename: the file to cache
        filter: is the function used for filtering. Normally `filename`
            is a .py file and `filter` is a function that bytecode
            compiles the file. In this way the bytecode compiled file
            is cached. (Default = None)

    This is used on Google App Engine since pyc files cannot be saved.
    """
The docstring format follows Google's style guide; napoleon is a requirement for building the docs
Programming Guidelines
Index
- General Layout
- Structure
- Code Formatting
- Flow Control Statements
- Naming Conventions
- Effective Commenting
The following are suggestions on how to write better code. This is an evolving standard that we believe applies generally to most programming languages, and as styles change we would welcome your suggestions for improvement.
General
Write Clearly
Don't be overly clever. Just because you can do something in a single line of code doesn't mean you should. Do not write code just for the compiler; write the code for you and your fellow programmers. Remember it may be you who returns to this code days, months, or years later and can't remember what that complex statement does. Never sacrifice clear code for "efficient" code.
Remember: clearly written code is self-commenting; there should be no need for blocks of comments to describe what is going on. If that isn't the case, consider using more descriptive variable names, and breaking the code up into more distinct modules.
Make it Work, then Make it Fast
Often during development, developers want to get the most "bang" for their money. If you write code that works, and the interface is clear and correct, you can always go back later and make it faster. Strive to make it correct and readable first and fast second. Fast wrong code is still wrong code.
Structure
The process of understanding a particular program is sped up if all programs are organised in the same way. In C++, all files should be organised with the declaration/body of a function above the code where the function is being used. Thus the main function(s) within a program (*.cpp) should always appear at the bottom of the code, i.e.
- #include statements
- functions' bodies
- class bodies
- main function body
A header file (*.h) should contain all the function and class definitions that are intended to be visible to users who include the header. Except in the use of template<>s, headers should generally only contain the definitions of functions and classes, not the bodies of these functions or classes. Thus the standard order should be
- #include statements
- #define statements
- struct definitions
- function definitions
- class definitions
Class Declarations
The following is a list of the parts of a class declaration in the order that they should appear:
- Class statement
- Constants and static variables
- Member variables
- Constructors
- Functions
//! A class to do something
class SomeClass : public SomeBaseClass
{
public:
  int m_numPeople;    //! Number of People

protected:
  int m_numCanines;   //! Number of Canines

private:
  int m_numFelines;   //! Number of Felines

  //! Default constructor
  SomeClass();

  //! A function
  void FunctionA();
};
Member Variables and Functions
First list the public, then protected, and then the private. By placing the public section first everything that is of interest to a user is gathered at the beginning of the class definition. The protected section may be of interest to designers when considering inheriting from the class. The private section contains details that should have the least general interest.
For readability, all member variables of a class should be indented to the same column.
Public Member Class Variables
A public variable represents a violation of one of the basic principles of object-oriented programming, namely, encapsulation of data. For example, if there is a class of the type BankAccount, in which accountBalance is a public variable; any user of the class may change the value of this variable. However, if the variable has been declared protected, it's value may be changed only by the member functions of the class (e.g. setAccount() and getAccount()). This prevents some other arbitrary function changing data; which may lead to errors that are difficult to locate.
If public data is avoided, it's internal representation may be changed without users of the class having to modify their code. This is a core principle of class design, where the existence of a public interface for the class allows the internal implementation of a class to be changed without affecting the users.
Conversely, for classes or structs that act as containers for a collection of data variables, without providing significant additional functionality (beyond perhaps initialisation, copying and destruction), it is simply ludicrous to provide a setX() and getX() for each member variable. Therefore, when designing a class great care should be taken to ascertain its purpose and intended use.top
Function Arguments
Functions with long lists of arguments look complicated and are difficult to read. In general, if a function has more than five parameters then perhaps it should be redesigned. One alternative is to package the parameters into a class or structure that is then passed to the function.
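One way to sketch this (the struct and function names are hypothetical, invented for this example): instead of passing six separate values, collect them in a struct and pass that:

```cpp
#include <string>

// Hypothetical example: the parameters of a print job are grouped into
// one struct instead of a long argument list.
struct PrintSettings
{
  std::string printerName;
  int         copies;
  bool        duplex;
  bool        colour;
  int         firstPage;
  int         lastPage;
};

// The function now takes a single, self-describing argument.
int countPagesToPrint(const PrintSettings &settings)
{
  return (settings.lastPage - settings.firstPage + 1) * settings.copies;
}
```

A further benefit is that adding a new setting later only changes the struct, not the signature of every function that uses it.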
Try to keep functions short. One compelling reason is that if an error situation is discovered at the end of an extremely long function, it may be difficult for the function to clean up after itself before reporting the error to the calling function. By always using short functions, such an error can be localized more exactly.
Constant Correctness
In the declaration of input variables for a function, a const keyword should precede the type definition of all variables that are not changed during the execution of the function.
It is usual to pass basic type variables (such as int, double, etc.) by value. In this instance, the use of the const keyword is merely to aid readability: variables defined as const will never change in the function.
For large variables/objects it is common to pass these by either reference or by pointer. Through the use of a const keyword, variables/objects that are not changed by the function can be explicitly declared within the function definition. The compiler will always ensure adherence to this rule, so anyone using a function can instantly see the potential for a variable to be changed. This also simplifies debugging. Note that although a standard 'object reference' can be converted into a 'const object reference', this conversion cannot be performed the other way around. However, as you might expect, a function that takes an object by value (it makes its own local copy) will accept both const reference and non-const variables.
double setZ(const Coordinate &point);
double setY(Coordinate &point);
double setX(Coordinate point);

double init(const int x, const Coordinate &A, Coordinate &B)
{
  int a = x;   // ok
  x++;         // illegal, x is const
  A.m_x = a;   // illegal, A is a const reference (m_x is a member of Coordinate)
  setX(A);     // ok, A is passed by value (a copy is made)
  setY(A);     // illegal, setY takes a non-const reference
  setZ(A);     // ok
  setX(B);     // ok
  setY(B);     // ok
  setZ(B);     // ok
}
Within a class, the const keyword can also be used to designate member functions that do not alter other members of that class. This has the same distinct benefits for debugging as those discussed above, and also assist readability. Note that for obvious reasons, a const member function cannot call a nonconst member function.
class A
{
  int m_size;

  int getSize()          // bad - doesn't alter any member variables, so should be const
  {
    return m_size;
  }

  int getTheSize() const // good
  {
    return m_size;
  }

  int doSomething() const
  {
    return m_size++;     // illegal
  }

  int add() const
  {
    return getSize()+1;  // illegal, getSize() isn't const, so in theory it may alter the class
  }
};
Code Formatting
Indentation
Always a touchy subject. The issue of indentation is important for the purposes of common code formatting. The standard we would like to adopt is to use 2 spaces for indentation. To ensure compatibility across development tools, indentation should use spaces instead of tabs.
Line Length
Avoid lines longer than 80 characters. This is intended to make the code more readable in an editor, as well as to allow for easier printing of the source code.
Blank Lines
Blank lines improve readability by setting off sections of code that are logically related.
Two blank lines should always be used between the blocks listed under "Order of code sections".
One blank line should always be used in the following circumstances:
- Between functions.
- Before a block comment or leading comment.
- Between logical sections inside a function to improve readability.
Spacing
In our opinion, code is more readable if spaces are not used around the ‘.’ operator. The same applies to unary operators such as increment ("++") and decrement ("--") since a space may give the impression that the unary operand is actually a binary operator.
- Do not use spaces around '.' or between unary operators and operands.
- Spaces around binary operators and operands may increase readability. Too much can have opposite effect. The user of spaces should also draw attention to the operator precedence.
- When calling a function, the left parenthesis should immediately follow the name without any spaces.
- A space between a keyword and a parenthesis should not be used
- Spaces around parentheses may be used in expressions to increase readability
// good examples
a += c + d;
a = (y + x*(a+b)) / (c*d);  // spaces are removed to improve readability
a++;
aDate.SetYear(1994);

for( int i=0; i<10; i++ )
{
}

// bad examples
a+=c+d;
a=(y+x*(a+b))/(c*d);
a = ( y + x * ( a + b ) ) / ( c * d );
a = ( y+x * (a+b)) / (c+d);  // operator precedence evaluates x*(a+b) first, prior to adding y
a ++;
aDate . SetYear(1994);
Breaking Lines
The breaking of a line of code aids readability. If a line needs to be broken up into 2 or more lines, break it according to these general principles:
- Break after a comma or an operator.
- Prefer higher-level breaks to lower-level breaks.
- Align the new line with the beginning of the expression at the same level on the previous line.
- If the above rules lead to confusing code, or to code that is squashed up against the right margin, use two indentations instead.
// good -- correct alignment
var = someFunction1(longExpression1,
                    someFunction2(longExpression2,
                                  longExpression3));

// good -- break at higher level
longName1 = longName2 * (longName3 + longName4 - longName5)
            + 4 * longName6;

// avoid -- this is a break at a lower level
longName1 = longName2 * (longName3 + longName4
            - longName5) + 4 * longName6;

// good
if((condition1 && condition2)
    || (condition3 && condition4)
    || !(condition5 && condition6))
{
  doSomethingAboutIt();
}
Statements
Only one statement should be written on each line. Placing multiple statements on the same line can cause problems. Many debuggers cannot handle multiple statements on a single line.
argv++;          // good
argc--;          // good
argv++; argc--;  // avoid
Variables
Avoid assigning several variables to the same value in a single statement. It makes it hard to read. Also, do not use embedded assignments in an attempt to improve run-time performance - this is the job of the compiler.
// avoid the following
fooBar.fChar = barFoo.lchar = 'c';
d = (a = b + c) + r;

// do it this way instead
fooBar.fChar = 'c';
barFoo.lchar = 'c';
a = b + c;
d = a + r;
// good variable declarations
int level = 4;   // indentation level
int size  = 256; // size of table

// avoid
int level=4, size=256;
Blocks and Braces
Blocks, also called compound statements, are constructs that contain lists of statements enclosed in braces ‘{}’. The placement of braces is the subject of great debate concerning the appearance of code. We recommend the style that, in our opinion, more clearly shows enclosure and block ownership. Other styles may well provide more compact code.
- Braces ‘{}’ which enclose a block are to be placed in the same column, on separate lines directly before and after the block. The enclosed statements should be indented one more level than the compound statement.
- Trailing comments should be used on the closing brace when the block structure is complex, or there are several levels of nesting.
if( x == 0 )
{
  ...
}

try
{
  ...
}
catch( Exception )
{
  ...
}
Write Module Code
Code should be broken down into smaller pieces in order to make testing easier and to facilitate re-use of code. Functions that span several pages of printed text are hard to follow, harder to debug and are less likely to be reusable in alternative applications. As a general rule, a function should not span more than 2 pages (or 100 lines). Furthermore, two or more functions can be adapted by others for a wider range of uses than one single function.
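As an illustration (the functions below are invented for this sketch), a routine that filters and sums data is easier to test and reuse when each step is its own small function:

```cpp
#include <cstddef>
#include <vector>

// Each step of the calculation is a small, individually testable function.
std::vector<int> filterPositive(const std::vector<int> &values)
{
  std::vector<int> result;
  for(std::size_t i = 0; i < values.size(); i++)
  {
    if(values[i] > 0)
    {
      result.push_back(values[i]);
    }
  }
  return result;
}

int sum(const std::vector<int> &values)
{
  int total = 0;
  for(std::size_t i = 0; i < values.size(); i++)
  {
    total += values[i];
  }
  return total;
}

// The top-level function is now a readable one-line composition,
// and sum() and filterPositive() can each be reused elsewhere.
int sumOfPositives(const std::vector<int> &values)
{
  return sum(filterPositive(values));
}
```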
Break Complex Equations into Smaller Pieces
Complex equations containing many operators should be broken down into smaller pieces, with suitably named local variables to describe the individual portions of the code. This not only makes the code more understandable (self-documented), but it is also easier to analyse the values of specific parts of the equation during debugging.
If only locally defined variables are used then writing code in this way is NOT less efficient - the code will run just as fast. However, it is important to constrain the scope of these local variables, which is achieved when you code in a modular fashion. To further aid the compiler you can encapsulate the locally used variables with { }.
// poor
double num = (A * 2 * cos(w * t)) * sin(k * x) * cosh(k * d)
             + 2 * B * sin(k * x - w * t);

// better
...
double num;
{
  double Amp_A  = A * 2 * cos(w * t);
  double Wave_A = Amp_A * sin(k * x) * cosh(k * d);
  double Amp_B  = B * 2;
  double Wave_B = Amp_B * sin(k * x - w * t);
  num = Wave_A + Wave_B;
}
...
Parenthesis
It is generally a good idea to use parentheses liberally in expressions involving mixed operators to avoid operator precedence problems. Even if the operator precedence seems clear to you, it might not be to others - you should not assume that other programmers know precedence as well as you do.
if(a == b && c == d)     // avoid
if((a == b) && (c == d)) // good

x >= 0 ? x : -x;         // avoid
(x >= 0) ? x : -x;       // good
Flow Control Statements
There are many styles for formatting control statements, so consistency becomes very important. Multiple styles within one file or set of files is at best distracting and at worst error-prone.
- Statements must never be included on the same line as the control expression. By putting statements on the same line as the control expression it can cause confusion when tracing control flow with most debuggers.
- The flow control primitives if, else, while, for and do should usually be followed by a block, especially when the executed statement is fairly complicated. This also makes it easier to add statements without accidentally introducing bugs due to forgotten braces.
// if-else
if( expression )
{
  statements
}
else if( expression )
{
  statements
}
else
{
  statements
}

// for loop
for( expression1; expression2; expression3 )
{
  statements
}

// while
while( expression )
{
  statements
}

// do-while
do
{
  statements
} while( expression );
Switch Statements
- If a break is not used at the end of a case and the code “falls through”, a comment should mention that the code intentionally falls through to the next case.
- A break should be used at the end of the last case, even though it may not be explicitly necessary. This prevents an unintentional fall-through error if another case is added later.
- The ‘default’ case should normally appear as the last case, unless it falls through to another case.
- Try to indent all the statements so they start at the same column.
switch( expression )
{
  case const1:
    statements
    break;

  case const2:
    statements
    break;

  case const3:
    statements
    // falls through

  default:
    statements
    break;
}
Breaking Within Loops
Breaks within loops should be treated as normal statements. However, it is recommended that the break be commented.
for( propID = 0; propID < MAX_PROP_ID; propID++ )
{
  while( m_eventLoss[propID] < MAX_PROP_ID )
  {
    if(..)
      break;  // from while m_eventLoss[propID]
  }
}
Goto Statements - Don't Use!
Use goto sparingly, since it breaks the structured flow of the program and can make code difficult to follow. Two relatively harmless places to use a goto are when breaking out of multilevel loops, or to jump to a common error-catching location. For example:
for(...)
{
  while(...)
  {
    ...
    if(disaster)
      goto error;
  }
}
...

//*** ERROR CATCHING ****************************
//*** Fixing the mess after disaster happened ***
error:
  ...
// *********************
Exception Handling
How to handle exceptions depends a lot on the application (console, windows application, etc) and a complete treatment is beyond the scope for this document. However, the basic guide line for all exception handling is:
A function that finds a problem it cannot cope with should throw or re-throw an exception, hoping that its (direct or indirect) caller can handle the problem. A function that wants to handle that kind of problem should indicate that it is willing to catch that exception.
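A minimal sketch of that guideline (the function names are invented for this example): the low-level function throws because it cannot cope, and a higher-level caller that knows what to do catches:

```cpp
#include <stdexcept>
#include <string>

// A function that finds a problem it cannot cope with throws...
double divide(double numerator, double denominator)
{
  if(denominator == 0.0)
  {
    throw std::runtime_error("division by zero");
  }
  return numerator / denominator;
}

// ...and a caller that can handle that kind of problem catches it.
std::string safeDivideReport(double a, double b)
{
  try
  {
    divide(a, b);
    return "ok";
  }
  catch(const std::runtime_error &)
  {
    return "error";
  }
}
```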
Error Catching - assert and verify
An assertion statement specifies a condition that you expect to hold true at some particular point in your program. If that condition does not hold true, the assertion fails, execution of your program is interrupted, and the Assertion Failed dialog box appears.
The key feature of the assert statement is that it is only included in 'Debug' executable code, with these statements being automatically removed for Release code. As such, assert only slows down Debug executables, but has no ill effect (either speed or size) on the Release code.
Through the liberal use of assertions in your code, you can catch many errors during development. A good rule is to write an assertion for every assumption you make. If you assume that an argument is not NULL for example, use an assertion statement to check for that assumption. In addition, the readability of code can be greatly improved through their use, as the reader can easily see what these assumptions are.
#include <assert.h>

void add(double** data, int N)
{
  assert(data != NULL);
  assert(N < MAXSIZE);
  ...
}
Because assert expressions are not evaluated in the 'Release' version of your program, you need to make sure that assertions do not have side effects.
#include <assert.h>

// Never do this
assert(nM++ > 0);
// At this point, nM will have different values
// in the debug and release versions of the code.

// Better
assert(nM > 0);
nM++;

// Unsafe
assert(myFunc() > 0);  // myFunc() might have side effects.
In MFC, there is a VERIFY macro that can be used instead of ASSERT. VERIFY evaluates the expression but does not check the result in the Release version; therefore there is an overhead related to the evaluation of the expression. We feel VERIFY adds confusion to the code, so we strongly discourage its use. Expressions intended to be in the release code should always be evaluated as part of the regular code.
Naming Conventions
The appropriate naming of identifiers is important for a variety of reasons. If clear and meaningful identifiers are used, then it aids in the understandability of the code, and also reduces the need to provide overly detailed comments. Consistency of naming also aids in the ability of developers to read and understand the code. By applying consistent conventions to naming, users of the classes will become familiar with the style and begin to "expect" all classes to follow the same style.
Apply these general rules when naming identifiers:
- Choose names that suggest the usage.
- Do not use names that differ only by the use of uppercase and lowercase letters.
- Do not use names that include abbreviations that are not generally accepted.
- Do not use underscores to separate words or begin an identifier; they are allowed as separators for static finals, and a specific exception for class member variables.
- Do not use dollar sign ‘$’ characters in identifiers.
- It is recommended that identifiers not be extremely long (over 32 characters).
Overloading Functions
Overloading functions can be a powerful tool for creating a family of related functions that only differ in the type of data provided as arguments. If not used properly (such as using functions with the same name for different purposes) they can, however, cause considerable confusion. When overloading functions, all variations should have the same semantics (be used for the same purpose).
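A small sketch of well-behaved overloading (the clamp functions are invented for this example): every overload does the same conceptual job, limiting a value to a range, and only the argument types differ:

```cpp
// All overloads of clamp() have the same semantics: limit a value to
// the range [low, high]. Only the argument types differ.
int clamp(int value, int low, int high)
{
  if(value < low)  return low;
  if(value > high) return high;
  return value;
}

double clamp(double value, double low, double high)
{
  if(value < low)  return low;
  if(value > high) return high;
  return value;
}
```

An overload of clamp() that, say, wrapped the value around instead of limiting it would be the kind of misuse the paragraph above warns against.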
Effective Commenting
Comment on your code in a manner which is compact and easy to find, but do not over-comment. Commenting every line of code may seem helpful, but it is not. Make your comments count; they should clarify the code, not complicate it. Remember that by properly choosing names for variables, functions, and classes and by properly structuring the code, there is less need for comments within the code. It is always more effective to choose meaningful variable names than to use a vague name with a good comment. Source code should be virtually self-documenting.
- In general, comments should be used to give overviews of code and provide additional information that is not present in, or clear from, the code itself.
- Comments should contain only information that is relevant to reading and understanding the program. For example, information about how the corresponding package is built or in what directory it resides should not be included as a comment.
- Commenting of nontrivial or non-obvious design decisions is appropriate.
- Avoid creating comments that are likely to get out of date as the code evolves.
Implementation Comments
Implementation comments are delimited by /*...*/ and //. Implementation comments can be used for commenting out lines of source code or commenting on the particular implementation. These comments are to be read by developers reading your source code. Four styles of implementation comments can be used: block, single-line, trailing, and don't-compile.
Block Comments
Block comments are used to provide descriptions of files, data structures, compound statement and algorithms. Block comments should appear before the code they are commenting. Block comments should be indented to the same level as the code they describe. To set it apart from the rest of the code, a blank line should precede a block comment.
int daysInYear()
{
    int days;

    /*
     * This loop will calculate the number of days in
     * the year.
     */
    for( days = 1; days <= 365; days++ )
    {
    }

    return days;
}
Leading Comments
Leading comments can appear on a single line indented to the level of the code that follows. If a comment can't be written in a single line, it should follow the block comment format. We recommend that a leading comment be preceded by a blank line and followed by an additional // to separate it from the code that follows. Here's an example of a leading comment:
int daysInYear()
{
    int days;

    // This loop will calculate the days in the year
    //
    for( days = 1; days <= 365; days++ )
    {
    }

    return days;
}
Trailing Comments
Very short comments can appear on the same line as the code they describe, but should be shifted far enough to separate them from their corresponding statements. If more than one short comment appears in a chunk of code, they should all be indented to the same tab setting.
if(a == 2)
{
    return TRUE;        // special case
}
else
{
    return isPrime(a);  // works only for odd a
}
Don't Compile Comments
Comment delimiters can also be used to comment out a line, partial line, or multiple consecutive lines of code. The use of the // comment style is recommended when you're commenting out several lines of code, but not more. Code commented out with /* */ style comments can cause problems, particularly if another developer attempts to nest /* */ style comments (not all compilers support nested comments). For this reason, if you need to comment out a large area of code, we recommend using #if 0 ... #endif.
// good
if(foo > 1)
{
    int x = 5;
    // x++;
    ...
    return x;
}
//else
//{
//    return false;
//}

// not recommended - be careful
/*if(bar > 1)
{
    bar++;
}*/

// bad - nesting not supported on all compilers
/*while(x > 1)
{
    /* if(bar<1)
    {
        bar++;
    } */
}*/

// good
#if 0
for(int i=1; i<10; i++)
{
    // lots of code
}
#endif
TODO's
Changes that need to be made, or even suggestions for further work, should be placed on single-line comments starting with "//TODO:". Both Microsoft Development Studio and code documentation programs can detect these comments, so they provide a useful reminder of changes that are still needed in the current development and can also be used to provide recommendations for future work.
//! \todo: Rewrite the fred class
class fred
{
public:
    //!< \todo: The following variables should be declared as private
    int size;  //!< \todo: Need to rename this to m_size;
};
Code Revisions
Revisions of code must be clearly marked. To identify the contributions from each author we suggest:
for(i=0; i<iNoSim; i++)
{
    ...
    // ***** Version v.nn by Author on dd/mm/yy - START
    ...
    // ***** Version v.nn by Author on dd/mm/yy - END
    ...
}
where 'v' is the version number and 'nn' is the incremental developments within that version; 'Author' is the author of the revision; and 'dd/mm/yy' is the date the revision was made.
Because such code revision marks make the code less readable, it is suggested that they are removed at regular intervals. For example, whenever the version number is increased, all revision comments should be removed; alternatively, remove revision marks more than a year old.
Toro: synchronization primitives for Tornado coroutines
I took a break from Motor to make a new package "Toro": queues, semaphores, locks, and so on for Tornado coroutines. (The name "Toro" is from "Tornado" and "Coro".)
Why would you need something like this, especially since Tornado apps are usually single-threaded? Well, with Tornado's gen module you can turn Python generators into full-featured coroutines, but coordination among these coroutines is difficult. If one coroutine wants exclusive access to a resource, how can it notify other coroutines to proceed once it's finished? How do you allow N coroutines, but no more than N, access a resource at once? How do you start a set of coroutines and end your program when the last completes?
Each of these problems can be solved individually, but Toro's classes generalize the solutions. Toro provides to Tornado coroutines a set of locking primitives and queues analogous to those that Gevent provides to Greenlets, or that the standard library provides to threads.
Here's a producer-consumer example with a toro.Queue:
from tornado import ioloop, gen
import toro

q = toro.JoinableQueue(maxsize=3)

@gen.engine
def consumer():
    while True:
        item = yield gen.Task(q.get)
        try:
            print 'Doing work on', item
        finally:
            q.task_done()

@gen.engine
def producer():
    for item in range(10):
        yield gen.Task(q.put, item)

if __name__ == '__main__':
    producer()
    consumer()
    loop = ioloop.IOLoop.instance()
    q.join(callback=loop.stop)  # block until all tasks are done
    loop.start()
More examples are in the docs: graceful shutdown using Toro's Lock, a caching proxy server with Event, and a web spider with Queue.
Toro logo by Musho Rodney Alan Greenblat | https://emptysqua.re/blog/toro-synchronization-primitives-for-tornado-coroutines/ | CC-MAIN-2019-18 | refinedweb | 281 | 57.27 |
We have a use case where customers give us large numbers of postal addresses (Canada, US and Europe), and we need to sit and examine these addresses to ascertain the logistics of covering these areas from a support perspective (emergency support, etc).
Sometimes we get these addresses as KML/KMZ files. About half or more of the time, we get these addresses as Excel files.
We have one guy who loads these into MS Streets and Maps and produces a map with push-pin markers in them. But, we cannot get approval to purchase this software for everyone who would like to generate a map like this.
From reading the forums and spending some time on the software, I don't see a way to bulk import postal addresses to generate this kind of map in OpenStreetMap. This can be done...can't it?
asked 13 Aug '13, 15:47 by Alta-Mapper1
If you want a professional solution from a company that seems to be quite up to date in visualizing data and map graphics, have a look at Mapbox. They are only one option among others; see Commercial_OSM_Software_and_Services.
Or try frameworks like CartoDB or kartograph.org
In general, you can display maps as tile graphics via leaflet or openlayers ... I assume (not sure) they have some example code to display markers loaded from a CSV file. But maybe you would have to process too many markers?
Tell us about success or failure.
answered 13 Aug '13, 17:06 by stephan75
edited 13 Aug '13, 17:15 by SomeoneElse ♦
There is an OpenData plugin for JOSM, the Java-based editor. This plugin allows you to import CSV files with longitude and latitude, in different projections.
I assume the data does not have the right license to be uploaded to the central OSM database. but from JOSM, you can write a GeoJSON file which can be displayed with Leaflet or OpenLayers (links see stephan75's answer)
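As a rough illustration of the end format (this sketch is mine, not part of the answer — the field names and coordinates are invented), a geocoded CSV can be converted to the kind of GeoJSON that Leaflet or OpenLayers can display with a few lines of Python:

```python
import csv, json, io

# Hypothetical, already-geocoded input; one row per address.
rows = io.StringIO("name,lat,lon\nOffice A,45.42,-75.69\nOffice B,43.65,-79.38\n")

features = []
for row in csv.DictReader(rows):
    features.append({
        "type": "Feature",
        "geometry": {
            "type": "Point",
            # GeoJSON stores coordinates as [longitude, latitude]
            "coordinates": [float(row["lon"]), float(row["lat"])],
        },
        "properties": {"name": row["name"]},
    })

geojson = {"type": "FeatureCollection", "features": features}
print(json.dumps(geojson, indent=2))
```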
answered 13 Aug '13, 17:19 by escada
edited 13 Aug '13, 17:20
There are many solutions for creating maps with push-pins, but if your Excel files don't include longitude and latitude, then your main problem is geocoding.
Try googling "batch geocoding"
answered 13 Aug '13, 18:36
EventSource
A simple Swift event source for fun and profit
EventSource is based on the EventSource Web API to enable Server Sent Events.
If you're unfamiliar with this one-way streaming protocol - Start Here.
Under the hood, EventSource is built on top of NSURLSession and has zero third-party dependencies.
Enjoy!
Usage
An Event looks like this

struct Event {
    let readyState: EventSourceState // The EventSourceState at the time of the event's creation
    let id: String?
    let name: String?
    let data: String?
    let error: NSError?
}
You create an EventSource with an NSURL

import EventSource

let url = NSURL(string: "")!
let eventSource = EventSource(url: url)
Opening and closing the connection
eventSource.open()
eventSource.close()
Adding standard event handlers
eventSource.onOpen { event in
    debugPrint(event)
}

eventSource.onMessage { event in
    debugPrint(event)
}

eventSource.onClose { event in
    debugPrint(event)
}

eventSource.onError { event in
    debugPrint(event)
}
Adding named event handlers
eventSource.addHandler("tweet.create") { event in
    debugPrint(event.data)
}
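For reference, this is roughly what a named event looks like on the wire, per the SSE protocol (the field values here are made up):

```
id: 42
event: tweet.create
data: {"text": "hello"}

```

Each event ends with a blank line, and the `event:` field is what maps to the handler name above.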
Example
In the Example directory, you'll find the Server and EventSourceExample directories. The Server directory contains a simple python server that sends events to any connected clients, and the EventSourceExample directory contains a simple iOS app to display recent events from that server.
Server Setup
The server uses Redis to setup pub / sub channels, and it uses Flask deployed with Gunicorn to serve events to connected clients.
Install the following packages to run the simple python server
brew install redis
pip install flask redis gevent gunicorn
Start redis and deploy the server (in two separate terminal tabs)
redis-server
gunicorn --worker-class=gevent -b 0.0.0.0:8000 app:app
Client Setup
Open the EventSourceExample Xcode project and run the app in the simulator
Tap the "Open" button in the app to open a connection to the server
Sending Events
Now you can visit in your browser to start sending events
Demo
If all goes well, you should get a nice stream of events in your simulator
Heads Up
API Decisions
EventSource deviates slightly from the Web API where it made sense for a better iOS API. For example, an Event has a name property so you can subscribe to specific, named events like tweet.create. This is in lieu of the Web API's event property of an Event (because who wants to write let event = event.event? Not me... 😞).
Auto-Reconnect
An EventSource will automatically reconnect to the server if it enters an Error state, and based on the protocol, a server can send a retry event with an interval indicating how frequently the EventSource should retry the connection after encountering an error. Be warned: an EventSource expects this interval to be in seconds - not milliseconds as described by the Web API.
Installation
Carthage
Add the following line to your Cartfile.
github "christianbator/EventSource"
Then run
carthage update --platform iOS
Releases
v0.2-alpha - Nov 22, 2016
- Swift 3 support
v0.1-alpha - Aug 30, 2016
- Initial pre-release with basic event source functionality | https://swiftpack.co/package/christianbator/EventSource | CC-MAIN-2018-39 | refinedweb | 503 | 63.19 |
iLoaderContext Struct Reference

This interface gives the context for the loader.
#include <imap/ldrctxt.h>
Detailed Description

This interface gives the context for the loader.

Definition at line 51 of file ldrctxt.h.
Member Function Documentation
Return true if we check for dupes (to avoid objects with the same name being loaded again).
Return true if we only want to look for objects in the region given by GetRegion().
Find a light.
Find a texture.
If not found, attempt to load and prepare the texture using the supplied filename as the name.
Return a region if we only want to load in that region; 0 otherwise. If not 0, then all objects will be created in the region.
The documentation for this struct was generated from the following file: imap/ldrctxt.h
Generated for Crystal Space 1.2.1 by doxygen 1.5.3 | http://www.crystalspace3d.org/docs/online/api-1.2/structiLoaderContext.html | CC-MAIN-2016-30 | refinedweb | 129 | 68.67 |
Hello,

I'm trying to open a window on the image folder with the MGFetch library. I have no problem on Symbian^3, but on Symbian^1, when I include:

#include <MGFetch.h>

I get this error during building:
No rule to make target `\QtSDK\Symbian\SDKs\Symbian1Qt473\epoc32\release\armv5\LIB\mgfetch.dso', needed by `\QtSDK\Symbian\SDKs\Symbian1Qt473\epoc32\release\gcce\udeb\fichierSym1_Essai2.exe'. Stop.
Is it possible to use MGFetch on Symbian^1?

I created a new project with just this include, and it's not working. I added this line to the .pro file:
symbian:LIBS += -lmgfetch
If it's not possible to use it on Symbian^1, what else can I use to get an image gallery?
Thanks | http://developer.nokia.com/community/discussion/showthread.php/227832-QML-Photo-Gallery-symbian-1 | CC-MAIN-2014-15 | refinedweb | 119 | 56.25 |
You might have learnt union of subsets in mathematics, similarly, we have union types in Scala 3.0 as well. Let’s see, how can we use union types while programming in scala:
What is Union?
The union of two things under consideration is the collection of elements which can belong to either of them, or to both. Let's understand it with respect to set theory:
Set s1 = {2, 3, 5}, set of first three primes
Set s2 = {2, 4, 6}, set of first three evens
Now, union of s1 and s2 will be:
Set s = {2, 3, 5, 4, 6}
What we can infer from it is:
s (the union of s1 and s2) is a set whose elements belong to s1 (2, 3, 5) or s2 (2, 4, 6) or both (2), i.e. 's' is a collection of prime numbers, even numbers, and the even prime number.
Now, we can see what it means in terms of scala 3.0:
Consider the above-mentioned sets as types and their elements as their members. So, here's what we get to know about s (the union of type s1 and type s2):
s is a type which is s1 or s2 or s1 and s2 both at any given point in time.
Complete example would look something like given below:
trait LivingThing
trait NonMotile
Now, if we have a method isPlant() which takes in an object which can be LivingThing or NonMotile or both LivingThing and NonMotile, then it would be like:
def isPlant(obj: LivingThing | NonMotile): Boolean = {
  if (obj.isInstanceOf[LivingThing & NonMotile]) true
  else false
}
Here, in the method signature we have specified that this method takes a parameter which is LivingThing or NonMotile, or both. The | operator is available in Scala 3.0 and can only be compiled by the dotc compiler. So, a sample class whose object can be passed to isPlant() for successful execution is:
class Tree extends LivingThing with NonMotile  // isPlant(new Tree) returns true
class Human extends LivingThing                // isPlant(new Human) returns false
class Furniture extends NonMotile              // isPlant(new Furniture) returns false
So, in the isPlant() method, "obj" should be a sub-type of LivingThing or NonMotile, or of both, for it to execute successfully.
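Union types also pair naturally with pattern matching. Here is a small sketch of my own (not from the original post) that branches on which member of the union it received:

```scala
def describe(x: Int | String): String = x match {
  case i: Int    => s"got the number $i"    // x was the Int side of the union
  case s: String => s"got the text $s"      // x was the String side of the union
}

// describe(1) returns "got the number 1"
// describe("one") returns "got the text one"
```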
I hope you now understand what union types in Scala 3.0 (Dotty) are all about. For more details, visit the official documentation.
Thank you. | https://blog.knoldus.com/union-types-scala-3-0/ | CC-MAIN-2021-43 | refinedweb | 397 | 65.86 |
The product function is one of several handy combinatoric iterators included in the itertools module. Combinatoric iterators are related to an area of mathematics called enumerative combinatorics, which is concerned with the number of ways a given pattern can be formed.
product gives us a way to quickly calculate the cartesian product of two or more collections. We'll see an example of the cartesian product below.
product also frequently serves as an alternative approach to having multiple for clauses inside list comprehensions.
In a previous post, we provided code for a list comprehension that would calculate all the possible roll combinations for two six-sided dice.
roll_combinations = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]
We can do very much the same thing using the product function.

from itertools import product

dice_combinations = product(range(1, 7), repeat=2)
So what's going on here?
The product function accepts any number of iterables as positional arguments, and has an optional keyword-only parameter called repeat.
When we provide two or more iterables as arguments, the
product function will find all the ways we can match an element from one of these iterables to an item in every other iterable. For example, we might have a pair of lists like so:
list_1 = ["a", "b", "c"]
list_2 = [1, 2, 3]
When we pass these lists to the product function, we get the following:
cartesian_product = product(list_1, list_2)

# ('a', 1) ('a', 2) ('a', 3) ('b', 1) ('b', 2) ('b', 3) ('c', 1) ('c', 2) ('c', 3)
If we were to add a third iterable, every one of these tuples would be matched up to an item in this third iterable. For example, if we had a third list containing "x", "y", and "z", we would get output like this:
# ('a', 1, 'x') ('a', 1, 'y') ('a', 1, 'z') ('a', 2, 'x') ... etc.
The repeat parameter is most useful for when we want to use the same iterable multiple times. We can see an example of this in our code for finding roll combinations. We can easily add more and more dice by increasing the value of repeat.
If we set a repeat value of 2 or more when we have multiple iterables, product will duplicate all of the iterables for the purposes of finding the cartesian product. The following functions are identical in terms of functionality:
c_product_1 = product(["a", "b", "c"], [1, 2, 3], repeat=2)
c_product_2 = product(["a", "b", "c"], [1, 2, 3], ["a", "b", "c"], [1, 2, 3])
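To make repeat concrete, here's a small extra computation of my own (not from the original post) that counts how many two-dice rolls sum to 7:

```python
from itertools import product

# All 36 ordered (d1, d2) pairs for two six-sided dice.
rolls = list(product(range(1, 7), repeat=2))
sevens = [roll for roll in rolls if sum(roll) == 7]

print(len(rolls))   # 36
print(len(sevens))  # 6 -> (1,6), (2,5), (3,4), (4,3), (5,2), (6,1)
```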
That just about wraps up our introduction to the itertools product function. I hope you learnt something new, and I encourage you to play around with the things we've covered here to really understand how it all works.
We release new snippet posts every Monday, and something a little more substantial on Thursdays, but just in case you forget, you might want to follow us on Twitter to keep up to date with all our content. Next Monday we'll be covering some other cool
itertools functions, so make sure to check back next week!
We'd also love to hear about cool tips and tricks you think we should write about, so get in touch! | https://blog.tecladocode.com/python-snippet-8-itertools-part-1-product/ | CC-MAIN-2019-26 | refinedweb | 540 | 53.24 |
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 10.16, "How to Combine map and flatten with flatMap".
Problem
When you first come to Scala from an object-oriented programming background, the flatMap method can seem very foreign, so you'd like to understand how to use it and see where it can be applied.
Solution
Use flatMap in situations where you run map followed by flatten. The specific situation is this:

- You're using map (or a for/yield expression) to create a new collection from an existing collection.
- The resulting collection is a list of lists.
- You call flatten immediately after map (or the for/yield expression).

When you find yourself in this situation, you can use flatMap instead. For example, given a list of strings named bag, and a toInt method that converts each string to an Option[Int], calling map and then flatten yields the successfully converted elements:
scala> bag.map(toInt).flatten
res1: List[Int] = List(1, 2, 4)
This makes finding the sum easy:
scala> bag.map(toInt).flatten.sum
res2: Int = 7

Discussion

Imagine you have a method named subWords that returns a list of sub-words from a word you give it. Skipping the implementation for a moment, if you call the method with the string then, it should work as follows:
scala> subWords("then")
res0: List[String] = List(then, hen, the)
subWords should also return the string he, but it's in beta.
With that method working (mostly), here's the implementation of subWords:
def subWords(word: String) = List(word, word.tail, word.take(word.length-1))
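With subWords in hand, the recipe's point can be sketched like this (my own illustration, not verbatim from the book): mapping subWords over a list of words yields a list of lists, and flatMap collapses the two steps into one:

```scala
val words = List("band", "start")

words.map(subWords).flatten  // List(band, and, ban, start, tart, star)
words.flatMap(subWords)      // same result, in a single step
```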
See Also
- My collection of Scala flatMap examples
- Recipe 20.6, "How to use Scala's Option/Some/None pattern", shows another flatMap example.
The Scala Cookbook
This tutorial is sponsored by the Scala Cookbook, which I wrote for O’Reilly:
You can find the Scala Cookbook at these locations:
First, check to see if your computer already has gcc installed. Open up a terminal and type
gcc -v
If you see a message like
-bash: gcc: command not found
then you will need to install a compiler. Consult your distribution's documentation (or ask me) how to get gcc up and running. Otherwise, you will see some output like this (this output is from my Mac, yours will look different -- the important thing is that the command runs).
Using built-in specs.
Target: i686-apple-darwin10
Configured with: /var/tmp/gcc/gcc-5664~105)
There are many text editors available for most distributions, but gedit is a nice simple one. Other than that, there are the two old standbys, vi (vim) and Emacs.
Create a folder for your programs in your home directory.
#include <stdio.h>

int main(int argc, char** argv)
{
    printf("Hello, world!\n");
    return 0;
}
Now it's time to compile your program. Open up a terminal and type in (substituting whatever path you chose to put the source file in):
cd ~. | http://www.swarthmore.edu/NatSci/mzucker1/e15/c-instructions-linux.html | CC-MAIN-2013-20 | refinedweb | 176 | 73.47 |
Hello,
In order to do a muzzle flash effect for my FPS, I need to activate a parent gameobject that contains two muzzle textures and a spotlight, and deactivate it just after, in a very short period of time, so that it gives this "flash" effect.
The problem is that sometimes when I fire, no muzzle flash appears, or only one of the two, etc. It's like the period of time is too short for the engine to activate everything it has to.

So I don't know how to do this muzzle flash effect. The only solution I found is to increase the period of time, but when the muzzle stays 0.5 s on the screen, it's way too long...

Here is my activation/deactivation code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class WeaponRecoil : MonoBehaviour {
public float muzzleDuration = 0.1f;
public GameObject muzzFlashes;
// Use this for initialization
void Start () {
}
// Update is called once per frame
void Update () {
if(Input.GetMouseButtonDown(0)) {
muzzFlashes.SetActive(true);
StartCoroutine(deactivateMuzzle());
}
}
IEnumerator deactivateMuzzle()
{
yield return new WaitForSeconds(muzzleDuration);
muzzFlashes.SetActive(false);
}
}
Thank you.
Just try doing an animation and set it off with a trigger. While animating, you will be able to set values as you want, and set the speed of the animation. It doesn't work as-is since you are using WaitForSeconds, and 0.1f seconds is just too little to make it work. You can also test with WaitForEndOfFrame.
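A minimal sketch of that animation-based approach (the component layout and the "Flash" trigger name here are invented for illustration):

```csharp
using UnityEngine;

public class MuzzleFlashAnimator : MonoBehaviour
{
    // Animator with a short clip that enables the flash objects,
    // holds them for a few frames, then disables them again.
    public Animator animator;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            animator.SetTrigger("Flash");
        }
    }
}
```

Because the timing lives in the animation clip, the flash duration is tuned in the editor instead of with WaitForSeconds.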
Thank you for your answer. I just tried to animate and that seems a wonderful idea. Thanks !!
Glad to help. If it really helped you in this situation then mark my answer as correct please so people looking for similar problems find their answers.
Answer by Vega4Life · Dec 27, 2018 at 02:20 PM
I think disabling the particle isn't the right way to handle this situation (or any particle, for the most part). One of the reasons why is what you are encountering now - trying to time it correctly with an outside timer. This goes against the particle having its own timers, etc. The other reason is that if you just disable a particle, it's an instant stop. Most times this looks really silly. Imagine disabling a fire particle - it won't look realistic. In those cases you would want to disable the emission so it lets the currently alive particles finish.
I have a game that uses a muzzle flash also. It isn't tied with any outside timers or fire rates of my weapons. The most important thing is I just reset and play the particle when the weapon is fire. Using below:
// This is called every time I shoot
public void EnableFX(bool enable)
{
if (enable)
{
particle.Clear(); // This essentially resets the particle
particle.Play(); // This plays particle
}
}
You just need a reference to the particle and the above works. You don't need a timer or anything like that. The particle will always play, but can interrupt itself (which is fine if a weapon is firing super fast). I'll also post an image of my particle settings. The important thing on it is the simulation speed. Mine is set to 2, so that I get a super fast particle. Hope this info helps.
BTW, I just recalled that I had a similar discussion a few days ago here
I saw this, read through it for inspiration
I thought it would be declared in the header file for the library? Could you explain what you mean by
Thanks
See my edit above.
The Wiring preprocessor helps non-C/C++ people (when it works correctly) by adding "forgotten" forward declarations/prototypes of functions that will be implemented after their first use (and it also includes Particle.h).
But sometimes - like here - it doesn’t do its job well.
I guess the output of the preprocessor of your code looks something like this - which would explain the error message
// --------------- this part was added by the Wiring preprocessor ------------------
// some comments (to account for the error occurring on line 6)
#include "Particle.h"
// note: at this point DS18B20.h wasn't included yet
void getTemp(DS18B20 sensor);
// --------------- this part was added by the Wiring preprocessor ------------------

// This #include statement was automatically added by the Particle IDE.
#include "Particle-OneWire.h"

// This #include statement was automatically added by the Particle IDE.
#include "DS18B20.h"

SYSTEM_MODE(SEMI_AUTOMATIC); //comment this in when debugging - prevents Particle connecting to 3G network

int sensorPin = D6;
DS18B20 eBayTemp = DS18B20(sensorPin); //Sets Pin for Water Temp Sensor
int led = D7;
char szInfo[64];
float pubTemp;
double celsius;
...
@ScruffR You’re a genius, thanks for your help! My strengths lie in Java so migrating to C/C++ will happen with hiccups I’m sure!
@joearkay, tiny little thing… In setup() you call
Particle.syncTime(); but you also have
SYSTEM_MODE(SEMI_AUTOMATIC);. The cloud won’t be connected to sync the time!
I just throw
SYSTEM_MODE(SEMI_AUTOMATIC);. into the Particle when I’m serially debugging, so I don’t have to wait for the Particle to connect to my patchy 3G GSM signal
@joearkay, FYI, when you run
SYSTEM_MODE(AUTOMATIC); which is the default mode, time is synched automatically when the cloud connection is established. However, it may take a few seconds during which the RTC on the Photon will be incorrect if power has been removed and the RTC is wiped. You can do a simple test to see if the time is valid by checking that
Time.year() > 1970. I usually run this code to do the check:
while (Time.year() <= 1970) { delay(10); } Note that the delay(10) will call Particle.process() in the background.
Ah I like that. the reason I was calling
Particle.syncTime(); originally was due to the face that my first message would always send with a time stamp that was pre-epoch!
Guys…you know when you have been looking at the same piece of code for days and you can’t work out what doesn’t work?
I’m running this modified version of the example again, and for some odd reason, half of the values return “0.00”…
Here is the code:
#include "Particle-OneWire.h"
#include "DS18B20.h"

double getTemp(DS18B20 sensor);

SYSTEM_MODE(SEMI_AUTOMATIC); //comment this in when debugging - prevents Particle connecting to 3G network

int genericPin = D6;
int loxonePin = D5;
DS18B20 generic = DS18B20(genericPin);
DS18B20 loxone = DS18B20(loxonePin);
char szInfo[64];
int DS18B20nextSampleTime;
int DS18B20_SAMPLE_INTERVAL = 20000; //2500 default

void setup() {
    Time.zone(-5);
    Particle.syncTime();
    pinMode(genericPin, INPUT);
    pinMode(loxonePin, INPUT);
    Serial.begin(115200);
}

void loop() {
    //if (millis() > DS18B20nextSampleTime){
    sprintf(szInfo, "generic: %2.2f", getTemp(generic));
    Serial.println(szInfo);
    sprintf(szInfo, "loxone: %2.2f", getTemp(loxone));
    Serial.println(szInfo);
    delay(10000);
    // }
}

double getTemp(DS18B20 sensor){
    int dsAttempts = 0;
    double celsius;
    if(!sensor.search()){
        sensor.resetsearch();
        celsius = sensor.getTemperature();
        continue; //return celsius;
    }
    dsAttempts = 0;
    //DS18B20nextSampleTime = millis() + DS18B20_SAMPLE_INTERVAL;
    //Serial.println(fahrenheit);
    }
    return celsius;
}
And here is a sample from the serial monitor:
Opening serial monitor for com port: "COM7"
generic: 23.25
loxone: 24.00
generic: 0.00
loxone: 0.00
generic: 23.25
loxone: 24.00
generic: 0.00
loxone: 0.00
generic: 23.25
loxone: 24.06
generic: 0.00
loxone: 0.00
I’m sure I’ll feel like an idiot when someone tells me what I’m doing wrong, but I’m at the tipping point!!
Cheers
You might want to put back the debug prints inside that getTemp().
@ScruffR I’ve had those uncommented for a bit. All I noticed is the ‘bad attempts’ at the first call to this, as the probes are just stablising after boot up. But I still can’t figure out for the life of me why the ‘celsius’ value isn’t being placed into the ‘celsius’ variable every OTHER call.
I guess this is the reason:

double celsius;
if(!sensor.search()) {
    ...
}
return celsius;
@ScruffR: The original code wrote to a global variable, ‘celsius’, but because I’m creating multiple instances of the DS18B20, I can’t do that, so i had to take this approach. I’ve simply replaced the ‘Serial.print()’ statements with ‘return’ statements, to get the celsius value back, but this is obviously not working…
I guess your sensor.search() can't find the sensor and hence will not return a valid temperature.
To test this theory, you can just declare double celsius = 123.45; and see if that is coming back now instead of 0.0.
I’d acutally expect random values to come back, since automatic variables aren’t initialzed by default, but maybe a previous call leaves a 0.0 on the stack.
That’s what I thought with regards to the ‘0.0’! Iv’e changed it and still ‘0.0’ back! Crazy times! Perhaps this library isn’t written in such a way that I can do this!
My next guess (which would be easier to confirm with the Serial.prints in place) is that the lib actually returns 0.0, but since you don't seem to set the return value to 'NaN' or any other failure value when the while() bails out due to dsAttempts < 4 no longer being true, you can't tell 0.0 from a failure.
Yes. I’ve thrown some Serial.println() statements in to let me know which parts of getTemp() are executing. It seems when the 0.00’ is being returned, it skips everything and returns nothing. I’ll do some more sniffing around!
All I was missing was the final call to getTemperature() before a final return statement! Code below if anyone else is following along in the future!
double getTemp(DS18B20 sensor){
    int dsAttempts = 0;
    double celsius = 123.45;
    if(!sensor.search()){
        sensor.resetsearch();
        celsius = sensor.getTemperature();
        //Serial.print("First call to getTemperature()");
        //Serial.println(celsius);
        celsius = sensor.getTemperature();
        //Serial.print("Second call to getTemperature()");
        //Serial.println(celsius);
        //continue;
        return celsius;
    }
    dsAttempts = 0;
    //DS18B20nextSampleTime = millis() + DS18B20_SAMPLE_INTERVAL;
    //Serial.println(fahrenheit);
    //return celsius;
    celsius = sensor.getTemperature();
    //Serial.println("Reached end of function, returning null probably");
    return celsius;
}
AutoTrie
A versatile library which solves autocompletion in Dart/Flutter. It is based around a space-efficient implementation of Trie which uses variable-length lists. With this, serving auto-suggestions is both fast and no-hassle. Suggestions are also sorted by how often they've been entered and subsorted by recency of entry, for search-engine-like results.
Read more about Trie here.
A Brief Note
It takes time, effort, and mental power to keep this package updated, useful, and improving. If you used or are using the package, I'd appreciate it if you could spare a few dollars to help me continue development.
Usage
A usage example is provided below. Check the API Reference for detailed docs:
import 'dart:io'; // you don't need to import this, it just has the sleep method (which I use here). import 'package:autotrie/autotrie.dart'; void main() { var engine = AutoComplete(engine: SortEngine.configMulti(Duration(seconds: 1), 15, 0.5, 0.5)); //You can also initialize with a starting databank. var interval = Duration(milliseconds: 1); engine.enter('more'); // Enter more thrice. engine.enter('more'); engine.enter('more'); engine.enter('moody'); // Enter moody twice. engine.enter('moody'); engine.enter('morose'); // Enter scattered words (with mo). engine.enter('morty'); sleep(interval); engine.enter('moment'); sleep(interval); engine.enter('momentum'); engine.enter('sorose'); // Enter scattered words (without mo). engine.enter('sorty'); engine.delete('morose'); // Delete morose. // Check if morose is deleted. print('Morose deletion check: ${!engine.contains('morose')}'); // Check if engine is empty. print('Engine emptiness check: ${engine.isEmpty}'); // Suggestions starting with 'mo'. // They've been ranked by frequency and recency. Since they're all so similar // in recency, frequency takes priority. print("'mo' suggestions: ${engine.suggest('mo')}"); // Result: [more, moody, momentum, moment, morty] // Get all entries. // They've not been sorted. print('All entries: ${engine.allEntries}'); // Result: [more, moody, sorty, sorose, momentum, moment, morty] } // Check the API Reference for the latest information and adv. // methods from this class.
Sorting
The AutoComplete constructor takes a SortEngine, which it uses to sort the result of the autocompletion operation. There are a few different modes it can operate in:
- SortEngine.entriesOnly() -> AutoComplete results are only sorted by number of entries in the engine (High to Low)
- SortEngine.msOnly() -> AutoComplete results are only sorted by how much time has passed since their last entry (Low to High)
- SortEngine.simpleMulti() ->
- Sorted using two logistic curves, one for ms and one for entries
- The ms curve is set to use 3 years (a LOT) as the upper end of how far back entries could have been entered
- The entries curve is set to use 30 entries as the max amount of entries
- These values are highly arbitrary and not likely to fit your project; this mode is not recommended unless you are just playing around.
- Takes two weights (one for recency and one for entries) which can be used to balance how heavily each factor should affect the final sorting.
- SortEngine.configMulti() ->
- Sorted using two logistic curves, one for ms and one for entries
- The ms curve is balanced using a parameter (a Duration) for the max time since entry in this engine.
- The entries curve is balanced using a parameter (an int) for the max amount of entries in this engine.
- If you know approximately how old and how big this AutoComplete engine is, it is highly recommended that you use this mode.
- Takes two weights (one for recency and one for entries) which can be used to balance how heavily each factor should affect the final sorting.
Note that all the recency sorting functionality has a granularity of one millisecond. If you add multiple elements to the tree within the span of a single millisecond, they are regarded as functionally equivalent in the recency metric.
Basic File Persistence
AutoComplete is natively capable of writing itself to and reading itself from a file. To do this, persist to a file
using the
persist method (it takes a
File object):
await engine.persist(myFile);
Then you can rebuild using
AutoComplete.fromFile (it takes a
File along with the mandatory
SortEngine):
var engine = AutoComplete.fromFile(file: myFile, engine: SortEngine.entriesOnly());
This persistence will preserve all the metadata (last insert, number of entries) in the table as well as the core data (the Strings themselves).
Hive Integration
- Hive is a speedy, local, and key-value database for Dart/Flutter. Go check it out if you haven't already!
- Hive integration is now available with autotrie:
- Our way of integration uses extension methods.
- Import Hive and AutoTrie, and create a normal Hive box using
Hive.openBox('nameHere').
- You can then call
searchKeys(String prefix)and
searchValues(String prefix)on that box to get auto-suggestions.
- There is no sorting options: only entry-level sorting is available.
Features and bugs
Please file feature requests and bugs at the issue tracker. | https://pub.dev/documentation/autotrie/latest/ | CC-MAIN-2022-27 | refinedweb | 806 | 50.33 |
Use Case - Visual Elements In QML
The Rectangle Type
For the most basic of visuals, Qt Quick provides a Rectangle type to draw rectangles. These rectangles can be colored with a color or a vertical gradient. The Rectangle type can also draw borders on the rectangle.
For drawing custom shapes beyond rectangles, see the Canvas type or display a pre-rendered image using the Image type.
import QtQuick Item { width: 320 height: 480 Rectangle { color: "#272822" width: 320 height: 480 } // This element displays a rectangle with a gradient and a border Rectangle { x: 160 y: 20 width: 100 height: 100 radius: 8 // This gives rounded corners to the Rectangle gradient: Gradient { // This sets a vertical gradient fill GradientStop { position: 0.0; color: "aqua" } GradientStop { position: 1.0; color: "teal" } } border { width: 3; color: "white" } // This sets a 3px wide black border to be drawn } // This rectangle is a plain color with no border Rectangle { x: 40 y: 20 width: 100 height: 100 color: "red" } }
The Image Type
Qt Quick provides an Image type which may be used to display images. The Image type has a source property whose value can be a remote or local URL, or the URL of an image file embedded in a compiled resource file.
// This element displays an image. Because the source is online, it may take some time to fetch Image { x: 40 y: 20 width: 61 height: 73 source: "" }
For more complex images there are other types similar to Image. BorderImage draws an image with grid scaling, suitable for images used as borders. AnimatedImage plays animated .gif and .mng images. AnimatedSprite and SpriteSequence play animations comprised of multiple frames stored adjacently in a non animated image format.
Shared Visual Properties
All visual items provided by Qt Quick are based on the Item type, which provides a common set of attributes for visual items, including opacity and transform attributes.
Opacity and Visibility
The QML object types provided by Qt Quick have built-in support for opacity. Opacity can be animated to allow smooth transitions to or from a transparent state. Visibility can also be managed with the visible property more efficiently, but at the cost of not being able to animate it.
import QtQuick Item { width: 320 height: 480 Rectangle { color: "#272822" width: 320 height: 480 } Item { x: 20 y: 270 width: 200 height: 200 MouseArea { anchors.fill: parent onClicked: topRect.visible = !topRect.visible } Rectangle { x: 20 y: 20 width: 100 height: 100 color: "red" } Rectangle { id: topRect opacity: 0.5 x: 100 y: 100 width: 100 height: 100 color: "blue" } } }
Transforms
Qt Quick types have built-in support for transformations. If you wish to have your visual content rotated or scaled, you can set the Item::rotation or Item::scale property. These can also be animated.
import QtQuick Item { width: 320 height: 480 Rectangle { color: "#272822" width: 320 height: 480 } Rectangle { rotation: 45 // This rotates the Rectangle by 45 degrees x: 20 y: 160 width: 100 height: 100 color: "blue" } Rectangle { scale: 0.8 // This scales the Rectangle down to 80% size x: 160 y: 160 width: 100 height: 100 color: "green" } }
For more complex transformations, see the Item::transform. | https://doc.qt.io/archives/qt-6.1/qtquick-usecase-visual.html | CC-MAIN-2021-43 | refinedweb | 527 | 59.03 |
public class DancingBug extends Bug { private int steps; private int sideLength; /** * Constructs a box bug that traces a square of a given side length * @param length the side length */ public DancingBug(int length) { steps = 0; sideLength = length; } /** * Moves to the next location of the square. */ public void act() { if(steps < sideLength && canMove()) { move(); turn() * length; steps++; } else { turn(); turn(); steps = 0; } } }
/* * AP(r) Computer Science GridWorld Case Study: * Copyright(c) 2005-2006 Cay S. Horstmann () * * java.util.*; import info.gridworld.actor.ActorWorld; import info.gridworld.grid.Location; import java.awt.Color; /** * This class runs a world that contains box bugs. <br /> * This class is not tested on the AP CS A and AB exams. */ public class DancingBugRunner { public static void main(String[] args) { ArrayList Dancing = new ArrayList(); Dancing.add(new DancingBug(5)); Dancing.add(new DancingBug(4)); Dancing.add(new DancingBug(3)); Dancing.add(new DancingBug(2)); Dancing.add(new DancingBug(1)); ActorWorld world = new ActorWorld(); DancingBug alice = new DancingBug(6); alice.setColor(Color.BLUE); alice.setDirection(90); world.add(new Location(0,0), alice); world.show(); } }
i know the thing i did with turn() * length is incorrect, obviously. what i am trying to figure out is how to do something similar to this. if anyone has any ideas they are greatly appreciated. | http://www.dreamincode.net/forums/topic/141133-making-a-method-run-a-certain-amount-of-times/ | CC-MAIN-2016-22 | refinedweb | 213 | 52.05 |
Wrap Plus
Enhanced "wrap lines" command for Sublime Text 2 or 3.
Details
Installs
- Total 19K
- Win 6K
- OS X 8K
- Linux 5K
Readme
- Source
- raw.githubusercontent.com
Sublime Wrap Plus
Enhanced “wrap lines” command for Sublime Text 2 or 3. This is for the manual hard line wrap command (AltQ in Windows and Linux, CommandAltQ in OS X). It does not affect the automatic soft line wrapping.
Downloading
The best way to download and install Sublime Wrap Plus is to use the Package Control plugin. If you do not already have it installed, it's really the best way to manage your packages.
For users new to the package manager:
- Go to and install Package Control.
- Restart Sublime Text.
Install Sublime Wrap Plus:
- Bring up the Command Palette (CommandShiftP on OS X, CtrlShiftP on Linux/Windows).
Package Control: Install Packageand wait while Package Control fetches the latest package list.
- Select Wrap Plus when the list appears.
Package Control will handle automatically updating your packages.
Alternatively, you can fetch from Github:
git clone git://github.com/ehuss/Sublime-Wrap-Plus.git
and place it in your Packages directory, which can be found by selecting
Preferences → Browse Packages....
Configuring
No need to configure anything. By default it uses the default keystroke for wrap lines:
- Windows/Linux: AltQ
- OS X: CommandAltQ
If you want to use a different keystroke, go to
Preferences → Key Bindings — User, and add an entry like this:
{ "keys": ["alt+q"], "command": "wrap_lines_plus" }
If you want to, you can add keystrokes that use specific wrap sizes:
{ "keys": ["alt+q", "7"], "command": "wrap_lines_plus", "args": {"width": 70}}
There are a few settings you can tweak if you so desire. You can set them in
Preferences → Settings — User. They are:
Advanced Configuration
Sublime supports placing configuration options in a variety of places. You can put any of these settings in one of the following files (last file wins):
- Packages/User/Preferences.sublime-settings
- Project Settings (The “settings” key inside your project file.)
- Packages/User/SyntaxName.sublime-settings
- Packages/User/Distraction Free.sublime-settings
Using
Whenever the cursor is anywhere within a paragraph, hitting the Wrap Plus keystroke will cause it to try to discover where the paragraph starts and where it ends. It will then wrap all of those lines according to the wrap width you currently have set (
View → Word Wrap Column).
Lists
It handles a variety of lists, like bulleted lists or numbered lists. They should line up nicely:
- Kielbasa beef andouille chuck short loin, filet mignon jerky tail fatback ball tip meatloaf sausage spare ribs bresaola rump. * Shankle shoulder ham, strip steak pastrami ground round shank sausage tail corned beef drumstick boudin bacon prosciutto turkey. 1. Jerky prosciutto pork loin shankle, corned beef capicola pork pastrami fatback short loin ground round. a. Sirloin fatback pancetta pork belly ham hock strip steak chuck, drumstick brisket chicken corned beef speck pig kielbasa short loin.
Subsequent Indents
Lines with subsequent indents should maintain their indent:
:param cupcake: Cupcake ipsum dolor sit amet marzipan faworki. Wafer I love croissant. Tart carrot cake pastry applicake lollipop I love cotton brownie.
Comment Lines
In a source code file, it should transparently handle single-line comment characters,
If you use block-style comments in C or C++, it will restrict the wrapping to only the contents in the comment (it won't jump out and wrap nearby code lines). Also, if you use funny C block comments that start with an asterisk, that should be preserved:
/* * This is a multiline C-style comment. The asterisk characters on the * left should be preserved (when in C or C++ mode), if they are already * there. */
In addition, JavaDoc or JsDoc style documentation should work, too:
/** * Sample function description. Just in case the description is very long. * Cupcake ipsum dolor sit amet marzipan faworki. Wafer I love croissant. Tart * carrot cake pastry applicake lollipop I love cotton brownie. * @param {string} paramname Multi-line parameter description (or any javadoc * tag) should indent with 4 spaces. Cupcake ipsum dolor sit amet * marzipan faworki. Wafer I love croissant. Tart carrot cake pastry * applicake lollipop I love cotton brownie. */
Python Strings
When wrapping inside a Python triple quoted string, wrapping will be constrained to the inside of the string. That way, doc strings won't get wrapped with function definitions:
def foo(): """Pressing the wrap lines character while inside this string should wrap it nicely, without affecting the def foo line. """
Lines with email-style quoting should be handled. Nested quotes should be treated as separate paragraphs.
> This is a quoted paragraph. > > This is a nested quoted paragraph. Wrapping the first paragraph won't > > touch this paragraph. > And continuing with a third paragraph.
Selection Wrapping
If you select a range of characters, only the lines that are selected will be wrapped (the stock Sublime wrap lines extends the selection to what it thinks is a paragraph). I find this behavior preferable to give me more control.
Epilogue
Wrap Plus handles a lot of situations that the stock Sublime word wrapper doesn't handle, but it's likely there are many situations where it doesn't work quite right. If you come across a problem, the immediate solution is to manually select the lines you want to wrap (this will constrain wrapping to just those lines). If you'd like, feel free to post an issue on the Github page. | https://packagecontrol.io/packages/Wrap%20Plus | CC-MAIN-2019-35 | refinedweb | 896 | 64.61 |
- 20 May, 2016 1 commit
- Andy Shevchenko authored
UUID library provides the uuid_be type and the uuid_be_to_bin() function. This replaces an open-coded variant with generic library calls.
- 09 Jan, 2016 2 commits
- Dan Williams authored
These actions are completely managed by a block driver or can use the badblocks api directly. Signed-off-by:
Dan Williams <dan.j.williams@intel.com>
- Vishal Verma authored
NVDIMM devices, which can behave more like DRAM rather than block devices, may develop bad cache lines, or 'poison'. A block device exposed by the pmem driver can then consume poison via a read (or write), and cause a machine check. On platforms without machine check recovery features, this would mean a crash. The block device maintaining a runtime list of all known sectors that have poison can directly avoid this, and also provide a path forward to enable proper handling/recovery for DAX faults on such a device. Use the new badblock management interfaces to add a badblocks list to gendisks. Signed-off-by:
Vishal Verma <vishal.l.verma@intel.com> Signed-off-by:
Dan Williams <dan.j.williams@intel.com>
- 21 Oct, 2015 3 commits
- Dan Williams authored
A trace like the following precedes a crash in bio_integrity_process() when it goes to use an already freed blk_integrity profile. BUG: unable to handle kernel paging request at ffff8800d31b10d8 IP: [<ffff8800d31b10d8>] 0xffff8800d31b10d8 PGD 2f65067 PUD 21fffd067 PMD 80000000d30001e3 Oops: 0011 [#1] SMP Dumping ftrace buffer: --------------------------------- ndctl-2222 2.... 44526245us : disk_release: pmem1s systemd--2223 4.... 44573945us : bio_integrity_endio: pmem1s <...>-409 4.... 44574005us : bio_integrity_process: pmem1s --------------------------------- [..] Call Trace: [<ffffffff8144e0f9>] ? bio_integrity_process+0x159/0x2d0 [<ffffffff8144e4f6>] bio_integrity_verify_fn+0x36/0x60 [<ffffffff810bd2dc>] process_one_work+0x1cc/0x4e0 Given that a request_queue is pinned while i/o is in flight and that a gendisk is allowed to have a shorter lifetime, move blk_integrity to request_queue to satisfy requests arriving after the gendisk has been torn down. Cc: Christoph Hellwig <hch@lst.de>
The integrity kobject purely exists to support the integrity subdirectory in sysfs and doesn't really have anything to do with the blk_integrity data structure. Move the kobject to struct gendisk where it belongs.
- 17 Jul, 2015 2 commits
Percpu refcount is the perfect match for partition's case, and the conversion is quite straight. With the convertion, one pair of atomic inc/dec can be saved for accounting block I/O, which is run in hot path of block I/O. Signed-off-by:
Ming Lei <tom.leiming@gmail.com> Acked-by:
Tejun Heo <tj@kernel.org> Signed-off-by:
Jens Axboe <axboe@fb.com>
So the helper can be used in both generic partition case and part0 case. Signed-off-by:
Ming Lei <tom.leiming@gmail.com> Signed-off-by:
Jens Axboe <axboe@fb>
- 25 Feb, 2013 1 commit
- Mimi Zohar authored
Commit "85865c1f ima: add policy support for file system uuid" introduced a CONFIG_BLOCK dependency. This patch defines a wrapper called blk_part_pack_uuid(), which returns -EINVAL, when CONFIG_BLOCK is not defined. security/integrity/ima/ima_policy.c:538:4: error: implicit declaration of function 'part_pack_uuid' [-Werror=implicit-function-declaration] Changelog v2: - Reference commit number in patch description Changelog v1: - rename ima_part_pack_uuid() to blk_part_pack_uuid() - resolve scripts/checkpatch.pl warnings Changelog v0: - fix UUID scripts/Lindent msgs Reported-by:
Randy Dunlap <rdunlap@infradead.org> Reported-by:
David Rientjes <rientjes@google.com> Signed-off-by:
Mimi Zohar <zohar@linux.vnet.ibm.com> Acked-by:
David Rientjes <rientjes@google.com> Acked-by:
Randy Dunlap <rdunlap@infradead.org> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by:
James Morris <james.l.morris@oracle.com>
- 23 Nov, 2012 1 commit
- Stephen Warren authored
- 01 Aug, 2012 1 commit
- Vivek Goyal authored
- 16 Jul, 2012 1 commit
- Lars-Peter Clausen authored
This function is not really specific to the genhd layer and there are various re-implementations or open-coded variants of it all throughout the kernel. To avoid further duplications move the function to a more generic place. While moving also convert it from a macro to a inline function. Potential users of this function can be detected and converted using the following coccinelle patch: // <smpl> @@ expression k; @@ -container_of(k, struct device, kobj) +kobj_to_dev(kobj) // </smpl> Signed-off-by:
Lars-Peter Clausen <lars@metafoo.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
- 15 May, 2012 1 commit
6d1d8050 "block, partition: add partition_meta_info to hd_struct" added part_unpack_uuid() which assumes that the passed in buffer has enough space for sprintfing "%pU" - 37 characters including '\0'. Unfortunately, b5af921e "init: add support for root devices specified by partition UUID" supplied 33 bytes buffer to the function leading to the following panic with stackprotector enabled. Kernel panic - not syncing: stack-protector: Kernel stack corrupted in: ffffffff81b14c7e [<ffffffff815e226b>] panic+0xba/0x1c6 [<ffffffff81b14c7e>] ? printk_all_partitions+0x259/0x26xb [<ffffffff810566bb>] __stack_chk_fail+0x1b/0x20 [<ffffffff81b15c7e>] printk_all_paritions+0x259/0x26xb [<ffffffff81aedfe0>] mount_block_root+0x1bc/0x27f [<ffffffff81aee0fa>] mount_root+0x57/0x5b [<ffffffff81aee23b>] prepare_namespace+0x13d/0x176 [<ffffffff8107eec0>] ? release_tgcred.isra.4+0x330/0x30 [<ffffffff81aedd60>] kernel_init+0x155/0x15a [<ffffffff81087b97>] ? schedule_tail+0x27/0xb0 [<ffffffff815f4d24>] kernel_thread_helper+0x5/0x10 [<ffffffff81aedc0b>] ? start_kernel+0x3c5/0x3c5 [<ffffffff815f4d20>] ? gs_change+0x13/0x13 Increase the buffer size, remove the dangerous part_unpack_uuid() and use snprintf() directly from printk_all_partitions(). Signed-off-by:
Tejun Heo <tj@kernel.org> Reported-by:
Szymon Gruszczynski <sz.gruszczynski@googlemail.com> Cc: Will Drewry <wad@chromium.org> Cc: stable@vger.kernel.org Signed-off-by:
Jens Axboe <axboe@kernel.dk>
- 02 Mar, 2012 1 commit
- Jun'ichi Nomura authored
Since 2.6.39 (1196f8b8), when a driver returns -ENOMEDIUM for open(), __blkdev_get() calls rescan_partitions() to remove in-kernel partition structures and raise KOBJ_CHANGE uevent. However it ends up calling driver's revalidate_disk without open and could cause oops. In the case of SCSI:. Reported-by:
Huajun Li <huajun.li.lee@gmail.com> Signed-off-by:
Jun'ichi Nomura <j-nomura@ce.jp.nec.com> Acked-by:
Tejun Heo <tj@kernel.org> Cc: stable@kernel.org Signed-off-by:
Jens Axboe <axboe@kernel.dk>
- 03 Jan, 2012 1 commit
Both callers of device_get_devnode() are only interested in the lower 16 bits, and nobody tries to return anything wider than 16 bits anyway. Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
- 10 Nov, 2011 1 commit
This reverts commit a72c5e5e. The commit introduced an alias for block devices which is intended to be used during logging although actual usage hasn't been committed yet. This approach adds very limited benefit (raw log might be easier to follow) which can be trivially implemented in userland but has a lot of problems. It is much worse than netif renames because it doesn't rename the actual device but just adds a convenience name which isn't used universally or enforced. Everything internal including device lookup and sysfs still uses the internal name and nothing prevents two devices from using conflicting aliases - i.e. sda can have sdb as its alias. This has been nacked by people working on device driver core, block layer and kernel-userland interface and shouldn't have been upstreamed. Revert it. Signed-off-by:
Tejun Heo <tj@kernel.org> Acked-by:
Greg Kroah-Hartman <gregkh@suse.de> Acked-by:
Kay Sievers <kay.sievers@vrfy.org> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Nao Nishijima <nao.nishijima.xt@hitachi.com> Cc: Alan Cox <alan@linux.intel.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by:
Jens Axboe <axboe@kernel.dk>
- 29 Aug, 2011 1 commit
- Nao Nishijima authored
This patch allows the user to set an "alias" of the disk via sysfs interface. This patch only adds a new attribute "alias" in gendisk structure. To show the alias instead of the device name in kernel messages, we need to revise printk messages and use alias_name() in them. Example: (current) printk("disk name is %s\n", disk->disk_name); (new) printk("disk name is %s\n", alias_name(disk)); Users can use alphabets, numbers, '-' and '_' in "alias" attribute. A disk can have an "alias" which length is up to 255 bytes. This attribute is write-once. Suggested-by:
James Bottomley <James.Bottomley@HansenPartnership.com> Suggested-by:
Jon Masters <jcm@redhat.com> Signed-off-by:
Nao Nishijima <nao.nishijima.xt@hitachi.com> Signed-off-by:
James Bottomley <JBottomley@Parallels.com>
- 23 Aug, 2011 1 commit
There are cases where suppressing partition scan is useful - e.g. for lo devices and pseudo SATA devices which advertise to be a disk but get upset on partition scan (some port multiplier control devices show such behavior). This patch adds GENHD_FL_NO_PART_SCAN which suppresses partition scan regardless of the number of possible partitions. disk_partitionable() is renamed to disk_part_scan_enabled() as suppressing partition scan doesn't imply the device can't be partitioned using BLKPG_ADD/DEL_PARTITION calls from userland. show_partition() now directly tests disk_max_parts() to maintain backward-compatibility. -v2: Updated to make it clear that only partition scan is suppressed not partitioning itself as suggested by Kay Sievers. Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
- 01 Jul, 2011 1 commit
Currently, only open(2) is defined as the 'clearing' point. It has two roles - first, it's an acknowledgement from userland indicating that the event has been received and kernel can clear pending states and proceed to generate more events. Secondly, it's passed on to device drivers as a hint indicating that a synchronization point has been reached and it might want to take a deeper look at the device. The latter currently is only used by sr which uses two different mechanisms - GET_EVENT_MEDIA_STATUS_NOTIFICATION and TEST_UNIT_READY to discover events, where the former is lighter weight and safe to be used repeatedly but may not provide full coverage. Among other things, GET_EVENT can't detect media removal while TUR can. This patch makes close(2) - blkdev_put() - indicate clearing hint for MEDIA_CHANGE to drivers. disk_check_events() is renamed to disk_flush_events() and updated to take @mask for events to flush which is or'd to ev->clearing and will be passed to the driver on the next ->check_events() invocation. This change makes sr generate MEDIA_CHANGE when media is ejected from userland - e.g. with eject(1). Note: Given the current usage, it seems @clearing hint is needlessly complex. disk_clear_events() can simply clear all events and the hint can be boolean @flush. Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
- 29 May, 2011 1 commit
It was not a good idea to start dereferencing disk->queue from the fs sysfs strategy for displaying discard alignment. We first ran into a NULL pointer deref, and after fixing that we sometimes see invalid disk->queue pointer values. Since discard is the only one of the bunch actually looking into the queue, just revert the change. This reverts commit 23ceb5b7. Conflicts: fs/partitions/check.c
- 06 May, 2011 1 commit
Currently, hd_struct.discard_alignment is only used when we show /sys/block/sdx/sdx/discard_alignment. So remove it and calculate when it is asked to show. Signed-off-by:
Tao Ma <boyu.mt@taobao.com> Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
- 21 Apr, 2011 1 commit
Disk event code automatically blocks events on excl write. This is primarily to avoid issuing polling commands while burning is in progress. This behavior doesn't fit other types of devices with removable media where polling commands don't have adverse side effects and door locking usually doesn't exist. This patch introduces a new genhd flag which controls the auto-blocking behavior and uses it to enable auto-blocking only on optical devices. Note for stable: 2.6.38 and later only Cc: stable@kernel.org Signed-off-by:
Tejun Heo <tj@kernel.org> Reported-by:
Kay Sievers <kay.sievers@vrfy.org> Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
- 22 Mar, 2011 1 commit
- Shaohua Li authored
After the stack plugging introduction, these are called lockless. Ensure that the counters are updated atomically. Signed-off-by: Shaohua Li <shaohua.li@intel.com> Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
- 07 Jan, 2011 1 commit
We can't use krefs since it's apparently restricted to very basic reference counting. This reverts commit e4a683c8. Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
- 05 Jan, 2011 1 commit
- Jerome Marchand authored
Also add a refcount to struct hd_struct to keep the partition in memory as long as users exist. We use kref_test_and_get() to ensure we don't add a reference to a partition which is going away. Signed-off-by:
Jerome Marchand <jmarchan@redhat.com> Signed-off-by:
Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: stable@kernel.org Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
- 16 Dec, 2010 3 commits
implements framework for in-kernel disk event handling, which includes media presence polling. * bdops->check_events() is added, which supersedes ->media_changed(). It should check whether there's any pending event and return if so. Currently, two events are defined - DISK_EVENT_MEDIA_CHANGE and DISK_EVENT_EJECT_REQUEST. ->check_events() is guaranteed not to be called in parallel. * gendisk->events and ->async_events are added. These should be initialized by block driver before passing the device to add_disk(). The former contains the mask of all supported events and the latter the mask of all events which the device can report without polling. /sys/block/*/events[_async] export these to userland. * Kernel parameter block.events_dfl_poll_msecs controls the system polling interval (default is 0 which means disable) and /sys/block/*/events_poll_msecs control polling intervals for individual devices (default is -1 meaning use system setting). Note that if a device can report all supported events asynchronously and its polling interval isn't explicitly set, the device won't be polled regardless of the system polling interval. * If a device is opened exclusively with write access, event checking is automatically disabled until all write exclusive accesses are released. * There are event 'clearing' events. For example, both of currently defined events are cleared after the device has been successfully opened. This information is passed to ->check_events() callback using @clearing argument as a hint. * Event checking is always performed from system_nrt_wq and timer slack is set to 25% for polling. * Nothing changes for drivers which implement ->media_changed() but not ->check_events(). Going forward, all drivers will be converted to ->check_events() and ->media_change() will be dropped. Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: Kay Sievers <kay.sievers@vrfy.org> Cc: Jan Kara <jack@suse.cz> Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
There's no reason for register_disk() and del_gendisk() to be in fs/partitions/check.c. Move both to genhd.c. While at it, collapse unlink_gendisk(), which was artificially in a separate function due to genhd.c / check.c split, into del_gendisk(). Signed-off-by:
Tejun Heo <tj@kernel.org> Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
There's no user of the facility. Kill it. Signed-off-by:
Tejun Heo <tj@kernel.org> Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
- 24 Oct, 2010 1 commit
This reverts commit 7681bfee. Conflicts: include/linux/genhd.h It has numerous issues with the cleanup path and non-elevator devices. Revert it for now so we can come up with a clean version without rushing things. Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
- 19 Oct, 2010 1 commit
- Yasuaki Ishimatsu authored
- 15 Sep, 2010 1 commit
- Will Drewry authored
I'm reposting this patch series as v4 since there have been no additional comments, and I cleaned up one extra bit of unneeded code (in 3/3). The patches are against Linus's tree: 2bfc96a1 (2.6.36-rc3). Would this patchset be suitable for inclusion in an mm branch? This change adds a partition_meta_info struct which itself contains a union of structures that provide partition-table-specific metadata. This change leaves the union empty. The subsequent patch includes an implementation for CONFIG_EFI_PARTITION-based metadata.

Signed-off-by: Will Drewry <wad@chromium.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
- 19 Aug, 2010 1 commit
- Arnd Bergmann authored
This adds annotations for RCU operations in core kernel components.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
- 16 Mar, 2010 1 commit
- 16 Feb, 2010 1 commit
Add __percpu sparse annotations to core subsystems. These annotations are to make sparse consider percpu variables to be in a different address space and warn if accessed without going through percpu accessors. This patch doesn't affect normal builds.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-mm@kvack.org
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Dipankar Sarma <dipankar@in.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Eric Biederman <ebiederm@xmission.com>
- 11 Jan, 2010 1 commit
- Stephen Hemminger authored
This fixes the sparse warning:

fs/ext4/super.c:2390:40: warning: symbol 'i' shadows an earlier one
fs/ext4/super.c:2368:22: originally declared here

Using 'i' in a macro is dubious practice.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
- 10 Nov, 2009 1 commit
While SSDs track block usage on a per-sector basis, RAID arrays often have allocation blocks that are bigger. Allow the discard granularity and alignment to be set and teach the topology stacking logic how to handle them.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
- 06 Oct, 2009 1 commit
- Nikanth Karthikesan authored
Commit a9327cac
- 04 Oct, 2009 1 commit
This reverts commit a9327cac.

Corrado Zoccolo <czoccolo@gmail.com> reports:

"with 2.6.32-rc1 I started getting the following strange output from "iostat -kx 2":

Linux 2.6.31bisect (et2) 04/10/2009 _i686_ (2 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
10,70 0,00 3,16 15,75 0,00 70,38

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 18,22 0,00 0,67 0,01 14,77 0,02 43,94 0,01 10,53 39043915,03 2629219,87
sdb 60,89 9,68 50,79 3,04 1724,43 50,52 65,95 0,70 13,06 488437,47 2629219,87

avg-cpu: %user %nice %system %iowait %steal %idle
2,72 0,00 0,74 0,00 0,00 96,53
6,68 0,00 0,99 0,00 0,00 92,33
4,40 0,00 0,73 1,47 0,00 93,40
4,00 0,00 3,00 0,00 28,00 18,67 0,06 19,50 333,33 100,00

Global values for service time and utilization are garbage. For interval values, utilization is always 100%, and service time is higher than normal. I bisected it down to: [a9327cac] Seperate read and write statistics of in_flight requests and verified that reverting just that commit indeed solves the issue on 2.6.32-rc1."

So until this is debugged, revert the bad commit.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
- 22 Sep, 2009 1 commit
- Alexey Dobriyan authored
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
read_smtp2
Name
read_smtp2 — Read the SMTP response from peer
Synopsis
#include "smtp.h"
int read_smtp2(delivery_construct *dc, int expected_code,
               struct timeval *now, int *mask, int extra_codes);
Description
**Configuration Change.** This feature is available starting from Momentum 3.1.0.
Read the SMTP response from peer.
This function is the same as read_smtp except that it takes extra SMTP codes besides expected_code.
- dc
The delivery construct. For a description of this data type see “delivery_construct”.
- expected_code
The expected SMTP response code from peer.
- now
The current time.
- mask
The IO mask, such as E_READ.
- extra_codes
A list of SMTP codes in addition to expected_code. Use a value <= 0 to terminate the list.
On success this function returns a value greater than 0 and on failure a number less than 0.
It is legal to call this function in any thread. | https://www.sparkpost.com/momentum/3/3-api/apis-read-smtp-2/ | CC-MAIN-2021-31 | refinedweb | 164 | 60.92 |
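To make the extra_codes termination contract concrete, here is a standalone sketch of the accept-list check. code_accepted is an invented helper that mimics the described semantics (match expected_code or any code in a list terminated by a value <= 0); it is not Momentum's implementation:

```c
#include <stdarg.h>

/* Accept a peer's reply code if it equals expected_code or any
   code in a trailing list terminated by a value <= 0. */
static int code_accepted(int got, int expected_code, ...)
{
    va_list ap;
    int code, ok = (got == expected_code);

    va_start(ap, expected_code);
    while (!ok && (code = va_arg(ap, int)) > 0)
        ok = (got == code);
    va_end(ap);
    return ok;
}
```

For example, a caller expecting 250 but also willing to accept 251 would pass both codes followed by the terminating 0.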
Hi,
I'm trying to develop a java program with a layout that resembles the following:
Attachment 2834
However, I'm having trouble laying this out by hand (I typically use the NetBeans GUI designer, but I want to do it by hand this time). I've been coding it, but I can't decide which layout manager would be the best to use in this case. I really want to use the GridBagLayout manager, but I'm not sure if it would be best to use in this situation. Here is what I've come up with so far:
Code :
package application;

public class AppMain {
    public static void main(String args[]) {
        new AppInterface();
    }
}

package application;

import java.awt.*;
import javax.swing.*;

public class AppInterface extends JFrame {
    private JPanel mainPanel;
    private JScrollPane tabScrollPane, textBoxScrollPane;
    private JTabbedPane tabPane;
    private JTextPane textBox;
    private GridBagConstraints constraints;

    public AppInterface() {
        initComponents();
        setLocationRelativeTo(null);
        setVisible(true);
    }

    private void initComponents() {
        mainPanel = new JPanel(new GridBagLayout());
        textBoxScrollPane = new JScrollPane();
        textBox = new JTextPane();
        constraints = new GridBagConstraints();
        textBoxScrollPane.add(textBox);
        setTitle("Application Interface");
        setSize(900, 800);
        setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
        mainPanel.setBackground(Color.yellow);
        constraints.gridx = 0;
        constraints.gridy = 0;
        constraints.fill = GridBagConstraints.BOTH;
        mainPanel.add(textBoxScrollPane, constraints);
        add(mainPanel);
    }
}
I'm trying to get two different panels laid out with the GridBagLayout on each one so I can place the tab panel and text box on different panels. I need two panels side-by-side with the one on the left taking only about 25% of the JFrame and the right one taking 75% of the JFrame. Then, I hope to place the tab pane on the left panel and the text box on the right panel.
I know that a JFrame's content pane uses BorderLayout by default. How can I lay out two panels with a BoxLayout on the JFrame, with one taking 25% and the other taking 75% of the space? I think that would be a good place to start; then I can worry about placing the tab pane and text box. Thank you if you can offer any help.
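For reference, one way to get the 25%/75% split asked about above is with GridBagLayout weights rather than BoxLayout (BoxLayout has no notion of proportional weights). This is only an illustrative sketch — SplitPanelDemo and buildSplit are invented names, and weightx distributes extra space, so the split is approximate:

```java
import java.awt.*;
import javax.swing.*;

public class SplitPanelDemo {
    public static JPanel buildSplit() {
        JPanel container = new JPanel(new GridBagLayout());
        GridBagConstraints c = new GridBagConstraints();
        c.fill = GridBagConstraints.BOTH;
        c.weighty = 1.0;
        c.gridy = 0;

        JPanel left = new JPanel();    // would hold the tabbed pane
        c.gridx = 0;
        c.weightx = 0.25;              // ~25% of the width
        container.add(left, c);

        JPanel right = new JPanel();   // would hold the text box
        c.gridx = 1;
        c.weightx = 0.75;              // ~75% of the width
        container.add(right, c);
        return container;
    }
}
```

The returned container could then be added to a frame's content pane; GridBagLayout copies the constraints object on each add(), so reusing one GridBagConstraints instance is safe.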
Hi Greg,

I am a little confused by the directories created when one registers a class device. When a class device is registered as the child of a real device, a subdirectory named after the class is created, and the class device is created there, effectively granting each class a separate namespace. Example:

/sys/devices/pci0000:00/0000:00:1f.3/i2c-adapter/i2c-0

where 0000:00:1f.3 is the physical device, i2c-adapter the class name and i2c-0 the class device.

OTOH, if I create a class device as the child of another class device, the class device is created directly, without a directory between the parent and the child. Example:

/sys/class/i2c-adapter/i2c-0/i2c-0

where the first i2c-0 is an i2c-adapter class device, and the second i2c-0 is an i2c-dev class device. I would have expected:

/sys/class/i2c-adapter/i2c-0/i2c-dev/i2c-0

The current behavior seems inconsistent to me. Is it done so on purpose, or is this accidental? If on purpose, what's the reason?

I am asking because this is causing trouble in practice. We have both i2c-dev and firmware_class, which try to create class devices by the same name, and this of course collides. While I would blame firmware_class for coming up with a horrible naming scheme (or actually, for not coming up with any naming scheme), it might still be a good idea to prevent such collisions at the driver core level.

Thanks,
--
Jean Delvare
This blog post is part of the C# 6.0 Features Series. Microsoft announced the new version, C# 6.0, at the Visual Studio Connect event in November. They have not added any big features to C# 6.0, but they have listened to the community and added a few small features which come in really handy. One of them is the nameof operator. In this blog post we will learn why the nameof operator is very useful.
Let's create a class Employee with a few properties.
public class Employee
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

Now what I want to do here is to create an object with an object initializer and print the values of its properties with a Print method.
The old way of doing this is to pass a hardcoded string, like the following.
class Program
{
    static void Main(string[] args)
    {
        Employee employee = new Employee
        {
            FirstName = "Jalpesh",
            LastName = "Vadgama"
        };
        Print(null);
    }

    static void Print(Employee employee)
    {
        if (employee == null)
            throw new ArgumentException("employee");
        Console.WriteLine(employee.FirstName);
        Console.WriteLine(employee.LastName);
    }
}

Here the string is hardcoded in the ArgumentException, which is a problem: whenever we rename the parameter, we also have to remember to update this hardcoded string.
With the nameof operator:
class Program
{
    static void Main(string[] args)
    {
        Employee employee = new Employee
        {
            FirstName = "Jalpesh",
            LastName = "Vadgama"
        };
        Print(null);
    }

    static void Print(Employee employee)
    {
        if (employee == null)
            throw new ArgumentException(nameof(employee));
        Console.WriteLine(employee.FirstName);
        Console.WriteLine(employee.LastName);
    }
}

Here you can see that I have used the nameof operator in place of the hardcoded string, so even if we rename the parameter we don't have to worry about updating it. Now let's run this; since we passed null to the Print method, we will see the expected output.
Here we have passed null, so it will throw an ArgumentException. You can find the complete source code on GitHub at the following location.
That's it. Hope you like it. Stay tuned for more!
Your feedback is very important to me. Please provide your feedback by posting comments.
Hi!
Why does the following program fail to compile with the option -ffreestanding? (Without -ffreestanding there are no errors.)
//test.cpp
#include <climits>
int main() {
char buf[MB_LEN_MAX];
return 0;
}
Command: icpc test.cpp -ffreestanding -o test.exe
Output:
error: identifier "MB_LEN_MAX" is undefined
Seems like a compiler issue. I will report it to the compiler team.
Can you #include "/usr/include/limits.h" as a workaround?
Thanks,
Viet
I've reported this case as CMPLRLIBS-2739.
Viet Hoang (Intel) wrote:
Can you #include "/usr/include/limits.h" as a workaround?
Hi, Viet!
No, I'm not sure that we can do it. But we may refrain from using the option -ffreestanding.
Our goal is to drop the dependency on libirc.so. icpc optimizes some code and places calls to __intel_fast_memcpy (as well as some other __intel_fast_* functions) in our library, which in turn brings us an unacceptable dependency on libirc.so.

So we just need to find a way to drop the dependency on libirc.so (and we cannot link in the static version of the libirc library either). Could you please provide us with any solution?
You can try -fno-builtin to see if it helps; but to avoid the dependency on Intel shared libs, you would need to link them in statically (-static-intel).
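A defensive variant of the workaround discussed above is to guard the constant rather than hard-code a libc header path. This is only a sketch: the fallback value 16 matches common glibc but is an assumption, not a guaranteed bound.

```c
#include <limits.h>

/* Fall back to a local definition when a freestanding <limits.h>
   fails to provide MB_LEN_MAX (the value 16 is an assumption). */
#ifndef MB_LEN_MAX
#define MB_LEN_MAX 16
#endif

/* Buffer size used for one multibyte character. */
static int mb_buf_size(void)
{
    return MB_LEN_MAX;
}
```

With this guard in place, the original test case compiles whether or not the compiler's freestanding headers define the macro.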
cdk_compat man page
cdk_compat — Cdk4 compatibility functions
Synopsis
cc [ flag ... ] file ... -lcdk [ library ... ]
#include <cdk/cdk_compat.h>
int getDirectoryContents(char *directory, char **list, int maxListSize);
int readFile(char *filename, char **info, int maxlines);
int splitString(char *string, char **items, char splitChar);
Description
These functions and macros make it simpler to port applications from the older Cdk4 library to Cdk5.
A few functions are deprecated in Cdk5, because they rely upon the caller to know in advance the size of data which will be returned by the function.
Additionally, some macros are deprecated because they serve no realistic purpose: they have direct (standard) equivalents in all modern curses implementations.
Finally, a few macro definitions are added to iron out naming inconsistencies across the Cdk4 header files.
Available Functions
Start the porting process by changing the #include's to use
#include <cdk/cdk_compat.h>
rather than
#include <cdk.h>
Some adjustments of course are needed to make your compiler see the compatibility header file. A separate name was chosen so that it in turn can (by adjusting the include path) include either the old Cdk4 cdk.h or the new. If the old is included, you should link your program against the old library. Likewise, including the new requires that you link against the new library.
That is the first step: making your program compile using the compatibility header file using the old headers and library.
The next step is to get it to compile against the new headers and library. Most of the changes will require modifying bare references to certain pointers to wrap them with the ObjOf() and ScreenOf() macros. New Cdk uses these to provide functions which are easily shared among the different widget types. Your compiler should be able to tell you where the changes should be made. See the example programs which are included with Cdk as a guide.
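As a rough illustration of the kind of mechanical change involved — the struct layout and the ScreenIndexOf macro below are invented stand-ins, not Cdk's real definitions; only the ObjOf()-style access pattern is the point:

```c
/* Stand-in types: Cdk's real widget structs are richer than this. */
typedef struct { int screenIndex; } CDKOBJS;
typedef struct { CDKOBJS obj; } CDKLABEL;

/* Accessor macros in the spirit of Cdk5's ObjOf()/ScreenOf(). */
#define ObjOf(widget)    (&(widget)->obj)
#define ScreenIndexOf(w) (ObjOf(w)->screenIndex)

int screen_index(CDKLABEL *label)
{
    /* Cdk4-era code read shared fields off the widget directly;
       under Cdk5 the shared state lives behind the object macros. */
    return ScreenIndexOf(label);
}
```

The compiler errors produced against the new headers point at exactly the bare field references that need to be wrapped this way.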
That is the hard part of porting. But even for a large program, the changes can be made simply: there are not that many types of change to make. At the end of this step, you should still be able to build and run your program against the old headers and library. It is reasonably likely that you can do the same with the new headers and library. By using the same source for old/new versions of Cdk, you can test and verify that your program still works properly after these modifications.
Finally, unless this is a purely academic exercise, you will want to remove references to the deprecated functions and macros.
See Also
cdk_objs (3), cdk_util (3) | https://www.mankier.com/3/cdk_compat | CC-MAIN-2018-22 | refinedweb | 432 | 62.68 |